

Senior Data Engineer (On-site, Reston VA)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Reston, VA, with a contract length of 2+ years at a pay rate of "X". Requires 8+ years of experience, strong Python and AWS Data Services skills, and familiarity with large-scale data processing.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 30, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Reston, VA
Skills detailed
#SQL (Structured Query Language) #JMeter #Redshift #Cloud #Jenkins #Oracle #Web Development #Terraform #IAM (Identity and Access Management) #DynamoDB #S3 (Amazon Simple Storage Service) #SNS (Simple Notification Service) #Consulting #Data Processing #Databases #XML (eXtensible Markup Language) #Scripting #SaaS (Software as a Service) #Azure #AJAX (Asynchronous JavaScript and XML) #Amazon EMR (Amazon Elastic MapReduce) #Lambda (AWS Lambda) #Data Management #Maven #Java #Documentation #Tableau #Datasets #Automation #Hadoop #NoSQL #PySpark #GCP (Google Cloud Platform) #Data Engineering #Computer Science #Data Modeling #AWS Glue #Python #Scala #Ansible #Programming #SQS (Simple Queue Service) #RDS (Amazon Relational Database Service) #SQL Queries #Data Pipeline #Data Analysis #BitBucket #JavaScript #Spark (Apache Spark) #Informatica #AWS (Amazon Web Services) #GIT #GitLab #Agile
Role description
About the Role:
Join our team as a Senior Data Engineer, where you will be instrumental in shaping the future of data management and analytics for a leading mortgage fintech giant. In this role, you will design, build, deploy, and monitor robust, scalable data pipelines in a dynamic AWS environment. You will manipulate large volumes of structured and unstructured data, normalize databases, and ensure data structures meet application requirements. This position requires a proactive individual who can combine raw information from various sources into consistent, machine-readable formats, enabling data-driven decision-making at scale.
TLDR:
We are seeking a highly skilled Senior Data Engineer with 8+ years of experience in building and optimizing robust data pipelines on AWS. The ideal candidate will have strong Python expertise, deep knowledge of AWS Data Services (PySpark, EMR, AWS Glue, Redshift, S3, Lambda, Step Functions), and proficiency in SQL and data modeling. Experience with large-scale data processing, CI/CD, and a drive to hit the ground running are essential.
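Pipeline work of this kind typically starts by coercing heterogeneous source records into one consistent schema. Below is a minimal, hedged sketch of that normalization step in plain Python; the field names, sources, and figures are invented for illustration, and in practice this logic would live inside a Glue or PySpark job rather than a standalone script:

```python
import json
from datetime import datetime, timezone

def normalize_event(raw: dict) -> dict:
    """Coerce a raw event from any upstream source into one consistent,
    machine-readable shape (ISO-8601 timestamp, typed amount, lowercase source)."""
    return {
        # Different feeds name the key differently; collapse to one field.
        "event_id": str(raw.get("id") or raw.get("event_id")),
        # Epoch seconds -> ISO-8601 UTC timestamp.
        "occurred_at": datetime.fromtimestamp(
            int(raw["ts"]), tz=timezone.utc
        ).isoformat(),
        # Amounts arrive as strings or floats; normalize to 2 decimal places.
        "amount_usd": round(float(raw.get("amount", 0)), 2),
        "source": raw.get("source", "unknown").lower(),
    }

# Two hypothetical sources with different field names collapse to one schema.
raw_records = [
    {"id": 101, "ts": 1722297600, "amount": "250000.5", "source": "LOS"},
    {"event_id": "102", "ts": 1722297660, "amount": 99.999, "source": "Servicing"},
]
normalized = [normalize_event(r) for r in raw_records]
print(json.dumps(normalized, indent=2))
```

The same shape-first approach scales up: in a PySpark job the per-record function becomes a column expression or UDF applied across partitions.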
Key Responsibilities:
• Design, develop, and optimize scalable data pipelines using Python and AWS data services.
• Build and deploy applications in AWS, leveraging services such as S3, Glue, Redshift, RDS, EMR, CloudWatch, Lambda, Step Functions, SNS, SQS, ECS, and AppFlow.
• Implement and manage large-scale data processing solutions using Spark, PySpark, and Amazon EMR/Hadoop.
• Write and optimize complex SQL queries for relational (Postgres, Oracle, Redshift) and NoSQL (DynamoDB) databases.
• Normalize databases and ensure that data structures meet the requirements of accessing applications.
• Construct datasets that are easy to analyze and support company requirements, combining raw information from different sources.
• Develop and maintain CI/CD pipelines on AWS using tools such as CloudFormation, Git, GitLab, BitBucket, Maven, and Jenkins.
• Automate repetitive tasks in provisioning, installing, configuring, and managing AWS infrastructure using Terraform, GitLab, Ansible, and/or Python scripting.
• Troubleshoot and provide 24/7 production support for data pipelines and applications.
• Actively participate in all Agile ceremonies (planning, grooming, product demonstrations, retrospectives).
• Create and update detailed technical documentation, including specifications, implementation guides, and architecture diagrams.
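The database-normalization responsibility above can be illustrated with a small, self-contained sketch: a flat feed with borrower details repeated on every loan row is split into two related tables, and consuming applications re-join only when they need the wide view. All table and column names here are hypothetical, and sqlite3 stands in for Postgres or Redshift:

```python
import sqlite3

# Hypothetical denormalized feed: borrower details repeated on every loan row.
raw_rows = [
    ("B001", "Alice Smith", "L100", 350000),
    ("B001", "Alice Smith", "L101", 120000),
    ("B002", "Bob Jones",   "L102", 275000),
]

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE borrowers (borrower_id TEXT PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE loans (
        loan_id     TEXT PRIMARY KEY,
        borrower_id TEXT NOT NULL REFERENCES borrowers(borrower_id),
        amount      INTEGER NOT NULL
    );
""")

# Split the flat feed into two related tables, deduplicating borrowers.
conn.executemany(
    "INSERT OR IGNORE INTO borrowers VALUES (?, ?)",
    {(b_id, name) for b_id, name, _, _ in raw_rows},
)
conn.executemany(
    "INSERT INTO loans VALUES (?, ?, ?)",
    [(l_id, b_id, amt) for b_id, _, l_id, amt in raw_rows],
)

# Accessing applications re-join only when they need the wide view.
total_by_borrower = conn.execute("""
    SELECT b.name, SUM(l.amount)
    FROM borrowers b JOIN loans l ON l.borrower_id = b.borrower_id
    GROUP BY b.name ORDER BY b.name
""").fetchall()
print(total_by_borrower)
```

Normalizing this way removes the repeated borrower fields, so a name change is one UPDATE instead of one per loan row.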
Required Qualifications:
• Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
• 8+ years of relevant experience in data engineering and software development.
• Strong proficiency in Python programming (required).
• 3+ years of experience with Python, SQL, and NoSQL.
• 2+ years of experience with Spark/PySpark and Amazon EMR/Hadoop.
• 2+ years of experience building and deploying applications in AWS, with hands-on experience in S3, Lambda, Glue, Redshift, RDS, EMR, CloudWatch, SNS, SQS, and Step Functions.
• Deep knowledge of data modeling, data analysis, and data warehousing concepts.
• Experience processing large volumes of structured and unstructured data.
• Experience with CI/CD, with knowledge of Git, BitBucket, Maven, and Jenkins.
• Experience with APIs and RESTful services.
• Excellent communication, time management, and technical presentation skills.
• Strong problem-solving, logic, and analytical skills.
Preferred Qualifications (Nice to Have):
• Existing Fannie Mae institutional knowledge and mortgage industry experience.
• Experience with Tableau reporting.
• Experience with IDMC (Informatica's SaaS platform, Intelligent Data Management Cloud) and setting up Informatica on cloud.
• Experience with Terraform module creation and usage.
• Familiarity with other cloud platforms such as Azure or GCP.
• Prior Identity and Access Management (IAM) experience.
• Experience with web development using Java, XML, JavaScript, AJAX, or other languages.
• Knowledge of TDD, BDD, test automation tools (SauceLabs, UFT, JMeter, Gatling, LoadRunner), and frameworks (Selenium-Cucumber).
Other Details:
• Location: Reston, VA (hybrid, currently 3 days/week on-site but could move to 5 days/week). Local candidates preferred.
• Length: 2+ years, long-term.
• Client: a leading mortgage fintech giant.
• Employment Type: open to W2 full-time with benefits or C2C.
The difference between something good and something great is attention to detail - AVM Consulting.