

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in McLean, VA, for 3 months at a competitive pay rate. Requires 5+ years in data engineering, expertise in PySpark, AWS EKS, PostgreSQL, and experience migrating legacy systems to cloud architectures.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 18, 2025
Project duration: 3 to 6 months
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: McLean, VA
Skills detailed: #Agile #Scala #Cloud #Automated Testing #Pytest #Microservices #Kubernetes #Spark (Apache Spark) #Migration #Data Engineering #Jenkins #AWS (Amazon Web Services) #DevOps #Snowflake #Security #Angular #Informatica #Python #Hadoop #PostgreSQL #SonarQube #SAS #PySpark #Deployment
Role description
Job Title: Senior Data Engineer
Location: McLean, VA (Onsite)
Interview Type: In-Person Only
Duration: 3 Months
Project Overview:
One of our clients is seeking a highly skilled Senior Data Engineer with strong experience in PySpark/Python-based microservices, AWS EKS, and PostgreSQL to lead the migration of a legacy system built on Informatica, SAS, and DB2 to a modern cloud-native architecture. The ideal candidate will have a deep understanding of both legacy and modern data ecosystems and will be capable of driving end-to-end migration and modernization efforts.
Key Responsibilities:
• Analyze and understand the existing system built on Informatica, SAS, and DB2.
• Design and develop scalable PySpark/Python-based microservices deployed on AWS EKS.
• Migrate data and workflows to PostgreSQL and Snowflake.
• Implement automated testing using Behave/Cucumber and PyTest.
• Ensure code quality and security using SonarQube and Fortify.
• Collaborate with UI developers on the Angular-based front-end work (about 10% of the project).
• Participate in CI/CD processes using Jenkins.
• Document technical designs, migration plans, and deployment strategies.
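For context on the automated-testing responsibility above, the posting's Behave/PyTest expectation amounts to plain unit tests around migrated transforms. A minimal PyTest-style sketch (the function name and rate format are illustrative assumptions, not taken from the posting):

```python
# Hypothetical example of a migrated legacy transform plus a
# PyTest-style test; normalize_rate and its input format are
# illustrative, not part of the actual project.

def normalize_rate(raw: str) -> float:
    """Parse a legacy rate string such as '12.5%' into a fraction."""
    return float(raw.strip().rstrip("%")) / 100.0

def test_normalize_rate():
    # PyTest discovers test_* functions and reports bare asserts.
    assert normalize_rate("12.5%") == 0.125
    assert normalize_rate(" 100% ") == 1.0
```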
Must-Have Qualifications:
• 5+ years of experience in software/data engineering.
• Strong hands-on experience with:
  • PySpark, Python 3.11
  • AWS EKS, Kubernetes
  • PostgreSQL, Snowflake
  • SAS, Informatica
  • DB2 (for legacy understanding)
  • Behave/Cucumber, PyTest
  • SonarQube, Fortify
  • Jenkins
• Experience with Spark 3.5, Hadoop 3, and Hadoop 2.3.4.
• Proven track record of migrating legacy systems to modern cloud-based architectures.
Nice-to-Have:
• Experience with Angular (10% of the project involves UI work).
• Familiarity with Agile methodologies and DevOps practices.