

Python Data Engineer (PySpark) | McLean, VA (Onsite) | Contract [F2F Interview - Locals Only]
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Python Data Engineer (PySpark) in McLean, VA, on a contract basis. Requires 8+ years of experience in Python, PySpark, AWS, and event-based systems. Face-to-face interview mandatory; local candidates only.
Country
United States
Currency
$ USD
Day rate
Unknown
Date discovered
August 28, 2025
Project duration
Unknown
Location type
On-site
Contract type
Unknown
Security clearance
Unknown
Location detailed
McLean, VA
Skills detailed
#Scala #Docker #Compliance #Version Control #PostgreSQL #NoSQL #Kubernetes #MongoDB #DynamoDB #Migration #Microservices #VPC (Virtual Private Cloud) #Spark (Apache Spark) #Storage #Database Migration #Oracle #RDBMS (Relational Database Management System) #EC2 #S3 (Amazon Simple Storage Service) #PySpark #Agile #Data Storage #Kafka (Apache Kafka) #Cloud #Data Engineering #Data Science #AWS (Amazon Web Services) #Data Security #Jenkins #MySQL #GIT #Databases #Data Pipeline #Data Processing #Security #Python
Role description
Job Title: Senior Python Data Engineer (PySpark)
Location: McLean, VA (Onsite - Face-to-Face Interview Required)
Type: Contract
Job Description
We are looking for a Senior Python Data Engineer with strong expertise in PySpark, event-based systems, and AWS cloud services to join our team onsite in McLean, VA. The ideal candidate will bring deep technical skills in building scalable data pipelines, microservices, and real-time streaming solutions.
Responsibilities
• Design, build, and optimize large-scale data pipelines using Python and PySpark.
• Develop event-driven and streaming solutions leveraging Kafka.
• Build and maintain microservices-based data processing systems.
• Work with AWS cloud services (EC2, EKS, S3, IAM, VPC) to deploy and scale data engineering solutions.
• Ensure reliability, scalability, and performance of data infrastructure.
• Implement CI/CD pipelines using Jenkins, with Git for version control and Flyway for database migrations.
• Collaborate with data scientists, product owners, and architects to deliver data solutions that support business needs.
• Work with both RDBMS and NoSQL databases for structured and unstructured data storage.
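For candidates gauging fit, the core pattern behind the pipeline responsibilities above (consume events, filter, aggregate by key) can be sketched in a few lines. This is a plain-Python illustration with a hypothetical event schema, not code from the role; a production system would use PySpark and Kafka rather than in-memory lists.

```python
import json
from collections import defaultdict

def aggregate_events(raw_events):
    """Aggregate hypothetical JSON order events into totals per customer.

    Mirrors the filter/reduce-by-key shape of a PySpark streaming job,
    but in plain Python so the pattern is easy to see.
    """
    totals = defaultdict(float)
    for raw in raw_events:
        event = json.loads(raw)           # parse one Kafka-style message
        if event.get("type") != "order":  # filter: keep only order events
            continue
        totals[event["customer_id"]] += event["amount"]  # reduce by key
    return dict(totals)

# Example: three events across two customers (field names are hypothetical)
events = [
    '{"type": "order", "customer_id": "c1", "amount": 10.0}',
    '{"type": "order", "customer_id": "c2", "amount": 5.5}',
    '{"type": "order", "customer_id": "c1", "amount": 2.5}',
]
print(aggregate_events(events))  # {'c1': 12.5, 'c2': 5.5}
```

In PySpark the same shape would appear as a `filter` followed by a `groupBy`/`agg` over a Kafka source stream.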
Required Skills & Experience
• 8+ years of professional experience in Python development (senior level).
• Strong hands-on experience in data engineering and building data pipelines.
• Expertise in PySpark and distributed data processing.
• Experience with event-based systems and streaming (Kafka).
• Strong knowledge of AWS services: EC2, EKS, S3, IAM, VPC.
• Proficiency with CI/CD tooling: Jenkins, Git, and Flyway.
• Experience with microservices development.
• Hands-on with both RDBMS (e.g., PostgreSQL, Oracle, MySQL) and NoSQL (e.g., MongoDB, DynamoDB).
• Strong problem-solving skills and the ability to work onsite with cross-functional teams.
Nice To Have
• Knowledge of containerization (Docker, Kubernetes).
• Experience with data security and compliance practices in financial services.
• Exposure to Agile/SAFe development methodologies.
If interested, please send your resume to Sanghab@acestackllc.com