

American Business Solutions, Inc
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer contract position in McLean, VA, lasting more than 6 months, with a pay rate of $131,374.86 - $158,214.89 per year. It requires 3-7 years of data engineering experience, proficiency in Python, PySpark, and AWS, and ETL expertise.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
719
-
🗓️ - Date
November 14, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Fairfax, VA 22030
-
🧠 - Skills detailed
#S3 (Amazon Simple Storage Service) #Databricks #Data Quality #ML (Machine Learning) #Redshift #Athena #Data Lifecycle #Cloud #Jenkins #Airflow #Spark (Apache Spark) #Data Lake #Data Ingestion #SQL (Structured Query Language) #Process Automation #Storage #Version Control #Terraform #Agile #PySpark #Data Engineering #Data Pipeline #Automation #ETL (Extract, Transform, Load) #Python #AWS Glue #Lambda (AWS Lambda) #DevOps #GIT #AWS (Amazon Web Services) #Scala
Role description
Job Overview
We are seeking a dynamic and detail-oriented Data Engineer to join our innovative data team. In this role, you will be instrumental in designing, building, and maintaining scalable data pipelines and architectures that empower data-driven decision-making across the organization. Your expertise will enable seamless integration of diverse data sources, ensuring high-quality, accessible data for analytics, reporting, and machine learning initiatives. If you thrive in a fast-paced environment and are passionate about transforming complex data into actionable insights, this opportunity is perfect for you!
Key Responsibilities:
Design, build, and optimize ETL pipelines using Python, PySpark, and Spark
Develop scalable data solutions leveraging Databricks, AWS Glue, EMR, and S3
Collaborate with cross-functional engineering and analytics teams to implement best practices in data ingestion, transformation, and storage
Support data quality, performance tuning, and process automation across the data lifecycle
Work in Agile environments with CI/CD and version control tools
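To give candidates a concrete sense of the pipeline work above, here is a minimal, illustrative sketch of the extract-transform-load pattern. All names are hypothetical; in practice this role would use PySpark DataFrames on Databricks or EMR with data landing in S3, but plain Python stands in here so the example is self-contained.

```python
# Illustrative ETL sketch (hypothetical names; plain Python stands in for PySpark).

def extract(raw_rows):
    """Simulate ingesting raw records from a source system."""
    return [dict(r) for r in raw_rows]

def transform(rows):
    """Apply basic data-quality rules: drop incomplete rows, normalize types."""
    cleaned = []
    for row in rows:
        if row.get("id") is None or row.get("amount") is None:
            continue  # data-quality filter: skip incomplete records
        cleaned.append({"id": int(row["id"]), "amount": float(row["amount"])})
    return cleaned

def load(rows, sink):
    """Write transformed rows to a destination (a list stands in for S3/Redshift)."""
    sink.extend(rows)
    return len(rows)

raw = [
    {"id": "1", "amount": "19.99"},
    {"id": None, "amount": "5"},     # incomplete record, dropped by transform
    {"id": "2", "amount": "3.50"},
]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
```

In a production pipeline each stage would be a PySpark job orchestrated by a tool such as Airflow, with the same separation of ingestion, cleansing, and loading shown here.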
Required Skills and Experience:
3 to 7+ years of experience in data engineering, preferably in cloud-based environments
Strong proficiency in Python, PySpark, Spark, and SQL
Hands-on experience with AWS data services (S3, Glue, EMR, Redshift, Lambda, Athena)
Experience with Databricks or equivalent data lake platforms
Familiarity with modern DevOps practices (Git, Jenkins, Terraform, Airflow, etc.)
Job Type: Contract
Pay: $131,374.86 - $158,214.89 per year
Application Question(s):
Are you able to work on a W2 or C2C basis?
Are you local to McLean, VA?
Are you a former Capital One employee?
Experience:
ETL: 8 years (Required)
Python: 8 years (Required)
PySpark: 8 years (Required)
Spark: 8 years (Required)
AWS: 8 years (Required)
SQL: 8 years (Required)
Work Location: In person