

The Eventus Consulting Group
AWS Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Engineer, offering a 6-month contract at $60.00 - $65.00 per hour. Candidates must have strong Python/PySpark skills, AWS experience (Glue, Redshift), and solid SQL proficiency. On-site work is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
520
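(Consistent with roughly 8 hours at the top of the stated $60.00 - $65.00 hourly range: 8 × $65 = $520.)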
-
🗓️ - Date
January 11, 2026
🕒 - Duration
6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Reston, VA 20191
-
🧠 - Skills detailed
#GIT #Documentation #Version Control #Data Storage #Lambda (AWS Lambda) #Data Engineering #Data Processing #SQS (Simple Queue Service) #AWS Glue #Python #AWS (Amazon Web Services) #Deployment #Redshift #SNS (Simple Notification Service) #PySpark #DevOps #Storage #Spark (Apache Spark) #Data Pipeline #Data Modeling #Data Transformations #ETL (Extract, Transform, Load) #Amazon Redshift #Monitoring #Data Quality #Scala #SQL (Structured Query Language)
Role description
Seeking an AWS Data Engineer to design, build, and maintain scalable data pipelines and ETL solutions using Python/PySpark and AWS.
Key Responsibilities
Build and maintain ETL pipelines using Python and PySpark on AWS Glue and other compute platforms (see the PySpark sketch after this list)
Orchestrate workflows with AWS Step Functions and serverless components (Lambda)
Implement messaging and event-driven patterns using AWS SNS and SQS
Design and optimize data storage and querying in Amazon Redshift
Write performant SQL for data transformations, validation, and reporting
Ensure data quality, monitoring, error handling and operational support for pipelines
Collaborate with data consumers, engineers, and stakeholders to translate requirements into solutions
Contribute to CI/CD, infrastructure-as-code, and documentation for reproducible deployments
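As a rough illustration of the responsibilities above, the sketch below shows a minimal Glue-style PySpark job: extract raw data from S3, transform it with a simple data-quality gate, and load curated output. All bucket paths, column names, and thresholds are hypothetical placeholders, not details from this posting.

# Minimal Glue-style PySpark ETL sketch; every path and column name is hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw JSON landed in S3 (placeholder bucket/prefix)
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Transform: dedupe, drop invalid rows, derive a partition column
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Data-quality gate: fail fast so monitoring/alerting can flag the run
missing = clean.filter(F.col("order_id").isNull()).count()
if missing > 0:
    raise ValueError(f"{missing} rows missing order_id")

# Load: write curated Parquet back to S3, partitioned for efficient querying
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)

A real Glue job would typically use GlueContext and job bookmarks, and the curated data would then be loaded into Amazon Redshift, for example via a COPY statement or the Glue Redshift connector.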
Required Skills
Strong experience with Python and PySpark for large-scale data processing
Proven hands-on experience with AWS services: Lambda, SNS, SQS, Glue, Redshift, Step Functions (see the boto3 messaging sketch below)
Solid SQL skills and familiarity with data modeling and query optimization
Experience with ETL best practices, data quality checks, and monitoring/alerting
Familiarity with version control (Git) and basic DevOps/CI-CD workflows
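To make the SNS/SQS requirement concrete, here is a small boto3 sketch of the fan-out pattern such pipelines often use; the topic ARN, queue URL, and message shape are illustrative assumptions, not details from this posting.

# SNS -> SQS event handoff with boto3; ARNs, URLs, and payloads are assumptions.
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:pipeline-events"  # hypothetical
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/etl-events"  # hypothetical

# Producer: announce that a new batch has landed
sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({"dataset": "orders", "status": "landed"}))

# Consumer: long-poll the subscribed queue and process each event
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    # Without raw message delivery, SQS receives the SNS envelope; the payload sits in "Message"
    event = json.loads(json.loads(msg["Body"])["Message"])
    print("processing", event["dataset"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

In production the consumer side would more often be a Lambda function triggered by the queue rather than a polling loop.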
Pay: $60.00 - $65.00 per hour
Expected hours: 40 per week
Benefits:
401(k)
Dental insurance
Flexible schedule
Health insurance
Vision insurance
Work Location: In person





