Jobs via Dice

AWS Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer in Reston, VA, with a contract duration of over 6 months. Key skills include Python, PySpark, AWS services (Glue, Redshift, Lambda), and SQL. An in-person interview is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 7, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Reston, VA
🧠 - Skills detailed
#SQS (Simple Queue Service) #ETL (Extract, Transform, Load) #Amazon Redshift #Data Storage #Storage #Deployment #Redshift #Data Transformations #Lambda (AWS Lambda) #Python #PySpark #SNS (Simple Notification Service) #Monitoring #Scala #Data Engineering #AWS (Amazon Web Services) #Data Pipeline #Data Quality #DevOps #Documentation #SQL (Structured Query Language) #Spark (Apache Spark) #Data Processing #AWS Glue #Version Control #GIT #Data Modeling
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Hexaware Technologies, Inc., is seeking the following. Apply via Dice today!

ROLE: AWS Data Engineer
Location: Reston, VA (day one onsite)
Duration: Full Time/Contract
An in-person interview is required.

Job Description:
Seeking an AWS Data Engineer to design, build, and maintain scalable data pipelines and ETL solutions using Python/PySpark and AWS managed services to support analytics and data product needs.

Key Responsibilities
• Build and maintain ETL pipelines using Python and PySpark on AWS Glue and other compute platforms
• Orchestrate workflows with AWS Step Functions and serverless components (Lambda)
• Implement messaging and event-driven patterns using AWS SNS and SQS
• Design and optimize data storage and querying in Amazon Redshift
• Write performant SQL for data transformations, validation, and reporting
• Ensure data quality, monitoring, error handling, and operational support for pipelines
• Collaborate with data consumers, engineers, and stakeholders to translate requirements into solutions
• Contribute to CI/CD, infrastructure-as-code, and documentation for reproducible deployments

Required Skills
• Strong experience with Python and PySpark for large-scale data processing
• Proven hands-on experience with AWS services: Lambda, SNS, SQS, Glue, Redshift, Step Functions
• Solid SQL skills and familiarity with data modeling and query optimization
• Experience with ETL best practices, data quality checks, and monitoring/alerting
• Familiarity with version control (Git) and basic DevOps/CI-CD workflows
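
To make the first responsibility concrete, the sketch below shows the general shape of a Glue ETL job written in Python/PySpark: read a cataloged table, transform it with the DataFrame API, and write curated output. The database, table, and S3 path names are illustrative placeholders, not details from this posting.

```python
# Minimal AWS Glue job sketch: read a cataloged table, apply a PySpark
# transformation, and write the result to S3. All names are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog (hypothetical names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="analytics_db", table_name="raw_orders"
)

# Transform with the regular PySpark DataFrame API.
orders = source.toDF()
daily_totals = (
    orders.filter(F.col("status") == "COMPLETED")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Write the curated result back to S3 as Parquet (placeholder bucket/prefix).
daily_totals.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_totals/")

job.commit()
```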
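
The Lambda, SNS, and SQS items describe event-driven plumbing between pipeline stages. One common shape, assuming an SQS-triggered Lambda and a hypothetical SNS topic, looks roughly like this:

```python
# Sketch of an SQS-triggered Lambda handler that republishes validated
# events to an SNS topic. The topic ARN and payload fields are placeholders.
import json
import os

import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ.get(
    "TOPIC_ARN", "arn:aws:sns:us-east-1:123456789012:pipeline-events"
)


def handler(event, context):
    """Process each SQS record and notify downstream consumers via SNS."""
    records = event.get("Records", [])
    for record in records:
        payload = json.loads(record["body"])

        # Basic validation before fanning the event out (placeholder rule).
        if "dataset" not in payload:
            # Raising lets Lambda/SQS retry and eventually dead-letter the message.
            raise ValueError(f"Malformed message: {record['messageId']}")

        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="Pipeline event",
            Message=json.dumps(payload),
        )

    return {"processed": len(records)}
```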
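
For the SQL validation and data quality responsibilities, one lightweight option is to run a check query against Redshift from Python via the Redshift Data API; the cluster, schema, and threshold below are assumptions for illustration only.

```python
# Sketch of a row-count data quality check against Redshift using the
# Redshift Data API (boto3). Cluster, database, and user are placeholders.
import time

import boto3

client = boto3.client("redshift-data")

CHECK_SQL = """
    SELECT COUNT(*) AS row_count
    FROM analytics.daily_totals
    WHERE order_date = CURRENT_DATE - 1
"""


def run_quality_check(min_rows: int = 1) -> bool:
    """Return True if yesterday's partition has at least `min_rows` rows."""
    stmt = client.execute_statement(
        ClusterIdentifier="example-cluster",  # placeholder
        Database="analytics",                 # placeholder
        DbUser="etl_user",                    # placeholder
        Sql=CHECK_SQL,
    )

    # Poll until the statement finishes (simplified; production code would
    # add timeouts and surface failures to monitoring/alerting).
    while True:
        desc = client.describe_statement(Id=stmt["Id"])
        if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(1)

    if desc["Status"] != "FINISHED":
        raise RuntimeError(f"Quality check query failed: {desc.get('Error')}")

    result = client.get_statement_result(Id=stmt["Id"])
    row_count = result["Records"][0][0]["longValue"]
    return row_count >= min_rows
```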