

SIDRAM TECHNOLOGIES
AWS Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer in Reston, VA, with a contract length of over 6 months at a pay rate of $60/hr on W2. Requires strong Python/PySpark and AWS services expertise along with solid SQL skills. Only US citizen (USC) and Green Card (GC) candidates are eligible.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
480
-
🗓️ - Date
March 25, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Reston, VA
-
🧠 - Skills detailed
#Version Control #PySpark #GIT #Lambda (AWS Lambda) #SNS (Simple Notification Service) #Deployment #Data Quality #Scala #AWS Glue #Monitoring #Data Transformations #Data Modeling #Python #Data Processing #AWS (Amazon Web Services) #Data Pipeline #Documentation #SQS (Simple Queue Service) #Data Storage #ETL (Extract, Transform, Load) #Data Engineering #Spark (Apache Spark) #Redshift #SQL (Structured Query Language) #DevOps #Amazon Redshift #Storage
Role description
Role: AWS Data Engineer
Location: Reston, VA (in-person interview at Herndon, VA)
Duration: Full-time/Contract
Onsite: 5 days onsite
Eligibility: US citizens (USC) and Green Card (GC) holders only
Rate: $60/hr on W2
Job Description
Seeking an AWS Data Engineer to design, build, and maintain scalable data pipelines and ETL solutions using Python/PySpark and AWS managed services to support analytics and data product needs.
Key Responsibilities
• Build and maintain ETL pipelines using Python and PySpark on AWS Glue and other compute platforms
• Orchestrate workflows with AWS Step Functions and serverless components (Lambda)
• Implement messaging and event-driven patterns using AWS SNS and SQS
• Design and optimize data storage and querying in Amazon Redshift
• Write performant SQL for data transformations, validation, and reporting
• Ensure data quality, monitoring, error handling, and operational support for pipelines
• Collaborate with data consumers, engineers, and stakeholders to translate requirements into solutions
• Contribute to CI/CD, infrastructure-as-code, and documentation for reproducible deployments
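As a rough illustration of the pipeline work described above, here is a minimal sketch of an ETL transform with a data-quality gate. Plain Python stands in for PySpark/Glue so the example runs without Spark or AWS; the field names and null-rate threshold are hypothetical.

```python
# Toy sketch of an ETL step: transform raw records, then enforce a
# data-quality rule before "loading". Field names and the threshold
# are illustrative assumptions, not part of the job description.

def transform(records):
    """Normalize amounts to cents and uppercase the region code."""
    out = []
    for r in records:
        out.append({
            "id": r["id"],
            "amount_cents": None if r["amount"] is None else round(r["amount"] * 100),
            "region": r["region"].upper(),
        })
    return out

def quality_check(records, field, max_null_rate=0.1):
    """Fail the batch if too many rows have a null in `field`."""
    nulls = sum(1 for r in records if r[field] is None)
    rate = nulls / len(records) if records else 0.0
    if rate > max_null_rate:
        raise ValueError(f"null rate {rate:.0%} for {field!r} exceeds {max_null_rate:.0%}")
    return rate

raw = [
    {"id": 1, "amount": 12.5, "region": "us-east"},
    {"id": 2, "amount": 3.2, "region": "eu-west"},
    {"id": 3, "amount": None, "region": "us-east"},
]
clean = transform(raw)
null_rate = quality_check(clean, "amount_cents", max_null_rate=0.5)
```

In a Glue job the same check would typically run as a DataFrame aggregation before the write step, so bad batches fail loudly instead of landing in Redshift.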
Required Skills
• Strong experience with Python and PySpark for large-scale data processing
• Proven hands-on experience with AWS services: Lambda, SNS, SQS, Glue, Redshift, Step Functions
• Solid SQL skills and familiarity with data modeling and query optimization
• Experience with ETL best practices, data quality checks, and monitoring/alerting
• Familiarity with version control (Git) and basic DevOps/CI-CD workflows
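To make the SNS/SQS requirement concrete, here is a toy sketch of an idempotent queue consumer, the standard pattern for SQS's at-least-once delivery. A list stands in for the queue and the message IDs are hypothetical, so it runs without boto3 or AWS credentials.

```python
# Toy sketch of an event-driven consumer: SQS can deliver the same
# message more than once, so the handler tracks processed IDs and
# skips duplicates. The queue and IDs here are simulated assumptions.

processed_ids = set()   # in practice this would be a durable store
results = []

def handle_message(msg):
    """Process a message exactly once, even if the queue redelivers it."""
    if msg["id"] in processed_ids:
        return False            # duplicate delivery: skip
    results.append(msg["body"].strip().lower())
    processed_ids.add(msg["id"])
    return True

# Simulated at-least-once delivery: message m-1 arrives twice.
queue = [
    {"id": "m-1", "body": "  OrderCreated "},
    {"id": "m-2", "body": "OrderShipped"},
    {"id": "m-1", "body": "  OrderCreated "},  # redelivery
]
handled = [handle_message(m) for m in queue]
```

With real SQS the same shape applies: receive, check a dedup key, process, then delete the message so it is not redelivered.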
Rachael
IT Services | Development | Staffing
URL: http://www.sidramtech.com
Email: rachael@sidramtech.com
Direct: (470) 523-9688
Led by 25+ Years of Industry Experience






