

TechClub Inc
AWS Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer on a W2 contract requiring 9+ years of experience, offering a competitive pay rate. Key skills include Python, PySpark, AWS services (Lambda, Glue, Redshift), SQL, and ETL best practices.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 12, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, United States
-
🧠 - Skills detailed
#Data Modeling #Lambda (AWS Lambda) #SNS (Simple Notification Service) #GIT #Scala #Documentation #Data Pipeline #Amazon Redshift #Spark (Apache Spark) #DevOps #Data Quality #Python #Data Transformations #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Storage #Deployment #SQS (Simple Queue Service) #AWS Glue #PySpark #Redshift #AWS (Amazon Web Services) #Version Control #Data Engineering #Monitoring #Data Storage #Data Processing
Role description
Job Title: AWS Data Engineer
Experience: 9+ years of genuine experience
Position: Contract W2 Only - No C2C
Job Description:
Seeking an AWS Data Engineer to design, build, and maintain scalable data pipelines and ETL solutions using Python/PySpark and AWS managed services to support analytics and data product needs.
Key Responsibilities
• Build and maintain ETL pipelines using Python and PySpark on AWS Glue and other compute platforms (see the Glue job sketch after this list)
• Orchestrate workflows with AWS Step Functions and serverless components (Lambda)
• Implement messaging and event-driven patterns using AWS SNS and SQS
• Design and optimize data storage and querying in Amazon Redshift
• Write performant SQL for data transformations, validation, and reporting
• Ensure data quality, monitoring, error handling, and operational support for pipelines
• Collaborate with data consumers, engineers, and stakeholders to translate requirements into solutions
• Contribute to CI/CD, infrastructure-as-code, and documentation for reproducible deployments
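For context on the Glue/PySpark responsibilities above, here is a minimal sketch of the kind of Glue ETL job the role describes. It is illustrative only: the catalog database (raw_db), table (orders), column names, and S3 path are all hypothetical, and a real job would add the data-quality checks and error handling called out above.

import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job setup: resolve the job name and initialize contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table from the Glue Data Catalog (hypothetical names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Transform with plain PySpark: keep completed orders, derive a date column.
df = (
    source.toDF()
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("created_at"))
)

# Write the curated output to S3 as Parquet, partitioned by date.
glue_context.write_dynamic_frame.from_options(
    frame=DynamicFrame.fromDF(df, glue_context, "curated_orders"),
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/",
                        "partitionKeys": ["order_date"]},
    format="parquet",
)
job.commit()

Loading into Redshift would typically go through a similar Glue sink (a JDBC connection with a temporary S3 staging directory) rather than row-by-row inserts.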
Required Skills
• Strong experience with Python and PySpark for large-scale data processing
• Proven hands-on experience with AWS services: Lambda, SNS, SQS, Glue, Redshift, Step Functions (see the Lambda sketch after this list)
• Solid SQL skills and familiarity with data modeling and query optimization
• Experience with ETL best practices, data quality checks, and monitoring/alerting
• Familiarity with version control (Git) and basic DevOps/CI-CD workflows
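As a companion to the SNS/SQS and Lambda skills above, here is a hedged sketch of the event-driven pattern the role mentions: a Lambda handler consuming a batch of SQS messages that originated from an SNS topic. The payload shape (a JSON object with a "key" field) is assumed purely for illustration.

import json

def handler(event, context):
    # With an SQS event source mapping, Lambda delivers a batch of
    # messages under event["Records"]; when the queue is subscribed to
    # an SNS topic, each record body is an SNS envelope whose "Message"
    # field carries the application payload.
    for record in event["Records"]:
        envelope = json.loads(record["body"])      # SNS envelope
        payload = json.loads(envelope["Message"])  # assumed JSON payload
        # Hypothetical processing step; a real pipeline would kick off
        # or resume the relevant ETL work here.
        print(f"processing object: {payload.get('key')}")
    # An empty batchItemFailures list reports the whole batch as
    # successful (used with the ReportBatchItemFailures setting).
    return {"batchItemFailures": []}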