AWS Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Engineer with 7+ years of experience and additional exposure to Azure and GCP. It is a 12-month remote W2 contract requiring strong skills in SQL, Python, and data governance.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
August 22, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Washington, DC
🧠 - Skills detailed
#Spark (Apache Spark) #Data Science #AWS (Amazon Web Services) #Agile #Data Pipeline #S3 (Amazon Simple Storage Service) #Scrum #SQL (Structured Query Language) #GitHub #Kafka (Apache Kafka) #RDS (Amazon Relational Database Service) #BigQuery #Security #Data Engineering #ADF (Azure Data Factory) #Scala #ETL (Extract, Transform, Load) #ML (Machine Learning) #BI (Business Intelligence) #Infrastructure as Code (IaC) #Jenkins #Data Quality #Redshift #Data Modeling #PySpark #Synapse #Azure #Lambda (AWS Lambda) #Datasets #GDPR (General Data Protection Regulation) #Data Architecture #Azure Data Factory #AWS Glue #Dataflow #Terraform #Compliance #Data Lake #GCP (Google Cloud Platform) #Databricks #Data Governance #Cloud #Python #Athena
Role description
Job Title: AWS Data Engineer (with Azure & GCP)
Location: Remote (Across USA)
Duration: 12 Months
Contract Type: W2 Only (No C2C / No Employer Submissions)

Job Description:
We are seeking a highly skilled Data Engineer with strong expertise in AWS cloud services and additional exposure to Azure and GCP. The candidate will be responsible for designing, developing, and optimizing data pipelines, ETL workflows, and data lake/warehouse solutions across multi-cloud environments.

Responsibilities:
• Design and implement scalable data pipelines and ETL/ELT workflows on AWS, Azure, and GCP.
• Develop and optimize data lake and warehouse architectures for high-performance analytics.
• Work with large-scale structured and unstructured datasets to enable business intelligence and machine learning use cases.
• Collaborate with cross-functional teams, including Data Scientists, Analysts, and Architects.
• Ensure data quality, governance, and compliance standards across cloud platforms.
• Implement CI/CD pipelines for data engineering workflows.
• Troubleshoot queries, tune performance, and optimize costs in cloud environments.

Required Skills:
• 7+ years of experience as a Data Engineer in enterprise environments.
• Strong expertise in AWS (Glue, S3, Redshift, Lambda, EMR, RDS, Athena).
• Working knowledge of Azure (Data Factory, Synapse, Databricks) and GCP (BigQuery, Dataflow, Pub/Sub).
• Proficiency in SQL, Python, PySpark, and ETL development.
• Hands-on experience with data modeling, performance tuning, and cloud-native data architecture.
• Experience with CI/CD tools (GitHub Actions, Jenkins, or similar).
• Strong understanding of data governance, security, and compliance (HIPAA, GDPR, etc.).
• Familiarity with Agile/Scrum methodologies.

Nice to Have:
• Experience with Kafka or Kinesis for real-time streaming.
• Knowledge of Terraform/CloudFormation for infrastructure as code.
• Exposure to ML pipelines and MLOps practices.

Note:
• W2 candidates only (C2C and employer submissions will NOT be considered).