TechTriad
Senior AWS Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior AWS Data Engineer, offering an 8+ month W2 contract in Washington, DC or New York, NY. Requires 7–10+ years in data engineering, expertise in Databricks and AWS, and proficiency in Python.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 31, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, United States
-
🧠 - Skills detailed
#AWS (Amazon Web Services) #Distributed Computing #Python #Data Engineering #Lambda (AWS Lambda) #AWS S3 (Amazon Simple Storage Service) #Batch #Data Science #Databricks #Java #MLflow #ML (Machine Learning) #Scala #SQL (Structured Query Language) #Spark (Apache Spark) #Data Pipeline #S3 (Amazon Simple Storage Service) #Delta Lake #PySpark #Data Quality #ETL (Extract, Transform, Load) #SageMaker
Role description
Senior Data Engineer
📍 Location: Washington, DC (20006) or New York, NY (10002) – Hybrid, 3 days onsite
🕓 Duration: 8+ Months (Possible Extension)
💼 Type: W2 Contract Role
About the Role
We’re seeking an experienced Senior Data Engineer to design, build, and optimize scalable data systems that power our machine learning models, analytics, and measurement pipelines. You’ll play a key role in building robust data pipelines, feature stores, and ML workflows using Databricks and AWS technologies.
This is a hands-on engineering position where you’ll collaborate with data scientists, ML engineers, and product teams to transform analytical concepts into production-ready data pipelines that drive personalization, audience targeting, and performance insights across the business.
Required Qualifications
• 7–10+ years of experience in data engineering or distributed data systems.
• Strong expertise with Databricks (PySpark, Delta Lake, SQL) and AWS (S3, Glue, EMR, Kinesis, Lambda).
• Proven experience designing and managing Feature Stores (Databricks Feature Store, Feast, or similar).
• Proficiency in Python (or Scala/Java) with strong coding and optimization skills.
• Experience with batch and streaming pipelines, large-scale distributed computing, and ML model workflows (MLflow, SageMaker).
• Working knowledge of CI/CD, data quality frameworks, and infrastructure-as-code.
• Excellent communication and problem-solving abilities with a collaborative mindset.