

TechTriad
Senior Data Engineer (AWS) - Hybrid in NY
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (AWS) in New York, hybrid; contract length and pay rate are unspecified. It requires 7–10+ years in data engineering, expertise in Databricks and AWS, and strong Python skills. US citizens or Green Card holders only.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
October 26, 2025
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
New York, NY
🧠 - Skills detailed
#Airflow #Delta Lake #Python #S3 (Amazon Simple Storage Service) #Scala #Data Engineering #Data Science #Data Quality #Model Deployment #Java #Kubernetes #AWS (Amazon Web Services) #PySpark #SQL (Structured Query Language) #Databricks #Observability #Spark (Apache Spark) #AWS S3 (Amazon Simple Storage Service) #Lambda (AWS Lambda) #Data Pipeline #ML (Machine Learning) #Deployment #MLflow
Role description
USC or GC ONLY
Locals ONLY
NO Vendors
Summary:
Seeking a Senior Data Engineer to design, build, and optimize large-scale data systems powering machine learning and analytics. The role focuses on developing the Feature Store, building robust data pipelines, and ensuring scalable, efficient performance across Databricks and AWS environments.
Responsibilities:
• Build and optimize data pipelines using Databricks (PySpark, Delta Lake, SQL) and AWS (S3, Glue, EMR, Lambda, Kinesis); a sketch of such a pipeline follows this list
• Develop and maintain a centralized Feature Store
• Support model deployment, CI/CD, and data quality frameworks
• Collaborate with data scientists and ML engineers to productionize ML workflows
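To illustrate the stack, here is a minimal sketch of a PySpark feature pipeline of the kind this role describes, assuming a Databricks environment where Delta Lake is available; the S3 path, table name, and column names are hypothetical.

from pyspark.sql import SparkSession, functions as F

# Read raw events from S3 (bucket and path are illustrative).
spark = SparkSession.builder.appName("feature-pipeline").getOrCreate()
events = spark.read.json("s3://example-bucket/events/")

# Aggregate simple per-user features.
features = (
    events.groupBy("user_id")
          .agg(F.count("*").alias("event_count"),
               F.max("event_ts").alias("last_seen"))
)

# Persist as a Delta table that a centralized Feature Store could register
# (table name is hypothetical).
features.write.format("delta").mode("overwrite").saveAsTable("feature_store.user_activity")

On Databricks, a table like this could then be registered with the platform's feature engineering tooling; that step is omitted here.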
Qualifications:
• 7–10+ years in data engineering or distributed systems
• Expertise with Databricks and AWS
• Strong skills in Python (preferred), Scala, or Java
• Experience with Feature Stores, ML pipelines, and CI/CD
Preferred:
Experience with Unity Catalog, MLflow, Airflow, Kubernetes, and data observability tools
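For the preferred orchestration tooling, a minimal Airflow 2.x sketch scheduling a daily feature refresh; the DAG id and callable are hypothetical, and in practice the task might instead use the Databricks provider's DatabricksRunNowOperator.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def build_features():
    # Placeholder for triggering the Databricks feature job (hypothetical).
    pass

with DAG(
    dag_id="feature_store_refresh",   # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="build_features", python_callable=build_features)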






