TechTriad

Data Engineer AWS - Hybrid in NY

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (AWS) with 7–10+ years of experience, focused on Databricks and AWS environments. The position is hybrid in New York, NY; contract length and pay rate are unspecified; and only local USC or GC candidates will be considered.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
October 24, 2025
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
New York, NY
🧠 - Skills detailed
#Data Engineering #ML (Machine Learning) #Java #AWS (Amazon Web Services) #PySpark #Lambda (AWS Lambda) #Kubernetes #MLflow #Data Science #Delta Lake #Databricks #Scala #Python #Spark (Apache Spark) #Data Quality #S3 (Amazon Simple Storage Service) #SQL (Structured Query Language) #Observability #Model Deployment #AWS S3 (Amazon Simple Storage Service) #Deployment #Airflow #Data Pipeline
Role description
USC or GC only. Locals only. No vendors.

Summary: Seeking a Senior Data Engineer to design, build, and optimize large-scale data systems powering machine learning and analytics. The role focuses on developing the Feature Store, building robust data pipelines, and ensuring scalable, efficient performance across Databricks and AWS environments.

Responsibilities:
• Build and optimize pipelines using Databricks (PySpark, Delta Lake, SQL) and AWS (S3, Glue, EMR, Lambda, Kinesis)
• Develop and maintain a centralized Feature Store
• Support model deployment, CI/CD, and data quality frameworks
• Collaborate with data scientists and ML engineers to productionize ML workflows

Qualifications:
• 7–10+ years in data engineering or distributed systems
• Expertise with Databricks and AWS
• Strong skills in Python (preferred), Scala, or Java
• Experience with Feature Stores, ML pipelines, and CI/CD

Preferred: Experience with Unity Catalog, MLflow, Airflow, Kubernetes, and data observability tools