TechTriad

Databricks Data Engineer AWS - Hybrid in NY

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with 7–10+ years of experience, focusing on Databricks and AWS, on a 6-month contract at a pay rate of "$X/hour". Candidates must be U.S. citizens (USC) or green card holders (GC) and local to NY.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 29, 2025
🕒 - Duration
6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, United States
-
🧠 - Skills detailed
#Data Pipeline #Data Quality #SQL (Structured Query Language) #PySpark #Data Engineering #Kubernetes #Lambda (AWS Lambda) #AWS S3 (Amazon Simple Storage Service) #Spark (Apache Spark) #Deployment #ML (Machine Learning) #Delta Lake #Model Deployment #Python #Scala #MLflow #Observability #Data Science #Databricks #Airflow #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Java
Role description
USC or GC ONLY. Locals ONLY. No vendors.

Summary: Seeking a Senior Data Engineer to design, build, and optimize large-scale data systems powering machine learning and analytics. The role focuses on developing the Feature Store, building robust data pipelines, and ensuring scalable, efficient performance across Databricks and AWS environments.

Responsibilities:
• Build and optimize pipelines using Databricks (PySpark, Delta Lake, SQL) and AWS (S3, Glue, EMR, Lambda, Kinesis)
• Develop and maintain a centralized Feature Store
• Support model deployment, CI/CD, and data quality frameworks
• Collaborate with data scientists and ML engineers to productionize ML workflows

Qualifications:
• 7–10+ years in data engineering or distributed systems
• Expertise with Databricks and AWS
• Strong skills in Python (preferred), Scala, or Java
• Experience with Feature Stores, ML pipelines, and CI/CD

Preferred: Experience with Unity Catalog, MLflow, Airflow, Kubernetes, and data observability tools