New York Technology Partners

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a 6-month contract, remote, with a day rate of $360. Key skills include Azure Databricks, Azure Data Factory, Python, SQL, and data warehousing. Healthcare experience is preferred; no sponsorship available.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
360
🗓️ - Date
April 8, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#AI (Artificial Intelligence) #Databricks #Azure Data Factory #PySpark #Spark (Apache Spark) #Azure Databricks #ADF (Azure Data Factory) #BI (Business Intelligence) #ETL (Extract, Transform, Load) #Data Modeling #Apache Airflow #Data Quality #Azure cloud #Agile #Python #Scrum #Azure #Airflow #Cloud #Data Pipeline #Scala #Data Engineering #SQL (Structured Query Language) #ML (Machine Learning) #Data Transformations #Complex Queries #Data Integration #Data Warehouse
Role description
We are currently partnered with a digital consultancy looking for a Senior Data Engineer with a strong focus on Azure Databricks, Azure Data Factory, and data warehouses. This is a 6-month contract position to start, with the potential to extend or convert to full-time. Healthcare experience is preferred. No sponsorship is provided at this time, and third-party candidates will not be accepted.

Job Overview:
We are seeking a Senior Data Engineer to join a distributed data engineering team supporting enterprise-scale data platforms. This role will work closely with onshore and offshore engineers to design, build, and maintain scalable data solutions on Azure. The ideal candidate has strong experience with Python, SQL, Databricks, and modern data pipelines, with exposure to AI/ML data integration.

Your Impact:
• Collaborate with onshore and offshore engineering teams to deliver scalable data solutions
• Design and implement enterprise data models for analytics and reporting use cases
• Build and maintain data pipelines using Azure Databricks, PySpark, and Azure Data Factory
• Develop and optimize data warehouse solutions to support business intelligence and analytics
• Ensure data quality, reliability, and performance across pipelines and platforms
• Support data preparation and integration for AI/ML initiatives
• Participate in Agile/Scrum ceremonies and contribute to continuous improvement

Your Skills & Experience:
• Python: 5+ years of hands-on development experience
• SQL: 6–8 years of experience writing complex queries and optimizing performance
• Data Modeling: 5+ years of experience with enterprise-level data modeling
• Strong experience with Azure cloud services for data engineering
• Hands-on experience with Azure Databricks
• PySpark: 5–7 years of experience building scalable data transformations
• Experience creating and orchestrating pipelines using Azure Data Factory

Preferred Qualifications:
• Experience with Apache Airflow
• Exposure to or experience supporting AI/ML data pipelines
• Experience working in Agile/Scrum environments
• Prior experience working with globally distributed teams