

New York Technology Partners
Data Engineer (Azure) ($45/hr)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (Azure) at a pay rate of $45/hr; the contract length is unknown. Key skills include Azure Databricks, PySpark, SQL, and data modeling. The position is remote and requires 5+ years of experience in Python and data warehousing.
Country
United States
Currency
$ USD
Day rate
$360
Date
April 7, 2026
Duration
Unknown
Location
Remote
Contract
Unknown
Security
Unknown
Location detailed
United States
Skills detailed
#Programming #BI (Business Intelligence) #Data Modeling #Scrum #Agile #Databricks #Data Quality #Data Engineering #Azure cloud #Azure Databricks #ML (Machine Learning) #Azure Data Factory #Data Processing #Data Ingestion #Spark (Apache Spark) #Scala #Azure #Complex Queries #AI (Artificial Intelligence) #Data Warehouse #Apache Airflow #Data Pipeline #Airflow #ETL (Extract, Transform, Load) #Cloud #Python #PySpark #ADF (Azure Data Factory) #SQL (Structured Query Language)
Role description
We are seeking a highly skilled Senior Data Engineer to join a distributed team responsible for building and supporting enterprise-scale data platforms. This role will partner closely with both onshore and offshore teams to design, develop, and maintain scalable, high-performance data solutions within the Azure ecosystem.
The ideal candidate brings deep expertise in Azure Databricks, PySpark, Data Warehousing, and Azure Data Factory, along with strong programming and data modeling capabilities. Exposure to AI/ML data workflows is a plus.
Key Responsibilities
• Collaborate with globally distributed engineering teams to deliver scalable and reliable data solutions
• Design and implement enterprise-grade data models to support analytics and reporting needs
• Build, optimize, and maintain data pipelines using Azure Databricks, PySpark, and Azure Data Factory
• Develop and enhance data warehouse architectures for business intelligence and advanced analytics
• Ensure high standards of data quality, integrity, performance, and reliability across all data platforms
• Support data ingestion, transformation, and preparation for AI/ML use cases
• Participate in Agile/Scrum ceremonies and contribute to continuous improvement initiatives
Required Skills & Experience
• Python: 5+ years of hands-on development experience
• SQL: 6–8 years of experience writing complex queries and optimizing performance
• Data Modeling: 5+ years designing and implementing enterprise-level data models
• Strong experience with Azure cloud data services
• Hands-on experience with Azure Databricks
• PySpark: 5–7 years building scalable data processing and transformation pipelines
• Experience developing and orchestrating workflows using Azure Data Factory
Preferred Qualifications
• Experience with Apache Airflow or similar orchestration tools
• Exposure to or experience supporting AI/ML data pipelines
• Experience working in Agile/Scrum environments
• Proven ability to collaborate effectively with globally distributed teams
