New York Technology Partners

Data Engineer (Azure) ($45/hr)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (Azure) at a pay rate of $45/hr; the contract length is unknown. Key skills include Azure Databricks, PySpark, SQL, and data modeling. The role is remote and requires 5+ years of experience with Python and data warehousing.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
360
-
πŸ—“οΈ - Date
April 7, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Programming #BI (Business Intelligence) #Data Modeling #Scrum #Agile #Databricks #Data Quality #Data Engineering #Azure cloud #Azure Databricks #ML (Machine Learning) #Azure Data Factory #Data Processing #Data Ingestion #Spark (Apache Spark) #Scala #Azure #Complex Queries #AI (Artificial Intelligence) #Data Warehouse #Apache Airflow #Data Pipeline #Airflow #ETL (Extract, Transform, Load) #Cloud #Python #PySpark #ADF (Azure Data Factory) #SQL (Structured Query Language)
Role description
We are seeking a highly skilled Senior Data Engineer to join a distributed team responsible for building and supporting enterprise-scale data platforms. This role will partner closely with both onshore and offshore teams to design, develop, and maintain scalable, high-performance data solutions within the Azure ecosystem. The ideal candidate brings deep expertise in Azure Databricks, PySpark, data warehousing, and Azure Data Factory, along with strong programming and data modeling capabilities. Exposure to AI/ML data workflows is a plus.

Key Responsibilities
• Collaborate with globally distributed engineering teams to deliver scalable and reliable data solutions
• Design and implement enterprise-grade data models to support analytics and reporting needs
• Build, optimize, and maintain data pipelines using Azure Databricks, PySpark, and Azure Data Factory
• Develop and enhance data warehouse architectures for business intelligence and advanced analytics
• Ensure high standards of data quality, integrity, performance, and reliability across all data platforms
• Support data ingestion, transformation, and preparation for AI/ML use cases
• Participate in Agile/Scrum ceremonies and contribute to continuous improvement initiatives

Required Skills & Experience
• Python: 5+ years of hands-on development experience
• SQL: 6–8 years of experience writing complex queries and optimizing performance
• Data modeling: 5+ years designing and implementing enterprise-level data models
• Strong experience with Azure cloud data services
• Hands-on experience with Azure Databricks
• PySpark: 5–7 years building scalable data processing and transformation pipelines
• Experience developing and orchestrating workflows using Azure Data Factory

Preferred Qualifications
• Experience with Apache Airflow or similar orchestration tools
• Exposure to or experience supporting AI/ML data pipelines
• Experience working in Agile/Scrum environments
• Proven ability to collaborate effectively with globally distributed teams