Highbrow Technology Inc

Senior Azure Data Engineer (W2/1099 Only)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Azure Data Engineer (W2/1099 Only) with a remote work location. The contract length and pay rate are unspecified. Key skills include Azure Databricks, Azure Data Factory, Python, and Spark. Experience in data pipeline design and enterprise lakehouse architecture is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 1, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Databricks #Batch #ADF (Azure Data Factory) #Data Ingestion #Data Pipeline #NumPy #Spark SQL #Data Processing #Vault #Azure Data Factory #Spark (Apache Spark) #PySpark #Azure #Synapse #Scala #Data Engineering #Automation #SQL (Structured Query Language) #Cloud #Azure Databricks #ADLS (Azure Data Lake Storage) #Data Modeling #DevOps #Delta Lake #Python #Pandas
Role description
Role: Data Engineer
Location: Remote

Job Summary
We are seeking a highly skilled Azure Data Engineer with strong expertise in Databricks, Azure data services, and modern data engineering practices. The ideal candidate will have hands-on experience designing, building, and optimizing scalable data pipelines and enterprise-grade lakehouse architectures in hybrid cloud environments.

Key Responsibilities
• Design, build, and orchestrate scalable data pipelines using Azure-native services.
• Develop and optimize batch and near-real-time data processing solutions using Spark and Databricks.
• Implement and manage enterprise-grade lakehouse architectures (Medallion architecture).
• Perform data processing, validation, and automation using Python (PySpark, Pandas, NumPy).
• Optimize performance of large-scale data systems, including Spark tuning and query optimization.
• Work with CDC (Change Data Capture) pipelines and large-volume data ingestion.
• Collaborate with cross-functional teams to deliver secure, scalable, and high-performance data platforms.
• Implement CI/CD pipelines and DevOps best practices for data engineering workflows.

Required Skills & Experience
• Strong experience with Azure Databricks, including notebooks, clusters, and Delta Lake.
• Hands-on expertise in Azure Data Factory (ADF) and pipeline orchestration.
• Proficiency in Python (PySpark, Pandas, NumPy) and SQL.
• Experience with Spark SQL, performance tuning, and distributed data processing.
• Solid understanding of the Azure ecosystem: ADLS, Synapse Analytics, Azure Functions, Event Grid, Key Vault, Purview.
• Knowledge of dimensional data modeling and analytics solutions.
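For candidates gauging fit, the CDC responsibility above can be illustrated with a minimal, framework-free sketch: replaying a stream of change events into current state with last-writer-wins semantics. This is only an assumption of the kind of logic involved, not the employer's actual pipeline; the field names (`id`, `seq`, `op`) are hypothetical. In a Databricks setting the same pattern is typically expressed with a Delta Lake `MERGE INTO`.

```python
# Minimal sketch of CDC (Change Data Capture) upsert/delete replay in plain Python.
# Field names ("id", "seq", "op") are illustrative assumptions, not from the posting.

def apply_cdc(changes):
    """Collapse a list of change events into current table state.

    Each event is a dict with:
      id  - primary key of the affected row
      seq - monotonically increasing change sequence number
      op  - "upsert" or "delete"
    Returns {id: latest_record} after replaying events in seq order.
    """
    state = {}
    for event in sorted(changes, key=lambda e: e["seq"]):
        if event["op"] == "delete":
            state.pop(event["id"], None)   # drop tombstoned rows
        else:
            state[event["id"]] = event     # later seq wins (last-writer-wins)
    return state

events = [
    {"id": 1, "seq": 1, "op": "upsert", "name": "alpha"},
    {"id": 1, "seq": 3, "op": "upsert", "name": "alpha-v2"},
    {"id": 2, "seq": 2, "op": "upsert", "name": "beta"},
    {"id": 2, "seq": 4, "op": "delete"},
]
current = apply_cdc(events)
```

Replaying the four events leaves one live row: id 1 at its latest version (seq 3), while id 2 is removed by the later delete.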