

Highbrow Technology Inc
Senior Azure Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Azure Data Engineer on a hybrid contract of unspecified length and pay rate. Key skills include Azure Databricks, Azure Data Factory, Python, and experience with data lake architectures.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 20, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#NumPy #Pandas #DevOps #Scala #ADLS (Azure Data Lake Storage) #PySpark #Azure Databricks #Databricks #Azure Data Factory #Batch #Data Engineering #Data Ingestion #SQL (Structured Query Language) #Azure #Spark (Apache Spark) #Python #Cloud #Delta Lake #Data Lake #Data Modeling #Data Processing #Data Pipeline #ADF (Azure Data Factory) #Spark SQL #Automation #Synapse #Vault
Role description
Job Summary
We are seeking a highly skilled Senior Azure Data Engineer with strong expertise in Databricks, Azure data services, and modern data engineering practices. The ideal candidate will have hands-on experience designing, building, and optimizing scalable data pipelines and enterprise-grade lakehouse architectures in hybrid cloud environments.
Key Responsibilities
• Design, build, and orchestrate scalable data pipelines using Azure-native services.
• Develop and optimize batch and near real-time data processing solutions using Spark and Databricks.
• Implement and manage enterprise-grade Lakehouse architectures (Medallion architecture).
• Perform data processing, validation, and automation using Python (PySpark, Pandas, NumPy).
• Optimize performance of large-scale data systems, including Spark tuning and query optimization.
• Work with CDC (Change Data Capture) pipelines and large-volume data ingestion.
• Collaborate with cross-functional teams to deliver secure, scalable, and high-performance data platforms.
• Implement CI/CD pipelines and DevOps best practices for data engineering workflows.
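The CDC responsibility above usually lands as a Delta Lake MERGE in Databricks; as a minimal sketch of the upsert/delete semantics involved (plain Python dicts standing in for Delta tables, with hypothetical `id`/`op` field names):

```python
# Illustrative sketch of CDC merge semantics (upsert + delete), using plain
# Python dicts in place of Delta Lake tables. The "id" key and "op" field
# names are hypothetical; in Databricks this logic would be a Delta MERGE.

def apply_cdc(target: dict, changes: list[dict]) -> dict:
    """Apply a batch of change events to a keyed target table."""
    result = dict(target)
    for event in changes:
        key = event["id"]
        if event["op"] == "delete":
            result.pop(key, None)      # tolerate deletes for absent keys
        else:                          # "insert" and "update" both upsert
            result[key] = {k: v for k, v in event.items() if k != "op"}
    return result

target = {1: {"id": 1, "name": "alice"}, 2: {"id": 2, "name": "bob"}}
changes = [
    {"op": "update", "id": 1, "name": "alicia"},
    {"op": "delete", "id": 2},
    {"op": "insert", "id": 3, "name": "carol"},
]
merged = apply_cdc(target, changes)
```

On the platform itself this collapses to a single statement of the form `MERGE INTO target USING changes ON target.id = changes.id ...`, with matched/not-matched clauses covering the update, delete, and insert cases.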
Required Skills & Experience
• Strong experience with Azure Databricks, including notebooks, clusters, and Delta Lake.
• Hands-on expertise in Azure Data Factory (ADF) and pipeline orchestration.
• Proficiency in Python (PySpark, Pandas, NumPy) and SQL.
• Experience with Spark SQL, performance tuning, and distributed data processing.
• Solid understanding of Azure ecosystem: ADLS, Synapse Analytics, Azure Functions, Event Grid, Key Vault, Purview.
• Experience designing and implementing data lake / lakehouse architectures.
• Knowledge of dimensional data modeling and analytics solutions.
• Familiarity with DevOps and CI/CD automation in data platforms.
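The dimensional-modeling and SQL items above come down to star-schema joins over fact and dimension tables; a self-contained sketch (SQLite standing in for Synapse or Spark SQL, with hypothetical table and column names):

```python
import sqlite3

# Minimal star schema: one fact table joined to one dimension, aggregated
# by a dimension attribute. Table and column names are hypothetical; the
# query itself would run essentially unchanged in Spark SQL or Synapse.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
    INSERT INTO fact_sales  VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")
rows = con.execute("""
    SELECT d.category, SUM(f.amount) AS total
    FROM fact_sales f
    JOIN dim_product d USING (product_id)
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
```

The same shape scales to multiple conformed dimensions (date, customer, geography) joined to a central fact table, which is the pattern the role's analytics work would build on.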
Preferred Qualifications
• Experience with Microsoft Fabric.
• Proven track record of improving pipeline performance (e.g., 30–40% optimization).
• Exposure to hybrid cloud environments.
• Prior experience working with large enterprise clients.
Additional Information
• Proven experience delivering production-grade, scalable data solutions.
• Strong alignment with senior-level Azure data engineering roles.