

TEK NINJAS
Azure Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Databricks Engineer with 8+ years of experience, offering a remote/hybrid contract. Key skills required include Azure Databricks, PySpark, Delta Lake, and strong SQL/Python development. Pay rate is competitive.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
December 4, 2025
Duration
Unknown
-
Location
Hybrid
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
United States
-
Skills detailed
#Python #Kafka (Apache Kafka) #Spark (Apache Spark) #SQL (Structured Query Language) #Scala #Synapse #Data Transformations #ADF (Azure Data Factory) #Azure #Delta Lake #Azure DevOps #Azure Databricks #DevOps #MLflow #ETL (Extract, Transform, Load) #PySpark #Batch #Azure cloud #ADLS (Azure Data Lake Storage) #Cloud #Data Engineering #Databricks
Role description
Job Title: Azure Databricks Engineer
Location: Remote / Hybrid
Experience: 8+ Years
Role Summary:
We are hiring an expert-level Azure Databricks Engineer to design, optimize, and manage large-scale data engineering workloads on Databricks. This role requires deep hands-on experience in Spark, Delta Lake, and Azure cloud services.
Key Responsibilities:
• Build and optimize ETL/ELT pipelines using Azure Databricks.
• Implement Delta Lake, medallion architecture, and advanced data transformations.
• Develop scalable PySpark/Spark workflows for batch & streaming.
• Integrate with ADF, ADLS, Synapse, and Azure Event Hubs/Kafka streams.
• Drive performance tuning, cluster optimization, and cost management.
Required Skills:
• Azure Databricks, PySpark, Delta Lake, MLflow
• ADF, ADLS Gen2, Synapse
• Strong SQL & Python development experience
• Azure DevOps CI/CD pipelines





