

Databricks Lead
Featured Role | Apply direct with Data Freelance Hub
This role is a Databricks Lead position based in Cincinnati, OH, on a contract basis. Key skills include Azure Databricks, Apache Spark, ETL processes, and data governance. Experience with Azure SQL migration and optimization is essential. Pay rate is unspecified.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
May 21, 2025
Project duration
Unknown
Location type
On-site
Contract type
Unknown
Security clearance
Unknown
Location detailed
Cincinnati, OH
Skills detailed
#SQL Queries #Security #Storage #Indexing #Azure #Azure SQL #"ETL (Extract, Transform, Load)" #Apache Spark #Delta Lake #Databricks #Spark (Apache Spark) #SQL (Structured Query Language) #Data Processing #Azure Databricks #Compliance #Scala #Data Governance #Data Pipeline #Migration
Role description
Job role: Databricks Lead
Location: Cincinnati, OH
Job Type: Contract
Job Description
• Design and implement scalable data pipelines and architectures on Azure Databricks.
• Optimize ETL/ELT workflows, ensuring efficiency in data processing, storage, and retrieval.
• Leverage Apache Spark, Delta Lake, and Azure-native services to build high-performance data solutions.
• Ensure best practices in data governance, security, and compliance within Azure environments.
• Troubleshoot and fine-tune Spark jobs for optimal performance and cost efficiency.
• Lead the migration of Azure SQL to Azure Databricks, ensuring a seamless transition of data workloads.
• Design and implement scalable data pipelines to extract, transform, and load (ETL/ELT) data from Azure SQL into Databricks Delta Lake.
• Optimize Azure SQL queries and indexing strategies before migration to enhance performance in Databricks.
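For candidates unfamiliar with the ETL/ELT pattern the bullets above describe, here is a minimal, framework-free sketch of the extract-transform-load flow. In the actual role this would run on Apache Spark in Azure Databricks (e.g. a JDBC read from Azure SQL and a write to a Delta Lake table); plain Python stand-ins are used here only to make the flow concrete, and all table and column names are hypothetical.

```python
def extract(source_rows):
    """Extract: pull raw rows from the source system (stand-in for Azure SQL)."""
    return list(source_rows)

def transform(rows):
    """Transform: clean and reshape rows (normalize casing, drop nulls)."""
    return [
        {"id": r["id"], "city": r["city"].strip().title()}
        for r in rows
        if r.get("city")  # drop rows with a missing city
    ]

def load(rows, target):
    """Load: append transformed rows to the target store (stand-in for Delta Lake)."""
    target.extend(rows)
    return target

# Hypothetical source data from the system being migrated.
source = [
    {"id": 1, "city": "  cincinnati "},
    {"id": 2, "city": None},          # dropped by the transform step
    {"id": 3, "city": "columbus"},
]
delta_table = []
load(transform(extract(source)), delta_table)
print(delta_table)
# [{'id': 1, 'city': 'Cincinnati'}, {'id': 3, 'city': 'Columbus'}]
```

The same three-stage shape carries over to Spark, where `extract` becomes a DataFrame read, `transform` a chain of DataFrame operations, and `load` a write to Delta Lake.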