

Azure Databricks Lead
Featured Role | Apply direct with Data Freelance Hub
This role is for an "Azure Databricks Lead" in Erlanger, Kentucky, on a contract basis. It requires 5+ years of Python programming experience, 3+ years of hands-on Azure Databricks experience, and expertise in CI/CD pipelines, Delta Live Tables, and Unity Catalog. The pay rate is unspecified.
Country: United States
Currency: $ USD
Day rate: Unspecified
Date discovered: July 17, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Erlanger, KY
Skills detailed: #Data Processing #Data Quality #Monitoring #Debugging #Python #Data Engineering #Azure Blob Storage #Data Ingestion #Deployment #Compliance #Data Science #Azure #Spark (Apache Spark) #Version Control #ETL (Extract, Transform, Load) #Databricks #Pandas #GIT #NumPy #Data Governance #DevOps #Leadership #Azure DevOps #Cloud #Scala #Storage #Jenkins #Business Analysis #Libraries #Programming #GitHub #Azure Databricks #PySpark #ADLS (Azure Data Lake Storage)
Role description
Job Title: Azure Databricks Lead
Location: Erlanger, Kentucky (On-site)
Contract
Overview:
We are seeking a highly skilled and motivated Senior Technical Lead to drive the development and deployment of Azure Databricks-based solutions. This role requires deep expertise in:
• Python programming
• Databricks administration
• CI/CD pipelines for Databricks artifacts
• Delta Live Tables
• Auto Loader
• Unity Catalog
• Databricks Asset Bundles
The ideal candidate will bring a strong technical background, proven team leadership experience, and a passion for building high-quality, scalable solutions.
Key Responsibilities:
Python Programming
• Lead the development of robust, scalable, and efficient Python code for data processing, transformation, and analysis within Databricks (a brief sketch follows this list).
• Ensure adherence to coding standards, quality, and performance best practices.
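By way of illustration, work of this kind might look like the minimal PySpark sketch below. The catalog, table, and column names (main.sales.raw_orders, order_total, and so on) are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession named `spark` already exists; getOrCreate() is a no-op there.
spark = SparkSession.builder.getOrCreate()

# Hypothetical source table; substitute a real catalog/schema/table.
orders = spark.read.table("main.sales.raw_orders")

# Typical cleanse-and-aggregate step: drop malformed rows, derive a line total,
# and summarize per customer per day.
daily_totals = (
    orders
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_total", F.col("quantity") * F.col("unit_price"))
    .groupBy("customer_id", F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("order_total").alias("daily_total"))
)

daily_totals.write.mode("overwrite").saveAsTable("main.sales.daily_order_totals")
```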
Azure Databricks Administration
• Oversee platform configuration, resource management, cluster optimization, and monitoring.
• Implement best practices for managing workspaces, libraries, and notebooks.
CI/CD for Databricks Artifacts
• Design and implement CI/CD pipelines to automate deployment of Databricks artifacts (notebooks, libraries, jobs, Delta tables); see the sketch after this list.
• Utilize tools like Azure DevOps, GitHub Actions, or Jenkins.
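As a rough illustration of what such automation involves, the sketch below publishes a notebook through the Databricks Workspace REST API using the requests library (which also appears in the qualifications below). The host, token, and paths are placeholder environment variables; in a real pipeline this step would run inside an Azure DevOps, GitHub Actions, or Jenkins job with credentials drawn from its secret store.

```python
import base64
import os

import requests

# Placeholders: a CI system would inject these from its secret store.
HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-<id>.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]

def deploy_notebook(local_path: str, workspace_path: str) -> None:
    """Create or overwrite a workspace notebook via POST /api/2.0/workspace/import."""
    with open(local_path, "rb") as f:
        content = base64.b64encode(f.read()).decode("ascii")
    resp = requests.post(
        f"{HOST}/api/2.0/workspace/import",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "path": workspace_path,
            "content": content,
            "language": "PYTHON",
            "format": "SOURCE",
            "overwrite": True,
        },
        timeout=30,
    )
    resp.raise_for_status()

deploy_notebook("notebooks/etl_job.py", "/Shared/etl_job")
```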
Delta Live Tables
• Design and manage Delta Live Tables pipelines for real-time data processing (a minimal example follows this list).
• Ensure data quality, reliability, and optimal performance.
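For instance, a minimal Delta Live Tables pipeline with a declarative quality expectation could be sketched as follows; the storage path, table names, and constraint are hypothetical.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events landed from cloud storage (hypothetical path).")
def events_raw():
    return (
        spark.readStream.format("cloudFiles")   # Auto Loader as the streaming source
        .option("cloudFiles.format", "json")
        .load("abfss://landing@myaccount.dfs.core.windows.net/events/")
    )

@dlt.table(comment="Cleansed events with a basic quality gate.")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")  # rows failing this are dropped
def events_clean():
    return dlt.read_stream("events_raw").withColumn(
        "ingested_at", F.current_timestamp()
    )
```

The expect_or_drop decorator is one of DLT's built-in expectation types; expect (warn) and expect_or_fail (abort) trade strictness against pipeline availability.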
Auto Loader
• Implement and optimize Auto Loader for scalable, fault-tolerant data ingestion from cloud storage (see the sketch below).
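A representative Auto Loader stream from ADLS Gen2, with schema tracking and checkpointing for fault tolerance, might look like the sketch below; the storage account, container, and target table are placeholders.

```python
# Runs in a Databricks notebook, where `spark` is predefined.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    # Auto Loader persists the inferred schema (and its evolution) here.
    .option("cloudFiles.schemaLocation",
            "abfss://landing@myaccount.dfs.core.windows.net/_schemas/orders")
    .load("abfss://landing@myaccount.dfs.core.windows.net/orders/")
)

(
    stream.writeStream
    # Checkpointing makes the ingest restartable with exactly-once semantics.
    .option("checkpointLocation",
            "abfss://landing@myaccount.dfs.core.windows.net/_checkpoints/orders")
    .trigger(availableNow=True)   # process the backlog, then stop
    .toTable("main.sales.orders_bronze")
)
```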
Unity Catalog
• Configure and manage Unity Catalog for centralized data governance, access control, lineage, and compliance (see the sketch below).
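Unity Catalog privileges are plain SQL grants, which can be issued from a notebook via spark.sql; in the sketch below the catalog, schema, table, and the data_analysts group are hypothetical names.

```python
# Grant a hypothetical analyst group read access down the securable hierarchy:
# catalog -> schema -> table. Runs in a notebook where `spark` is predefined.
for stmt in [
    "GRANT USE CATALOG ON CATALOG main TO `data_analysts`",
    "GRANT USE SCHEMA ON SCHEMA main.sales TO `data_analysts`",
    "GRANT SELECT ON TABLE main.sales.daily_order_totals TO `data_analysts`",
]:
    spark.sql(stmt)
```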
Databricks Asset Bundles
• Leverage Databricks Asset Bundles to manage and share reusable components across teams and projects (see the sketch below).
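Asset Bundles are declared in a databricks.yml at the project root (YAML rather than Python, since that is the bundle format). The minimal sketch below defines one job; the bundle name, notebook path, and workspace URL are placeholders.

```yaml
# databricks.yml -- minimal bundle with a single job; all names are placeholders.
bundle:
  name: sales_etl

resources:
  jobs:
    daily_etl:
      name: daily-etl
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ./notebooks/etl_job.py

targets:
  dev:
    workspace:
      host: https://adb-1234567890123456.7.azuredatabricks.net
```

Such a bundle is typically checked with `databricks bundle validate` and rolled out with `databricks bundle deploy -t dev`, which slots naturally into the CI/CD pipelines described above.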
Team Leadership
• Mentor and guide junior team members.
• Foster a collaborative environment that encourages innovation and personal growth.
• Provide technical leadership and support to drive successful outcomes.
Collaboration & Communication
• Work closely with data engineers, data scientists, and business analysts to understand requirements.
• Deliver end-to-end solutions.
• Communicate effectively with technical and non-technical stakeholders.
Required Qualifications:
Python Programming
• 5+ years of experience focused on data engineering and data science workflows.
• Expertise in designing and building scalable ETL pipelines with libraries such as Pandas, NumPy, PySpark, and requests.
• Proficiency in debugging, optimizing, and maintaining Python code in distributed environments.
Azure Databricks
• 3+ years of hands-on experience in production Databricks environments.
• Strong knowledge of clusters, workspaces, and runtime configurations.
• Ability to optimize and troubleshoot Spark-based jobs and notebooks.
CI/CD Pipelines
• Proven experience setting up and managing CI/CD pipelines for Databricks artifacts.
• Familiarity with Git and tools such as Azure DevOps, GitHub Actions, or Jenkins.
• Experience integrating Databricks with external version control repositories.
Delta Live Tables
• Strong experience designing, managing, and optimizing DLT pipelines for real-time, reliable data processing.
• Knowledge of automated data quality checks and validation within DLT.
Auto Loader
• Practical experience with Auto Loader for real-time ingestion from cloud storage (Azure Blob Storage, ADLS Gen2).
• Understanding of performance optimization best practices.
Unity Catalog
• In-depth experience implementing Unity Catalog for centralized data governance.
• Proven ability to set up fine-grained access controls and manage permissions.