KANINI

Databricks Data Engineering Lead

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Data Engineering Lead, a remote, full-time contract position based out of Nashville, TN. Required skills include 8+ years of data engineering experience, expertise in Azure services, and healthcare data experience. Microsoft certifications are preferred. Expected duration exceeds 6 months.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
December 2, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Nashville Metropolitan Area
-
🧠 - Skills detailed
#Agile #DevOps #ML (Machine Learning) #Scala #Data Storage #Azure SQL #Data Governance #Databricks #Azure cloud #Microsoft Power BI #Azure Data Factory #BI (Business Intelligence) #GIT #Azure #PySpark #Data Engineering #Big Data #Spark (Apache Spark) #Data Quality #Python #Storage #Batch #FHIR (Fast Healthcare Interoperability Resources) #Synapse #Hadoop #Data Architecture #Data Access #ETL (Extract, Transform, Load) #Data Lake #Security #Compliance #Kafka (Apache Kafka) #Strategy #Delta Lake #Data Pipeline #ADF (Azure Data Factory) #Azure DevOps #Data Science #IoT (Internet of Things) #SQL (Structured Query Language) #Data Processing #Cloud
Role description
Title: Databricks Data Engineering Lead
Location: Nashville, TN - Remote
Employment Type: Full-time / Contract
Work Authorization: We are currently unable to sponsor visas or use third-party agencies. Only candidates who are visa-independent will be considered.

OVERVIEW:
We are seeking an accomplished Databricks Data Engineering Lead with deep expertise in the Azure data ecosystem and Databricks, advanced data engineering, and healthcare data science. This role is critical to our mission of leveraging data to transform healthcare outcomes. As part of our data engineering team, you will collaborate with data scientists, analysts, and software engineers to deliver secure, high-performance solutions that adhere to healthcare standards and compliance requirements. The ideal candidate will have a proven ability to design and scale enterprise-grade data pipelines, optimize workflows in the cloud, and deliver predictive insights that directly impact patient care and operational efficiency.

KEY RESPONSIBILITIES:
• Lead the design and architecture of end-to-end data warehousing and data lake solutions on the Databricks platform, incorporating best practices for scalability, performance, security, and cost optimization.
• Design, build, and optimize scalable ETL/ELT pipelines in Azure for diverse healthcare data, including EHR, claims, IoT, and unstructured sources.
• Implement advanced data storage and processing solutions leveraging Azure Data Lake, Synapse Analytics, Databricks, and Azure SQL.
• Develop and optimize Delta Lake tables following the Medallion architecture (Bronze, Silver, Gold); an illustrative PySpark sketch appears at the end of this description.
• Orchestrate ETL workflows via Azure Data Factory (ADF) and Databricks Workflows.
• Implement data access controls, account-level permissions, and object ownership via Unity Catalog (example grants appear at the end of this description).
• Collaborate with architects, developers, and business stakeholders to ensure data quality, lineage, and governance.
• Troubleshoot and optimize ETL pipelines for performance, scalability, and cost efficiency.
• Support CI/CD integration using Databricks Repos, Git, and Azure DevOps.
• Partner with data scientists to develop, deploy, and operationalize machine learning models and predictive analytics solutions supporting healthcare use cases.
• Ensure data quality, governance, security, and compliance with HIPAA, PHI/PII, and other healthcare regulations.
• Collaborate cross-functionally to translate business needs into robust technical solutions, aligning data engineering practices with healthcare standards (FHIR, HL7).
• Contribute to the evolution of our data architecture strategy, incorporating best practices for cloud optimization, data governance, and emerging healthcare data trends.

SKILLS & QUALIFICATIONS:
Required:
• 8+ years of experience as a Data Engineer, with extensive work in Azure cloud services.
• Strong proficiency in SQL, Python, PySpark, and Databricks for data engineering and advanced analytics.
• Hands-on expertise with Azure Data Factory, Azure Synapse, Azure Data Lake, and Power BI.
• Experience designing and implementing machine learning solutions in a healthcare environment.
• Working knowledge of FHIR, HL7, and other healthcare interoperability standards.
• Familiarity with DevOps practices and CI/CD pipelines for automating data workflows.
• Strong analytical and problem-solving skills; proven ability to deliver in fast-paced, agile environments.

Preferred:
• Experience with Big Data ecosystems (Kafka, Hadoop, etc.) for real-time and batch data processing.
• Knowledge of data governance frameworks and compliance regulations in healthcare.
• Microsoft certifications such as Azure Data Engineer Associate or Databricks certifications.
• Understanding of cloud cost optimization strategies for large-scale data workloads.

Why Join Us?
At Kanini, you will have the opportunity to:
• Shape the future of healthcare data through innovative cloud and data science solutions.
• Work with cutting-edge technologies while ensuring compliance and security in one of the most critical industries.
• Collaborate with a passionate, mission-driven team that values innovation, agility, and excellence.
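To give a flavor of the Medallion-style Delta Lake work named in the responsibilities, here is a minimal PySpark sketch of a Bronze-to-Silver step on Databricks. The landing path, table names, and columns (claims data, claim_id, claim_amount) are illustrative assumptions, not details from this role.

# Minimal Bronze -> Silver Delta Lake sketch (illustrative; names are placeholders).
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession is already provided; getOrCreate() reuses it.
spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw claims data as-is, tagged with ingestion metadata.
raw = (
    spark.read.format("json")
    .load("/mnt/raw/claims/")              # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
raw.write.format("delta").mode("append").saveAsTable("bronze.claims")

# Silver: apply basic cleansing and de-duplication for downstream use.
silver = (
    spark.table("bronze.claims")
    .dropDuplicates(["claim_id"])          # hypothetical business key
    .filter(F.col("claim_amount") > 0)     # drop obviously invalid records
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.claims")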
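Likewise, the Unity Catalog access controls and object ownership mentioned above typically reduce to grants like the following, issued here through spark.sql so they run from the same PySpark context. The healthcare catalog, silver schema, and the data-analysts / data-platform-admins group names are assumptions for illustration only.

# Hypothetical Unity Catalog grants: give an analyst group read access to a
# curated schema and assign ownership of a table to a platform group.
spark.sql("GRANT USE CATALOG ON CATALOG healthcare TO `data-analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA healthcare.silver TO `data-analysts`")
spark.sql("GRANT SELECT ON TABLE healthcare.silver.claims TO `data-analysts`")
spark.sql("ALTER TABLE healthcare.silver.claims OWNER TO `data-platform-admins`")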