

Golden Technology
Senior Databricks Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Databricks Engineer with an unspecified contract length and pay rate, located in Cincinnati, OH. Candidates should have 5+ years of experience in Azure Databricks, PySpark, and Delta Lake, along with strong data engineering skills.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 23, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Cincinnati, OH
-
🧠 - Skills detailed
#ETL (Extract, Transform, Load) #Strategy #Automation #Spark (Apache Spark) #Data Architecture #Terraform #PySpark #DevOps #Ansible #SQL (Structured Query Language) #API (Application Programming Interface) #Documentation #Scala #Databricks #Azure #Data Security #Jenkins #Delta Lake #Data Integration #Data Engineering #Data Lineage #Data Pipeline #Data Strategy #Azure Databricks #Security #Infrastructure as Code (IaC) #ACID (Atomicity, Consistency, Isolation, Durability)
Role description
We are seeking a Senior Databricks Engineer with deep hands-on experience designing and implementing large-scale data solutions on Azure Databricks. The ideal candidate has real-world experience building and troubleshooting production-grade data pipelines, optimizing Spark workloads, managing Delta Lake architecture, and implementing DevOps best practices using IaC and CI/CD automation.
Key Responsibilities
• Design, develop, and maintain data pipelines and ETL solutions in Azure Databricks using PySpark and Delta Lake.
• Implement data integration frameworks and API-based ingestion using tools like Apigee or Kong.
• Analyze, design, and deliver enterprise data architecture solutions focusing on scalability, performance, and governance.
• Implement automation tools and CI/CD pipelines using Jenkins, Ansible, or Terraform.
• Troubleshoot production failures and performance bottlenecks — fix partitioning, caching, shuffle, cluster sizing, and Z-ordering issues.
• Manage Unity Catalog, enforce data security (row/column-level access), and maintain data lineage.
• Administer Databricks clusters, jobs, and SQL warehouses, optimizing costs through auto-stop, job clusters, and Photon usage.
• Collaborate with cross-functional teams to drive data strategy and standards across domains.
• Create and maintain detailed architectural diagrams, interface specs, and data flow documentation.
• Mentor junior engineers on Databricks, Spark optimization, and Azure data best practices.
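The shuffle and cluster-sizing troubleshooting called out above often starts with right-sizing `spark.sql.shuffle.partitions`. A minimal, cluster-free sketch of one common rule of thumb (the 128 MB-per-task target and function name are illustrative assumptions, not part of this posting):

```python
# Rough heuristic for choosing spark.sql.shuffle.partitions:
# aim for roughly target_mb of shuffle data per task. All numbers
# here are illustrative assumptions, not values from this posting.

def suggested_shuffle_partitions(shuffle_bytes: int,
                                 target_mb: int = 128,
                                 min_partitions: int = 200) -> int:
    """Return a partition count so each shuffle task handles ~target_mb."""
    target_bytes = target_mb * 1024 * 1024
    needed = -(-shuffle_bytes // target_bytes)  # ceiling division
    return max(needed, min_partitions)

# e.g. a 100 GB shuffle at 128 MB per task -> 800 partitions
print(suggested_shuffle_partitions(100 * 1024**3))
```

On a real job the result would feed `spark.conf.set("spark.sql.shuffle.partitions", ...)`; the floor of 200 simply mirrors Spark's default.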
Required Skills & Experience
• 5+ years of experience as a Data Engineer with strong hands-on experience in Azure Databricks and PySpark.
• Solid understanding of Delta Lake, Z-ordering, partitioning, OPTIMIZE, and ACID transactions.
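The Delta Lake maintenance knowledge listed above often comes down to spotting the small-file problem before running OPTIMIZE. A minimal sketch of that check, runnable without a cluster (the 128 MB threshold, ratio, and function name are illustrative assumptions; real file sizes would come from the table's file listing):

```python
# Sketch of a small-file check that might precede an OPTIMIZE run.
# In practice the sizes would come from a Delta table's file listing;
# here they are plain integers so the logic is testable anywhere.

SMALL_FILE_BYTES = 128 * 1024 * 1024  # common compaction threshold (assumption)

def needs_optimize(file_sizes: list[int], small_ratio: float = 0.5) -> bool:
    """True when more than small_ratio of a table's files are 'small'."""
    if not file_sizes:
        return False
    small = sum(1 for s in file_sizes if s < SMALL_FILE_BYTES)
    return small / len(file_sizes) > small_ratio

# A table dominated by 1 MB files is a compaction candidate:
sizes = [1 * 1024 * 1024] * 90 + [256 * 1024 * 1024] * 10
print(needs_optimize(sizes))  # True
```

When the check fires, the corresponding action on Databricks would be a statement along the lines of `OPTIMIZE my_table ZORDER BY (key_col)`.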






