Golden Technology

Sr Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr Data Engineer with a contract length of "unknown", offering a pay rate of "unknown". Required skills include Azure Databricks, PySpark, and Delta Lake. Candidates should have 5+ years of data engineering experience and expertise in ETL solutions and DevOps practices.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 13, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Cincinnati, OH
-
🧠 - Skills detailed
#Data Security #ACID (Atomicity, Consistency, Isolation, Durability) #Data Strategy #Data Engineering #Data Lineage #PySpark #Ansible #Scala #Delta Lake #Data Integration #Strategy #Documentation #Jenkins #SQL (Structured Query Language) #Azure #DevOps #ETL (Extract, Transform, Load) #Terraform #Data Pipeline #Security #Spark (Apache Spark) #Infrastructure as Code (IaC) #API (Application Programming Interface) #Automation #Azure Databricks #Databricks #Data Architecture
Role description
We are seeking a Senior Databricks Engineer with deep hands-on experience designing and implementing large-scale data solutions on Azure Databricks. The ideal candidate has real-world experience building and troubleshooting production-grade data pipelines, optimizing Spark workloads, managing Delta Lake architecture, and implementing DevOps best practices using IaC and CI/CD automation.
Key Responsibilities
• Design, develop, and maintain data pipelines and ETL solutions in Azure Databricks using PySpark and Delta Lake (see the sketch after this list).
• Implement data integration frameworks and API-based ingestion using tools like Apigee or Kong.
• Analyze, design, and deliver enterprise data architecture solutions focusing on scalability, performance, and governance.
• Implement automation tools and CI/CD pipelines using Jenkins, Ansible, or Terraform.
• Troubleshoot production failures and performance bottlenecks: fix partitioning, caching, shuffle, cluster sizing, and Z-ordering issues.
• Manage Unity Catalog, enforce data security (row- and column-level access), and maintain data lineage.
• Administer Databricks clusters, jobs, and SQL warehouses, optimizing costs through auto-stop, job clusters, and Photon usage.
• Collaborate with cross-functional teams to drive data strategy and standards across domains.
• Create and maintain detailed architectural diagrams, interface specs, and data flow documentation.
• Mentor junior engineers on Databricks, Spark optimization, and Azure data best practices.
Required Skills & Experience
• 5+ years of experience as a Data Engineer with strong hands-on experience in Azure Databricks and PySpark.
• Solid understanding of Delta Lake, Z-ordering, partitioning, OPTIMIZE, and ACID transactions.
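To make the day-to-day work concrete, here is a minimal sketch of the kind of PySpark and Delta Lake pipeline the responsibilities describe: an idempotent MERGE into a Delta table followed by OPTIMIZE with Z-ordering. It assumes a Databricks runtime (where `spark` is predefined) with Delta Lake available; the landing path, the Unity Catalog table `main.sales.orders`, and the columns `order_id` and `region` are hypothetical placeholders, not details from this posting.

```python
# A minimal pipeline sketch, assuming a Databricks runtime where `spark` is
# predefined and Delta Lake is available. All names below (the landing path,
# the table main.sales.orders, and the columns order_id / region) are
# hypothetical placeholders, not details from this posting.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

# Ingest raw files and stamp each record with its load time.
raw = (
    spark.read.format("json")
    .load("/mnt/landing/orders/")  # hypothetical landing path
    .withColumn("ingested_at", F.current_timestamp())
)

# Idempotent upsert: Delta's ACID transaction log makes the MERGE atomic,
# so a failed job run can simply be retried.
target = DeltaTable.forName(spark, "main.sales.orders")
(
    target.alias("t")
    .merge(raw.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Compact small files and co-locate rows on a common filter column;
# Z-ordering speeds up selective reads against `region`.
spark.sql("OPTIMIZE main.sales.orders ZORDER BY (region)")
```

The MERGE illustrates the ACID, safely re-runnable upserts the requirements mention, while OPTIMIZE with ZORDER addresses the small-file compaction and Z-ordering issues listed under the troubleshooting responsibility.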