Golden Technology

Sr Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr Data Engineer with 5+ years of experience in Azure Databricks and PySpark, focused on designing data pipelines and ETL solutions. The contract length and pay rate are unspecified, and it is an on-site position open to local candidates only.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
January 15, 2026
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Cincinnati Metropolitan Area
🧠 - Skills detailed
#Data Integration #Data Lineage #Terraform #Jenkins #Security #API (Application Programming Interface) #ACID (Atomicity, Consistency, Isolation, Durability) #Automation #Data Pipeline #Infrastructure as Code (IaC) #ETL (Extract, Transform, Load) #Azure #Spark (Apache Spark) #PySpark #Azure Databricks #Delta Lake #SQL (Structured Query Language) #DevOps #Scala #Databricks #Data Engineering #Data Security #Data Architecture #Documentation #Ansible #Strategy #Data Strategy
Role description
• • • This position is only for local candidates • • •

We are seeking a Senior Databricks Engineer with deep hands-on experience designing and implementing large-scale data solutions on Azure Databricks. The ideal candidate has real-world experience building and troubleshooting production-grade data pipelines, optimizing Spark workloads, managing Delta Lake architecture, and implementing DevOps best practices using IaC and CI/CD automation.

Key Responsibilities
• Design, develop, and maintain data pipelines and ETL solutions in Azure Databricks using PySpark and Delta Lake (see the pipeline sketch below).
• Implement data integration frameworks and API-based ingestion using tools like Apigee or Kong.
• Analyze, design, and deliver enterprise data architecture solutions focusing on scalability, performance, and governance.
• Implement automation tools and CI/CD pipelines using Jenkins, Ansible, or Terraform.
• Troubleshoot production failures and performance bottlenecks: fix partitioning, caching, shuffle, cluster sizing, and Z-ordering issues (see the table-maintenance sketch below).
• Manage Unity Catalog, enforce data security (row/column-level access), and maintain data lineage (see the governance sketch below).
• Administer Databricks clusters, jobs, and SQL warehouses, optimizing costs through auto-stop, job clusters, and Photon usage.
• Collaborate with cross-functional teams to drive data strategy and standards across domains.
• Create and maintain detailed architectural diagrams, interface specs, and data flow documentation.
• Mentor junior engineers on Databricks, Spark optimization, and Azure data best practices.

Required Skills & Experience
• 5+ years of experience as a Data Engineer with strong hands-on experience in Azure Databricks and PySpark.
• Solid understanding of Delta Lake, Z-ordering, partitioning, OPTIMIZE, and ACID transactions.
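To make the pipeline responsibility concrete, here is a minimal sketch of a PySpark + Delta Lake upsert of the kind the role describes. The landing path, table name, and columns (raw order events keyed on order_id) are hypothetical; Delta Lake's MERGE supplies the ACID upsert semantics listed under required skills.

```python
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

# On Databricks a `spark` session already exists; this line is for local runs.
spark = SparkSession.builder.getOrCreate()

# Ingest raw events (hypothetical landing path) and apply light cleansing.
raw = spark.read.format("json").load("/mnt/raw/orders/")
clean = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# ACID upsert into a Delta table (hypothetical name `sales.orders`):
# matched rows are updated and new rows inserted in one atomic transaction.
(
    DeltaTable.forName(spark, "sales.orders").alias("t")
    .merge(clean.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```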
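The troubleshooting bullet names partitioning, caching, shuffle, and Z-ordering specifically. Continuing the sketch above (same hypothetical `clean` DataFrame and table names), routine Delta table maintenance under those headings might look like this; OPTIMIZE ... ZORDER BY is the standard Databricks command for compacting small files and co-locating rows.

```python
# Partition on a low-cardinality column so queries filtering on order_date
# prune whole partitions instead of scanning the full table.
(
    clean.write.format("delta")
    .partitionBy("order_date")
    .mode("append")
    .saveAsTable("sales.orders_by_day")  # hypothetical table name
)

# Compact small files and Z-order on a high-cardinality filter column so
# data skipping works for point lookups by customer.
spark.sql("OPTIMIZE sales.orders_by_day ZORDER BY (customer_id)")

# Two common shuffle fixes: enable adaptive query execution so Spark
# right-sizes shuffle partitions, and cache only DataFrames reused repeatedly.
spark.conf.set("spark.sql.adaptive.enabled", "true")
hot = clean.filter(F.col("order_date") >= "2026-01-01").cache()
```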
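For the Unity Catalog governance bullet, row filters and column masks are the Databricks mechanism for row- and column-level access control. The sketch below assumes hypothetical table, function, and group names; IS_ACCOUNT_GROUP_MEMBER is a built-in Databricks SQL function.

```python
# Row-level security: attach a predicate function as a ROW FILTER so that
# non-members of the (hypothetical) sales_admins group only see US rows.
spark.sql("""
    CREATE OR REPLACE FUNCTION sales.us_only(region STRING)
    RETURN IS_ACCOUNT_GROUP_MEMBER('sales_admins') OR region = 'US'
""")
spark.sql("ALTER TABLE sales.orders SET ROW FILTER sales.us_only ON (region)")

# Column-level security: a masking function hides email addresses from
# everyone outside the (hypothetical) pii_readers group.
spark.sql("""
    CREATE OR REPLACE FUNCTION sales.mask_email(email STRING)
    RETURN CASE WHEN IS_ACCOUNT_GROUP_MEMBER('pii_readers')
                THEN email ELSE 'REDACTED' END
""")
spark.sql("ALTER TABLE sales.customers ALTER COLUMN email SET MASK sales.mask_email")
```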