Golden Technology

Sr Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr Data Engineer (W2 Contractor) with a contract length of "X months" and a pay rate of "$Y/hour". It requires 5+ years of experience with Azure Databricks, PySpark, and Delta Lake, with a focus on ETL, data architecture, and DevOps practices.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 20, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Cincinnati Metropolitan Area
🧠 - Skills detailed
#Spark (Apache Spark) #Azure #API (Application Programming Interface) #Delta Lake #DevOps #Data Architecture #ETL (Extract, Transform, Load) #Security #Scala #Automation #Data Lineage #Azure Databricks #Data Security #Data Pipeline #Data Integration #SQL (Structured Query Language) #Data Strategy #ACID (Atomicity, Consistency, Isolation, Durability) #Ansible #Databricks #Terraform #Data Engineering #Documentation #Strategy #PySpark #Infrastructure as Code (IaC) #Jenkins
Role description
••• This position is only for W2 Contractors •••

We are seeking a Senior Databricks Engineer with deep hands-on experience designing and implementing large-scale data solutions on Azure Databricks. The ideal candidate has real-world experience building and troubleshooting production-grade data pipelines, optimizing Spark workloads, managing Delta Lake architecture, and implementing DevOps best practices using IaC and CI/CD automation.

Key Responsibilities
• Design, develop, and maintain data pipelines and ETL solutions in Azure Databricks using PySpark and Delta Lake (a minimal pipeline sketch follows the skills list below).
• Implement data integration frameworks and API-based ingestion using tools like Apigee or Kong.
• Analyze, design, and deliver enterprise data architecture solutions focused on scalability, performance, and governance.
• Implement automation tools and CI/CD pipelines using Jenkins, Ansible, or Terraform.
• Troubleshoot production failures and performance bottlenecks: fix partitioning, caching, shuffle, cluster-sizing, and Z-ordering issues.
• Manage Unity Catalog, enforce data security (row- and column-level access), and maintain data lineage.
• Administer Databricks clusters, jobs, and SQL warehouses, optimizing costs through auto-stop, job clusters, and Photon usage (see the cluster-spec sketch below).
• Collaborate with cross-functional teams to drive data strategy and standards across domains.
• Create and maintain detailed architectural diagrams, interface specs, and data-flow documentation.
• Mentor junior engineers on Databricks, Spark optimization, and Azure data best practices.

Required Skills & Experience
• 5+ years of experience as a Data Engineer with strong hands-on experience in Azure Databricks and PySpark.
• Solid understanding of Delta Lake, Z-ordering, partitioning, OPTIMIZE, and ACID transactions (see the table-maintenance sketch below).
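
To make the pipeline responsibility concrete, here is a minimal PySpark + Delta Lake ETL sketch of the kind the role describes. The storage path, table name, and column names are hypothetical placeholders, not details from this posting.

```python
# Minimal PySpark + Delta Lake ETL sketch. The path, table, and columns
# below are hypothetical illustrations, not this employer's schema.
# Assumes a Databricks runtime (or delta-spark installed locally).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw JSON landed in the lake (hypothetical ADLS path).
raw = spark.read.json("abfss://landing@examplelake.dfs.core.windows.net/orders/")

# Transform: basic deduplication, typing, and validation.
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load: append into a partitioned Delta table; Delta provides the ACID
# transaction guarantees the skills list mentions.
(
    orders.write.format("delta")
          .mode("append")
          .partitionBy("order_date")
          .saveAsTable("analytics.orders")
)
```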
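Likewise, the Delta Lake maintenance skills (OPTIMIZE, Z-ordering) can be driven from Python via the delta-spark package. A hedged sketch, reusing the SparkSession and the hypothetical analytics.orders table from the previous example:

```python
# Table maintenance for the hypothetical analytics.orders table.
# Requires delta-spark (Delta Lake 2.0+), which ships with Databricks
# runtimes; assumes `spark` from the previous sketch.
from delta.tables import DeltaTable

tbl = DeltaTable.forName(spark, "analytics.orders")

# Compact small files and co-locate rows by customer_id so queries that
# filter on it can skip files (Z-ordering). Equivalent SQL:
#   OPTIMIZE analytics.orders ZORDER BY (customer_id)
tbl.optimize().executeZOrderBy("customer_id")
```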
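Finally, the cost levers named in the administration bullet (auto-stop, Photon) map to fields in the Databricks Clusters API. A sketch with example values only; nothing here is a sizing recommendation from the posting.

```python
# Example cluster spec illustrating the cost levers mentioned above.
# Field names follow the Databricks Clusters API 2.0; every value is a
# placeholder chosen for illustration.
import json

cluster_spec = {
    "cluster_name": "etl-dev-cluster",        # hypothetical name
    "spark_version": "14.3.x-scala2.12",      # example LTS runtime
    "node_type_id": "Standard_DS3_v2",        # example Azure VM size
    "num_workers": 4,
    "autotermination_minutes": 30,            # auto-stop when idle
    "runtime_engine": "PHOTON",               # enable the Photon engine
}

print(json.dumps(cluster_spec, indent=2))
```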