

The Zig
Senior Data Engineer – Databricks
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer – Databricks; the contract length, pay rate, and location are unspecified. Key skills include Azure Databricks, PySpark, SQL, and CI/CD. It requires 8+ years in data engineering, preferably with utilities/energy sector experience.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
April 7, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Modeling #Agile #Databricks #DevOps #Spark SQL #Data Quality #Synapse #Data Engineering #ADLS (Azure Data Lake Storage) #Azure Databricks #Automated Testing #GitHub #Spark (Apache Spark) #Monitoring #Scala #Azure #Data Governance #Azure DevOps #Automation #Data Pipeline #ETL (Extract, Transform, Load) #Metadata #Delta Lake #PySpark #ADF (Azure Data Factory) #SQL (Structured Query Language)
Role description
We are hiring a Senior Data Engineer (Databricks) to build and scale modern data platforms using Databricks, Delta Live Tables (DLT), and Asset Bundles. This role requires a hands-on engineer with strong coding skills and experience in metadata-driven frameworks and CI/CD automation. You will work on designing scalable, reusable, and production-grade pipelines supporting enterprise analytics and data products.
Key Responsibilities
• Build and optimize data pipelines using Databricks (PySpark, SQL)
• Develop and manage Delta Live Tables (DLT) pipelines
• Implement metadata-driven frameworks for scalable ingestion and transformation
• Design pipelines following Medallion Architecture (Bronze/Silver/Gold); a minimal sketch follows this list
• Develop and deploy solutions using Databricks Asset Bundles
• Implement CI/CD pipelines (Dev → QA → Prod) with automated testing
• Ensure data quality, monitoring, and performance optimization
• Collaborate with cross-functional teams and mentor junior engineers
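As context for the DLT and Medallion items above, here is a minimal sketch of what such a pipeline can look like. It assumes a DLT pipeline environment (where `spark` is provided automatically); the table names, columns, and storage path are hypothetical, borrowed from a metering scenario:

```python
# Minimal DLT pipeline sketch following the Medallion Architecture.
# Runs inside a Databricks DLT pipeline, where `spark` is predefined.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw meter readings ingested as-is")
def bronze_meter_readings():
    # Auto Loader incrementally picks up new files from cloud storage
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/raw/meter_readings/")  # hypothetical landing path
    )

@dlt.table(comment="Silver: cleaned and deduplicated readings")
@dlt.expect_or_drop("valid_reading", "reading_kwh IS NOT NULL AND reading_kwh >= 0")
def silver_meter_readings():
    return (
        dlt.read_stream("bronze_meter_readings")
        .withColumn("reading_ts", F.to_timestamp("reading_ts"))
        .dropDuplicates(["meter_id", "reading_ts"])
    )

@dlt.table(comment="Gold: daily consumption per meter")
def gold_daily_consumption():
    return (
        dlt.read("silver_meter_readings")
        .groupBy("meter_id", F.to_date("reading_ts").alias("reading_date"))
        .agg(F.sum("reading_kwh").alias("total_kwh"))
    )
```

DLT infers the dependency graph from the `dlt.read` / `dlt.read_stream` calls, so the Bronze → Silver → Gold ordering is resolved automatically.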
Required Skills
• Strong experience with Azure Databricks
• Advanced coding in PySpark and SQL
• Hands-on experience with:
  • Delta Live Tables (DLT)
  • Databricks Asset Bundles
• Experience building metadata-driven data pipelines (see the sketch after this list)
• Strong understanding of Delta Lake & Medallion Architecture
• Experience with CI/CD (Azure DevOps, GitHub, etc.)
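For the metadata-driven requirement above, one common pattern is to drive table creation from a config structure and generate one DLT table per entry via a factory function. The sketch below uses an inline list with hypothetical names, paths, and formats; real frameworks typically load this metadata from a control table or config file:

```python
# Sketch of a metadata-driven ingestion loop for DLT.
# All source names, paths, and formats are illustrative.
import dlt

SOURCES = [
    {"name": "customers", "path": "/mnt/raw/customers/", "format": "json"},
    {"name": "sites",     "path": "/mnt/raw/sites/",     "format": "csv"},
]

def make_bronze_table(source: dict):
    # A factory function binds `source` per call, avoiding the
    # late-binding pitfall of decorating directly inside the loop.
    @dlt.table(
        name=f"bronze_{source['name']}",
        comment=f"Bronze ingest of {source['name']}, driven by metadata",
    )
    def _bronze():
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", source["format"])
            .load(source["path"])
        )

for src in SOURCES:
    make_bronze_table(src)  # registers one DLT table per metadata entry
```

Adding a new source then means adding one metadata entry rather than writing a new pipeline, which is the reusability this role emphasizes.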
Nice to Have
• Experience with Unity Catalog & data governance
• Exposure to Azure ecosystem (ADF, ADLS, Synapse, Fabric)
• Knowledge of data modeling (Kimball)
• Experience with streaming pipelines (a brief sketch follows this list)
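On the streaming point, here is a brief sketch of a plain Structured Streaming job (outside DLT) that aggregates hourly consumption and appends to a Delta table. The paths, schema, and 2-hour watermark are assumptions for illustration:

```python
# Structured Streaming sketch: hourly aggregation with late-data handling.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("streaming-example").getOrCreate()

events = (
    spark.readStream.format("json")
    .schema("meter_id STRING, reading_kwh DOUBLE, reading_ts TIMESTAMP")
    .load("/mnt/raw/events/")  # hypothetical landing path
)

hourly = (
    events
    .withWatermark("reading_ts", "2 hours")  # tolerate up to 2h of late data
    .groupBy("meter_id", F.window("reading_ts", "1 hour"))
    .agg(F.sum("reading_kwh").alias("kwh"))
)

query = (
    hourly.writeStream
    .outputMode("append")  # a window is emitted once the watermark passes it
    .format("delta")
    .option("checkpointLocation", "/mnt/chk/hourly_kwh/")
    .start("/mnt/gold/hourly_kwh/")
)
```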
Domain Experience (Big Plus)
• Utilities / Energy / Renewable sector experience (SCADA, metering systems, energy data pipelines)
Candidate Profile
• 8+ years in data engineering
• Strong hands-on + architecture mindset
• Experience building scalable and reusable frameworks
• Comfortable in Agile / fast-paced environments






