

Jobs via Dice
Senior Data Engineer – Databricks / PySpark / Delta Lake
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with 10+ years of experience, focused on Databricks, PySpark, and Delta Lake. It is a fully remote contract with an initial 3-month term and potential extension, requiring expertise in Azure cloud environments and production-quality code delivery.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 6, 2026
🕒 - Duration
3 to 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
1099 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Cloud #Documentation #Synapse #Data Lake #Data Modeling #Azure ADLS (Azure Data Lake Storage) #DevOps #Databricks #PySpark #Agile #Spark (Apache Spark) #Scala #Logging #ETL (Extract, Transform, Load) #Batch #GIT #Storage #Vault #Azure #Data Pipeline #ADLS (Azure Data Lake Storage) #Data Processing #Azure Data Factory #Compliance #Azure cloud #ACID (Atomicity, Consistency, Isolation, Durability) #Delta Lake #Data Engineering #ADF (Azure Data Factory)
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Genzeon, is seeking the following. Apply via Dice today!
Job Title: Senior Data Engineer – Databricks, PySpark, Delta Lake
Location: USA (100% Remote)
Duration: 3 Months (Initial Contract, with potential extension)
Experience Required: 10+ Years
Role Overview
We are seeking a highly experienced Senior Data Engineer who can take ownership of designing, building, and optimizing enterprise-scale data pipelines on Databricks using PySpark and Delta Lake within an Azure cloud environment. This role requires strong hands-on expertise and the ability to deliver production-quality solutions in a fast-paced, agile setting.
Key Responsibilities
• Design, develop, and maintain end-to-end data pipelines using Databricks and PySpark
• Implement ETL/ELT frameworks for large-scale batch and/or streaming data
• Build and manage Delta Lake tables, ensuring ACID compliance, schema enforcement, and schema evolution (a minimal sketch follows this list)
• Apply data modeling techniques (fact/dimension models, SCDs) to support analytics and reporting
• Optimize Spark jobs for performance, scalability, and cost efficiency
• Collaborate with cross-functional teams including product owners, architects, and DevOps
• Participate in agile ceremonies and contribute to sprint planning and delivery
• Ensure production-grade code quality, including error handling, logging, testing, and documentation
• Troubleshoot and resolve data pipeline and performance issues
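For context, a minimal sketch of the kind of Delta Lake upsert step described in the responsibilities above, as it might look on Databricks with PySpark; the storage paths, table layout, and the order_id business key are placeholders for illustration, not details from this posting:

from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # supplied by the Databricks runtime

# Allow new source columns to merge into the target schema (schema evolution)
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

# Read a raw batch extract from ADLS Gen2 (placeholder path)
raw = spark.read.format("parquet").load(
    "abfss://raw@examplestorage.dfs.core.windows.net/sales/"
)

target_path = "abfss://curated@examplestorage.dfs.core.windows.net/sales_delta"

if not DeltaTable.isDeltaTable(spark, target_path):
    # First load: create the Delta table from the incoming schema
    raw.write.format("delta").mode("overwrite").save(target_path)
else:
    # Incremental load: ACID upsert keyed on a placeholder business key
    (
        DeltaTable.forPath(spark, target_path).alias("t")
        .merge(raw.alias("s"), "t.order_id = s.order_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )

The merge pattern above is one common way to satisfy both ACID compliance and slowly changing data requirements in a single Delta write; a production job would add error handling, logging, and tests around it, as the responsibilities note.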
Required Skills & Qualifications
• 10+ years of experience in Data Engineering
• Strong, hands-on experience with:
• Databricks
• PySpark
• Delta Lake
• Solid understanding of:
• Data pipelines and ETL architecture
• Distributed data processing and Spark internals
• Data modeling and analytics-driven design
• Proven experience working in Azure cloud environments (see the illustrative snippet after this list), including services such as:
• Azure Data Lake Storage (ADLS Gen2)
• Azure Data Factory (ADF)
• Azure Synapse (preferred)
• Azure Key Vault
• Experience delivering production-quality code using Git and CI/CD practices
• Ability to work independently and collaboratively under tight timelines
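As an illustration of the Azure items above (not part of the job description itself), here is one way a Databricks notebook might authenticate to ADLS Gen2 with a service principal whose secret lives in a Key Vault-backed secret scope; the scope name, secret key, storage account, application ID, and tenant ID are all assumed placeholders:

# dbutils and spark are available in Databricks notebooks; scope/key names are hypothetical
client_secret = dbutils.secrets.get(scope="kv-backed-scope", key="sp-client-secret")

account = "examplestorage.dfs.core.windows.net"
spark.conf.set(f"fs.azure.account.auth.type.{account}", "OAuth")
spark.conf.set(
    f"fs.azure.account.oauth.provider.type.{account}",
    "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
)
spark.conf.set(f"fs.azure.account.oauth2.client.id.{account}", "<application-id>")
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{account}", client_secret)
spark.conf.set(
    f"fs.azure.account.oauth2.client.endpoint.{account}",
    "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
)

# Read a curated Delta table over abfss (placeholder container/path)
df = spark.read.format("delta").load(
    "abfss://curated@examplestorage.dfs.core.windows.net/sales_delta"
)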
Preferred Qualifications
• Databricks Certification (Associate or Professional)
• Experience with performance tuning, job orchestration, and Delta optimization strategies
• Exposure to streaming pipelines and structured streaming is a plus (see the sketch below)
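To make the last two items concrete, a hedged sketch of a Delta OPTIMIZE/ZORDER pass and a simple Delta-to-Delta Structured Streaming job; the table paths, ZORDER column, and checkpoint location are assumptions for illustration only:

# Compact small files and co-locate data by a commonly filtered column
spark.sql(
    "OPTIMIZE delta.`abfss://curated@examplestorage.dfs.core.windows.net/sales_delta` "
    "ZORDER BY (order_date)"
)

# Incremental Structured Streaming job: read new Delta commits, write to a curated table
query = (
    spark.readStream.format("delta")
    .load("abfss://raw@examplestorage.dfs.core.windows.net/events_delta")
    .writeStream.format("delta")
    .option("checkpointLocation",
            "abfss://curated@examplestorage.dfs.core.windows.net/_checkpoints/events")
    .trigger(availableNow=True)  # process available data, then stop (batch-style scheduling)
    .start("abfss://curated@examplestorage.dfs.core.windows.net/events_curated")
)
query.awaitTermination()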






