Jobs via Dice

Senior Databricks Engineer with Python Experience

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Databricks Engineer with Python experience, offering a long-term, fully on-site contract in Wilmington, DE. Key skills include Databricks, PySpark, SQL, and cloud platforms (Azure preferred). Strong data engineering and ETL experience required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
December 16, 2025
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Wilmington, DE
🧠 - Skills detailed
#Data Quality #Data Engineering #ML (Machine Learning) #Cloud #Data Governance #PySpark #Security #Data Pipeline #Scala #Data Science #Snowflake #Consulting #Databricks #AWS (Amazon Web Services) #SQL (Structured Query Language) #ADF (Azure Data Factory) #Documentation #Data Warehouse #Python #Spark (Apache Spark) #Storage #Azure #Batch #Delta Lake #GCP (Google Cloud Platform) #Airflow #BI (Business Intelligence) #ETL (Extract, Transform, Load)
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Alltech Consulting Services, Inc., is seeking the following. Apply via Dice today!

Job Title: Senior Databricks Engineer with Python Experience
Location: Wilmington, DE (5 days on-site; no hybrid)
Contract: Long-term

Job Description:
We are looking for a Senior Data Engineer with strong experience in Databricks, PySpark, and modern data warehouse systems. The ideal candidate can design, build, and optimize scalable data pipelines and work closely with analytics, product, and engineering teams.

Key Responsibilities:
• Design and build ETL/ELT pipelines using Databricks and PySpark (see the sketches after this description)
• Develop and maintain data models and data warehouse structures (dimensional modeling, star/snowflake schemas)
• Optimize data workflows for performance, scalability, and cost
• Work with cloud platforms (Azure/AWS/Google Cloud Platform) for storage, compute, and orchestration
• Ensure data quality, reliability, and security across pipelines
• Collaborate with cross-functional teams (Data Science, BI, Product)
• Write clean, reusable code and follow engineering best practices
• Troubleshoot issues in production data pipelines

Required Skills:
• Strong hands-on skills in Databricks, PySpark, and SQL
• Experience with data warehouse concepts, ETL frameworks, and batch/streaming pipelines
• Solid understanding of Delta Lake and Lakehouse architecture
• Experience with at least one cloud platform (Azure preferred)
• Experience with workflow orchestration tools (Airflow, ADF, Prefect, etc.)

Nice to Have:
• Experience with CI/CD for data pipelines
• Knowledge of data governance tools (Unity Catalog or similar)
• Exposure to ML data preparation pipelines

Soft Skills:
• Strong communication and documentation skills
• Ability to work independently and mentor others
• Problem-solver with a focus on delivering business value
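For candidates gauging the expected level: below is a minimal sketch of the kind of batch ETL pipeline the responsibilities describe, i.e. raw files cleansed with PySpark and written to a Delta table. The paths, the orders schema, and the cleansing rules are hypothetical; writing in the delta format assumes a Databricks runtime (or the open-source delta-spark package).

```python
# Minimal batch ETL sketch: raw JSON -> cleansed Delta table.
# Paths and schema are hypothetical, for illustration only.
from pyspark.sql import SparkSession, functions as F

# On Databricks a `spark` session already exists; getOrCreate() is a no-op there.
spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw JSON landed by an upstream process (hypothetical path).
raw = spark.read.json("/mnt/raw/orders/")

# Transform: deduplicate, drop bad records, derive a partition column.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write to a Delta table, partitioned for downstream query performance.
(clean.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .save("/mnt/curated/orders"))
```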
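And a minimal sketch of scheduling that pipeline with one of the orchestration tools the posting lists (Airflow), assuming Airflow 2.4+ and a hypothetical run_orders_etl entry point; in practice a Databricks job would more likely be triggered through the Databricks provider's operators.

```python
# Minimal daily orchestration sketch (Airflow 2.4+ assumed).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_orders_etl():
    # Hypothetical placeholder for kicking off the Databricks job
    # (e.g. via the Databricks provider's DatabricksSubmitRunOperator).
    ...

with DAG(
    dag_id="orders_etl_daily",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_orders_etl", python_callable=run_orders_etl)
```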