

Experis UK
Databricks Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Engineer in London, offering a 6+ month contract at an umbrella rate, inside IR35. Key skills include DBT, Apache Airflow, Databricks, strong SQL, and Python. Hybrid work requires 3 days onsite weekly.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 3, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
London, England, United Kingdom
-
🧠 - Skills detailed
#Azure DevOps #Data Modeling #Apache Airflow #dbt (data build tool) #Monitoring #Data Vault #GitLab #Datasets #ETL (Extract, Transform, Load) #Azure #GitHub #Version Control #Delta Lake #DevOps #Data Engineering #Spark SQL #Scripting #Airflow #Data Quality #Documentation #Scala #GCP (Google Cloud Platform) #BI (Business Intelligence) #SQL (Structured Query Language) #Data Pipeline #Databricks #Python #Vault #Cloud #AWS (Amazon Web Services) #Deployment #Spark (Apache Spark) #Data Science
Role description
London - hybrid - 3 days per week on-site
6 months+
Umbrella only - Inside IR35
Key Responsibilities
• Design, develop, and maintain ETL/ELT pipelines using Airflow for orchestration and scheduling.
• Build and manage data transformation workflows in DBT running on Databricks (a minimal orchestration sketch follows this list).
• Optimize data models in Delta Lake for performance, scalability, and cost efficiency.
• Collaborate with analytics, BI, and data science teams to deliver clean, reliable datasets.
• Implement data quality checks (dbt tests, monitoring) and ensure governance standards.
• Manage and monitor Databricks clusters & SQL Warehouses to support workloads.
• Contribute to CI/CD practices for data pipelines (version control, testing, deployments).
• Troubleshoot pipeline failures, performance bottlenecks, and scaling challenges.
• Document workflows, transformations, and data models for knowledge sharing.
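To give a flavour of the orchestration work above, here is a minimal Airflow 2.x-style sketch that runs dbt builds and tests against Databricks via the dbt CLI. The project directory, DAG id, and schedule are illustrative placeholders rather than details of this role.

```python
# Minimal sketch (Airflow 2.x style): orchestrate dbt builds and tests on
# Databricks via the dbt CLI. Paths, ids, and the schedule are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_PROJECT_DIR = "/opt/dbt/analytics"  # hypothetical dbt project location

with DAG(
    dag_id="dbt_databricks_daily",
    start_date=datetime(2025, 1, 1),
    schedule_interval="0 6 * * *",  # daily at 06:00
    catchup=False,
    tags=["dbt", "databricks"],
) as dag:
    # Build the dbt models; with the dbt-databricks adapter configured in
    # profiles.yml these compile to Spark SQL and run on Databricks.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"dbt run --project-dir {DBT_PROJECT_DIR}",
    )

    # Run dbt tests as a data quality gate; a failing test fails the DAG run.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"dbt test --project-dir {DBT_PROJECT_DIR}",
    )

    dbt_run >> dbt_test
```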
Required Skills & Qualifications
• 3-6 years of experience as a Data Engineer (or similar).
• Hands-on expertise with:
• DBT (dbt-core, dbt-databricks adapter, testing & documentation).
• Apache Airflow (DAG design, operators, scheduling, dependencies).
• Databricks (Spark, SQL, Delta Lake, job clusters, SQL Warehouses) - a Delta Lake sketch follows this list.
• Strong SQL skills and understanding of data modeling (Kimball, Data Vault, or similar).
• Proficiency in Python for scripting and pipeline development.
• Experience with CI/CD tools (e.g., GitHub Actions, GitLab CI, Azure DevOps).
• Familiarity with cloud platforms (AWS, Azure, or GCP).
• Strong problem-solving skills and ability to work in cross-functional teams.
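As an illustration of the Delta Lake skills listed above, the sketch below upserts a staging dataset into a Delta table and then compacts it. The table and column names are placeholders, and it assumes a Databricks runtime where OPTIMIZE ... ZORDER BY is available.

```python
# Minimal sketch: idempotent upsert into a Delta table plus file compaction.
# Table and column names are placeholders; assumes a Databricks runtime.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# MERGE keeps the target table consistent when the pipeline is re-run.
spark.sql("""
    MERGE INTO analytics.fact_orders AS t
    USING staging.orders_clean AS s
    ON t.order_id = s.order_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")

# Compact small files and co-locate rows on a common filter column to cut
# scan cost for downstream BI and data science queries.
spark.sql("OPTIMIZE analytics.fact_orders ZORDER BY (order_date)")
```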
All profiles will be reviewed against the required skills and experience. Due to the high number of applications, we will only be able to respond to successful applicants in the first instance. We thank you for your interest and the time taken to apply!