

Databricks Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Engineer on a 6-month contract, paying £450-£550 per day. Key skills include advanced Databricks, Delta Lake, PySpark, and Azure Data Lake experience. Financial services industry experience is preferred.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
550
🗓️ - Date discovered
August 5, 2025
🕒 - Project duration
6 months
🏝️ - Location type
Unknown
📄 - Contract type
Outside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
London, England, United Kingdom
🧠 - Skills detailed
#Agile #SQL (Structured Query Language) #Deployment #Delta Lake #Model Deployment #Spark (Apache Spark) #Data Science #Databricks #MLflow #PySpark #Cloud #Data Lake #Data Engineering #Data Pipeline #ETL (Extract, Transform, Load) #ML (Machine Learning) #Scala #Azure #SSIS (SQL Server Integration Services) #Python #Automation
Role description
DATABRICKS ENGINEER
6-MONTH CONTRACT
£450-£550 PER DAY (OUTSIDE IR35)
This role is a great opportunity for a skilled Databricks Engineer to join a data-driven financial services firm undergoing a major transformation of its data infrastructure. You'll play a key part in modernising its data platform with Databricks and Azure, while supporting scalable data pipelines and machine learning workflows.
THE COMPANY
This business is investing heavily in data as it digitises operations across its risk, product, and customer intelligence teams. It is building a modern Lakehouse architecture using Databricks, Delta Lake, and Azure Data Lake. You'll be part of a collaborative, forward-thinking team with a focus on best practices, automation, and end-to-end data enablement.
THE ROLE
You'll work closely with Data Engineers, Data Scientists, and Architects to design, build, and optimise core data pipelines on Databricks. Your focus will be on enabling analytics and machine learning at scale, using best-in-class tools across the Azure stack.
Your responsibilities will include:
• Designing and deploying ETL pipelines using PySpark and Delta Lake on Databricks (see the sketch after this list).
• Supporting the deployment and operationalisation of ML models with MLflow and Databricks Workflows.
• Building out reusable data products and feature stores for data science teams.
• Tuning performance across clusters, jobs, and workflows.
• Migrating legacy systems (SSIS/SQL) to Databricks and cloud-native tools.
• Collaborating with data scientists, analysts, and product teams to improve data usability and performance.
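To give a flavour of that first responsibility, here is a minimal PySpark and Delta Lake sketch of an idempotent ETL load. It is illustrative only: the paths and column names (raw_path, delta_path, trade_id, notional) are assumptions, not details taken from this role.

# A minimal sketch, not the firm's actual pipeline: paths and column
# names below are assumptions.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

# On Databricks a SparkSession already exists; this line matters only
# when running the sketch elsewhere.
spark = SparkSession.builder.appName("trades-etl").getOrCreate()

raw_path = "/mnt/landing/trades/"    # hypothetical landing zone in Azure Data Lake
delta_path = "/mnt/curated/trades"   # hypothetical curated Delta location

# Extract: read the latest raw drop
raw = spark.read.format("json").load(raw_path)

# Transform: deduplicate, type, and filter
clean = (
    raw.dropDuplicates(["trade_id"])
       .withColumn("trade_date", F.to_date("trade_date"))
       .filter(F.col("notional") > 0)
)

# Load: upsert into the curated Delta table so reruns don't duplicate rows
if DeltaTable.isDeltaTable(spark, delta_path):
    (DeltaTable.forPath(spark, delta_path).alias("t")
        .merge(clean.alias("s"), "t.trade_id = s.trade_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())
else:
    clean.write.format("delta").save(delta_path)

The merge-based load keeps the job idempotent: rerunning the same batch updates existing rows rather than appending duplicates, which is what you want when the pipeline is triggered on a schedule by Databricks Workflows.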
KEY SKILLS AND REQUIREMENTS
• Advanced experience with Databricks, Delta Lake, and PySpark.
• Strong background in data engineering and distributed processing.
• Hands-on knowledge of Azure Data Lake and of orchestration tools such as Azure Data Factory.
• Experience with ML model deployment, preferably using MLflow or similar tools (a minimal MLflow sketch follows this list).
• Proficient in SQL, Python, and cloud-based data pipelines.
• Comfortable in fast-paced, agile delivery environments.
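As a rough illustration of the MLflow point above, the sketch below logs a trained model so it can later be registered and served. The model, data, and run name are placeholders rather than anything specific to this role.

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=42)  # stand-in data

with mlflow.start_run(run_name="risk-model-demo"):  # hypothetical run name
    model = LogisticRegression(max_iter=1000).fit(X, y)
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Log the model artifact so it can be registered and deployed later
    mlflow.sklearn.log_model(model, "model")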
DESIRABLE SKILLS
• Experience building or integrating feature stores.
• Familiarity with Unity Catalog, Databricks SQL, and cost optimisation.
• Exposure to MLOps practices and production-grade model lifecycle management.
• Prior experience in financial services or other regulated sectors.
HOW TO APPLY
Please register your interest by sending your CV via the apply link on this page. For more information, feel free to reach out directly.