

Primus Core
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a contract of unknown length, paying £550–£650 per day, remote (UK/EU). Key skills include Databricks, PySpark, Delta Lake, and CI/CD pipelines. Databricks certifications are preferred.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
650
-
🗓️ - Date
March 23, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#GitHub #AWS (Amazon Web Services) #Data Science #Scala #Data Pipeline #MLflow #Data Engineering #Spark SQL #Databricks #AI (Artificial Intelligence) #Azure #PySpark #Delta Lake #SQL (Structured Query Language) #Azure DevOps #Security #Spark (Apache Spark) #Terraform #Deployment #DevOps #Infrastructure as Code (IaC)
Role description
Senior Data Engineer / Data Platform Engineer (Databricks)
Contract | Outside IR35
Remote (UK/EU)
£550–£650 per day (DOE)
We’re partnering with a forward-thinking Data & AI consultancy that is helping to scale a Databricks Lakehouse platform.
They are looking for a Senior Data Engineer / Data Platform Engineer who can operate across both Data Engineering & Platform Engineering.
This is a hands-on, end-to-end role.
What You’ll Be Doing:
• Designing and building end-to-end data pipelines using Databricks (PySpark / SQL / Delta Lake); a brief illustrative sketch follows this list
• Developing and managing robust, scalable Lakehouse architectures
• Implementing Delta Live Tables (DLT) for reliable, testable, production-grade pipelines
• Optimising pipeline and query performance
• Working with Unity Catalog to implement centralised governance
• Leveraging Databricks Workflows / Jobs for orchestration
• Building CI/CD pipelines for data & platform deployments (Terraform, GitHub Actions, Azure DevOps etc.)
• Enabling real-time and streaming pipelines (Structured Streaming, Auto Loader)
• Contributing to MLOps workflows, integrating models into production pipelines
• Collaborating with Data Scientists, Analysts, and Platform teams to improve developer experience and platform usability
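To give a flavour of the pipeline work described above, here is a minimal sketch of an incremental Auto Loader ingest written to a Unity Catalog Delta table. It assumes a Databricks workspace where `spark` is already available; the catalog, schema, and volume paths (main.sales, /Volumes/main/sales/...) are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch, assuming a Databricks workspace where `spark` is provided.
# All table names and paths below are illustrative placeholders.
from pyspark.sql import functions as F

# Incremental ingest with Auto Loader: pick up new JSON files as they land.
bronze = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/Volumes/main/sales/_schemas/orders")
    .load("/Volumes/main/sales/landing/orders")
)

# Light transformation: type the amount column and stamp the ingest time.
silver = (
    bronze
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("ingested_at", F.current_timestamp())
)

# Write to a governed Delta table (Unity Catalog three-level name) with checkpointing.
(
    silver.writeStream
    .option("checkpointLocation", "/Volumes/main/sales/_checkpoints/orders")
    .trigger(availableNow=True)  # run as an incremental batch, e.g. from a Databricks Workflow
    .toTable("main.sales.orders_silver")
)
```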
Key Skills & Experience:
• Strong experience building pipelines with PySpark / Spark SQL
• Deep understanding of Delta Lake
• Databricks Platform Expertise
• Hands-on experience with: Unity Catalog (governance & security), Spark Declarative Pipelines, Databricks Workflows / Jobs
• Strong understanding of Lakehouse architecture principles
• Performance & Optimisation
• Platform / DevOps Engineering, Infrastructure as Code, CI/CD pipelines
• Strong experience with at least one of Azure or AWS
Very Nice to Have:
• Databricks certifications (Data Engineer Associate / Professional)
• Experience with Feature Store / MLflow (a brief sketch follows this list)
• Exposure to Databricks AI / Mosaic AI / Model Serving
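For the MLOps and MLflow items above, here is a minimal, hedged sketch of logging and registering a model with MLflow so a downstream pipeline or serving endpoint can load it. The toy model, features, and the registered name "orders_forecast" are illustrative assumptions, not details from this posting.

```python
# Minimal sketch, assuming an MLflow tracking server and model registry
# (both built into Databricks); the data and model name are placeholders.
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])  # toy feature matrix
y = np.array([2.0, 4.0, 6.0, 8.0])          # toy target

with mlflow.start_run():
    model = LinearRegression().fit(X, y)
    mlflow.log_param("model_type", "LinearRegression")
    mlflow.log_metric("train_r2", model.score(X, y))
    # Log and register the model so a production pipeline or Model Serving endpoint can load it.
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="orders_forecast")
```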
If you’re interested in building high-performance Databricks platforms at scale, send your CV to cv@primus-connect.com or click apply.





