Harnham

MLOps Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior MLOps Engineer on a contract of unknown length at a pay rate of £500-600 per day. It requires expertise in MLOps, Databricks, MLflow, Apache Spark, Python, and SQL, with a focus on real-time model serving and CI/CD pipelines. Hybrid working is available, ideally with one day per week or fortnight in the office.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
640
🗓️ - Date
November 13, 2025
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Outside IR35
🔒 - Security
Unknown
📍 - Location detailed
England, United Kingdom
🧠 - Skills detailed
#Deployment #Cloud #Version Control #Apache Spark #ML (Machine Learning) #GIT #Data Engineering #Strategy #MLflow #GCP (Google Cloud Platform) #SQL (Structured Query Language) #Big Data #DevOps #Batch #Python #Datasets #Spark (Apache Spark) #Data Science #Automation #Databricks
Role description
MLOps Engineer | Outside IR35 | £500-600 per day
Ideally 1 day per week/fortnight in the office, with flexibility for remote work for the right candidate.

A market-leading global e-commerce client is urgently seeking a Senior MLOps Lead to establish and drive operational excellence within their largest, most established data function (60+ engineers). This is a mission-critical role focused on scaling their core on-site advertising platform from daily batch processing to real-time capability. The role suits a hands-on MLOps expert capable of implementing new standards, automating deployment lifecycles, and mentoring a large engineering team on best practices.

What you'll be doing:
• MLOps Strategy & Implementation: Design and deploy end-to-end MLOps processes, focusing heavily on governance, reproducibility, and automation.
• Real-Time Pipeline Build: Architect and implement solutions to transition high-volume model serving (10M+ customers, 1.2M+ product variants) to real-time performance.
• MLflow & Databricks Mastery: Lead the optimal integration and use of MLflow for model registry, experiment tracking, and deployment within the Databricks platform (an illustrative sketch follows at the end of this description).
• DevOps for ML: Build and automate robust CI/CD pipelines using Git to ensure stable, reliable, and frequent model releases.
• Performance Engineering: Profile and optimise large-scale Spark/Python codebases for production efficiency, focusing on minimising latency and cost.
• Knowledge Transfer: Act as the technical lead to embed MLOps standards into the core Data Engineering team.

Key Skills
Must Have:
• MLOps: Proven experience designing and implementing end-to-end MLOps processes in a production environment.
• Cloud ML Stack: Expert proficiency with Databricks and MLflow.
• Big Data/Coding: Expert Apache Spark and Python engineering experience on large datasets.
• Core Engineering: Strong experience with Git for version control and building CI/CD and release pipelines.
• Data Fundamentals: Excellent SQL skills.

Nice-to-Have/Desirable Skills:
• DevOps/CI/CD (pipeline experience)
• GCP (familiarity with Google Cloud Platform)
• Data Science (good understanding of maths/model fundamentals for optimisation)
• Familiarity with low-latency data stores (e.g., CosmosDB)

If you have the capability to bring MLOps maturity to a traditional engineering team using the MLflow/Databricks/Spark stack, please email: with your CV and contract details.

Desired Skills and Experience: MLOps, Git, MLflow, Spark, Python, SQL, GCP, DevOps, CI/CD
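As a rough illustration of the MLflow experiment-tracking and model-registry workflow named above, the minimal sketch below logs parameters and a metric for a run and registers the fitted model. The experiment path, registered model name, metric, and toy dataset are assumptions for illustration only, and registration assumes a registry-backed tracking server such as a Databricks workspace.

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical experiment path; on Databricks this is a workspace path.
mlflow.set_experiment("/Shared/ads-ranking-demo")

# Toy data standing in for the real advertising features.
X, y = make_regression(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="baseline-rf"):
    params = {"n_estimators": 100, "max_depth": 8}
    model = RandomForestRegressor(**params, random_state=42).fit(X_train, y_train)

    # Experiment tracking: record hyperparameters and an evaluation metric.
    mlflow.log_params(params)
    mlflow.log_metric("mae", mean_absolute_error(y_test, model.predict(X_test)))

    # Model registry: log the fitted model and register a new version
    # under a hypothetical registered-model name.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="ads_ranking_model",
    )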