

Stott and May
MLOps Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an MLOps Engineer in London, UK (Hybrid), with a 6-month contract at market rate (Inside IR35). Key skills include Python, ML libraries, CI/CD pipelines, and cloud platforms. Experience deploying ML models in production is desirable.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 6, 2026
🕒 - Duration
6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
London, England, United Kingdom
-
🧠 - Skills detailed
#Logging #Docker #Model Deployment #Spark (Apache Spark) #Python #Deployment #Scala #Data Pipeline #Cloud #AWS (Amazon Web Services) #ML (Machine Learning) #Security #PySpark #AI (Artificial Intelligence) #Terraform #Libraries #Data Ingestion #GitHub #Data Science #Compliance #Airflow #Grafana #Programming #PyTorch #Azure #Automation #SageMaker #Snowpark #Monitoring
Role description
Job Description
MLOps Engineer
Location: London, UK (Hybrid – 2 days per week in office)
Day Rate: Market rate (Inside IR35)
Duration: 6 months
Role Overview
As an MLOps Engineer, you will support machine learning products from inception, working across the full data ecosystem. This includes developing application-specific data pipelines, building CI/CD pipelines that automate ML model training and deployment, publishing model results for downstream consumption, and building APIs to serve model outputs on-demand.
The role requires close collaboration with data scientists and other stakeholders to ensure ML models are production-ready, performant, secure, and compliant.
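Example (illustrative only): a minimal sketch of the kind of on-demand model-serving API described above. FastAPI, joblib, the artifact path, and the scikit-learn-style predict() call are assumptions made for the sketch, not tools mandated by the role.

```python
# Minimal sketch of an on-demand prediction API.
# FastAPI, joblib, and the artifact path are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
model = joblib.load("artifacts/model.joblib")  # hypothetical model artifact

class Features(BaseModel):
    values: list[float]  # a single feature vector

@app.post("/predict")
def predict(features: Features) -> dict:
    # Serve one model output on demand; assumes a scikit-learn-style predict().
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}
```

Run locally with `uvicorn app:app` (assuming the file is saved as app.py) and POST a JSON body such as {"values": [0.1, 0.2, 0.3]} to /predict.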
Key Responsibilities
• Design, implement, and maintain scalable ML model deployment pipelines (CI/CD for ML)
• Build infrastructure to monitor model performance, data drift, and other key metrics in production (a minimal drift-check sketch follows this list)
• Develop and maintain tools for model versioning, reproducibility, and experiment tracking
• Optimize model serving infrastructure for latency, scalability, and cost
• Automate the end-to-end ML workflow, from data ingestion to model training, testing, deployment, and monitoring
• Collaborate with data scientists to ensure models are production-ready
• Implement security, compliance, and governance practices for ML systems
• Support troubleshooting and incident response for deployed ML systems
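Example (illustrative only): a minimal per-feature data-drift check of the kind the monitoring responsibility describes, assuming SciPy is available. The Kolmogorov-Smirnov test and the 0.2 threshold are illustrative choices, not requirements from the listing.

```python
# Illustrative data-drift check comparing live data against a reference sample.
# The test choice and threshold are assumptions; in production the score would
# be emitted to the monitoring stack rather than returned as a boolean.
import numpy as np
from scipy import stats

def drift_score(reference: np.ndarray, live: np.ndarray) -> float:
    """Two-sample Kolmogorov-Smirnov statistic between reference and live data."""
    statistic, _p_value = stats.ks_2samp(reference, live)
    return float(statistic)

def feature_has_drifted(reference: np.ndarray, live: np.ndarray,
                        threshold: float = 0.2) -> bool:
    # Flag drift when the KS statistic exceeds the (hypothetical) threshold.
    return drift_score(reference, live) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, size=5_000)   # stand-in for training data
    live = rng.normal(0.5, 1.0, size=5_000)        # shifted live distribution
    print(feature_has_drifted(reference, live))    # True: mean shift detected
```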
Required Skills And Experience
• Strong programming skills in Python; experience with ML and data-processing libraries such as Snowpark, PySpark, or PyTorch
• Experience with containerization tools like Docker and orchestration tools like Airflow or Astronomer (see the example DAG sketch after this list)
• Familiarity with cloud platforms (AWS, Azure) and ML services (e.g., SageMaker, Vertex AI)
• Experience with CI/CD pipelines and automation tools such as GitHub Actions
• Understanding of monitoring and logging tools (e.g., New Relic, Grafana)
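Example (illustrative only): a minimal Airflow DAG chaining retraining and deployment, of the kind the orchestration requirement above refers to. Airflow 2.4+ is assumed, and the DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Minimal sketch of an Airflow DAG that retrains and then deploys a model daily.
# Assumes Airflow 2.4+; DAG id, schedule, and task logic are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def train_model() -> None:
    ...  # hypothetical: pull features, fit the model, write an artifact

def deploy_model() -> None:
    ...  # hypothetical: promote the artifact to the serving environment

with DAG(
    dag_id="ml_retrain_and_deploy",   # hypothetical DAG id
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    train = PythonOperator(task_id="train", python_callable=train_model)
    deploy = PythonOperator(task_id="deploy", python_callable=deploy_model)
    train >> deploy                   # deploy only after training succeeds
```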
Desirable Skills And Experience
• Prior experience deploying ML models in production environments
• Knowledge of infrastructure-as-code tools like Terraform or CloudFormation
• Familiarity with model interpretability and responsible AI practices
• Experience with feature stores and model registries
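Example (illustrative only): a minimal experiment-tracking and model-registration sketch using MLflow with a toy scikit-learn model. MLflow, scikit-learn, the tracking URI, and the registry name are assumptions; the listing does not name a specific tracking or registry tool.

```python
# Illustrative MLflow 2.x sketch: track a run, log the model, register it.
# The tracking URI and registered model name are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)  # toy data for the sketch
model = LogisticRegression(max_iter=200).fit(X, y)

mlflow.set_tracking_uri("http://localhost:5000")  # hypothetical tracking server
with mlflow.start_run() as run:
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")       # log the fitted model artifact

# Register the logged model under a hypothetical registry name.
mlflow.register_model(f"runs:/{run.info.run_id}/model", "example-classifier")
```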