
ML Ops Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an ML Ops Engineer on a 6-month contract, hybrid with around one day per week on site in East London, offering £400 per day (Outside IR35). It requires 5+ years of engineering experience, at least 3 years in ML Ops, strong Python and AWS skills, and experience running LLMs in production.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
£400
🗓️ - Date discovered
September 12, 2025
🕒 - Project duration
6 months
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Outside IR35
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
London
-
🧠 - Skills detailed
#Transformers #"ETL (Extract #Transform #Load)" #Data Pipeline #MLflow #Kubernetes #NLP (Natural Language Processing) #PyTorch #Monitoring #SageMaker #Observability #Python #Dataiku #Scala #Jupyter #Pandas #Data Science #ML Ops (Machine Learning Operations) #Terraform #AWS (Amazon Web Services) #Microservices #Cloud #Data Engineering #AI (Artificial Intelligence) #TensorFlow #NumPy #FastAPI #Hugging Face #Agile #SQLAlchemy #Lean #ML (Machine Learning) #Langchain #Automatic Speech Recognition (ASR)
Role description
Job Title: ML Ops / LLM Ops Engineer
Location: Hybrid (potentially 1 day per week in East London)
Contract: 6-month contract
Day Rate: £400 per day (Outside IR35)
About the Role
We are seeking an experienced ML Ops / LLM Ops Engineer to join a high-profile digital transformation initiative. This role focuses on operationalising advanced Machine Learning services including Transformers, Large Language Models (LLMs), Automatic Speech Recognition (ASR), and Text-to-Speech (TTS) solutions.
You will work closely with developers, technical leads, product owners, and QA teams to design, deploy, and support production-grade ML services. This is a fast-moving environment where cutting-edge Generative AI technologies are constantly evolving, so adaptability and technical excellence are essential.
Key Responsibilities
• Design and implement tooling and technologies to support ML models and LLMs in production.
• Deploy, maintain, and optimise machine learning services within a cloud environment (AWS).
• Recommend and implement prompt management tools and provide expertise in prompt engineering.
• Introduce and manage observability, monitoring, and evaluation frameworks for ML and AI services.
• Enable auto-evaluation of prompts and models against domain-specific requirements.
• Build Python-based microservices, data pipelines, and serverless functions (a minimal FastAPI sketch follows this list).
• Collaborate with stakeholders to translate data and AI requirements into scalable solutions.
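As an indication of the kind of Python service work involved, below is a minimal, hedged sketch of a prompt auto-evaluation microservice using FastAPI. The endpoint path, request schema, and keyword-coverage metric are assumptions made for illustration only, not part of the client's actual stack.

```python
# Illustrative sketch only: a toy prompt auto-evaluation microservice.
# FastAPI is named in the posting; the endpoint path, request schema and
# keyword-coverage metric are assumptions made for this example.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="prompt-eval-service")


class EvalRequest(BaseModel):
    prompt: str
    response_text: str            # output from the deployed LLM
    expected_keywords: list[str]  # domain-specific terms the answer should cover


class EvalResult(BaseModel):
    keyword_coverage: float       # fraction of expected keywords present


@app.post("/evaluate", response_model=EvalResult)
def evaluate(req: EvalRequest) -> EvalResult:
    """Score how many domain-specific keywords appear in a model response."""
    if not req.expected_keywords:
        return EvalResult(keyword_coverage=0.0)
    hits = sum(
        1
        for kw in req.expected_keywords
        if kw.lower() in req.response_text.lower()
    )
    return EvalResult(keyword_coverage=hits / len(req.expected_keywords))
```

Run locally with `uvicorn prompt_eval:app --reload` (assuming the file is saved as prompt_eval.py); a production deployment would sit behind the team's observability and monitoring stack.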
Essential Experience & Skills
• 5+ years' engineering experience, with at least 3 years in ML Ops, Data Engineering, or AI infrastructure.
• Strong Python engineering skills (Pandas, NumPy, Jupyter, FastAPI, SQLAlchemy).
• Expertise in AWS services (certification desirable).
• Proven experience deploying and supporting LLMs in production.
• Strong understanding of LLM fine-tuning (PyTorch, TensorFlow, Hugging Face Trainer, etc.) – a short Trainer sketch follows this list.
• Experience with ML tooling (e.g. SageMaker, LangChain/LangSmith, MLflow, Dataiku, DataRobot).
• Knowledge of embeddings, their applications, and limitations.
• Hands-on experience in Agile / Lean / XP environments.
• Excellent communication, problem-solving, and cross-team collaboration skills.
• Proactive interest in Generative AI trends and best practices.
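To illustrate the fine-tuning expectation above, the following is a minimal sketch using the Hugging Face Trainer API. The base model and dataset are placeholder choices so the example runs end to end; real projects would substitute project-specific models and data.

```python
# Illustrative sketch only: supervised fine-tuning with the Hugging Face Trainer.
# The base model (distilbert-base-uncased) and dataset (a 1% slice of IMDB) are
# placeholder choices for this example, not requirements of the role.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A small slice of a public dataset keeps the sketch quick to run end to end.
train_data = load_dataset("imdb", split="train[:1%]")


def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)


train_data = train_data.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="./finetune-out",
    per_device_train_batch_size=8,
    num_train_epochs=1,
    logging_steps=10,
)

Trainer(model=model, args=args, train_dataset=train_data).train()
```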
Desired Skills
• Experience with chatbots and conversational AI (voice or text) – an embedding-based retrieval sketch follows this list.
• Familiarity with Terraform, Helm, Kubernetes, or Postgres.
• Exposure to Data Science, NLP, Explainable AI (XAI).
• Real-world delivery of Generative AI solutions, especially LLM-driven applications.
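To illustrate how embeddings (listed under Essential Experience & Skills) typically feed chatbot and LLM-driven applications, here is a minimal retrieval sketch. The sentence-transformers library and the all-MiniLM-L6-v2 model are assumed choices for this example only.

```python
# Illustrative sketch only: embedding-based retrieval, a common building block
# for chatbots and RAG-style LLM applications. The sentence-transformers library
# and the all-MiniLM-L6-v2 model are assumed choices, not named in the posting.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Reset your password from the account settings page.",
    "Invoices are emailed on the first working day of each month.",
    "Support is available on weekdays between 9am and 5pm.",
]

doc_vectors = model.encode(documents)                        # NumPy array, one row per document
query_vector = model.encode("How do I change my password?")

# Cosine similarity between the query and every document.
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
print(documents[int(np.argmax(scores))])  # expected: the password-reset document
```

A limitation worth knowing, for example, is that off-the-shelf embeddings can miss domain-specific terminology, which is one reason the posting asks for awareness of their limitations as well as their applications.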
Rates depend on experience and client requirements.