

Stealth iT Consulting
Machine Learning Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Machine Learning Engineer contract starting ASAP (duration unspecified) at a day rate of £530 Inside IR35, fully remote. Key skills required include Python, LangChain or LlamaIndex, and experience with large language models and cloud AI services. BPSS clearance is needed.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
530
-
🗓️ - Date
November 11, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Inside IR35
-
🔒 - Security
BPSS
-
📍 - Location detailed
United Kingdom
-
🧠 - Skills detailed
#ML (Machine Learning) #ETL (Extract, Transform, Load) #Deep Learning #Cloud #Databases #Python #PyTorch #Git #API (Application Programming Interface) #Programming #Pandas #Scala #Libraries #Azure #Django #REST (Representational State Transfer) #FastAPI #AI (Artificial Intelligence) #LangChain #Transformers #Microsoft Azure #TensorFlow #Keras #Observability
Role description
Role: Machine Learning Engineer
Rate: £530 Inside IR35
Location: Remote
Start Date: ASAP
Clearance: BPSS
As a Machine Learning Engineer (Conversational AI) you will:
• Design and build sophisticated, agentic AI workflows using frameworks like LangChain or LlamaIndex to handle complex, multi-step user queries.
• Fine-tune LLMs to improve accuracy, reduce latency, and optimize infrastructure costs.
• Build LLM evaluations so the application stays stable, reliable, and resilient to code and model changes.
• Develop, review, and maintain the core application logic in Python 3, using Git for version control, ensuring the service is robust, scalable, and maintainable.
• Integrate a wide range of services, including third-party APIs and foundation models from hyperscalers like Google (Vertex AI), Amazon (Bedrock), and Microsoft (Azure AI).
• Build secure and performant RESTful APIs using Python frameworks like FastAPI or Django REST Framework to connect the AI service with back-end government systems (a minimal illustrative sketch follows this list).
• Work with vector databases and retrieval mechanisms to provide the AI agent with accurate, up-to-date information.
• Collaborate in a multi-disciplinary team to continuously improve the agent's performance, reasoning capabilities, and reliability.
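To make the shape of this work concrete, here is a minimal sketch of a retrieval-augmented FastAPI endpoint of the kind described above. It is illustrative only and not taken from the role: the in-memory DOCUMENTS list, the embed() placeholder, and the call_llm() stub are invented stand-ins for a real vector database, embedding model, and hosted foundation model (e.g. Bedrock, Vertex AI, or Azure AI).

# Minimal sketch: retrieval-augmented /chat endpoint (illustrative only).
# embed() and call_llm() are hypothetical placeholders, not real services.
from fastapi import FastAPI
from pydantic import BaseModel
import numpy as np

app = FastAPI()

# Toy in-memory "vector store"; production code would use a managed vector database.
DOCUMENTS = [
    "Passports can be renewed online or by post.",
    "Driving licence renewals require a recent photo.",
]

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: pseudo-random vector derived from the text
    # (stable within one process); a real system would call an embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
    return rng.random(8)

DOC_VECTORS = np.stack([embed(d) for d in DOCUMENTS])

def retrieve(question: str, k: int = 1) -> list[str]:
    # Cosine-similarity retrieval over the toy store.
    q = embed(question)
    sims = DOC_VECTORS @ q / (np.linalg.norm(DOC_VECTORS, axis=1) * np.linalg.norm(q))
    return [DOCUMENTS[i] for i in np.argsort(sims)[::-1][:k]]

def call_llm(prompt: str) -> str:
    # Placeholder for a foundation-model call (Bedrock, Vertex AI, Azure AI, ...).
    return f"[model answer based on: {prompt[:60]}...]"

class Query(BaseModel):
    question: str

@app.post("/chat")
def chat(query: Query) -> dict:
    # Retrieve supporting context, then ask the (stubbed) model to answer with it.
    context = "\n".join(retrieve(query.question))
    answer = call_llm(f"Context:\n{context}\n\nQuestion: {query.question}")
    return {"answer": answer, "context": context}

Served with uvicorn, a POST to /chat with {"question": "..."} returns the stubbed answer plus the retrieved context; in the real service the retrieval step would query a vector database and the prompt would go to a production LLM behind appropriate guardrails.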
Who you are:
We're looking for people with a passion for public service and strong Generative AI skills who want to make a difference. You will have:
• Proven experience building and deploying machine learning models in a production environment.
• Strong programming skills and deep expertise in Python.
• Hands-on experience building with agentic or RAG (Retrieval-Augmented Generation) frameworks like LangChain or LlamaIndex.
• Familiarity with tools for working with Large Language Models via API or locally (e.g. Hugging Face Transformers).
• Practical experience using managed AI services and foundation models from a major cloud provider (e.g. Amazon Bedrock, Google Vertex AI, Azure AI Services).
• Experience with a major conversational AI platform (Google Dialogflow, Amazon Lex, Rasa, or similar).
• A solid understanding of core Python ML libraries (Keras, scikit-learn, Pandas) and deep learning frameworks (TensorFlow, PyTorch).
• Ability to explain complex technical concepts to both technical and non-technical audiences.
• A humble attitude and eagerness to help and mentor others with empathy.
• Ability to navigate ambiguity and prioritise effectively in dynamic environments.
• Experience collaborating with design and user research disciplines to deliver valuable product outcomes.
Desirable (but not essential) experience:
• Working with tools/interfaces for AI applications, e.g. the Model Context Protocol (MCP).
• Training and fine-tuning ML and DL models using tools and techniques like Axolotl, LoRA, or QLoRA.
• Experience with multi-agent orchestration frameworks (LangGraph, AutoGen, CrewAI).
• Experience with observability and evaluation tools for LLMs such as TruLens or Helicone (a simple evaluation sketch follows this list).
• Experience with AI safety and reliability frameworks like Guardrails AI.
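As a rough illustration of the lightweight LLM evals mentioned above, here is a tiny, self-contained harness; answer_question() is a hypothetical stand-in for the deployed agent, and both test cases are invented examples, not taken from the role.

# Minimal eval sketch (illustrative only): assert that answers contain expected content.
CASES = [
    # (question, substring the answer is expected to contain)
    ("How do I renew my passport?", "online"),
    ("What do I need to renew a driving licence?", "photo"),
]

def answer_question(question: str) -> str:
    # Hypothetical stand-in for the real agent/LLM pipeline.
    return "You can renew online or by post; a recent photo may be required."

def run_evals() -> None:
    failures = []
    for question, expected in CASES:
        answer = answer_question(question)
        if expected.lower() not in answer.lower():
            failures.append((question, expected, answer))
    print(f"{len(CASES) - len(failures)}/{len(CASES)} cases passed")
    for question, expected, answer in failures:
        print(f"FAIL: {question!r} -> expected {expected!r}, got {answer!r}")

if __name__ == "__main__":
    run_evals()

Run on every code or model change, a check like this catches regressions early; real projects typically grade with richer criteria (semantic similarity, LLM-as-judge) via tools such as TruLens.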






