Stealth iT Consulting

Machine Learning Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Machine Learning Engineer (Conversational AI) with an unspecified contract length, offering £600 per day Inside IR35. It requires strong Python skills, experience with LangChain or LlamaIndex, and familiarity with major cloud AI services. Remote working with occasional on-site visits.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
600
🗓️ - Date
October 24, 2025
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Inside IR35
🔒 - Security
Unknown
📍 - Location detailed
United Kingdom
🧠 - Skills detailed
#Keras #Deep Learning #Langchain #PyTorch #ML (Machine Learning) #API (Application Programming Interface) #Pandas #TensorFlow #AI (Artificial Intelligence) #Azure #Microsoft Azure #Programming #Scala #Databases #Python #Transformers #Libraries #Django #Observability #ETL (Extract, Transform, Load) #Cloud #GIT #FastAPI #REST (Representational State Transfer)
Role description
Role: Machine Learning Engineer
Rate: £600 per day (Inside IR35)
Location: Remote (with occasional on-site visits)
Start Date: ASAP
Clearance: BPSS

As a Machine Learning Engineer (Conversational AI) you will:
• Design and build sophisticated, agentic AI workflows using frameworks like LangChain or LlamaIndex to handle complex, multi-step user queries.
• Fine-tune LLMs to improve accuracy, reduce latency, and optimise infrastructure costs.
• Build LLM evals so the application stays stable, reliable, and resilient to code and model changes (a minimal eval sketch follows this description).
• Develop, review, and maintain the core application logic in Python 3 and Git, ensuring the service is robust, scalable, and maintainable.
• Integrate a wide range of services, including third-party APIs and foundation models from hyperscalers such as Google (Vertex AI), Amazon (Bedrock), and Microsoft (Azure AI).
• Build secure and performant RESTful APIs using Python frameworks like FastAPI or Django REST Framework to connect the AI service with back-end government systems (see the illustrative retrieval-plus-API sketch after this description).
• Work with vector databases and retrieval mechanisms to provide the AI agent with accurate, up-to-date information.
• Collaborate in a multi-disciplinary team to continuously improve the agent's performance, reasoning capabilities, and reliability.

Who you are:
We're looking for people with a passion for public service and Generative AI skills to make a difference. You will have:
• Proven experience building and deploying machine learning models in a production environment.
• Strong programming skills and deep expertise in Python.
• Hands-on experience building with agentic or RAG (Retrieval-Augmented Generation) frameworks like LangChain or LlamaIndex.
• Familiarity with tools for working with Large Language Models via API or locally (e.g. Hugging Face Transformers).
• Practical experience using managed AI services and foundation models from a major cloud provider (e.g. Amazon Bedrock, Google Vertex AI, Azure AI Services).
• Experience with a major conversational AI platform (Google Dialogflow, Amazon Lex, Rasa, or similar).
• A solid understanding of core Python ML libraries (Keras, scikit-learn, Pandas) and deep learning frameworks (TensorFlow, PyTorch).
• The ability to explain complex technical concepts to both technical and non-technical audiences.
• A humble attitude and eagerness to help and mentor others with empathy.
• The ability to navigate ambiguity and prioritise effectively in dynamic environments.
• Experience collaborating with design and user research disciplines to deliver valuable product outcomes.

Desirable (but not essential) experience:
• Working with tools/interfaces for AI applications, e.g. the MCP protocol.
• Training traditional ML and DL models using tools like Axolotl, LoRA, or QLoRA.
• Experience with multi-agent orchestration frameworks (LangGraph, AutoGen, CrewAI).
• Experience with observability and evaluation tools for LLMs such as TruLens or Helicone.
• Experience with AI safety and reliability frameworks like Guardrails AI.
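Illustrative sketch (not part of the posting): a minimal example of the retrieval-augmented, REST-exposed pattern the responsibilities describe, assuming FastAPI and an in-memory retriever. The embed(), retrieve(), and llm_answer() helpers are hypothetical stand-ins for a real embedding model, vector database, and hosted foundation model (e.g. Bedrock, Vertex AI, or Azure AI).

```python
# Minimal RAG-over-REST sketch: FastAPI endpoint + naive in-memory retrieval.
# embed(), retrieve(), and llm_answer() are hypothetical placeholders for the
# real embedding model, vector database, and foundation-model call.
import math
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Toy "knowledge base": in production this would live in a vector database.
DOCUMENTS = [
    "BPSS is the Baseline Personnel Security Standard used for UK government work.",
    "Inside IR35 contracts are taxed as employment income via the fee payer.",
]

def embed(text: str) -> dict[str, float]:
    """Hypothetical embedding: a bag-of-words vector instead of a real model."""
    vec: dict[str, float] = {}
    for token in text.lower().split():
        vec[token] = vec.get(token, 0.0) + 1.0
    return vec

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
    return ranked[:k]

def llm_answer(query: str, context: list[str]) -> str:
    """Stub for the foundation-model call that would generate the reply."""
    return f"Answering '{query}' using context: {context[0]}"

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(req: ChatRequest) -> dict[str, str]:
    context = retrieve(req.message)
    return {"reply": llm_answer(req.message, context)}
```

Run with an ASGI server such as `uvicorn app:app` and POST a JSON body like {"message": "What clearance is needed?"} to /chat.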
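Also illustrative: a tiny sketch of the kind of LLM eval the responsibilities mention, scoring answers against expected key facts so regressions surface when code or the underlying model changes. The call_model() function and the example cases are hypothetical placeholders for the deployed conversational service and its real eval set.

```python
# Minimal eval-harness sketch: check that each answer contains its expected facts.
# call_model() is a hypothetical wrapper around the model/API being evaluated.

EVAL_CASES = [
    {"question": "What clearance does the role need?", "must_contain": ["bpss"]},
    {"question": "Is the contract inside or outside IR35?", "must_contain": ["inside"]},
]

def call_model(question: str) -> str:
    """Hypothetical stand-in for the real model call; returns canned answers here."""
    canned = {
        "What clearance does the role need?": "The role requires BPSS clearance.",
        "Is the contract inside or outside IR35?": "It is inside IR35.",
    }
    return canned.get(question, "")

def run_evals() -> float:
    """Return the fraction of cases whose answer contains every expected fact."""
    passed = 0
    for case in EVAL_CASES:
        answer = call_model(case["question"]).lower()
        if all(fact in answer for fact in case["must_contain"]):
            passed += 1
    return passed / len(EVAL_CASES)

if __name__ == "__main__":
    score = run_evals()
    print(f"eval pass rate: {score:.0%}")
    assert score >= 0.9, "eval pass rate dropped below threshold"
```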