

Russell Tobin
Machine Learning Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Machine Learning Engineer in London (Hybrid) on a contract outside IR35, paying £500–£550 per day. Key skills include Python, LLM deployment, and experience with LangChain. A Master's in a technical field is preferred.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
550
🗓️ - Date
May 8, 2026
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Outside IR35
🔒 - Security
Unknown
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Forecasting #Observability #AWS (Amazon Web Services) #Classification #ETL (Extract, Transform, Load) #Cloud #ML (Machine Learning) #GCP (Google Cloud Platform) #Deployment #Computer Science #SQL (Structured Query Language) #Scala #Azure #Data Manipulation #Mathematics #FastAPI #Datasets #Python #AI (Artificial Intelligence) #SaaS (Software as a Service) #Data Science #Langchain
Role description
Data Scientist / Machine Learning Scientist
Location: London (Hybrid)
Contract: Outside IR35
Rate: £500–£550 per day (depending on interview outcome)
We’re looking for AI operators who ship, not just experiment.
This is an opportunity to join a major AI build focused on deploying real-world LLM and agentic systems at scale across both AI products and enterprise transformation initiatives.
You’ll be working in a production-first environment where the emphasis is on building reliable, scalable AI systems that deliver measurable business impact.
What You’ll Be Working On
• Designing and building AI agents and agentic workflows powered by LLMs
• Developing systems using RAG, reasoning, planning, memory, and tool orchestration
• Building multi-step intelligent systems capable of real-world tool usage
• Working with MCP-style architectures (or equivalent) to structure context and improve interoperability
• Contributing to recommendation, classification, and forecasting systems using large-scale datasets
• Automating business workflows and decision-making processes through AI-driven solutions
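To make the "agentic workflows with tool usage" responsibility concrete, here is a minimal, self-contained sketch of a tool-calling agent loop. Everything in it is illustrative: the planner is a hard-coded stand-in for an LLM, and the tool names and return strings are hypothetical, not part of this role's actual stack.

```python
# Minimal sketch of an agentic loop with tool usage (illustrative only;
# fake_llm_plan stands in for an LLM planner, and TOOLS are toy stubs).

def fake_llm_plan(task: str) -> list[str]:
    """Stand-in for an LLM planner: maps a task to an ordered tool sequence."""
    if "weather" in task.lower():
        return ["lookup_weather", "summarise"]
    return ["summarise"]

# Each tool takes the running context string and returns an updated one.
TOOLS = {
    "lookup_weather": lambda ctx: ctx + " | weather: 14C, light rain",
    "summarise": lambda ctx: "Summary: " + ctx[:60],
}

def run_agent(task: str) -> str:
    """Execute each planned tool in order, threading context between steps."""
    context = task
    for tool_name in fake_llm_plan(task):
        context = TOOLS[tool_name](context)
    return context

print(run_agent("What's the weather in London?"))
```

In a production system, frameworks such as LangChain or LangGraph replace the hand-rolled planner and tool registry, but the same plan-act-observe loop underlies them.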
What You’ll Be Doing
• Owning projects end-to-end from concept through to production deployment and iteration
• Building and deploying AI agents that operate reliably in production environments
• Integrating AI systems into APIs, products, and operational workflows
• Collaborating closely with engineering teams to ensure scalability, observability, and maintainability
• Making pragmatic decisions balancing model performance, latency, and cost efficiency
Core Requirements
• Strong Python skills with experience writing production-grade code
• Proven experience deploying LLM-powered systems into production environments
• Hands-on experience with LangChain, LangGraph, or equivalent orchestration frameworks
• Experience building AI agents and agentic workflows with tool usage and multi-step reasoning
• Strong understanding and implementation experience of RAG systems
• Familiarity with MCP/FastMCP/FastAPI or similar orchestration patterns
• Strong understanding of LLM trade-offs including hallucination mitigation, latency, and cost optimisation
• Experience deploying AI systems in cloud environments such as AWS, GCP, or Azure
• Working knowledge of SQL and data manipulation (expected, but not a primary focus for this role)
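Since the requirements emphasise hands-on RAG experience, the following toy sketch shows the retrieval-then-prompt-assembly shape of a RAG pipeline. It is deliberately simplified: real systems rank with embeddings and a vector store rather than word overlap, and the documents here are made up for illustration.

```python
# Toy sketch of the retrieval step in a RAG pipeline (illustrative only;
# real systems use embeddings and a vector store, not word overlap).

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the retrieved context and the question into one prompt."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Contract rates in London vary by sector.",
    "RAG grounds model answers in retrieved documents.",
    "Agents can call external tools during reasoning.",
]
print(build_prompt("How does RAG ground answers?", docs))
```

Grounding the model's answer in retrieved context is also the main lever for the hallucination mitigation mentioned above: the prompt constrains the model to documents it was actually given.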
Strong signals include:
• Experience working on SaaS or B2B AI products or delivering AI-driven transformation within an organisation.
• A background in high-growth or scaling environments, where speed and pragmatism are critical.
• Clear evidence of systems that are actively used and delivering value, rather than experimental work.
Ideal Background
• Master's degree or higher in Computer Science, Mathematics, Engineering, or a related technical field
• Experience building and iterating on AI systems delivering measurable value
• Strong ownership mindset and ability to operate in fast-moving environments
• Product-focused approach with a bias toward delivering impact
Why This Role
• Work on live AI systems used at scale
• Join a well-supported AI engineering environment
• High ownership and visibility across products and operations
• Opportunity to shape enterprise AI adoption in a meaningful way