

Few&Far
Artificial Intelligence Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Artificial Intelligence Engineer on a 6-month, Outside IR35 remote contract in the UK, paying a day rate of £680. It requires 5+ years in software engineering, strong Python skills, API development, and experience with LLM applications and client-facing work.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
680
-
🗓️ - Date
November 12, 2025
🕒 - Duration
6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
England, United Kingdom
-
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Data Engineering #Databases #AWS (Amazon Web Services) #Programming #Agile #Consulting #Monitoring #GCP (Google Cloud Platform) #FastAPI #Observability #Deployment #Graph Databases #Azure #Data Pipeline #Cloud #Scala #API (Application Programming Interface) #Knowledge Graph #ArangoDB #AI (Artificial Intelligence) #Python #Docker #Schema Design #Leadership #Neo4J
Role description
🤖 5x AI Solutions Engineers | Outside IR35 | Remote (UK) | 6-Month Contract
Role Summary
The successful AI Solutions Engineer will extend and enhance our AI Operating System,
which leverages LLMs to solve industry-specific challenges across defence, legal, health,
infrastructure and management consulting sectors.
This is a hands-on lead role focused on rapidly prototyping and deploying AI-powered
solutions. Working directly with clients, you will translate their needs into scalable,
production-ready AI applications using modern frameworks and techniques.
Duties & Responsibilities
Technical Development
● Develop platform functionality using Python, building APIs and integrations to extend
capabilities for diverse client needs.
● Design and implement LLM-powered applications and workflows using open source
models such as Llama, Qwen and Gemma, as well as hosted models from providers
such as OpenAI and Google (Gemini).
● Build AI agents with tool/function calling, prompt engineering and appropriate
guardrails using frameworks such as OpenAI AgentSDK, LangGraph or LlamaIndex
(a minimal tool-calling sketch follows this list).
● Implement testing and evaluation frameworks for LLM applications, covering prompt
testing, output quality metrics and agent behaviour validation.
● Apply relevant AI technologies as needed, including retrieval systems (RAG,
GraphRAG), knowledge graphs, vector databases or data pipelines.
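To illustrate the tool/function-calling pattern referenced above, here is a minimal sketch using the OpenAI Python SDK's chat-completions interface. The model name, the lookup_contract_clause tool and its behaviour are hypothetical placeholders, not part of the actual platform, and the frameworks named in the bullets (AgentSDK, LangGraph, LlamaIndex) wrap this same loop in more structured form.

import json
from openai import OpenAI  # assumes the official OpenAI Python SDK (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical tool for illustration; the platform's real tools are not specified here.
def lookup_contract_clause(clause_id: str) -> str:
    return f"Clause {clause_id}: (placeholder clause text)"

tools = [{
    "type": "function",
    "function": {
        "name": "lookup_contract_clause",
        "description": "Fetch the text of a contract clause by its identifier.",
        "parameters": {
            "type": "object",
            "properties": {"clause_id": {"type": "string"}},
            "required": ["clause_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Summarise clause 4.2 of the agreement."}]
response = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)

msg = response.choices[0].message
if msg.tool_calls:  # the model chose to call the tool
    call = msg.tool_calls[0]
    result = lookup_contract_clause(**json.loads(call.function.arguments))
    # Return the tool result so the model can produce a grounded final answer.
    messages += [msg, {"role": "tool", "tool_call_id": call.id, "content": result}]
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(final.choices[0].message.content)

The loop is: let the model request a tool, execute it, return the result, then ask for the final answer; guardrails and evaluation hooks would sit around each of these steps.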
Role Requirements
Work Experience
● At least five years as a software engineer on commercial platforms, with
demonstrable experience building production LLM-powered applications.
● Proven experience with API-level LLM usage, including tool/function calling, prompt
engineering and evaluation.
● Experience with agent frameworks (OpenAI AgentSDK, LangGraph, LlamaIndex
Agents or similar).
● Experience developing APIs using FastAPI or similar frameworks and integrating with
third-party platforms (a minimal endpoint sketch follows this list).
● Direct client-facing experience gathering requirements and delivering technical
implementations.
● Experience within agile development workflows and engineering teams.
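As a sketch of the API work described above, a minimal FastAPI service exposing one LLM-backed endpoint might look like the following. The route name, request/response models and stubbed summariser are illustrative assumptions only; a production handler would call an LLM client with guardrails, timeouts and error handling.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SummariseRequest(BaseModel):
    text: str

class SummariseResponse(BaseModel):
    summary: str

@app.post("/summarise", response_model=SummariseResponse)
def summarise(req: SummariseRequest) -> SummariseResponse:
    # Stubbed so the example runs offline; swap in the real LLM call here.
    return SummariseResponse(summary=req.text[:200])

# Run locally with: uvicorn main:app --reload  (assuming this file is main.py)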
Skills & Abilities
● Strong Python (or similar) programming skills with a focus on production-grade
applications.
● Excellent communication abilities, translating complex technical concepts for diverse
audiences.
● Strong analytical and problem-solving approach, identifying scalable and reusable
solutions.
● Leadership qualities, including technical mentorship, team collaboration and line
management.
● Ability to align solutions with business goals and industry-specific constraints.
● Self-sufficient contributor capable of working independently and seeking support
when needed.
Nice to Have
The following are examples of specialised areas that would be valuable. Deep expertise in
some of these areas is preferred over surface-level knowledge across all domains.
● Open source LLMs (Llama, Qwen, Gemma, GPT OSS) and local deployment
strategies.
● Frameworks and protocols such as Model Context Protocol (MCP) or Agent-to-Agent
(A2A).
● LLM evaluation tooling (OpenAI Evals, LangSmith, custom evaluation harnesses).
● Advanced agent patterns: multi-agent systems, supervision, delegation strategies.
● RAG, GraphRAG and knowledge graph design and implementation (a toy retrieval
sketch follows this list).
● Vector databases and similarity search systems.
● Graph databases (ArangoDB, Neo4j, Neptune) and property graph modelling.
● Data engineering: ETL pipelines, document processing, schema design for AI
applications.
● Cloud platforms (GCP preferred, AWS/Azure also relevant) and containerisation
(Docker).
● Observability and monitoring for LLM applications (tracing, metrics, cost tracking).
● Secure coding practices for regulated industries and sensitive data handling.
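For the retrieval-augmented generation items above, the sketch below shows the core retrieval step in plain NumPy: embed the query and documents, rank by cosine similarity and prepend the best match to the prompt. The embed() function is a pseudo-random stand-in, not a real embedding model; a production system would use a proper embedding model plus one of the vector databases listed above.

import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Stand-in embedding for illustration only (NOT a real model).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

documents = [
    "Llama and Qwen are open-weight LLM families.",
    "GraphRAG augments retrieval with a knowledge graph.",
    "FastAPI is a Python framework for building APIs.",
]
doc_vectors = np.stack([embed(d) for d in documents])

query = "How does GraphRAG differ from plain RAG?"
scores = doc_vectors @ embed(query)            # cosine similarity (vectors are unit-norm)
best = documents[int(np.argmax(scores))]

prompt = f"Context: {best}\n\nQuestion: {query}\nAnswer using only the context."
print(prompt)  # this prompt would then be sent to the chosen LLM

GraphRAG replaces or augments this similarity lookup with traversal of a knowledge graph, which is where the graph databases listed above come in.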