

Robson Bale
Senior Data/AI Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data/AI Engineer on a contract basis in London, running until February 2027, with a pay rate of £520-550 per day. Key skills include expert Python proficiency, AWS services experience, and LLM application development.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
550
-
🗓️ - Date
March 24, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #Apache Airflow #Docker #Storage #S3 (Amazon Simple Storage Service) #Data Ingestion #Data Manipulation #Python #ML (Machine Learning) #Databases #Scala #Cloud #Data Pipeline #ETL (Extract, Transform, Load) #TypeScript #AWS (Amazon Web Services) #Data Processing #Deployment #SageMaker #Data Engineering #Langchain #Airflow #OpenSearch
Role description
Senior Data/AI Engineer - Contract - London - 3 days per week onsite - £520-550pd via Umbrella
Contract until February 2027
Key Responsibilities
Data Engineering for AI: Build robust Python pipelines to ingest, clean, and chunk structured and unstructured data for vector embedding and storage.
AWS Infrastructure: Leverage AWS services (such as Bedrock, SageMaker, or OpenSearch) to deploy and manage LLM integrations and vector databases.
Containerization: Develop and deploy AI services using Docker, ensuring scalable and reproducible environments.
AI/LLM Application Development: Design and implement Retrieval-Augmented Generation (RAG) architectures to enable intelligent document search and question-answering capabilities.
Collaboration: Work closely with Principal Data Engineers and Frontend Developers to ensure seamless integration of AI features into the core application.
Optimization: Monitor and optimize the performance, latency, and cost of LLM queries and data processing workflows.
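As an illustrative sketch of the "ingest, clean, and chunk" step described in the responsibilities above, the snippet below splits raw text into overlapping windows ready for embedding. The chunk size and overlap values are assumptions for demonstration, not figures from this role.

```python
# Minimal sketch of fixed-size chunking with overlap, the kind of
# preprocessing a RAG ingestion pipeline performs before vector embedding.
# size/overlap values are illustrative assumptions.
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    if size <= overlap:
        raise ValueError("chunk size must exceed overlap")
    step = size - overlap
    chunks = []
    for start in range(0, len(text), step):
        piece = text[start:start + size]
        if piece:
            chunks.append(piece)
        if start + size >= len(text):
            break
    return chunks
```

In practice the overlap preserves context across chunk boundaries, so a sentence cut at the end of one chunk is still fully visible at the start of the next.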
Required Skills
Python: Expert-level proficiency in Python, specifically for data manipulation and back-end development.
Generative AI & LLMs: Hands-on experience building applications using LLMs (e.g., GPT, Llama, Anthropic's Claude) and orchestration frameworks (e.g., LangChain, LlamaIndex).
AWS Ecosystem: Strong experience with AWS cloud services, particularly those related to AI/ML and data (Bedrock, SageMaker, Glue, S3).
Vector Databases: Experience implementing vector stores (e.g., AWS OpenSearch, Pinecone, Weaviate, or pgvector) for semantic search.
Docker: Proficiency in containerizing applications for deployment.
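To illustrate the semantic-search lookup that the vector-store requirement above refers to, here is a toy sketch: rank stored embeddings by cosine similarity to a query vector. The vectors and ids below are assumptions for demonstration, not real embeddings or any specific store's API.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query: list[float], store: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the ids of the k stored vectors most similar to the query."""
    ranked = sorted(store, key=lambda doc_id: cosine(query, store[doc_id]), reverse=True)
    return ranked[:k]
```

Production vector stores (OpenSearch, Pinecone, pgvector) use approximate nearest-neighbour indexes rather than this exhaustive scan, but the ranking principle is the same.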
Nice-to-Have Skills
Data Pipelines: Experience building ETL/ELT pipelines to prepare structured and unstructured data for machine learning tasks.
Domain Knowledge: Familiarity integrating complex financial data sources, specifically regarding Discounting data (e.g., BroCalc), DSO, Energy & Commodities data (FACTS), and Cash/Credit product data.
Frontend Integration: Basic understanding of how AI APIs integrate with modern UIs (Next.js/TypeScript) to support the front-end team.
Orchestration: Experience with Apache Airflow for managing data ingestion workflows.





