

Oxford Global Resources
Data Engineering Lead
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineering Lead position for an Outside IR35 contract in Paddington, London, lasting until January 2026. Pay rate is £45-48/hour. Requires 5+ years in data engineering (Python, Spark, Azure) and strong project management skills.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
384
-
🗓️ - Date
October 25, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Monitoring #Scala #Security #Compliance #Leadership #Project Management #REST (Representational State Transfer) #Deployment #Big Data #Data Lineage #Observability #Cloud #Automation #Documentation #Data Quality #Metadata #Spark (Apache Spark) #Data Engineering #DevOps #Databricks #ETL (Extract, Transform, Load) #Azure #Data Architecture #Python #DataOps #Data Governance
Role description
• Outside IR35 contract role
• Hybrid in Paddington, London
• Client budget: £45-48/hour
Start date: within 3 weeks
Duration: End of January 2026 plus extensions
Schedule: 2 days/week onsite in Paddington
Senior Data Engineer / Lead
The client is looking for a replacement for someone who is leaving the project (currently serving their notice period; they will provide a handover to the chosen candidate). It is important that the candidate is comfortable with multi-tasking: at times the role can be over 70% project management duties, requiring knowledge of the client's internal processes and the handling of many moving parts, with the focus shifting away from technology implementation. The role involves working with offshore team members, vendors, etc., so strong communication skills are also required. One day per week on-site in Paddington, London is expected, with the rest of the role performed remotely.
Key Responsibilities:
• Hands-on data engineering development using Python, Spark, and Azure.
• Working closely with vendors to pick up tasks.
• Expected to perform technical lead duties, even if not directly managing people.
Required Skills & Experience:
• Approximately five years of experience.
• Hands-on experience in data engineering development (Python, Spark, Azure).
• Experience working within a large organization is highly valued due to the complexity of internal communication and processes.
Overview of what the role entails:
1. Team Leadership & Development
• Establishing and leading a high-performing team of data and DevOps engineers.
• Regular 1:1s, performance reviews, and skill development plans for team members.
• Fostering a collaborative team culture aligned with Client’s values.
2. Data Asset & Product Delivery
• Seeking to deliver high-quality, production-ready data products and pipelines for (industry) use cases.
• Aiming to ensure delivery aligns with business timelines and data quality standards.
• Translating business requirements into scalable, technical solutions.
3. Technical Architecture & Best Practices
• Designing and maintaining scalable data architectures using Azure and Databricks.
• Defining and enforcing best practices around data modelling, ETL development, and cloud-native solutions.
• Staying up to date on emerging tech in big data and analytics.
4. DataOps Implementation
• Setting up CI/CD pipelines for automated deployment and monitoring of data workflows.
• Driving automation and reliability in data engineering processes.
• Implementing observability tools to monitor pipeline health and performance.
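As an illustration of the observability work described above (this sketch is not from the client's specification; the function and thresholds are hypothetical), a basic pipeline health check might flag a run whose row count drifts too far from its recent baseline:

```python
from statistics import mean

def row_count_alert(history, current, tolerance=0.5):
    """Flag a pipeline run whose row count deviates from the recent
    average by more than `tolerance` (expressed as a fraction).
    `history` is a list of row counts from previous successful runs."""
    if not history:
        return False  # no baseline yet, nothing to compare against
    baseline = mean(history)
    deviation = abs(current - baseline) / baseline
    return deviation > tolerance

# A sudden drop to 100 rows against a ~1000-row baseline trips the alert;
# a run near the baseline does not.
print(row_count_alert([980, 1010, 1005], 100))   # True
print(row_count_alert([980, 1010, 1005], 990))   # False
```

In practice a check like this would run as a post-load step in the CI/CD pipeline, with alerts routed to the team's monitoring tooling.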
5. Governance & Compliance
• Defining and implementing standards for data governance, metadata, lineage, and security.
• Aiming to ensure compliance with regulatory requirements and the client's internal policies.
• Seeking to improve data quality, consistency, and traceability across products.
6. Stakeholder Collaboration
• Engaging with Product Owners, D&A leadership, and segment stakeholders to align on priorities.
• Providing regular updates, roadmaps, and insights to drive business value.
• Acting as a bridge between engineering and business teams for seamless delivery.
7. Documentation & Knowledge Sharing
• Maintaining comprehensive documentation for data assets, pipeline processes, and operational practices.
• Contributing to internal wikis, training material, and onboarding guides.
• Leading or participating in internal knowledge-sharing sessions and workshops.
8. Cross-Functional Coordination
• Collaborating with Product, Finance, and Master Data teams for aligned data delivery.
• Contributing to shared domain assets like Customer, Product, Distribution Center data models.
• Coordinating with Malay's team on mappings for consistency across domains.






