

Harnham
Platform Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Platform Data Engineer on a 6-month contract, paying £500-£600 per day, outside IR35. Key skills include Python, cloud environments (AWS/GCP), and modern data stack tools. Experience in building scalable data platforms is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
600
🗓️ - Date
October 24, 2025
🕒 - Duration
6 months
🏝️ - Location
Unknown
📄 - Contract
Outside IR35
🔒 - Security
Unknown
📍 - Location detailed
London, England, United Kingdom
🧠 - Skills detailed
#Data Engineering #Data Catalog #Automation #ML (Machine Learning) #dbt (data build tool) #IAM (Identity and Access Management) #Data Governance #AWS (Amazon Web Services) #Infrastructure as Code (IaC) #Kubernetes #Terraform #GCP (Google Cloud Platform) #Delta Lake #Data Science #Databricks #Kafka (Apache Kafka) #Scala #Jenkins #Python #Data Quality #Data Lake #S3 (Amazon Simple Storage Service) #Observability #Monitoring #RDS (Amazon Relational Database Service) #GitHub #Cloud #Docker #Deployment #Airflow #Data Pipeline
Role description
Senior Platform Engineer
£500-£600 per day
Outside IR35
6 months
Join one of the UK's leading online retailers as they evolve their next-generation data platform. This is an opportunity to shape the backbone of a modern data ecosystem, empowering analysts, ML engineers, and data scientists to deliver smarter, faster insights at scale.
The Role
You'll play a key role in designing and engineering platform services that treat data as a core product. This means building scalable, secure, and observable systems that help teams confidently leverage data across the business.
You'll work closely with a wide range of technical and non-technical partners to deliver resilient infrastructure, champion data governance, and mentor others in engineering excellence.
In this role, you will:
• Shape the data platform roadmap: Introduce modern observability, quality, and governance frameworks that elevate how teams access and trust data.
• Build and scale infrastructure: Develop services, APIs, and data pipelines using modern cloud tooling and automation-first principles (a minimal sketch of such a pipeline follows this list).
• Drive engineering best practices: Implement CI/CD pipelines, testing frameworks, and container-based deployments to ensure reliability and repeatability.
• Lead cross-functional initiatives: Collaborate with product engineers, data scientists, and ML practitioners to understand their workflows and deliver high-impact platform solutions.
• Champion operational reliability: Proactively monitor system performance, automate incident response, and strengthen platform resilience.
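To give a flavour of the pipeline work above, here is a minimal Airflow DAG sketch: an automation-first pipeline with an explicit data-quality gate before anything is published downstream. The DAG id, task names, and the quality rule are all invented for illustration; this is a sketch of the pattern, not the client's actual codebase, and it assumes Airflow 2.4 or later.

```python
# Illustrative sketch only - dag id, task names, and the quality rule are invented.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder extract step; in practice this would pull from a source system.
    return [{"order_id": "A-123", "amount": 19.99}]


def check_quality(ti, **context):
    # Quality gate: fail the run rather than let bad data flow downstream.
    rows = ti.xcom_pull(task_ids="extract_orders")
    if not rows:
        raise ValueError("quality gate failed: extract returned no rows")


with DAG(
    dag_id="orders_pipeline",      # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",             # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    quality = PythonOperator(task_id="check_quality", python_callable=check_quality)
    extract >> quality
```

Failing the task loudly, instead of publishing partial data, is what "treating data as a product" looks like in practice: consumers can trust that whatever lands has passed the gate.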
What You'll Bring
• Strong proficiency in Python (or a similar high-level language) with a deep understanding of software engineering best practices - testing, automation, clean code, and CI/CD (see the test sketch after this list).
• Proven track record building and maintaining scalable data platforms in production, enabling advanced users such as ML and analytics engineers.
• Hands-on experience with modern data stack tools - Airflow, dbt, Databricks, and data catalogue/observability solutions like Monte Carlo, Atlan, or DataHub.
• Solid understanding of cloud environments (AWS or GCP), including IAM, S3, ECS, RDS, or equivalent services.
• Experience implementing Infrastructure as Code (Terraform) and CI/CD pipelines (e.g., Jenkins, GitHub Actions).
• A mindset focused on continuous improvement, learning, and staying at the forefront of emerging technologies.
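As a concrete example of the "testing, automation, clean code" expectation, a minimal pytest sketch for a pipeline transform is shown below. The normalise_order function is hypothetical; the point is that transforms ship with unit tests that pin down both the happy path and the failure mode, not the specific logic.

```python
# test_transforms.py - illustrative only; normalise_order is a hypothetical transform.
import pytest


def normalise_order(raw: dict) -> dict:
    """Standardise key names and coerce the amount to pence (int)."""
    return {
        "order_id": str(raw["OrderId"]),
        "amount_pence": int(round(float(raw["Amount"]) * 100)),
    }


def test_normalise_order_happy_path():
    raw = {"OrderId": "A-123", "Amount": "19.99"}
    assert normalise_order(raw) == {"order_id": "A-123", "amount_pence": 1999}


def test_normalise_order_rejects_missing_amount():
    # A malformed record should fail fast rather than propagate silently.
    with pytest.raises(KeyError):
        normalise_order({"OrderId": "A-123"})
```

Tests like these are what the CI/CD pipelines mentioned above actually run on every change, which is how reliability and repeatability are enforced in practice.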
Nice to Have
• Experience rolling out data governance and observability frameworks, including lineage tracking, SLAs, and data quality monitoring.
• Familiarity with modern data lake table formats such as Delta Lake, Iceberg, or Hudi.
• Background in stream processing (Kafka, Flink, or similar ecosystems) - see the producer sketch after this list.
• Exposure to containerisation and orchestration technologies such as Docker and Kubernetes.
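For the stream-processing point, a minimal Kafka producer sketch using the kafka-python client is shown below. The broker address, topic name, and event shape are all assumptions made for illustration.

```python
# Illustrative sketch using the kafka-python client; broker and topic are invented.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                        # assumed local broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),  # JSON-encode each event
)

# Publish a single order event; downstream consumers (e.g. Flink jobs) read this topic.
producer.send("orders", {"order_id": "A-123", "amount_pence": 1999})
producer.flush()  # block until the event is actually delivered
producer.close()
```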






