

Harnham
Databricks Data Engineer (Contract)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Data Engineer (Contract) with a 6-month duration, offering £550-£600pd outside IR35. The role is remote and open to UK-based applicants only. Key skills include Python, AWS, and Databricks, along with experience in data governance and big data technologies.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
600
-
🗓️ - Date
October 24, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
London, England, United Kingdom
-
🧠 - Skills detailed
#Data Engineering #Data Catalog #Automation #ML (Machine Learning) #dbt (data build tool) #IAM (Identity and Access Management) #DevOps #Data Governance #Big Data #AWS (Amazon Web Services) #Microservices #Kubernetes #Terraform #GCP (Google Cloud Platform) #Delta Lake #Data Lineage #Data Science #Databricks #Kafka (Apache Kafka) #Scala #Jenkins #Python #Data Quality #Data Lake #S3 (Amazon Simple Storage Service) #Observability #Monitoring #RDS (Amazon Relational Database Service) #Cloud #Docker #Airflow
Role description
£550-£600pd Outside IR35
Remote - UK-based applicants only
6 months
A leading eCommerce brand is looking for a Data Platform Engineer to play a key role in evolving their data ecosystem. This is an exciting opportunity to shape how data is built, governed, and leveraged across the business, supporting a platform that impacts millions of users daily.
THE COMPANY
This brand is recognised for its innovation and community-driven approach. With data at the heart of its decision-making, they're investing heavily in scalable, modern data platforms to enable better insights, experimentation, and product development. You'll join a collaborative engineering culture that values autonomy, technical excellence, and continuous improvement.
THE ROLE
As a Data Platform Engineer, you'll be responsible for developing and scaling the company's core data platform, ensuring teams across the business can access, trust, and use data effectively. You'll drive initiatives that improve data quality, observability, and governance, while helping shape a platform-as-a-product mindset.
Key responsibilities include:
• Building and maintaining data infrastructure: Develop microservices, pipelines, and backend systems that power analytics and machine learning initiatives (a minimal pipeline sketch follows this list).
• Driving platform evolution: Design and implement scalable, secure, and efficient data services using tools such as Terraform, Docker, and AWS.
• Data governance and observability: Introduce and enhance tooling for data lineage, contracts, monitoring, and cataloguing.
• Operational excellence: Lead automation, monitoring, and incident response to maintain high platform reliability.
• Cross-functional collaboration: Work with data scientists, ML engineers, analysts, and product teams to understand and meet their data needs.
• Mentorship and culture: Support the growth of peers through knowledge sharing and by championing engineering best practices.
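To make the pipeline responsibility above more concrete, here is a rough, purely illustrative sketch of an orchestrated ingest-plus-quality-check job in Airflow (one of the tools named in the skills section). The DAG id, task names, and step logic are assumptions for illustration, not details taken from the role.

```python
# Hypothetical example only: a minimal Airflow DAG (Airflow 2.4+) with an
# extract step followed by a data quality gate. All names are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events(**context):
    # Placeholder extract: in practice this might read from a queue or API
    # and land raw files in S3 for the day being processed.
    print(f"extracting events for {context['ds']}")


def run_quality_checks(**context):
    # Placeholder quality gate: real checks might be dbt tests or rules
    # surfaced through an observability tool such as Monte Carlo.
    print("row count and schema checks passed")


with DAG(
    dag_id="example_events_ingest",  # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
    checks = PythonOperator(task_id="quality_checks", python_callable=run_quality_checks)
    extract >> checks
```

In a platform-as-a-product setup like the one described, the placeholder steps would typically call shared ingestion services and registered quality checks rather than inline functions.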
YOUR SKILLS AND EXPERIENCE
The successful candidate will have:
• Strong experience in Python and a solid foundation in software engineering best practices (testing, CI/CD, automation); a short illustrative testing sketch follows this list.
• Proven track record of designing, building, and scaling data platforms in production environments.
• Hands-on experience with big data technologies such as Airflow, dbt, Databricks, and data catalogue/observability tools (e.g. Monte Carlo, Atlan, DataHub).
• Knowledge of cloud infrastructure (AWS or GCP), including services such as S3, RDS, EMR, ECS, and IAM.
• Experience with DevOps tooling, particularly Terraform and CI/CD pipelines (e.g. Jenkins).
• A proactive, growth-oriented mindset with a passion for modern data and platform technologies.
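As a rough illustration of the "testing, CI/CD, automation" expectation, the sketch below shows a pure transform function covered by a pytest-style test that could run in a Jenkins pipeline. The function, column names, and dedup rule are assumptions for illustration only.

```python
# Hypothetical example only: a plain transform function plus a pytest test.
# Field names and the dedup rule are illustrative, not from the role.
import pandas as pd


def deduplicate_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Keep the most recent row per order_id."""
    return (
        df.sort_values("updated_at")
          .drop_duplicates("order_id", keep="last")
          .reset_index(drop=True)
    )


def test_deduplicate_orders_keeps_latest():
    df = pd.DataFrame(
        {
            "order_id": [1, 1, 2],
            "updated_at": ["2025-01-01", "2025-01-02", "2025-01-01"],
            "amount": [10, 12, 7],
        }
    )
    result = deduplicate_orders(df)
    assert len(result) == 2
    assert result.loc[result["order_id"] == 1, "amount"].item() == 12
```

Keeping transforms as plain functions like this is what makes them straightforward to exercise in CI before they reach the platform.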
Nice to Have:
• Experience implementing data governance and observability stacks (lineage, data contracts, quality monitoring).
• Knowledge of data lake formats (Delta Lake, Parquet, Iceberg, Hudi).
• Familiarity with containerisation and streaming technologies (Docker, Kubernetes, Kafka, Flink).
• Exposure to lakehouse or medallion architectures within Databricks (a brief bronze-to-silver sketch follows below).
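For context on the medallion architecture item, below is a minimal, hypothetical bronze-to-silver step in Databricks using Delta Lake. Table names and the deduplication rule are illustrative, and the snippet assumes a Databricks runtime where `spark` is already provided.

```python
# Hypothetical example only: a bronze -> silver step in a Databricks
# medallion layout. Table names and the quality rule are illustrative.
from pyspark.sql import functions as F

bronze = spark.read.table("bronze.raw_orders")        # hypothetical table

silver = (
    bronze
    .dropDuplicates(["order_id"])                     # basic quality rule
    .withColumn("ingested_at", F.current_timestamp())
)

(
    silver.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("silver.orders")                     # hypothetical table
)
```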