

Morgan McKinley
Data Engineer (GCP)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer (GCP) on a hybrid/remote basis in the UK, offering £550-650 per day (Outside IR35). Key skills include GCP, Apache Spark, Airflow, and Python, with experience in regulated environments essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
650
🗓️ - Date
February 20, 2026
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Outside IR35
🔒 - Security
Unknown
📍 - Location detailed
United Kingdom
🧠 - Skills detailed
#Spark (Apache Spark) #Airflow #Delta Lake #Data Quality #Security #Data Processing #Scala #BigQuery #GCP (Google Cloud Platform) #Storage #Data Pipeline #Observability #SQL (Structured Query Language) #Python #Monitoring #Apache Spark #Apache Airflow #Data Science #Data Engineering #Terraform #Data Governance #Infrastructure as Code (IaC) #Cloud
Role description
Senior Data Engineer (GCP)
Client: Consultancy client
Location: UK – Hybrid / Remote (TBC)
Rate: £550-650 per day (Outside IR35)
Overview
We’re supporting a consultancy client delivering a large data platform programme, focused on building and maturing a modern, cloud-native data platform on Google Cloud Platform (GCP). The programme is moving beyond PoCs into large-scale, production-grade delivery and requires strong hands-on Data Engineers to help stabilise, scale, and operationalise core data pipelines and platform services.
This role suits engineers who enjoy working on complex data platforms, care about engineering quality, and have experience delivering data products in regulated, enterprise environments.
Key Responsibilities
• Design, build, and operate robust data pipelines on GCP using Spark and Airflow (see the sketch after this list)
• Develop and optimise data processing workloads in BigQuery and associated GCP services
• Contribute to the design and evolution of a lakehouse-style architecture (including open table formats such as Iceberg)
• Work closely with platform and architecture teams to ensure data pipelines are:
  • Scalable
  • Cost-efficient
  • Secure and well-governed
• Help mature engineering standards, including:
  • CI/CD for data pipelines
  • Infrastructure as Code (e.g. Terraform)
  • Observability, monitoring, and reliability of data workflows
• Support data consumers (analysts, data scientists, downstream teams) by producing high-quality, well-documented data products
• Contribute to technical decision-making and best practices, avoiding unnecessary lock-in to cloud-specific proprietary tooling where possible
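To make the responsibilities above concrete, the sketch below shows a minimal Apache Airflow DAG of the kind described: a daily pipeline that submits a PySpark transformation to Dataproc and then runs a simple row-count check in BigQuery. This is illustrative only, not the client's code; the project, region, cluster, bucket, dataset, and table names are placeholders, and it assumes a recent Airflow 2.x deployment (for example Cloud Composer) with the apache-airflow-providers-google package installed.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator

# Placeholder identifiers -- substitute real project, region, cluster, and bucket names.
PROJECT_ID = "example-project"
REGION = "europe-west2"
CLUSTER_NAME = "example-cluster"
PYSPARK_URI = "gs://example-bucket/jobs/transform_events.py"

PYSPARK_JOB = {
    "reference": {"project_id": PROJECT_ID},
    "placement": {"cluster_name": CLUSTER_NAME},
    "pyspark_job": {"main_python_file_uri": PYSPARK_URI},
}

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2026, 1, 1),
    schedule="0 3 * * *",  # run once a day at 03:00
    catchup=False,
) as dag:
    # Submit the PySpark transformation to an existing Dataproc cluster.
    transform = DataprocSubmitJobOperator(
        task_id="spark_transform",
        project_id=PROJECT_ID,
        region=REGION,
        job=PYSPARK_JOB,
    )

    # Basic data-quality gate: fail the run if today's partition is empty.
    quality_check = BigQueryInsertJobOperator(
        task_id="row_count_check",
        configuration={
            "query": {
                "query": (
                    "SELECT IF(COUNT(*) > 0, 1, ERROR('empty partition')) "
                    "FROM `example-project.analytics.events` "
                    "WHERE event_date = CURRENT_DATE()"
                ),
                "useLegacySql": False,
            }
        },
    )

    transform >> quality_check
```

Keeping the transformation in a standalone PySpark script rather than in operator code is one common way to keep the pipeline portable across schedulers, in line with the point above about avoiding unnecessary lock-in.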
Required Experience
• Strong hands-on experience as a Data Engineer in modern cloud environments
• Proven delivery experience on GCP, ideally including:
  • BigQuery
  • Cloud Storage
  • Cloud SQL (or equivalent)
• Solid experience building pipelines with:
  • Apache Spark
  • Apache Airflow
• Strong Python (and/or Scala) for data engineering
• Experience working with large-scale data platforms (TBs+ data volumes)
• Comfortable operating in enterprise or regulated environments
Desirable / Nice to Have
• Experience with open table formats such as Iceberg, Delta Lake, or Hudi (see the sketch after this list)
• Exposure to data platform engineering (not just analytics pipelines)
• Experience with Infrastructure as Code (Terraform or similar)
• Understanding of data governance, security, and data quality frameworks
• Experience working in multi-team, multi-supplier delivery environments
• Prior exposure to multi-cloud or cloud-agnostic design patterns
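As a companion to the desirable skills above, here is a short, illustrative PySpark sketch of writing to an Apache Iceberg table whose warehouse lives in Cloud Storage. The catalog name (lake), bucket, and table identifiers are placeholders, and it assumes the Iceberg Spark runtime and the GCS connector are available on the cluster (as they typically are on Dataproc).

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Placeholder catalog, bucket, and table names -- illustrative only.
spark = (
    SparkSession.builder
    .appName("iceberg-write-example")
    # Register an Iceberg catalog whose warehouse is a Cloud Storage bucket.
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "gs://example-bucket/warehouse")
    .getOrCreate()
)

# Toy data standing in for an upstream extract.
events = spark.createDataFrame(
    [("2026-02-20", "signup", 42), ("2026-02-20", "login", 7)],
    ["event_date", "event_type", "n_events"],
)

# Create or replace an Iceberg table, partitioned by date, via the DataFrameWriterV2 API.
(
    events.writeTo("lake.analytics.events")
    .using("iceberg")
    .partitionedBy(col("event_date"))
    .createOrReplace()
)

# Any Iceberg-aware engine can now query the same table with plain SQL.
spark.sql(
    "SELECT event_type, SUM(n_events) AS total FROM lake.analytics.events GROUP BY event_type"
).show()
```

Because Iceberg is an open format, the same table remains readable by other Iceberg-aware engines, which supports the cloud-agnostic design patterns mentioned above.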