

Xcede
Data Engineer (GCP)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (GCP) with a contract length of "unknown" and a day rate of "unknown." It requires 5+ years of experience, strong Python and SQL skills, and familiarity with GCP tools. The work location is hybrid in London.
Country
United Kingdom
Currency
£ GBP
Day rate
Unknown
Date
February 10, 2026
Duration
Unknown
Location
Hybrid
Contract
Outside IR35
Security
Unknown
Location detailed
Greater London, England, United Kingdom
Skills detailed
#SQL (Structured Query Language) #Data Pipeline #Compliance #Python #Snowflake #Documentation #Databricks #Data Engineering #DataOps #Airflow #Monitoring #ETL (Extract, Transform, Load) #Observability #Data Quality #Classification #Data Science #Cloud #GCP (Google Cloud Platform) #Terraform #Infrastructure as Code (IaC) #Spark (Apache Spark) #BigQuery #Scala #Datasets #Data Architecture #Data Governance
Role description
GCP Data Engineer
Hybrid 2-3 days - London
Outside IR35
As a Data Engineer, you'll design, build, and operate scalable, reliable data pipelines and data infrastructure. Your work will ensure high-quality data is accessible, trusted, and ready for analytics and data science - powering business insights and decision-making across the company.
What you'll do
• Build and maintain data pipelines for ingestion, transformation, and export across multiple sources and destinations
• Develop and evolve scalable data architecture to meet business and performance requirements
• Partner with analysts and data scientists to deliver curated, analysis-ready datasets and enable self-service analytics
• Implement best practices for data quality, testing, monitoring, lineage, and reliability
• Optimise workflows for performance, cost, and scalability (e.g., tuning Spark jobs, query optimisation, partitioning strategies; see the sketch after this list)
• Ensure secure data handling and compliance with relevant data protection standards and internal policies
• Contribute to documentation, standards, and continuous improvement of the data platform and engineering processes
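
To make the optimisation bullet above concrete, here is a minimal Python sketch, assuming the google-cloud-bigquery client library and hypothetical dataset, table, and column names, of the kind of partitioning strategy the role describes: date partitioning plus clustering, so that typical analytical filters scan (and bill for) only a fraction of the table.

from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Hypothetical events table: partition pruning on event date, clustering
# on the columns most often used in WHERE clauses and joins.
ddl = """
CREATE TABLE IF NOT EXISTS analytics.events (
  event_ts   TIMESTAMP,
  user_id    STRING,
  event_name STRING,
  payload    JSON
)
PARTITION BY DATE(event_ts)     -- queries filtered on event date scan one partition
CLUSTER BY user_id, event_name  -- co-locates rows for common predicates
"""

client.query(ddl).result()  # blocks until the DDL job completes

A query filtered on DATE(event_ts) and user_id then reads a single partition rather than the whole table, which is usually the first lever for both performance and cost in BigQuery.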
What makes you a great fit
• 5+ years of experience as a Data Engineer, building and maintaining production-grade pipelines and datasets
• Strong Python and SQL skills with a solid understanding of data structures, performance, and optimisation strategies
• Familiarity with GCP and its ecosystem: BigQuery, Composer, Dataproc, Cloud Run, Dataplex
• Hands-on experience with orchestration tools (such as Airflow, Dagster, or Databricks Workflows; see the DAG sketch after this list) and distributed processing in a cloud environment
• Experience with analytical data modelling (star and snowflake schemas), DWH, ETL/ELT patterns, and dimensional concepts
• Experience with data governance concepts: access control, retention, data classification, auditability, and compliance standards
• Familiarity with CI/CD for data pipelines, IaC (Terraform), and/or DataOps practices
• Experience building observability for data systems (metrics, alerting, data quality checks, incident response)
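
Tying together the orchestration and observability bullets above, here is a minimal Airflow DAG sketch (Airflow 2.4+ API; the task names and placeholder logic are hypothetical, not taken from the posting) wiring ingest, transform, and a data quality gate in sequence.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    # Pull raw data from a source system into a landing zone.
    ...

def transform():
    # Build curated, analysis-ready tables from the raw layer.
    ...

def check_quality():
    # Fail the run (and alert) if the curated output looks wrong; a real
    # check would query row counts, null rates, freshness, etc.
    row_count = 1  # placeholder for an actual count query
    if row_count == 0:
        raise ValueError("quality gate failed: curated table is empty")

with DAG(
    dag_id="events_daily",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_quality = PythonOperator(task_id="quality_check", python_callable=check_quality)
    t_ingest >> t_transform >> t_quality

Because the quality gate is its own task, a failed check surfaces as a failed task run, with retries and alerting handled by the scheduler - the simplest form of the monitoring and incident-response practice listed above.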






