

Glocomms
GCP Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a GCP Data Engineer with a contract length of "X months" and a pay rate of "$Y per hour." Key skills include Google BigQuery, DataOps, Terraform, and Python. Experience with data models, pipelines, and GCP is required.
Country
United Kingdom
Currency
£ GBP
-
Day rate
Unknown
-
Date
April 17, 2026
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
City of London, England, United Kingdom
-
Skills detailed
#Storage #Infrastructure as Code (IaC) #Compliance #Bash #Shell Scripting #BigQuery #Data Engineering #Cloud #Scala #GCP (Google Cloud Platform) #Data Science #Data Pipeline #Data Warehouse #SQL (Structured Query Language) #dbt (data build tool) #Documentation #Data Lake #Data Quality #Python #Data Integration #Terraform #Scripting #DataOps
Role description
What you will be doing -
• Work with business stakeholders (initially Finance and Actuarial teams), data scientists, and engineers to design, build, optimise, and maintain production-grade data pipelines and reporting based on a modern cloud data warehouse solution.
• Collaborate closely with Finance, Actuarial, Data Science, and Engineering teams to identify and maximise the value of new internal and external data sources.
• Partner with third-party delivery vendors to ensure the robust design and engineering of data models, management information (MI), and reporting, supporting organisational growth and scalability.
• Take BAU ownership of data models, reporting assets, and data integrations/pipelines.
• Create frameworks, infrastructure, and systems to manage and govern enterprise data assets effectively.
• Produce detailed technical and functional documentation to enable ongoing BAU support and maintenance of data structures, schemas, pipelines, and reporting.
• Work with the wider Engineering community to develop and enhance data platform and MLOps capabilities.
• Ensure high data quality, governance, and compliance with internal policies and external regulatory standards.
• Proactively monitor, troubleshoot, and resolve data pipeline issues, ensuring reliability, accuracy, and performance of data products.
Required skills -
Experience designing data models and developing industrialised data pipelines
Strong knowledge of database and data lake systems
Experience creating and maintaining DataOps pipelines
Hands-on experience with Google BigQuery, dbt, and GCP Cloud Storage
Knowledge of Cloud SQL and data integration tools such as Airbyte
Experience provisioning new infrastructure in a leading cloud provider, preferably Google Cloud Platform (GCP)
Proficient in Terraform for infrastructure as code
Experience working with Dagster for data pipeline orchestration
Comfortable with shell scripting using Bash or similar tools
Proficient in Python and SQL






