

Ki
Temporary GCP Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Temporary GCP Data Engineer position for a contract length of "unknown" with a pay rate of "unknown." Key skills required include BigQuery, Python, SQL, and experience with data pipelines. Familiarity with Agile methodologies is essential.
🌎 - Country
United Kingdom
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 11, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London, England, United Kingdom
-
🧠 - Skills detailed
#ML (Machine Learning) #Bash #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Quality #Data Engineering #Agile #Kanban #dbt (data build tool) #Shell Scripting #Data Processing #Storage #Cloud #Python #GCP (Google Cloud Platform) #Terraform #Data Lake #Scripting #DataOps #Documentation #Data Pipeline #Visualization #Data Science #Compliance #Scrum #Tableau #Data Warehouse
Role description
Who are we? 👋
Look at the latest headlines and you will see something Ki insures. Think space shuttles, world tours, wind farms, and even footballers' legs.
Ki's mission is simple. Digitally transform and revolutionise a 335-year-old market. Working with Google and UCL, Ki has created a platform that uses algorithms, machine learning and large language models to give insurance brokers quotes in seconds, rather than days.
Ki is proudly the biggest global algorithmic insurance carrier. It's the fastest growing syndicate in the Lloyd's of London market, and the first ever to make $100m in profit in 3 years.
Ki's teams have varied backgrounds and work together in an agile, cross-functional way to build the very best experience for its customers. Ki has big ambitions but needs more excellent minds to challenge the status quo and help it reach new horizons.
Where you come in
While our broker platform is the core technology crucial to Ki's success, this role will focus on supporting the middle/back-office operations that lay the foundations for further, sustained success. We're a multi-disciplined team, bringing together expertise in software and data engineering, full-stack development, platform operations, algorithm research, and data science. Our squads focus on delivering high-impact solutions, and we favour a highly iterative, analytical approach.
You will design and develop complex data processing modules and reporting using BigQuery and Tableau. You will also work closely with the Ki Infrastructure/Platform team, which is responsible for architecting and operating the core of the Ki Data Analytics platform.
What you will be doing: 🖊️
• Work with business teams (finance and actuarial initially), data scientists, and engineers to design, build, optimise, and maintain production-grade data pipelines and reporting from an internal data warehouse solution based on GCP/BigQuery
• Work with finance, actuaries, data scientists and engineers to understand how we can make best use of new internal and external data sources
• Work with our delivery partners at EY/IBM to ensure robust design and engineering of the data model, MI, and reporting that can support our ambitions for growth and scale
• Take business-as-usual (BAU) ownership of data models, reporting, and integrations/pipelines
• Create frameworks, infrastructure and systems to manage and govern Ki's data asset
• Produce detailed documentation to allow ongoing BAU support and maintenance of data structures, schemas, reporting, etc.
• Work with the broader Engineering community to develop our data and MLOps capabilities and infrastructure
• Ensure data quality, governance, and compliance with internal and external standards.
• Monitor and troubleshoot data pipeline issues, ensuring reliability and accuracy.
Requirements
• Experience designing data models and developing industrialised data pipelines
• Strong knowledge of database and data lake systems
• Hands-on experience with BigQuery, dbt, and GCP Cloud Storage
• Proficient in Python, SQL and Terraform
• Knowledge of Cloud SQL, Airbyte, Dagster
• Comfortable with shell scripting in Bash or similar
• Experience provisioning new infrastructure in a leading cloud provider, preferably GCP
• Proficient with Tableau Cloud for data visualization and reporting
• Experience creating DataOps pipelines
• Comfortable working in an Agile environment, actively participating in approaches such as Scrum or Kanban
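For illustration only (not part of the role description): a minimal sketch, in Python with the google-cloud-bigquery client, of the kind of BigQuery reporting query this stack involves. The project, dataset, table, and column names are hypothetical.

    from google.cloud import bigquery  # assumes the google-cloud-bigquery package is installed

    client = bigquery.Client()  # uses application-default GCP credentials

    # Hypothetical reporting query; project, dataset, and columns are illustrative only
    query = """
        SELECT class_of_business, SUM(gross_written_premium) AS gwp
        FROM `example-project.finance_mart.policies`
        WHERE bound_date >= '2025-01-01'
        GROUP BY class_of_business
        ORDER BY gwp DESC
    """

    for row in client.query(query).result():
        print(f"{row.class_of_business}: {row.gwp:,.0f}")

In practice, transformations like this would typically live in dbt models on top of BigQuery, with the underlying infrastructure provisioned via Terraform, in line with the requirements above.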






