Undisclosed

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a DataOps Engineer in Kings Cross, London, with a 6-month contract at £850.66 per day, inside IR35. Requires 5+ years in Software Engineering, strong Python skills, and expertise in Google Cloud Platform (GCP). Hybrid work, ASAP start.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
904
-
🗓️ - Date
October 9, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#DataOps #Scala #DevOps #Data Engineering #Leadership #Data Analysis #Automation #GCP (Google Cloud Platform) #AI (Artificial Intelligence) #Monitoring #Python #Metadata #Cloud #Logging #Agile #Observability #Data Ingestion #ETL (Extract, Transform, Load) #R #ML (Machine Learning) #Data Governance
Role description
Job: DataOps Engineer
Location: Kings Cross, London (2/3 times per week)
Pay rate: £850.66 per day, Inside IR35
Term: 6 months, ASAP start date

Description:
Our client wants to supercharge their data capability to better understand their patients and accelerate their ability to discover vaccines and medicines. The organization represents a major investment by the client's R&D and Digital & Tech functions, designed to deliver a step-change in their ability to leverage data, knowledge, and prediction to find new medicines.

They are a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data/metadata/knowledge platforms, and AI/ML and analysis platforms, all geared toward:
• Building a next-generation, metadata- and automation-driven data experience for the client's scientists, engineers, and decision-makers, increasing productivity and reducing time spent on "data mechanics"
• Providing best-in-class AI/ML and data analysis environments to accelerate the client's predictive capabilities and attract top-tier talent
• Aggressively engineering their data at scale, as one unified asset, to unlock the value of their unique collection of data and predictions in real time

Key priorities:
• Automation of end-to-end data flows: faster, reliable ingestion of high-throughput data in genetics, genomics, and multi-omics, to extract value from investments in new technology (instrument to analysis-ready data in ≤12h)
• Enabling governance by design of external and internal data, with engineered, practical solutions for controlled use and monitoring
• Innovative disease-specific and domain-expert-specific data products, enabling computational scientists and their research-unit collaborators to reach key insights faster, leading to shorter biopharmaceutical development cycles
• Improving engineering efficiency: extensible, reusable, scalable, updateable

We are looking for an experienced DataOps Engineer to join our client's growing Data Ops team.
A Data Ops Engineer is a highly technical individual contributor, building modern, cloud-native, DevOps-first systems for standardizing and templatizing biomedical and scientific data engineering, with demonstrable experience across the following areas:
• Delivering declarative components for common data ingestion, transformation, and publishing techniques
• Defining and implementing data governance aligned to modern standards
• Establishing scalable, automated processes for data engineering teams across our client
• Acting as a thought leader and partner with wider Onyx data engineering teams to advise on implementation and best practices
• Cloud Infrastructure-as-Code
• Defining service and flow orchestration
• Data as a configurable resource (including configuration-driven access to scientific data modelling tools)
• Observability (monitoring, alerting, logging, tracing, ...)
• Enabling quality engineering through KPIs, code coverage, and quality checks
• Standardising a GitOps/declarative software development lifecycle
• Audit as a service

What you'll need to have to be successful:
• Strong experience coding in Python
• A background of 5+ years in Software Engineering, using Google Cloud Platform (GCP)

Data Ops Engineers take ownership of delivering high-performing, high-impact biomedical and scientific data ops products and services, from a description of a pattern that customer Data Engineers are trying to use all the way through to final delivery (and ongoing monitoring and operations) of a templated project and all associated automation. They are standard-bearers for software engineering and quality coding practices within the team and are expected to mentor more junior engineers; they may even coordinate the work of junior engineers on a large project. They devise useful metrics for ensuring their services meet customer demand and have an impact, and iterate to deliver and improve on those metrics in an agile fashion.