

Adecco
DataOps Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a DataOps Engineer on a 6-month hybrid contract in London, requiring 2-3 days onsite weekly. Key skills include Data Engineering, Cloud Infrastructure, Automation, and AI/ML. Experience in biomedical data and data governance is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 8, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London
-
🧠 - Skills detailed
#Cloud #Data Analysis #Data Engineering #ETL (Extract, Transform, Load) #Leadership #AI (Artificial Intelligence) #DataOps #Data Ingestion #Monitoring #Scala #Metadata #DevOps #Data Governance #ML (Machine Learning) #Logging #Automation
Role description
My client is seeking to recruit a DataOps Engineer on an initial 6-month contract based in London. The role is hybrid and will require 2-3 days onsite per week.
A DataOps Engineer is a highly technical individual contributor, building modern, cloud-native, DevOps-first systems for standardizing and templatizing biomedical and scientific data engineering, with demonstrable experience across the following areas:
The team is a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data/metadata/knowledge platforms, and AI/ML and analysis platforms, all geared toward:
- Building a next-generation, metadata- and automation-driven data experience for scientists, engineers, and decision-makers, increasing productivity and reducing time spent on "data mechanics"
- Providing best-in-class AI/ML and data analysis environments to accelerate our predictive capabilities and attract top-tier talent
- Aggressively engineering our data at scale, as one unified asset, to unlock the value of our unique collection of data and predictions in real-time
Automation of end-to-end data flows: fast, reliable ingestion of high-throughput data in genetics, genomics, and multi-omics, to extract value from investments in new technology (instrument to analysis-ready data in 12 hours).
- Deliver declarative components for common data ingestion, transformation, and publishing techniques (see the config-driven sketch after this list)
- Define and implement data governance aligned to modern standards
- Establish scalable, automated processes for data engineering teams
- Act as a thought leader and partner to wider data engineering teams, advising on implementation and best practices
- Cloud Infrastructure-as-Code (an illustrative sketch follows the list)
- Define service and flow orchestration
- Data as a configurable resource (including configuration-driven access to scientific data modelling tools)
- Observability (monitoring, alerting, logging, tracing, ...), as in the structured-logging sketch below
- Enable quality engineering through KPIs, code coverage, and quality checks
- Standardise the GitOps/declarative software development lifecycle (illustrated by the reconciliation sketch below)
- Audit as a service
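As a purely illustrative sketch of the "declarative components" idea above: the pipeline is described as plain data (it could equally live in version-controlled YAML), and a small runner interprets it. Every step name and config key here is a hypothetical assumption, not part of the client's actual stack.

```python
"""Minimal sketch: a declarative, config-driven ingestion pipeline.

All step names and config keys are hypothetical illustrations.
"""
import csv
import json
from pathlib import Path

# Registry mapping declarative step names to implementations.
STEPS = {}

def step(name):
    """Register a function under a declarative step name."""
    def register(fn):
        STEPS[name] = fn
        return fn
    return register

@step("read_csv")
def read_csv(config, rows):
    with open(config["path"], newline="") as f:
        return list(csv.DictReader(f))

@step("filter")
def filter_rows(config, rows):
    field, value = config["field"], config["equals"]
    return [r for r in rows if r.get(field) == value]

@step("write_json")
def write_json(config, rows):
    Path(config["path"]).write_text(json.dumps(rows, indent=2))
    return rows

def run_pipeline(pipeline):
    """Interpret a declarative pipeline description step by step."""
    rows = None
    for spec in pipeline["steps"]:
        rows = STEPS[spec["use"]](spec, rows)
    return rows

if __name__ == "__main__":
    # The pipeline itself is pure data; it could live in version-controlled YAML.
    pipeline = {
        "steps": [
            {"use": "read_csv", "path": "samples.csv"},
            {"use": "filter", "field": "assay", "equals": "rnaseq"},
            {"use": "write_json", "path": "samples.json"},
        ]
    }
    run_pipeline(pipeline)
```

Keeping the pipeline description as data is what makes it templatizable: teams share the runner and step registry, and only the configuration varies per dataset.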
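For the Cloud Infrastructure-as-Code bullet, the posting names no tool; assuming Pulumi (one common Python-native IaC option) and AWS, a declarative landing bucket for raw instrument output might look like this. Resource names are hypothetical, and the program runs inside a Pulumi project via `pulumi up`, not as a standalone script.

```python
"""Illustrative Infrastructure-as-Code sketch using Pulumi's Python SDK.

Assumes a configured Pulumi project and AWS credentials; all resource
names are hypothetical.
"""
import pulumi
import pulumi_aws as aws

# A versioned landing bucket for raw instrument output.
raw_bucket = aws.s3.Bucket(
    "raw-instrument-data",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
    tags={"domain": "genomics", "stage": "raw"},
)

# Export the bucket name so downstream stacks and pipelines can reference it.
pulumi.export("raw_bucket_name", raw_bucket.id)
```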
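On observability, a minimal, tool-agnostic sketch using only the Python standard library: JSON-structured logs plus a timing context manager give monitoring and alerting systems machine-readable events to consume. Field names are illustrative; a production setup would more likely emit via OpenTelemetry or a similar stack.

```python
"""Minimal observability sketch: JSON-structured logs with step timings."""
import json
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("dataops")

@contextmanager
def traced_step(name, **fields):
    """Log structured start/end records (with duration) around a step."""
    start = time.monotonic()
    log.info(json.dumps({"event": "step_start", "step": name, **fields}))
    status = "ok"
    try:
        yield
    except Exception:
        status = "error"
        raise
    finally:
        log.info(json.dumps({
            "event": "step_end",
            "step": name,
            "status": status,
            "duration_s": round(time.monotonic() - start, 3),
            **fields,
        }))

if __name__ == "__main__":
    with traced_step("ingest", dataset="multi-omics-batch-01"):
        time.sleep(0.1)  # stand-in for real ingestion work
```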
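Finally, the GitOps/declarative-lifecycle bullet implies a desired state declared in Git and continuously reconciled against the live system. Real deployments would delegate this to a tool such as Argo CD or Flux; the toy loop below, with entirely fabricated state, only illustrates the diff-and-apply idea at the heart of it.

```python
"""Toy reconciliation loop illustrating the GitOps idea: desired state
(declared in version control) is compared to actual state, and only the
differences are applied. All states here are fabricated examples."""

def reconcile(desired: dict, actual: dict) -> list[str]:
    """Return the actions needed to make `actual` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name} -> {spec}")
        elif actual[name] != spec:
            actions.append(f"update {name} -> {spec}")
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

if __name__ == "__main__":
    desired = {"ingest-flow": {"replicas": 2}, "publish-flow": {"replicas": 1}}
    actual = {"ingest-flow": {"replicas": 1}, "legacy-flow": {"replicas": 1}}
    for action in reconcile(desired, actual):
        print(action)
```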