Whitehall Resources

Data-Ops Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data-Ops Engineer; the contract length is unspecified, the pay rate is undisclosed, and the location is listed as remote. Key skills include ETL/ELT tools, SQL, and cloud platforms. Experience with DataOps practices and strong analytical abilities are required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date
March 20, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Inside IR35
🔒 - Security
Unknown
📍 - Location detailed
Norfolk, England, United Kingdom
🧠 - Skills detailed
#BI (Business Intelligence) #Talend #Kafka (Apache Kafka) #GIT #DevOps #Data Governance #ADF (Azure Data Factory) #Grafana #Scala #AWS (Amazon Web Services) #DataOps #Metadata #Batch #Cloud #dbt (data build tool) #SQL (Structured Query Language) #Data Pipeline #Kubernetes #Informatica #Security #Jenkins #ETL (Extract, Transform, Load) #Deployment #Azure #Data Engineering #Agile #GCP (Google Cloud Platform) #Docker #Monitoring #Airflow #Prometheus
Role description
Whitehall Resources are currently looking for a Data-Ops Engineer. This role will be inside IR35, so you will be required to use an FCSA Accredited Umbrella Company.

Key Requirements:
- The Data-Ops Engineer is responsible for designing, automating, and optimizing data pipelines and ensuring smooth data flow across the organization.
- The role bridges data engineering, operations, and DevOps practices to deliver reliable, high-quality, and timely data for analytics, reporting, and business applications.

Key Responsibilities:
- Design, build, and maintain automated, scalable data pipelines (batch and real-time).
- Optimize ETL/ELT jobs for performance, reliability, and cost efficiency.
- Ensure data pipelines meet SLAs, quality standards, and security guidelines.
- Manage and monitor data platform operations using DataOps/DevOps practices.
- Ensure high availability and reliability of data platforms (cloud or on-prem).
- Troubleshoot pipeline failures and perform root cause analysis (RCA).
- Implement CI/CD pipelines for data workflows.
- Automate testing, deployment, and monitoring for data services.
- Implement data validation, profiling, and quality checks.
- Work with data governance teams to enforce metadata standards, lineage, and catalogs.
- Collaborate with data engineers, BI teams, analysts, and product teams.
- Translate business requirements into scalable data solutions.

Key Experience:
- Strong experience with ETL/ELT tools (e.g., Informatica, Talend, dbt, Airflow, ADF).
- Proficiency in SQL, data modelling, and performance tuning.
- Hands-on experience with cloud platforms (Azure / AWS / GCP) and their data services.
- Understanding of DevOps tools: Git, Jenkins, Docker, Kubernetes (optional).
- Familiarity with messaging and streaming platforms: Kafka, Event Hub, Kinesis.
- Experience with monitoring tools (e.g., Grafana, Prometheus, CloudWatch).
- Strong analytical and problem-solving abilities.
- Excellent communication and stakeholder management.
- Ability to work in cross-functional, agile environments.