

Queen Square Recruitment
Data Ops Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Ops Engineer on a 6-month contract in Norwich (Hybrid – 2/3 days onsite) at £475/day. Key skills include ETL/ELT tools, strong SQL, cloud platforms, and DataOps/DevOps experience.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
475
🗓️ - Date
March 19, 2026
🕒 - Duration
6 months
🏝️ - Location
Hybrid
📄 - Contract
Inside IR35
🔒 - Security
Unknown
📍 - Location detailed
Norwich, England, United Kingdom
🧠 - Skills detailed
#Azure #ADF (Azure Data Factory) #Data Quality #Agile #DataOps #Security #Scala #Metadata #Monitoring #Cloud #Automation #Informatica #Talend #Data Governance #Airflow #Data Lifecycle #Git #Batch #Kafka (Apache Kafka) #ETL (Extract, Transform, Load) #Docker #Data Management #Data Processing #BI (Business Intelligence) #Grafana #Prometheus #AWS (Amazon Web Services) #Deployment #dbt (data build tool) #Data Engineering #SQL (Structured Query Language) #GCP (Google Cloud Platform) #DevOps #Kubernetes #Data Pipeline #Jenkins
Role description
Data Ops Engineer
📍 Norwich (Hybrid – 2/3 days onsite) | 💰 £475/day (Inside IR35) | 📅 6-Month Contract
Our client, a leading global organisation, is seeking an experienced Data Ops Engineer to design, optimise, and support scalable data pipelines and platforms within a dynamic enterprise environment. This role sits at the intersection of data engineering, DevOps, and operations, ensuring high-quality, reliable data is delivered for analytics, reporting, and business applications. As a Data Ops Engineer, you will be responsible for building and maintaining robust data pipelines, ensuring platform reliability, and implementing automation across the data lifecycle.
Key Responsibilities
• Design, build, and maintain scalable ETL/ELT data pipelines (batch & real-time)
• Optimise pipelines for performance, reliability, and cost efficiency
• Ensure pipelines meet SLAs, data quality, and security standards
• Monitor and manage data platforms using DataOps/DevOps practices
• Ensure high availability and reliability of data services
• Troubleshoot issues and perform root cause analysis (RCA)
• Implement CI/CD pipelines for data workflows
• Automate testing, deployment, and monitoring of data services
• Implement data validation, profiling, and quality checks
• Support data governance, metadata management, and lineage tracking
• Work closely with data engineers, BI teams, and analysts
• Translate business requirements into scalable data solutions
Essential Skills & Experience
• Strong experience with ETL/ELT tools (e.g., Airflow, dbt, Informatica, Talend, ADF)
• Strong SQL skills with experience in data modelling and performance tuning
• Hands-on experience with cloud platforms (Azure, AWS, or GCP)
• Experience with DataOps / DevOps tools (Git, Jenkins, CI/CD pipelines)
• Familiarity with streaming/messaging technologies (Kafka, Event Hub, Kinesis)
• Experience with monitoring tools (Grafana, Prometheus, CloudWatch)
• Strong troubleshooting and problem-solving skills
• Ability to work in agile, cross-functional teams
Desirable Skills
• Experience with Docker and Kubernetes
• Exposure to real-time data processing and streaming architectures
• Knowledge of data governance frameworks and tooling
If this is relevant to your experience, please apply with your CV and we’ll be in touch. Thank you!






