eTeam

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a DataOps Engineer on a 6-month hybrid contract in Birmingham, paying £375/day. Key skills include AWS data tools, CI/CD expertise, and strong programming in SQL/Python/Spark. Experience in finance-related data engineering and Agile environments is required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
375
🗓️ - Date
April 2, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Inside IR35
🔒 - Security
Unknown
📍 - Location detailed
Birmingham, England, United Kingdom
🧠 - Skills detailed
#Data Vault #Data Lake #SageMaker #Scala #Data Science #Data Bricks #Agile #ML (Machine Learning) #SAS #AWS (Amazon Web Services) #Automation #Data Engineering #Redshift #Data Pipeline #Monitoring #PyTorch #TensorFlow #Terraform #Informatica #Python #DataOps #Infrastructure as Code (IaC) #Quality Assurance #Programming #SQL (Structured Query Language) #GitHub #Spark (Apache Spark) #Airflow #EC2 #AI (Artificial Intelligence) #Docker #S3 (Amazon Simple Storage Service) #Vault #DevOps #Qlik
Role description
Job Title: DataOps Engineer
Location: Birmingham - Hybrid - 3 days onsite/week
Duration: 6-month contract
Pay Rate: £375 per day through FCSA Umbrella
Role Description:
• The DataOps Engineer will assist the Senior DataOps Engineer, who is responsible for inspiring and supporting the Data Engineering team in designing, developing, and testing quality data engineering solutions, delivered through domain-oriented, multidisciplinary data product teams.
• This role will act as a Continuous Integration/Continuous Delivery (CI/CD) expert for the Data Office, helping Data Engineering teams automate as much of their work as possible to reduce waste and improve quality.
• Continually challenging and improving our processes, tools, and methodologies; undertaking review and assurance activity; and providing other team members with guidance on design, build, and test activity.
Requirements:
• A Data Engineering or DevOps-related qualification and/or extensive Data/DataOps/DevOps development experience in a commercial, Agile environment.
• What we'd like to see: strong multi-project experience in several of the following, or similar, in a Data Engineering context.
• Strong experience in developing and automating scalable data pipelines in a finance-related data context with a DataOps/DevOps mindset, having evolved your expertise toward operational excellence and automation in data environments. In addition to a solid foundation in data engineering, you also demonstrate expertise in automation, CI/CD pipelines, IaC, and monitoring systems to ensure scalable, reliable data workflows. You bring professional experience with the following tools:
• AWS data tooling such as S3/Glue/Redshift/SageMaker.
• Familiarity with containerization (e.g., Docker/EC2), orchestration in an enterprise environment (Airflow), infrastructure automation (Terraform), CI/CD platforms (GitHub Actions & Admin), and password/secret management (HashiCorp Vault); an illustrative pipeline sketch follows this listing.
• Strong data-related programming skills in SQL/Python/Spark/Scala.
• Database technologies relating to Data Warehouse/Data Lake/Lakehouse patterns, and relevant experience handling structured and unstructured data.
• (Information Modeller) Experience in data modelling techniques and tooling.
• (Test) Quality Assurance and test automation experience in a data pipeline; an illustrative test sketch also follows this listing.
• (Machine Learning) Experience of industrialising and scaling machine learning models.
• (Machine Learning) Experience using machine learning frameworks such as TensorFlow/PyTorch.
What would be nice to have:
• Experience working in an Agile team, preferably SAFe.
• Experience in specific tooling: Qlik Replicate/Qlik Compose/Databricks/Informatica/SAS.
• An understanding of data modelling methodologies (Kimball, Data Vault, Lakehouse).
• An understanding of Data Science, AI, and Machine Learning ways of working.
• Experience with testing and testing standards.
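For illustration only (this is not part of the job specification), the sketch below shows the kind of orchestrated, quality-gated pipeline the requirements describe: a minimal Airflow DAG in Python. The DAG id, schedule, and the extract, transform, and quality-gate steps are hypothetical placeholders, and Airflow 2.4+ is assumed.

```python
# Minimal sketch of an orchestrated, quality-gated data pipeline (hypothetical).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**_):
    # Placeholder: in practice this would pull raw files from an S3 landing bucket.
    print("extracting raw finance files")


def transform(**_):
    # Placeholder: in practice this would submit a Spark/Glue job.
    print("running Spark transform")


def quality_gate(**_):
    # Placeholder data-quality gate: fail the run if a basic check fails,
    # so bad data never reaches downstream consumers.
    row_count = 1_000  # in practice, query the target table or Glue catalog
    if row_count == 0:
        raise ValueError("Quality gate failed: target table is empty")


with DAG(
    dag_id="finance_daily_load",   # hypothetical pipeline name
    start_date=datetime(2026, 4, 2),
    schedule="@daily",             # Airflow 2.4+ argument name
    catchup=False,
    tags=["dataops", "finance"],
) as dag:
    (
        PythonOperator(task_id="extract", python_callable=extract)
        >> PythonOperator(task_id="transform", python_callable=transform)
        >> PythonOperator(task_id="quality_gate", python_callable=quality_gate)
    )
```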
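Likewise, a minimal sketch of the test-automation requirement, assuming pandas and pytest: plain data-quality tests that could run in a CI/CD job such as GitHub Actions. The transactions table, its columns, and the checks are hypothetical stand-ins for a real extract from Redshift or Glue.

```python
# Hypothetical data-quality tests for a pipeline output, runnable with pytest.
import pandas as pd
import pytest


@pytest.fixture
def transactions() -> pd.DataFrame:
    # In a real pipeline this fixture would read from the warehouse or a staging file.
    return pd.DataFrame(
        {
            "txn_id": [1, 2, 3],
            "amount_gbp": [120.50, 89.99, 10.00],
            "booked_at": pd.to_datetime(["2026-04-01", "2026-04-01", "2026-04-02"]),
        }
    )


def test_no_duplicate_transaction_ids(transactions):
    assert transactions["txn_id"].is_unique


def test_amounts_are_positive(transactions):
    assert (transactions["amount_gbp"] > 0).all()


def test_no_missing_booking_dates(transactions):
    assert transactions["booked_at"].notna().all()
```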