Hydrogen Group

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (MLOps Engineer) on a long-term contract in Glasgow (hybrid). Pay is £350 per day. Requires strong AWS skills, Python proficiency, and experience with CI/CD for ML workloads. BPSS eligibility needed.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
350
-
🗓️ - Date
February 3, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Yes
-
📍 - Location detailed
Glasgow, Scotland, United Kingdom
-
🧠 - Skills detailed
#Security #Deployment #IAM (Identity and Access Management) #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Infrastructure as Code (IaC) #Data Pipeline #Data Processing #Scala #Data Science #PyTorch #Data Engineering #Lambda (AWS Lambda) #Automation #Python #Monitoring #ML (Machine Learning) #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #SageMaker #TensorFlow #Cloud
Role description
MLOps Engineer – Contract (Glasgow / Hybrid)

We are looking for an experienced MLOps Engineer to join a large-scale data and machine learning programme within a regulated enterprise environment. This is a long-term contract opportunity for someone who enjoys building robust, production-grade ML platforms on AWS and working closely with data science and engineering teams. The role is hybrid, with 2–3 days per week on site in Glasgow, and will run until December 2026.

Key details:
• Contract role (PAYE via umbrella only)
• Location: Glasgow (hybrid)
• Rate: £350 per day (to umbrella)
• Security clearance: BPSS eligibility required

What you will be doing:
• Designing and automating scalable ML infrastructure using AWS-native services and infrastructure as code
• Building, deploying, and managing machine learning models across their full lifecycle
• Creating and optimising data pipelines using distributed processing frameworks
• Developing serverless automation and Python-based services
• Implementing best-practice MLOps processes, including CI/CD for models and pipelines
• Putting monitoring in place for model performance, drift, and reliability
• Ensuring ML workloads are secure, compliant, and production-ready
• Collaborating with data scientists, data engineers, and cloud teams to operationalise ML solutions

What I’m looking for:
• Strong hands-on experience in MLOps, ML engineering, or cloud automation
• Deep experience with AWS, particularly:
  • CloudFormation (infrastructure as code)
  • SageMaker (training, inference, pipelines, model management)
  • Glue and Spark for ETL and large-scale data processing
  • Lambda, S3, IAM, KMS, CloudWatch
• Strong Python skills for ML and data workflows
• Solid understanding of the full ML lifecycle, from data preparation through to deployment and monitoring
• Experience building CI/CD pipelines for machine learning workloads
• Familiarity with common ML frameworks such as TensorFlow, PyTorch, or Scikit-learn

If this sounds like something you’d be interested in, feel free to reach out or apply directly.