Machine Learning Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Machine Learning Engineer on a 6-month rolling contract, remote, with a pay rate outside IR35. Key skills include Python, SQL, AWS, MLOps frameworks, and data engineering tools. Experience in ETL processes and agile environments is required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
-
🗓️ - Date discovered
September 13, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Remote
-
📄 - Contract type
Outside IR35
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
United Kingdom
-
🧠 - Skills detailed
#Monitoring #Data Quality #SageMaker #SQL (Structured Query Language) #Data Modeling #Scala #Data Engineering #Deployment #dbt (data build tool) #Data Integrity #Spark (Apache Spark) #Python #Programming #MLflow #Model Deployment #Agile #AWS (Amazon Web Services) #Airflow #Security #Docker #Kafka (Apache Kafka) #ETL (Extract, Transform, Load) #Kubernetes #ML (Machine Learning) #AI (Artificial Intelligence) #Data Pipeline #Data Science
Role description
MLOps Data Engineer - Contract - Outside IR35
• Remote based
• 6 months rolling (long-term contract)
• Outside IR35 contract

MLOps Data Engineer role overview:
You will design, build, and maintain data pipelines and machine learning infrastructure that support scalable, reliable, and production-ready AI/ML solutions. You will work closely with data scientists, engineers, and product teams to operationalize models, streamline workflows, and ensure data quality and availability.

Responsibilities:
• Develop and maintain data pipelines to support machine learning and analytics use cases.
• Implement MLOps best practices for model deployment, monitoring, and lifecycle management.
• Build and optimize ETL/ELT processes for structured and unstructured data.
• Automate workflows for training, testing, and deploying ML models.
• Ensure data integrity, governance, and security across the ML lifecycle.

MLOps Data Engineer experience:
• Strong programming skills in Python and SQL, and experience with AWS.
• Proficiency with data engineering tools (e.g., Spark, Kafka, Airflow, dbt).
• Hands-on experience with MLOps frameworks (e.g., MLflow, Kubeflow, Vertex AI, SageMaker).
• Familiarity with CI/CD pipelines and containerization (Docker, Kubernetes).
• Solid understanding of data modeling, warehousing, and APIs.
• Strong problem-solving skills and the ability to work in agile environments.
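To give candidates a feel for the kind of work described above, here is a minimal, hypothetical sketch of an ETL step with a basic data-quality gate, using only the Python standard library. The data, field names, and validation rules are illustrative assumptions, not part of the actual role; in production this logic would live in tools like Airflow or dbt.

```python
# Hypothetical ETL sketch: extract raw records, validate/transform them,
# and load only the clean rows. All data and rules here are made up.
import csv
import io

RAW = """user_id,score
1,0.91
2,
3,1.42
"""

def extract(text):
    """Extract: parse raw CSV text into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: coerce types and drop rows failing quality checks."""
    clean = []
    for row in rows:
        if not row["score"]:            # reject missing values
            continue
        score = float(row["score"])
        if not 0.0 <= score <= 1.0:     # reject out-of-range scores
            continue
        clean.append({"user_id": int(row["user_id"]), "score": score})
    return clean

def load(rows, sink):
    """Load: append validated rows to a sink (a warehouse in production)."""
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract(RAW)), warehouse)
print(loaded)  # prints 1: only the first row passes both checks
```

Of the three input rows, one is missing a value and one fails the range check, so a single validated row reaches the sink; the same pattern scales up when the checks are codified as pipeline tasks.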