Machine Learning Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Machine Learning Engineer with 3+ years of experience, proficient in Python and ML frameworks, focused on GCP ML pipelines. The contract runs for 6 months (extendable), is remote, and requires knowledge of CI/CD, observability, and data pipelines.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
September 18, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Remote
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
United Kingdom
-
🧠 - Skills detailed
#Kubernetes #Monitoring #Data Engineering #AI (Artificial Intelligence) #PostgreSQL #MLflow #Scala #GCP (Google Cloud Platform) #Storage #DevOps #Compliance #GIT #Logging #Observability #Terraform #TensorFlow #Python #BigQuery #SageMaker #ML (Machine Learning) #Cloud #Data Science #PyTorch #Data Pipeline #Kafka (Apache Kafka)
Role description
We are seeking, on behalf of our client, a highly skilled Machine Learning Engineer to design, build, and deploy end-to-end ML pipelines in a cloud-native environment. You will work closely with Data Engineers and Cloud/DevOps Engineers to operationalize models, ensuring they are scalable, observable, and seamlessly integrated into production systems.
Responsibilities:
• Design, implement, and maintain ML pipelines on GCP using tools like Vertex AI, Kubeflow, or MLflow.
• Collaborate with Data Engineers to source, preprocess, and validate high-quality training data from BigQuery, PostgreSQL, and cloud-native storage.
• Deploy, monitor, and optimize models in production environments, ensuring reliability, scalability, and cost efficiency.
• Automate ML workflows with CI/CD pipelines and Infrastructure-as-Code (Terraform, ArgoCD).
• Implement observability and monitoring for ML systems (drift detection, performance metrics, alerting).
• Work with product and analytics teams to translate business problems into ML solutions.
• Document processes, pipelines, and model governance for reproducibility and compliance.
Requirements:
• 3+ years of experience as an ML Engineer or in a similar role (MLOps, or Data Science with a strong engineering background).
• Proficiency with Python and ML frameworks (TensorFlow, PyTorch, scikit-learn).
• Experience with cloud-native ML platforms (Vertex AI, SageMaker, or Kubeflow).
• Strong knowledge of data pipelines, feature stores, and model versioning.
• Familiarity with CI/CD, Git, Terraform, and container orchestration (Kubernetes/GKE).
• Understanding of observability for ML systems (logging, metrics, tracing, model drift).
• Bonus: experience with real-time ML/streaming data (Kafka, Pub/Sub) or responsible AI practices.
Contract Details:
• Duration: 6 months (extendable based on project needs)
• Location: Remote
• Engagement: Contract
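To give candidates a feel for the drift-detection work mentioned under observability, here is a minimal sketch of one common drift metric, the Population Stability Index (PSI), in plain Python. The function name, bin count, and the 0.2 alert threshold are conventional illustrative choices, not anything specified by the client.

```python
import math
from collections import Counter

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample (e.g. training
    data) and a live sample (e.g. recent production inputs).

    Values near 0 mean the distributions match; a common rule of thumb
    is to alert when PSI exceeds ~0.2.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # fall back to 1.0 if all values are equal

    def bucket_shares(values):
        # Histogram the values into `bins` equal-width buckets over [lo, hi].
        counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
        total = len(values)
        # Floor each share at a tiny epsilon so log() never sees zero.
        return [max(counts.get(i, 0) / total, 1e-6) for i in range(bins)]

    e = bucket_shares(expected)
    a = bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Example: a feature whose live values have shifted upward drifts visibly,
# while an unchanged feature scores (near) zero.
baseline = list(range(100))
print(psi(baseline, baseline))                  # identical: PSI is 0.0
print(psi(baseline, [x + 50 for x in baseline]))  # shifted: PSI well above 0.2
```

In production this comparison would typically run per feature on a schedule, with the PSI value exported as a metric and the threshold wired into alerting.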