

Machine Learning Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Machine Learning Engineer on a long-term contract in London, offering £550-600 per day. Key skills include proficiency in Spark ML, predictive modeling, and experience with Hadoop. Hybrid work allows a maximum of three days in the office.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
£550-600 per day
🗓️ - Date discovered
June 26, 2025
🕒 - Project duration
Long-term
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Data Storage #Clustering #Data Pipeline #Scala #Batch #Regression #Spark (Apache Spark) #Datasets #Java #Apache Spark #Predictive Modeling #Model Evaluation #Data Processing #Distributed Computing #Storage #Classification #Hadoop #Python #ML (Machine Learning) #ETL (Extract, Transform, Load) #Data Engineering
Role description
We are seeking a skilled Machine Learning Engineer with expertise in Spark ML, predictive modeling, and deploying training and inference pipelines on distributed systems such as Hadoop. The ideal candidate will design, implement, and optimize machine learning solutions for large-scale data processing and predictive analytics.
London
Long-term contract
Rate: £550-600 per day
Hybrid: maximum of three days in the office
Responsibilities:
• Develop and implement machine learning models using Spark ML for predictive analytics (a minimal pipeline sketch follows this list).
• Design and optimize training and inference pipelines for distributed systems (e.g., Hadoop).
• Process and analyze large-scale datasets to extract meaningful insights and features.
• Collaborate with data engineers to ensure seamless integration of ML workflows with data pipelines.
• Evaluate model performance and fine-tune hyperparameters to improve accuracy and efficiency.
• Implement scalable solutions for real-time and batch inference.
• Monitor and troubleshoot deployed models to ensure reliability and performance.
• Stay updated with advancements in machine learning frameworks and distributed computing technologies.
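Several of these responsibilities (building Spark ML models, designing training pipelines, and tuning hyperparameters) can be illustrated with a short PySpark sketch. The example below assembles features, fits a logistic regression, and tunes its regularisation with cross-validation; the HDFS paths, the column names (f1, f2, f3, label), and the choice of model are assumptions for illustration only, not details taken from this role.

```python
# Minimal sketch of a Spark ML training pipeline with hyperparameter tuning.
# Paths, column names, and the choice of LogisticRegression are illustrative
# assumptions, not requirements of this role.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

spark = SparkSession.builder.appName("training-pipeline").getOrCreate()

# Assumed layout: a Parquet dataset on HDFS with numeric features and a binary label.
df = spark.read.parquet("hdfs:///data/events.parquet")
train, test = df.randomSplit([0.8, 0.2], seed=42)

assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
pipeline = Pipeline(stages=[assembler, lr])

# Grid search over regularisation strength, scored by area under the ROC curve.
grid = ParamGridBuilder().addGrid(lr.regParam, [0.01, 0.1, 1.0]).build()
evaluator = BinaryClassificationEvaluator(labelCol="label", metricName="areaUnderROC")
cv = CrossValidator(estimator=pipeline, estimatorParamMaps=grid,
                    evaluator=evaluator, numFolds=3)

model = cv.fit(train)
print("Test AUC:", evaluator.evaluate(model.transform(test)))

# Persist the best fitted pipeline for later batch or real-time inference.
model.bestModel.write().overwrite().save("hdfs:///models/events_lr")
```

The same Pipeline object covers both training and inference, which is what keeps such a workflow portable between a Hadoop/YARN cluster and a local development machine.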
Required Skills:
• Proficiency in Apache Spark and Spark MLlib for machine learning tasks.
• Strong understanding of predictive modeling techniques (e.g., regression, classification, clustering).
• Experience with distributed systems like Hadoop for data storage and processing.
• Proficiency in Python, Scala, or Java for ML development.
• Familiarity with data preprocessing techniques and feature engineering.
• Knowledge of model evaluation metrics and techniques.
• Experience with deploying ML models in production environments (a batch-inference sketch follows this list).
• Understanding of distributed computing concepts and parallel processing.
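The deployment and batch-inference points above can be sketched in the same way: load the persisted PipelineModel and score a new batch of records from distributed storage. The paths and the id/output column names are again assumptions made only for illustration.

```python
# Minimal sketch of batch inference with a previously saved Spark ML pipeline.
# The HDFS paths and the "id" column are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.ml import PipelineModel

spark = SparkSession.builder.appName("batch-inference").getOrCreate()

# Load the fitted pipeline and score a fresh batch of records from HDFS.
model = PipelineModel.load("hdfs:///models/events_lr")
batch = spark.read.parquet("hdfs:///data/new_events.parquet")
scored = model.transform(batch)

# Write predictions back to distributed storage for downstream consumers.
(scored
    .select("id", "prediction", "probability")
    .write.mode("overwrite")
    .parquet("hdfs:///data/scored_events.parquet"))
```

For real-time inference the same fitted pipeline could instead sit behind a lightweight service or a Structured Streaming job, but the batch path above is the simplest production starting point.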