

X4 Technology
Machine Learning Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Machine Learning Engineer focused on Databricks, offering a 12-24 month remote contract with pay based on experience. Key skills include Azure Databricks, MLflow, and production ML pipeline design. Experience in regulated industries is desirable.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 24, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United Kingdom
-
🧠 - Skills detailed
#Project Management #Compliance #Automation #PyTorch #ML (Machine Learning) #DevOps #Data Governance #AWS (Amazon Web Services) #Synapse #TensorFlow #AI (Artificial Intelligence) #Azure #Terraform #Delta Lake #MLflow #Databricks #Scala #Libraries #Azure Databricks #Security #Azure DevOps #Observability #Monitoring #Jira #Azure Data Factory #Agile #Model Deployment #Cloud #ADF (Azure Data Factory) #Docker #Deployment
Role description
Job Title: ML Engineer (Databricks)
Rate: Depending on experience
Location: Remote
Contract Length: 12-24 months
A European consultancy is seeking a Databricks-focused Machine Learning Engineer to join its team on a long-term 12-24 month contract.
This role supports the full end-to-end model lifecycle in production environments built on Azure and Databricks, not only internally but also in close collaboration with business units and customer teams across the company's international operations.
Databricks expertise is a must.
Core Responsibilities
• Build and manage ML/MLOps pipelines using Databricks
• Design, optimise and operate robust end-to-end machine learning pipelines within the Databricks environment on Azure.
• Support internal project teams
• Act as a technical point of contact for internal stakeholders, assisting with onboarding to Databricks, model deployment and pipeline design.
• Leverage key Databricks features
• Utilise capabilities such as MLflow, Workflows, Unity Catalog, Model Serving and Monitoring to enable scalable and manageable solutions (see the MLflow sketch after this list).
• Implement governance and observability
• Integrate compliance, monitoring and audit features across the full machine learning lifecycle.
• Operationalise ML/AI models
• Lead efforts to move models into production, ensuring they are stable, secure and scalable.
• Hands-on with model operations
• Work directly on model hosting, monitoring, drift detection and retraining processes.
• Collaborate with internal teams
• Participate in customer-facing meetings, workshops and solution design sessions across departments.
• Contribute to platform and knowledge improvement
• Support the continuous development of Databricks platform services and promote knowledge sharing across teams.
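To give a flavour of the MLflow usage mentioned above, here is a minimal sketch that tracks parameters and a metric for a scikit-learn model and registers it in the model registry. The dataset, parameters and the registered model name ("churn_classifier") are hypothetical placeholders for illustration, not details taken from this role.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data; in practice this would come from a governed Delta table
    X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    params = {"n_estimators": 100, "max_depth": 5}

    with mlflow.start_run():
        model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
        mlflow.log_params(params)  # experiment tracking
        mlflow.log_metric("test_accuracy",
                          accuracy_score(y_test, model.predict(X_test)))
        # Registering the model makes it visible for later promotion or serving;
        # "churn_classifier" is a hypothetical name.
        mlflow.sklearn.log_model(model, "model",
                                 registered_model_name="churn_classifier")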
Essential Skills and Experience:
• End-to-end ML/AI lifecycle expertise
• Strong hands-on experience across the full machine learning lifecycle, from data preparation and model development to deployment, monitoring, and retraining.
• Proficiency with Azure Databricks
• Practical experience using key components such as:
• MLflow for experiment tracking and model management
• Delta Lake for data versioning and reliability (see the Delta Lake sketch after this list)
• Unity Catalog for access control and data governance
• Workflows for pipeline orchestration
• Model Serving and automation of the model lifecycle
• Machine learning frameworks
• Working knowledge of at least one widely used ML library, such as PyTorch, TensorFlow, or Scikit-learn.
• DevOps and automation tooling
• Experience with CI/CD pipelines, infrastructure-as-code (e.g., Terraform), and container technologies like Docker.
• Cloud platform familiarity
• Experience working on Azure is preferred; however, a background in AWS or other providers with a willingness to transition is also suitable.
• Production-grade pipeline design
• Proven ability to design, deploy, and maintain machine learning pipelines in production environments.
• Stakeholder-focused communication
• Ability to explain complex technical concepts in a clear and business-relevant way, especially when working with internal customers and cross-functional teams.
• Governance and compliance awareness
• Exposure to model monitoring, data governance, and regulatory considerations such as explainability and security controls.
• Agile working practices
• Comfortable contributing within agile teams and using tools like Jira or equivalent project management platforms.
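As a small illustration of the Delta Lake data-versioning point above, the sketch below reads the current state of a Delta table and an earlier version of the same table (time travel) on a Databricks cluster. The table path is a hypothetical placeholder.
    from pyspark.sql import SparkSession

    # On Databricks a SparkSession is already provided as `spark`;
    # getOrCreate() simply reuses it.
    spark = SparkSession.builder.getOrCreate()

    table_path = "/mnt/demo/customer_features"  # hypothetical Delta table location

    # Latest state of the table
    df_latest = spark.read.format("delta").load(table_path)

    # The same table as it was at version 0, useful for reproducing
    # training data or auditing a past run
    df_v0 = spark.read.format("delta").option("versionAsOf", 0).load(table_path)

    print(df_latest.count(), df_v0.count())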
Desirable Experience
• Experience working with large language models (LLMs), generative AI or multimodal orchestration tools
• Familiarity with explainability libraries such as SHAP or LIME (see the sketch below)
• Previous use of Azure services such as Azure Data Factory, Synapse Analytics or Azure DevOps
• Background in regulated industries such as insurance, financial services or healthcare
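For the explainability point above, a minimal SHAP sketch on synthetic data is shown below; the dataset and model are placeholders rather than anything specific to this role.
    import pandas as pd
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    # Synthetic stand-in dataset
    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(8)])

    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # shap.Explainer selects an appropriate explainer (a tree explainer here)
    explainer = shap.Explainer(model, X)
    shap_values = explainer(X)

    # Global view of which features drive the model's predictions
    shap.plots.beeswarm(shap_values)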
If this sounds like an exciting opportunity, please apply with your CV.






