

X4 Technology
Machine Learning Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Machine Learning Engineer focused on Databricks, offering a 12-24 month remote contract with pay based on experience. Key skills include Azure Databricks, MLflow, and production ML pipeline design. Experience in regulated industries is desirable.
Country
United Kingdom
Currency
£ GBP
-
Day rate
Unknown
-
Date
October 24, 2025
Duration
More than 6 months
-
Location
Remote
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
United Kingdom
-
Skills detailed
#Project Management #Compliance #Automation #PyTorch #ML (Machine Learning) #DevOps #Data Governance #AWS (Amazon Web Services) #Synapse #TensorFlow #AI (Artificial Intelligence) #Azure #Terraform #Delta Lake #MLflow #Databricks #Scala #Libraries #Azure Databricks #Security #Azure DevOps #Observability #Monitoring #Jira #Azure Data Factory #Agile #Model Deployment #Cloud #ADF (Azure Data Factory) #Docker #Deployment
Role description
Job Title: ML Engineer (Databricks)
Rate: Depending on experience
Location: Remote
Contract Length: 12-24 months
A European consultancy is seeking a Databricks-focused Machine Learning Engineer to join the team on a long-term 12-24 month contract.
This role supports the full end-to-end model lifecycle in production environments built on Azure and Databricks, working not only internally but also in close collaboration with business units and customer teams across international operations.
Databricks expertise is a must.
Core Responsibilities
• Build and manage ML/MLOps pipelines using Databricks
• Design, optimise and operate robust end-to-end machine learning pipelines within the Databricks environment on Azure.
• Support internal project teams
• Act as a technical point of contact for internal stakeholders, assisting with onboarding to Databricks, model deployment and pipeline design.
• Leverage key Databricks features
• Utilise capabilities such as MLflow, Workflows, Unity Catalog, Model Serving and Monitoring to enable scalable and manageable solutions.
• Implement governance and observability
• Integrate compliance, monitoring and audit features across the full machine learning lifecycle.
• Operationalise ML/AI models
• Lead efforts to move models into production, ensuring they are stable, secure and scalable.
• Hands-on with model operations
• Work directly on model hosting, monitoring, drift detection and retraining processes.
• Collaborate with internal teams
• Participate in customer-facing meetings, workshops and solution design sessions across departments.
• Contribute to platform and knowledge improvement
• Support the continuous development of Databricks platform services and promote knowledge sharing across teams.
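To give a flavour of the drift-detection work mentioned in the responsibilities above, a minimal population stability index (PSI) check is sketched below. The bin count, sample score values and the 0.2 alert threshold are illustrative assumptions, not details taken from the role.

```python
# Sketch: population stability index (PSI) for score drift detection.
# All sample values and the 0.2 threshold below are illustrative.
import math

def psi(expected, actual, bins=10):
    """PSI between a reference sample and a live sample of one feature."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Hypothetical model scores at training time vs. in production.
train_scores = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
live_scores = [0.5, 0.6, 0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9, 0.95]

# A PSI above ~0.2 is a common rule of thumb for meaningful drift,
# at which point a retraining workflow would typically be triggered.
drifted = psi(train_scores, live_scores) > 0.2
```

In a Databricks setting the same comparison would usually run as a scheduled monitoring job over Delta tables, with the PSI value logged as a metric so retraining can be triggered automatically.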
Essential Skills and Experience:
• End-to-end ML/AI lifecycle expertise
• Strong hands-on experience across the full machine learning lifecycle, from data preparation and model development to deployment, monitoring, and retraining.
• Proficiency with Azure Databricks
• Practical experience using key components such as:
• MLflow for experiment tracking and model management
• Delta Lake for data versioning and reliability
• Unity Catalog for access control and data governance
• Workflows for pipeline orchestration
• Model Serving and automation of the model lifecycle
• Machine learning frameworks
• Working knowledge of at least one widely used ML library, such as PyTorch, TensorFlow, or Scikit-learn.
• DevOps and automation tooling
• Experience with CI/CD pipelines, infrastructure-as-code (e.g., Terraform), and container technologies like Docker.
• Cloud platform familiarity
• Experience working on Azure is preferred; however, a background in AWS or other providers with a willingness to transition is also suitable.
• Production-grade pipeline design
• Proven ability to design, deploy, and maintain machine learning pipelines in production environments.
• Stakeholder-focused communication
• Ability to explain complex technical concepts in a clear and business-relevant way, especially when working with internal customers and cross-functional teams.
• Governance and compliance awareness
• Exposure to model monitoring, data governance, and regulatory considerations such as explainability and security controls.
• Agile working practices
• Comfortable contributing within agile teams and using tools like Jira or equivalent project management platforms.
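For the Workflows orchestration item above, a pipeline is typically declared as a multi-task job. The fragment below is a rough sketch in the style of the Databricks Jobs API; every name (job, tasks, notebook paths, cluster spec) is hypothetical and would differ per project.

```json
{
  "name": "ml-training-pipeline",
  "job_clusters": [
    {
      "job_cluster_key": "ml_cluster",
      "new_cluster": {
        "spark_version": "15.4.x-cpu-ml-scala2.12",
        "node_type_id": "Standard_DS3_v2",
        "num_workers": 2
      }
    }
  ],
  "tasks": [
    {
      "task_key": "prepare_data",
      "job_cluster_key": "ml_cluster",
      "notebook_task": { "notebook_path": "/Repos/ml/prepare_data" }
    },
    {
      "task_key": "train_model",
      "depends_on": [ { "task_key": "prepare_data" } ],
      "job_cluster_key": "ml_cluster",
      "notebook_task": { "notebook_path": "/Repos/ml/train_model" }
    },
    {
      "task_key": "evaluate_and_register",
      "depends_on": [ { "task_key": "train_model" } ],
      "job_cluster_key": "ml_cluster",
      "notebook_task": { "notebook_path": "/Repos/ml/evaluate_and_register" }
    }
  ]
}
```

The `depends_on` edges encode the pipeline DAG, and the final task would typically log the model to MLflow and register it in Unity Catalog.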
Desirable Experience
• Experience working with large language models (LLMs), generative AI or multimodal orchestration tools
• Familiarity with explainability libraries such as SHAP or LIME
• Previous use of Azure services such as Azure Data Factory, Synapse Analytics or Azure DevOps
• Background in regulated industries such as insurance, financial services or healthcare
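On the explainability point above: SHAP and LIME require their own libraries, but the underlying idea of attributing predictions to features can be illustrated library-free with permutation importance, as a deliberately simplified stand-in. The toy model, data and feature layout below are invented for illustration.

```python
# Sketch: permutation importance, a simple library-free relative of the
# feature-attribution ideas behind SHAP/LIME. Model and data are toys.
import random

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(model, rows, labels, n_features, seed=0):
    """Accuracy drop when each feature column is shuffled in turn."""
    rng = random.Random(seed)
    base = accuracy(model, rows, labels)
    drops = []
    for j in range(n_features):
        col = [r[j] for r in rows]
        rng.shuffle(col)  # break the link between feature j and the label
        shuffled = [r[:j] + (v,) + r[j + 1:] for r, v in zip(rows, col)]
        drops.append(base - accuracy(model, shuffled, labels))
    return drops

# Toy model: predicts 1 when feature 0 is positive; feature 1 is ignored.
model = lambda row: int(row[0] > 0)
rows = [(-2, 5), (-1, -3), (1, 7), (2, -1), (3, 2), (-3, 4)]
labels = [0, 0, 1, 1, 1, 0]

importances = permutation_importance(model, rows, labels, n_features=2)
```

Because the toy model never reads feature 1, shuffling that column changes nothing and its importance comes out as exactly zero; a regulator-facing explainability report would use SHAP or similar rather than this sketch.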
If this sounds like an exciting opportunity, please apply with your CV.