
AI Assurance ML Ops SME
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AI Assurance ML Ops SME, offering a 3+ year contract at £700–£900 per day. Requires SC or DV clearance, extensive ML Ops experience, and knowledge of governance and regulatory compliance in AI systems.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
£700–£900
🗓️ - Date discovered
September 5, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Unknown
📄 - Contract type
Inside IR35
🔒 - Security clearance
SC or DV
📍 - Location detailed
London
🧠 - Skills detailed
#SageMaker #Deployment #Data Science #Azure #ML (Machine Learning) #ML Ops (Machine Learning Operations) #Monitoring #MLflow #AI (Artificial Intelligence) #Compliance
Role description
Morela is supporting our client in securing an experienced AI Assurance ML Ops professional to join a high-impact programme. This is a chance to ensure AI/ML systems are safe, auditable, and fully aligned with regulatory and ethical standards in complex operational environments.
Contract Details:
• Duration: 3+ years
• IR35: Inside
• Clearance: SC or DV required
• Day Rate: £700–£900 (depending on experience)
Role Overview:
In this role, you will integrate assurance into every stage of the AI/ML lifecycle. From data preparation to production deployment, you will embed governance, risk management, and monitoring practices to make AI systems reliable, transparent, and compliant.
Key Responsibilities:
• Establish and maintain assurance processes within ML Ops pipelines
• Apply governance and risk frameworks throughout model development and deployment
• Implement and lead monitoring practices for bias, data drift, model drift, explainability, and system robustness (an illustrative drift-check sketch follows this list)
• Provide guidance on TEV&V (test, evaluation, verification, and validation) practices for models under development or in production
• Partner with Data Science, Risk, and Compliance teams to align AI systems with organisational, ethical, and regulatory requirements
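For illustration only, the short Python sketch below shows one way such drift monitoring can be wired into a pipeline step: a two-sample Kolmogorov–Smirnov test per feature against a training-time baseline. The array layout, threshold, and feature names are assumptions made for the sketch, not details of the client's environment.

# Illustrative data-drift check: compare each feature's live distribution against a
# training-time baseline using a two-sample Kolmogorov-Smirnov test.
# The 0.01 threshold and "feature_i" names are placeholder assumptions.
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(baseline, live, p_threshold=0.01):
    """Flag per-feature drift between two (n_samples, n_features) arrays."""
    report = {}
    for i in range(baseline.shape[1]):
        stat, p_value = ks_2samp(baseline[:, i], live[:, i])
        report[f"feature_{i}"] = {
            "ks_stat": round(float(stat), 4),
            "p_value": round(float(p_value), 4),
            "drifted": bool(p_value < p_threshold),
        }
    return report

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(size=(1000, 3))
    live = rng.normal(size=(1000, 3))
    live[:, 2] += 0.5  # simulate a shift in one feature
    print(detect_feature_drift(baseline, live))

In a production ML Ops pipeline a check like this would run on a schedule, log its results, and feed the alerting layer rather than print to stdout.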
Required Expertise:
• Extensive ML Ops / ML Engineering experience, particularly in assurance, governance, and monitoring
• Hands-on knowledge of ML Ops platforms and tools (Kubeflow, MLflow, SageMaker, Azure ML, or equivalent); see the MLflow logging sketch after this list
• Proven ability to deploy and maintain AI solutions in regulated or complex operational settings
• Strong grounding in responsible AI practices, including explainability and fairness
• Experience ensuring AI systems comply with regulatory frameworks (EU AI Act, ISO, NIST, or industry standards)
• Skilled at translating assurance requirements into technical processes and collaborating across multidisciplinary teams
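As a hedged illustration of the tooling point above, the sketch below logs a toy training run to MLflow so that parameters, metrics, a governance tag, and the model artefact form an auditable record. The experiment name, tag, and model are placeholders, and comparable tracking exists in SageMaker and Azure ML.

# Illustrative audit trail with MLflow: parameters, a metric, a review tag, and the
# model artefact are recorded for every candidate run. All names are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

mlflow.set_experiment("assurance-demo")  # placeholder experiment name

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
model = LogisticRegression(max_iter=1000)

with mlflow.start_run(run_name="baseline-candidate"):
    model.fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    mlflow.log_param("model_type", "LogisticRegression")
    mlflow.log_metric("train_auc", float(auc))
    mlflow.set_tag("assurance_review", "pending")  # governance gate before promotion
    mlflow.sklearn.log_model(model, "model")       # versioned artefact for audit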
Preferred Skills:
• Practical experience implementing responsible AI practices directly in ML pipelines
• Ability to operationalise bias detection, monitoring, explainability, and TEV&V within ML Ops (a minimal bias-gate sketch follows this list)
• Demonstrated capacity to turn regulatory guidance into actionable pipeline processes
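As one hedged example of operationalising bias detection, the sketch below computes a demographic parity gap directly from predictions and fails a pipeline gate when it exceeds a threshold; the group encoding and the 0.2 threshold are illustrative assumptions, and a real assessment would cover more metrics and protected attributes.

# Illustrative bias gate: largest gap in positive-prediction rates across groups
# (demographic parity difference). Group labels and the 0.2 threshold are assumptions.
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Largest difference in positive-prediction rate across group values."""
    rates = [float(y_pred[sensitive == g].mean()) for g in np.unique(sensitive)]
    return max(rates) - min(rates)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sensitive = rng.integers(0, 2, size=1000)  # toy sensitive attribute (0 or 1)
    y_pred = (rng.random(1000) < np.where(sensitive == 1, 0.55, 0.45)).astype(int)
    gap = demographic_parity_gap(y_pred, sensitive)
    print(f"demographic parity gap: {gap:.3f}")
    assert gap < 0.2, "bias gate failed: gap exceeds illustrative threshold"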
Why This Role:
This is a rare opportunity to influence the trustworthiness and compliance of AI/ML systems in real-world deployments. Join a programme that prioritises responsible AI while working with expert teams on cutting-edge projects.
Morela is proud to support our client in this crucial AI assurance initiative.