

Montash
SC Cleared DevOps Engineer (Azure)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an SC Cleared DevOps Engineer (Azure) on a 12-month contract, paying up to £400 per day. It requires strong Databricks experience, CI/CD expertise, and active SC Clearance, with a focus on automation and cloud security.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
400
🗓️ - Date
January 7, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Inside IR35
🔒 - Security
Yes (active SC Clearance required)
📍 - Location detailed
United Kingdom
🧠 - Skills detailed
#AutoScaling #Terraform #Cloud #Synapse #ADLS (Azure Data Lake Storage) #Compliance #Logging #PySpark #Data Lake #Documentation #Monitoring #Vault #ACID (Atomicity, Consistency, Isolation, Durability) #Libraries #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Databricks #Automation #DevOps #Data Governance #Security #Data Engineering #Microsoft Power BI #BI (Business Intelligence) #Azure #Deployment #Delta Lake #Storage #Batch
Role description
Job Title: SC Cleared DevOps Engineer (Azure)
Contract Type: 12-month contract
Day Rate: Up to £400 per day (Inside IR35)
Location: Remote or hybrid (as agreed)
Start Date: January 2026
Clearance Required: Active SC Clearance (mandatory)
We are seeking an experienced SC Cleared DevOps Engineer with strong Databricks platform experience to design, build, deploy, and operate large-scale data and analytics solutions on the Databricks Data Intelligence Platform within Azure.
This role focuses on automation, CI/CD, infrastructure reliability, security, and cost optimisation, while supporting high-performing batch and streaming workloads built on PySpark and Delta Lake. Client information remains confidential.
Required Skills & Experience
• Proven experience as a DevOps Engineer on Azure
• Strong hands-on experience with the Databricks Data Intelligence Platform
• Experience building and maintaining CI/CD pipelines for cloud and data platforms
• Solid understanding of Spark, PySpark, and Delta Lake from a platform and operational perspective
• Experience with infrastructure-as-code (e.g. Terraform or equivalent)
• Azure experience across ADLS Gen2, Key Vault, managed identities, and serverless services
• Strong troubleshooting skills in distributed, cloud-based environments
Platform Engineering & DevOps
• Design, build, and maintain CI/CD pipelines for Databricks code, jobs, and configuration across environments
• Automate provisioning and configuration of Databricks and Azure infrastructure using infrastructure-as-code
• Standardise workspace configuration, cluster policies, secrets, libraries, and access controls
• Implement monitoring, logging, and alerting for platform health, job reliability, and pipeline performance (see the sketch after this list)
• Drive cost optimisation and FinOps practices through usage analysis and workload benchmarking
• Support production operations, including incident management, root-cause analysis, and runbooks
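To make the job-reliability monitoring above concrete: one lightweight approach is to poll the Databricks Jobs REST API for recently completed runs and flag anything that did not succeed. The sketch below is illustrative only; it assumes Jobs API 2.1 and a personal access token, with the workspace URL and token read from DATABRICKS_HOST and DATABRICKS_TOKEN (placeholder names, not part of this client's stack).

import os
import requests

HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-<id>.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]  # personal access token

def failed_runs(limit: int = 25) -> list[dict]:
    """Return recently completed job runs whose result state is not SUCCESS."""
    resp = requests.get(
        f"{HOST}/api/2.1/jobs/runs/list",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"completed_only": "true", "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    runs = resp.json().get("runs", [])
    return [r for r in runs if r.get("state", {}).get("result_state") != "SUCCESS"]

if __name__ == "__main__":
    for run in failed_runs():
        # In practice this would raise an alert (Azure Monitor, a Teams
        # webhook, PagerDuty, ...) rather than print to stdout.
        print(run["run_id"], run.get("run_name"), run["state"].get("state_message"))

A check like this would typically run on a schedule (itself a Databricks job or an Azure Function) and feed whatever alerting channel the platform standardises on.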
Databricks & Data Platform Support
• Build and orchestrate Databricks pipelines using Notebooks, Jobs, and Workflows
• Optimise Spark and Delta Lake workloads through cluster tuning, autoscaling, adaptive execution, and caching
• Support development of PySpark-based ETL and streaming workloads
• Manage Delta Lake tables, including schema evolution, ACID guarantees, and time travel (illustrated after this list)
• Implement data governance, lineage, and access controls using Unity Catalog
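The schema evolution and time travel duties above map directly onto Delta Lake's standard APIs. A minimal sketch, assuming a Databricks notebook or cluster context (where Delta is preconfigured); the table path and column names are hypothetical placeholders.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
path = "abfss://lake@examplestorage.dfs.core.windows.net/silver/orders"  # hypothetical

# Append a batch whose schema has gained a column; mergeSchema evolves the
# table schema on write instead of failing with a schema mismatch.
new_batch = spark.createDataFrame(
    [(1, "widget", "GBP")], ["order_id", "product", "currency"]
)
(new_batch.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .save(path))

# Time travel: read the table as of an earlier version, e.g. for audit
# queries or to recover from a bad write.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
v0.show()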
Azure Integration & Security
• Integrate Databricks with Azure Data Lake Storage Gen2, Key Vault, and serverless Azure services (see the sketch after this list)
• Enforce security best practices using managed identities, RBAC, and secrets management
• Support secure, compliant deployments aligned with public sector security standards
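For the Key Vault and ADLS Gen2 integration, one common pattern from the Databricks documentation is a Key Vault-backed secret scope feeding an OAuth service-principal configuration for the ABFS driver. A sketch only: the scope name, secret keys, tenant, and storage account below are all hypothetical placeholders, and spark and dbutils are the globals Databricks provides inside notebooks.

# Runs inside a Databricks notebook; spark and dbutils are notebook globals.
storage = "examplestorage"  # hypothetical storage account name
tenant_id = dbutils.secrets.get(scope="kv-scope", key="tenant-id")
client_id = dbutils.secrets.get(scope="kv-scope", key="sp-client-id")
client_secret = dbutils.secrets.get(scope="kv-scope", key="sp-client-secret")

# OAuth client-credentials configuration for ABFS, as documented by Databricks.
spark.conf.set(f"fs.azure.account.auth.type.{storage}.dfs.core.windows.net", "OAuth")
spark.conf.set(
    f"fs.azure.account.oauth.provider.type.{storage}.dfs.core.windows.net",
    "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
)
spark.conf.set(f"fs.azure.account.oauth2.client.id.{storage}.dfs.core.windows.net", client_id)
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{storage}.dfs.core.windows.net", client_secret)
spark.conf.set(
    f"fs.azure.account.oauth2.client.endpoint.{storage}.dfs.core.windows.net",
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
)

df = spark.read.parquet(f"abfss://raw@{storage}.dfs.core.windows.net/ingest/")

Keeping credentials in a Key Vault-backed scope means secrets never appear in notebook code or cluster configuration; where the workload permits, Unity Catalog external locations or managed identities can replace the client secret entirely.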
Collaboration & Documentation
• Collaborate with cloud architects, data engineers, and analysts on end-to-end solution design
• Maintain clear technical documentation covering architecture, CI/CD, monitoring, and governance
• Contribute to platform standards, reusable templates, and DevOps best practices
Preferred Qualifications
• Experience supporting multiple Databricks workspaces and governed Unity Catalogs
• Knowledge of Azure analytics services such as Synapse or Power BI
• Experience implementing FinOps / cost governance in cloud environments
• Background working in regulated or public sector environments
• Strong communication and cross-functional collaboration skills