

Montash
SC Cleared Databricks Data Engineer – Azure Cloud
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an SC Cleared Databricks Data Engineer on a 12-month contract, offering up to £400/day. It requires strong Databricks, PySpark, and Delta Lake expertise, with Azure experience essential. Remote or hybrid work is available.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
424
-
🗓️ - Date
December 5, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Yes
-
📍 - Location detailed
England, United Kingdom
-
🧠 - Skills detailed
#Vault #Azure cloud #"ACID (Atomicity, Consistency, Isolation, Durability)" #Azure ADLS (Azure Data Lake Storage) #PySpark #Microsoft Power BI #Batch #Data Lake #Data Governance #Metadata #Documentation #Data Analysis #Synapse #Spark (Apache Spark) #Cloud #Azure #Storage #Delta Lake #Deployment #Data Lineage #BI (Business Intelligence) #Databricks #Data Quality #Data Pipeline #"ETL (Extract, Transform, Load)" #Compliance #ADLS (Azure Data Lake Storage) #Data Engineering
Role description
Job Title: SC Cleared Databricks Data Engineer – Azure Cloud
Contract Type: 12-month contract
Day Rate: Up to £400 per day (inside IR35)
Location: Remote or hybrid (as agreed)
Start Date: January 5th 2026
Clearance required: Must hold active SC Clearance
We are seeking an experienced Databricks Data Engineer to design, build, and optimise large-scale data workflows within the Databricks Data Intelligence Platform.
The role focuses on delivering high-performing batch and streaming pipelines using PySpark, Delta Lake, and Azure services, with additional emphasis on governance, lineage tracking, and workflow orchestration. Client information remains confidential.
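For context, a minimal PySpark sketch of the kind of batch-plus-streaming Delta Lake pipeline described above. It is illustrative only: the storage account, container, paths, and column names are placeholders, not details of the client environment.
```python
# Minimal illustrative sketch only: paths, container, and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # already provided on a Databricks cluster

LANDING = "abfss://landing@examplestore.dfs.core.windows.net/orders/"
CURATED = "abfss://curated@examplestore.dfs.core.windows.net/orders_delta/"

# Batch leg: cleanse raw files landed in ADLS Gen2 and append them to a Delta table.
raw = spark.read.option("header", "true").csv(LANDING)
cleaned = (raw
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .dropDuplicates(["order_id"]))
cleaned.write.format("delta").mode("append").save(CURATED)

# Streaming leg: incrementally consume the same Delta table and maintain hourly counts.
hourly = (spark.readStream.format("delta").load(CURATED)
          .groupBy(F.window("order_ts", "1 hour"))
          .count())

(hourly.writeStream
 .format("delta")
 .outputMode("complete")
 .option("checkpointLocation",
         "abfss://curated@examplestore.dfs.core.windows.net/_checkpoints/orders_hourly/")
 .start("abfss://curated@examplestore.dfs.core.windows.net/orders_hourly/"))
```
The checkpoint location is what gives the streaming leg restartable, exactly-once recovery; in practice both legs would be scheduled and monitored through Databricks Jobs and Workflows.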
Key Responsibilities
• Build and orchestrate Databricks data pipelines using Notebooks, Jobs, and Workflows
• Optimise Spark and Delta Lake workloads through cluster tuning, adaptive execution, scaling, and caching
• Conduct performance benchmarking and cost optimisation across workloads
• Implement data quality, lineage, and governance practices aligned with Unity Catalog
• Develop PySpark-based ETL and transformation logic using modular, reusable coding standards
• Create and manage Delta Lake tables with ACID compliance, schema evolution, and time travel (illustrated in the sketch after this list)
• Integrate Databricks assets with Azure Data Lake Storage, Key Vault, and Azure Functions
• Collaborate with cloud architects, data analysts, and engineering teams on end-to-end workflow design
• Support automated deployment of Databricks artefacts via CI/CD pipelines
• Maintain clear technical documentation covering architecture, performance, and governance configuration
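As referenced above, a minimal sketch of the Delta Lake capabilities named in these responsibilities: an ACID upsert via MERGE, schema evolution on merge, and a time-travel read. Table, path, and column names are hypothetical.
```python
# Illustrative only: table, path, and column names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Allow new source columns to be merged into the target schema (schema evolution on merge).
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

updates = spark.read.format("delta").load(
    "abfss://curated@examplestore.dfs.core.windows.net/orders_updates/")

# ACID upsert into the target Delta table.
target = DeltaTable.forName(spark, "main.sales.orders")
(target.alias("t")
 .merge(updates.alias("s"), "t.order_id = s.order_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())

# Time travel: query the table as of an earlier version, e.g. for audit or rollback checks.
previous = spark.sql("SELECT * FROM main.sales.orders VERSION AS OF 10")
```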
Required Skills and Experience
• Strong experience with the Databricks Data Intelligence Platform
• Hands-on experience with Databricks Jobs and Workflows
• Deep PySpark expertise, including schema management and optimisation
• Strong understanding of Delta Lake architecture and incremental design principles
• Proven Spark performance engineering and cluster tuning capabilities
• Unity Catalog experience (data lineage, access policies, metadata governance)
• Azure experience across ADLS Gen2, Key Vault, and serverless components (see the integration sketch after this list)
• Familiarity with CI/CD deployment for Databricks
• Solid troubleshooting skills in distributed environments
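A minimal sketch of the ADLS Gen2 and Key Vault integration referenced above, as run in a Databricks notebook where `spark` and `dbutils` are provided by the runtime. It assumes a Key Vault-backed secret scope; the scope, secret keys, storage account, and paths are placeholders.
```python
# Illustrative only: secret scope, key names, storage account, and paths are placeholders.
storage_account = "examplestore"

# Service-principal credentials held in an Azure Key Vault-backed secret scope.
client_id = dbutils.secrets.get(scope="kv-scope", key="sp-client-id")
client_secret = dbutils.secrets.get(scope="kv-scope", key="sp-client-secret")
tenant_id = dbutils.secrets.get(scope="kv-scope", key="sp-tenant-id")

# Standard OAuth configuration for ADLS Gen2 access from a Databricks cluster.
base = f"{storage_account}.dfs.core.windows.net"
spark.conf.set(f"fs.azure.account.auth.type.{base}", "OAuth")
spark.conf.set(f"fs.azure.account.oauth.provider.type.{base}",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(f"fs.azure.account.oauth2.client.id.{base}", client_id)
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{base}", client_secret)
spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{base}",
               f"https://login.microsoftonline.com/{tenant_id}/oauth2/token")

# Read a Delta table directly from the governed storage account.
df = spark.read.format("delta").load(f"abfss://curated@{base}/orders_delta/")
```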
Preferred Qualifications
• Experience working across multiple Databricks workspaces and governed catalogs (see the Unity Catalog sketch after this list)
• Knowledge of Synapse, Power BI, or related Azure analytics services
• Understanding of cost optimisation for data compute workloads
• Strong communication and cross-functional collaboration skills
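A minimal sketch of governing tables through Unity Catalog's three-level namespace, as referenced above. Catalog, schema, and group names are hypothetical; grants issued here apply across all workspaces attached to the same metastore.
```python
# Illustrative only: catalog, schema, and group names are hypothetical.
spark.sql("USE CATALOG governed_prod")
spark.sql("CREATE SCHEMA IF NOT EXISTS sales")

# Grant read access on the schema to an account-level group; Unity Catalog captures
# lineage automatically for reads and writes against the tables it governs.
spark.sql("GRANT USE SCHEMA ON SCHEMA governed_prod.sales TO `data_analysts`")
spark.sql("GRANT SELECT ON SCHEMA governed_prod.sales TO `data_analysts`")
```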