Montash

SC Cleared Python Data Engineer – Azure Cloud

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an SC Cleared Python Data Engineer – Azure Cloud, with a 12-month contract, paying up to £410/day. Requires strong skills in Python, PySpark, Delta Lake, and Azure services. Active SC Clearance is mandatory. Remote work in the UK.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
410
🗓️ - Date
January 10, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Inside IR35
🔒 - Security
Yes
📍 - Location detailed
United Kingdom
🧠 - Skills detailed
#DevOps #Data Ingestion #Cloud #ACID (Atomicity, Consistency, Isolation, Durability) #Delta Lake #Databricks #ADLS (Azure Data Lake Storage) #PySpark #Python #Data Security #Data Lake #Agile #Security #Azure cloud #Data Pipeline #Storage #Synapse #Azure #Data Engineering #Deployment #Documentation #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Data Governance #Azure DevOps #Docker #Scala #Vault
Role description
Job Title: SC Cleared Python Data Engineer – Azure Cloud
Contract: 12 months
Rate: Up to £410/day (Inside IR35)
Location: UK based, remote
Start: January 2026
Clearance: Active SC Clearance required

Role Overview
We are seeking an SC Cleared Python Data Engineer with strong hands-on experience in PySpark, Delta Lake, and Azure cloud services. The role focuses on designing and delivering scalable, well-tested data pipelines, with particular emphasis on the ability to understand, explain, and design PySpark architectures and to demonstrate deep, production-grade Python expertise. You will work in a containerised, cloud-native environment, delivering maintainable, configurable, test-driven solutions as part of a multi-disciplinary engineering team.

Key Responsibilities
• Design, develop, and maintain data ingestion and transformation pipelines using Python and PySpark.
• Clearly articulate PySpark architecture, execution models, and performance considerations to both technical and non-technical stakeholders.
• Implement unit and BDD testing (Behave or similar), including effective mocking and dependency management.
• Design and optimise Delta Lake tables to support ACID transactions, schema evolution, and incremental processing.
• Build and manage Docker-based environments for development, testing, and deployment.
• Develop configuration-driven, reusable codebases suitable for multiple environments.
• Integrate Azure services including Azure Functions, Key Vault, and Blob/Data Lake Storage.
• Optimise Spark jobs for performance, scalability, and reliability in production.
• Collaborate with Cloud, DevOps, and Data teams to support CI/CD pipelines and environment consistency.
• Produce clear technical documentation and follow cloud security and data governance best practices.

Required Skills & Experience
• Strong Python expertise, with demonstrable depth in designing modular, testable, production-quality code.
• Proven experience explaining and designing PySpark architectures, including distributed processing and performance tuning.
• Hands-on experience with Behave or similar BDD frameworks, including mocking and patching techniques.
• Solid understanding of Delta Lake concepts, transactional guarantees, and optimisation strategies.
• Experience using Docker across development and deployment workflows.
• Practical experience with Azure services (Functions, Key Vault, Blob Storage, ADLS Gen2).
• Experience building configuration-driven applications.
• Strong problem-solving skills and the ability to work independently in agile environments.

Desirable Experience
• Databricks or Synapse with Delta Lake.
• CI/CD pipelines (Azure DevOps or similar) and infrastructure-as-code.
• Knowledge of Azure data security and governance best practices.
• Experience working in distributed or multi-team environments.
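For illustration only, a minimal sketch of the kind of configuration-driven PySpark-to-Delta-Lake pipeline step this role describes. It is not part of the client's codebase: the config keys, paths, and function names are hypothetical, and it assumes a Delta-enabled Spark runtime (e.g. Databricks, or open-source Spark with the delta-spark package configured).

```python
# Illustrative sketch only; config keys and ADLS Gen2 paths are hypothetical.
from pyspark.sql import SparkSession, DataFrame
import pyspark.sql.functions as F


def transform(df: DataFrame) -> DataFrame:
    """Pure transformation: easy to unit test against a small local DataFrame."""
    return (
        df.filter(F.col("event_type").isNotNull())      # drop malformed rows
          .withColumn("ingested_at", F.current_timestamp())
    )


def run(config: dict) -> None:
    """I/O wrapper: reads a raw source and appends to a Delta Lake table."""
    spark = SparkSession.builder.appName(config["app_name"]).getOrCreate()
    raw = spark.read.format(config["source_format"]).load(config["source_path"])
    (transform(raw)
        .write.format("delta")             # ACID writes via Delta Lake
        .mode("append")
        .option("mergeSchema", "true")     # tolerate additive schema evolution
        .save(config["target_path"]))


if __name__ == "__main__":
    run({
        # Hypothetical values; real deployments would load these from config.
        "app_name": "ingest-events",
        "source_format": "json",
        "source_path": "abfss://raw@account.dfs.core.windows.net/events/",
        "target_path": "abfss://curated@account.dfs.core.windows.net/events_delta/",
    })
```

Separating the pure transform() from the I/O in run() is one way to make the unit/BDD testing requirement above practical: transform() can be exercised directly in tests, while run()'s reads and writes are mocked or patched.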