Montash
SC Cleared Python Data Engineer – Azure & PySpark
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an SC Cleared Python Data Engineer – Azure & PySpark on a 12-month contract, paying up to £400/day. Key skills include Python, PySpark, Delta Lake, and Docker, with a focus on Azure-based data pipelines.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
424
🗓️ - Date
December 5, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Inside IR35
🔒 - Security
Yes
📍 - Location detailed
England, United Kingdom
🧠 - Skills detailed
#Vault #"ACID (Atomicity, Consistency, Isolation, Durability)" #PySpark #Data Science #Data Engineering #Scala #DevOps #Data Lake #Data Governance #Documentation #Synapse #Spark (Apache Spark) #Cloud #Azure #Azure DevOps #Storage #Programming #Delta Lake #Security #Deployment #Databricks #Data Pipeline #Data Processing #Python #"ETL (Extract, Transform, Load)" #Compliance #Docker
Role description
Job Title: SC Cleared Python Data Engineer – Azure & PySpark
Contract Type: 12-Month Contract
Day Rate: Up to £400 per day (Inside IR35)
Location: Remote or hybrid (as agreed)
Start Date: 5 January 2026
Clearance Required: Must hold active SC Clearance
We are seeking an experienced Python Data Engineer to support the design, development, and optimisation of Azure-based data pipelines.
The focus of this role is to deliver scalable, test-driven, and configuration-driven data processing solutions using Python, PySpark, Delta Lake, and containerised workloads.
The role sits within a fast-paced engineering environment and works closely with cloud, DevOps, and data science teams.
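
To make "configuration-driven" concrete before the responsibilities below, here is a minimal sketch of the pattern, assuming Delta Lake is available on the cluster; every path, table name, and config key is a hypothetical example, not the client's:

```python
# Minimal sketch of a configuration-driven PySpark ingestion job.
# All paths, table names, and config keys below are hypothetical examples.
from pyspark.sql import SparkSession

PIPELINE_CONFIG = {
    "source_path": "abfss://raw@example.dfs.core.windows.net/events/",  # placeholder
    "source_format": "json",
    "target_table": "silver.events",                                    # placeholder
    "partition_by": ["event_date"],
}

def run_pipeline(spark: SparkSession, config: dict) -> None:
    """Read and write driven entirely by configuration, not hard-coded logic."""
    df = spark.read.format(config["source_format"]).load(config["source_path"])
    (
        df.write.format("delta")          # assumes delta-spark is installed
        .mode("append")
        .partitionBy(*config["partition_by"])
        .saveAsTable(config["target_table"])
    )

if __name__ == "__main__":
    spark = SparkSession.builder.appName("config-driven-ingest").getOrCreate()
    run_pipeline(spark, PIPELINE_CONFIG)
```

The appeal of the pattern is that onboarding a new feed means adding a config entry rather than new code, which is also what makes the pipelines straightforward to unit-test.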
Key Responsibilities
• Develop and maintain ingestion, transformation, and validation pipelines using Python and PySpark
• Implement unit and BDD testing with Behave, including mocking, patching, and dependency management
• Design and manage Delta Lake tables, ensuring ACID compliance, schema evolution, and incremental loading (a minimal sketch follows this list)
• Build and maintain containerised applications using Docker for development and deployment
• Develop configuration-driven, modular, and reusable engineering solutions
• Integrate Azure services including Azure Functions, Key Vault, and Blob Storage
• Collaborate with cloud architects, data scientists, and DevOps teams on CI/CD processes and environment configuration
• Tune and troubleshoot PySpark jobs for performance in production workloads
• Maintain documentation and follow best practices in cloud security and data governance
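
For illustration of the incremental-loading responsibility above, a hedged sketch using the open-source Delta Lake MERGE API; the paths and join key are placeholders, and a Spark session configured with the delta-spark package is assumed:

```python
# Sketch of an incremental Delta Lake upsert (MERGE) with schema evolution.
# Paths and the join key are hypothetical; assumes delta-spark and the
# Delta SQL extensions are configured on the session.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-incremental-load")
    # Let MERGE add columns that appear in the source but not yet in the target.
    .config("spark.databricks.delta.schema.autoMerge.enabled", "true")
    .getOrCreate()
)

updates = spark.read.format("delta").load("/data/staging/customers")  # new batch
target = DeltaTable.forPath(spark, "/data/silver/customers")          # Delta table

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()     # update rows that changed
    .whenNotMatchedInsertAll()  # insert genuinely new rows
    .execute()
)
```

Each MERGE commits as a single ACID transaction, which is where the transactional guarantees referenced in this description come from.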
Required Skills & Experience
• Strong Python programming skills with test-driven development
• Experience writing BDD scenarios and unit tests using Behave or similar tools (a step-module sketch follows this list)
• Skilled in mocking, patching, and dependency injection for Python tests
• Proficiency in PySpark and distributed data processing
• Hands-on experience with Delta Lake (transactional guarantees, schema evolution, optimisation)
• Experience with Docker for development and deployment
• Familiarity with Azure Functions, Key Vault, Blob Storage or Data Lake Storage Gen2 (an SDK sketch appears at the end of this description)
• Experience working with configuration-driven systems
• Exposure to CI/CD tools (Azure DevOps or similar)
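
Because Behave, mocking, and patching are called out explicitly, a self-contained sketch of a step module is shown below. The Pipeline class is an inlined stand-in so the example runs on its own; in a real project the steps would import the application package instead:

```python
# features/steps/ingest_steps.py -- minimal Behave step module.
# Paired Gherkin (features/ingest.feature), shown here as a comment:
#   Feature: Event ingestion
#     Scenario: Valid events are loaded
#       Given a batch of 3 valid events
#       When the ingestion pipeline runs
#       Then 3 rows are written to the target table
from unittest.mock import patch
from behave import given, when, then

class Pipeline:
    """Inlined stand-in for the real pipeline under test."""

    def write_rows(self, rows):
        raise RuntimeError("real storage writer -- always patched in tests")

    def run(self, events):
        self.write_rows(events)  # the side effect the test observes

@given("a batch of {count:d} valid events")
def step_given_events(context, count):
    context.events = [{"id": i} for i in range(count)]

@when("the ingestion pipeline runs")
def step_run_pipeline(context):
    pipeline = Pipeline()
    # Patch the writer so the test never touches real storage.
    with patch.object(pipeline, "write_rows") as mock_write:
        pipeline.run(context.events)
        context.written = mock_write.call_args.args[0]

@then("{count:d} rows are written to the target table")
def step_check_rows(context, count):
    assert len(context.written) == count
```

Running `behave` from the project root matches each scenario line against these step definitions; dependency injection (passing the writer into the pipeline) would remove the need for patching altogether.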
Preferred Qualifications
• Experience working with Databricks or Synapse
• Knowledge of data governance, security, and best practices in the Azure ecosystem
• Strong communication and collaboration skills, ideally within distributed teams
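
Finally, for the Azure services the description names (Key Vault and Blob Storage), a hedged sketch using the official azure-identity, azure-keyvault-secrets, and azure-storage-blob packages; the vault URL, secret name, and container are placeholders:

```python
# Sketch: fetch a secret from Key Vault, then list blobs in a container.
# The vault URL, secret name, and container name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.storage.blob import BlobServiceClient

credential = DefaultAzureCredential()  # managed identity in Azure, az login locally

secrets = SecretClient(
    vault_url="https://example-vault.vault.azure.net",  # placeholder vault
    credential=credential,
)
account_url = secrets.get_secret("storage-account-url").value  # placeholder secret

blob_service = BlobServiceClient(account_url=account_url, credential=credential)
container = blob_service.get_container_client("raw")  # placeholder container
for blob in container.list_blobs():
    print(blob.name)
```

Using DefaultAzureCredential keeps credentials out of code and configuration, which lines up with the cloud-security best practices mentioned above.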