

ShareForce
Azure Data Engineer - SC Cleared
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Data Engineer - SC Cleared, offering a 6-month contract at up to £600 per day. Key skills include Python, PySpark, SQL, and experience with Microsoft Azure services and medallion architecture. Fully remote work.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
600
-
🗓️ - Date
May 6, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Inside IR35
-
🔒 - Security
Yes
-
📍 - Location detailed
England, United Kingdom
-
🧠 - Skills detailed
#Version Control #Data Quality #Data Design #Compliance #Azure #Scala #Databricks #GIT #Data Governance #Data Engineering #ETL (Extract, Transform, Load) #Data Pipeline #Documentation #Data Processing #SQL (Structured Query Language) #Spark (Apache Spark) #Spatial Data #Python #PySpark #Security #Microsoft Azure
Role description
We are seeking a skilled Azure Data Engineer to join a public sector project delivering a scalable, reusable, and collaborative data platform, with the aim of eliminating duplication and maximising reuse across the enterprise.
Key Responsibilities
• Design, build, and maintain scalable data pipelines on Microsoft Azure
• Develop reusable data products and frameworks to reduce duplication across teams
• Implement and maintain medallion architecture (Bronze, Silver, Gold layers) for structured data processing
• Work with cross-functional teams to promote shared standards, patterns, and engineering best practices
• Contribute to open and transparent development practices, including code sharing and documentation
• Develop and optimise data workflows using notebooks and distributed processing frameworks
• Ensure data quality, reliability, and performance across the platform
• Support the evolution of a modern enterprise data platform, reducing siloed solutions
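For candidates less familiar with the medallion pattern named above, the Bronze/Silver/Gold flow can be sketched as follows. This is a minimal, framework-free illustration with hypothetical data; on the actual platform these layers would be PySpark DataFrames persisted as Delta/Fabric tables, not Python lists.

```python
# Bronze layer: raw records exactly as ingested, including malformed rows.
bronze = [
    {"id": "1", "amount": "120.50", "region": "North"},
    {"id": "2", "amount": "bad",    "region": "North"},  # malformed amount
    {"id": "3", "amount": "75.00",  "region": "South"},
]

def to_silver(rows):
    """Silver layer: validated, typed records; rows failing validation are dropped."""
    out = []
    for r in rows:
        try:
            out.append({"id": int(r["id"]),
                        "amount": float(r["amount"]),
                        "region": r["region"]})
        except ValueError:
            continue  # in production, quarantine rather than silently drop
    return out

def to_gold(rows):
    """Gold layer: business-level aggregate (total amount per region)."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'North': 120.5, 'South': 75.0}
```

The point of the layering is exactly the reuse goal stated above: many Gold products can be derived from one well-governed Silver layer instead of each team re-cleaning the raw data.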
Skills and Experience Profile
• Strong experience with Python for data engineering
• Hands-on expertise in PySpark and distributed data processing
• Advanced SQL skills for data transformation and optimisation
• Experience developing in notebook environments (e.g., Fabric, Databricks, or similar)
• Practical experience with Microsoft Fabric or Azure data services
• Solid understanding of medallion architecture and layered data design
• Experience building and maintaining ETL/ELT pipelines
• Familiarity with version control (e.g., Git) and collaborative development workflows
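As an indication of the "advanced SQL" level expected, the snippet below shows a windowed transformation of the kind commonly used when promoting Silver data to Gold. It runs against an in-memory SQLite database purely for illustration; on the platform the same SQL would run in Fabric or Databricks, and the table and column names are made up for the example.

```python
import sqlite3

# Hypothetical sales table; in practice this would be a Silver-layer table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, day TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("North", "2024-01-01", 100.0),
    ("North", "2024-01-02", 150.0),
    ("South", "2024-01-01", 80.0),
])

# Running total per region via a window function (SUM ... OVER PARTITION BY).
rows = conn.execute("""
    SELECT region, day, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY day) AS running_total
    FROM sales
    ORDER BY region, day
""").fetchall()

for r in rows:
    print(r)
```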
Desirable Skills
• Experience working in large-scale enterprise or government environments
• Knowledge of data governance, security, and compliance practices
• Familiarity with CI/CD pipelines for data engineering
• Experience with data modelling and performance tuning
• Exposure to geospatial data (beneficial but not required)
Additional Information
• Day Rate: Up to £600
• IR35 Status: Inside
• Duration: Initial 6-month contract with significant opportunity for extension
• Travel: Fully remote working
• Start Date: Immediate or short notice availability preferred





