

Whitehall Resources
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a 1-month contract (with likely extensions), inside IR35. It requires expertise in Microsoft Fabric, PySpark, SQL, and data engineering in complex domains, with eligibility for SC clearance. Remote work with occasional London visits.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 7, 2026
🕒 - Duration
1 to 3 months
-
🏝️ - Location
Remote
-
📄 - Contract
Inside IR35
-
🔒 - Security
Yes
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Dimensional Modelling #Jira #Microsoft Power BI #dbt (data build tool) #ETL (Extract, Transform, Load) #AI (Artificial Intelligence) #Compliance #Automated Testing #Migration #Synapse #Delta Lake #Infrastructure as Code (IaC) #Python #ML (Machine Learning) #PySpark #Security #Agile #Azure #Scala #Code Reviews #Data Engineering #Observability #Data Pipeline #Batch #DevOps #GitLab #Documentation #SQL (Structured Query Language) #Spark (Apache Spark) #BI (Business Intelligence) #Terraform #Version Control #GIT #Spark SQL
Role description
Data Engineer
Whitehall Resources require a Data Engineer to work with a key client on a 1-month initial contract (with likely extensions).
• Inside IR35.
• This role is mostly remote, with expected occasional visits to the London site.
• Candidates are required to be eligible for SC clearance.
Data Engineer
We’re seeking a Data Engineer to design, build, and operate data solutions that power mission‑critical analytics in a complex public‑sector environment. You’ll lead on scalable pipelines in Microsoft Fabric (OneLake/Delta Lake, Data Factory, Synapse Data Engineering), using PySpark/Spark SQL/Python and SQL, modernise legacy estates, and mentor engineers, turning raw data into reliable, secure, and actionable intelligence for stakeholders.
What you’ll do
• Engineer production‑grade data pipelines on Microsoft Fabric (OneLake/Delta Lake, Data Factory, Synapse Data Engineering notebooks), using PySpark/Spark SQL/Python and SQL, with a focus on performance, resilience, testing, and observability (see the pipeline sketch after this list).
• Support reporting & MI use cases, including transformations and data models that feed downstream tools (e.g., Power BI).
• Own CI/CD and version control practices (e.g., Git/GitLab), review code, and enforce engineering standards.
• Coach and mentor engineers, provide technical guidance/code reviews, and contribute to architectural decisions across squads.
• Work in Agile delivery, collaborating across product, data, and platform teams using Jira/Confluence; translate requirements into robust engineering tasks.
• Embed security and compliance by design, aligning with BPSS/SC constraints and department data‑handling policies.
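As a flavour of the hands‑on work, here is a minimal sketch of the kind of Fabric notebook pipeline described above. It assumes a Fabric Synapse Data Engineering notebook (where a `spark` session is pre‑provided) attached to a lakehouse; the landing path, column names, and table name are illustrative assumptions, not the client's actual estate.

```python
# Minimal Fabric notebook pipeline sketch (illustrative names throughout).
# Assumes execution inside a Fabric Synapse Data Engineering notebook, where
# `spark` is pre-provided and the attached lakehouse exposes Files/ and Tables/.
from pyspark.sql import functions as F

# Ingest a raw batch drop from a hypothetical landing area.
raw = spark.read.option("header", True).csv("Files/landing/transactions/")

# Cleanse and type the data - the kind of transformation that feeds MI reporting.
clean = (
    raw.dropDuplicates(["transaction_id"])
       .filter(F.col("transaction_id").isNotNull())
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("processed_at", F.current_timestamp())
)

# Persist as a Delta table in the lakehouse; downstream Power BI models read from here.
clean.write.format("delta").mode("overwrite").saveAsTable("transactions_clean")
```

Resilience and observability would layer on top of this, for example schema checks before the write and run metrics logged from the notebook.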
Essential skills & experience
• Hands‑on expertise in Azure/Fabric: Microsoft Fabric (OneLake/Delta Lake, Data Factory, Synapse Data Engineering notebooks), using PySpark/Spark SQL/Python and SQL for large‑scale batch processing.
• Data engineering at scale in government or similarly complex domains, including performance tuning and data‑quality management.
• CI/CD & DevOps: pipelines and IaC (e.g., Terraform), automated testing, and release governance (see the test sketch after this list).
• Version control & collaboration: Git/GitLab, code review, branching strategies, and trunk/PR workflows.
• APIs & integration: building/consuming data services to move and expose data safely and reliably.
• Agile ways of working with Jira/Confluence; clear stakeholder communication and concise technical documentation.
• Security clearance: BPSS (minimum) and SC‑cleared or SC‑clearable for UK government work.
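To make the automated‑testing expectation concrete, below is a hedged sketch of a pytest unit test for a PySpark transformation, using a local SparkSession so it can run in a GitLab CI job. The function under test and all names are hypothetical, not part of the client's codebase.

```python
# Hypothetical PySpark unit test, runnable locally or in GitLab CI
# (pip install pyspark pytest). All names are illustrative.
import pytest
from pyspark.sql import SparkSession, functions as F


def dedupe_and_type(df):
    """Transformation under test: drop duplicate ids, cast amount to decimal."""
    return (df.dropDuplicates(["transaction_id"])
              .withColumn("amount", F.col("amount").cast("decimal(18,2)")))


@pytest.fixture(scope="session")
def spark():
    # Small local session; no cluster needed, so this suits a CI runner.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_dedupe_and_type(spark):
    df = spark.createDataFrame(
        [("t1", "10.50"), ("t1", "10.50"), ("t2", "3.00")],
        ["transaction_id", "amount"],
    )
    out = dedupe_and_type(df)
    assert out.count() == 2                               # duplicate t1 removed
    assert dict(out.dtypes)["amount"] == "decimal(18,2)"  # typed correctly
```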
Desirable
• Data warehousing & modelling (e.g., dimensional modelling; dbt); see the modelling sketch after this list.
• Basic Power BI familiarity to partner with BI developers and validate end‑to‑end data flows.
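For the dimensional‑modelling point, here is a brief sketch of a star‑schema split in PySpark over the illustrative table from the earlier pipeline sketch; the customer columns are hypothetical, and in practice this logic is often expressed as dbt SQL models instead.

```python
# Illustrative star-schema split (hypothetical columns; assumes a Fabric
# notebook where `spark` is provided and transactions_clean exists).
from pyspark.sql import functions as F

clean = spark.table("transactions_clean")

# Dimension: one row per customer, with a surrogate key for fact-table joins.
dim_customer = (
    clean.select("customer_id", "customer_name")
         .dropDuplicates(["customer_id"])
         .withColumn("customer_sk", F.monotonically_increasing_id())
)

# Fact: measures and foreign keys only; Power BI models join on customer_sk.
fact_transactions = (
    clean.join(dim_customer.select("customer_id", "customer_sk"), "customer_id")
         .select("customer_sk", "transaction_id", "amount", "processed_at")
)

dim_customer.write.format("delta").mode("overwrite").saveAsTable("dim_customer")
fact_transactions.write.format("delta").mode("overwrite").saveAsTable("fact_transactions")
```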
Certifications (nice to have)
• Microsoft Certified: Fabric Data Engineer Associate (or higher); Azure AI Fundamentals (awareness of ML/AI services).
SFIA Level 4 (Enable) alignment
• Autonomy: Works under general direction; plans own work; designs and implements Microsoft Fabric pipelines, modernising legacy code with minimal supervision.
• Influence: Shapes standards through code reviews and mentoring; influences delivery outcomes across teams.
• Complexity: Handles substantial, multifaceted engineering tasks (e.g., migration to new MI platform; data‑quality resolution; estimating effort).