Morgan McKinley

Senior Data Engineer (NPPV3)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (NPPV3) with a contract length of flexible duration, offering a competitive pay rate. It requires strong Azure experience, Databricks proficiency, and knowledge of data pipelines, preferably in public-sector environments.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 27, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Outside IR35
-
🔒 - Security
Yes
-
📍 - Location detailed
United Kingdom
-
🧠 - Skills detailed
#DevOps #ADF (Azure Data Factory) #ADLS (Azure Data Lake Storage) #ETL (Extract, Transform, Load) #Security #Synapse #Batch #Data Lake #Data Quality #Databricks #Azure DevOps #Azure #Spark SQL #Storage #Scala #Leadership #Azure Data Factory #Microsoft Azure #Agile #Data Governance #Data Engineering #PySpark #SQL (Structured Query Language) #Delta Lake #Spark (Apache Spark) #Cloud #Azure ADLS (Azure Data Lake Storage) #Automation #Data Pipeline
Role description
Senior Data Engineer (NPPV3)
Location: Fully Remote (UK)
Security Clearance: NPPV3 required (or eligible)
Contract Type: Outside IR35
Rate: Competitive (DOE)
Start: ASAP / Flexible

Overview
We're supporting a leading consultancy client delivering complex data platform and analytics programmes across regulated and public-sector environments. They're looking to engage an experienced Senior Data Engineer to help design, build and scale modern cloud data platforms on Azure, with a strong focus on Databricks and modern lakehouse patterns. This is a hands-on delivery role within multi-disciplinary teams, supporting clients moving from legacy or fragmented data estates towards scalable, cloud-native platforms.

Key Responsibilities
• Design and build robust, scalable data pipelines and platforms on Microsoft Azure
• Lead hands-on delivery using Databricks (PySpark / Spark SQL) for data engineering and transformation workloads
• Develop and maintain ingestion frameworks (batch and streaming) from a variety of source systems
• Contribute to lakehouse architecture patterns using ADLS Gen2, Delta Lake, and modern data modelling approaches
• Work closely with solution architects, analytics engineers and stakeholders to translate requirements into technical solutions
• Champion data quality, reliability, performance and security best practices
• Support CI/CD, automation and infrastructure-as-code approaches within data engineering workflows
• Provide technical leadership and mentoring within delivery teams where appropriate

Required Skills & Experience
• Strong commercial experience as a Senior Data Engineer on Azure
• Proven hands-on delivery with Databricks (PySpark, Spark SQL, Delta Lake)
• Experience with core Azure services such as:
  • Azure Data Lake Storage (ADLS Gen2)
  • Azure Data Factory / Synapse Pipelines
  • Azure DevOps (or similar CI/CD tooling)
• Strong SQL and data modelling experience
• Experience building production-grade data pipelines (batch and/or streaming)
• Comfortable working in agile delivery teams within consultancy or complex enterprise environments

Desirable
• Experience working in public sector or regulated environments
• Familiarity with lakehouse architectures and modern data platform patterns
• Exposure to data governance, security and platform best practice
• Experience operating in environments requiring NPPV3 or similar clearance

Clearance
• Active NPPV3 clearance is required (or candidates must be eligible and willing to go through the process).