

Sanderson Government & Defence
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (Inside IR35) in London, offering £425 per day. Candidates must hold SC Clearance and have experience with data governance, ETL, and modern data engineering tools such as Azure Data Factory and Apache Spark.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
425
🗓️ - Date
December 12, 2025
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
Inside IR35
🔒 - Security
Yes
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Data Analysis #Scala #Data Governance #Data Pipeline #Automation #Spark (Apache Spark) #Data Science #Metadata #Data Quality #Databricks #Azure Data Factory #Data Integrity #ETL (Extract, Transform, Load) #Apache Spark #Data Ingestion #Microsoft Power BI #Data Architecture #Batch #Security #Azure #ADF (Azure Data Factory) #AWS (Amazon Web Services) #BI (Business Intelligence) #Monitoring #Business Analysis #Data Engineering
Role description
Data Engineer (Inside IR35)
£425 Per day
SC Cleared - essential to be considered for this opportunity
London
The formation of a new Data Team creates a critical need for robust, scalable, and secure data infrastructure. Currently, data is dispersed across multiple systems with inconsistent formats and limited automation. A Data Engineer will play a key role in designing and implementing the pipelines, architecture, and tooling required to enable reliable data ingestion, transformation, and delivery.
Key Responsibilities
• Design, build, and maintain scalable data pipelines to ingest, transform, and store data from multiple sources (a minimal sketch of this workflow follows the list).
• Develop and manage data models, schemas, and metadata to support analytics and reporting needs.
• Collaborate with Data Analysts, Data Scientists, and Business Analysts to ensure data availability, accessibility, and usability.
• Implement data quality checks, validation routines, and monitoring to ensure data integrity and reliability.
• Optimise data workflows for performance, cost efficiency, and maintainability using modern data-engineering tools and platforms (e.g., Azure Data Factory, AWS Data Pipeline, Databricks, Apache Spark).
• Support the integration of data into visualisation platforms and analytical environments (e.g., Power BI, ServiceNow).
• Ensure adherence to data governance, security, and privacy policies.
• Document data architecture, pipelines, and processes to support transparency and knowledge sharing.
• Contribute to the development of a modern data platform that supports both real-time and batch processing.
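The responsibilities above centre on batch ingestion, transformation, validation, and delivery. As a rough illustration of that workflow, here is a minimal PySpark sketch of the kind of pipeline involved; every path, column name, and threshold is a hypothetical placeholder, not a detail of this role's actual platform.

```python
# Illustrative PySpark batch pipeline: ingest, transform, validate, deliver.
# All paths, column names, and checks are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-example").getOrCreate()

# 1. Ingest: read raw files from a landing zone (schema inference for brevity;
#    a production pipeline would pin an explicit schema).
raw = (
    spark.read
    .option("header", True)
    .csv("/mnt/landing/transactions/")  # hypothetical path
)

# 2. Transform: normalise column names and types, drop duplicate records.
clean = (
    raw.withColumnRenamed("Txn Date", "txn_date")  # hypothetical source column
       .withColumn("txn_date", F.to_date("txn_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["txn_id"])                 # hypothetical key column
)

# 3. Validate: basic data quality checks before publishing.
total = clean.count()
null_keys = clean.filter(F.col("txn_id").isNull()).count()
if total == 0 or null_keys > 0:
    raise ValueError(f"Quality check failed: rows={total}, null keys={null_keys}")

# 4. Deliver: write to a curated zone, partitioned for downstream analytics.
(
    clean.write
    .mode("overwrite")
    .partitionBy("txn_date")
    .parquet("/mnt/curated/transactions/")  # hypothetical path
)
```

In an Azure Data Factory or Databricks setting, a job like this would typically run as an orchestrated, monitored activity rather than ad hoc, with failed quality checks surfacing as pipeline alerts.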






