Sanderson

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (Inside IR35) in London, offering £425 per day on a contract of unspecified duration. Key skills include Apache Spark, Azure Data Factory, and data governance. SC clearance is required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
£425
🗓️ - Date
December 5, 2025
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
Inside IR35
🔒 - Security
Yes
📍 - Location detailed
London
🧠 - Skills detailed
#Automation #Data Science #Microsoft Power BI #Scala #Batch #Metadata #Data Governance #Monitoring #Data Analysis #Spark (Apache Spark) #Azure #Data Integrity #AWS (Amazon Web Services) #Security #Data Ingestion #BI (Business Intelligence) #Databricks #Data Quality #Data Pipeline #Business Analysis #ETL (Extract, Transform, Load) #ADF (Azure Data Factory) #Azure Data Factory #Data Engineering #Apache Spark #Data Architecture
Role description
Data Engineer (Inside IR35) | £425 per day | SC Cleared | London

To support the formation of a new Data Team, there is a critical need to establish robust, scalable, and secure data infrastructure. Currently, data is dispersed across multiple systems with inconsistent formats and limited automation. A Data Engineer will play a key role in designing and implementing the pipelines, architecture, and tooling required to enable reliable data ingestion, transformation, and delivery.

Key Responsibilities
• Design, build, and maintain scalable data pipelines to ingest, transform, and store data from multiple sources.
• Develop and manage data models, schemas, and metadata to support analytics and reporting needs.
• Collaborate with Data Analysts, Data Scientists, and Business Analysts to ensure data availability, accessibility, and usability.
• Implement data quality checks, validation routines, and monitoring to ensure data integrity and reliability.
• Optimise data workflows for performance, cost efficiency, and maintainability using modern data-engineering tools and platforms (e.g., Azure Data Factory, AWS Data Pipeline, Databricks, Apache Spark).
• Support the integration of data into visualisation platforms and analytical environments (e.g., Power BI, ServiceNow).
• Ensure adherence to data governance, security, and privacy policies.
• Document data architecture, pipelines, and processes to support transparency and knowledge sharing.
• Contribute to the development of a modern data platform that supports both real-time and batch processing.

Reasonable Adjustments
Respect and equality are core values to us. We are proud of the diverse and inclusive community we have built, and we welcome applications from people of all backgrounds and perspectives. Our success is driven by our people, united by the spirit of partnership to deliver the best resourcing solutions for our clients. If you need any help or adjustments during the recruitment process for any reason, please let us know when you apply or talk to the recruiters directly so we can support you.