

Oliver Bernard
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a 6–12 month contract, offering £500–£700 per day. Based in Newcastle, it requires expertise in Python, SQL, Apache Spark, and AWS, focusing on scalable data pipelines and cloud-based solutions.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
640
-
🗓️ - Date
February 26, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
Newcastle Upon Tyne, England, United Kingdom
-
🧠 - Skills detailed
#Data Engineering #Cloud #S3 (Amazon Simple Storage Service) #Python #DevOps #Data Pipeline #Apache Spark #Spark (Apache Spark) #Data Quality #Lambda (AWS Lambda) #Data Governance #SQL (Structured Query Language) #Data Architecture #Scala #AWS (Amazon Web Services) #Redshift #Agile #Security #Big Data #ETL (Extract, Transform, Load) #Infrastructure as Code (IaC) #Terraform
Role description
Senior Data Engineer – Contract (Outside IR35)
📍 Newcastle (3 days onsite) | 🏡 2 days WFH
💰 £500–£700 per day | 📅 6–12 Month Project
We’re looking for an experienced Senior Data Engineer to join a high-impact programme delivering scalable, cloud-based data solutions for a major organisation based in Newcastle.
This is a 6–12 month contract, outside IR35, offering a competitive day rate and a flexible hybrid working model.
🔎 The Role
You’ll play a key role in designing, building, and optimising robust data pipelines and cloud-native data platforms. Working closely with architects, analysts, and engineering teams, you’ll help drive best practice in data engineering and deliver reliable, high-performance solutions.
🛠 Key Responsibilities
• Design and build scalable data pipelines using Python and SQL
• Develop and optimise big data solutions using Apache Spark
• Implement and manage cloud-based data platforms within AWS
• Build and maintain ETL/ELT processes
• Ensure data quality, governance, and performance optimisation
• Collaborate with cross-functional teams to translate business requirements into technical solutions
• Contribute to architectural decisions and technical best practice
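To give a flavour of the pipeline work described above, here is a minimal, purely illustrative extract-transform-load sketch in Python + SQL. It uses the stdlib's sqlite3 as a stand-in for a real warehouse such as Redshift, and all table and function names are hypothetical, not taken from the role:

```python
import sqlite3

def run_pipeline(rows):
    """Load raw (name, amount) rows, filter bad records, aggregate per name."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_sales (name TEXT, amount REAL)")
    # Extract/load: ingest the raw batch into the staging table.
    conn.executemany("INSERT INTO raw_sales VALUES (?, ?)", rows)
    # Transform: drop null/negative amounts and aggregate with SQL.
    cur = conn.execute(
        "SELECT name, SUM(amount) FROM raw_sales "
        "WHERE amount IS NOT NULL AND amount > 0 "
        "GROUP BY name ORDER BY name"
    )
    result = cur.fetchall()
    conn.close()
    return result

print(run_pipeline([("a", 10.0), ("b", -5.0), ("a", 2.5), ("b", None)]))
# prints [('a', 12.5)]
```

In a production setting the same shape would scale out (e.g. Spark for the transform, S3/Glue/Redshift for storage), but the extract-filter-aggregate pattern is the same.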
✅ Required Skills & Experience
• Strong commercial experience as a Senior Data Engineer
• Advanced proficiency in Python
• Solid experience with Apache Spark
• Strong SQL and data modelling skills
• Hands-on experience with AWS (e.g. S3, Glue, Redshift, EMR, Lambda)
• Experience building and maintaining scalable data pipelines
• Strong understanding of data architecture and best practices
🌟 Desirable
• Experience with CI/CD and DevOps practices
• Infrastructure as Code (e.g. Terraform)
• Experience working in Agile environments
• Exposure to data governance and security frameworks
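The data-quality and governance duties mentioned above often boil down to validation gates run before a batch is published. A minimal sketch, with entirely hypothetical field names, might look like:

```python
def check_batch(records, required_fields):
    """Return human-readable issues for any record missing a required field."""
    issues = []
    for i, rec in enumerate(records):
        for field in required_fields:
            # Treat None and empty strings as missing values.
            if rec.get(field) in (None, ""):
                issues.append(f"row {i}: missing {field}")
    return issues

batch = [{"id": 1, "amount": 9.99}, {"id": 2, "amount": None}]
print(check_batch(batch, ["id", "amount"]))
# prints ['row 1: missing amount']
```

Real frameworks add schema checks, freshness rules, and lineage tracking, but the gate-before-publish pattern is the core idea.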