Oliver Bernard

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a 6–12 month contract, paying £700–£800 per day. Located in Canary Wharf, London (3 days onsite), it requires expertise in Python, Apache Spark, AWS, Kafka, and SQL, with experience in complex environments.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
800
🗓️ - Date
January 15, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Inside IR35
🔒 - Security
Unknown
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Athena #Apache Spark #Datasets #Terraform #Cloud #Monitoring #Agile #AWS (Amazon Web Services) #Scrum #Data Pipeline #Data Processing #Infrastructure as Code (IaC) #Programming #Data Science #Batch #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Python #SQL (Structured Query Language) #DevOps #Scala #Data Engineering #S3 (Amazon Simple Storage Service) #Data Quality #Kafka (Apache Kafka) #Lambda (AWS Lambda) #Redshift
Role description
Senior Data Engineer (Contract)

Location: Canary Wharf, London (3 days onsite / 2 days remote)
Rate: £700–£800 per day
Contract Type: Inside IR35
Duration: 6–12 months

Overview

We are seeking an experienced Senior Data Engineer to join a high-performing data team within a large-scale, enterprise environment. You will play a key role in designing, building, and maintaining robust data platforms and pipelines that support analytics, reporting, and downstream applications. This role requires strong hands-on experience with Python, Apache Spark, AWS, Kafka, and SQL, and the ability to operate effectively in a collaborative, agile delivery setting.

Key Responsibilities
• Design, develop, and maintain scalable, fault-tolerant data pipelines and data platforms
• Build and optimise batch and streaming data solutions using Python, Spark, and Kafka
• Work extensively with AWS services (e.g. S3, EMR, Glue, Lambda, Redshift, Athena)
• Ensure high data quality, reliability, and performance across data workflows
• Collaborate with data scientists, analysts, and stakeholders to deliver data solutions aligned to business needs
• Contribute to architectural decisions and best practices around data engineering
• Support production systems, including monitoring, troubleshooting, and performance tuning
• Write clean, well-documented, and testable code following engineering best practices

Required Skills & Experience
• Proven experience as a Senior Data Engineer in complex, data-intensive environments
• Strong programming experience in Python
• Hands-on expertise with Apache Spark (batch and/or streaming)
• Solid experience working with AWS cloud services in production environments
• Experience with Kafka or similar event streaming platforms
• Advanced SQL skills and experience working with large datasets
• Strong understanding of data modelling, ETL/ELT, and data warehousing concepts
• Experience working in Agile / Scrum teams
• Excellent communication and stakeholder engagement skills

Desirable Experience
• Experience with real-time or near-real-time data processing
• Infrastructure as Code (e.g. Terraform, CloudFormation)
• CI/CD pipelines and DevOps practices
• Experience in regulated or enterprise-scale environments (e.g. financial services)