

Hays
Data Engineer (Big Data/ Hadoop/ Spark Dev)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (Big Data/Hadoop/Spark Dev) with a contract length of "unknown," offering a pay rate of "unknown," and is located "on-site." Key skills include Big Data technologies, Spark, Hadoop, and experience in financial services.
Country
United Kingdom
-
Currency
£ GBP
-
Day rate
Unknown
-
Date
April 15, 2026
-
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
London
-
Skills detailed
#Data Warehouse #Kafka (Apache Kafka) #dbt (data build tool) #Data Processing #Monitoring #SQL Server #Data Quality #NoSQL #Data Engineering #Scala #Security #Airflow #Databases #RDBMS (Relational Database Management System) #PostgreSQL #Big Data #Spark (Apache Spark) #MongoDB #Data Lake #Apache Airflow #SQL (Structured Query Language) #Hadoop
Role description
Your new company
Working for a renowned financial services organisation
Your new role
We're looking for a Data Engineer to design and deliver scalable, high-quality on-prem data solutions, from low-level to high-level data platform design, that power analytical and business insights. This is a hands-on role suited to someone with strong data engineering and big data expertise, ideally gained within financial services.
Using Big Data tools and Spark development, you will ensure data quality through automated validation, monitoring, and testing. You will also enable seamless integration across data warehouses and data lakes, contributing to a robust, scalable, and resilient enterprise data ecosystem.
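The kind of automated validation described above can be sketched as a simple batch-level quality gate. This is a minimal plain-Python illustration, not the organisation's actual pipeline; the `trade_id` key name and the 1% null-rate threshold are hypothetical, and a production version would typically run as a Spark job over a DataFrame rather than a Python list.

```python
# Minimal data-quality gate: reject a batch whose null-key rate exceeds a
# threshold or that contains duplicate keys. The column name ("trade_id")
# and the 1% limit are illustrative assumptions only.
def validate_batch(rows, key="trade_id", max_null_rate=0.01):
    """Return (passed, reason) for a batch of dict records."""
    total = len(rows)
    if total == 0:
        return False, "empty batch"

    nulls = sum(1 for r in rows if r.get(key) is None)
    distinct = len({r[key] for r in rows if r.get(key) is not None})
    dupes = (total - nulls) - distinct  # non-null rows sharing a key

    if nulls / total > max_null_rate:
        return False, f"null rate {nulls / total:.2%} exceeds limit"
    if dupes > 0:
        return False, f"{dupes} duplicate keys"
    return True, "ok"
```

In a real deployment a check like this would be one task in an orchestrated pipeline (e.g. an Airflow DAG), failing the run before bad data reaches the warehouse or data lake.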
What you'll need to succeed
• Strong Data Engineering expertise with Big Data technologies.
• Experience designing and building on-prem data platforms, from high-level architecture to detailed technical design.
• Strong Spark development experience.
• Hands-on experience with Hadoop, including configuring multi-node Hadoop clusters, resource management, security, and performance tuning.
• Strong Big Data engineering background using Apache Airflow, Spark, dbt, Kafka, and Hadoop ecosystem tools.
• Knowledge of RDBMS systems (PostgreSQL, SQL Server) and familiarity with NoSQL/distributed databases such as MongoDB.
• Proven delivery of streaming pipelines and real-time data processing solutions.
What you'll get in return
Flexible working options available.
What you need to do now
If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now.
Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and as an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers, which can be found at hays.co.uk






