

CipherTek Recruitment
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer focused on building a Databricks lakehouse platform in a high-performance trading environment. Contract length is 12 months at £850 p/d. Key skills include Spark, Databricks, Python, and experience with large-scale data systems.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
1040
🗓️ - Date
April 28, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Outside IR35
🔒 - Security
Unknown
📍 - Location detailed
City Of London, England, United Kingdom
🧠 - Skills detailed
#AI (Artificial Intelligence) #ML (Machine Learning) #Azure #Datasets #Databricks #Delta Lake #Spark (Apache Spark) #Python #Data Engineering #Scala #BI (Business Intelligence)
Role description
🔥 Senior / Lead Data Engineer – Databricks / Spark (High-Performance Platform)
£850 p/d OUTSIDE IR35 (higher for Lead)
12-month rolling (multi-year programme)
1 day/week on-site – St Paul’s, London
We’re hiring Senior and Lead Data Engineers to build a Databricks lakehouse platform in a high-performance, business-critical front-office trading environment.
This is a hands-on engineering role focused on building and optimising large-scale distributed data systems.
This is a highly technical team operating at scale. We’re looking for engineers with deep data engineering expertise, strong low-level Spark knowledge, and experience building high-performance systems on modern Databricks and AI-driven platforms.
What you’ll do
• Build and optimise Spark pipelines on Databricks
• Develop a lakehouse platform (Medallion architecture)
• Own data modelling, architecture, and pipeline design
• Work with large-scale data (TB–PB)
• Drive performance, scalability, and reliability in production
What we’re looking for
• Strong experience running Spark workloads in production
• Proven ability to optimise Spark at scale (TB–PB datasets)
• Solid Python (Scala beneficial, not essential)
• Experience with data modelling and lakehouse architecture
• Ability to debug and improve performance in distributed systems
Important
• Must have recent, hands-on Spark experience
• Databricks strongly preferred (not essential if Spark depth is very strong)
• Experience supporting AI/ML or advanced analytics platforms is a big plus
Nice to have
• Financial services / trading exposure
• Experience in performance-critical environments
Not a fit if
• Primarily BI / reporting focused
• Spark used only at small scale or outside production
• No experience with performance optimisation in distributed systems
Stack
Databricks (Azure), Spark, Delta Lake, Python (+ Scala optional)
Bottom line
We’re looking for engineers who can design, build, and optimise Spark-based systems at scale and operate effectively in a performance-critical environment from day one.






