

Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer specializing in Spark and Databricks, based in London (2 days on-site). It’s a 6-month contract requiring strong experience in Spark/Databricks, PySpark, DevOps, and familiarity with financial systems.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
-
🗓️ - Date discovered
June 7, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Fixed Term
🔒 - Security clearance
Unknown
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Automation #dbt (data build tool) #Data Engineering #PySpark #Data Lake #Databricks #Spark (Apache Spark) #DevOps #ETL (Extract, Transform, Load) #Python #Storage #Apache Spark #AWS (Amazon Web Services) #Airflow #Infrastructure as Code (IaC) #Cloud
Role description
Senior Data Engineer – Spark & Databricks Platform Build
Location: London (2 days a week on site)
Contract: 6-month initial term
Interview process: 2 stages
twentyAI’s client is building the next generation of their data platform, with Databricks and Apache Spark at the core. A proof of concept (PoC) is already in place; now they’re looking for someone who has done this before to lead the full-scale build and help roll the platform out across the organisation.
The project involves migrating the existing on-prem Data Lake to a Databricks-based architecture, while continuing to leverage private cloud storage. You’ll be laying the foundations: building ETL pipelines in PySpark, implementing platform best practices, and working closely with teams across trading and finance.
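For flavour, here is a minimal sketch of the kind of PySpark ETL step described above, runnable on Databricks or any Spark cluster. The mount path, column names, and table names are illustrative placeholders, not details from the client:

```python
# Illustrative ETL sketch: paths, columns, and table names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("trades-etl").getOrCreate()

# Extract: read raw trade files from a (hypothetical) landing zone.
raw = spark.read.parquet("/mnt/landing/trades/")

# Transform: deduplicate, enforce types, and drop incomplete records.
trades = (
    raw.dropDuplicates(["trade_id"])
       .withColumn("trade_date", F.to_date("trade_date"))
       .filter(F.col("notional").isNotNull())
)

# Load: persist as a Delta table for downstream consumers.
trades.write.format("delta").mode("overwrite").saveAsTable("curated.trades")
```

In practice each step would be parameterised and version-controlled, in line with the platform best practices the role calls for.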
Profile:
• Strong experience delivering Spark/Databricks implementations in a lead or senior role
• Solid hands-on background in PySpark, Spark, and Python
• Experience setting up data platforms with a DevOps-first approach: IaC, CI/CD, and automation (a CI-style test sketch follows this list)
• Exposure to AWS-based environments
• Familiarity with financial/trading systems and working in regulated industries (e.g., banking, commodities)
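As a loose illustration of the DevOps-first expectation above, here is a minimal sketch of a CI-style unit test for a PySpark transformation; the function, columns, and values are hypothetical, not taken from the role:

```python
# Hypothetical CI unit test for a PySpark transformation; names are illustrative.
import pytest
from pyspark.sql import SparkSession


@pytest.fixture(scope="session")
def spark():
    # Small local session so the test runs in CI without a cluster.
    return SparkSession.builder.master("local[1]").appName("ci-tests").getOrCreate()


def dedupe_trades(df):
    # Placeholder for a real transformation under test.
    return df.dropDuplicates(["trade_id"])


def test_dedupe_trades_removes_duplicates(spark):
    df = spark.createDataFrame(
        [(1, 100.0), (1, 100.0), (2, 250.0)],
        ["trade_id", "notional"],
    )
    assert dedupe_trades(df).count() == 2
```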
Tech Environment:
• Primary Platform: Databricks, Apache Spark
• Other Tech: dbt, Airflow, Python, PySpark
• Cloud: AWS (preferred), private cloud storage
• Data Sources: Financial/trading systems
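For context on how these pieces might fit together, here is a minimal Airflow sketch that triggers a Databricks job on a daily schedule. It assumes Airflow 2.4+ and the apache-airflow-providers-databricks package; the DAG id, connection id, and job_id are placeholders:

```python
# Hypothetical orchestration sketch: dag_id, conn id, and job_id are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="daily_trades_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    # Trigger a pre-configured Databricks job by its numeric job_id.
    run_etl = DatabricksRunNowOperator(
        task_id="run_trades_etl",
        databricks_conn_id="databricks_default",
        job_id=123,  # placeholder
    )
```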