

Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in London, on-site 4 days a week, for 6 months (likely extension) at £550-£615 per day, outside IR35. It requires expertise in PySpark, Databricks, and Azure, plus financial services experience.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
615
🗓️ - Date discovered
September 27, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
On-site
📄 - Contract type
Outside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Databricks #DevOps #ETL (Extract, Transform, Load) #Infrastructure as Code (IaC) #Data Strategy #Collibra #Spark (Apache Spark) #ADF (Azure Data Factory) #Agile #Data Vault #Libraries #Batch #Scala #Data Pipeline #Data Governance #Automated Testing #Python #Azure #Data Engineering #PySpark #Database Modelling #Delta Lake #Terraform #Synapse #Pytest
Role description
🚨 URGENT ROLE - London-Based Senior Data Engineers 🚨
Senior Data Engineer
London, 4 days per week on-site
6 months (likely extension)
£550 - £615 per day outside IR35
Primus is partnering with a leading Financial Services client that is embarking on a greenfield data transformation programme. Their current processes offer limited digital customer interaction, and the vision is to modernise them by:
- Building a modern data platform in Databricks.
- Creating a single customer view across the organisation.
- Enabling new client-facing digital services through real-time and batch data pipelines.
You will join a growing team of engineers and architects, with strong autonomy and ownership. This is a high-value greenfield initiative for the business, directly impacting customer experience and long-term data strategy.
Key Responsibilities:
• Design and build scalable data pipelines and transformation logic in Databricks (see the pipeline sketch after this list).
• Implement and maintain Delta Lake physical models and relational data models.
• Contribute to design and coding standards, working closely with architects.
• Develop and maintain Python packages and libraries to support engineering work.
• Build and run automated testing frameworks (e.g. PyTest).
• Support CI/CD pipelines and DevOps best practices.
• Collaborate with BAs on source-to-target mapping and build new data model components.
• Participate in Agile ceremonies (stand-ups, backlog refinement, etc.).
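For illustration, a minimal sketch of the kind of batch pipeline and transformation logic the role describes, assuming a Databricks runtime where Delta Lake is the default table format; the table and column names below are hypothetical placeholders, not the client's actual schema:

```python
# Minimal sketch of a batch transformation in Databricks (PySpark).
# Assumes a Databricks runtime with Delta Lake available; all table
# and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided as `spark` on Databricks

# Read raw customer events from a (hypothetical) bronze table.
raw = spark.read.table("bronze.customer_events")

# Transformation logic: normalise keys, derive columns, deduplicate.
cleaned = (
    raw
    .withColumn("customer_id", F.upper(F.trim("customer_id")))
    .withColumn("event_date", F.to_date("event_ts"))
    .dropDuplicates(["customer_id", "event_ts"])
)

# Write to a silver Delta table, partitioned for downstream batch reads.
(cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("silver.customer_events"))
```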
Essential Skills:
• PySpark and SparkSQL.
• Strong knowledge of relational database modelling.
• Experience designing and implementing in Databricks (DBX notebooks, Delta Lake).
• Azure platform experience.
• ADF or Synapse pipelines for orchestration.
• Python development (see the test sketch after this list).
• Familiarity with CI/CD and DevOps principles.
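As a sketch of how the Python and PySpark skills above combine with automated testing, here is a pytest-style unit test run against a local Spark session; the transformation function and sample data are invented for illustration:

```python
# Hypothetical pytest unit test for a PySpark transformation,
# using a local SparkSession rather than a Databricks cluster.
import pytest
from pyspark.sql import SparkSession, functions as F


def add_event_date(df):
    """Transformation under test: derive event_date from event_ts."""
    return df.withColumn("event_date", F.to_date("event_ts"))


@pytest.fixture(scope="session")
def spark():
    return (SparkSession.builder
            .master("local[1]")
            .appName("unit-tests")
            .getOrCreate())


def test_add_event_date(spark):
    df = spark.createDataFrame(
        [("c1", "2025-09-27 10:00:00")], ["customer_id", "event_ts"]
    )
    out = add_event_date(df).collect()[0]
    assert str(out["event_date"]) == "2025-09-27"
```

Tests like this slot straight into the CI/CD pipelines mentioned above, since they need no cluster to run.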
Desirable Skills:
• Data Vault 2.0 (see the sketch after this list).
• Data Governance & Quality tools (e.g. Great Expectations, Collibra).
• Terraform and Infrastructure as Code.
• Event Hubs, Azure Functions.
• Experience with DLT / Lakeflow Declarative Pipelines.
• Financial Services background.
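To illustrate the Data Vault 2.0 item above: a sketch of an insert-only hub load in PySpark, following the standard hub pattern (hashed business key, load timestamp, record source); all table and column names are hypothetical:

```python
# Sketch of a Data Vault 2.0 hub load in PySpark (hypothetical names).
# A hub holds one row per business key: hash key, business key,
# load timestamp, and record source.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

stage = spark.read.table("stage.customers")

hub_rows = (
    stage
    .select(F.trim(F.col("customer_id")).alias("customer_bk"))
    .dropDuplicates(["customer_bk"])
    .withColumn("hub_customer_hk", F.sha2(F.col("customer_bk"), 256))
    .withColumn("load_dts", F.current_timestamp())
    .withColumn("record_source", F.lit("stage.customers"))
)

# Insert-only: append only keys not already present in the hub.
existing = spark.read.table("dv.hub_customer").select("hub_customer_hk")
new_rows = hub_rows.join(existing, "hub_customer_hk", "left_anti")
new_rows.write.format("delta").mode("append").saveAsTable("dv.hub_customer")
```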
If you are open to working 4 days on-site in London and tick most of the boxes, please reach out to me directly: tom.fielding@primus.connect.com