

Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in London, requiring 3 days on-site per week for a 6-month contract at £550 - £615 per day. Key skills include PySpark, Databricks, Azure, and financial services experience.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
£550 - £615
🗓️ - Date discovered
September 4, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
On-site
📄 - Contract type
Outside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#ADF (Azure Data Factory) #Data Strategy #Database Modelling #Delta Lake #Python #Libraries #Scala #Databricks #DevOps #Agile #Collibra #PySpark #Pytest #Spark (Apache Spark) #Batch #Azure #Vault #Strategy #Data Engineering #Data Vault #Automated Testing #Infrastructure as Code (IaC) #ETL (Extract, Transform, Load) #Synapse #Data Governance #Data Pipeline #Terraform
Role description
Senior Data Engineer
London 3 days per week on-site
6 months (likely extension)
£550 - £615 per day outside IR35
Primus is partnering with a leading Financial Services client who are embarking on a greenfield data transformation programme. Their current processes offer limited digital customer interaction, and the vision is to modernise these processes by:
• Building a modern data platform in Databricks.
• Creating a single customer view across the organisation.
• Enabling new client-facing digital services through real-time and batch data pipelines.
You will join a growing team of engineers and architects, with strong autonomy and ownership. This is a high-value greenfield initiative for the business, directly impacting customer experience and long-term data strategy.
Key Responsibilities:
• Design and build scalable data pipelines and transformation logic in Databricks (a minimal sketch follows this list).
• Implement and maintain Delta Lake physical models and relational data models.
• Contribute to design and coding standards, working closely with architects.
• Develop and maintain Python packages and libraries to support engineering work.
• Build and run automated testing frameworks (e.g. PyTest).
• Support CI/CD pipelines and DevOps best practices.
• Collaborate with BAs on source-to-target mapping and build new data model components.
• Participate in Agile ceremonies (stand-ups, backlog refinement, etc.).
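The first two responsibilities above are the core of the role. As a hedged illustration only, the sketch below shows a Databricks-style batch transformation in PySpark that cleans raw customer records and writes them to a Delta Lake table; the paths, table names and columns are hypothetical placeholders, not the client's actual model.

```python
# Minimal sketch of a Databricks-style batch pipeline (illustrative only).
# Paths, table names and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # Databricks notebooks provide `spark` automatically

def build_customer_view(raw_df):
    """Transformation logic: standardise and deduplicate raw customer records."""
    return (
        raw_df
        .withColumn("email", F.lower(F.trim(F.col("email"))))
        .withColumn("ingested_at", F.current_timestamp())
        .dropDuplicates(["customer_id"])
    )

raw = spark.read.format("delta").load("/mnt/raw/customers")  # hypothetical source location
curated = build_customer_view(raw)

# Persist the result as a Delta table forming part of the physical model.
(curated.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("curated.customer"))  # hypothetical target table
```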
Essential Skills:
• PySpark and SparkSQL.
• Strong knowledge of relational database modelling.
• Experience designing and implementing in Databricks (DBX notebooks, Delta Lakes).
• Azure platform experience.
• ADF or Synapse pipelines for orchestration.
• Python development (an illustrative PyTest sketch follows this list).
• Familiarity with CI/CD and DevOps principles.
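To illustrate the Python, PySpark and automated-testing expectations in one place, here is a minimal PyTest sketch that exercises a simple PySpark transformation against a local SparkSession; the function under test and its columns are invented for the example, not taken from the role.

```python
# test_transformations.py -- illustrative PyTest example for a PySpark transform.
# The transform and its schema are hypothetical.
import pytest
from pyspark.sql import SparkSession, functions as F

@pytest.fixture(scope="session")
def spark():
    # Small local session so tests run without a cluster.
    return (SparkSession.builder
            .master("local[1]")
            .appName("unit-tests")
            .getOrCreate())

def add_full_name(df):
    """Example transformation: derive full_name from first and last name."""
    return df.withColumn("full_name", F.concat_ws(" ", "first_name", "last_name"))

def test_add_full_name(spark):
    source = spark.createDataFrame(
        [("Ada", "Lovelace"), ("Alan", "Turing")],
        ["first_name", "last_name"],
    )
    result = add_full_name(source).select("full_name").collect()
    assert [row.full_name for row in result] == ["Ada Lovelace", "Alan Turing"]
```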
Desirable Skills:
• Data Vault 2.0.
• Data Governance & Quality tools (e.g. Great Expectations, Collibra).
• Terraform and Infrastructure as Code.
• Event Hubs, Azure Functions.
• Experience with DLT / Lakeflow Declarative Pipelines (sketched briefly after this list).
• Financial Services background.
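For context on the DLT / Lakeflow Declarative Pipelines item, the fragment below is a minimal sketch of a declarative table definition with a basic data quality expectation. It only runs inside a Databricks pipeline (which supplies the `dlt` module and the `spark` session), and the source table name is a placeholder.

```python
# Illustrative DLT (Lakeflow Declarative Pipelines) fragment.
# Runs only inside a Databricks pipeline; the source table is a placeholder.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Curated customer records with a basic quality expectation.")
@dlt.expect_or_drop("valid_customer_id", "customer_id IS NOT NULL")
def curated_customers():
    return (
        spark.read.table("raw.customers")  # `spark` is provided by the pipeline runtime
        .withColumn("email", F.lower(F.col("email")))
    )
```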
To be considered, please click apply.