

Senior Data Engineer - 12 Month FTC
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a 12-month FTC in London, offering a competitive pay rate. Key skills required include SQL, Python, PySpark, and Azure Databricks. Experience in the London Insurance Market is preferred.
Country: United Kingdom
Currency: £ GBP
Day rate: -
Date discovered: August 5, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: Fixed Term
Security clearance: Unknown
Location detailed: City of London, England, United Kingdom
Skills detailed: #Data Strategy #Agile #SQL (Structured Query Language) #DevOps #Spark (Apache Spark) #Azure Databricks #Azure DevOps #Databricks #PySpark #Strategy #Data Engineering #Data Pipeline #Metadata #ETL (Extract, Transform, Load) #ML (Machine Learning) #Scala #Azure #Documentation #Python
Role description
Position Title: Senior Data Engineer - 12 Month FTC
Reports to: Head of Data Platforms
Location: London
About the Role:
We are seeking a highly motivated and skilled Senior Data Engineer to join our growing Data Platforms team at BMS Group. You will play a pivotal role in supporting BMS's ambition to fully realise the benefits of a Lakehouse platform deployed within Azure. Working under the Head of Data Platforms, you will help manage and implement the data engineering pipelines within Azure that ingest, enrich, and curate structured, semi-structured, and unstructured data from our global business. You will also help lead a small team of engineers, and you will be responsible for developing them to support the business objective of delivering a scalable and efficient platform.
Key Responsibilities:
• Own and manage all data engineering pipelines and layers of the Lakehouse platform.
• Manage the prioritisation of the data engineering backlog.
• Manage the peer review and testing process for any pipeline releases.
• Continuously engage with the Head of Data Platforms, the Group Head of Data Strategy and Governance, and Architecture to understand the evolving needs of the business so that the Lakehouse platform can support them.
• Own, with the support of the Head of Data Platforms, the definition of standards and best practices for BMS's data engineering pipelines, covering both code and documentation.
• Continuously monitor and analyse pipelines to identify opportunities for optimisation and efficiency.
• Work within plan-driven (Waterfall) or iterative (Agile) delivery methodologies based on project requirements.
• Continuously learn and develop your skills to stay ahead of the curve in the evolving data landscape.
• Develop and implement generalised, pattern-based data engineering pipelines that are efficient, scalable, and manageable.
• Collaborate with other central IT functions to ensure that the necessary access to systems and technology is granted based on the evolving needs of the team.
• Build a comprehensive understanding of both technical and business domains.
• Collaborate with cross-functional teams to understand and address data engineering and data needs.
Knowledge and Skills:
• Experience working as a principal/lead Data Engineer.
• Experience working with large data sets and proficiency in SQL, Python, and PySpark.
• Experience managing a team of engineers with varying levels of data engineering experience.
• Experience deploying pipelines within Azure Databricks in line with the medallion architecture framework (see the sketch after this list).
• Experience using SQL, Python, and PySpark to build data engineering pipelines.
• Understanding of how to define best practices for documentation standards as well as code standards.
• Understanding of data modelling approaches and standards.
• Understanding of semantic modelling techniques and how data is consumed from a Lakehouse to support them.
• Experience building Azure DevOps Pipelines.
• Excellent communication and problem-solving skills.
• Experience working within an agile environment.
• Ability to assist with the upskilling and continued improvement of junior members of the team.
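As a loose illustration of the medallion pattern referenced above, here is a minimal PySpark sketch of a bronze → silver → gold flow. All paths and column names (policy_id, premium, line_of_business) are hypothetical, and a production Azure Databricks deployment would typically write Delta tables with proper schema management rather than plain parquet files:

```python
# Minimal medallion-style pipeline sketch. Paths, columns, and the
# insurance-flavoured naming are hypothetical, for illustration only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land the raw feed as-is, adding ingestion metadata.
bronze = (
    spark.read.json("/landing/policies/")  # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.mode("append").parquet("/lake/bronze/policies")

# Silver: cleanse and conform; dedupe keys, cast types, drop malformed rows.
silver = (
    spark.read.parquet("/lake/bronze/policies")
    .dropDuplicates(["policy_id"])
    .withColumn("premium", F.col("premium").cast("decimal(18,2)"))
    .filter(F.col("policy_id").isNotNull())
)
silver.write.mode("overwrite").parquet("/lake/silver/policies")

# Gold: a business-level aggregate ready for consumption.
gold = silver.groupBy("line_of_business").agg(
    F.sum("premium").alias("total_premium"),
    F.count("policy_id").alias("policy_count"),
)
gold.write.mode("overwrite").parquet("/lake/gold/premium_by_lob")
```

The value of the layering is that each stage has a single responsibility: bronze preserves the raw feed, silver applies conformance rules once, and gold serves consumption-ready aggregates.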
Desired skills and experience:
• Experience in the London Insurance Market, or the wider Financial Services sector.
• Experience building and deploying machine learning pipelines into data engineering pipelines.
• Experience using metadata-driven data engineering approaches for ingestion and transformations within data pipelines (a brief sketch follows this list).
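As a rough sketch of the metadata-driven approach (all source names, formats, paths, and keys below are invented for illustration), the idea is that one generic loop reads a control table and drives ingestion, so onboarding a new source means adding a row of metadata rather than writing new pipeline code:

```python
# Hedged sketch of metadata-driven ingestion: behaviour is defined by
# configuration, not per-source code. All entries below are illustrative;
# in practice this metadata would live in a control table, not a literal.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("metadata-driven-sketch").getOrCreate()

SOURCES = [
    {"name": "policies", "format": "json", "path": "/landing/policies/", "keys": ["policy_id"]},
    {"name": "claims", "format": "csv", "path": "/landing/claims/", "keys": ["claim_id"]},
]

for src in SOURCES:
    df = (
        spark.read.format(src["format"])
        .option("header", "true")  # needed for CSV, ignored by the JSON reader
        .load(src["path"])
        .dropDuplicates(src["keys"])
        .withColumn("_source", F.lit(src["name"]))
        .withColumn("_ingested_at", F.current_timestamp())
    )
    df.write.mode("append").parquet(f"/lake/bronze/{src['name']}")
```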
Success Metrics:
• Manage and maintain high-quality and efficient data engineering pipelines to meet the technical requirements of the business.
• Share knowledge and expertise to develop and upskill your team to become better and more effective data engineers.
• Contribute to our team culture by creating a positively challenging environment.