
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a 6-month freelance contract, paying £550-£650 per day. Requires expertise in PySpark, Databricks, Azure, and Python. Financial Services experience is essential. Hybrid work in Edinburgh, 4 days in office.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
650
-
🗓️ - Date discovered
September 23, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Edinburgh EH3
-
🧠 - Skills detailed
#Collibra #Data Governance #Data Vault #Infrastructure as Code (IaC) #Vault #Terraform #Databricks #Data Pipeline #PySpark #Azure #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Data Engineering #Delta Lake #Scala #Strategy #Data Strategy #ADF (Azure Data Factory) #Agile #Database Modelling #Automated Testing #Python #Libraries #Synapse #DevOps #Pytest #Batch
Role description
ARRT Integration are partnering with a leading Financial Services client who are embarking on a greenfield data transformation programme. Their current processes offer limited digital customer interaction, and the vision is to modernise these processes by:
Building a modern data platform in Databricks.
Creating a single customer view across the organisation.
Enabling new client-facing digital services through real-time and batch data pipelines.
You will join a growing team of engineers and architects, with strong autonomy and ownership. This is a high-value greenfield initiative for the business, directly impacting customer experience and long-term data strategy.
Key Responsibilities:
Design and build scalable data pipelines and transformation logic in Databricks (see the illustrative sketch after this list).
Implement and maintain Delta Lake physical models and relational data models.
Contribute to design and coding standards, working closely with architects.
Develop and maintain Python packages and libraries to support engineering work.
Build and run automated testing frameworks (e.g. PyTest); a small test sketch follows the Essential Skills list.
Support CI/CD pipelines and DevOps best practices.
Collaborate with BAs on source-to-target mapping and build new data model components.
Participate in Agile ceremonies (stand-ups, backlog refinement, etc.).
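By way of illustration only, and not taken from the client's brief, the pipeline and Delta Lake work described above might look something like the following minimal PySpark sketch. The paths, schema names and column names (customer_events, custId, silver, etc.) are assumptions made for the example.

```python
# A minimal sketch of a batch pipeline writing to a Delta table.
# All paths, table names and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # pre-created on Databricks

# Batch ingest from a raw (bronze) landing area -- path is an assumption.
raw = spark.read.format("json").load("/mnt/raw/customer_events/")

# Example transformation: normalise column names and add load metadata.
cleaned = (
    raw.withColumnRenamed("custId", "customer_id")
       .withColumn("event_date", F.to_date("event_ts"))
       .withColumn("_loaded_at", F.current_timestamp())
)

# Persist as a Delta table forming part of the physical model
# (assumes a "silver" schema already exists).
(cleaned.write.format("delta")
        .mode("append")
        .partitionBy("event_date")
        .saveAsTable("silver.customer_events"))
```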
Essential Skills:
PySpark and SparkSQL.
Strong knowledge of relational database modelling.
Experience designing and implementing in Databricks (DBX notebooks, Delta Lake).
Azure platform experience.
ADF or Synapse pipelines for orchestration.
Python development.
Familiarity with CI/CD and DevOps principles.
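The PyTest responsibility above, combined with the Python and PySpark requirements, might translate into unit tests along these lines. The function under test and the data are invented for the example; this is a sketch of the pattern, not the client's test suite.

```python
# A hedged PyTest-style sketch for testing a PySpark transformation locally.
# add_full_name is a hypothetical function used purely for illustration.
import pytest
from pyspark.sql import SparkSession, functions as F


def add_full_name(df):
    """Toy transformation: concatenate first and last name columns."""
    return df.withColumn("full_name", F.concat_ws(" ", "first_name", "last_name"))


@pytest.fixture(scope="session")
def spark():
    # Local Spark session so the test can run outside Databricks.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_add_full_name(spark):
    df = spark.createDataFrame(
        [("Ada", "Lovelace")], ["first_name", "last_name"]
    )
    result = add_full_name(df).collect()[0]
    assert result["full_name"] == "Ada Lovelace"
```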
Desirable Skills:
Data Vault 2.0 (hub-load sketch after this list).
Data Governance & Quality tools (e.g. Great Expectations, Collibra).
Terraform and Infrastructure as Code.
Event Hubs, Azure Functions.
Experience with DLT / Lakeflow Declarative Pipelines.
Financial Services background.
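As a rough sketch of the Data Vault 2.0 item, again an assumption rather than the client's actual model, a hub load on Delta might look like this. The vault.hub_customer table, the business key and the source table are all hypothetical, and the example assumes the delta-spark library (bundled with the Databricks runtime) is available.

```python
# Illustrative only: a simplified Data Vault 2.0 hub load in PySpark using a
# Delta MERGE. Table and column names are assumptions for the sketch.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Stage the distinct business keys with a hash key and load metadata.
staged = (
    spark.table("silver.customer_events")
         .select("customer_id").dropDuplicates()
         .withColumn("hub_customer_hk",
                     F.sha2(F.col("customer_id").cast("string"), 256))
         .withColumn("load_date", F.current_timestamp())
         .withColumn("record_source", F.lit("customer_events"))
)

# Insert only business keys not already present in the hub
# (assumes vault.hub_customer exists with a matching schema).
hub = DeltaTable.forName(spark, "vault.hub_customer")
(hub.alias("h")
    .merge(staged.alias("s"), "h.hub_customer_hk = s.hub_customer_hk")
    .whenNotMatchedInsertAll()
    .execute())
```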
Please note the hybrid working arrangement is 4 days in the office and 1 day at home, with no flexibility on this.
Job Types: Full-time, Temporary, Freelance
Contract length: 6 months
Pay: £550.00-£650.00 per day
Application question(s):
Are you happy to work 4 days in the office?
Work authorisation:
United Kingdom (required)
Work Location: Hybrid remote in Edinburgh EH3