

Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a 6-month contract, remote (UK-based), with a £500 day rate. Key skills include Azure, Databricks, PySpark, and SQL. Experience with data quality frameworks and governance is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
500
🗓️ - Date discovered
July 9, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
Outside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
United Kingdom
🧠 - Skills detailed
#Azure Databricks #PySpark #Spark SQL #Scala #Synapse #Data Pipeline #Data Architecture #Data Quality #Spark (Apache Spark) #Azure #Databricks #Data Engineering #SQL (Structured Query Language)
Role description
Senior Data Engineer
Remote-first (UK-based) | Outside IR35 | £500 day rate | 6-month contract, extension likely
We’re working with a legal-tech start-up that’s grown steadily over the past 6–7 years. They’ve built a strong technical foundation and now need a Senior Data Engineer to take ownership of their data platform and drive it forward.
This is a genuinely hands-on role with responsibility for improving and modernising pipelines, leading on best practices, and solving key challenges around data quality and integration. The business handles large volumes of case-specific legal data, including work on major litigation cases, so the problems are real, complex, and interesting.
What you'll be doing:
• Own and enhance the existing Azure-based data architecture, including CI/CD pipelines and medallion framework
• Improve data quality across multiple sources and integrate them into a central warehouse/lab environment
• Build scalable, well-documented data pipelines using tools like Databricks, Synapse, and Data Factory
What we're looking for:
• Strong experience with Azure, Databricks, PySpark, SQL, and modern data engineering tools
• Solid grasp of data quality frameworks, governance, and performance optimisation
• Ability to communicate clearly with non-technical stakeholders and document solutions effectively
• Comfortable working independently and proactively in a small, fast-moving team
The setup:
• Fully remote
• Rate: £500 per day, Outside IR35
If you’re looking to own a platform, make technical decisions, and work with interesting data in a company that’s genuinely doing something different, this could be a great fit.