MRJ Recruitment

Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer, initially for 5 months, remote (UK), with a pay rate of "TBD." Key skills include AWS Databricks, SQL, Python, and PySpark. Extensive experience in data pipeline design and Delta Live Tables pipelines is required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
April 21, 2026
🕒 - Duration
3 to 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United Kingdom
-
🧠 - Skills detailed
#SQL (Structured Query Language) #Monitoring #Documentation #Cloud #Data Ingestion #Databricks #PySpark #ETL (Extract, Transform, Load) #Scala #Data Warehouse #AWS (Amazon Web Services) #Data Pipeline #Deployment #Spark (Apache Spark) #Python #Data Engineering
Role description
Role: Senior Data Engineer
Length: 5 months initially
Location: Remote UK
Start date: ASAP

About the Role
Our client is currently undertaking a key data programme of work modernising their legacy data warehouse, migrating to a modern AWS Databricks platform. As part of this project, we are looking for an experienced Senior Data Engineer to help us build a Common Data Model in AWS Databricks to support analytics, performance reporting and downstream consumption across the platform for the company. Work will include modelling, data ingestion, transformation between the medallion layers, and documentation to enable consistent, scalable use of our data across our domains.

Key Responsibilities
• Lead discovery activities to understand the current data warehouse, dependencies, data flows and reporting needs
• Design and build modern data pipelines and data models on AWS and Databricks to support analytics and downstream consumption
• Migrate and re-platform data from legacy systems into the new cloud environment, ensuring accuracy, performance and reliability
• Collaborate with stakeholders to capture requirements and translate them into practical, scalable engineering solutions
• Drive best practice across data engineering, including quality, monitoring, performance optimisation and deployment processes
• Support deployment activities, release planning and operational readiness as the new platform goes live

Person Requirements
• Extensive experience of using AWS Databricks
• Experienced in building scalable data pipelines using SQL, Python and PySpark
• Hands-on experience of designing and operating Delta Live Tables pipelines (now called Lakeflow Declarative Pipelines) in both streaming and triggered modes
• Strong experience of selecting the right Databricks compute for different workloads
• Has worked extensively with Unity Catalog to centralise governance across Databricks workloads
• Can diagnose and optimise slow queries to guide and improve the performance of existing structures and designs
• Experience of using Databricks Asset Bundles to define, package and deploy resources as code

Next steps: If you're a Senior Data Engineer ready to lead a move to Databricks and want a role where your work will shape a modern data platform, we'd like to hear from you. Shortlisting will take place this week, with a quick turnaround expected for interviews and start dates.