Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a contract of unknown length at a day rate of £456. It requires expertise in retail data, PySpark, and data warehousing, along with strong mentoring skills. Location: London Area, United Kingdom.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
456
πŸ—“οΈ - Date discovered
September 5, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
πŸ“ - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Data Pipeline #Datasets #Data Engineering #PySpark #Documentation #Data Quality #ETL (Extract, Transform, Load) #Scala #Spark (Apache Spark)
Role description
We are seeking an experienced Senior Data Engineer to join our client’s data engineering team, working within their Microsoft Fabric stack. The successful candidate will play a pivotal role in transforming and cleansing data between the Bronze and Silver layers of their Medallion architecture, primarily using PySpark. This position is well suited to a data warehousing professional with deep expertise in retail data, strong data modelling capabilities, and a proven ability to mentor offshore engineers.

Responsibilities
• Develop robust PySpark code for data transformation and cleansing between the Bronze and Silver layers (see the illustrative sketch after this list).
• Design and implement efficient, scalable, and maintainable data pipelines within Microsoft Fabric.
• Build and optimise datasets in Third Normal Form (3NF) for the Silver layer.
• Design and deliver dimensional models to support analytical use cases in the Gold layer.
• Create and maintain comprehensive technical documentation for data processes, transformations, and models.
• Mentor and guide offshore engineering teams, ensuring best practices and high-quality code through reviews and coaching.
• Collaborate with analysts, architects, and business stakeholders to capture requirements and translate them into scalable data solutions.
• Uphold data quality, lineage, and governance standards across the stack.
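For illustration, here is a minimal PySpark sketch of the kind of Bronze-to-Silver cleansing job the role describes. All table, column, and rule names are hypothetical, not taken from the client's environment:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver_retail").getOrCreate()

# Read raw retail transactions from the Bronze layer (hypothetical table name).
bronze = spark.read.table("bronze.retail_sales_raw")

# Typical Silver-layer cleansing: enforce types, trim key columns, deduplicate.
cleansed = (
    bronze
    .withColumn("transaction_ts", F.to_timestamp("transaction_ts"))
    .withColumn("store_id", F.trim(F.col("store_id")))
    .withColumn("net_amount", F.col("net_amount").cast("decimal(12,2)"))
    .dropDuplicates(["transaction_id"])
)

# Basic data-quality gate: keep conformed rows, quarantine the rest for review.
valid = cleansed.filter(
    F.col("transaction_id").isNotNull() & (F.col("net_amount") >= 0)
)
rejected = cleansed.subtract(valid)

valid.write.mode("overwrite").saveAsTable("silver.retail_sales")
rejected.write.mode("append").saveAsTable("silver.retail_sales_quarantine")

In a Microsoft Fabric lakehouse this pattern would typically run as a notebook or Spark job definition; the quarantine table is one common way to uphold the data quality and lineage standards the role calls for.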