Olive Jar Digital

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer position for a 6-month fixed-term contract, remote UK-wide. Key skills include Azure, SQL, Python, and data pipeline development. Experience with Microsoft Purview and data governance principles is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
October 22, 2025
🕒 - Duration
6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Greater London, England, United Kingdom
-
🧠 - Skills detailed
#Data Ingestion #GDPR (General Data Protection Regulation) #Cloud #Agile #Data Management #Azure Data Factory #Data Quality #Data Pipeline #Azure #Databricks #Synapse #Documentation #Deployment #MDM (Master Data Management) #SQL (Structured Query Language) #Scala #Spark (Apache Spark) #Python #Data Manipulation #Data Governance #Data Catalog #Metadata #Data Lineage #ADF (Azure Data Factory) #Compliance #Data Engineering #PySpark #ETL (Extract, Transform, Load) #Storage
Role description
6-Month Fixed-Term Contract | UK-wide Remote

About The Role
We are seeking an experienced Data Engineer to support the delivery of a Master Data Management (MDM) initiative and associated data governance and metadata management activities. The role focuses on implementing scalable data pipelines, metadata management, and data quality solutions within Azure and Microsoft Purview environments. You will work closely with data governance and architecture teams to design and build robust data engineering capabilities that underpin organisational data management and reporting.

About Olive Jar Digital
As we grow, we empower our teams to develop their roles and functions, and offer support to get you from where you are now to where you want to be. Moving towards our 10th year, we are now an established brand, building digital products and services and championing the provision of expert talent to enhance customer and in-house teams, satisfying all user needs. We are a professional, fun, fully inclusive and diverse digital consultancy, valuing everyone's opinion. With a huge growth plan over the next two years, we are looking to expand our client-facing delivery team with designers, developers, and testers as we continue to expand our project portfolio.

Responsibilities
• Design, build, and optimise data pipelines for ingestion, transformation, and delivery across structured and unstructured sources.
• Develop and maintain metadata capture processes to support data cataloguing and lineage.
• Implement and document data quality rules, retention policies, and data optimisation processes.
• Support large-scale deployment and configuration of Microsoft Purview.
• Establish automated scanning and metadata ingestion for source systems.
• Configure data lineage, glossary, and metadata structures to align with governance and compliance standards (UK GDPR, ISO 27001).
• Assess current data platform performance; identify and resolve bottlenecks across ingestion, processing, query, and reporting layers.
• Conduct benchmarking (e.g., latency, throughput, cost efficiency) and implement storage and data quality optimisations.
• Develop and apply storage tiering and retention strategies.
• Collaborate with Data Governance and Data Quality teams to ensure consistency, accuracy, and compliance.
• Contribute to documentation and knowledge transfer for data engineering and digital services teams.
• Participate in agile delivery ceremonies, workshops, and technical review sessions.

About You
• Strong experience in data engineering within cloud environments, ideally Azure.
• Proficiency in data pipeline development (e.g., Azure Data Factory, Databricks, Synapse, or similar).
• Experience implementing and managing Microsoft Purview or equivalent data cataloguing tools.
• Solid understanding of metadata management, data lineage, and data quality frameworks.
• Knowledge of data governance principles, including GDPR compliance and data retention policy design.
• Strong experience in SQL, Python, or PySpark for data manipulation and integration.
• Proven ability to optimise performance and manage large data volumes efficiently.
• Experience working in agile delivery environments with cross-functional data teams.

Powered by JazzHR
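To illustrate the kind of data quality and retention work the role describes, here is a minimal, hedged sketch in plain Python. It is purely illustrative: the actual stack would use Azure Data Factory, Databricks, or PySpark, and every field name, rule, and retention window below is an assumption, not something stated in the posting.

```python
from datetime import date, timedelta

# Hypothetical data quality rules (illustrative only, not from the posting):
# required key fields must be present, and records past a retention window
# are flagged for archival or deletion.
RETENTION_DAYS = 365 * 7            # assumed 7-year retention policy
REQUIRED_FIELDS = ("customer_id", "email")

def check_quality(record: dict) -> list[str]:
    """Return a list of rule violations for one record (empty = clean)."""
    issues = [f"missing:{f}" for f in REQUIRED_FIELDS if not record.get(f)]
    created = record.get("created")
    if created is not None and (date.today() - created).days > RETENTION_DAYS:
        issues.append("retention:expired")
    return issues

# Sample records: one clean, one with missing fields and an expired date.
records = [
    {"customer_id": "C1", "email": "a@example.com", "created": date.today()},
    {"customer_id": "", "email": None,
     "created": date.today() - timedelta(days=4000)},
]
report = {r["customer_id"] or "<blank>": check_quality(r) for r in records}
```

In a real Azure pipeline the same rule set would typically run as a Databricks or PySpark job, with violations written to a quality log and surfaced through Purview metadata rather than a local dict.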