ManpowerGroup

Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in London (hybrid, 2/3 days onsite) for 6-12 months, offering competitive pay. Key skills include AWS expertise, data governance, cloud architecture, and proficiency in Python, SQL, and PySpark.
🌎 - Country
United Kingdom
πŸ’± - Currency
Β£ GBP
-
πŸ’° - Day rate
Unknown
-
πŸ—“οΈ - Date
April 28, 2026
πŸ•’ - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
πŸ“„ - Contract
Unknown
-
πŸ”’ - Security
Unknown
-
πŸ“ - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#dbt (data build tool) #Docker #Data Lakehouse #Data Management #Data Lake #AWS (Amazon Web Services) #Data Engineering #Cloud #Leadership #Data Quality #Databricks #S3 (Amazon Simple Storage Service) #Redshift #Data Governance #Apache Kafka #Athena #Lambda (AWS Lambda) #Dimensional Modelling #IAM (Identity and Access Management) #Metadata #Kafka (Apache Kafka) #Data Access #Delta Lake #Python #PySpark #SQL (Structured Query Language) #Data Security #ETL (Extract, Transform, Load) #Security #Spark (Apache Spark) #Distributed Computing #Scala #Data Architecture
Role description
Job title: Senior Data Engineer
Location: London, 2/3 days on site
Duration: 6-12 months initially
This is a senior, experienced Data Engineer role supporting a strategic platform build as part of an Integrated Financial Crime Programme. The role will produce the design for, and support, a proof-of-concept (POC) build this year.
• Strong knowledge of data governance, metadata management, data quality, and data mesh concepts, with the ability to influence enterprise-wide adoption. Ability to communicate complex technical concepts clearly to senior leadership, influencing architectural and investment decisions.
• Deep expertise in cloud data architecture and distributed computing paradigms, with a strong hands-on background in AWS data platforms, including Glue, Lambda, S3, Redshift, Athena, and Databricks.
• Modern data stack: understanding of data lakehouse architectures (Delta Lake, Iceberg) and experience with containerization using Docker, including architecture design, data modelling, workload optimization, and cost governance. Advanced knowledge of data modelling techniques, including dimensional modelling, schema evolution, and design patterns for analytics, reporting, and downstream consumption.
• Exposure to real-time and event-driven architectures, including Apache Kafka, Spark Streaming, or similar technologies. Strategic understanding of dbt (data build tool) and analytics engineering practices for scalable transformation and modelling.
• Demonstrated ability to define and govern data architecture standards, reference architectures, and engineering frameworks across multiple teams. Strong understanding of cloud security, IAM, data access controls, and platform governance, with experience implementing fine-grained data security using tools such as Immuta.
• Advanced proficiency in Python, PySpark, and SQL, with the ability to guide teams on performance optimization and scalable design rather than individual contribution alone.
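To give a feel for the stack named above (PySpark, Kafka/Spark Streaming, S3, Delta Lake), here is a minimal, illustrative sketch of an event-driven ingestion pipeline. It is not taken from the programme itself: the broker address, topic name, bucket paths, and columns are assumptions for the example, and the cluster is assumed to have the Kafka and Delta Lake connectors available (for instance on Databricks).

```python
# Minimal sketch only: PySpark + Kafka + Delta Lake ingestion of the kind the role describes.
# Broker, topic, S3 paths, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("fincrime-poc-sketch").getOrCreate()

# Hypothetical schema for transaction events arriving on a Kafka topic
event_schema = StructType([
    StructField("transaction_id", StringType()),
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_timestamp", TimestampType()),
])

# Read the event stream from Kafka (event-driven ingestion)
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "transactions")                # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Basic data-quality rules plus a partition column
cleaned = (
    events.filter(F.col("transaction_id").isNotNull() & F.col("amount").isNotNull())
          .withColumn("event_date", F.to_date("event_timestamp"))
)

# Write to a Delta Lake table on S3, partitioned by date (lakehouse landing layer)
query = (
    cleaned.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/transactions/")
    .partitionBy("event_date")
    .outputMode("append")
    .start("s3://example-bucket/lakehouse/transactions/")
)
```

Downstream of a landing layer like this, the dbt and dimensional-modelling skills listed above would typically cover the SQL transformations into analytics and reporting models; that layer is not shown here.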