Glocomms

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Mid-Level Data Engineer on a 6-month contract; the pay rate is not specified. Candidates must have financial services experience, strong Python and Apache Spark skills, and proficiency in building ETL/ELT pipelines. Remote work is available.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 13, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
City Of London, England, United Kingdom
-
🧠 - Skills detailed
#Azure #Data Quality #Automation #Python #PySpark #Apache Spark #Data Pipeline #AWS (Amazon Web Services) #Scala #Datasets #GCP (Google Cloud Platform) #Spark (Apache Spark) #Cloud #ETL (Extract, Transform, Load) #Data Engineering #Data Extraction
Role description
Mid-Level Data Engineer (Contract)
Start Date: ASAP
Contract Duration: 6 Months
Experience within the Financial Services Industry is a must.

If you're a Data Engineer who enjoys building reliable, scalable data pipelines and wants your work to directly support front-office decision-making, this role offers exactly that. You'll join a data engineering function working closely with investment management and front-office stakeholders, helping to ensure critical financial data is delivered accurately, efficiently and at scale. The role sits at the intersection of technology, data and the business, and is ideal for someone who enjoys ownership, delivery and solving real-world data challenges in a regulated environment. This is a hands-on opportunity for a mid-level engineer who can contribute from day one and take responsibility for production data workflows.

What You'll Be Doing
• Building and maintaining end-to-end data pipelines (ETL/ELT) to support analytics and downstream use cases (see the illustrative sketch at the end of this listing)
• Developing scalable data solutions using Python, with a focus on maintainability and performance
• Working with Apache Spark / PySpark to process and transform large datasets
• Supporting the ingestion, transformation and validation of complex financial data
• Improving the performance, reliability and resilience of existing data workflows
• Partnering with engineering, analytics and front-office teams to understand requirements and deliver trusted data assets
• Taking ownership of data issues and seeing them through to resolution
• Contributing ideas that improve data quality, automation and overall platform efficiency

Skills That Will Help You Succeed
Essential
• Commercial experience as a Data Engineer at mid level
• Strong Python development skills
• Hands-on experience with Apache Spark / PySpark
• Solid experience building ETL/ELT pipelines
• Background within the financial services industry (investment management experience desirable)
• Comfortable working with production systems in a regulated environment
• Able to work independently and deliver in a fast-paced setting

Nice to Have
• Exposure to Polars
• Experience optimising Spark workloads
• Cloud data platform experience across AWS, Azure or GCP

What Makes This Role Appealing
• You'll work on data that directly supports investment and front-office functions
• You'll have ownership of production pipelines, not just isolated tasks
• You'll collaborate closely with both technical teams and business stakeholders
• Your work will have clear, visible impact on data quality, reliability and decision-making
• You'll join a team that values pragmatic engineering, accountability and continuous improvement

Interested? Get in touch to discuss the role in more detail and what success looks like in the first few months.

Desired Skills and Experience
Required Skills & Experience
• Commercial experience as a Data Engineer (mid level)
• Strong Python skills
• Hands-on Apache Spark / PySpark experience
• Experience with ETL/ELT and data extraction
• Background in financial services, ideally investment management
• Comfortable working in a regulated, production environment

Nice to Have
• Exposure to Polars
• Experience optimising Spark workloads
• Cloud data platform experience (AWS, Azure or GCP)
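
Illustrative sketch
To give a flavour of the day-to-day work, here is a minimal PySpark sketch of the kind of ETL pipeline the role describes: extract a raw file, apply basic cleansing and a simple validation rule, aggregate, and write a curated dataset. This is not the client's code; all paths, dataset names and columns (trades, trade_id, notional, instrument_id) are hypothetical assumptions for illustration only.

    # Illustrative only: minimal extract -> transform -> load with PySpark.
    # All paths and column names below are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("positions-etl-sketch").getOrCreate()

    # Extract: read a hypothetical raw trades file (CSV with a header row).
    raw = (
        spark.read
        .option("header", True)
        .option("inferSchema", True)
        .csv("/data/raw/trades/")  # hypothetical landing path
    )

    # Transform: deduplicate, apply a basic validation rule, derive a trade date.
    clean = (
        raw
        .dropDuplicates(["trade_id"])  # hypothetical key column
        .filter(F.col("notional").isNotNull() & (F.col("notional") > 0))
        .withColumn("trade_date", F.to_date("trade_timestamp"))
    )

    # Aggregate notional and trade counts per instrument and trade date.
    daily_notional = (
        clean
        .groupBy("trade_date", "instrument_id")
        .agg(
            F.sum("notional").alias("total_notional"),
            F.count("*").alias("trade_count"),
        )
    )

    # Load: write the curated dataset, partitioned by trade date.
    (
        daily_notional
        .write
        .mode("overwrite")
        .partitionBy("trade_date")
        .parquet("/data/curated/daily_notional/")  # hypothetical output path
    )

Partitioning the curated output by trade date is one common choice for making downstream analytics queries cheap to filter; in practice the layout would follow the team's existing platform conventions.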