Robson Bale

Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a contract basis in London until February 27, with a pay rate of £520-550pd via Umbrella. Key skills include ETL development, Python, PySpark, AWS Glue, and Apache Airflow.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
550
🗓️ - Date
March 21, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
On-site
📄 - Contract
Inside IR35
🔒 - Security
Unknown
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Deployment #Datasets #S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load) #Airflow #Data Processing #Scala #AWS (Amazon Web Services) #Code Reviews #Spark (Apache Spark) #TypeScript #PySpark #Data Engineering #Monitoring #Python #Automation #Documentation #Leadership #Apache Airflow #Data Ingestion #AWS Glue #Data Integration #Scripting #Lambda (AWS Lambda) #Docker #Cloud #React #Athena
Role description
Senior Data Engineer - Contract - London - 3 days on site - £520-550pd via Umbrella - Contract until Feb 27

Key Responsibilities
- ETL Pipeline Architecture: Design, build, and maintain scalable ETL pipelines using Apache Airflow, Python, PySpark, and AWS Glue to transform complex financial datasets into actionable insights.
- Data Integration: Lead the integration of diverse data sources, ensuring data consistency, accuracy, and timely delivery to downstream systems.
- Containerization & Deployment: Use Docker for containerization and manage deployments within an AWS cloud environment (EKS or ECS).
- Technical Leadership: Provide technical guidance, code reviews, and mentorship to the engineering team, ensuring best practices in coding, architecture, and documentation.
- Optimization: Drive performance tuning and continuous improvement of data ingestion and transformation processes.

Required Skills
- ETL Development: Extensive experience building, testing, and deploying production-grade ETL pipelines in a cloud environment.
- Python: Expert-level proficiency in Python for data processing, automation, and scripting.
- PySpark: Strong hands-on experience with PySpark for large-scale data transformation and processing.
- AWS Data Stack: Deep hands-on experience with AWS Glue and related AWS services (S3, Lambda, Athena, Glue Catalog).
- Orchestration: Proven experience with Apache Airflow for scheduling, monitoring, and managing complex workflows.
- Docker: Proficiency in containerization using Docker and experience with AWS container services.

Nice-to-Have Skills
- Capital Markets Experience: Prior experience in the financial sector, particularly in Pricing, Market Data, or Trade Processing.
- Domain Knowledge: Familiarity integrating complex financial data sources, specifically Discounting data (e.g., BroCalc), DSO, Energy & Commodities data (FACTS), and Cash/Credit product data.
- UI Awareness: Familiarity with modern UI frameworks (React, Next.js, TypeScript) to facilitate better collaboration with front-end delivery teams.
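For candidates gauging fit, the core extract-transform-load pattern the role centres on can be sketched in plain Python. This is a minimal, stdlib-only illustration: all function names, field names, and the sample data are hypothetical and not from the posting, and a production pipeline of this kind would use PySpark, AWS Glue, and Airflow rather than the standard library.

```python
# Minimal stdlib sketch of the ETL pattern named in the role description.
# extract/transform/load and the symbol/price fields are illustrative only.
import csv
import io


def extract(raw_csv: str) -> list[dict]:
    """Parse raw CSV text into rows (stand-in for reading from S3 via Glue)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))


def transform(rows: list[dict]) -> list[dict]:
    """Normalise the price column to float; drop malformed rows."""
    out = []
    for row in rows:
        try:
            out.append({"symbol": row["symbol"], "price": float(row["price"])})
        except (KeyError, ValueError):
            continue  # skip rows with missing or non-numeric prices
    return out


def load(rows: list[dict]) -> dict[str, float]:
    """'Load' into a target keyed by symbol (stand-in for a warehouse write)."""
    return {row["symbol"]: row["price"] for row in rows}


raw = "symbol,price\nVOD,72.5\nBP,bad\nHSBC,641.0\n"
result = load(transform(extract(raw)))
print(result)  # {'VOD': 72.5, 'HSBC': 641.0}
```

In an Airflow deployment, each of these three stages would typically become its own task so that failures can be retried and monitored independently.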