G MASS Consulting

Data Engineer (FinTech)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (FinTech) on an initial 6-month contract, focusing on scalable data solutions. Requires strong SQL, Apache Spark, and Python skills; experience in financial services; and familiarity with cloud platforms. Salary negotiable.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date
February 4, 2026
🕒 - Duration
6 months (initial term, with possible extension)
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
London, England, United Kingdom
🧠 - Skills detailed
#GIT #Big Data #Apache Spark #Python #Data Quality #Qlik #AWS (Amazon Web Services) #Jupyter #AI (Artificial Intelligence) #Tableau #Data Pipeline #Data Engineering #BI (Business Intelligence) #Java #Data Processing #Data Manipulation #GitHub #Datasets #DevOps #SQL (Structured Query Language) #GCP (Google Cloud Platform) #Spark (Apache Spark) #Version Control #BitBucket #ETL (Extract, Transform, Load) #Data Ingestion #Data Extraction #Azure #SQL Queries #Microsoft Power BI #PySpark #Documentation #Scala #Azure DevOps #Cloud #Storage #Data Science #Data Visualisation
Role description
G MASS is partnering with a global FinTech business to support the build-out of a next-generation data engineering capability within a large-scale enterprise data environment. This role sits within a highly technical engineering team focused on delivering AI-enabled, cloud-native data products used across the business.

You'll join a collaborative engineering function responsible for designing and delivering robust, scalable data solutions. The work spans data ingestion, transformation, analytics and enablement, supporting both internal stakeholders and client-facing products.

Key responsibilities
• Design, develop and maintain scalable data pipelines for ingestion, transformation and storage
• Write efficient SQL queries to support data extraction, manipulation and analysis
• Use Apache Spark and Python to process large datasets and automate workflows (an illustrative sketch of this kind of pipeline work appears at the end of this posting)
• Partner with data scientists, engineers and business stakeholders to translate requirements into data solutions
• Implement data quality checks and validation to ensure accuracy and reliability
• Analyse complex datasets to identify trends, patterns and anomalies
• Produce and maintain clear documentation covering data processes and architecture
• Support stakeholders with data visualisation and interpretation of outputs

Requirements
• Strong experience in SQL for data manipulation and analysis
• Hands-on experience with Apache Spark (Java, SQL or PySpark) and Python
• Experience working with large, complex datasets in enterprise environments
• Exposure to cloud data platforms (AWS, Azure or GCP) and big data technologies
• Confident communicator, comfortable working across technical and non-technical teams
• Background in, or exposure to, financial services or capital markets

Nice to have
• Experience using Java in data processing or integration contexts
• Knowledge of ETL processes and data warehousing concepts
• Experience with data visualisation tools (Power BI, Tableau, Qlik)
• Familiarity with data science notebooks (e.g. Jupyter, Zeppelin)
• Version control experience (Git, GitHub, Bitbucket, Azure DevOps)

Benefits
Initial 6-month contract, with a strong possibility of extension and/or permanency. Rate to be discussed.
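For candidates gauging fit: below is a minimal, illustrative PySpark sketch of the kind of ingestion, Spark SQL transformation and data-quality work the responsibilities describe. It is not the client's codebase; all paths, column names and checks are hypothetical examples.

• Illustrative sketch (hypothetical data and paths):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("trades-pipeline-sketch").getOrCreate()

# Ingest: read raw trade data (hypothetical path and layout)
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/trades/")

# Transform: type the columns and derive a notional value via Spark SQL
raw.createOrReplaceTempView("raw_trades")
trades = spark.sql("""
    SELECT
        trade_id,
        CAST(trade_date AS DATE)   AS trade_date,
        CAST(quantity   AS DOUBLE) AS quantity,
        CAST(price      AS DOUBLE) AS price,
        CAST(quantity AS DOUBLE) * CAST(price AS DOUBLE) AS notional
    FROM raw_trades
    WHERE trade_id IS NOT NULL
""")

# Data quality checks: fail fast on null prices or duplicate trade IDs
total = trades.count()
null_prices = trades.filter(F.col("price").isNull()).count()
dupes = total - trades.dropDuplicates(["trade_id"]).count()
if null_prices > 0 or dupes > 0:
    raise ValueError(f"DQ failure: {null_prices} null prices, {dupes} duplicate trade_ids")

# Store: write validated output partitioned by date (hypothetical location)
trades.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3://example-bucket/curated/trades/"
)

The fail-fast validation step mirrors the "data quality checks and validation" responsibility: in enterprise pipelines it is usually preferable to halt a run on bad inputs than to propagate unreliable data downstream.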