N Consulting Global

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer position based in Glasgow, UK, offered as a 6-month contract at an unspecified day rate. Candidates must have 10+ years of experience; expertise in AWS, Snowflake, Python, and Apache Spark; and banking domain experience. Immediate joiners are preferred.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date
February 26, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Fixed Term
🔒 - Security
Unknown
📍 - Location detailed
Glasgow, Scotland, United Kingdom
🧠 - Skills detailed
#Data Engineering #Data Ingestion #Cloud #Data Warehouse #Athena #Batch #PySpark #S3 (Amazon Simple Storage Service) #Python #Data Pipeline #Compliance #Apache Spark #Data Processing #Spark (Apache Spark) #Data Analysis #Data Quality #Lambda (AWS Lambda) #Snowflake #Programming #Schema Design #SQL (Structured Query Language) #Data Architecture #IAM (Identity and Access Management) #AWS (Amazon Web Services) #Scala #Data Integration #Redshift #Security #ETL (Extract, Transform, Load) #Data Modeling #Datasets
Role description
Role: Data Engineer
Location: Glasgow, UK
Work Mode: Hybrid (3 days from office)
Contract: 6 months
Experience: 10+ years
Start Date: immediate joiners only, or candidates with a maximum of 2-3 weeks' notice
Visa Sponsorship: not available
Must-have skills: AWS cloud ecosystem, Snowflake, Python, Apache Spark, banking domain experience

Role Overview
We are seeking an experienced Data Engineer with strong expertise in the AWS cloud ecosystem, Snowflake, Python, and Apache Spark, along with proven experience in the banking domain. The ideal candidate will design, develop, and optimize scalable data pipelines and modern data platforms that support analytics, reporting, and regulatory requirements.

Key Responsibilities
- Design, build, and maintain scalable data pipelines using AWS services and modern data engineering practices.
- Develop and optimize ETL/ELT workflows using Python and Apache Spark (a minimal sketch appears at the end of this description).
- Implement and manage Snowflake data warehouse solutions, including data modeling, performance tuning, and optimization.
- Work closely with business stakeholders, data analysts, and architects to understand banking data requirements.
- Integrate data from multiple banking systems such as payments, transactions, customer, and risk platforms.
- Ensure data quality, governance, security, and compliance aligned with banking regulations.
- Develop data ingestion frameworks for structured and semi-structured data.
- Optimize data processing performance and cost efficiency within AWS environments.
- Support real-time and batch data processing solutions.
- Document data architecture, data flows, and technical processes.

Required Skills & Qualifications
- 10+ years of experience in data engineering.
- Strong hands-on experience with AWS services (S3, Glue, Lambda, Redshift, EMR, Athena, Step Functions, IAM).
- Extensive experience with Snowflake, including schema design and performance tuning.
- Strong programming skills in Python.
- Hands-on experience with Apache Spark / PySpark.
- Experience building ETL/ELT pipelines and data integration frameworks.
- Strong SQL and data modeling skills.
- Experience working with large-scale datasets.
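For candidates gauging the level of hands-on work expected, here is a minimal sketch of the kind of batch ETL/ELT pipeline the responsibilities above describe: PySpark reading raw files from S3, applying basic data-quality rules, and writing partitioned Parquet for downstream querying. All bucket names, paths, and column names are hypothetical placeholders, not details of the client's actual platform.

```python
# Minimal sketch of a batch ETL pipeline in PySpark.
# All buckets, paths, and columns below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

def build_session(app_name: str = "transactions-etl") -> SparkSession:
    # On EMR or Glue, S3 credentials come from IAM roles, so no keys appear here.
    return SparkSession.builder.appName(app_name).getOrCreate()

def run(spark: SparkSession, source: str, target: str) -> None:
    # Extract: read raw CSV files landed in an S3 prefix.
    raw = (
        spark.read
        .option("header", "true")
        .csv(source)  # e.g. s3://bank-raw/transactions/2026-02-26/
    )

    # Transform: enforce types, drop duplicates, reject unparseable rows.
    cleaned = (
        raw
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
        .withColumn("booked_at", F.to_timestamp("booked_at"))
        .withColumn("booking_date", F.to_date("booked_at"))
        .dropDuplicates(["transaction_id"])     # basic data-quality rule
        .filter(F.col("amount").isNotNull())    # discard rows that failed the cast
    )

    # Load: partitioned Parquet keeps downstream Athena/Snowflake reads cheap.
    (
        cleaned.write
        .mode("overwrite")
        .partitionBy("booking_date")
        .parquet(target)  # e.g. s3://bank-curated/transactions/
    )

if __name__ == "__main__":
    spark = build_session()
    run(spark, "s3://bank-raw/transactions/", "s3://bank-curated/transactions/")
    spark.stop()
```

In practice a job like this would be scheduled via Step Functions or a similar orchestrator and feed a Snowflake warehouse; the sketch only illustrates the pipeline shape, not the client's architecture.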