

N Consulting Global
AWS Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer with 10+ years of experience, focusing on scalable data pipelines using Python and Apache Spark. It is a 12-month hybrid contract in London, requiring AWS familiarity and Agile teamwork skills.
Country
United Kingdom
Currency
£ GBP
-
Day rate
Unknown
-
Date
November 25, 2025
Duration
More than 6 months
-
Location
Hybrid
-
Contract
Fixed Term
-
Security
Unknown
-
Location detailed
London Area, United Kingdom
-
Skills detailed
#Data Processing #Apache Iceberg #Data Quality #Jenkins #Observability #AWS (Amazon Web Services) #Pytest #GitHub #Version Control #Apache Spark #Code Reviews #"ETL (Extract, Transform, Load)" #Data Engineering #Agile #Python #Scala #Data Pipeline #Spark (Apache Spark) #GitLab #Lambda (AWS Lambda) #Batch #S3 (Amazon Simple Storage Service) #Programming #Automated Testing
Role description
Role: AWS Data Engineer
Location: London, UK (Hybrid)
Duration: 12 Months FTC
We are building the next-generation data platform at FTSE Russell, and we want you to shape it with us. Your role will involve:
• 10+ years' experience in Data Engineering required
• Designing and developing scalable, testable data pipelines using Python and Apache Spark
• Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3
• Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing
• Contributing to the development of a lakehouse architecture using Apache Iceberg (a minimal sketch follows this list)
• Collaborating with business teams to translate requirements into data-driven solutions
• Building observability into data flows and implementing basic quality checks
• Participating in code reviews, pair programming, and architecture discussions
• Continuously learning about the financial indices domain and sharing insights with the team
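For illustration, a minimal sketch of the kind of testable PySpark pipeline step described above, appending to an Iceberg table. It assumes a Spark session already configured with an Iceberg catalog; the catalog, table, bucket, and column names are hypothetical placeholders:

from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

def build_daily_aggregate(prices: DataFrame) -> DataFrame:
    """Pure transform: unit-testable without touching S3 or the catalog."""
    return (
        prices
        .groupBy("index_id", "trade_date")
        .agg(F.avg("price").alias("avg_price"))
    )

if __name__ == "__main__":
    spark = SparkSession.builder.appName("daily-aggregate").getOrCreate()
    raw = spark.read.parquet("s3://example-bucket/prices/")  # hypothetical path
    # Append into an Iceberg table assumed to exist in a catalog named "lake".
    build_daily_aggregate(raw).writeTo("lake.indices.daily_aggregate").append()
    spark.stop()

Keeping the transform a pure function of DataFrames is what makes the pipeline testable: it can be exercised in a unit test with a small local DataFrame, with no AWS access needed.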
WHAT YOU'LL BRING:
• Writes clean, maintainable Python code (ideally with type hints, linters, and tests like pytest; a short example follows this list)
• Understands data engineering basics: batch processing, schema evolution, and building ETL pipelines
• Has experience with or is eager to learn Apache Spark for large-scale data processing
• Is familiar with the AWS data stack (e.g. S3, Glue, Lambda, EMR)
• Enjoys learning the business context and working closely with stakeholders
• Works well in Agile teams and values collaboration over solo heroics
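As a hypothetical illustration of that coding style: a small type-hinted helper with a pytest unit test (run with pytest; the function name and behaviour are invented for the example):

def normalise_ticker(raw: str) -> str:
    """Strip whitespace and upper-case an exchange ticker symbol."""
    return raw.strip().upper()

def test_normalise_ticker() -> None:
    # pytest discovers and runs any function named test_*.
    assert normalise_ticker("  ftse ") == "FTSE"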
Nice-to-haves:
It's great (but not required) if you also bring:
• Experience with Apache Iceberg or similar table formats
• Familiarity with CI/CD tools like GitLab CI, Jenkins, or GitHub Actions
• Exposure to data quality frameworks like Great Expectations or Deequ (a framework-free sketch follows this list)
• Curiosity about financial markets, index data, or investment analytics
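A framework-free sketch of the kind of basic quality check mentioned above, in PySpark; in practice a library such as Great Expectations or Deequ could replace it. The column name and fail-fast policy are hypothetical:

from pyspark.sql import DataFrame
from pyspark.sql import functions as F

def check_no_null_keys(df: DataFrame, key: str) -> None:
    """Fail the run if any row is missing the given key column."""
    null_rows = df.filter(F.col(key).isNull()).count()
    if null_rows > 0:
        raise ValueError(f"{null_rows} rows have a null '{key}' column")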