N Consulting Global

AWS Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Engineer in London, UK (Hybrid) on a 12-month fixed-term contract, offering competitive pay. Key skills include Python, Apache Spark, AWS tools (Glue, Lambda, S3), and data engineering fundamentals. Familiarity with Apache Iceberg is a plus.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 13, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Automated Testing #Version Control #Apache Spark #Agile #Apache Iceberg #S3 (Amazon Simple Storage Service) #Data Engineering #GitLab #Scala #Data Quality #Observability #Data Processing #Jenkins #AWS (Amazon Web Services) #Code Reviews #Batch #"ETL (Extract, Transform, Load)" #Python #Data Pipeline #GitHub #Spark (Apache Spark) #Lambda (AWS Lambda) #Pytest #Programming
Role description
Role: AWS Data Engineer
Location: London, UK (Hybrid)
Duration: 12 Months FTC

We are building the next-generation data platform at FTSE Russell, and we want you to shape it with us.

Your role will involve:
• Designing and developing scalable, testable data pipelines using Python and Apache Spark (see the pipeline sketch after this list)
• Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3 (see the Lambda sketch below)
• Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing (see the pytest sketch below)
• Contributing to the development of a lakehouse architecture using Apache Iceberg (see the Iceberg sketch below)
• Collaborating with business teams to translate requirements into data-driven solutions
• Building observability into data flows and implementing basic quality checks (see the quality-check sketch below)
• Participating in code reviews, pair programming, and architecture discussions
• Continuously learning about the financial indices domain and sharing insights with the team

WHAT YOU'LL BRING:
• Writes clean, maintainable Python code (ideally with type hints, linters, and tests like pytest)
• Understands data engineering basics: batch processing, schema evolution, and building ETL pipelines
• Has experience with, or is eager to learn, Apache Spark for large-scale data processing
• Is familiar with the AWS data stack (e.g. S3, Glue, Lambda, EMR)
• Enjoys learning the business context and working closely with stakeholders
• Works well in Agile teams and values collaboration over solo heroics

Nice-to-haves. It's great (but not required) if you also bring:
• Experience with Apache Iceberg or similar table formats
• Familiarity with CI/CD tools like GitLab CI, Jenkins, or GitHub Actions
• Exposure to data quality frameworks like Great Expectations or Deequ
• Curiosity about financial markets, index data, or investment analytics
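
To make the first responsibility concrete, here is a minimal sketch of a testable PySpark batch pipeline of the kind the role describes. The bucket paths, column names ("isin", "updated_at"), and the deduplication logic are illustrative assumptions, not details from the posting.

```python
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window


def deduplicate_latest(df: DataFrame, key: str, ts_col: str) -> DataFrame:
    """Keep only the most recent row per key -- a small, testable unit."""
    w = Window.partitionBy(key).orderBy(F.col(ts_col).desc())
    return (
        df.withColumn("_rn", F.row_number().over(w))
          .filter(F.col("_rn") == 1)
          .drop("_rn")
    )


def main() -> None:
    spark = SparkSession.builder.appName("index-constituents-batch").getOrCreate()
    # Paths are placeholders; real locations would come from job configuration.
    raw = spark.read.parquet("s3://example-raw-bucket/constituents/")
    clean = deduplicate_latest(raw, key="isin", ts_col="updated_at")
    clean.write.mode("overwrite").parquet("s3://example-curated-bucket/constituents/")
    spark.stop()


if __name__ == "__main__":
    main()
```

Keeping the transformation in a pure function, separate from I/O, is what makes the "testable" part of the bullet achievable.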
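The posting asks for tests like pytest. A unit test for the transformation above might look like the sketch below, using a local SparkSession; the module name `pipeline` and the fixture setup are assumptions, as the team's actual test harness may differ.

```python
import pytest
from pyspark.sql import SparkSession

from pipeline import deduplicate_latest  # hypothetical module name


@pytest.fixture(scope="session")
def spark():
    # A small local session is enough for unit-testing transformations.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_keeps_latest_row_per_key(spark):
    df = spark.createDataFrame(
        [("GB0001", "2025-01-01", 10.0), ("GB0001", "2025-01-02", 11.0)],
        ["isin", "updated_at", "price"],
    )
    result = deduplicate_latest(df, key="isin", ts_col="updated_at").collect()
    assert len(result) == 1
    assert result[0]["price"] == 11.0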
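Orchestrating workflows with Lambda and Glue could look like the following S3-triggered handler. The Glue job name and the event wiring are invented for illustration; `boto3.client("glue").start_job_run` is the standard AWS SDK call.

```python
import boto3

glue = boto3.client("glue")


def handler(event, context):
    """Start a Glue job for each newly arrived S3 object.

    Assumes the Lambda is subscribed to S3 ObjectCreated notifications;
    'nightly-constituents-etl' is a placeholder job name.
    """
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="nightly-constituents-etl",
            Arguments={"--input_path": f"s3://{bucket}/{key}"},
        )
    return {"started": len(records)}
```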
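For the Iceberg lakehouse piece, writing to an Iceberg table from Spark typically starts with catalog configuration like the sketch below, here backed by the AWS Glue Data Catalog. The catalog name `lake`, the table name, and the warehouse path are assumptions.

```python
from pyspark.sql import SparkSession

# Iceberg catalog backed by the AWS Glue Data Catalog; names and paths are placeholders.
spark = (
    SparkSession.builder.appName("iceberg-write")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-warehouse/")
    .config("spark.sql.catalog.lake.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
    .getOrCreate()
)

df = spark.read.parquet("s3://example-curated-bucket/constituents/")
# DataFrameWriterV2: creates or replaces the Iceberg table in the Glue catalog.
df.writeTo("lake.indices.constituents").createOrReplace()
```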
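Finally, the "basic quality checks" bullet might begin as simple assertions plus logged metrics, as in this sketch; the threshold and the `isin` key column are illustrative.

```python
import logging

from pyspark.sql import DataFrame
from pyspark.sql import functions as F

log = logging.getLogger("data_quality")


def check_batch(df: DataFrame, min_rows: int = 1) -> None:
    """Fail fast on empty batches or null keys, and log simple metrics."""
    total = df.count()
    null_keys = df.filter(F.col("isin").isNull()).count()
    log.info("rows=%d null_keys=%d", total, null_keys)
    if total < min_rows:
        raise ValueError(f"Expected at least {min_rows} rows, got {total}")
    if null_keys:
        raise ValueError(f"{null_keys} rows have a null 'isin' key")
```

Checks like these are a stepping stone toward the frameworks listed in the nice-to-haves, such as Great Expectations or Deequ.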