Natobotics

AWS Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Engineer based in London, UK, on a 12-month hybrid contract. Key skills include Python, PySpark, AWS services, and data pipeline development. Experience with Apache Iceberg and Agile methodologies is preferred.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
January 15, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
London, England, United Kingdom
-
🧠 - Skills detailed
#Apache Spark #Observability #Jenkins #Cloud #Agile #Apache Iceberg #AWS (Amazon Web Services) #Data Pipeline #Data Processing #Programming #GitHub #Version Control #Code Reviews #GitLab #Batch #ETL (Extract, Transform, Load) #Spark (Apache Spark) #PySpark #Python #Pytest #Scala #Data Engineering #S3 (Amazon Simple Storage Service) #Data Quality #Lambda (AWS Lambda) #Automated Testing
Role description
Role: AWS Data Engineer
Location: London, UK
Duration: 12 Months FTC
Work Mode: Hybrid
Need: Expert in Python, PySpark, AWS, Cloud, AWS Services, AWS Components
• Designing and developing scalable, testable data pipelines using Python and Apache Spark
• Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3
• Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing
• Contributing to the development of a lakehouse architecture using Apache Iceberg
• Collaborating with business teams to translate requirements into data-driven solutions
• Building observability into data flows and implementing basic quality checks
• Participating in code reviews, pair programming, and architecture discussions
• Continuously learning about the financial indices domain and sharing insights with the team
What You'll Bring
• Writes clean, maintainable Python code (ideally with type hints, linters, and tests like pytest)
• Understands data engineering basics: batch processing, schema evolution, and building ETL pipelines
• Has experience with or is eager to learn Apache Spark for large-scale data processing
• Is familiar with the AWS data stack (e.g. S3, Glue, Lambda, EMR)
• Enjoys learning the business context and working closely with stakeholders
• Works well in Agile teams and values collaboration over solo heroics
Nice-to-haves
It's great (but not required) if you also bring:
• Experience with Apache Iceberg or similar table formats
• Familiarity with CI/CD tools like GitLab CI, Jenkins, or GitHub Actions
• Exposure to data quality frameworks like Great Expectations or Deequ
• Curiosity about financial markets, index data, or investment analytics