

Intuition IT – Intuitive Technology Recruitment
AWS Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer in London, UK, with a hybrid work model. The contract starts immediately, offering a competitive pay rate. Key skills include Python, Apache Spark, AWS tools, and data engineering fundamentals.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date
November 13, 2025
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Automated Testing #Version Control #Apache Spark #Agile #Apache Iceberg #S3 (Amazon Simple Storage Service) #Data Engineering #Scala #Observability #Data Processing #AWS (Amazon Web Services) #Code Reviews #Batch #ETL (Extract, Transform, Load) #Python #Data Pipeline #Spark (Apache Spark) #Lambda (AWS Lambda) #Pytest #Programming
Role description
Job Title: AWS Data Engineer
Location: London, UK
Work Type: Hybrid
Start Date: Immediate (joining within 1-2 weeks, depending on notice period)
Key Responsibilities
• Designing and developing scalable, testable data pipelines using Python and Apache Spark (see the sketch after this list)
• Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3
• Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing
• Contributing to the development of a lakehouse architecture using Apache Iceberg
• Collaborating with business teams to translate requirements into data-driven solutions
• Building observability into data flows and implementing basic quality checks
• Participating in code reviews, pair programming, and architecture discussions
• Continuously learning about the financial indices domain and sharing insights with the team
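To make the responsibilities above concrete, here is a minimal sketch of the kind of PySpark batch job they describe: read raw data from S3, run a basic quality check, and append to an Apache Iceberg table. All bucket, catalog, and table names are hypothetical placeholders, not the client's codebase.

```python
# Minimal PySpark batch pipeline sketch: S3 -> quality check -> Iceberg append.
# Assumes an Iceberg catalog named "lake" is configured on the session
# (e.g. spark.sql.catalog.lake = org.apache.iceberg.spark.SparkCatalog)
# and that the target table already exists.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("index-prices-batch")
    .getOrCreate()
)

def load_prices(path: str) -> DataFrame:
    """Read raw CSV price data from S3 into a DataFrame."""
    return spark.read.option("header", "true").csv(path)

def check_quality(df: DataFrame) -> None:
    """Basic quality check: fail fast if required columns contain nulls."""
    bad_rows = df.filter(F.col("ticker").isNull() | F.col("price").isNull()).count()
    if bad_rows > 0:
        raise ValueError(f"Quality check failed: {bad_rows} rows missing ticker/price")

raw = load_prices("s3://example-bucket/raw/prices/")  # hypothetical bucket
check_quality(raw)

# Append to an existing Iceberg table (Iceberg tables support safe schema evolution).
raw.writeTo("lake.indices.daily_prices").append()
```

In practice a job like this would typically be packaged and orchestrated via AWS Glue or EMR Serverless, with Lambda handling event-driven triggers; those details are omitted here.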
What You'll Bring
• Writes clean, maintainable Python code (ideally with type hints, linters, and tests like pytest; see the example after this list)
• Understands data engineering basics: batch processing, schema evolution, and building ETL pipelines
• Has experience with or is eager to learn Apache Spark for large-scale data processing
• Is familiar with the AWS data stack (e.g. S3, Glue, Lambda, EMR)
• Enjoys learning the business context and working closely with stakeholders
• Works well in Agile teams and values collaboration over solo heroics
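As a rough illustration of the Python bar described above (function and test names are invented for this example, not taken from the role), a small typed function plus pytest tests might look like:

```python
# Typed, pure function with focused pytest tests -- the "clean, maintainable
# Python" style the role calls for. All names here are hypothetical.
from __future__ import annotations

import pytest

def daily_return(open_price: float, close_price: float) -> float:
    """Return the fractional daily return for a single instrument."""
    if open_price <= 0:
        raise ValueError("open_price must be positive")
    return (close_price - open_price) / open_price

def test_daily_return_positive() -> None:
    assert daily_return(100.0, 101.0) == pytest.approx(0.01)

def test_daily_return_rejects_zero_open() -> None:
    with pytest.raises(ValueError):
        daily_return(0.0, 101.0)
```

Type hints plus small, focused unit tests like these are what keep pipeline code maintainable as it grows.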





