

Ampstek
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a fixed-term contract in London, UK. Key skills include Python, PySpark, AWS services, and data pipeline development. Experience with Apache Spark and Agile methodologies is essential. Hybrid work environment.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 15, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Apache Spark #Observability #Cloud #Agile #Apache Iceberg #AWS (Amazon Web Services) #Data Pipeline #Data Processing #Programming #Version Control #Code Reviews #Batch #ETL (Extract, Transform, Load) #Spark (Apache Spark) #PySpark #Python #Pytest #Scala #Data Engineering #S3 (Amazon Simple Storage Service) #Lambda (AWS Lambda) #Automated Testing
Role description
Role: AWS Data Engineer
Location: London, UK
Hybrid (partly onsite)
Fixed Term contract
Expertise required in Python, PySpark, and the AWS cloud (core AWS services and components).
Responsibilities:
• Designing and developing scalable, testable data pipelines using Python and Apache Spark (a minimal sketch follows this list)
• Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3
• Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing
• Contributing to the development of a lakehouse architecture using Apache Iceberg
• Collaborating with business teams to translate requirements into data-driven solutions
• Building observability into data flows and implementing basic quality checks
• Participating in code reviews, pair programming, and architecture discussions
• Continuously learning about the financial indices domain and sharing insights with the team
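To make the stack above concrete, here is a minimal, hypothetical sketch of the kind of pipeline these responsibilities describe: a PySpark batch job that reads raw data from S3, applies a basic quality check, and appends to an Apache Iceberg table. All names (catalog, bucket, table, columns) are invented for illustration, and the Iceberg runtime jar plus catalog configuration are assumed to be provided by the cluster (e.g. Glue or EMR Serverless).

```python
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

# Assumes an Iceberg catalog named "lake"; the Hadoop catalog is used here
# only because it needs the least configuration.
spark = (
    SparkSession.builder
    .appName("daily-prices-batch")  # hypothetical job name
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-bucket/warehouse")
    .getOrCreate()
)

def load_raw(path: str) -> DataFrame:
    """Read one day's raw CSV drop from S3 (hypothetical layout)."""
    return spark.read.option("header", True).csv(path)

def check_quality(df: DataFrame) -> DataFrame:
    """Basic quality gate: fail fast if the key column contains nulls."""
    nulls = df.filter(F.col("price").isNull()).count()
    if nulls:
        raise ValueError(f"{nulls} rows are missing 'price'")
    return df

raw = load_raw("s3://example-bucket/raw/prices/2026-01-15/")
clean = check_quality(raw.withColumn("price", F.col("price").cast("double")))
# Appends to an existing Iceberg table; use .createOrReplace() to bootstrap it.
clean.writeTo("lake.indices.daily_prices").append()
```

In a real deployment the catalog would more likely be backed by the AWS Glue Data Catalog; the Hadoop catalog is used here purely to keep the sketch self-contained.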
Required skills
• Writes clean, maintainable Python code (ideally with type hints, linters, and tests like pytest; see the sketch after this list)
• Understands data engineering basics: batch processing, schema evolution, and building ETL pipelines
• Has experience with or is eager to learn Apache Spark for large-scale data processing
• Is familiar with the AWS data stack (e.g. S3, Glue, Lambda, EMR)
• Enjoys learning the business context and working closely with stakeholders
• Works well in Agile teams and values collaboration over solo heroics
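As a rough illustration of the first bullet, this is what "clean, maintainable Python" with type hints and a pytest test might look like; the function and its price-series domain are invented for the example, not taken from the role.

```python
from __future__ import annotations

def rebase_prices(prices: list[float], base: float = 100.0) -> list[float]:
    """Rebase a price series so its first value maps to `base`."""
    if not prices or prices[0] == 0:
        raise ValueError("series must be non-empty with a non-zero first value")
    factor = base / prices[0]
    return [p * factor for p in prices]

def test_rebase_prices() -> None:
    # pytest collects test_* functions automatically; run with `pytest -q`.
    assert rebase_prices([50.0, 55.0]) == [100.0, 110.0]
```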






