Technopride

AWS Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an "AWS Data Engineer" on a 12-month contract, offering £85,000-£90,000 per year. It requires 10+ years of experience, strong Python skills, and expertise in AWS services; familiarity with Apache Spark and core data engineering concepts is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
409
🗓️ - Date
December 5, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Fixed Term
🔒 - Security
Unknown
📍 - Location detailed
London EC1A
🧠 - Skills detailed
#Automation #Scala #Batch #Jenkins #Monitoring #Spark (Apache Spark) #AWS Glue #Apache Iceberg #Cloud #Programming #Delta Lake #Lambda (AWS Lambda) #AWS (Amazon Web Services) #GitHub #Agile #S3 (Amazon Simple Storage Service) #Pytest #Data Quality #Data Pipeline #Data Processing #Python #Version Control #Data Modeling #ETL (Extract, Transform, Load) #Observability #GitLab #Automated Testing #Data Engineering #Apache Spark #Code Reviews #Datasets
Role description
(10+ years of experience required)

Role Overview
We are building a next-generation data platform and are looking for an experienced Senior Data Engineer to help design, develop, and optimize large-scale data solutions. This role involves end-to-end data engineering, modern cloud-based development, and close collaboration with cross-functional stakeholders to deliver reliable, scalable, and high-quality data products.

Key Responsibilities
• Design, develop, and maintain scalable, testable, and high-performance data pipelines using Python and Apache Spark.
• Orchestrate data workflows using cloud-native services such as AWS Glue, EMR Serverless, Lambda, and S3.
• Apply modern engineering practices including modular design, version control, CI/CD automation, and comprehensive testing.
• Support the design and implementation of lakehouse architectures leveraging table formats such as Apache Iceberg.
• Collaborate with business stakeholders to translate requirements into robust data engineering solutions.
• Build observability and monitoring into data workflows; implement data quality checks and validations.
• Participate in code reviews, pair programming, and architecture discussions to promote engineering excellence.
• Continuously expand domain knowledge and contribute insights relevant to data operations and analytics.

What You’ll Bring
• Strong ability to write clean, maintainable Python code using best practices such as type hints, linting, and automated testing frameworks (e.g., pytest).
• Deep understanding of core data engineering concepts, including ETL/ELT pipeline design, batch processing, schema evolution, and data modeling.
• Hands-on experience with Apache Spark, or the willingness and capability to learn large-scale distributed data processing.
• Familiarity with AWS data services such as S3, Glue, Lambda, and EMR.
• Ability to work closely with business and technical stakeholders and translate needs into actionable engineering tasks.
• Strong team collaboration skills, especially within Agile environments, emphasizing shared ownership and high transparency.

Nice-to-Have Skills
• Experience with Apache Iceberg or similar lakehouse table formats (Delta Lake, Hudi).
• Practical exposure to CI/CD tools such as GitLab CI, GitHub Actions, or Jenkins.
• Familiarity with data quality frameworks such as Great Expectations or Deequ.
• Interest or background in financial markets, analytical datasets, or related business domains.

Job Type: Fixed term contract
Contract length: 12 months
Pay: £85,000.00-£90,000.00 per year
Benefits:
• Sabbatical
• Sick pay
Work Location: Hybrid remote in London EC1A