Athsai

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with strong experience in AWS, PySpark, and SQL. The contract is for an unspecified duration at £300/day and allows remote work. Key skills include ETL pipeline development, Apache Airflow, and Terraform.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
300
🗓️ - Date
March 6, 2026
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Outside IR35
🔒 - Security
Unknown
📍 - Location detailed
United Kingdom
🧠 - Skills detailed
#PySpark #Apache Airflow #ETL (Extract, Transform, Load) #Programming #Datasets #Data Pipeline #GitHub #SQL (Structured Query Language) #Cloud #Airflow #Deployment #Scala #Data Engineering #AWS (Amazon Web Services) #Spark (Apache Spark) #Terraform #Lambda (AWS Lambda) #RDS (Amazon Relational Database Service) #S3 (Amazon Simple Storage Service) #Redshift #Python
Role description
🚀 Data Engineer – AWS | PySpark | SQL | Lambda
📍 London / Remote
💼 Contract | £300/day | Outside IR35

🌟 About the Role
We are seeking a talented and driven Data Engineer to join a forward-thinking organisation where data sits at the heart of decision-making. You will design and build scalable, high-performance data pipelines that power analytics and business insights across the company. This is an exciting opportunity to work with modern cloud technologies, contribute to mission-critical systems, and make a real impact in a fast-paced, collaborative environment.

🔧 What You’ll Be Doing
• Designing and implementing robust ETL pipelines to move and transform data at scale
• Building cloud-native solutions on AWS to ensure reliability and performance
• Working with PySpark, Python, and SQL to process large datasets efficiently
• Orchestrating workflows using Apache Airflow
• Leveraging AWS services such as S3, RDS, Redshift, and Lambda
• Deploying infrastructure using Terraform and Infrastructure-as-Code best practices
• Automating releases through CI/CD pipelines with GitHub Actions
• Collaborating closely with engineering and analytics teams to deliver high-quality solutions

✅ Required Skills
• Strong experience with PySpark and AWS
• Proven background building and maintaining ETL pipelines
• Solid programming ability in Python, SQL, and Spark
• Hands-on experience with Apache Airflow
• Deep understanding of AWS data services
• Terraform for cloud deployments
• CI/CD workflow experience using GitHub Actions