

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a 6-month rolling contract, remote, outside IR35. Key skills include AWS, Terraform, Python, Spark, and SQL. Experience in data architecture, ETL/ELT, and CI/CD pipelines is essential.
Country
United Kingdom
Currency
£ GBP
-
Day rate
-
Date discovered
September 27, 2025
Project duration
More than 6 months
-
Location type
Remote
-
Contract type
Outside IR35
-
Security clearance
Unknown
-
Location detailed
United Kingdom
-
Skills detailed
#Deployment #ETL #AWS #DataScience #DataMigration #SparkSQL #IaC #DataLifecycle #Spark #DataArchitecture #Migration #Cloud #Lambda #Docker #Scala #SQL #DataPipeline #Python #S3 #BI #DataEngineering #DataWarehouse #GitHub #Terraform #Redshift #dbt
Role description
Data Engineer - Contract - AWS, Python, Terraform, Spark
• 6-month rolling contract
• Remote based
• Outside IR35
• Must have strong AWS, Terraform, Python, Spark, and SQL skills
Data Engineer role:
• Design and implement scalable data architectures in the cloud, ensuring secure and reliable data pipelines.
• Work across the full data lifecycle, supporting data scientists, analysts, and engineering teams.
• Lead development projects, data modelling, and cloud data platform deployments.
• Mentor data engineers and contribute to best practices across the team.
Data Engineer key skills:
• Strong experience with AWS data services (S3, Redshift, Glue, Lambda, Lake Formation).
• Strong experience with SQL, Python, and Spark or Iceberg.
• Strong Terraform experience.
• Experience with data warehouse design, ETL/ELT, and CI/CD pipelines (GitHub, CodeBuild).
• Knowledge of infrastructure as code, performance tuning, and data migration.
• Exposure to dbt, Docker, or the Microsoft BI stack is a plus.
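Purely as an illustration of the ETL/ELT work the role describes (this sketch is not from the posting), a toy extract-transform-load step can be written in plain Python, with the standard-library sqlite3 module standing in for a warehouse such as Redshift; the `run_etl` function, the `payments` table, and the sample data are all invented for the example:

```python
import csv
import io
import sqlite3


def run_etl(csv_text: str) -> list[tuple]:
    """Toy ETL: parse CSV text, clean it, load it into a warehouse-style table."""
    # Extract: parse the raw CSV into dict rows.
    rows = list(csv.DictReader(io.StringIO(csv_text)))

    # Transform: normalise names and convert amounts to integer pence.
    cleaned = [
        (row["name"].strip().title(), round(float(row["amount"]) * 100))
        for row in rows
    ]

    # Load: insert into an in-memory SQLite table (stand-in for the warehouse).
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE payments (name TEXT, amount_pence INTEGER)")
    con.executemany("INSERT INTO payments VALUES (?, ?)", cleaned)
    return con.execute(
        "SELECT name, amount_pence FROM payments ORDER BY name"
    ).fetchall()
```

In the stack the posting names, the same extract/transform/load shape would typically be a Spark or Glue job writing to S3 and Redshift, with Terraform provisioning the infrastructure and CI/CD (GitHub, CodeBuild) deploying it.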