AWS Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer on a contract basis, offering a competitive pay rate. Key skills include Python, SQL, AWS services, PySpark, and Terraform. Remote work is available, with a focus on ETL pipeline development and cloud-native architectures.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
-
🗓️ - Date discovered
September 26, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Remote
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
United Kingdom
-
🧠 - Skills detailed
#Deployment #Datasets #Agile #Spark (Apache Spark) #Debugging #Data Processing #S3 (Amazon Simple Storage Service) #Snowflake #AWS (Amazon Web Services) #Databricks #ETL (Extract, Transform, Load) #ML (Machine Learning) #Lambda (AWS Lambda) #Cloud #Automation #SQL (Structured Query Language) #Python #PySpark #Infrastructure as Code (IaC) #Data Pipeline #Scala #Terraform #Programming #AI (Artificial Intelligence) #Data Science #Data Engineering
Role description
We are seeking a highly skilled AWS Data Engineer to join our team on a contract basis. The successful candidate will design, develop, and maintain scalable data pipelines and platforms, enabling advanced analytics and AI-driven solutions. This is an exciting opportunity to work on cutting-edge projects in a fully remote environment.

Key Responsibilities
• Design, develop, and maintain ETL/ELT pipelines for large-scale structured and unstructured data.
• Build and optimize data solutions using AWS services (S3, Lambda, Glue, Step Functions, EMR).
• Implement scalable data models and warehouses in Postgres and Snowflake.
• Develop distributed data workflows using Databricks and PySpark APIs.
• Write clean, reusable, and efficient Python code for data transformation and orchestration.
• Automate infrastructure provisioning and deployments using Terraform (IaC).
• Collaborate with data scientists, ML engineers, and stakeholders to deliver high-quality datasets.
• Monitor, troubleshoot, and optimize pipelines for performance and reliability.
• Stay up to date with emerging trends in data engineering, cloud computing, and AI/LLMs.

Required Skills & Experience
• Strong programming experience in Python with ETL pipeline development.
• Advanced SQL skills; hands-on experience with Postgres and Snowflake.
• Proven experience with AWS data/compute services (S3, Lambda, Glue, EMR, Step Functions, CloudWatch).
• Proficiency in PySpark and Databricks for distributed data processing.
• Solid experience with Terraform for infrastructure automation.
• Strong understanding of cloud-native architectures and serverless computing.
• Excellent problem-solving, debugging, and performance optimization skills.
• Effective communication and collaboration in fast-paced, agile teams.
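To give a feel for the pipeline work described in the responsibilities above, here is a minimal PySpark ETL sketch: read raw JSON events from S3, apply a simple cleansing transformation, and write partitioned Parquet back to S3. The bucket names, paths, and column names are hypothetical placeholders for illustration, not details of the actual project.

```python
# Minimal, illustrative PySpark ETL sketch (assumed S3 paths and column names).
from pyspark.sql import SparkSession, functions as F

def run_job(input_path: str, output_path: str) -> None:
    # Local SparkSession; on EMR or Databricks the session is typically
    # provided or configured by the platform.
    spark = (
        SparkSession.builder
        .appName("orders-daily-etl")  # hypothetical job name
        .getOrCreate()
    )

    # Extract: read raw JSON events (schema inferred here for brevity;
    # a production job would declare an explicit schema).
    raw = spark.read.json(input_path)

    # Transform: basic cleansing plus a derived partition column.
    cleaned = (
        raw
        .dropDuplicates(["order_id"])                       # assumed key column
        .filter(F.col("amount").isNotNull())
        .withColumn("order_date", F.to_date("created_at"))  # assumed timestamp column
    )

    # Load: write partitioned Parquet for downstream consumers
    # (e.g. Snowflake external stages or Glue/Athena tables).
    (
        cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet(output_path)
    )

    spark.stop()

if __name__ == "__main__":
    # Hypothetical bucket and prefix names for illustration only.
    run_job(
        "s3://example-raw-bucket/orders/",
        "s3://example-curated-bucket/orders/",
    )
```

On AWS, a job like this could run as a Glue job or an EMR step, with Terraform provisioning the buckets, job definitions, and IAM roles; the PySpark API surface stays largely the same across those environments.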