Gazelle Global

AWS Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer on a contract basis, requiring strong experience in PySpark, SQL, and AWS services. The position involves designing scalable data pipelines and implementing data governance. Advanced Python skills are essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
May 16, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Lambda (AWS Lambda) #Cloud #Data Quality #Spark (Apache Spark) #SQL (Structured Query Language) #Data Processing #Scala #Data Pipeline #Airflow #Data Governance #Data Engineering #PySpark #Athena #Automation #Apache Airflow #Business Analysis #Batch #AWS (Amazon Web Services) #S3 (Amazon Simple Storage Service) #Python #Security #ETL (Extract, Transform, Load)
Role description
Your responsibilities:
• Design, develop, and maintain scalable data pipelines on AWS using Glue, EMR, S3, and Athena for batch and real-time processing.
• Build and optimize ETL workflows using PySpark and SQL, ensuring high data quality, reliability, and performance.
• Orchestrate and schedule data pipelines using Apache Airflow, enabling seamless data movement across systems.
• Collaborate with business analysts and stakeholders to translate data requirements into technical solutions and deliver actionable insights.
• Implement data governance, security, and best practices while working within cloud-native architectures on AWS.
Your Profile
Essential skills/knowledge/experience:
• Strong experience with PySpark, distributed data processing, and large-scale ETL/ELT pipelines.
• Advanced proficiency in Python for data engineering and automation.
• Hands-on expertise with AWS services (S3, Glue, Lambda, EMR, Bedrock / custom model hosting).
• Hands-on experience in SQL and ETL.
Desirable skills/knowledge/experience:
• PySpark, Python, SQL, AWS
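To give a flavour of the "high data quality" responsibility above, here is a minimal, library-free Python sketch of a row-level data-quality check. In the actual role this kind of check would typically be expressed as PySpark DataFrame operations over data read from S3; the column names and threshold here are purely illustrative assumptions, not part of the job description.

```python
def quality_report(rows, required_cols, max_null_ratio=0.1):
    """Summarise a batch: count rows missing required columns and
    flag columns whose null ratio exceeds a threshold."""
    null_counts = {col: 0 for col in required_cols}
    rows_missing_columns = 0
    for row in rows:
        if any(col not in row for col in required_cols):
            rows_missing_columns += 1
        for col in required_cols:
            if row.get(col) is None:
                null_counts[col] += 1
    total = len(rows) or 1  # avoid division by zero on an empty batch
    failing = [c for c, n in null_counts.items() if n / total > max_null_ratio]
    return {
        "rows": len(rows),
        "rows_missing_columns": rows_missing_columns,
        "columns_over_null_threshold": failing,
    }

# Illustrative batch: one record has a null amount.
batch = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 2, "amount": None},
    {"order_id": 3, "amount": 12.5},
]
report = quality_report(batch, ["order_id", "amount"], max_null_ratio=0.25)
print(report)
```

In a production pipeline a report like this would gate whether the batch is written onward or quarantined, which is the sort of reliability control the responsibilities describe.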