RED Global

AWS Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Engineer, contract length of 6 months (hybrid in Milton Keynes). Requires 5+ years in data pipeline engineering, advanced SQL, PySpark, Databricks, AWS, and experience with large datasets and data quality.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 25, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Data Architecture #Scala #Security #Data Quality #Distributed Computing #GIT #Datasets #Data Engineering #AWS (Amazon Web Services) #PySpark #JSON (JavaScript Object Notation) #Microsoft Power BI #Data Analysis #Data Science #SQL (Structured Query Language) #Database Management #Cloud #Data Pipeline #Databricks #Spark (Apache Spark) #BI (Business Intelligence) #Data Integrity #ETL (Extract, Transform, Load) #Logical Data Model
Role description
RED Global are currently looking for an experienced Data Engineer to support a cloud-based data programme. This contract is hybrid in Milton Keynes and will run for an initial 6-month period with a strong chance of extension. The role focuses on building, optimising, and maintaining scalable data pipelines using distributed computing technologies. You will work closely with software developers, data analysts, and data scientists to ensure reliable data delivery across multiple teams, systems, and products.

Key Responsibilities:
• Develop and operationalise scalable, reliable data pipelines
• Build and manage large, complex datasets for business and technical use cases
• Optimise data systems from ingestion through to transformation and consumption
• Implement data quality checks to improve pipeline reliability and data integrity
• Support database management, including logical data modelling, optimisation, and security
• Work with Data & Analytics teams to improve functionality across data platforms
• Provide on-call support and help unblock users based on issue severity

Key Skills Required:
• 5+ years' experience engineering and operationalising data pipelines
• Strong experience working with large, complex datasets
• Advanced SQL
• PySpark
• Databricks
• AWS
• Experience working with JSON
• Git and CI/CD
• Power BI or a similar visualisation tool
• Strong understanding of data quality, pipeline optimisation, and cloud data architecture

If this role interests you, please send an up-to-date CV and we can discuss it in more detail. If not, let me know what you are looking for. If you know someone who might be interested, please forward this advert their way.