

Hays
Cloud Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior AWS Data Engineer in Glasgow, contracted until 31/12/2026, offering £350/day. Key requirements include 8+ years in data engineering, expert AWS CloudFormation skills, and strong experience with AWS data services and Python ETL development.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
£350
-
🗓️ - Date
February 6, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
BPSS (eligibility required)
-
📍 - Location detailed
Glasgow, Scotland, United Kingdom
-
🧠 - Skills detailed
#Automation #API (Application Programming Interface) #Cloud #Data Access #Monitoring #Lambda (AWS Lambda) #Data Storage #DynamoDB #Data Management #S3 (Amazon Simple Storage Service) #PySpark #Spark (Apache Spark) #Scala #ETL (Extract, Transform, Load) #Athena #Storage #Metadata #Data Architecture #Datasets #Data Pipeline #Data Quality #Observability #Compliance #AWS (Amazon Web Services) #IAM (Identity and Access Management) #EC2 #Security #Data Engineering #Python #Data Ingestion
Role description
Description
CONTRACTOR MUST BE ELIGIBLE FOR BPSS
Role Title: AWS Engineer (Contract)
Location: Glasgow
Rate: £350/day through an umbrella company
Duration: until 31/12/2026
Days on site: 2-3
Role Description:
We are seeking a highly skilled Senior AWS Data Engineer with strong hands-on experience building scalable, secure, and automated data platforms on AWS. The ideal candidate will have deep expertise in AWS CloudFormation, data ingestion and transformation services, Python-based ETL development, and orchestration workflows. This role will focus on designing, implementing, and optimizing end-to-end data pipelines, ensuring data quality, reliability, and governance across cloud-native environments.
Key Responsibilities
Data Engineering & Pipeline Development
• Design, develop, and maintain large-scale data pipelines using AWS services such as Glue, Lambda, Step Functions, EMR, DynamoDB, S3, Athena, and other ETL/ELT components.
• Build automated ingestion, transformation, and enrichment workflows for structured and unstructured datasets.
• Implement reusable data engineering frameworks and modular components using Python, PySpark, and AWS-native tooling (a minimal sketch follows this list).
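For context, a minimal sketch of the kind of PySpark transformation step these bullets describe: read raw data from S3, derive a partition key, de-duplicate, and write partitioned Parquet. The bucket names, paths, and column names are hypothetical placeholders, not details from this role.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical Glue/EMR-style job: all paths and columns are illustrative.
spark = SparkSession.builder.appName("ingest-orders").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/orders/")    # placeholder bucket

enriched = (
    raw
    .withColumn("order_date", F.to_date("order_ts"))        # derive partition key
    .withColumn("amount_gbp", F.col("amount").cast("decimal(12,2)"))
    .dropDuplicates(["order_id"])                           # basic de-duplication
)

(enriched.write
    .mode("overwrite")
    .partitionBy("order_date")                              # partitioned S3 layout
    .parquet("s3://example-curated-bucket/orders/"))        # placeholder bucket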
Cloud Infrastructure for Data Platforms
• Develop and manage AWS CloudFormation templates for provisioning secure, scalable data engineering infrastructure (see the sketch after this list).
• Optimize data storage strategies (S3 layouts, partitioning, compression, lifecycle rules).
• Configure and maintain compute services for data workloads (Lambda, ECS, EC2, EMR).
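As an illustration of the CloudFormation and storage-lifecycle work above, a hedged sketch that provisions an encrypted S3 bucket with a lifecycle rule, expressed as a Python dict and deployed with boto3. The stack, bucket, and rule names are assumptions for the example, not taken from the role.

import json
import boto3

# Hypothetical template: an encrypted data-lake bucket that archives
# objects to Glacier after 90 days. All names are placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "CuratedDataBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "BucketEncryption": {
                    "ServerSideEncryptionConfiguration": [
                        {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
                    ]
                },
                "LifecycleConfiguration": {
                    "Rules": [
                        {
                            "Id": "ArchiveAfter90Days",
                            "Status": "Enabled",
                            "Transitions": [
                                {"StorageClass": "GLACIER", "TransitionInDays": 90}
                            ],
                        }
                    ]
                },
            },
        }
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="data-platform-storage",      # placeholder stack name
    TemplateBody=json.dumps(template),
)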
Automation & Orchestration
• Build and enhance orchestration flows using AWS Step Functions, EventBridge, and Glue Workflows (sketched below).
• Implement CI/CD practices for data pipelines and infrastructure automation.
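To make the orchestration bullet concrete, a minimal sketch of an Amazon States Language definition that chains a Lambda ingest step into a Glue job run, created with boto3. Every ARN, name, and account number below is a placeholder assumption.

import json
import boto3

definition = {
    "Comment": "Hypothetical ingest-then-transform flow",
    "StartAt": "Ingest",
    "States": {
        "Ingest": {
            "Type": "Task",
            # Placeholder Lambda ARN for the ingestion step
            "Resource": "arn:aws:lambda:eu-west-2:123456789012:function:ingest",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "Next": "Transform",
        },
        "Transform": {
            "Type": "Task",
            # Service integration: start a Glue job and wait for completion
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "transform-orders"},   # placeholder job
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="orders-pipeline",                                  # placeholder name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/sfn-exec",       # placeholder role
)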
Security, Governance & Best Practices
• Apply strong authentication/authorization mechanisms using IAM, KMS, access policies, and data access controls.
• Ensure compliance with enterprise security standards, encryption requirements, and governance frameworks.
• Implement data quality checks, schema validation, lineage tracking, and metadata management (a minimal example follows this list).
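A minimal example of the data-quality checks mentioned above: a PySpark gate that validates the expected schema and rejects null keys before a dataset is published. The column names, path, and failure policy are illustrative assumptions.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-check").getOrCreate()
df = spark.read.parquet("s3://example-curated-bucket/orders/")  # placeholder path

# Fail fast on schema drift: every expected column must be present.
expected_cols = {"order_id", "order_date", "amount_gbp"}
missing = expected_cols - set(df.columns)
if missing:
    raise ValueError(f"Schema drift: missing columns {sorted(missing)}")

# Reject the batch if any primary-key column is null.
null_keys = df.filter(F.col("order_id").isNull()).count()
if null_keys:
    raise ValueError(f"{null_keys} rows with null order_id; failing the run")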
Collaboration & Troubleshooting
• Work with data architects, platform engineers, analysts, and cross-functional stakeholders to deliver high-quality datasets.
• Troubleshoot pipeline issues, optimize performance, and improve reliability and observability across the data platform.
• Drive continuous improvement in automation, monitoring, and operational efficiency.
Required Skills & Experience
• 8+ years of hands-on experience as a Data Engineer with strong AWS expertise.
• Expert-level proficiency in AWS CloudFormation (mandatory).
• Strong experience with AWS data and compute services:
o Glue, Lambda, Step Functions, EMR
o S3, DynamoDB, Athena
o ECS/EC2 for data workloads where relevant
• Solid experience building ETL/ELT pipelines using Python (and ideally PySpark).
• Strong knowledge of IAM, KMS, encryption, and AWS security fundamentals.
• Ability to design and implement authentication/authorization patterns (OAuth2, API security, IAM roles & policies).
• Strong understanding of distributed systems, data modelling, modern data architectures, and cloud-native design.
• Experience deploying pipelines using CI/CD practices and automated workflows.