

Simon James IT Ltd
Data Engineer - 12 Months, Remote - SC Cleared
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a 12-month, fully remote contract requiring SC clearance. Key skills include ETL development, AWS EMR, PySpark, and SQL. Experience building large-scale data pipelines and ensuring data accuracy is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date
January 31, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Yes
📍 - Location detailed
England, United Kingdom
🧠 - Skills detailed
#Data Pipeline #Deployment #AWS Glue #AWS EMR (Amazon Elastic MapReduce) #Cloud #Data Accuracy #ETL (Extract, Transform, Load) #AWS (Amazon Web Services) #Monitoring #Python #SQL (Structured Query Language) #Data Engineering #Data Processing #PySpark #Informatica #ADF (Azure Data Factory) #Spark (Apache Spark)
Role description
Data Engineer – 12‑Month Contract (Remote)
We're seeking a Data Engineer with strong hands‑on experience in ETL development, AWS Cloud services, and large‑scale data pipelines using EMR and PySpark. This fully remote role focuses on building, optimising, and maintaining modern data processing workflows across cloud environments.
Core Tech Requirements
• ETL/ELT pipeline engineering (Informatica, ADF, AWS Glue or similar)
• PySpark, Python, SQL expertise
• AWS EMR and cloud‑native data processing experience
• Strong data processing and quality best‑practice knowledge
What You’ll Do
• Build and optimise large‑scale ETL/ELT pipelines in AWS
• Develop and tune PySpark jobs for EMR workloads (see the illustrative sketch after this list)
• Improve performance, reliability, and cost efficiency across data workflows
• Ensure data accuracy and consistency across ingestion and processing layers
• Support orchestration, monitoring, and deployments in AWS environments
• Troubleshoot production issues with a focus on long‑term stability
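The responsibilities above centre on PySpark ETL work running on AWS EMR. As a rough illustration only, and not part of the role description itself, the sketch below shows the general shape of such a job: read raw data, apply basic transformations, run a simple data-accuracy check, and write curated output. All bucket names, paths, column names, and thresholds are hypothetical placeholders.

```python
# Illustrative PySpark ETL sketch of the kind of EMR workload described above.
# All S3 paths, column names, and thresholds are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-orders-etl").getOrCreate()

# Extract: read raw source data from a (hypothetical) S3 location.
raw = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: deduplicate, enforce types, and drop rows with missing amounts.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount").isNotNull())
)

# Data-accuracy guard: fail the job if an unexpectedly large share of rows is rejected.
raw_count, clean_count = raw.count(), clean.count()
if raw_count > 0 and clean_count / raw_count < 0.9:
    raise ValueError(f"Rejected too many rows: kept {clean_count} of {raw_count}")

# Load: write partitioned, curated output for downstream consumers.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)

spark.stop()
```

In practice a job like this would typically be submitted as an EMR step (for example via spark-submit) and orchestrated and monitored externally, in line with the orchestration, monitoring, and deployment responsibilities listed above.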






