Data Engineer - SC Cleared

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer - SC Cleared, with a 12-month contract, offering remote work. Key skills include ETL development, AWS cloud experience, and proficiency in PySpark, Python, and SQL. Agile experience is a plus.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date
January 31, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Yes
📍 - Location detailed
United Kingdom
🧠 - Skills detailed
#Datasets #Cloud #Data Ingestion #Data Quality #Data Engineering #S3 (Amazon Simple Storage Service) #Informatica #ADF (Azure Data Factory) #Scala #GitHub #Python #Version Control #AWS (Amazon Web Services) #DevOps #Spark (Apache Spark) #Data Management #Data Pipeline #Metadata #Lambda (AWS Lambda) #Azure #PySpark #Data Processing #GitLab #Data Governance #Data Architecture #AWS Glue #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Automation #Agile #Azure Data Factory #Data Manipulation
Role description
Data Engineer - MUST BE SC Cleared - mostly remote - 12 months

Required Skills & Qualifications
• Strong hands-on experience with ETL development and orchestration (Informatica, Azure, or AWS).
• Solid AWS cloud experience, including working with core data services.
• Expertise in building distributed data pipelines using EMR, PySpark, or similar technologies.
• Strong data processing and transformation experience across large datasets.
• Proficiency in PySpark, Python, and SQL for data manipulation and automation.
• Understanding of data modelling, data warehousing concepts, and performance optimization.
• Familiarity with CI/CD tools (DevOps, GitHub, GitLab).
• Exposure to data governance, metadata management, and data quality frameworks.
• Experience working in Agile environments is a plus.

Key Responsibilities
• Develop, maintain, and optimize ETL pipelines using tools such as Informatica, Azure Data Factory, or AWS Glue.
• Build and manage cloud-based data pipelines leveraging AWS services (e.g. EMR, S3, Lambda, Glue).
• Implement scalable data processing workflows using PySpark, Python, and SQL (a brief illustrative sketch follows this list).
• Design and support data ingestion, transformation, and integration processes across structured and unstructured data sources.
• Collaborate with data architects, analysts, and business stakeholders to understand requirements and deliver reliable data solutions.
• Monitor pipeline performance, troubleshoot issues, and ensure data quality and reliability.
• Contribute to best practices for data engineering, including version control, CI/CD, and automation.
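
For illustration, the kind of PySpark workflow described above might look like the following minimal sketch; the bucket, paths, and column names are hypothetical placeholders, not details of the engagement.

```python
# Minimal illustrative PySpark ETL sketch (hypothetical S3 bucket, paths, and columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Ingest raw data from S3 (placeholder path).
raw = spark.read.parquet("s3://example-bucket/raw/orders/")

# Basic data quality step and transformation: drop incomplete rows,
# then aggregate order amounts per customer per day.
clean = raw.dropna(subset=["customer_id", "amount"])
daily_totals = (
    clean.groupBy("customer_id", F.to_date("order_ts").alias("order_date"))
         .agg(F.sum("amount").alias("total_amount"))
)

# Write the curated output back to S3, partitioned by date.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_totals/"
)

spark.stop()
```

On AWS, a script of this shape would typically run as an EMR step or an AWS Glue job, with the S3 paths supplied as job parameters and the run orchestrated by the wider ETL tooling named above.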