

Fynity
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown" and a pay rate of £350 p/d Outside IR35. It requires expertise in Python, Spark, Airflow, and AWS services, along with SC clearance eligibility. Remote work with occasional London visits.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
350
-
🗓️ - Date
January 27, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Outside IR35
-
🔒 - Security
Yes
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Data Pipeline #Infrastructure as Code (IaC) #Data Processing #Airflow #Cloud #AWS CloudWatch #API (Application Programming Interface) #PySpark #Debugging #Datasets #Redshift #Terraform #Data Engineering #Kafka (Apache Kafka) #Scala #Spark (Apache Spark) #Programming #Data Architecture #Lambda (AWS Lambda) #Data Quality #AWS (Amazon Web Services) #Python #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Big Data #GitHub #Apache Kafka
Role description
Data Engineer
Location: Remote, with occasional visits to London
Contract Rate: £350 p/d Outside IR35
Start Date: Immediate starters only
About the Role
Join a dynamic Digital Transformation Consultancy as a Data Engineer and play a pivotal role in delivering innovative, data-driven solutions for high-profile government clients. You’ll be responsible for designing and implementing robust ETL pipelines, leveraging cutting-edge big data technologies, and driving excellence in cloud-based data engineering.
This role offers the opportunity to work with leading technologies, collaborate with data architects and scientists, and make a significant impact in a fast-paced, challenging environment.
Key Responsibilities:
• Design, implement, and debug ETL pipelines to process and manage complex datasets.
• Leverage big data tools, including Apache Kafka, Spark, and Airflow, to deliver scalable solutions (a minimal Airflow sketch follows this list).
• Collaborate with stakeholders to ensure data quality and alignment with business goals.
• Utilise programming expertise in Python, Scala, and SQL for efficient data processing.
• Build data pipelines using cloud-native services on AWS, including Lambda, Glue, Redshift, and API Gateway.
• Monitor and optimise data solutions using AWS CloudWatch and other tools.
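As an illustration of this kind of work, here is a minimal sketch of an Airflow DAG wrapping a simple PySpark transform. It assumes Airflow 2.4+ and PySpark are installed; the bucket paths, DAG id, and schedule are hypothetical placeholders, not details of the actual engagement.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_etl(**context):
    # Read raw CSV data, apply a simple transform, and write Parquet.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example_etl").getOrCreate()
    raw = spark.read.option("header", True).csv("s3a://example-raw-bucket/input/")
    cleaned = (
        raw.dropDuplicates()
           .withColumn("ingested_at", F.current_timestamp())
    )
    cleaned.write.mode("overwrite").parquet("s3a://example-curated-bucket/output/")
    spark.stop()


with DAG(
    dag_id="example_etl_pipeline",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_etl", python_callable=run_etl)

In practice the Spark step would usually run on a cluster (for example EMR or Glue) rather than inside the Airflow worker; the single-process version above is only for illustration.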
What We’re Looking For:
• Proven hands-on experience in data engineering projects
• Strong hands-on experience designing, implementing, and debugging ETL pipelines
• Expertise in Python, PySpark, and SQL
• Expertise with Spark and Airflow
• Experience designing data pipelines using cloud-native services on AWS
• Extensive knowledge of AWS services such as API Gateway, Lambda, Redshift, Glue, and CloudWatch (see the sketch after this list)
• Infrastructure as Code (IaC) experience deploying AWS resources using Terraform
• Hands-on experience setting up CI/CD workflows using GitHub Actions
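For the AWS side, here is a minimal sketch of a Lambda handler in Python using boto3 that starts a hypothetical Glue job and publishes a custom CloudWatch metric; the Glue job name and metric namespace are placeholders, not part of this role's actual stack.

import boto3

glue = boto3.client("glue")
cloudwatch = boto3.client("cloudwatch")


def handler(event, context):
    # Start a hypothetical Glue ETL job and record that it was triggered.
    run = glue.start_job_run(JobName="example-etl-job")
    cloudwatch.put_metric_data(
        Namespace="Example/DataPipeline",
        MetricData=[{
            "MetricName": "EtlJobTriggered",
            "Value": 1,
            "Unit": "Count",
        }],
    )
    return {"JobRunId": run["JobRunId"]}

In a real deployment this Lambda, its IAM role, and the Glue job would typically be provisioned with Terraform and released through a GitHub Actions workflow, in line with the requirements above.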
SC Clearance Criteria:
• Must be a British Citizen or have resided in the UK for at least 5 consecutive years.
• Detailed employment history for the past 10 years or longer may be required.
Why Join Us?
• Be part of a forward-thinking consultancy driving digital transformation for industry leaders.
• Work with the latest big data and cloud technologies.
• Collaborate with a team of skilled professionals in a fast-paced and rewarding environment.
If you’re passionate about delivering impactful data solutions and meet the criteria for this role, we’d love to hear from you. Apply today and lead the way in digital transformation!






