
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer position on a 12-month contract in London with a negotiable pay rate. It requires 5+ years' experience with Python and SQL, experience with big data frameworks and cloud platforms, and strong problem-solving skills.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Negotiable
🗓️ - Date discovered
September 5, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Inside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Data Pipeline #PostgreSQL #Python #Data Science #Kafka (Apache Kafka) #Data Manipulation #GCP (Google Cloud Platform) #Spark (Apache Spark) #Storage #Airflow #Data Engineering #SQL (Structured Query Language) #Hadoop #Big Data #Programming #Cloud #ETL (Extract, Transform, Load) #Scala #GIT #Kubernetes #BigQuery #AWS (Amazon Web Services) #Docker #Scripting #Linux #Data Framework
Role description
We are seeking a skilled Data Engineer to design, build, and maintain data pipelines and solutions that capture, manage, and transform structured and unstructured data from multiple sources. You’ll play a key role in delivering reliable, scalable, and real-time data tools that support internal teams, partners, and customers.
Based in London with a very attractive hybrid working arrangement, this will be a 12-month contract (extension likely). The rate is negotiable, dependent on experience (inside IR35).
What you’ll do:
• Build and optimise data pipelines, integrating multiple data sources into cloud and database storage solutions.
• Develop automated processes to cleanse, organise, and transform big data while maintaining accuracy and integrity (a minimal sketch follows this list).
• Collaborate with data engineers, software engineers, and data scientists to deliver functional, scalable, and reliable solutions.
• Resolve complex technical issues and document processes clearly.
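To make the pipeline work concrete, here is a minimal sketch of the kind of cleanse-transform-load step described above, using pandas with an SQLite file standing in for a cloud warehouse. All file, table, and column names here are hypothetical illustrations, not part of the role.

```python
import sqlite3
import pandas as pd

# Hypothetical sources: a CSV export and an operational database table.
orders = pd.read_csv("orders.csv", parse_dates=["created_at"])

con = sqlite3.connect("warehouse.db")  # stand-in for the real cloud warehouse
customers = pd.read_sql("SELECT id, name, country FROM customers", con)

# Cleanse: drop duplicates and rows missing required keys, normalise text.
orders = orders.drop_duplicates(subset="order_id").dropna(subset=["customer_id"])
customers["country"] = customers["country"].str.strip().str.upper()

# Transform: join the sources and derive a daily revenue summary.
enriched = orders.merge(customers, left_on="customer_id", right_on="id", how="left")
daily = (
    enriched.groupby([enriched["created_at"].dt.date, "country"])["amount"]
    .sum()
    .reset_index(name="revenue")
    .rename(columns={"created_at": "day"})
)

# Load: write the transformed table back for downstream consumers.
daily.to_sql("daily_revenue", con, if_exists="replace", index=False)
```

In practice the same shape scales up: the CSV and SQLite endpoints get swapped for cloud storage and a warehouse such as BigQuery, with the cleanse and join logic staying largely the same.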
What you’ll need:
• 5+ years’ professional experience in Python (including data manipulation packages) and SQL.
• Strong understanding of Object-Oriented Programming (OOP) and familiarity with Airflow (illustrated after this list).
• Knowledge of the full Software Development Lifecycle.
• Excellent problem-solving skills, attention to detail, and ability to work independently.
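As a rough illustration of the Airflow familiarity asked for above, the sketch below wires a daily extract-transform-load chain using Airflow 2.x's TaskFlow API. The DAG name and task bodies are placeholders, assumed purely for the example.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def daily_ingest():
    @task
    def extract() -> list[dict]:
        # Pull raw records from a source system (stubbed for the sketch).
        return [{"order_id": 1, "amount": 42.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Cleanse records before loading; here, drop non-positive amounts.
        return [r for r in rows if r["amount"] > 0]

    @task
    def load(rows: list[dict]) -> None:
        # Write to the warehouse; replaced with a print for the sketch.
        print(f"loading {len(rows)} rows")

    # Chaining the calls is enough for Airflow to infer task dependencies.
    load(transform(extract()))


daily_ingest()
```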
What will help you succeed:
• Experience with big data frameworks (Spark, Hadoop, Kafka), cloud platforms (AWS, GCP), and data warehousing solutions (PostgreSQL, BigQuery); see the sketch after this list.
• Familiarity with CI/CD pipelines, Docker/Kubernetes, Git, and Linux scripting.
• Strong communication skills and a collaborative mindset, with the ability to mentor team members.
• Background in aviation connectivity or telecommunications is a plus.
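For the big data frameworks mentioned above, a minimal PySpark batch job might look like the following. The bucket paths, column names, and aggregation are illustrative assumptions only.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical Spark job: aggregate events from a parquet dataset by day.
spark = SparkSession.builder.appName("daily-events").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")  # path is illustrative

daily_counts = (
    events
    .where(F.col("event_type").isNotNull())                     # cleanse
    .groupBy(F.to_date("event_ts").alias("day"), "event_type")  # transform
    .count()
)

# Load the summary back to object storage for downstream consumers.
daily_counts.write.mode("overwrite").parquet("s3://example-bucket/daily_counts/")
spark.stop()
```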
If you’re passionate about building innovative data solutions and enjoy working with complex systems at scale, we’d love to hear from you.