Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer contract until March 2026, offering £650–£750/day, hybrid working (1 day/week in London). Key skills include Azure Synapse Analytics, ETL, PySpark, SQL, and data modelling. Experience with Azure Data Lakes and pipeline management is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
750
-
🗓️ - Date discovered
September 3, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Inside IR35
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Automation #Data Lake #Storage #Python #Synapse #PySpark #Spark (Apache Spark) #Databases #Azure Databricks #Azure Synapse Analytics #Databricks #SQL (Structured Query Language) #Data Engineering #Data Pipeline #ETL (Extract, Transform, Load) #Azure
Role description
🚀 Contract Opportunity: Data Engineers (Inside IR35)
📅 Until March 2026 | 💰 £650–£750/day | Hybrid (1 day per week in London)

Opportunity for an experienced Data Engineer to join a high-impact programme delivering enterprise-scale data solutions. This is a fantastic opportunity to work with cutting-edge Azure technologies in a collaborative, forward-thinking environment.

🔍 Role Overview
As a Data Engineer, you'll play a key role in designing, building, and optimising the data pipelines and models that power critical business insights. You'll be part of a skilled team working on large-scale data transformation projects.

✅ Essential Skills & Experience
To hit the ground running, you'll need hands-on expertise in:
• Azure Synapse Analytics
• ETL (Extract, Transform, Load) processes
• PySpark
• SQL (advanced querying and optimisation)
• Data modelling (conceptual, logical, and physical)
• Azure Databricks
• Experience working with Azure Data Lakes, data warehousing, pipelines, and Storage accounts
• Experience building and managing pipelines, including:
• building data interfaces to source systems
• combining and transforming data into appropriate storage formats
• engineering data sets for analytics purposes
• developing automated, robust pipelines with error handling
• Experience identifying and resolving issues in databases, data processes, data products, and services
• Proficiency in T-SQL and Python for developing automation

This is a brilliant opportunity to contribute to a major data-driven initiative with long-term stability and a competitive day rate. If you're a proactive, detail-oriented engineer with a passion for data, we'd love to hear from you.

📩 Apply now or get in touch for more information.