Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in Manchester, UK, on a 6-month contract (outside IR35) at £500-£650 per day. Key skills include Databricks, Spark, Delta Lake, and Python; Azure experience is preferred.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
£500-£650 per day
🗓️ - Date discovered
June 3, 2025
🕒 - Project duration
6 months
🏝️ - Location type
On-site
📄 - Contract type
Outside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
Manchester
🧠 - Skills detailed
#Scala #Data Engineering #Data Warehouse #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Version Control #Delta Lake #DevOps #Data Analysis #Azure #Databricks #SQL (Structured Query Language) #Python #Datasets #Apache Spark #Data Pipeline #Cloud #Data Quality
Role description
Job Title: Contract Data Engineer
Location: Manchester, UK (3 days on-site per week)
Contract Type: 6 months (outside IR35)
Day Rate: £500-£650 per day (DOE)
Start Date: ASAP

About the Role:
We're looking for a skilled Data Engineer with strong experience working with Databricks to join an exciting 6-month contract, contributing to a high-impact data platform initiative for a leading organisation in Manchester. This is a part-time, flexible contract (3 days per week, on-site) and outside IR35. You'll work closely with a data team to design, build, and optimise data pipelines on the Databricks Lakehouse Platform, helping to modernise and scale enterprise data infrastructure.

Key Responsibilities:
• Build and maintain high-performance ETL/ELT pipelines using Databricks and Apache Spark (a minimal PySpark sketch follows the requirements list)
• Implement and support scalable Delta Lake solutions on the Azure platform (see the upsert sketch below)
• Collaborate with data analysts, scientists, and business stakeholders to deliver clean, validated, and governed datasets
• Tune performance and troubleshoot Spark workloads (see the tuning sketch below)
• Contribute to best practices in data quality, version control, and DevOps for data

Requirements:
• Proven hands-on experience with Databricks, Spark, and Delta Lake in production environments
• Proficient in Python (or Scala) for data engineering workflows
• Strong SQL and data transformation skills
• Experience with Azure (preferred) or other cloud platforms
• Solid understanding of data warehouse/lakehouse concepts and data modelling
• Comfortable working independently with limited supervision
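To give a feel for the first responsibility, here is a minimal PySpark sketch of the kind of ETL pipeline the role describes: read raw files, apply basic cleaning and a data-quality filter, and write to a partitioned Delta Lake table. The paths, column names, and schema are hypothetical placeholders, not details from the posting.

```python
# Minimal ETL sketch: extract raw CSVs, apply basic cleaning/validation,
# and load into a partitioned Delta Lake table. All paths and column
# names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession named `spark` already exists; this line
# just makes the sketch self-contained elsewhere.
spark = SparkSession.builder.getOrCreate()

# Extract: read raw landing-zone files
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/mnt/landing/orders/"))

# Transform: deduplicate, normalise types, and drop invalid rows
cleaned = (raw
           .dropDuplicates(["order_id"])
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
           .filter(F.col("amount") > 0))

# Load: append to a Delta table partitioned by date
(cleaned.write
 .format("delta")
 .mode("append")
 .partitionBy("order_date")
 .save("/mnt/curated/orders_delta"))
```

On Databricks a pipeline like this would typically run as a scheduled job; partitioning by date keeps downstream reads pruned to the slices they need.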
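For the Delta Lake responsibility, incremental loads are commonly handled with a MERGE (upsert) rather than a blind append. A sketch using the delta-spark DeltaTable API follows; the table paths and join key are again assumptions for illustration.

```python
# Upsert sketch with Delta Lake's MERGE: update existing rows by key,
# insert new ones. Paths and the join key are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.format("delta").load("/mnt/staging/orders_updates")
target = DeltaTable.forPath(spark, "/mnt/curated/orders_delta")

(target.alias("t")
 .merge(updates.alias("s"), "t.order_id = s.order_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```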
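And for the tuning bullet, a few standard moves when troubleshooting Spark workloads: broadcasting a small dimension table to avoid a shuffle-heavy join, inspecting the physical plan, and compacting small files with Delta's OPTIMIZE. Table locations and the Z-order column are assumptions; OPTIMIZE ... ZORDER BY requires Databricks or Delta Lake 2.0+.

```python
# Tuning sketch: broadcast join, plan inspection, and file compaction.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

facts = spark.read.format("delta").load("/mnt/curated/orders_delta")
dim = spark.read.format("delta").load("/mnt/curated/customers_delta")

# Broadcast the small dimension table so the join avoids a full shuffle
joined = facts.join(F.broadcast(dim), "customer_id")

# Inspect the physical plan when diagnosing skew or excessive shuffles
joined.explain(mode="formatted")

# Compact small files and co-locate rows by a frequent filter column
spark.sql("OPTIMIZE delta.`/mnt/curated/orders_delta` ZORDER BY (customer_id)")
```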