Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer contract for 3 months (possible extension), remote, paying up to £400 per day inside IR35. It requires solid experience with Databricks, Python, SQL, and data pipeline tools. Strong communication skills and a self-starter attitude are essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
400
-
🗓️ - Date discovered
September 23, 2025
🕒 - Project duration
3 to 6 months
-
🏝️ - Location type
Remote
-
📄 - Contract type
Inside IR35
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
United Kingdom
-
🧠 - Skills detailed
#Consulting #Logging #Monitoring #AWS (Amazon Web Services) #Databases #Databricks #SQL (Structured Query Language) #Data Pipeline #PySpark #Azure #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Scala #Data Engineering #ML (Machine Learning) #Python #Data Quality #Cloud #Data Science #Storage #SQL Queries #Data Extraction #GCP (Google Cloud Platform)
Role description
Job Description: Contract Data Engineer (3-Month, Possible Extension)
Location: Remote
Contract Duration: 3 months (with possibility to extend)
Rate: Up to £400 per day inside IR35
Company: Data science and engineering consultancy helping brands transform using people, data, and design thinking.

Role Overview
We're looking for a talented Data Engineer for a short-term contract project to help build, optimise, and maintain data infrastructure and pipelines. You'll work closely with our data science, engineering, and analytics teams to take raw data through to usable formats, ensuring reliability, scalability, and performance. This role is ideal if you thrive in fast-paced environments and can hit the ground running with minimal oversight.

Key Responsibilities
• Design, build, and maintain data pipelines and ETL/ELT workflows, especially for large-scale data sets.
• Use Databricks / PySpark to process, transform, and clean data efficiently (a minimal illustrative sketch follows this description).
• Write performant SQL queries for data extraction, analysis, and validation.
• Collaborate with data scientists / ML engineers to enable downstream model building (data availability, feature engineering, etc.).
• Ensure data quality, monitoring, and error handling: logging, alerting, and handling data drift or schema changes.
• Optimise compute and storage costs, scale pipelines, and ensure job performance.
• Document pipelines and data schemas, and maintain good practices (versioning, modular design).
• Assist with support and troubleshooting when data or pipeline issues arise.

Must-Have Skills & Experience
• Solid experience working with Databricks (setting up notebooks, jobs, clusters, and job orchestration).
• Strong Python skills, especially with PySpark.
• Very competent with SQL (both for analytics queries and data engineering).
• Experience with data pipeline tools / ETL frameworks.
• Awareness of best practices in data versioning, testing, and monitoring.
• Good communication skills; able to liaise with both technical and non-technical stakeholders.
• Self-starter; able to rapidly understand existing architecture, adapt, and deliver.

Nice to Have
• Experience with cloud environments (AWS, GCP, Azure).
• Experience with streaming or real-time data pipelines.
• Understanding of vector databases or ML-friendly data engineering (feature stores, etc.).
• Experience working in consulting / client-facing roles.
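Purely as an illustration of the kind of Databricks / PySpark work described above (not part of the job specification), here is a minimal sketch of a cleaning step with a basic data-quality check. The source path, target path, and column names are hypothetical assumptions.

```python
# Illustrative sketch only: hypothetical paths and columns, not a client spec.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Read raw data from a hypothetical landing zone.
raw = spark.read.json("/mnt/raw/orders/")

# Basic cleaning: drop rows missing the key, normalise types, deduplicate.
clean = (
    raw.dropna(subset=["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
)

# Simple data-quality check logged before the write step.
null_amounts = clean.filter(F.col("amount").isNull()).count()
if null_amounts > 0:
    print(f"WARNING: {null_amounts} rows have a null amount")

# Write to a hypothetical curated Delta layer.
clean.write.format("delta").mode("overwrite").save("/mnt/curated/orders/")
```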