SPG Resourcing
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a 6-month remote Data Engineer contract focused on Databricks, requiring expertise in data migration, Spark, and SQL. Key skills include building data pipelines and data modelling, plus familiarity with Delta Lake and with Jira for project management.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
476
-
🗓️ - Date
January 27, 2026
🕒 - Duration
6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
Leeds, England, United Kingdom
-
🧠 - Skills detailed
#Data Pipeline #Apache Spark #Databricks #Cloud #Migration #PySpark #Datasets #Spark SQL #Data Lifecycle #Data Engineering #Scala #Delta Lake #Spark (Apache Spark) #Data Quality #Documentation #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Jira #Data Migration
Role description
Job Title: Data Engineer (Databricks)
Location: Remote (on-site in Derby once a month)
Employment Type: 6-month Contract
Rate: Market Rate (Outside IR35)
About the Role:
I’m looking to speak with an experienced Data Engineer to support a major data platform modernisation, centred around Databricks. You’ll play a key role in migrating data from legacy/on-premise systems to a modern cloud-based analytics platform, building scalable data pipelines and robust data models to support reporting and advanced analytics.
This role will involve hands-on engineering across the full data lifecycle — from ingestion and transformation through to modelling and optimisation — working closely with analytics, reporting, and business stakeholders.
Key Responsibilities:
• Design, build, and maintain data pipelines using Databricks (Spark / PySpark / SQL); a minimal sketch follows this list.
• Support the migration of on-premise data sources to a cloud-based Databricks platform.
• Develop and optimise data models (e.g. dimensional, star schema, or medallion architecture).
• Implement best practices for Delta Lake, performance tuning, and cost optimisation.
• Identify and resolve data quality, reliability, and performance issues during migration.
• Collaborate with stakeholders to translate business requirements into scalable data solutions.
• Contribute to sprint planning, delivery, and issue tracking using Jira.
• Help establish and promote data engineering standards, governance, and documentation.
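To make the pipeline and migration bullets above concrete, here is a minimal PySpark sketch of a medallion-style bronze-to-silver step on Databricks. It is an illustration only: the landing path, table names, and columns (raw_orders_path, bronze.orders, silver.orders, order_id, order_total, order_ts) are hypothetical placeholders, not details taken from this role.

from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

raw_orders_path = "/mnt/landing/orders/"  # hypothetical landing zone

# Bronze: land the raw files as-is, stamped with ingestion metadata.
bronze = (
    spark.read.format("json")
    .load(raw_orders_path)
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver: deduplicate, apply basic data-quality rules, and conform types
# so the table is ready for modelling and reporting.
silver = (
    spark.table("bronze.orders")
    .dropDuplicates(["order_id"])
    .filter(F.col("order_total").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

In a real migration the same pattern would typically be parameterised per source system and scheduled as a Databricks job, with the silver layer feeding the dimensional or star-schema models mentioned above.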
Tech Stack & Tools:
Core Technologies:
• Databricks
• Apache Spark (PySpark / Spark SQL)
• Delta Lake
• SQL
Focus Areas:
• Databricks platform development
• Cloud data migration
• Data modelling & analytics-ready datasets
• Modern data engineering best practices (see the maintenance sketch below)
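On the Delta Lake performance-tuning and cost-optimisation side, routine table maintenance often looks like the short sketch below. It continues the hypothetical silver.orders table (and spark session) from the earlier example; OPTIMIZE with ZORDER BY and VACUUM are standard Delta Lake commands on Databricks.

# Compact small files and co-locate rows by a frequently filtered column,
# improving scan performance via data skipping (hypothetical table/column).
spark.sql("OPTIMIZE silver.orders ZORDER BY (order_date)")

# Remove data files no longer referenced by the table's transaction log,
# reclaiming storage (subject to the default 7-day retention window).
spark.sql("VACUUM silver.orders")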
SPG Resourcing is an equal opportunities employer and is committed to fostering an inclusive workplace which values and benefits from the diversity of the workforce we hire. We offer reasonable accommodation at every stage of the application and interview process.