

Peaple Talent
Databricks Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Databricks Data Engineer (contract) in London, offering £500–£650 per day for an initial 6-month period. Key skills include Databricks, Spark (PySpark/SQL), and cloud experience (AWS/Azure). Strong data pipeline and ETL/ELT expertise required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
650
-
🗓️ - Date
May 2, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Migration #Scala #Microsoft Azure #Data Engineering #Spark SQL #Web Services #Data Pipeline #Spark (Apache Spark) #SQL (Structured Query Language) #Data Quality #AWS (Amazon Web Services) #Azure #Databricks #Datasets #Cloud #Airflow #Data Processing #PySpark #ETL (Extract, Transform, Load)
Role description
Databricks Data Engineer (Contract) | London (Hybrid) | £500–£650 per day (Inside IR35)
Peaple Talent are working with a data consultancy that partners with a range of enterprise clients to deliver modern cloud data platforms. They’re currently looking for a Databricks Data Engineer to support multiple client engagements across large-scale data transformation programmes.
💻 The Role
This is a hands-on Databricks Data Engineering contract where you’ll be working across different client environments, helping design, build and optimise scalable data platforms.
You’ll be focused on delivering robust data pipelines and analytics-ready datasets using Databricks, working across cloud environments such as Amazon Web Services or Microsoft Azure depending on the client. The work will range from greenfield builds through to modernisation of existing data estates.
🔧 Key Responsibilities
• Design and build scalable data pipelines using Databricks
• Develop and optimise ETL/ELT workflows using Spark (PySpark/SQL)
• Support migration and modernisation of legacy data platforms into Databricks
• Improve performance, reliability and cost efficiency across data workloads
• Work closely with Data Engineers, Analysts and Architects across client teams
• Implement data quality, governance and best practices across delivery
✅ What We’re Looking For
• Strong hands-on experience with Databricks in a Data Engineering capacity
• Solid experience with Spark (PySpark / Spark SQL)
• Proven track record building and optimising data pipelines (ETL/ELT)
• Experience working within cloud environments (AWS or Azure)
• Strong SQL skills and experience working with large-scale datasets
• Familiarity with orchestration tools such as Airflow or similar
• Good understanding of data modelling and distributed data processing concepts
• Comfortable working across multiple client environments and stakeholders
🎁 What’s on Offer
• Day Rate: £500–£650 (Inside IR35)
• Location: London
• Hybrid working: 3 days a week onsite
• Initial 6-month contract (strong likelihood of extension)
• Exposure to multiple enterprise data transformation programmes
• Opportunity to work across a variety of Databricks implementations within a consultancy environment






