

Revolution Technology
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on an initial 6-month contract, hybrid (3 days onsite in London, 2 remote), paying £430/day. Key skills include Python, Databricks (AWS), machine learning, data engineering principles, and version control (Git, CI/CD).
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
430
-
🗓️ - Date
November 14, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Automation #ETL (Extract, Transform, Load) #Version Control #Programming #Data Processing #Python #Databricks #Data Engineering #Git #MLflow #AWS (Amazon Web Services) #ML (Machine Learning)
Role description
Data Engineer
Initial 6-month contract | Hybrid: 3 days onsite (London), 2 remote | £430/day (Inside IR35, via umbrella)
We are currently on the lookout for a Data Engineer for an exciting project with our consultancy client! We are looking for a hands-on problem solver with a strong foundation in Python development, Databricks (preferably on AWS), and a good understanding of machine learning (ML) workflows.
What I’m looking for:
• Strong programming experience in Python, including data processing, automation, and ETL development.
• Hands-on experience with Databricks (ideally on AWS), including notebook workflows, cluster management, and performance tuning.
• Solid understanding of data engineering principles: pipelines, orchestration, and version control (Git, CI/CD).
• Familiarity with machine learning (ML) development and tools such as MLflow or equivalent.