

Tenth Revolution Group
Databricks Engineer - £400PD - Remote
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Engineer with 3+ years of Azure Data Engineering experience, offering £400PD for a remote contract. Key skills include Databricks, PySpark, Azure Data Lake Storage, and CI/CD practices. Experience with streaming technologies is a plus.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
£400
-
🗓️ - Date
November 21, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London
-
🧠 - Skills detailed
#Data Engineering #Data Lakehouse #Python #Azure cloud #Azure DevOps #Scrum #Data Lake #ETL (Extract, Transform, Load) #Distributed Computing #Vault #GitHub #ADF (Azure Data Factory) #Azure ADLS (Azure Data Lake Storage) #Azure Databricks #Data Governance #Delta Lake #Databricks #MLflow #Cloud #Agile #Terraform #Storage #Azure #Programming #Kafka (Apache Kafka) #Synapse #Security #ADLS (Azure Data Lake Storage) #Spark (Apache Spark) #AI (Artificial Intelligence) #Version Control #PySpark #SQL (Structured Query Language) #Scala #DevOps #Data Science #Infrastructure as Code (IaC) #Azure Data Factory #Data Pipeline
Role description
Databricks Engineer - £400PD - Remote
About the Role
We are seeking a highly skilled Azure Data Engineer with deep, hands-on Databricks experience to join our growing data engineering team. In this role, you will design, build, and optimise scalable data pipelines and lakehouse architectures on Azure, enabling advanced analytics and data-driven decision-making across the business.
Key Responsibilities
• Design, develop, and maintain ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake.
• Build and optimise data lakehouse architectures on Azure Data Lake Storage (ADLS).
• Develop high-performance data solutions using Azure Synapse, Azure Data Factory, and Databricks workflows.
• Implement best practices for data governance, security, and quality across all pipelines.
• Collaborate with data scientists, analysts, and cross-functional teams to deliver reliable, production-grade data models.
• Monitor and tune pipeline performance to ensure efficiency, reliability, and cost optimisation.
• Participate in CI/CD processes and infrastructure-as-code solutions using tools like Terraform, GitHub Actions, or Azure DevOps.
Required Skills & Experience
• 3+ years' experience as a Data Engineer working in Azure environments.
• Strong hands-on experience with Databricks (PySpark, Delta Lake, cluster optimisation, job scheduling).
• Solid knowledge of Azure cloud services including:
• Azure Data Lake Storage
• Azure Data Factory
• Azure Synapse / SQL Pools
• Azure Key Vault
• Strong programming skills in Python and SQL.
• Experience building scalable, production-grade data pipelines.
• Understanding of data modelling, data warehousing concepts, and distributed computing.
• Familiarity with CI/CD, version control, and DevOps practices.
Nice-to-Have
• Experience with streaming technologies (e.g., Spark Structured Streaming, Event Hub, Kafka).
• Knowledge of MLflow, Unity Catalog, or advanced Databricks features.
• Exposure to Terraform or other IaC tools.
• Experience working in Agile/Scrum environments.
To apply for this role, please submit your CV or contact Dillon Blackburn.
Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We're the proud sponsor and supporter of SQLBits, Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.