

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (Databricks SME) on a 6-month remote contract, with pay rate DOE. Key skills include Azure Databricks, Python, SQL, and cloud architecture. Experience in retail or finance sectors is highly desirable.
Country: United Kingdom
Currency: £ GBP
Day rate: -
Date discovered: August 15, 2025
Project duration: More than 6 months
Location type: Remote
Contract type: Outside IR35
Security clearance: Unknown
Location detailed: United Kingdom
Skills detailed
#DevOps #Alation #Delta Lake #BI (Business Intelligence) #Data Lake #Migration #Data Pipeline #Spark (Apache Spark) #Databricks #Python #Cloud #Compliance #Data Governance #Security #Scala #Data Engineering #Azure Databricks #Data Quality #Dimensional Modelling #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #MLflow #Azure
Role description
Job Title: Data Engineer (Databricks SME)
Rate: DOE (outside IR35)
Location: Remote
Contract Length: 6 months
A consultancy client of ours has secured a project requiring an Azure Databricks expert. This is an exciting opportunity to work on cutting-edge data projects, building scalable data pipelines and cloud-based systems that deliver real impact.
Key Responsibilities:
• Lead the design, development and optimisation of scalable data solutions using Azure Databricks
• Provide subject matter expertise on Databricks architecture, best practices and performance tuning
• Collaborate with data engineering, BI and analytics teams to deliver robust and reusable data pipelines
• Drive the adoption of Databricks features such as Delta Lake, Unity Catalog, and MLflow where appropriate
• Support the migration of legacy ETL processes to Databricks-based workflows
• Ensure data quality, governance and security standards are met across all Databricks solutions
• Mentor and upskill team members in Databricks usage and data engineering techniques
• Troubleshoot complex technical issues and act as the escalation point for Databricks-related queries
• Contribute to the continuous improvement of the data platform, tooling and engineering practices
• Work closely with stakeholders to understand data needs and deliver fit-for-purpose solutions at pace
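To give a flavour of the data-quality responsibility above, here is a minimal, illustrative sketch of a row-level quality gate in plain Python. The field names ("order_id", "amount") and rules are hypothetical, not taken from the role; in a Databricks pipeline this kind of check would typically be expressed in PySpark before writing curated data to a Delta table.

```python
# Illustrative only: a simple row-level data-quality gate of the kind a
# pipeline might apply before persisting curated data. Field names and
# rules are hypothetical examples, not from the job description.

def validate_rows(rows):
    """Split rows into (valid, rejected) using basic quality rules."""
    valid, rejected = [], []
    for row in rows:
        if row.get("order_id") is None:
            rejected.append((row, "missing order_id"))
        elif not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
            rejected.append((row, "invalid amount"))
        else:
            valid.append(row)
    return valid, rejected

sample = [
    {"order_id": 1, "amount": 9.99},
    {"order_id": None, "amount": 5.00},
    {"order_id": 2, "amount": -1.0},
]
ok, bad = validate_rows(sample)
print(len(ok), len(bad))  # 1 2
```

The same split-and-quarantine pattern scales naturally to Spark DataFrames, where rejected records are usually routed to a separate table for investigation.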
Experience and Qualifications Required:
• Extensive hands-on experience with Azure Databricks, including Delta Lake, notebooks, and job orchestration
• Strong proficiency in Python, SQL and Spark for building and optimising data pipelines
• Solid understanding of cloud architecture, ideally within Azure, including Data Lake, Data Factory and related services
• Experience designing and implementing data solutions using dimensional modelling (e.g. Kimball methodology)
• Proven track record of delivering data products in large-scale, enterprise environments
• Familiarity with data governance, security, and compliance frameworks
• Experience with CI/CD practices and DevOps tools in a data engineering context
• Strong problem-solving skills and ability to troubleshoot complex data issues
• Excellent communication and stakeholder engagement skills across technical and non-technical teams
• Previous experience mentoring or upskilling engineers in Databricks or data engineering practices
• Experience working in retail or finance sectors is highly desirable
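For context on the Kimball-style dimensional modelling mentioned above, the core idea can be sketched as a toy star schema: a fact table of measurements joined to descriptive dimension tables. The tables and columns below are invented for illustration, and SQLite is used purely for portability; on this role the equivalent would be Databricks SQL over Delta tables.

```python
import sqlite3

# Toy Kimball-style star schema: one fact table referencing a dimension.
# Table and column names are illustrative, not taken from the role.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_name TEXT,
    category     TEXT
);
CREATE TABLE fact_sales (
    sale_id     INTEGER PRIMARY KEY,
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity    INTEGER,
    revenue     REAL
);
""")
cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "Gizmo", "Hardware")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(10, 1, 3, 30.0), (11, 1, 1, 10.0), (12, 2, 2, 50.0)])

# Typical dimensional query: aggregate the fact, grouped by a dimension attribute.
cur.execute("""
SELECT p.product_name, SUM(f.revenue)
FROM fact_sales f
JOIN dim_product p ON f.product_key = p.product_key
GROUP BY p.product_name
ORDER BY p.product_name
""")
rows = cur.fetchall()
print(rows)  # [('Gizmo', 50.0), ('Widget', 40.0)]
```

The design choice here is the Kimball one: keep measures in a narrow fact table and push descriptive attributes into dimensions, so BI queries reduce to join-group-aggregate patterns like the one above.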
If this sounds like an exciting opportunity, please apply with your CV.