Data Consultant

⭐ - Featured Role | Apply direct with Data Freelance Hub
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
-
🗓️ - Date discovered
September 16, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Remote
-
📄 - Contract type
Outside IR35
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
United Kingdom
-
🧠 - Skills detailed
#Azure Databricks #Security #Dimensional Modelling #SQL (Structured Query Language) #Migration #MLflow #Alation #Spark (Apache Spark) #Python #Data Pipeline #Delta Lake #Cloud #Data Lake #Data Quality #ETL (Extract, Transform, Load) #DevOps #Data Governance #Databricks #BI (Business Intelligence) #Scala #Data Engineering #Compliance #Azure
Role description
Job Title: Data Consultant (Databricks SME)
Rate: DOE (outside IR35)
Location: Remote
Contract Length: 6 months

A consultancy client of ours has secured a project requiring an Azure Databricks expert. This is an exciting opportunity to work on cutting-edge data projects, building scalable data pipelines and cloud-based systems that deliver real impact.

Key Responsibilities:
• Lead the design, development and optimisation of scalable data solutions using Azure Databricks
• Provide subject matter expertise on Databricks architecture, best practices and performance tuning
• Collaborate with data engineering, BI and analytics teams to deliver robust and reusable data pipelines
• Drive the adoption of Databricks features such as Delta Lake, Unity Catalog and MLflow where appropriate
• Support the migration of legacy ETL processes to Databricks-based workflows
• Ensure data quality, governance and security standards are met across all Databricks solutions
• Mentor and upskill team members in Databricks usage and data engineering techniques
• Troubleshoot complex technical issues and act as the escalation point for Databricks-related queries
• Contribute to the continuous improvement of the data platform, tooling and engineering practices
• Work closely with stakeholders to understand data needs and deliver fit-for-purpose solutions at pace

Experience and Qualifications Required:
• Extensive hands-on experience with Azure Databricks, including Delta Lake, notebooks and job orchestration
• Strong proficiency in Python, SQL and Spark for building and optimising data pipelines
• Solid understanding of cloud architecture, ideally within Azure, including Data Lake, Data Factory and related services
• Experience designing and implementing data solutions using dimensional modelling (e.g. the Kimball methodology)
• Proven track record of delivering data products in large-scale, enterprise environments
• Familiarity with data governance, security and compliance frameworks
• Experience with CI/CD practices and DevOps tools in a data engineering context
• Strong problem-solving skills and the ability to troubleshoot complex data issues
• Excellent communication and stakeholder engagement skills across technical and non-technical teams
• Previous experience mentoring or upskilling engineers in Databricks or data engineering practices

If this sounds like an exciting opportunity, please apply with your CV.