Azure Databricks Data Engineer - AI

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Databricks Data Engineer - AI on a contract of unspecified length at £450 – £500/day, remote with occasional travel. It requires 5+ years in data engineering, expertise in Azure Databricks and Spark, and strong Python skills.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
500
🗓️ - Date discovered
September 23, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Outside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Azure Databricks #DataOps #MLflow #Version Control #Monitoring #Unit Testing #Documentation #AI (Artificial Intelligence) #AWS (Amazon Web Services) #Databricks #GitHub #SQL (Structured Query Language) #PySpark #Deployment #Azure #Compliance #Spark (Apache Spark) #Data Engineering #Delta Lake #GIT #Code Reviews #Automated Testing #Python #Leadership #Cloud #Jenkins #Pytest #Storage #Programming #Debugging #Observability #GCP (Google Cloud Platform)
Role description
Data Engineer / Azure Databricks Platform Lead – Data Engineering & AI Enablement
Remote role with occasional travel for team meetups
£450 – £500/day, Outside IR35

Experience in Azure Databricks, Spark, Delta Lake, Unity Catalog, MLflow, PySpark, and Databricks SQL.

We’re hiring a hands-on leader to drive the future of data engineering, platform optimization, and AI enablement in our modern data ecosystem. As our Databricks Platform Lead, you’ll mentor a growing team of data engineers and own the design and governance of a high-performing Databricks platform, infused with AI-powered development, MLOps, and best-in-class engineering practices.

What You’ll Do
• Lead and mentor a team of 6+ data engineers, fostering excellence in AI-enabled data engineering.
• Champion best practices: version control, modular code, AI-assisted development, CI/CD, and code reviews.
• Host pair-programming sessions and technical workshops to upskill the team.

Platform Ownership & Optimization
• Own Databricks operations: Delta Lake, Unity Catalog, MLflow, and compute/storage management.
• Implement robust standards around governance, cost efficiency, job orchestration, and lineage tracking.
• Monitor and optimize resource utilization across workflows and clusters.

Pipeline Re-Design & AI Enablement
• Re-architect existing pipelines for modularity, performance, and observability.
• Introduce modern DataOps and MLOps practices: versioning, automated deployment, testing, and monitoring.
• Infuse AI into workflows (e.g., GenAI-assisted development, prompt engineering, automated testing).

Software Engineering Standards
• Establish and enforce engineering standards: CI/CD, documentation, test coverage, and Git workflows.
• Drive adoption of AI copilots for faster development, optimization, and debugging.
• Define clear branching strategies and peer review processes.

Platform Operations & Governance
• Build observability and alerting using Databricks-native and third-party tools.
• Maintain thorough documentation: runbooks, CI/CD processes, and platform standards.
• Ensure compliance with DataOps/MLOps lifecycle practices across data products.

Technical Expertise
• 5+ years in data platform/engineering roles; 2+ years managing Databricks at scale.
• Deep knowledge of Spark, Delta Lake, Unity Catalog, MLflow, PySpark, and Databricks SQL.
• Strong Python engineering background, including unit testing (pytest) and CI/CD (GitHub Actions, Jenkins, etc.).
• Cloud fluency: Azure, AWS, or GCP with Databricks.
• Proven experience in platform performance tuning, cost management, and DataOps implementation.

Leadership & Collaboration
• Demonstrated ability to mentor junior engineers and lead by example.
• Strong communicator with the ability to align technical decisions with business goals.

Please send your CV to shikha@careerwise.com.