Quantum World Technologies Inc.

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Databricks Architect/Data Engineer based in London, UK, on a hybrid contract (3 days onsite). Key skills include Databricks, PySpark, and SQL. Experience in data governance and Lakehouse architecture is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 6, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Data Strategy #Metadata #Python #Automation #PySpark #Strategy #SQL (Structured Query Language) #Data Vault #Data Science #Databricks #Delta Lake #Scala #Data Governance #Vault #Data Processing #Storage #Scripting #Data Catalog #GCP (Google Cloud Platform) #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Data Quality #Data Management #Data Engineering #Data Lake #Azure
Role description
Role: Senior Databricks Architect
Location: London, UK
Mode: Hybrid (3 days onsite)

Job Description:
We are looking for an experienced Databricks Architect/Data Engineer to design, build, and optimize our Lakehouse architecture on Databricks. You will play a key role in shaping our data strategy, ensuring scalability, performance, and governance while working with Delta Lake, Data Catalog, and PySpark.

Key Responsibilities:
✔ Databricks Lakehouse Architecture: Design and implement scalable Databricks Lakehouse solutions with Delta Lake for optimized storage and analytics.
✔ Data Governance & Cataloging: Establish data cataloging, lineage, and metadata management for improved discoverability.
✔ Performance Optimization: Tune Spark/PySpark jobs for efficiency in large-scale data processing.
✔ Data Modelling & Quality: Develop dimensional/Data Vault models and enforce data quality checks.
✔ Collaboration: Work with data scientists, analysts, and business teams to enable self-service analytics.
✔ CI/CD & Automation: Implement Databricks workflows and integrate with Azure/AWS/GCP data ecosystems.

Primary Skills (Must-Have):
✅ Databricks – architecture, Delta Lake, Lakehouse, Unity Catalog/Data Catalog
✅ PySpark (optimization, UDFs, Delta operations)
✅ SQL (advanced querying, performance tuning)
✅ Data Lake/Warehouse best practices

Secondary Skills (Nice-to-Have):
🔹 Python (for scripting & automation)
🔹 Data Modelling (star schema, Kimball, Data Vault)
🔹 Data Quality/Validation frameworks
🔹 ETL/ELT pipelines

Work Arrangement:
📍 Hybrid (3 days in office – ideally Tues-Thurs, Paddington, London)
📍 Flexible remote work (2 days/week)
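
To give candidates a feel for the day-to-day work described above (PySpark, Delta Lake, data quality, optimization), here is a minimal sketch of a bronze-layer ingestion step. It assumes a Databricks runtime (or a local Spark session with delta-spark installed); the paths, table, and column names are hypothetical and not taken from this role.

```python
# Minimal sketch: ingest raw events into a Delta table, apply a basic
# data quality gate, and compact the table for downstream queries.
# Paths and column names below are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

RAW_PATH = "/mnt/raw/events"        # hypothetical landing zone
BRONZE_PATH = "/mnt/bronze/events"  # hypothetical Delta location

# Read raw JSON and add an ingest date for partitioning.
raw = spark.read.json(RAW_PATH).withColumn("ingest_date", F.current_date())

# Basic data quality gate: drop rows missing the primary key.
clean = raw.filter(F.col("event_id").isNotNull())
print(f"Rejected {raw.count() - clean.count()} rows without event_id")

# Write to Delta, partitioned for efficient pruning.
(clean.write.format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .save(BRONZE_PATH))

# On Databricks, compact small files and co-locate data for common filters.
spark.sql(f"OPTIMIZE delta.`{BRONZE_PATH}` ZORDER BY (event_id)")
```

In practice the architect role also covers registering such tables in Unity Catalog, wiring the step into a Databricks workflow, and layering fuller validation frameworks on top of the simple null check shown here.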