Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (Azure Databricks Engineer) on a 12-month contract in London, requiring 3 days per week in-office. Key requirements include 12+ years building data pipelines, 4+ years with the Azure data stack, SQL, Python, and insurance industry experience.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
August 14, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Data Quality #Microsoft Azure #Agile #SQL Server #Synapse #Data Strategy #Big Data #Informatica #Azure Synapse Analytics #Security #Data Architecture #Jira #Data Engineering #Azure DevOps #Data Pipeline #ETL (Extract, Transform, Load) #Data Modeling #DevOps #Python #Data Management #Data Ingestion #ADLS (Azure Data Lake Storage) #MDM (Master Data Management) #Azure Databricks #IICS (Informatica Intelligent Cloud Services) #Databricks #Storage #Delta Lake #Data Lake #Data Security #Cloud #Strategy #ADF (Azure Data Factory) #Oracle #Azure Data Factory #SQL (Structured Query Language) #Scrum #Azure #Databases #Azure ADLS (Azure Data Lake Storage) #Data Integration
Role description
Azure Databricks Engineer
12-month contract | London (3 days per week in office)

We're looking for a highly skilled and experienced Azure Databricks Engineer to be a leader in our data team. In this role, you'll design and build modern data solutions, focusing on cloud-native platforms and architecture. You will be responsible for creating and optimizing data pipelines, from big data ingestion to complex analytical processing. A deep understanding of cloud data architecture and a hands-on, problem-solving approach are essential. Your work will directly support our data strategy and enable key business insights.

Skills:
• 12+ years of experience in data ingestion, processing, and building analytical pipelines.
• 4+ years of hands-on experience with the Microsoft Azure data stack, including Azure Databricks, Azure Data Factory (ADF), Azure Synapse Analytics, and Azure Data Lake Storage (ADLS).
• Expertise in SQL and Python.
• Experience with data integration and ETL tools such as Informatica IICS, as well as modern data warehousing concepts and technologies such as Delta Lake.
• Proven experience in data integration, modeling, and transformation within the insurance or reinsurance market.
• Familiarity with both on-premise and cloud databases, specifically Oracle and SQL Server.
• Experience with Agile methodologies (Scrum, SAFe) and tools such as Jira or Azure DevOps.
• A strong understanding of mass data ingestion, data quality, and master data management.
• In-depth knowledge of data security challenges, data delivery principles, and data modeling concepts.
• Excellent communication, teamwork, and active listening skills.
• Professional certifications in Databricks and Azure are highly desired.