ValueMomentum

Azure Databricks Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Databricks Data Engineer with 10+ years of experience, offering a hybrid contract with three days per week onsite. Key skills include advanced SQL, Azure Databricks, and Azure Data Factory. Experience in the insurance or financial industry is preferred.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date
December 16, 2025
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Data Lake #Data Engineering #DevOps #Data Processing #PySpark #Azure Datalake #Data Pipeline #Scala #Deployment #Azure Databricks #Databricks #Azure SQL #Synapse #Azure Data Factory #Databases #SQL (Structured Query Language) #ADF (Azure Data Factory) #ETL (Extract, Transform, Load) #Agile #ADLS (Azure Data Lake Storage) #Python #Spark (Apache Spark) #Big Data #Azure #Data Ingestion
Role description
Job Title: Azure Databricks Data Engineer
Primary skills: Advanced SQL, Azure Databricks, Azure Data Factory, Azure Data Lake
Secondary skills: Azure SQL, PySpark, Azure Synapse
Experience: 10+ years
Work model: Hybrid; 3 days/week onsite is a must

About the job
We are looking for an experienced Databricks Data Engineer to design, develop, and manage data pipelines using Azure services such as Databricks, Data Factory, and Data Lake. The role involves building scalable ETL solutions, collaborating with cross-functional teams, and processing large volumes of data. You will work closely with business and technical teams to deliver robust data models and transformations in support of analytics and reporting needs.

Responsibilities:
• Design and develop ETL pipelines using ADF for data ingestion and transformation.
• Work with Azure stack components such as Data Lake and SQL DW to handle large volumes of data.
• Write SQL, Python, and PySpark code to meet data processing and transformation needs (a brief illustrative sketch appears at the end of this posting).
• Understand business requirements and create data flow processes that meet them.
• Develop mapping documents and transformation business rules.
• Ensure continuous communication with the team and stakeholders regarding project status.

Requirements - Must Have:
• 8+ years of experience in data ingestion, data processing, and analytical pipelines for big data and relational databases.
• Extensive hands-on experience with Azure services: Databricks, Data Factory, ADLS, Synapse, and Azure SQL.
• Experience with SQL, Python, and PySpark for data transformation and processing.
• Strong understanding of DevOps, CI/CD deployments, and Agile methodologies.
• Strong communication skills and attention to detail.
• Experience in the insurance or financial industry is preferred.

About ValueMomentum
ValueMomentum is a leading solutions provider for the global property & casualty (P&C) insurance industry, supported by deep domain and technology capabilities. We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain, including Underwriting, Claims, and Distribution, empowering insurers to achieve sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the global insurance industry.
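For illustration, here is a minimal PySpark sketch of the kind of ingest-and-transform pipeline work this role describes. It is a generic example, not ValueMomentum's actual code: the storage path, column names, and table name are hypothetical.

```python
# Minimal sketch of a Databricks-style ingest-and-transform job.
# The ADLS path, columns, and table name below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("policy_ingest").getOrCreate()

# Ingest raw policy records landed in ADLS (illustrative abfss path).
raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/policies/")

# Apply simple transformation rules: cast types, derive an audit column,
# and de-duplicate on the business key.
curated = (
    raw.withColumn("premium", F.col("premium").cast("decimal(18,2)"))
       .withColumn("ingest_date", F.current_date())
       .dropDuplicates(["policy_id"])
)

# Persist a curated Delta table for downstream analytics and reporting.
curated.write.format("delta").mode("overwrite").saveAsTable("curated.policies")
```

In an ADF-orchestrated setup, a job like this would typically run as a Databricks notebook or job activity triggered by the pipeline after the raw files land.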