

ValueMomentum
Azure Databricks Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Databricks Data Engineer with a contract length of "Unknown" and a pay rate of "Unknown." It requires 10+ years of experience and advanced skills in SQL, Azure Databricks, Data Factory, and Data Lake. Industry experience in insurance or finance is preferred.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date
November 7, 2025
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Spark (Apache Spark) #Databases #ETL (Extract, Transform, Load) #DevOps #Azure #SQL (Structured Query Language) #Azure Datalake #Data Processing #Agile #Deployment #PySpark #Synapse #Azure Databricks #Data Lake #Big Data #Data Ingestion #Data Engineering #Python #ADF (Azure Data Factory) #Azure Data Factory #Scala #Azure SQL #Data Pipeline #Databricks #ADLS (Azure Data Lake Storage)
Role description
Job Title: Azure Databricks Data Engineer
Primary skills: Advanced SQL, Azure Databricks, Azure Data Factory, Azure Data Lake.
Secondary skills: Azure SQL, PySpark, Azure Synapse.
Experience: 10+ years
About the job
We are looking for an experienced Databricks Data Engineer to design, develop, and manage data pipelines using Azure services such as Databricks, Data Factory, and Data Lake.
The role involves building scalable ETL solutions, collaborating with cross-functional teams, and processing large volumes of data.
You will work closely with business and technical teams to deliver robust data models and transformations in support of analytics and reporting needs.
• Responsibilities:
• Design and develop ETL pipelines using ADF for data ingestion and transformation.
• Work with Azure stack components such as Data Lake and SQL Data Warehouse (Synapse) to handle large volumes of data.
• Write SQL, Python, and PySpark code to meet data processing and transformation needs (see the illustrative sketch after this list).
• Understand business requirements and create data flow processes that meet them.
• Develop mapping documents and transformation business rules.
• Ensure continuous communication with the team and stakeholders regarding project status.
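For a rough sense of the day-to-day work these responsibilities describe, here is a minimal PySpark sketch of an ADF-landed ingestion step in Databricks. It is illustrative only: the storage account, container, table, and column names are hypothetical and do not come from the job description.

```python
# Hypothetical example: paths, schema, and column names are invented
# for illustration, not taken from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-ingestion-example").getOrCreate()

# Read raw CSV extracts that an ADF copy activity has landed in ADLS Gen2.
raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/claims/")
)

# Typical transformation rules: cast types, standardise dates,
# drop duplicate records, and derive a simple business flag.
cleaned = (
    raw
    .withColumn("claim_amount", F.col("claim_amount").cast("decimal(18,2)"))
    .withColumn("claim_date", F.to_date("claim_date", "yyyy-MM-dd"))
    .dropDuplicates(["claim_id"])
    .withColumn("is_high_value", F.col("claim_amount") > 50000)
)

# Persist as a Delta table so Synapse and reporting tools can query it.
cleaned.write.format("delta").mode("overwrite").saveAsTable("curated.claims")
```

In practice such a step would be parameterised and triggered from ADF, with the notebook or job wired into a CI/CD release pipeline; the exact setup would be defined by the hiring team.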
• Requirements - Must Have:
• 8+ years of experience in data ingestion, data processing, and analytical pipelines for big data and relational databases.
• Extensive hands-on experience with Azure services: Databricks, Data Factory, ADLS, Synapse, and Azure SQL.
• Experience in SQL, Python, and PySpark for data transformation and processing.
• Strong understanding of DevOps, CI/CD deployments, and Agile methodologies.
• Strong communication skills and attention to detail.
• Experience in the insurance or financial industry is preferred.
About ValueMomentum
ValueMomentum is a leading solutions provider for the global property & casualty insurance industry, supported by deep domain and technology capabilities. We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain. This includes Underwriting, Claims, Distribution, and more, empowering insurers to stay ahead with sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the global insurance industry.