

BlueRose Technologies
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a contract basis in London, UK (Hybrid). The position requires 7+ years of experience, expertise in Informatica, Databricks, and SQL, with a focus on financial datasets and data quality frameworks.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 9, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#BI (Business Intelligence) #BO (Business Objects) #Scala #Automated Testing #DevOps #ETL (Extract, Transform, Load) #Spark (Apache Spark) #SQL (Structured Query Language) #Spark SQL #Data Quality #Data Pipeline #Azure Databricks #Datasets #Documentation #Monitoring #Informatica #GIT #Data Engineering #Databricks #Azure #Microsoft Power BI #Data Lake #Data Integration #SAP #Python #Delta Lake #Metadata
Role description
Job Title: Senior Data Engineer
Job Location: London, UK (Hybrid)
Job Type: Contract job opportunity
Job description:
We are seeking a Senior Data Engineer to design and deliver scalable data integration, transformation, and analytics solutions across Azure.
The ideal candidate will have strong expertise in Informatica, Databricks, and BI enablement, with a focus on data quality and efficient data modelling.
Experience with financial/trading datasets is an added advantage.
Key Responsibilities:
- Build and maintain data pipelines using Informatica and Azure Databricks (Spark/Delta Lake).
- Design and optimize data lake/lakehouse models, ensuring performance, reliability, and governance.
- Enable BI consumption by preparing clean, curated datasets for Power BI and SAP BO.
- Implement data quality rules, monitoring, and exception handling frameworks.
- Collaborate with business, analytics, and technology teams to understand requirements and deliver robust data solutions.
- Apply DevOps practices including Git, CI/CD, and automated testing for data pipelines.
- Support creation of lightweight workflow/ops tools using Power Apps (nice to have).
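To illustrate the kind of data quality rules and exception handling framework the responsibilities describe, here is a minimal sketch in plain Python (all names and the sample records are hypothetical, not from the role):

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical minimal data-quality framework: each rule is a named
# predicate applied per record; failing records are routed to an
# exceptions list (with the rules they broke) instead of being
# silently dropped, mirroring an exception-handling flow.

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]

def run_rules(records: list[dict], rules: list[Rule]):
    """Return (passed, exceptions): exceptions pairs each bad record
    with the names of the rules it violated."""
    passed, exceptions = [], []
    for rec in records:
        failed = [r.name for r in rules if not r.check(rec)]
        if failed:
            exceptions.append((rec, failed))
        else:
            passed.append(rec)
    return passed, exceptions

# Example rules over hypothetical trade records.
rules = [
    Rule("non_null_price", lambda r: r.get("price") is not None),
    Rule("positive_qty", lambda r: r.get("qty", 0) > 0),
]
trades = [
    {"id": 1, "price": 101.5, "qty": 10},
    {"id": 2, "price": None, "qty": 5},
]
ok, bad = run_rules(trades, rules)
```

In practice the same pattern maps onto Spark/Delta Lake: rules become DataFrame filters, and the exceptions list becomes a quarantine table for monitoring and reprocessing.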
Required Skills:
- 7+ years of experience in data engineering within enterprise environments.
- Strong hands-on experience with Informatica ETL, Azure Databricks, Spark, Delta Lake, and Azure data services.
- Proficiency in SQL and Python (or Scala).
- Experience preparing data models and datasets for Power BI and/or SAP BO.
- Practical understanding of data quality frameworks, metadata, and lineage.
- Strong problem-solving, documentation, and stakeholder communication skills.
#DataEngineer #Azure #Databricks #Spark #SQL #Python





