

Qualient Technology Solutions UK Limited
Azure Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an "Azure Data Engineer" with a contract length of "unknown" and a pay rate of "unknown". It requires "10+ years of data engineering experience", proficiency in "Python, Spark, and Azure Databricks", and is based in "London, UK" with a hybrid work model.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 22, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Yes
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Azure SQL Database #Data Processing #Azure SQL #Databricks #Forecasting #Azure Databricks #Monitoring #R #Azure Blob Storage #Microsoft Power BI #Metadata #Azure #ADF (Azure Data Factory) #Compliance #Data Architecture #Delta Lake #Scala #GIT #Data Lineage #DevOps #Data Engineering #Azure Event Hubs #Data Lifecycle #Kafka (Apache Kafka) #Agile #Data Governance #Data Integration #Data Management #Big Data #Automation #Data Transformations #BI (Business Intelligence) #Data Pipeline #Datasets #Deployment #JavaScript #SQL (Structured Query Language) #Version Control #API (Application Programming Interface) #Tableau #YAML (YAML Ain't Markup Language) #NoSQL #Data Quality #PySpark #Azure Data Factory #Data Visualisation #Data Science #Python #Storage #Cloud #Spark (Apache Spark) #Spark SQL #Code Reviews #"ETL (Extract #Transform #Load)" #Databases
Role description
We are looking for an SC-cleared Azure Data Engineer based in London, UK, working on a hybrid model.
Job Description:
We are seeking a highly skilled and experienced Senior Data Engineer to join our team and contribute to the development and maintenance of our cutting-edge Azure Databricks platform for economic data. This platform is critical for our Monetary Analysis, Forecasting, and Modelling activities. The Senior Data Engineer will be responsible for building and optimising data pipelines, implementing data transformations, and ensuring data quality and reliability. This role requires a strong understanding of data engineering principles, big data technologies, cloud computing (specifically Azure), and experience working with large datasets.
Key Responsibilities:
Data Pipeline Development & Optimisation:
• Design, develop, and maintain robust and scalable data pipelines for ingesting, transforming, and loading data from various sources (e.g., APIs, databases, financial data providers) into the Azure Databricks platform.
• Optimise data pipelines for performance, efficiency, and cost-effectiveness.
• Implement data quality checks and validation rules within data pipelines (see the illustrative sketch after this list).
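As an illustration of the pipeline work this role describes (not part of the original posting), here is a minimal PySpark sketch of an ingest-validate-load step on Databricks. The table names (raw.fx_rates, quarantine.fx_rates, curated.fx_rates) and columns are hypothetical, and `spark` is the session a Databricks runtime provides.

```python
# Minimal ingest -> validate -> load sketch; all table/column names are hypothetical.
from pyspark.sql import functions as F

# Ingest: read a raw dataset landed by an upstream source (e.g. an API feed).
raw = spark.table("raw.fx_rates")

# Validate: flag rows that fail basic quality rules rather than silently dropping them.
validated = raw.withColumn(
    "is_valid",
    F.col("rate").isNotNull() & (F.col("rate") > 0) & F.col("currency").isNotNull(),
)
rejected = validated.filter(~F.col("is_valid"))
if rejected.count() > 0:
    # A real pipeline might also raise an alert here (see the quality section below).
    rejected.write.mode("append").saveAsTable("quarantine.fx_rates")

# Load: persist only the valid rows into the curated Delta table.
(validated.filter("is_valid")
    .drop("is_valid")
    .write.mode("append")
    .saveAsTable("curated.fx_rates"))
```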
Data Transformation & Processing:
• Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies.
• Develop and maintain data processing logic for cleaning, enriching, and aggregating data.
• Ensure data consistency and accuracy throughout the data lifecycle (a short transformation sketch follows this list).
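Continuing the hypothetical example above, a minimal sketch of the cleaning, enriching, and aggregating work described here:

```python
# Cleaning + aggregation sketch; table and column names remain hypothetical.
from pyspark.sql import functions as F

cleaned = (
    spark.table("curated.fx_rates")
    .dropDuplicates(["currency", "observed_at"])          # remove duplicate observations
    .withColumn("currency", F.upper(F.trim("currency")))  # normalise the key
)

# Aggregate: average daily rate and observation count per currency.
daily_avg = (
    cleaned
    .withColumn("obs_date", F.to_date("observed_at"))
    .groupBy("currency", "obs_date")
    .agg(F.avg("rate").alias("avg_rate"), F.count("*").alias("n_obs"))
)
daily_avg.write.mode("overwrite").saveAsTable("analytics.fx_daily_avg")
```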
Azure Databricks Implementation:
• Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services.
• Implement best practices for Databricks development and deployment.
• Optimise Databricks workloads for performance and cost.
• Program in languages such as SQL, Python, R, YAML, and JavaScript (see the Unity Catalog sketch after this list).
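By way of illustration (assuming a Unity Catalog-enabled workspace with sufficient privileges; the economics catalog, schema, and table are hypothetical), a minimal sketch of Delta Lake and Spark SQL usage under Unity Catalog's three-level namespace:

```python
# Unity Catalog + Delta Lake sketch; all names (economics.curated.fx_rates)
# are hypothetical, and a UC-enabled workspace is assumed.
spark.sql("CREATE CATALOG IF NOT EXISTS economics")
spark.sql("CREATE SCHEMA IF NOT EXISTS economics.curated")

spark.sql("""
    CREATE TABLE IF NOT EXISTS economics.curated.fx_rates (
        currency    STRING,
        rate        DOUBLE,
        observed_at TIMESTAMP
    ) USING DELTA
""")

# Spark SQL against the Delta table, addressed as catalog.schema.table.
latest = spark.sql("""
    SELECT currency, MAX(observed_at) AS last_seen
    FROM economics.curated.fx_rates
    GROUP BY currency
""")
latest.show()
```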
Data Integration:
• Integrate data from various sources, including relational databases, APIs, and streaming data sources.
• Implement data integration patterns and best practices.
• Work with API developers to ensure seamless data exchange (see the ingestion sketch below).
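A minimal sketch of API-based integration, assuming a hypothetical REST endpoint that returns a JSON array of rate records:

```python
# API ingestion sketch; the endpoint URL and record fields are hypothetical.
import requests
from pyspark.sql import Row

resp = requests.get("https://api.example.com/v1/rates", timeout=30)
resp.raise_for_status()
records = resp.json()  # expected shape: [{"currency": "...", "rate": ...}, ...]

# Convert the payload into a DataFrame for the downstream pipeline steps.
df = spark.createDataFrame([Row(**r) for r in records])
df.write.mode("append").saveAsTable("raw.fx_rates")
```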
Data Quality & Governance:
• Hands-on experience using Azure Purview for data quality and data governance.
• Implement data quality monitoring and alerting processes.
• Work with data governance teams to ensure compliance with data governance policies and standards.
• Implement data lineage tracking and metadata management processes.
Collaboration & Communication:
• Collaborate closely with data scientists, economists, and other technical teams to understand data requirements and translate them into technical solutions.
• Communicate technical concepts effectively to both technical and non-technical audiences.
• Participate in code reviews and knowledge sharing sessions.
Automation & DevOps:
• Implement automation for data pipeline deployments and other data engineering tasks.
• Work with DevOps teams to build CI/CD pipelines for environment deployments (a deployment-automation sketch follows this list).
• Promote and implement DevOps best practices.
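As one hedged example of this kind of automation (the environment variables, job name, cluster id, and notebook path are hypothetical; the Databricks Jobs 2.1 REST endpoint itself is real), a CI/CD step might register a Databricks job like so:

```python
# Sketch: registering a Databricks job from a CI/CD pipeline via the Jobs 2.1 API.
# DATABRICKS_HOST / DATABRICKS_TOKEN / DATABRICKS_CLUSTER_ID are assumed to be
# injected by the CI/CD system; the job spec itself is hypothetical.
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-<id>.azuredatabricks.net
token = os.environ["DATABRICKS_TOKEN"]  # service principal or PAT token

job_spec = {
    "name": "fx-rates-nightly",
    "tasks": [{
        "task_key": "ingest",
        "notebook_task": {"notebook_path": "/Repos/data-eng/pipelines/ingest_fx"},
        "existing_cluster_id": os.environ["DATABRICKS_CLUSTER_ID"],
    }],
}
resp = requests.post(
    f"{host}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {token}"},
    json=job_spec,
    timeout=30,
)
resp.raise_for_status()
print("Created job", resp.json()["job_id"])
```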
Essential Skills & Experience:
• 10+ years of experience in data engineering, with at least 3+ years of hands-on experience with Azure Databricks.
• Strong proficiency in Python and Spark (PySpark) or Scala.
• Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns.
• Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and Azure SQL Database.
• Experience working with large datasets and complex data pipelines.
• Experience with data architecture design and data pipeline optimisation.
• Proven expertise with Databricks, including hands-on implementation experience and certifications.
• Experience with SQL and NoSQL databases.
• Experience with data quality and data governance processes.
• Experience with version control systems (e.g., Git).
• Experience with Agile development methodologies.
• Excellent communication, interpersonal, and problem-solving skills.
• Experience with streaming data technologies (e.g., Kafka, Azure Event Hubs).
• Experience with data visualisation tools (e.g., Tableau, Power BI).