

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on an initial 6-month contract, hybrid in London. Key skills include SQL, Python, Azure services, and Snowflake. Experience with CI/CD, data modeling, and ETL processes is required.
Country: United Kingdom
Currency: £ GBP
Day rate: -
Date discovered: June 17, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: London Area, United Kingdom
Skills detailed:
#Databases #Data Processing #Cloud #Azure Data Factory #Automation #Data Architecture #Business Analysis #Snowflake #Linux #Kubernetes #SQL (Structured Query Language) #Scala #Python #Containers #Physical Data Model #Data Quality #Docker #Data Vault #GIT #Vault #Big Data #Monitoring #Data Modeling #Data Ingestion #Azure #Agile #Java #Scrum #ETL (Extract, Transform, Load) #Data Engineering #Data Pipeline #Computer Science #ML (Machine Learning) #SQL Queries #ADF (Azure Data Factory) #DevOps #Data Integrity #Datasets #RDBMS (Relational Database Management System) #Airflow
Role description
As a Senior Data Engineer, you will be responsible for developing complex data sources and pipelines into our data platform (i.e. Snowflake), along with other data applications (e.g. Azure, Airflow) and automation.
The Senior Data Engineer will work closely with the Data Architecture team, Business Analysts, and Data Stewards to integrate and align the requirements, specifications, and constraints of each element of the solution. They will also help identify gaps in resources, technology, or capabilities, and work with the data engineering team to identify and implement solutions where appropriate.
Work type: Contract
Length: initial 6 months
Work structure: hybrid, 2 days a week in London.
Primary Responsibilities:
• Integrate data from multiple on-prem and cloud sources and systems. Handle data ingestion, transformation, and consolidation to create a unified and reliable data foundation for analysis and reporting.
• Develop data transformation routines to clean, normalize, and aggregate data. Apply data processing techniques to handle complex data structures, deal with missing or inconsistent data, and prepare the data for analysis, reporting, or machine learning tasks.
• Implement data de-identification/data masking in line with company standards.
• Monitor data pipelines and data systems to detect and resolve issues promptly.
• Develop monitoring tools and automate error-handling mechanisms to ensure data integrity and system reliability.
• Utilize data quality tools like Great Expectations or Soda to ensure the accuracy, reliability, and integrity of data throughout its lifecycle.
• Create and maintain data pipelines using Airflow and Snowflake as primary tools (a minimal sketch follows this list).
• Create SQL stored procedures to perform complex transformations.
• Understand data requirements and design optimal pipelines to fulfil the use cases.
• Create logical and physical data models to ensure data integrity is maintained.
• Create and automate CI/CD pipelines using Git and GitHub Actions.
• Tune and optimize data processes.
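To illustrate the Airflow-plus-Snowflake pattern referenced above, here is a minimal sketch of a daily DAG that copies staged files into a raw table and then builds an aggregate. It assumes Airflow 2.x with the apache-airflow-providers-snowflake package installed and a pre-configured snowflake_default connection; the DAG id, table, and stage names are hypothetical placeholders, not details taken from this role.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

with DAG(
    dag_id="daily_sales_ingest",      # hypothetical pipeline name
    start_date=datetime(2025, 6, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Ingest: COPY raw files from an external stage into a landing table.
    load_raw = SnowflakeOperator(
        task_id="load_raw",
        snowflake_conn_id="snowflake_default",
        sql="""
            COPY INTO raw.sales
            FROM @raw.sales_stage
            FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
        """,
    )

    # Transform: clean and aggregate into a reporting table.
    transform = SnowflakeOperator(
        task_id="transform",
        snowflake_conn_id="snowflake_default",
        sql="""
            CREATE OR REPLACE TABLE analytics.daily_sales AS
            SELECT sale_date, region, SUM(amount) AS total_amount
            FROM raw.sales
            WHERE amount IS NOT NULL
            GROUP BY sale_date, region;
        """,
    )

    load_raw >> transform  # run the transform only after the load succeeds
```

In practice a pipeline like this would also carry the data-quality and monitoring responsibilities above, for example a validation task between load and transform.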
Qualifications
Required Qualifications:
• Bachelor's degree in Computer Science or a related field.
• Proven hands-on experience as a Data Engineer.
• Proficiency in SQL (any flavor), with experience using window functions and advanced features (a minimal example follows this list).
• Excellent communication skills.
• Strong knowledge of Python.
• Familiarity with Azure services such as Blob Storage, Functions, Azure Data Factory, Service Principals, Containers, and Key Vault.
• In-depth knowledge of Snowflake architecture, features, and best practices.
• Experience with CI/CD pipelines using Git and GitHub Actions.
• Knowledge of various data modeling techniques, including Star Schema, Dimensional models, and Data Vault.
• Hands-on experience with:
  - Developing data pipelines (Snowflake) and writing complex SQL queries.
  - Building ETL/ELT data pipelines.
  - Kubernetes and Linux containers (e.g., Docker).
  - Related/complementary open-source software platforms and languages (e.g., Scala, Python, Java, Linux).
• Experience with both relational (RDBMS) and non-relational databases.
• Analytical and problem-solving skills applied to big data sets.
• Experience working on projects with agile/scrum methodologies and high-performing teams.
• Good understanding of access control, data masking, and row access policies.
• Exposure to DevOps methodology.
• Knowledge of data warehousing principles, architecture, and implementation.
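As a concrete illustration of the window-function proficiency listed above, here is a minimal sketch that runs a windowed query through the Snowflake Python connector (snowflake-connector-python). The connection parameters, table, and column names are hypothetical placeholders; in practice, credentials should come from Key Vault or another secrets manager rather than being hard-coded.

```python
import snowflake.connector

# Hypothetical connection details; load real secrets from a vault.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="...",  # placeholder; prefer key-pair auth or a secrets manager
    warehouse="analytics_wh",
    database="analytics",
)

# Two common window-function patterns: per-partition ranking and a
# 7-row moving average ordered by date within each region.
query = """
    SELECT
        sale_date,
        region,
        total_amount,
        RANK() OVER (PARTITION BY region ORDER BY total_amount DESC)
            AS regional_rank,
        AVG(total_amount) OVER (
            PARTITION BY region
            ORDER BY sale_date
            ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
        ) AS moving_avg_7d
    FROM analytics.daily_sales
"""

with conn.cursor() as cur:
    # The connector's cursor is iterable, yielding one tuple per row.
    for row in cur.execute(query):
        print(row)

conn.close()
```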