

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of "unknown," offering a pay rate of "unknown," and is based in "unknown." Key skills include Azure Data Factory, Databricks, PySpark, and SQL. Azure certifications are a plus.
Country
United Kingdom
Currency
£ GBP
Day rate
Unknown
Date discovered
July 26, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
London Area, United Kingdom
Skills detailed
#Data Engineering #Data Pipeline #SonarQube #Documentation #Azure Databricks #Data Governance #Project Management #SQL Queries #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Extraction #Azure Data Factory #Scrum #Security #Spark (Apache Spark) #Synapse #Scala #Strategy #PySpark #Azure #Databricks #Data Strategy #DevOps #Redshift #Pytest #Airflow #Qlik #ADF (Azure Data Factory)
Role description
Day to Day:
The Sr. Data Engineer designs and builds data foundations and end-to-end solutions that help the business maximize value from data. The role helps foster data-driven thinking across the organization, not just within IT teams but also in the wider business stakeholder community. The Sr. Data Engineer is expected to be a subject matter expert who designs and builds data solutions and mentors junior engineers. They are also a key driver in converting the vision and data strategy into IT solutions and delivering them.
Must Haves:
• Proven hands-on experience with Azure Data Factory, Databricks, PySpark, DLT, and Unity Catalog
• Hands-on Databricks expertise, including Structured Streaming, performance tuning, and cost optimization
• Strong command of SQL and data modelling concepts
• Excellent communication and interpersonal skills
• Ability to manage stakeholders and work collaboratively in a team environment
• Self-motivated, proactive, and capable of working independently with minimal supervision
• Strong problem-solving skills and a mindset focused on continuous improvement
• Experience with several, if not all, of the following: Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest, Qlik
Pluses:
• Azure certifications (e.g., Azure Data Engineer Associate, Databricks Professional)
• Experience with CI/CD pipelines and DevOps practices in data engineering
• Familiarity with data governance and security best practices in Azure
• Experience in project management and running a scrum team
• Exposure to working with external technical ecosystems
• MkDocs documentation experience
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines using Azure Data Factory and Azure Databricks.
• Implement data transformation workflows using PySpark and Delta Live Tables (DLT).
• Manage and govern data assets using Unity Catalog.
• Write efficient and optimized SQL queries for data extraction, transformation, and analysis.
• Collaborate with cross-functional teams to understand data requirements and deliver high-quality solutions.
• Demonstrate strong ownership and accountability in delivering end-to-end data solutions.
• Communicate effectively with stakeholders to gather requirements, provide updates, and manage expectations.
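To give a flavour of the "efficient and optimized SQL" responsibility above, here is a minimal, self-contained sketch. The `orders` table, its columns, and the values are invented for illustration; in the actual role this kind of query would run against Azure Databricks or Synapse rather than SQLite, but the principle (aggregating and filtering in the engine with GROUP BY/HAVING rather than in application code) carries over.

```python
import sqlite3

# Hypothetical schema and data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL);
    INSERT INTO orders (region, amount) VALUES
        ('UK', 100.0), ('UK', 250.0), ('EU', 75.0);
""")

# Aggregate per region and filter on the aggregate with HAVING,
# pushing the work into the database instead of post-processing rows
# in Python.
rows = conn.execute("""
    SELECT region, SUM(amount) AS total
    FROM orders
    GROUP BY region
    HAVING SUM(amount) > 100
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('UK', 350.0)]
```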