Stott and May
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a 6-month contract paying £446/day, based in the UK (Hybrid). It requires 6+ years of data engineering experience, strong Python and SQL skills, and expertise in Databricks; occasional travel to Dublin may be needed.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
446
🗓️ - Date
February 4, 2026
🕒 - Duration
6 months
🏝️ - Location
Hybrid
📄 - Contract
Inside IR35
🔒 - Security
Unknown
📍 - Location detailed
Reading, England, United Kingdom
🧠 - Skills detailed
#GIT #Delta Lake #Python #Data Quality #AutoScaling #Databricks #Monitoring #Data Engineering #Automated Testing #Datasets #DevOps #SQL (Structured Query Language) #DataOps #ETL (Extract, Transform, Load) #Security #Azure #Classification #Documentation #Scala #Batch #Cloud #Data Science
Role description
Job Description
Job Title: Senior Data Engineer
Location: UK (Hybrid, 2–3 days per week in-office)
Rate: £446/day (Inside IR35)
Contract Duration: 6 months
Additional Requirements: May require occasional travel to Dublin office
About The Role
We are looking for an experienced Senior Data Engineer to join a Data & Analytics (DnA) team. You will design, build, and operate production-grade data products across customer, commercial, financial, sales, and broader data domains. This role is hands-on and heavily focused on Databricks-based engineering, data quality, governance, and DevOps-aligned delivery.
You will work closely with the Data Engineering Manager, Product Owner, Data Product Manager, Data Scientists, Head of Data & Analytics, and IT teams to transform business requirements into governed, decision-grade datasets embedded in business processes and trusted for reporting, analytics, and advanced use cases.
Key Responsibilities
• Design, build, and maintain pipelines in Databricks using Delta Lake and Delta Live Tables (see the sketch after this list).
• Implement medallion architectures (Bronze/Silver/Gold) and deliver reusable, discoverable data products.
• Ensure pipelines meet non-functional requirements such as freshness, latency, completeness, scalability, and cost-efficiency.
• Own and operate Databricks assets including jobs, notebooks, SQL, and Unity Catalog objects.
• Apply Git-based DevOps practices, CI/CD, and Databricks Asset Bundles to safely promote changes across environments.
• Implement monitoring, alerting, runbooks, incident response, and root-cause analysis.
• Enforce governance and security using Unity Catalog (lineage, classification, ACLs, row/column-level security).
• Define and maintain data-quality rules, expectations, and SLOs within pipelines.
• Support root-cause analysis of data anomalies and production issues.
• Partner with Product Owner, Product Manager, and business stakeholders to translate requirements into functional and non-functional delivery scope.
• Collaborate with IT platform teams to define data contracts, SLAs, and schema evolution strategies.
• Produce clear technical documentation (data contracts, source-to-target mappings, release notes).
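As a rough illustration of the pipeline, medallion, and data-quality responsibilities above, the sketch below shows a minimal Delta Live Tables flow with Bronze and Silver layers and inline expectations. The table names, landing path, and quality rules are hypothetical assumptions rather than details of this role, and `spark` is the session the Databricks runtime provides inside a DLT pipeline.

```python
# Minimal DLT sketch: bronze ingest -> silver clean, with expectations.
# All names, the landing path, and the rules are illustrative assumptions.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw orders landed as-is via Auto Loader.")
def orders_bronze():
    # Auto Loader incrementally discovers new files in the landing path.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/landing/sales/orders")  # hypothetical path
    )

@dlt.table(comment="Silver: validated, deduplicated orders for downstream use.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # drop bad rows
@dlt.expect("plausible_amount", "amount >= 0")  # warn-only: logged, not dropped
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .dropDuplicates(["order_id"])
        .withColumn("processed_at", F.current_timestamp())
    )
```

A Gold layer would typically follow the same pattern, aggregating Silver tables into the reusable, discoverable data products the role describes, with the expectation results feeding the data-quality SLOs mentioned above.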
Essential Skills & Experience
• 6+ years in data engineering or advanced analytics engineering roles.
• Strong hands-on expertise in Python and SQL.
• Proven experience building production pipelines in Databricks.
• Excellent attention to detail, with the ability to create effective documentation and process diagrams.
• Solid understanding of data modelling, performance tuning, and cost optimisation.
Desirable Skills & Experience
• Hands-on experience with the Databricks Lakehouse, including Delta Lake and Delta Live Tables for batch and streaming pipelines.
• Knowledge of pipeline health monitoring, SLA/SLO management, and incident response.
• Unity Catalog governance and security expertise, including lineage, table ACLs, and row/column-level security.
• Familiarity with Databricks DevOps/DataOps practices (Git-based development, CI/CD, automated testing).
• Performance and cost optimisation strategies for Databricks (autoscaling, Photon/serverless, partitioning, Z-Ordering, OPTIMIZE/VACUUM); see the maintenance sketch after this list.
• Semantic layer and metrics engineering experience for consistent business metrics and self-service analytics.
• Experience with cloud-native analytics platforms (preferably Azure) operating as enterprise-grade production services.
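Much of the optimisation work named above is routine Delta table maintenance. Below is a minimal sketch of how such a scheduled job might look, assuming a hypothetical Unity Catalog table name and standard Databricks SQL commands.

```python
# Scheduled maintenance sketch for a Delta table: compaction, data layout,
# and file cleanup. The table name and column choices are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

TABLE = "main.sales.orders_silver"  # hypothetical three-level UC name

# Compact small files and co-locate rows on frequently filtered columns,
# which cuts the files scanned per query and therefore compute cost.
spark.sql(f"OPTIMIZE {TABLE} ZORDER BY (customer_id, order_date)")

# Delete data files no longer referenced by the transaction log, retaining
# 7 days (168 hours) so time travel and in-flight readers keep working.
spark.sql(f"VACUUM {TABLE} RETAIN 168 HOURS")
```

On newer Databricks runtimes, predictive optimization or liquid clustering can replace manual Z-Ordering runs; the sketch shows the classic approach this posting's keywords name.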