

Haystack
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in London, offering £550 - £600/day for an initial 6-month contract. Requires expertise in Azure Databricks, Azure Data Factory, PySpark, and experience in financial services. Hybrid work model with 2 days in-office.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
600
-
🗓️ - Date
March 11, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
London, England, United Kingdom
-
🧠 - Skills detailed
#DevOps #Datasets #Azure #ADLS (Azure Data Lake Storage) #Data Quality #Scripting #Cloud #Monitoring #Version Control #PySpark #ADF (Azure Data Factory) #Data Access #Data Lakehouse #SQL (Structured Query Language) #Data Processing #Azure Databricks #Python #ETL (Extract, Transform, Load) #Storage #Data Lake #GitHub #Spark (Apache Spark) #Data Engineering #Azure Data Factory #Azure DevOps #Data Pipeline #Data Architecture #Azure ADLS (Azure Data Lake Storage) #Databricks
Role description
Data Engineer | London | Hybrid | £550 - £600/day
We're working with a leading UK financial powerhouse on this exciting opportunity.
Are you ready to scale a modern cloud-first data platform? We are looking for a high-impact Data Engineer to drive the evolution of a massive Azure landscape, building robust pipelines that handle complex financial datasets at scale. If you live and breathe PySpark and Databricks, this is your chance to influence the data architecture of a household name in British finance.
The Role
• Architect and deliver high-performance data pipelines using the full Azure stack including Azure Data Factory and Databricks.
• Build and optimize distributed data processing solutions utilizing Spark, PySpark, and advanced Python scripting.
• Manage large-scale datasets within Azure Data Lake environments to ensure reliable, seamless data access.
• Collaborate directly with platform architects and engineering squads to implement secure, production-grade data solutions.
• Drive operational excellence by improving data quality, system performance, and automated monitoring across the ecosystem.
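To give candidates a concrete flavour of the day-to-day work, here is a minimal sketch (not taken from the client's codebase; all names are hypothetical) of the kind of row-cleaning and data-quality logic the bullets above describe. In a real Databricks job this would typically run over a PySpark DataFrame, but the core rules are plain Python:

```python
from datetime import date


def clean_trade_record(record: dict) -> dict:
    """Normalise one raw trade row before it is loaded downstream.

    Hypothetical example of pipeline-style row cleaning; in practice
    this logic would be applied across a PySpark DataFrame.
    """
    cleaned = dict(record)
    # Strip whitespace and upper-case the instrument code.
    cleaned["instrument"] = record["instrument"].strip().upper()
    # Store monetary amounts as integer pence to avoid float drift.
    cleaned["amount_pence"] = round(float(record["amount"]) * 100)
    # Parse ISO dates strictly; fail loudly rather than guess.
    cleaned["trade_date"] = date.fromisoformat(record["trade_date"])
    return cleaned


def null_rate(rows: list, field: str) -> float:
    """Fraction of rows missing `field` - a simple data-quality metric
    of the kind fed into automated monitoring."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(field) in (None, ""))
    return missing / len(rows)
```

A monitoring job might alert when `null_rate` on a critical column exceeds an agreed threshold, which is one common way "data quality" and "automated monitoring" meet in practice.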
What You'll Need
• Deep technical expertise in Azure Databricks, Azure Data Factory (ADF), and Azure Data Lake storage solutions.
• Advanced coding proficiency in Python, SQL, and Spark (specifically PySpark) for complex ETL development.
• Proven track record of building and optimizing mission-critical data pipelines in high-volume cloud environments.
• Solid understanding of DevOps practices, including CI/CD workflows and version control via Azure DevOps or GitHub.
• Experience handling large-scale distributed data processing systems within the financial services or regulated sectors.
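On the CI/CD point above: incremental pipelines in environments like this commonly track a high-watermark, and that logic is exactly the kind of small, pure function teams unit-test on every commit via Azure DevOps or GitHub Actions. A hedged sketch (the function name and scheme are illustrative, not the client's):

```python
from typing import Optional


def next_watermark(previous: Optional[str], batch_max: str) -> str:
    """Return the new high-watermark after an incremental ETL load.

    Hypothetical helper: ISO-8601 timestamps compare correctly as
    strings, so the watermark only ever moves forward, keeping
    re-runs of the same batch idempotent.
    """
    if previous is None:
        # First ever load: adopt the batch's maximum timestamp.
        return batch_max
    # Never move the watermark backwards, even if a late batch arrives.
    return max(previous, batch_max)
```

In a CI pipeline, assertions over functions like this run before any deployment to the workspace, which is what "CI/CD workflows" means in day-to-day terms here.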
What's On Offer
• Competitive day rate of £550 - £600 (Inside IR35) with a 6-month initial term.
• Modern hybrid working model with 2 days per week in a prime London location.
• The opportunity to work on a cloud-first platform using the latest Data Lakehouse technology.
Apply via Haystack today!






