

Haystack
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer position for a remote project focused on building scalable data platforms. Required skills include Azure Data Factory, Azure Data Lake, SQL, Python, ETL development, and experience with Spark and Databricks. Contract length and pay rate unspecified.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 9, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
United Kingdom
-
🧠 - Skills detailed
#Data Architecture #Data Pipeline #Data Warehouse #Data Lake #Scala #ETL (Extract, Transform, Load) #Spark (Apache Spark) #SQL (Structured Query Language) #Azure SQL #Cloud #Python #Azure Data Factory #Data Engineering #Azure Data Platforms #Azure #Databricks #ADF (Azure Data Factory)
Role description
We are seeking an experienced Data Engineer to join a fast-paced project on a fully remote basis. This is an exciting opportunity to work on building and optimising scalable data platforms within a cloud-first environment.
The Role
• Design, build, and maintain scalable data pipelines
• Develop and manage ETL processes across Azure data platforms
• Work with Azure Data Factory, Azure Data Lake, and Azure SQL
• Collaborate with stakeholders to integrate data from multiple sources
• Support data warehouse development and optimisation
• Implement best practices across data engineering and pipeline performance
What You'll Need
• Strong experience with Azure Data Factory, Azure Data Lake, and Azure SQL
• Solid background in ETL development and data pipeline engineering
• Proficiency in Python and SQL
• Experience with Spark and Databricks
• Strong understanding of data warehousing concepts
• Experience building and maintaining scalable data architectures
What's On Offer
• Remote (UK-based candidates only)
• Immediate start available
• Fast interview process
• Outside IR35 contract
Apply via Haystack today!





