

Haystack
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This is a 3-month contract role for a Data Engineer, fully remote and outside IR35. Key skills required include Azure Data Factory, Azure Data Lake, SQL, Python, and ETL development.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 1, 2026
🕒 - Duration
3 to 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
United Kingdom
-
🧠 - Skills detailed
#ETL (Extract, Transform, Load) #Azure SQL #Spark (Apache Spark) #Databricks #Azure #ADF (Azure Data Factory) #Data Architecture #SQL (Structured Query Language) #Cloud #Azure Data Factory #Data Warehouse #Data Pipeline #Scala #Python #Data Lake #Data Engineering #Azure Data Platforms
Role description
We are working with a leading technology solutions provider, renowned for its innovative cloud-first strategies and fast-paced project delivery. They are seeking talented individuals to help build and optimise scalable data platforms.
The Role
• Design, build, and maintain scalable data pipelines
• Develop and manage ETL processes across Azure data platforms
• Work with Azure Data Factory, Azure Data Lake, and Azure SQL
• Collaborate with stakeholders to integrate data from multiple sources
• Support data warehouse development and optimisation
• Implement best practices across data engineering and pipeline performance
What You'll Need
• Strong experience with Azure Data Factory, Azure Data Lake, and Azure SQL
• Solid background in ETL development and data pipeline engineering
• Proficiency in Python and SQL
• Experience with Spark and Databricks
• Strong understanding of data warehousing concepts
• Experience building and maintaining scalable data architectures
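For candidates gauging fit, the ETL work listed above follows a familiar extract-transform-load shape. The sketch below is purely illustrative (not part of the role's actual stack): it uses only the Python standard library, with an in-memory sample standing in for a real source system, and hypothetical names throughout.

```python
import sqlite3

def extract():
    # Extract: pull raw records (a hardcoded sample stands in for a source system).
    return [
        {"id": 1, "amount": "19.99", "region": "uk"},
        {"id": 2, "amount": "5.00", "region": "us"},
    ]

def transform(rows):
    # Transform: cast types and normalise values before loading.
    return [(r["id"], float(r["amount"]), r["region"].upper()) for r in rows]

def load(rows, conn):
    # Load: write the cleaned rows into a warehouse-style table.
    conn.execute("CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL, region TEXT)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0])  # 2 rows loaded
```

In production this pattern would typically be expressed as Azure Data Factory pipelines or Spark jobs on Databricks rather than hand-rolled Python, but the extract/transform/load separation is the same.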
What's On Offer
• Opportunity to work on innovative cloud-first projects
• Fully remote work arrangement
• Initial 3-month contract (Outside IR35)
• Immediate start available
Apply via Haystack today!