

Haystack
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer position for a fully remote contract, offering competitive pay. Key skills include Azure Data Factory, ETL development, Python, SQL, and experience with Spark and Databricks. Strong data warehousing knowledge is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 2, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United Kingdom
-
🧠 - Skills detailed
#Deployment #SQL (Structured Query Language) #Python #Azure Data Platforms #Scala #Azure Data Factory #Azure SQL #Data Pipeline #Spark (Apache Spark) #Azure #Data Architecture #Data Lake #Databricks #ADF (Azure Data Factory) #ETL (Extract, Transform, Load) #Data Warehouse #Cloud #Data Engineering
Role description
We are working with a leading technology solutions provider, renowned for delivering innovative data platforms and cloud-first strategies. Join a dynamic team and contribute to cutting-edge projects that transform how businesses utilise their data.
The Role
• Design, build, and maintain scalable data pipelines
• Develop and manage ETL processes across Azure data platforms
• Collaborate with stakeholders to integrate data from multiple sources
• Support data warehouse development and optimisation
• Implement best practices across data engineering and pipeline performance
• Contribute to CI/CD pipelines and deployment processes
What You'll Need
• Strong experience with Azure Data Factory, Azure Data Lake, and Azure SQL
• Solid background in ETL development and data pipeline engineering
• Proficiency in Python and SQL
• Experience with Spark and Databricks
• Strong understanding of data warehousing concepts
• Experience building and maintaining scalable data architectures
What's On Offer
• Opportunity to work on a fully remote basis
• Engage in fast-paced, impactful projects
• Immediate start available with rapid interview process
Apply via Haystack today!
