

Haystack
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer; the contract length and pay rate are not specified. Key skills include Azure Data Factory, ETL processes, Python, SQL, and Spark. The position is fully remote and requires strong data engineering experience.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 29, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United Kingdom
-
🧠 - Skills detailed
#Data Pipeline #Databricks #Scala #ADF (Azure Data Factory) #Data Engineering #SQL (Structured Query Language) #Azure Data Platforms #Data Warehouse #Data Lake #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Azure Data Factory #Data Architecture #Azure #Cloud #Azure SQL #Python
Role description
We are working with a leading technology solutions provider specializing in cloud infrastructure and data analytics, delivering innovative, scalable platforms for their diverse client base.
The Role
• Design, build, and maintain scalable data pipelines
• Develop and manage ETL processes across Azure data platforms
• Work with Azure Data Factory, Azure Data Lake, and Azure SQL
• Collaborate with stakeholders to integrate data from multiple sources
• Support data warehouse development and optimisation
• Implement best practices across data engineering and pipeline performance
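The ETL responsibilities above follow the familiar extract–transform–load pattern. As a minimal sketch of that shape in Python (the function names, CSV layout, and field names here are illustrative assumptions, not part of the role; in practice the extract and load stages would be Azure Data Factory activities and an Azure SQL sink):

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    # Extract: parse raw CSV text into rows
    # (stands in for an ADF copy activity reading from a source).
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    # Transform: normalise types and drop incomplete records.
    cleaned = []
    for row in rows:
        if not row.get("order_id"):
            continue  # skip rows missing a key field
        cleaned.append({"order_id": row["order_id"], "amount": float(row["amount"])})
    return cleaned

def load(rows: list[dict]) -> dict:
    # Load: aggregate into a warehouse-style summary
    # (stands in for writing to an Azure SQL / data warehouse table).
    return {"row_count": len(rows), "total_amount": sum(r["amount"] for r in rows)}

# Hypothetical sample input: one row has no order_id and is filtered out.
raw = "order_id,amount\n1,10.50\n2,4.25\n,3.00\n"
summary = load(transform(extract(raw)))
```

The same three-stage structure scales up directly to Spark/Databricks jobs orchestrated by Data Factory, with each stage becoming a pipeline activity.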
What You'll Need
• Strong experience with Azure Data Factory, Azure Data Lake, and Azure SQL
• Solid background in ETL development and data pipeline engineering
• Proficiency in Python and SQL
• Experience with Spark and Databricks
• Strong understanding of data warehousing concepts
• Experience building and maintaining scalable data architectures
What's On Offer
• Opportunity to work on a fast-paced, impactful project
• Fully remote work flexibility
• Immediate start available
• Collaborative and innovative work environment
Apply via Haystack today!