Haystack

Azure Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a senior-level Azure Data Engineer on a fully remote, outside IR35 contract. Key skills include Azure Data Factory, Databricks, Python, and SQL. Requires extensive experience in ETL development and scalable data architectures. Immediate start available.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 4, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
United Kingdom
-
🧠 - Skills detailed
#Synapse #Azure Data Factory #Python #ETL (Extract, Transform, Load) #Cloud #Data Engineering #Databricks #Azure #DevOps #Azure Synapse Analytics #Spark (Apache Spark) #SQL (Structured Query Language) #Data Lake #Data Pipeline #Deployment #Data Architecture #ADF (Azure Data Factory) #Scala #Apache Spark #Data Warehouse #Azure SQL #Data Processing
Role description
We're working with a dynamic digital transformation consultancy on this exciting opportunity. Are you a cloud data expert looking for a high-impact contract? We are seeking a senior-level Azure Data Engineer to spearhead the development of scalable, cloud-native data platforms and high-performance pipelines. You'll leverage the full power of Azure Data Factory, Databricks, and Spark to solve complex data challenges in a fast-paced environment.

The Role
• Architect and build robust, scalable ETL/ELT data pipelines using Azure Data Factory and Azure Data Lake.
• Drive data warehouse development and optimization using Azure SQL and Azure Synapse Analytics.
• Implement high-performance data processing solutions using Spark and Databricks clusters.
• Collaborate with cross-functional stakeholders to integrate diverse data sources into a unified cloud architecture.
• Champion engineering best practices by contributing to CI/CD pipelines and automated deployment processes.

What You'll Need
• Deep technical expertise across the Azure data stack, specifically Azure Data Factory, Azure Data Lake, and Azure SQL.
• Mastery of Python and SQL for advanced data engineering and complex ETL development.
• Extensive hands-on experience with Databricks and Apache Spark for large-scale data processing.
• A proven track record of building and maintaining scalable data architectures and modern data warehousing solutions.
• A strong understanding of DevOps practices and CI/CD integration within a data environment.

What's On Offer
• Outside IR35 contract status, providing professional autonomy.
• 100% fully remote working (must be UK-based).
• Immediate start, with a fast-paced interview and onboarding process.
• The opportunity to work on high-profile, cloud-first infrastructure projects.

Apply via Haystack today!