Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Data Engineer on a 6-month Inside IR35 contract, offering competitive pay. Key skills include Azure Data Factory, Synapse, SQL, and ETL pipeline development. Experience with data ingestion and governance is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date discovered
May 14, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Inside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
Birmingham, England, United Kingdom
🧠 - Skills detailed
#Infrastructure as Code (IaC) #Databricks #Datasets #Scala #API (Application Programming Interface) #Azure #Data Architecture #Storage #Data Warehouse #AWS (Amazon Web Services) #Data Governance #Azure Databricks #Microsoft Power BI #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #ADF (Azure Data Factory) #Data Ingestion #Data Lake #Synapse #Python #Spark (Apache Spark) #Data Engineering #BI (Business Intelligence) #DevOps #Security #Azure Data Factory
Role description
Azure Data Engineer – Inside IR35 (Contract)

We’re seeking an experienced Azure Data Engineer to join a dynamic project team delivering cutting-edge data solutions. This is an Inside IR35 contract role, ideal for a proactive engineer with deep knowledge of Azure-based data architecture and development.

Key Responsibilities:
• Design and build scalable ETL pipelines using Azure services (ADF, Synapse, Data Lake, SQL, etc.).
• Ingest, process, and transform large datasets into AWS Data Lake or similar platforms.
• Collaborate with data architects, analysts, and stakeholders to define requirements and deliver efficient solutions.
• Ensure high-quality data through robust validation and cleansing processes.
• Monitor, troubleshoot, and optimize data workflows for performance and reliability.
• Implement DevOps best practices, CI/CD, and infrastructure as code (IaC) where possible.

Key Skills:
• Strong hands-on expertise in Azure Data Factory, Synapse, SQL, Data Lake, and Blob Storage.
• Experience with API/streaming data ingestion, security models, and data governance.
• Familiarity with Azure Databricks, Power BI, HDInsight/Spark, and Stream Analytics is a plus.
• Solid understanding of data warehouse modelling (Kimball, Inmon).
• Python proficiency and knowledge of DevOps/CI-CD pipelines are desirable.