

Vector Resourcing
Fabric Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Fabric Data Engineer on a contract basis, focusing on building scalable data solutions in Microsoft Azure. Key skills include Microsoft Fabric, PySpark, SQL, and API integration. Experience with Medallion architecture and CI/CD pipelines is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
496
-
🗓️ - Date
March 4, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Azure SQL Database #PySpark #"ETL (Extract, Transform, Load)" #Azure SQL #Documentation #Deployment #Spark (Apache Spark) #SQL (Structured Query Language) #API (Application Programming Interface) #Data Pipeline #Data Architecture #Scala #Monitoring #Logging #Data Quality #Data Engineering #Microsoft Azure #Data Processing #Azure
Role description
We are searching for a hands-on Data Engineer who will deliver high-quality, scalable data solutions across our client's Microsoft Azure ecosystem. This role involves building robust data pipelines within Microsoft Fabric, integrating external systems via APIs, and designing data flows aligned to the Medallion architecture to support analytics, integration, and reporting requirements.
Responsibilities
• Build and maintain data pipelines using Microsoft Fabric Data Factory, supporting database-based, file-based and API-based ingestion, transformation and orchestration
• Develop scalable data processing logic using PySpark and SQL for both Lakehouse and Warehouse workloads
• Design and implement data solutions aligned to the Medallion architecture (Bronze, Silver, Gold layers) to support analytics and reporting requirements
• Integrate external systems through secure and reliable API calls
• Ensure operational visibility across pipelines, including robust logging, monitoring and error-handling mechanisms
• Work with Azure SQL Database, OneLake and Lakehouse components to deliver efficient ingestion and transformation processes
• Automate infrastructure provisioning and configuration using Bicep (Infrastructure-as-Code)
• Contribute to and enhance CI/CD pipelines to enable automated deployment across Fabric workspaces and Azure environments
• Troubleshoot pipeline failures, resolve performance bottlenecks and address data quality issues
• Collaborate closely with data architects, analysts and engineering teams to deliver high-quality, production-ready data solutions
• Maintain comprehensive technical documentation, pipeline runbooks and governance guidelines
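The Medallion layering named in the responsibilities (Bronze, Silver, Gold) can be sketched in plain Python. In the role this logic would be PySpark DataFrame code inside a Fabric Lakehouse; the `order_id`/`amount` fields and cleansing rules below are hypothetical illustrations, not part of the listing:

```python
# Illustrative Bronze -> Silver step of a Medallion flow.
# Plain Python dicts stand in for PySpark DataFrames so the
# sketch is self-contained; the cleansing rules are assumptions.

def to_silver(bronze_rows):
    """Cleanse raw Bronze records into a conformed Silver layer:
    drop rows missing the key, normalise types, deduplicate on id."""
    seen = set()
    silver = []
    for row in bronze_rows:
        if row.get("order_id") is None:
            continue  # skip incomplete records (would be quarantined)
        if row["order_id"] in seen:
            continue  # deduplicate on the business key
        seen.add(row["order_id"])
        silver.append({
            "order_id": int(row["order_id"]),
            "amount": float(row.get("amount", 0)),
        })
    return silver

bronze = [
    {"order_id": "1", "amount": "9.99"},
    {"order_id": None, "amount": "5"},    # dropped: missing key
    {"order_id": "1", "amount": "9.99"},  # dropped: duplicate
]
print(to_silver(bronze))  # [{'order_id': 1, 'amount': 9.99}]
```

A Gold step would then aggregate the Silver output into reporting-ready tables.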
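The API-integration and error-handling bullets imply a retry-with-logging pattern around external calls. A minimal sketch, assuming a generic `fetch` callable and arbitrary retry counts (none of these names come from the listing):

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingest")

def call_with_retries(fetch, attempts=3, backoff_s=0.0):
    """Invoke an API callable, logging each failure and retrying.
    Re-raises the last exception if every attempt fails."""
    for attempt in range(1, attempts + 1):
        try:
            return fetch()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
            if attempt == attempts:
                raise  # surface the failure to the pipeline's error handling
            time.sleep(backoff_s)  # real pipelines would back off exponentially

# Usage with a fake endpoint that fails twice, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return {"status": "ok"}

print(call_with_retries(flaky))  # {'status': 'ok'}
```

In a Fabric pipeline the log lines would feed whatever monitoring sink the platform is configured with, giving the operational visibility the role calls for.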






