

Vector Resourcing
Fabric Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Fabric Data Engineer; the contract length and pay rate are unspecified. Key skills include Microsoft Azure, PySpark, SQL, and data architecture. Experience with data pipelines, ETL processes, and API integration is required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 6, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Monitoring #Azure SQL #Data Quality #Azure #Azure SQL Database #Scala #Logging #API (Application Programming Interface) #Data Engineering #ETL (Extract, Transform, Load) #Data Pipeline #Documentation #Data Processing #Deployment #SQL (Structured Query Language) #Spark (Apache Spark) #Data Architecture #PySpark #Microsoft Azure
Role description
We are searching for a hands-on Data Engineer who will deliver high-quality, scalable data solutions across our client's Microsoft Azure ecosystem. This role involves building robust data pipelines within Microsoft Fabric, integrating external systems via APIs, and designing data flows aligned to the Medallion architecture to support analytics, integration, and reporting requirements.
Responsibilities
• Build and maintain data pipelines using Microsoft Fabric Data Factory, supporting database-based, file-based and API-based ingestion, transformation and orchestration
• Develop scalable data processing logic using PySpark and SQL for both Lakehouse and Warehouse workloads
• Design and implement data solutions aligned to the Medallion architecture (Bronze, Silver, Gold layers) to support analytics and reporting requirements
• Integrate external systems through secure and reliable API calls
• Ensure operational visibility across pipelines, including robust logging, monitoring and error-handling mechanisms
• Work with Azure SQL Database, OneLake and Lakehouse components to deliver efficient ingestion and transformation processes
• Automate infrastructure provisioning and configuration using Bicep (Infrastructure-as-Code)
• Contribute to and enhance CI/CD pipelines to enable automated deployment across Fabric workspaces and Azure environments
• Troubleshoot pipeline failures, resolve performance bottlenecks and address data quality issues
• Collaborate closely with data architects, analysts and engineering teams to deliver high-quality, production-ready data solutions
• Maintain comprehensive technical documentation, pipeline runbooks and governance guidelines
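To make the Medallion responsibility concrete, here is a minimal sketch of a Bronze → Silver → Gold flow. In Fabric this logic would run as PySpark against Lakehouse tables; plain Python dicts are used here purely to illustrate the layering, and every record field and name is a hypothetical assumption, not the client's actual schema.

```python
# Minimal stand-in for a Medallion-style flow (Bronze -> Silver -> Gold).
# Field names ("order_id", "amount", "region") are illustrative only.

def to_silver(bronze_rows):
    """Clean and deduplicate raw Bronze records into the Silver layer."""
    seen = set()
    silver = []
    for row in bronze_rows:
        key = row.get("order_id")
        if key is None or key in seen:
            continue  # drop malformed rows and exact-key duplicates
        seen.add(key)
        silver.append({
            "order_id": key,
            "amount": float(row["amount"]),            # enforce types
            "region": row["region"].strip().upper(),   # standardise values
        })
    return silver

def to_gold(silver_rows):
    """Aggregate Silver records into a reporting-ready Gold view."""
    totals = {}
    for row in silver_rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

bronze = [
    {"order_id": 1, "amount": "10.0", "region": " uk "},
    {"order_id": 1, "amount": "10.0", "region": " uk "},  # duplicate
    {"order_id": 2, "amount": "5.5", "region": "UK"},
    {"amount": "3.0", "region": "FR"},                    # malformed: no key
]
gold = to_gold(to_silver(bronze))
print(gold)  # {'UK': 15.5}
```

The same shape carries over to PySpark: Bronze ingests raw data as-is, Silver applies typing, standardisation and deduplication, and Gold holds the aggregates that analytics and reporting consume.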
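The operational-visibility and API-integration responsibilities typically combine into a retry-with-logging pattern around each external call. The sketch below shows one common shape; the backoff values, logger name and `flaky_fetch` helper are illustrative assumptions, not the client's actual setup.

```python
# Hedged sketch of logging + retry around an external API call.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingestion")

def call_with_retries(fetch, attempts=3, backoff_seconds=1.0):
    """Call `fetch` (any zero-argument API call), retrying on failure
    with exponential backoff and logging each outcome."""
    for attempt in range(1, attempts + 1):
        try:
            result = fetch()
            log.info("attempt %d succeeded", attempt)
            return result
        except Exception as exc:
            log.warning("attempt %d failed: %s", attempt, exc)
            if attempt == attempts:
                raise  # surface the failure to the pipeline's error handling
            time.sleep(backoff_seconds * 2 ** (attempt - 1))

# Usage with a deliberately flaky fake API call (hypothetical):
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return {"status": "ok"}

print(call_with_retries(flaky_fetch, attempts=5, backoff_seconds=0))
```

In a Fabric pipeline the logged attempts would feed whatever monitoring sink the platform is configured with, giving the "operational visibility" the role description asks for.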






