

MS Fabric Data Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for an MS Fabric Data Developer with a contract length of "unknown" and a pay rate of "unknown". Candidates must hold a Microsoft Fabric certification (mandatory) and have advanced PySpark skills, plus experience in ETL/ELT pipeline development and medallion architecture implementation.
Country
United Kingdom
Currency
£ GBP
-
Day rate
-
Date discovered
August 22, 2025
Project duration
Unknown
-
Location type
Unknown
-
Contract type
Unknown
-
Security clearance
Unknown
-
Location detailed
London Area, United Kingdom
-
Skills detailed
#Snowflake #Spark (Apache Spark) #PySpark #Data Engineering #ETL (Extract, Transform, Load) #Databricks #Data Governance #Data Lineage #Data Processing
Role description
MS Fabric certification is mandatory
Job Description:
Microsoft Fabric Expertise:
MANDATORY: Microsoft Fabric accreditation and certification
Proven experience with Fabric implementations (not Snowflake, Databricks, or other platforms)
Deep understanding of Fabric ecosystem components and best practices
Experience with medallion architecture implementation in Fabric
Technical Skills:
PySpark: Advanced proficiency in PySpark for data processing
Data Engineering: ETL/ELT pipeline development and optimization
Real-time Processing: Experience with streaming data and real-time analytics
Performance Tuning: Optimization of data models and query performance
Data Governance: Implementation of data lineage, cataloging, and quality frameworks