

Jobs via Dice
Microsoft Fabric Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Microsoft Fabric Data Engineer, remote for 12 months at a competitive pay rate. Candidates must have strong experience in Microsoft Fabric, PySpark, SQL, and ERP data, preferably SAP, with a focus on knowledge transfer and mentoring.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: February 27, 2026
Duration: More than 6 months
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed: #Datasets #SQL (Structured Query Language) #SAP #Scala #Spark (Apache Spark) #Azure #Data Ingestion #PySpark #Data Engineering #Data Processing #ETL (Extract, Transform, Load)
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Activesoft, Inc., is seeking the following. Apply via Dice today!
Role: Microsoft Fabric Data Engineer
Start Date: ASAP
Duration: 12 months
Location: Remote (very limited travel)
We are seeking a hands-on Data Engineer with deep experience in Microsoft Fabric, PySpark, Spark SQL, and T-SQL to help implement, optimize, and operationalize our client's Fabric data platform. A key responsibility of this role will be knowledge transfer to our client's internal development team.
Key Responsibilities
• Build and maintain data ingestion and transformation pipelines within the Microsoft Fabric medallion architecture (Bronze → Silver → Gold).
• Develop scalable, efficient data processing using PySpark, Spark SQL, T-SQL, or other approaches as appropriate, and provide guidance on best practices for each.
• Support ingestion and transformation of ERP data (SAP and other core ERPs).
• Establish engineering best practices for Fabric performance, reliability, and cost optimization.
• Mentor and train client engineers so that experienced SQL developers can transition to Fabric/PySpark within the contract period.
• Document patterns, standards, and operational processes.
Required Experience
• Strong hands-on experience with Microsoft Fabric and PySpark.
• Solid background working with ERP datasets, ideally SAP.
• Prior experience in multinational enterprise environments.
• Strong SQL skills and the ability to translate relational thinking into modern data engineering patterns.
• Proven experience performing knowledge transfer to internal teams.
• Experience with Azure data services is a plus.
Note: Make sure the candidate is very strong in PySpark and SQL.