

Remote - Senior Microsoft Fabric Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This is a long-term, fully remote contract for a Senior Microsoft Fabric Data Engineer, with pay rates dependent on experience. The role requires 8+ years in data warehousing and strong skills in Microsoft Fabric, SQL, PySpark, and data modeling.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 29, 2025
Project duration: Unknown
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed: #Dataflow #Azure Databricks #Version Control #Big Data #BI (Business Intelligence) #Azure #Data Governance #Data Modeling #Spark (Apache Spark) #Synapse #Databricks #Microsoft Power BI #ETL (Extract, Transform, Load) #Azure Data Factory #Data Engineering #PySpark #GIT #Documentation #DAX #Azure Synapse Analytics #Data Warehouse #KQL (Kusto Query Language) #ADF (Azure Data Factory) #Data Integration #SQL (Structured Query Language)
Role description
Role: Senior MS Fabric Engineer
Location: Remote
Duration: Long term
Rates: DOE
Prefer USC/GC (client is not sponsoring visas for this opportunity)
Experience:
• 8+ years of hands-on experience in designing, implementing, and supporting data warehousing and business intelligence solutions, with a strong focus on Microsoft Fabric or similar tools within the Microsoft data stack (e.g., Azure Synapse Analytics, Azure Data Factory, Azure Databricks).
• Strong proficiency with Microsoft Fabric components:
  • Data Engineering (Spark Notebooks, Spark Job Definitions)
  • Data Factory (Dataflows Gen2, Pipelines)
  • Data Warehouse / SQL Analytics Endpoints
  • OneLake and Lakehouse architecture
  • Power BI integration and DAX
• Solid understanding of data modeling (dimensional modeling, star schemas), ETL/ELT processes, and data integration patterns (see the PySpark sketch after this list).
• Proficiency in SQL, PySpark, and/or T-SQL.
• Familiarity with KQL (Kusto Query Language) is a plus.
• Experience with big data technologies like Spark.
• Strong understanding of data and analytics concepts, including data governance, data warehousing, and structured/unstructured data.
• Knowledge of software development best practices (e.g., code modularity, documentation, version control with Git).
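
For context on the Fabric, PySpark, and dimensional-modeling skills listed above, here is a minimal sketch of a Lakehouse dimension load as it might appear in a Fabric Spark notebook. It is illustrative only: the table and column names (raw_customers, dim_customer, customer_id, and so on) are hypothetical, and the cell assumes a Lakehouse is attached to the notebook so that managed tables are stored as Delta.

```python
# Minimal sketch: build a star-schema customer dimension in a Fabric Lakehouse.
# All table and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

# In a Fabric notebook a SparkSession named `spark` already exists;
# getOrCreate() reuses it (or creates one when run elsewhere).
spark = SparkSession.builder.getOrCreate()

# Read a raw table assumed to have been landed in the Lakehouse by a
# Data Factory pipeline or Dataflow Gen2.
raw = spark.read.table("raw_customers")

# Conform the data: deduplicate on the business key, standardize text
# columns, and stamp a load timestamp for auditability.
dim_customer = (
    raw.dropDuplicates(["customer_id"])
    .select(
        F.col("customer_id").cast("bigint").alias("customer_key"),
        F.initcap(F.trim(F.col("customer_name"))).alias("customer_name"),
        F.upper(F.col("country_code")).alias("country_code"),
    )
    .withColumn("load_ts", F.current_timestamp())
)

# Overwrite the dimension; in Fabric, managed Lakehouse tables are queryable
# from the SQL analytics endpoint and from Power BI.
dim_customer.write.mode("overwrite").saveAsTable("dim_customer")
```

The same load could equally be expressed in T-SQL against the Warehouse or orchestrated from a Data Factory pipeline; which tool is used in practice usually depends on transformation complexity and team preference.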