

Intent Talent Solutions
Data Engineer (MS Fabric)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (MS Fabric) in Chicago, IL (Hybrid) with a contract length of "unknown" and a pay rate of "unknown." Requires 3+ years of data engineering experience, strong SQL and PySpark skills, and familiarity with Microsoft Fabric.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: May 11, 2026
Duration: Unknown
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: Chicago, IL
Skills detailed: #ETL (Extract, Transform, Load) #Data Pipeline #Microsoft Power BI #SharePoint #XML (eXtensible Markup Language) #PySpark #Data Governance #Security #SQL Server #Data Engineering #Semantic Models #Databases #Spark (Apache Spark) #SQL (Structured Query Language) #BI (Business Intelligence) #Python #Spark SQL #Dataflow
Role description
Data Engineer - MS Fabric - Chicago, IL (Hybrid)
We're looking for a hands-on Data Engineer to help ingest, transform, manage, and deliver trusted data and insights across investment, finance, and risk teams as the organization modernizes its enterprise analytics platform with Microsoft Fabric.
This role sits within the Data Engineering team and focuses on building and operating lakehouse-based analytics solutions end to end.
What You'll Do
• Build data pipelines in Microsoft Fabric ingesting data from:
  • External files (CSV, XML, Excel, vendor feeds)
  • SQL Server databases
  • SharePoint and other enterprise sources
• Implement Bronze (raw), Silver, and Gold (business-ready) layers in a Fabric Lakehouse
• Design patterns and develop ETL/ELT processes
• Create semantic models for agentic frameworks and reporting
• Participate in Fabric environment management, including workspaces, capacities, access, and promotion across environments
• Apply data governance, quality, and security standards
• Collaborate with enterprise engineering and Investments business and technology teams
What You Bring
• 3+ years of experience building and implementing data engineering solutions
• Hands-on experience with Microsoft Fabric, including:
  • Lakehouse, OneLake, Pipelines, Notebooks, Dataflow Gen2, Power Query
  • Workspace, capacity, and environment management
• Strong SQL (T-SQL/Spark SQL)
• Strong PySpark/Python skills, including experience with DataFrames
• Experience building data models; experience with semantic models preferred
• Experience working with SQL Server and file-based data sources
• Strong collaboration and communication skills
Nice to Have
• Financial services or investment data experience
• Experience developing Power BI reports and Data Agents