

Azure Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Engineer in Chicago (Hybrid) with a contract length of unspecified duration, offering a competitive pay rate. Requires 5+ years of experience in ETL, Azure services, and Microsoft Fabric, along with strong SQL and data modeling skills.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
September 27, 2025
Project duration
Unknown
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
Chicago, IL
Skills detailed
#Airflow #Data Mart #ETL (Extract, Transform, Load) #ADF (Azure Data Factory) #Datasets #SQL (Structured Query Language) #Scala #Data Governance #Azure #Data Quality #BI (Business Intelligence) #Data Engineering #Documentation #Data Warehouse #CRM (Customer Relationship Management) #Microsoft Azure #Synapse #Data Modeling #Microsoft Power BI #dbt (data build tool)
Role description
Job Title: Data Engineer – Azure & Fabric
Location: Chicago (Hybrid) – On-site cadence flexible; occasional travel to collaborate with partner teams and portfolio companies.
About The Company
A U.S.-based private equity firm focused on partnering with lower middle-market industrial growth businesses. The firm invests in high-performing companies that can benefit from operational, technology, and strategic resources to scale and thrive. Portfolio companies span essential industrial sectors, creating opportunities to apply shared data/technology solutions that drive measurable enterprise value.
Job Summary
We are seeking a hands-on Data Engineer to design and build a modern data platform in Microsoft Azure and Fabric. This role will focus on developing ETL pipelines, integrating key data sources (ERP, CRM, HRIS, subscription data), and building a scalable Data Mart to support analytics, reporting, and decision-making across the firm and its portfolio companies. The ideal candidate thrives in an individual contributor role, enjoys working in a multi-entity environment, and is passionate about building high-quality, reliable data solutions.
Key Responsibilities
• Design, build, and maintain ETL pipelines in Azure and Microsoft Fabric using a medallion architecture (bronze/silver/gold).
• Develop and optimize a centralized Data Mart to support business reporting, analytics, and self-service dashboards in Power BI.
• Integrate structured and unstructured data from ERP, CRM, HRIS, and subscription-based systems via APIs and connectors.
• Ensure data quality, lineage, and governance practices are followed to maintain trusted datasets.
• Automate workflows and monitor pipelines for reliability, scalability, and performance.
• Collaborate with analysts, business stakeholders, and portfolio companies to translate requirements into technical solutions.
• Document data models, processes, and pipelines to support long-term maintainability.
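To make the bronze/silver/gold terminology concrete, here is a deliberately tiny sketch of the medallion pattern. All records, field names, and cleaning rules are hypothetical; in the actual role these layers would live as Fabric Lakehouse tables populated by Azure/Fabric pipelines, not Python lists.

```python
# Bronze: raw records landed as-is from a source system (e.g. a CRM export).
# Values arrive as strings, untrimmed, with occasional bad rows.
bronze = [
    {"customer": " Acme Corp ", "amount": "1200.50", "date": "2025-01-15"},
    {"customer": "Acme Corp", "amount": "800.00", "date": "2025-02-10"},
    {"customer": None, "amount": "50.00", "date": "2025-02-11"},  # invalid: no customer
]

# Silver: cleaned and conformed — trim strings, cast types, drop invalid rows.
silver = [
    {"customer": r["customer"].strip(), "amount": float(r["amount"]), "date": r["date"]}
    for r in bronze
    if r["customer"] is not None
]

# Gold: a business-level aggregate ready for reporting, e.g. revenue by customer.
gold = {}
for row in silver:
    gold[row["customer"]] = gold.get(row["customer"], 0.0) + row["amount"]

print(gold)  # {'Acme Corp': 2000.5}
```

The point of the layering is that each stage is reproducible from the one before it: bronze preserves the raw source for auditability, silver holds trusted cleaned data, and gold serves the Power BI-facing marts.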
Requirements (Must-Haves)
• 5+ years of experience as a Data Engineer with strong expertise in ETL, Azure services, and Microsoft Fabric.
• Experience building Data Marts or Data Warehouses with medallion/lakehouse patterns.
• Strong SQL and data modeling skills with a focus on structured and semi-structured data.
• Hands-on experience integrating APIs and third-party data sources.
• Familiarity with Power BI and enabling data for self-service analytics.
• Strong problem-solving and documentation skills with a detail-oriented mindset.
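As a toy illustration of the semi-structured data modeling called for above, the snippet below flattens a nested JSON payload into relational rows of the kind a Data Mart table would hold. The payload shape and every field name are invented for the example, not taken from any real API.

```python
import json

# Hypothetical semi-structured response, e.g. from a subscription data source.
payload = json.loads("""
{
  "account": "A-100",
  "subscriptions": [
    {"plan": "basic", "seats": 5},
    {"plan": "pro", "seats": 2}
  ]
}
""")

# Flatten the nested array: one relational row per subscription,
# carrying the parent account key down to each child record.
rows = [
    {"account": payload["account"], "plan": s["plan"], "seats": s["seats"]}
    for s in payload["subscriptions"]
]

print(rows)
```

The same unnesting step is what SQL engines express with constructs like `CROSS APPLY`/`LATERAL` or `OPENJSON`, and is the bridge between API payloads and queryable warehouse tables.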
Nice-to-Haves
• Experience working with subscription data or harmonizing external datasets with internal sources.
• Exposure to data governance frameworks and best practices.
• Familiarity with modern approaches like dbt, orchestration tools (Airflow, ADF, Synapse Pipelines), or GenAI for data productivity.