

Morph Enterprise
Microsoft Fabric Data Engineer (UT Local Only - USC Only)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Microsoft Fabric Data Engineer, requiring a Bachelor's degree and 3+ years of Azure-focused data engineering experience. Contract length and pay rate are unspecified, and the role is limited to local USC candidates. Key skills include Microsoft Fabric, Azure Data Factory, SQL, and Python.
Country
United States
Currency
$ USD
-
Day rate
624
-
Date
November 7, 2025
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Salt Lake City, UT
-
Skills detailed
#Spark (Apache Spark) #Dataflow #Storage #ETL (Extract, Transform, Load) #DevOps #Azure #SQL (Structured Query Language) #Data Governance #Data Transformations #Datasets #Microsoft Power BI #Compliance #PySpark #Synapse #Delta Lake #Data Integration #Data Lake #Azure DevOps #Data Ingestion #BI (Business Intelligence) #Data Engineering #Python #ADF (Azure Data Factory) #Azure Data Factory #Scala #Azure SQL #Data Pipeline #Documentation #Azure Synapse Analytics #Cloud #GitHub #Computer Science #Security
Role description
Key Responsibilities:
• Design, implement, and manage end-to-end data pipelines using Microsoft Fabric and Azure Data Factory (see the ingestion sketch after this list).
• Develop data models and analytical datasets using Microsoft Fabric's OneLake, Power BI, and Dataflows.
• Integrate and optimize data ingestion, transformation, and storage across Azure services (e.g., Synapse, Data Lake, SQL, Delta Lake).
• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
• Ensure best practices in data governance, security, compliance, and performance tuning.
• Monitor and troubleshoot data pipelines, ensuring reliability and scalability.
• Stay updated with Microsoft Fabric and Azure ecosystem advancements and propose improvements.
• Create and maintain documentation for data flows, architecture, and configuration settings.
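For context on the pipeline work described above, here is a minimal, illustrative sketch of an ingestion step in a Microsoft Fabric lakehouse notebook using PySpark. The source path, column handling, and target table name (Files/raw/sales/, sales_raw) are hypothetical placeholders, not details from this posting.

```python
# Illustrative sketch only: ingest raw CSV files from a Fabric lakehouse
# "Files" area into a Delta table. Paths and names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # pre-configured in Fabric notebooks

# Read raw CSV files landed in the lakehouse Files area.
raw = (
    spark.read
         .option("header", "true")
         .option("inferSchema", "true")
         .csv("Files/raw/sales/")
)

# Light cleanup: normalize column names and stamp the load time.
cleaned = (
    raw.toDF(*[c.strip().lower().replace(" ", "_") for c in raw.columns])
       .withColumn("loaded_at", F.current_timestamp())
)

# Persist as a Delta table so downstream SQL endpoints and Power BI can query it.
cleaned.write.format("delta").mode("overwrite").saveAsTable("sales_raw")
```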
Required Qualifications:
• Bachelor's degree in Computer Science, Information Systems, or a related field.
• 3+ years of experience in data engineering or cloud engineering with a focus on Azure.
• Hands-on experience with Microsoft Fabric, including Data Engineering, Data Factory, and Power BI within Fabric.
• Proficiency in Azure Synapse Analytics, Azure Data Lake, Delta Lake, and Azure SQL.
• Strong knowledge of SQL/T-SQL and Python or PySpark for data transformations (see the transformation sketch after this list).
• Experience with data integration, ETL/ELT pipelines, and data warehousing concepts.
• Familiarity with DevOps practices and tools like Azure DevOps or GitHub Actions.
• Excellent problem-solving and communication skills.
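As an illustration of the SQL/PySpark transformation skills listed above, the sketch below performs an incremental upsert from a raw Delta table into an analytical dimension table. Table and column names (sales_raw, dim_customer, customer_id, loaded_at) are assumptions for the example, and it presumes the Delta Lake Python API (delta-spark) bundled with the Fabric Spark runtime.

```python
# Illustrative sketch only: incremental upsert (MERGE) into a dimension table.
# All table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Keep only the most recent record per customer from the raw feed.
w = Window.partitionBy("customer_id").orderBy(F.col("loaded_at").desc())
latest = (
    spark.table("sales_raw")
         .withColumn("rn", F.row_number().over(w))
         .filter("rn = 1")
         .select("customer_id", "customer_name", "region")
)

# Upsert into the dimension table so reruns stay idempotent.
dim = DeltaTable.forName(spark, "dim_customer")
(
    dim.alias("t")
       .merge(latest.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute()
)
```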






