

Smart IT Frame LLC
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with Microsoft Fabric on a long-term remote contract in the USA. It requires 8+ years of data engineering/BI experience, including 2+ years with Microsoft Fabric, plus expertise in data warehousing and governance. The pay rate is unknown.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 11, 2026
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#AI (Artificial Intelligence) #DataOps #Requirements Gathering #Triggers #Leadership #Semantic Models #Dataflow #Azure #Databricks #Microsoft Power BI #GIT #ADLS (Azure Data Lake Storage) #Data Enrichment #Data Processing #PySpark #Synapse #DAX #DevOps #Automation #Spark (Apache Spark) #Data Engineering #Documentation #KQL (Kusto Query Language) #Unit Testing #Scala #Security #ETL (Extract, Transform, Load) #Storage #ADF (Azure Data Factory) #Metadata #Datasets #Data Governance #Classification #ML (Machine Learning) #SQL (Structured Query Language) #BI (Business Intelligence) #Compliance #SQL Queries #Delta Lake #GDPR (General Data Protection Regulation) #Databases
Role description
Position: Data Engineer with Microsoft Fabric
Remote (USA)
Duration: Long-term Contract
About Smart IT Frame:
At Smart IT Frame, we connect top talent with leading organizations across the USA. With over a decade of staffing excellence, we specialize in IT, healthcare, and professional roles, empowering both clients and candidates to grow together.
Job Description:
Key Responsibilities
Requirements Gathering & Solution Design
• Engage with business stakeholders to understand analytical, operational, and compliance needs.
• Translate business requirements into functional designs, source‑to‑target mappings, transformation logic, and technical specifications.
• Validate requirements against enterprise data models and recommend architecture patterns (Lakehouse, Warehouse, Real‑Time Hub).
Data Modeling & Semantic Layer
• Design, build, and govern Fabric Semantic Models (Direct Lake, Import, Hybrid modes).
• Define enterprise‑wide canonical models, shared dimensions, hierarchies, KPIs, and reusable DAX measures (a shared‑dimension sketch follows this list).
• Optimize semantic models for performance using aggregations, incremental refresh, and partitioning strategies.
• Implement certified datasets, semantic governance, and row‑level security in Fabric.
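For illustration, a minimal sketch of the shared-dimension idea, assuming a Fabric PySpark notebook and a hypothetical gold schema: a conformed date dimension built once as a Delta table, which Direct Lake semantic models can then bind to.

```python
# A minimal sketch, assuming a Fabric PySpark notebook: build a conformed
# (shared) date dimension as a gold-layer Delta table so that multiple
# Direct Lake semantic models bind to one definition. The "gold" schema,
# table name, and date range are hypothetical.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()  # predefined as `spark` in Fabric notebooks

# One row per calendar day for the modeled range.
dim_date = (
    spark.sql(
        "SELECT explode(sequence(to_date('2020-01-01'), "
        "to_date('2030-12-31'), interval 1 day)) AS date"
    )
    .withColumn("date_key", F.date_format("date", "yyyyMMdd").cast("int"))
    .withColumn("year", F.year("date"))
    .withColumn("quarter", F.quarter("date"))
    .withColumn("month", F.month("date"))
    .withColumn("month_name", F.date_format("date", "MMMM"))
)

# Persist as a Delta table that semantic models reference as a shared dimension.
dim_date.write.format("delta").mode("overwrite").saveAsTable("gold.dim_date")
```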
ETL/ELT Engineering
• Build ingestion and transformation processes using Data Factory pipelines, Dataflows Gen2, Warehouse pipelines, and PySpark notebooks.
• Maintain metadata‑driven ETL patterns and reusable frameworks for ingestion, harmonization, and transformation (see the sketch below).
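A minimal sketch of the metadata-driven pattern, assuming a Fabric PySpark notebook; all paths, formats, and names are hypothetical, and in practice the control metadata would come from a Lakehouse table rather than a literal list.

```python
# A minimal sketch of a metadata-driven ingestion loop: a control list
# describes each source, and one generic pattern lands them all as bronze
# Delta tables. All paths, formats, and table names are hypothetical; in
# practice the metadata would live in a control table, e.g.
# spark.table("control.ingestion_config").
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

sources = [
    {"name": "customers", "path": "Files/landing/customers", "format": "parquet"},
    {"name": "orders",    "path": "Files/landing/orders",    "format": "csv"},
]

for src in sources:
    df = (
        spark.read.format(src["format"])
        .option("header", "true")  # used by csv, ignored by parquet
        .load(src["path"])
    )
    # One reusable landing pattern instead of one hand-built notebook per source.
    df.write.format("delta").mode("overwrite").saveAsTable(f"bronze.{src['name']}")
```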
Notebook Engineering
• Use Fabric Notebooks for PySpark transformations, Delta Lake optimization (Z‑order, vacuuming, partitioning), data validation, and ML feature engineering (a maintenance sketch follows this list).
• Automate notebook execution via pipelines, triggers, and Fabric scheduling.
• Integrate notebooks with Lakehouse tables, Warehouse tables, and ML model outputs.
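The Delta maintenance tasks named above, sketched as notebook cells; the table and column names are hypothetical.

```python
# A minimal sketch of routine Delta maintenance from a Fabric notebook:
# compact small files, co-locate a frequently filtered column, and clean
# up aged snapshots. Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and Z-order by a common filter/join column.
spark.sql("OPTIMIZE gold.fact_sales ZORDER BY (customer_id)")

# Remove data files outside the retention window (168 hours = 7 days).
spark.sql("VACUUM gold.fact_sales RETAIN 168 HOURS")
```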
Near Real‑Time Data Processing
• Design and implement near‑real‑time ingestion pipelines using Fabric Real‑Time Hub, Event Streams, KQL Databases, and Streaming Dataflows.
• Build streaming transformations and real‑time analytical models leveraging KQL and PySpark Structured Streaming (see the sketch after this list).
• Optimize workloads for durability, recovery, and performance under high throughput.
• Deliver dashboards and semantic models supporting near‑real‑time refresh scenarios.
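A minimal PySpark Structured Streaming sketch of the append path, assuming JSON events landed in a Lakehouse folder; the schema, paths, and table names are hypothetical, and Event Streams can also route to Lakehouse or KQL destinations without custom code.

```python
# A minimal Structured Streaming sketch: read JSON events from a Lakehouse
# folder and append them to a Delta table that near-real-time reports can
# read. Paths, schema, and table names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

events = (
    spark.readStream.format("json")
    .schema("event_id STRING, event_ts TIMESTAMP, amount DOUBLE")
    .load("Files/landing/events")
)

(
    events.writeStream.format("delta")
    .option("checkpointLocation", "Files/checkpoints/events")  # enables restart/recovery
    .outputMode("append")
    .toTable("gold.fact_events_stream")
)
```

The checkpoint location is what gives the pipeline the durability and recovery behavior called out above: on restart, the stream resumes from the last committed offset instead of reprocessing or dropping events.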
Performance Optimization
• Optimize SQL queries, Delta tables, semantic models, DAX expressions, and Power BI datasets.
• Tune pipeline throughput, notebook execution, and refresh schedules.
• Improve Direct Lake performance by optimizing storage layouts, file size distributions, and column structures (a file‑size check is sketched after this list).
• Monitor workloads using Fabric capacity metrics and logs.
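A small sketch of the file-size angle, with a hypothetical table name and an illustrative threshold: inspect the table's layout with DESCRIBE DETAIL, then compact if files run small, since Direct Lake reads favor fewer, larger files.

```python
# A minimal sketch: check a Delta table's file-size distribution before
# deciding whether to compact. The table name and 32 MB threshold are
# illustrative, not product guidance.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

detail = spark.sql("DESCRIBE DETAIL gold.fact_sales").collect()[0]
avg_mb = detail["sizeInBytes"] / max(detail["numFiles"], 1) / (1024 * 1024)
print(f"{detail['numFiles']} files, avg {avg_mb:.1f} MB per file")

if avg_mb < 32:  # compact when average file size is small
    spark.sql("OPTIMIZE gold.fact_sales")
```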
Power BI Development
• Build enterprise‑grade Power BI dashboards integrated with centralized semantic models.
• Develop DAX calculations, KPIs, UX/UI standards, drill‑throughs, and row‑level security.
• Promote semantic model reuse and governed gold datasets.
Governance, Security & Compliance
• Implement governance and cataloging using Microsoft Purview for Fabric assets.
• Manage lineage tracking, glossary management, classification, and metadata enrichment.
• Define enterprise security controls (RBAC, masking, PII handling, encryption, retention); a masking sketch follows this list.
• Ensure compliance with GDPR, CCPA, HIPAA, SOX, and internal audit controls.
• Govern Fabric workspace structure, capacity usage, certification processes, and lifecycle management.
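One control from this list, masking, sketched in PySpark with hypothetical names; the surrounding RBAC, classification, and retention controls live in Fabric and Purview configuration rather than in notebook code.

```python
# A minimal sketch of one masking control: hash a direct identifier before
# publishing a curated table. Table and column names are hypothetical.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

customers = spark.table("silver.customers")
masked = (
    customers
    .withColumn("email_hash", F.sha2(F.col("email"), 256))  # one-way hash of the PII value
    .drop("email")  # raw identifier never reaches the published table
)
masked.write.format("delta").mode("overwrite").saveAsTable("gold.customers_masked")
```

A one-way hash preserves joinability across tables without exposing the raw value; scenarios that need reversible or display-friendly masking would use Purview policies or dynamic data masking instead.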
Technical Leadership & Program Delivery
• Lead data engineers, BI developers, and analysts across multiple initiatives.
• Review designs, source‑to‑target mappings (STTMs), code, semantic models, and performance benchmarks.
• Own sprint planning, estimation, milestone tracking, and stakeholder communication.
• Promote documentation, technical standards, reusable frameworks, and automation.
Required Skills & Experience
• 8+ years of data engineering/BI experience, including 2+ years in Microsoft Fabric.
• Expertise in data warehousing, dimensional modeling, semantic modeling, and data governance.
• Strong hands‑on experience with:
• Fabric Lakehouses & Warehouses
• Fabric Semantic Models (Direct Lake, Import, Hybrid)
• Real‑Time Hub, Event Streams, KQL Databases
• Notebooks & PySpark transformations
• Data Factory pipelines, Dataflows Gen2
• Power BI modeling, DAX, and report development
• Solid understanding of Delta Lake, Spark performance tuning, and workload optimization.
• Proven ability to implement source‑to‑target mappings, transformation logic, and validation rules (see the sketch below).
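A minimal sketch of validation rules of the kind an STTM might specify, assuming hypothetical bronze/gold tables: row-count reconciliation plus a not-null check on the business key.

```python
# A minimal validation sketch; table names, column names, and the rules
# themselves are hypothetical examples of STTM-specified checks.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

source_count = spark.table("bronze.orders").count()
target = spark.table("gold.fact_orders")

# Rule 1: no rows lost between layers.
assert target.count() == source_count, "row-count mismatch bronze -> gold"

# Rule 2: the business key must never be null after transformation.
nulls = target.filter(F.col("order_id").isNull()).count()
assert nulls == 0, f"{nulls} null order_id values in gold.fact_orders"
```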
Preferred Skills
• Experience with Azure Synapse, ADF, ADLS, Databricks.
• Knowledge of DataOps, CI/CD, Git, DevOps pipelines, and unit testing.
• Familiarity with Fabric AI Copilot for Power BI and AI‑driven accelerators.
Soft Skills
• Excellent communication and stakeholder management.
• Strong leadership and mentoring abilities.
• Problem‑solving mindset with focus on scalability, efficiency, and accuracy.
Mandatory Skills
• Microsoft Fabric – Warehousing
• Microsoft Fabric – Data Engineering
Apply today or share profiles at priya@smartitframe.com