Arkhya Tech. Inc.

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a remote Microsoft Fabric Data Engineer; the contract length and pay rate are unspecified. It requires 3+ years of data engineering experience and proficiency in Microsoft Fabric, SQL, Python, and ETL/ELT pipelines.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
March 7, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Scrum #Spark SQL #Synapse #Scala #Data Quality #Azure Data Factory #Security #Databricks #"ETL (Extract, Transform, Load)" #Azure DevOps #Metadata #ADF (Azure Data Factory) #BI (Business Intelligence) #Dataflow #GitHub #Spark (Apache Spark) #SQL (Structured Query Language) #DevOps #Semantic Models #Python #Microsoft Power BI #Data Pipeline #Agile #Data Governance #Azure #Data Engineering
Role description
Job Title: Microsoft Fabric Data Engineer
Location: Remote
Experience Level: Mid–Senior Level

Job Description
We are looking for a Microsoft Fabric Data Engineer to design, build, and optimize data pipelines within the Microsoft Fabric ecosystem to support analytics, reporting, and business intelligence initiatives.

Key Responsibilities
• Design and maintain data pipelines using Microsoft Fabric (Dataflows Gen2, Pipelines, Lakehouses, Warehouses)
• Integrate data from multiple sources into OneLake and Fabric Lakehouses
• Develop metadata-driven and parameterized pipelines for scalable solutions
• Build semantic models for Power BI to support analytics and reporting
• Ensure data quality, governance, and security practices are followed
• Automate data processes using Python, Spark SQL, or Databricks
• Implement CI/CD practices using Azure DevOps or GitHub

Required Skills
• 3+ years of data engineering experience
• Hands-on experience with Microsoft Fabric components and Power BI
• Strong knowledge of SQL, Python, or Spark
• Experience with ETL/ELT pipelines and data warehousing
• Familiarity with Azure Data Factory, Synapse, or Databricks
• Experience working in Agile/Scrum environments

Preferred Skills
• Experience with Databricks or Spark SQL
• Knowledge of data governance tools (Purview, RBAC)
• Experience with row-level security and data partitioning