Vivid Resourcing

Microsoft Fabric Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Microsoft Fabric Data Engineer on a 12-month remote contract, requiring 3+ years of data engineering experience, strong SQL and Python skills, and hands-on Microsoft Fabric expertise. Relevant certifications are preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
960
-
🗓️ - Date
March 13, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Agile #Storage #Data Processing #Microsoft Power BI #Scala #SQL (Structured Query Language) #Data Governance #ETL (Extract, Transform, Load) #Data Modeling #BI (Business Intelligence) #Data Engineering #Microsoft Azure #Spark (Apache Spark) #Data Analysis #Data Lake #Python #Data Lakehouse #Delta Lake #Data Storage #Data Science #Azure #Datasets #Cloud #Monitoring #PySpark #Databases #Data Pipeline
Role description
Microsoft Fabric Data Engineer

Job Summary
We are seeking an experienced Microsoft Fabric Data Engineer for a 12-month remote contract to design, develop, and maintain scalable data solutions using Microsoft Fabric. The ideal candidate will build data pipelines, manage data storage within OneLake, and support enterprise analytics initiatives. This role will collaborate closely with analytics, engineering, and business teams to deliver high-quality data solutions.

Contract Details
• Contract Type: 12-month contract
• Work Location: Remote
• Engagement: Full-time contract
• Start Date: Immediate / As soon as available

Key Responsibilities

Data Engineering & Pipeline Development
• Design, build, and maintain scalable ETL/ELT pipelines using Fabric Data Factory.
• Ingest data from multiple sources, including databases, APIs, and cloud platforms.
• Transform and process large datasets using Spark notebooks and SQL.

Data Storage & Architecture
• Implement and maintain Lakehouse architectures within Microsoft Fabric.
• Manage data storage and governance in OneLake.
• Optimize performance of data pipelines and queries.

Analytics Enablement
• Prepare and model data for reporting and dashboards in Microsoft Power BI.
• Support analytics teams with curated, high-quality datasets.

Monitoring & Optimization
• Monitor pipeline performance and troubleshoot issues.
• Ensure reliability, scalability, and efficiency of data workflows.

Collaboration
• Work with data analysts, data scientists, and stakeholders to understand business requirements.
• Participate in agile development processes and technical planning.
Required Qualifications
• 3+ years of experience in data engineering or analytics engineering
• Hands-on experience with Microsoft Fabric
• Strong SQL skills
• Experience with Python or PySpark
• Experience building ETL/ELT pipelines
• Knowledge of data lakehouse or warehouse architectures
• Experience working with Microsoft Azure services

Preferred Qualifications
• Experience with Spark, Delta Lake, or distributed data processing
• Familiarity with data modeling and data governance
• Experience building datasets for Power BI
• Relevant certifications such as Microsoft DP-600 or DP-203