Bayforce

Senior Data Engineer - Synapse and Fabric

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with 5+ years of Azure experience, focused on migrating data workloads from Azure Synapse to Microsoft Fabric. The contract runs 8+ months, the pay rate is undisclosed, and the work is remote in the ET or CT time zones.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 4, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#ADLS (Azure Data Lake Storage) #Dataflow #Synapse #Azure Data Factory #API (Application Programming Interface) #ETL (Extract, Transform, Load) #PySpark #Cloud #Monitoring #Regression #Data Engineering #Spark SQL #Azure #Storage #Logging #Migration #Azure Synapse Analytics #Data Storage #Spark (Apache Spark) #SQL (Structured Query Language) #REST (Representational State Transfer) #Data Lake #Security #Data Architecture #ADF (Azure Data Factory) #Azure cloud #Data Quality #Computer Science #Compliance #Data Processing
Role description
Role Title: Senior Data Engineer
Employment Type: Contract
Duration: 8+ months initially, plus extensions (2-2.5 year project)
Preferred Location: Remote, ET or CT time zone

The Senior Data Engineer will play a hands-on, delivery-focused role in migrating enterprise data workloads from Azure Synapse Analytics to Microsoft Fabric. This role is deeply technical and execution-oriented, responsible for refactoring, rebuilding, and optimizing existing Synapse-based pipelines, Spark workloads, and SQL models into Fabric-native Lakehouse and Warehouse architectures. The engineer will work across the ingestion, transformation, orchestration, and serving layers, with heavy emphasis on Microsoft Fabric (OneLake, Lakehouse, Fabric Pipelines, notebooks) while leveraging prior Synapse and Azure Data Factory patterns during migration. The ideal candidate is a strong individual contributor who has executed real-world platform migrations and can operate independently in a fast-moving environment.

Responsibilities:
• Individually execute hands-on migration of enterprise data workloads from Azure Synapse Analytics to Microsoft Fabric, including pipelines, Spark notebooks, and SQL workloads.
• Collaborate closely with Data Architects and fellow Data Engineers to translate the target-state architecture into working Fabric solutions and ensure consistent implementation across migration efforts.
• Review existing Synapse implementations (ADF/Synapse pipelines, Spark pools, SQL pools) and design, refactor, and implement Fabric-native equivalents using Lakehouse and Warehouse architectures (Bronze/Silver/Gold).
• Migrate data storage from ADLS Gen2 to OneLake, ensuring data parity, performance, security, and governance controls.
• Build, optimize, and maintain end-to-end ELT/ETL pipelines using Microsoft Fabric Pipelines, Fabric notebooks, PySpark, SQL, and Azure Data Factory where required.
• Develop and tune PySpark/Spark transformations for large-scale data processing, including incremental loads, partitioning strategies, joins, and file optimization.
• Implement and support API-heavy ingestion frameworks (REST/SOAP), handling authentication, pagination, throttling, retries, and robust error handling.
• Troubleshoot and resolve migration defects, performance regressions, and functional gaps between Synapse and Fabric.
• Implement monitoring, logging, alerting, and operational runbooks for both migrated and net-new Fabric workloads.
• Apply security, governance, and data quality standards across pipelines, including access controls, validation checks, lineage, and compliance requirements.
• Contribute to and follow established engineering standards.

Requirements:
• Bachelor's degree (or equivalent experience) in Computer Science, Engineering, or a related field.
• 5+ years of hands-on data engineering experience in Azure cloud environments.
• Strong, practical experience with Azure Synapse Analytics, including:
  • Synapse Pipelines and/or Azure Data Factory
  • Synapse Spark pools
  • Dedicated and/or Serverless SQL pools
• Direct, hands-on experience migrating or modernizing data platforms, preferably from Synapse to Microsoft Fabric.
• Hands-on Microsoft Fabric experience is REQUIRED, including:
  • Fabric Lakehouse and Warehouse
  • OneLake
  • Fabric Pipelines
  • Fabric Notebooks
  • Dataflows Gen2
• Demonstrated experience rebuilding Azure Synapse patterns into Fabric-native solutions (not lift-and-shift).
• Advanced PySpark/Spark experience for complex transformations and performance tuning.
• Strong SQL expertise and data warehousing concepts (dimensional modeling, incremental processing, data quality checks).
• Deep understanding of Data Lake, Lakehouse, and Warehouse architectures.
• Proven ability to troubleshoot and optimize complex production data workflows.
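For candidates gauging the API-ingestion responsibility, the pagination/throttling/retry pattern it describes can be sketched in a few lines of Python. This is a generic illustration, not code from the employer: the page-numbered `{"items": [...], "next": bool}` payload shape and the injectable `fetch_page` callable are assumptions made for the example.

```python
import time
from typing import Callable, Iterator


def paged_ingest(
    fetch_page: Callable[[int], dict],
    max_retries: int = 3,
    backoff_seconds: float = 1.0,
) -> Iterator[dict]:
    """Yield records from a page-numbered API with retry and backoff.

    `fetch_page(page)` is assumed to return {"items": [...], "next": bool};
    this payload shape is illustrative, not a real Fabric or Azure API.
    """
    page = 0
    while True:
        for attempt in range(max_retries):
            try:
                payload = fetch_page(page)
                break
            except ConnectionError:
                if attempt == max_retries - 1:
                    raise
                # Exponential backoff absorbs throttling (e.g. HTTP 429).
                time.sleep(backoff_seconds * 2 ** attempt)
        yield from payload["items"]
        if not payload.get("next"):
            return
        page += 1
```

Injecting the fetch function keeps authentication and transport concerns (tokens, SOAP vs. REST clients) out of the paging loop, which makes the retry logic unit-testable with a fake fetcher.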
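The incremental loads mentioned under the PySpark requirements usually combine a high-watermark filter with an upsert into the target table. A pure-Python sketch of that pattern follows; the dict-based "tables", the `id`/`updated_at` column names, and the function itself are illustrative assumptions. In a Fabric Lakehouse this would typically be a PySpark `filter` on the watermark column followed by a Delta `MERGE`.

```python
def incremental_upsert(target, source, watermark, key="id", ts="updated_at"):
    """Merge only source rows newer than the last-seen watermark.

    `target` and `source` are lists of dicts standing in for tables;
    returns (new_target_rows, new_watermark). Illustrative only: in a
    Lakehouse this maps to df.filter(col(ts) > watermark) plus MERGE.
    """
    # 1. Watermark filter: skip rows already loaded in earlier runs.
    fresh = [row for row in source if row[ts] > watermark]

    # 2. Upsert by business key: new rows insert, existing rows update.
    merged = {row[key]: row for row in target}
    for row in fresh:
        merged[row[key]] = row

    # 3. Advance the watermark to the newest timestamp seen.
    new_watermark = max([watermark] + [row[ts] for row in fresh])
    return list(merged.values()), new_watermark
```

The same three steps (filter, merge, advance watermark) apply regardless of engine; only the storage of the watermark (a metadata table, pipeline variable, etc.) changes between Synapse and Fabric.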