
Microsoft Fabric Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Microsoft Fabric Data Engineer, offering a 6-month hybrid contract in Newark, New Jersey. Candidates need 7+ years in Azure, 5+ years in data platforms, and relevant certifications. Proficiency in Python, ETL processes, and Agile methodologies is essential.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 12, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: Newark, NJ
Skills detailed
#Data Accuracy #Programming #ETL (Extract, Transform, Load) #Data Pipeline #Microsoft Azure #Data Architecture #DevOps #Storage #Data Processing #Azure ADLS (Azure Data Lake Storage) #SQL (Structured Query Language) #Data Lake #Automation #Azure #Computer Science #Jira #Azure Synapse Analytics #Azure Data Factory #Data Modeling #Big Data #Python #Scala #Database Systems #ADLS (Azure Data Lake Storage) #Data Migration #Data Storage #Data Management #Project Management #Synapse #Data Engineering #Cloud #Apache Spark #Migration #Spark (Apache Spark) #Azure SQL #PySpark #Agile #BI (Business Intelligence) #Azure SQL Database #Scrum #ADF (Azure Data Factory)
Role description
Microsoft Fabric Data Engineer - 6-month hybrid full-time role.
Our client is seeking a highly experienced Microsoft Fabric Data Engineer to design, build, and optimize our enterprise data and analytics platform. The ideal candidate will be an expert in the Microsoft Azure ecosystem, with deep hands-on experience in data management, ETL/ELT processes, and big data technologies. You will be instrumental in migrating and modernizing our data infrastructure, ensuring it is scalable, performant, and cost-effective to support advanced analytics and business intelligence.
Key Responsibilities:
• Architect & Implement Data Solutions: Design, implement, and manage scalable data storage and processing systems using Azure services including Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database.
• Develop Data Pipelines: Build, maintain, and orchestrate efficient, scalable, and reliable ETL/ELT processes using Azure Data Factory and Apache Spark (PySpark) to integrate data from diverse sources.
• Drive Platform Migration: Lead and contribute to the build-out of new applications and the migration of existing ones to a modern cloud-based data architecture within an Agile (SAFe) framework.
• Optimize Performance: Proactively monitor, troubleshoot, and optimize data pipelines and database systems for maximum performance, efficiency, and cost management.
• Collaborate in Agile Teams: Work closely with Scrum Masters, Product Owners, and cross-functional teams to deliver high-value features and meet sprint objectives.
• Ensure Best Practices: Document data models, engineering processes, and pipelines to ensure clarity, transparency, and long-term maintainability.
• Interface Development: Build and maintain robust data interfaces to support new applications and accommodate evolving data sources and types.
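To give a concrete sense of the ETL/ELT work described above, here is a minimal, illustrative sketch of an extract-transform-load step. It uses only the Python standard library (in practice this role would use Azure Data Factory and PySpark against Azure Data Lake Storage), and all field names are hypothetical:

```python
import csv
import io
import json

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries (stand-in for a source connector)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: normalize types and drop incomplete records."""
    out = []
    for row in rows:
        if not row.get("amount"):
            continue  # skip rows missing the (hypothetical) amount field
        out.append({
            "customer": row["customer"].strip().title(),
            "amount": float(row["amount"]),
        })
    return out

def load(rows: list[dict]) -> str:
    """Load: serialize curated records (a real pipeline would write to ADLS or a warehouse)."""
    return json.dumps(rows)

raw = "customer,amount\n alice ,10.5\nbob,\ncarol,7"
curated = json.loads(load(transform(extract(raw))))
print(curated)
```

The same extract/transform/load shape scales up in PySpark, where each stage becomes a DataFrame read, transformation, and write orchestrated by an Azure Data Factory pipeline.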
Must-Have Qualifications:
• Location: Must be local to Newark, New Jersey or the surrounding area.
• Azure Expertise: 7+ years of hands-on experience designing and implementing solutions within Microsoft Azure.
• Data Platform Focus: 5+ years of proven experience specifically building and optimizing data and analytics platforms on Azure.
• Certification: Must possess either the DP-600 (Microsoft Fabric Analytics Engineer) or DP-203 (Azure Data Engineer Associate) certification. Direct hands-on experience with Microsoft Fabric is also highly valued.
• Technical Proficiency:
• Expert-level knowledge of Azure Data Factory, Azure Synapse Analytics, Azure SQL Database, and Azure Data Lake Storage (Gen2).
• 3+ years of strong programming experience with Python and PySpark for data processing and automation.
• Solid experience with cloud-based data migration projects.
• Agile Methodology: Demonstrated experience working in an Agile environment (Scrum/SAFe), collaborating with Scrum Masters and Product Owners to deliver on commitments.
• Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
Preferred Skills:
• Proficiency with project management and collaboration tools such as Jira and Confluence.
• In-depth understanding of SAFe (Scaled Agile Framework) principles and DevOps practices.
• Strong knowledge of data modeling, data warehousing concepts, and best practices.
• Excellent problem-solving and analytical skills, with a proven ability to perform root cause analysis and implement effective solutions.
• Meticulous attention to detail and an unwavering commitment to data accuracy and quality.
What's on Offer:
• A competitive salary and comprehensive benefits package.
• The opportunity to work on a high-impact, large-scale migration project with modern technologies.
• A collaborative and innovative work environment that fosters professional growth.
• The chance to be a key player in shaping the data-driven future of our company.
How to Apply:
If you are a local New Jersey candidate who meets all the "Must-Have" qualifications and is excited to take on this challenge, please apply with your updated resume detailing your relevant experience.
Equal Opportunity Employer
Fortis Hayes is an Equal Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.