

SPECTRAFORCE
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Newark, NJ, for 6 months at a day rate of $600 (USD). Key skills include Microsoft Fabric, Power BI, AWS, and Medallion architecture. Experience in data modeling, ETL processes, and data governance is required.
Country
United States
Currency
$ USD
-
Day rate
600
-
Date
November 4, 2025
Duration
More than 6 months
-
Location
Hybrid
-
Contract
W2 Contractor
-
Security
Unknown
-
Location detailed
Newark, NJ
-
Skills detailed
#BI (Business Intelligence) #Spark (Apache Spark) #Metadata #Compliance #Databricks #Redshift #SQL (Structured Query Language) #Data Security #DevOps #Data Pipeline #ETL (Extract, Transform, Load) #Data Governance #PySpark #AWS (Amazon Web Services) #Dataflow #Microsoft Power BI #Security #Cloud #Lambda (AWS Lambda) #Data Quality #Snowflake #GDPR (General Data Protection Regulation) #DAX #S3 (Amazon Simple Storage Service) #Data Modeling #Azure #Visualization #Synapse #Scala #Data Engineering
Role description
Title: Data Engineer
Location: Newark, NJ - Hybrid onsite
Duration: 6 months, with likely extension or conversion to FTE
Overview
We are seeking a Data Engineer with deep experience in Microsoft Fabric, Power BI, and AWS environments. The role focuses on building scalable data pipelines, implementing Medallion architecture, and enabling governed analytics using Purview. You'll be responsible for data modeling, visualization, and ensuring seamless data flow across Azure and AWS platforms.
Key Responsibilities
• Design and implement data pipelines and ETL processes within Microsoft Fabric and AWS.
• Apply Medallion Architecture (Bronze → Silver → Gold) principles for data curation and optimization (a minimal PySpark sketch follows this list).
• Build and maintain semantic data models to power self-service analytics and Power BI dashboards.
• Leverage Azure Purview for data governance, cataloging, and lineage tracking.
• Collaborate across teams to integrate data from AWS-hosted systems (90% of the current environment) into Fabric.
• Develop reusable dataflows and maintain cross-cloud interoperability between AWS and Azure.
• Ensure data quality, security, and performance throughout the pipeline lifecycle.
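To give a feel for the Medallion layering mentioned above, here is a minimal PySpark sketch of a Bronze → Silver → Gold flow. It is an illustrative example only: the paths, table names, and column names (raw_orders, order_id, amount, order_ts) are hypothetical, the cleansing rules are placeholders, and it assumes a Delta-enabled runtime such as a Fabric or Databricks notebook.

```python
# Hypothetical Medallion (Bronze -> Silver -> Gold) sketch in PySpark.
# All paths, tables, and columns are placeholders, not real resources.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw source data as-is (no cleansing, schema-on-read).
bronze = spark.read.json("landing/raw_orders/")  # hypothetical landing path
bronze.write.format("delta").mode("append").save("tables/bronze/orders")

# Silver: cleanse and conform - dedupe, enforce types, drop bad rows.
silver = (
    spark.read.format("delta").load("tables/bronze/orders")
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount") > 0)
)
silver.write.format("delta").mode("overwrite").save("tables/silver/orders")

# Gold: business-level aggregate ready for Power BI semantic models.
gold = (
    silver.groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("daily_revenue"))
)
gold.write.format("delta").mode("overwrite").save("tables/gold/daily_revenue")
```

In practice each layer would typically be a separate scheduled pipeline stage rather than one script, but the layering idea is the same: raw in Bronze, conformed in Silver, reporting-ready in Gold.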
Required Skills
• Microsoft Fabric (Data Factory, Dataflows Gen2, Lakehouse).
• Power BI: strong in data modeling, DAX, and visualization.
• PySpark and SQL for data transformation and pipeline development.
• Experience with the AWS data stack (S3, Glue, Redshift, Lambda, etc.).
• Familiarity with Medallion architecture and data warehousing principles.
• Understanding of Purview for metadata and governance.
• Comfort working in hybrid or cross-cloud environments (AWS ↔ Azure); see the cross-cloud sketch after this list.
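As a rough illustration of the cross-cloud pattern, the sketch below reads Parquet data from an S3 bucket with PySpark and lands it as a Delta table that a Power BI semantic model could consume. Everything specific here is an assumption: the bucket name and table name are invented, the s3a connector must be available on the cluster, and credentials are taken from environment variables purely for brevity (a secrets store or Fabric shortcut would be the usual approach).

```python
# Hypothetical cross-cloud sketch: pull Parquet from AWS S3 and land it as a
# Delta table for downstream Power BI models. Bucket, keys, and table names
# are placeholders, not real resources.
import os
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-to-lakehouse-sketch").getOrCreate()

# Configure S3 access for the s3a connector (credentials assumed to come
# from environment variables or a secrets store, never hard-coded).
hconf = spark.sparkContext._jsc.hadoopConfiguration()
hconf.set("fs.s3a.access.key", os.environ["AWS_ACCESS_KEY_ID"])
hconf.set("fs.s3a.secret.key", os.environ["AWS_SECRET_ACCESS_KEY"])

# Read the AWS-hosted source data (hypothetical bucket and prefix).
orders = spark.read.parquet("s3a://example-source-bucket/orders/")

# Write into the Lakehouse as a managed Delta table so Power BI semantic
# models can pick it up.
orders.write.format("delta").mode("overwrite").saveAsTable("orders_bronze")
```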
Nice to Have
• Exposure to Databricks, Azure Synapse, or Snowflake.
• Experience with CI/CD pipelines and DevOps practices for data engineering.
• Background in data security and compliance (GDPR, SOC 2, etc.).
