

JSR Tech Consulting
Data Engineer - Fabric
Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer - Fabric contract position, hybrid in Newark, NJ; the pay rate is not disclosed. Key skills include data ingestion, ETL, data pipelines, and Scala. Experience in financial services is required.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
February 26, 2026
Duration
Unknown
Location
Hybrid
Contract
Unknown
Security
Unknown
Location detailed
New Jersey, United States
Skills detailed
#Data Engineering #Data Ingestion #"ETL (Extract, Transform, Load)" #Monitoring #Data Pipeline #Datasets #Scala
Role description
Contract position, hybrid in Newark, NJ, with a major financial firm.
Data Engineer
β’ Build and maintain data pipelines that collect, store, and transform data to support analytics use cases and business outcomes.
β’ Implement data ingestion and transformation workflows in Microsoft Fabric, using Fabric-native capabilities such as notebooks, pipelines, and lakehouse patterns.
• Develop and operationalize data solutions across lakehouse layers (e.g., landing and standardized "Bronze" data through curated "Silver/Gold" outputs) aligned to the platform's workspace architecture and OneLake design.
β’ Ensure data solutions are reliable and supportable by incorporating monitoring, issue resolution, and ongoing enhancements to pipelines and datasets.
β’ Collaborate across teams (engineering, analytics, product, and stakeholders) to translate data needs into scalable, reusable solutions and improved workflow efficiency.
β’ Support secure and appropriate use of Fabric assets by following established access and workspace practices.
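The Bronze → Silver → Gold layering named above is the "medallion" lakehouse pattern: raw landed data is progressively cleaned and curated. As a hedged sketch of one such promotion step, the snippet below uses plain Python dicts in place of a Fabric notebook and OneLake tables so it runs anywhere; the field names (`trade_id`, `amount_usd`, `booked_at`) and cleaning rules are illustrative assumptions, not from the posting.

```python
# Illustrative Bronze -> Silver promotion step from the medallion
# lakehouse pattern. In Microsoft Fabric this would typically run in a
# notebook or pipeline; plain Python stands in here. All field names
# and rules are hypothetical.
from datetime import datetime, timezone

def bronze_to_silver(bronze_rows):
    """Standardize raw 'Bronze' records into a 'Silver' set:
    drop rows missing the key, coerce types, and deduplicate."""
    seen = set()
    silver = []
    for row in bronze_rows:
        key = row.get("trade_id")
        if key is None or key in seen:
            continue  # reject malformed or duplicate records
        seen.add(key)
        silver.append({
            "trade_id": str(key),
            "amount_usd": float(row.get("amount_usd", 0.0)),
            "booked_at": row.get("booked_at", ""),
            # lineage column supports the monitoring/supportability bullet
            "_ingested_at": datetime.now(timezone.utc).isoformat(),
        })
    return silver

raw = [
    {"trade_id": 101, "amount_usd": "2500.00", "booked_at": "2026-02-25"},
    {"trade_id": 101, "amount_usd": "2500.00", "booked_at": "2026-02-25"},  # duplicate
    {"amount_usd": "13.37"},  # missing key -> dropped
]
print(len(bronze_to_silver(raw)))  # -> 1 clean record survives
```

A "Gold" step would then aggregate these standardized records into business-facing datasets; the same reject-coerce-dedupe shape applies at each layer boundary.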






