

New York Technology Partners
Fabric Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Fabric Data Engineer in Phoenix, AZ, onsite for 5 months at a pay rate of "X". Requires 8+ years of experience, expertise in ELT/ETL with Microsoft Fabric, and familiarity with data engineering in the manufacturing domain.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: April 24, 2026
Duration: Unknown
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: Phoenix, AZ
Skills detailed: #Data Security #AWS (Amazon Web Services) #Computer Science #Data Engineering #Delta Lake #ADLS (Azure Data Lake Storage) #PySpark #"ETL (Extract, Transform, Load)" #Data Cleansing #Scala #SQL (Structured Query Language) #Azure #Monitoring #Spark SQL #Security #Automation #Cloud #Spark (Apache Spark) #Data Processing #Data Ingestion #Storage #Datasets
Role description
Position Title: Fabric Data Engineer
Location: Phoenix, AZ (Onsite 5 days)
Years of Experience: 8+ years
Responsibilities:
• Builds and maintains ELT/ETL pipelines using Microsoft Fabric tools, enabling efficient data ingestion from multiple sources.
• Applies transformations, cleanses, and enriches data so it is ready for analysis and reporting (see the sketch after this list).
• Handles large datasets, optimizing storage and retrieval for performance.
• Implements automation for data processing and integration workflows, reducing manual intervention.
• Works with Platform Architects to ensure infrastructure supports data requirements.
• Partners with report developers to ensure data is in a usable format and ready for analysis.
• Ensures code reusability and parameterization.
• Creates interactive, intuitive reports and dashboards using Microsoft Fabric's reporting tools.
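A minimal PySpark sketch of the kind of cleanse-and-enrich step described above, as it might run in a Microsoft Fabric notebook against Lakehouse data. The table and column names (raw_orders, order_id, plant_code, curated_orders) are hypothetical and not taken from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# In a Fabric notebook a SparkSession is already provided; getOrCreate()
# also works when running this sketch elsewhere.
spark = SparkSession.builder.getOrCreate()

# Read a raw Delta table from the Lakehouse (hypothetical name).
raw = spark.read.table("raw_orders")

# Cleanse: drop duplicates, remove rows missing the key, normalize a text column.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("plant_code", F.upper(F.trim(F.col("plant_code"))))
)

# Enrich: stamp a load date for downstream reporting.
enriched = clean.withColumn("load_date", F.current_date())

# Write the curated result back to the Lakehouse as a Delta table.
enriched.write.format("delta").mode("overwrite").saveAsTable("curated_orders")
```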
Qualifications:
• Data Factory (in Fabric): Designing and orchestrating data ingestion and transformation pipelines (ETL/ELT).
• Data Engineering Experience (Spark): Using Notebooks (PySpark, Spark SQL, Scala) and Spark Job Definitions for complex data processing, cleansing, enrichment, and large-scale transformations directly on OneLake data.
• Lakehouse Items: Creating and managing Lakehouse structures (Delta tables, files) as the primary landing and processing zone within OneLake.
• OneLake / ADLS Gen2: Understanding storage structures, Delta Lake format, partitioning strategies, and potentially managing Shortcuts (a partitioned-write sketch follows this list).
• Monitoring Hubs: Tracking pipeline runs and Spark job performance.
• Core Responsibilities (Fabric Context): Building ingestion pipelines from diverse sources; implementing data cleansing and quality rules; transforming raw data into curated Delta tables within Lakehouses or Warehouses; optimizing Spark jobs and data layouts for performance and cost; managing pipeline schedules and dependencies; ensuring data security and governance principles are applied to pipelines and data structures.
• Excellent communication and collaboration skills.
• Bachelor's degree in Computer Science, Engineering, or a relevant field.
• Azure/AWS cloud certifications are a plus.
• Experience in the manufacturing domain is a plus.
• Self-driven and able to drive projects to delivery.
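The OneLake / ADLS Gen2 bullet above calls out partitioning strategies; here is a hedged sketch of a date-partitioned Delta write in PySpark. The table and column names (curated_events, event_ts, events_by_date) are illustrative assumptions, not from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Assumed curated Delta table with an event_ts timestamp column.
events = spark.read.table("curated_events")

# Partition by a low-cardinality date column so queries filtering on
# event_date can prune files instead of scanning the whole table.
(
    events.withColumn("event_date", F.to_date("event_ts"))
          .write.format("delta")
          .mode("overwrite")
          .partitionBy("event_date")
          .saveAsTable("events_by_date")
)
```

The usual trade-off is to pick a partition column that matches common query filters while keeping its cardinality modest, so each partition still holds reasonably sized files.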






