New York Technology Partners

Fabric Data Engineer

โญ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Fabric Data Engineer in Phoenix, AZ (on-site, five days a week) at a pay rate of "X". It requires 8+ years of experience, expertise in ELT/ETL with Microsoft Fabric, and familiarity with data engineering in the manufacturing domain.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
๐Ÿ—“๏ธ - Date
April 24, 2026
🕒 - Duration
Unknown
-
๐Ÿ๏ธ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
๐Ÿ“ - Location detailed
Phoenix, AZ
-
🧠 - Skills detailed
#Data Security #AWS (Amazon Web Services) #Computer Science #Data Engineering #Delta Lake #ADLS (Azure Data Lake Storage) #PySpark #ETL (Extract, Transform, Load) #Data Cleansing #Scala #SQL (Structured Query Language) #Azure #Monitoring #Spark SQL #Security #Automation #Cloud #Spark (Apache Spark) #Data Processing #Data Ingestion #Storage #Datasets
Role description
Position Title: Fabric Data Engineer
Location: Phoenix, AZ (on-site, 5 days a week)
Years of Experience: 8+ years

Responsibilities
• Builds and maintains ELT/ETL pipelines using Microsoft Fabric tools, enabling efficient data ingestion from multiple sources.
• Applies transformations, cleanses, and enriches data to ensure it is ready for analysis and reporting.
• Handles large datasets, optimizing storage and retrieval for performance.
• Implements automation for data processing and integration workflows, reducing manual intervention.
• Works with platform architects to ensure infrastructure supports data requirements.
• Partners with report developers to ensure that data is in a usable format and ready for analysis.
• Ensures code reusability and parameterization.
• Creates interactive, intuitive reports and dashboards using Microsoft Fabric's reporting tools.

Qualifications:
• Data Factory (in Fabric): Designing and orchestrating data ingestion and transformation pipelines (ETL/ELT).
• Data engineering experience (Spark): Using notebooks (PySpark, Spark SQL, Scala) and Spark job definitions for complex data processing, cleansing, enrichment, and large-scale transformations directly on OneLake data (see the sketch after this list).
• Lakehouse items: Creating and managing Lakehouse structures (Delta tables, files) as the primary landing and processing zone within OneLake.
• OneLake / ADLS Gen2: Understanding storage structures, the Delta Lake format, partitioning strategies, and potentially managing shortcuts.
• Monitoring hubs: Tracking pipeline runs and Spark job performance.
• Core responsibilities (Fabric context): Building ingestion pipelines from diverse sources; implementing data cleansing and quality rules; transforming raw data into curated Delta tables within Lakehouses or Warehouses; optimizing Spark jobs and data layouts for performance and cost; managing pipeline schedules and dependencies; ensuring data security and governance principles are applied to pipelines and data structures.
• Excellent communication and collaboration skills.
• Bachelor's degree in Computer Science, Engineering, or a related field.
• Azure/AWS cloud certifications are a plus.
• Experience in the manufacturing domain is a plus.
• Self-driven, with the ability to drive projects to delivery.
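As a concrete illustration of the Spark-notebook work described above, here is a minimal PySpark sketch of a cleanse-and-curate step: read raw files landed in a Lakehouse, deduplicate and type the data, enrich it with a reference table, and write a date-partitioned Delta table. The paths, column names, and table names ("Files/raw/sensor_readings/", "machines_dim", "sensor_readings_curated") are hypothetical placeholders, not part of the posting; in a Fabric notebook the spark session and Delta support come from the runtime, so the explicit session setup below is only needed to make the sketch self-contained.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# In a Fabric notebook `spark` is provided; built here for a standalone run.
spark = SparkSession.builder.appName("curate-sensor-readings").getOrCreate()

# Ingest: raw JSON files landed in the Lakehouse "Files" zone (hypothetical path).
raw = spark.read.json("Files/raw/sensor_readings/")

# Cleanse: drop rows missing keys, deduplicate, and normalize types.
clean = (
    raw.dropna(subset=["machine_id", "reading_ts"])
       .dropDuplicates(["machine_id", "reading_ts"])
       .withColumn("reading_ts", F.to_timestamp("reading_ts"))
       .withColumn("reading_date", F.to_date("reading_ts"))
)

# Enrich: join a small reference table of machine metadata (hypothetical table).
machines = spark.table("machines_dim")
curated = clean.join(F.broadcast(machines), "machine_id", "left")

# Curate: persist a Delta table partitioned by a low-cardinality date column,
# so downstream reports and Spark jobs can prune partitions at read time.
(
    curated.write.format("delta")
           .mode("overwrite")
           .partitionBy("reading_date")
           .saveAsTable("sensor_readings_curated")
)

Partitioning the curated table by date is one example of the storage-layout and performance optimization the qualifications call out; the right partition column depends on the actual query patterns.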