Lorven Technologies Inc.

Data Engineer (AWS + Azure Fabric + Power BI)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer (AWS + Azure Fabric + Power BI) contract position in Chicago, IL, or Phoenix, AZ. Key skills include Data Factory, ELT/ETL, and Spark; experience in the Manufacturing domain is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 21, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Chicago, IL
-
🧠 - Skills detailed
#ADLS (Azure Data Lake Storage) #Datasets #Scala #Spark SQL #Cloud #Monitoring #Spark (Apache Spark) #Security #Delta Lake #PySpark #BI (Business Intelligence) #SQL (Structured Query Language) #Storage #Microsoft Power BI #Data Cleansing #Data Ingestion #Azure #Data Security #ETL (Extract, Transform, Load) #AWS (Amazon Web Services) #Automation #Data Processing #Data Engineering #Computer Science
Role description
Title: Data Engineer (AWS + Azure Fabric + Power BI)
Location: Chicago, IL or Phoenix, AZ
Contract
Must-have skills: Data, Microsoft Fabric, Data Factory, ELT/ETL, Spark, OneLake, AWS or Azure.

Responsibilities:
• Builds and maintains ELT/ETL pipelines using Microsoft Fabric tools, enabling efficient data ingestion from multiple sources.
• Applies transformations, cleanses, and enriches data so it is ready for analysis and reporting.
• Handles large datasets, optimizing storage and retrieval for performance.
• Implements automation for data processing and integration workflows, reducing manual intervention.
• Works with Platform Architects to ensure the infrastructure supports data requirements.
• Partners with report developers to ensure data is in a usable format and ready for analysis.
• Ensures code reusability and parameterization.
• Creates interactive, intuitive reports and dashboards using Microsoft Fabric's reporting tools.

Qualifications:
• Data Factory (in Fabric): designing and orchestrating data ingestion and transformation pipelines (ETL/ELT).
• Data engineering experience (Spark): using Notebooks (PySpark, Spark SQL, Scala) and Spark Job Definitions for complex data processing, cleansing, enrichment, and large-scale transformations directly on OneLake data (see the sketch after this section).
• Lakehouse items: creating and managing Lakehouse structures (Delta tables, files) as the primary landing and processing zone within OneLake.
• OneLake / ADLS Gen2: understanding storage structures, the Delta Lake format, partitioning strategies, and potentially managing Shortcuts.
• Monitoring Hubs: tracking pipeline runs and Spark job performance.
• Core responsibilities (Fabric context): building ingestion pipelines from diverse sources; implementing data cleansing and quality rules; transforming raw data into curated Delta tables within Lakehouses or Warehouses; optimizing Spark jobs and data layouts for performance and cost; managing pipeline schedules and dependencies; ensuring data security and governance principles are applied to pipelines and data structures.
• Excellent communication and collaboration skills.
• Bachelor's degree in Computer Science, Engineering, or a relevant field.
• Azure/AWS cloud certifications are a plus.
• Experience in the Manufacturing domain is a plus.
• Self-driven and able to drive projects to delivery.
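The sketch below is a minimal, hypothetical illustration of the Spark/Lakehouse work described in the qualifications: reading raw files from a Fabric Lakehouse, applying basic cleansing, and landing the result as a curated Delta table. The file path, table name, and column names are assumptions for illustration only, not part of this role's actual environment.

```python
# Minimal sketch, assuming a Fabric notebook attached to a Lakehouse.
# The path, table name, and columns below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # a session is provided automatically in Fabric notebooks

# Ingest raw CSV files from the Lakehouse "Files" area (hypothetical path).
raw = (
    spark.read
    .option("header", "true")
    .csv("Files/raw/orders/*.csv")
)

# Cleanse and enrich: enforce types, trim text, drop bad rows and duplicates.
curated = (
    raw
    .withColumn("order_id", F.col("order_id").cast("long"))
    .withColumn("order_ts", F.to_timestamp(F.col("order_ts")))
    .withColumn("customer", F.trim(F.col("customer")))
    .filter(F.col("order_id").isNotNull())
    .dropDuplicates(["order_id"])
)

# Land the result as a curated Delta table in the Lakehouse.
(
    curated.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("curated_orders")
)
```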