

Azure Data Engineer - Healthcare Data Expertise
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Engineer with healthcare data expertise, offering a contract of unspecified length and a competitive pay rate. Key skills include Azure Databricks, ADF, FHIR data, and modern data architecture. Requires 6+ years in data engineering.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
760
-
🗓️ - Date discovered
May 22, 2025
📅 - Project duration
Unknown
-
🏢 - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Consul #Metadata #Azure Data Factory #ADF (Azure Data Factory) #Databricks #Data Lake #Consulting #Spark (Apache Spark) #Data Engineering #Azure Databricks #Data Ingestion #Microsoft Power BI #Data Architecture #FHIR (Fast Healthcare Interoperability Resources) #ETL (Extract, Transform, Load) #Scala #Storage #Data Integration #Data Pipeline #BI (Business Intelligence) #Monitoring #Delta Lake #Data Lakehouse #Data Processing #Azure
Role description
Job Description
Seha Consulting seeks a healthcare-focused data architect with experience building modern data platforms.
• Experience with EMR data (Epic, Cerner).
• Experience with FHIR data and claims data analytics.
• Deep expertise in modern data architecture, with specific experience in Microsoft's data platform and the Delta Lake architecture.
• 6+ years of experience in data architecture and engineering.
• Required: 2+ years of hands-on experience with Azure Databricks, ADF, and Spark.
• Required: recent experience with the Microsoft Fabric platform.
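To give a feel for the FHIR work this role involves: FHIR resources are nested JSON documents, and a common analytics task is flattening them into tabular rows. The sketch below is a minimal, hypothetical example in plain Python (real FHIR Patient payloads carry many more fields and repeating elements; the field choices here are illustrative, not a prescribed schema).

```python
# Minimal sketch: flattening a FHIR Patient resource (nested JSON)
# into a flat record suitable for an analytics table.
# The sample resource below is trimmed and hypothetical.

def flatten_patient(resource: dict) -> dict:
    """Pick a few analytics-relevant fields out of a FHIR Patient."""
    name = (resource.get("name") or [{}])[0]
    return {
        "patient_id": resource.get("id"),
        "family_name": name.get("family"),
        "given_name": " ".join(name.get("given", [])),
        "gender": resource.get("gender"),
        "birth_date": resource.get("birthDate"),
    }

patient = {
    "resourceType": "Patient",
    "id": "example-123",
    "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
    "gender": "male",
    "birthDate": "1974-12-25",
}

row = flatten_patient(patient)
print(row)
```

In a Databricks pipeline this kind of flattening would typically run over a Delta table of raw FHIR bundles rather than a single dict.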
Key Responsibilities:
• Data Architecture:
• Design end-to-end data architecture leveraging Microsoft Fabric's capabilities.
• Design data flows within the Microsoft Fabric environment.
• Implement OneLake storage strategies.
• Establish Power BI integration patterns.
• Integration Design:
• Architect data integration patterns with analytics using Azure Data Factory and Microsoft Fabric.
• Implement the medallion architecture (Bronze/Silver/Gold layers).
• Configure real-time data ingestion patterns.
• Lakehouse Architecture:
• Implement modern data lakehouse architecture and discern when to use Fabric Warehouse vs. Fabric Lakehouse.
• Pipeline Development:
• Design scalable data pipelines using Azure Data Factory for ETL/ELT processes.
• Build metadata-driven pipelines.
• Combine and cleanse data from various sources.
• Create, orchestrate, and troubleshoot data pipelines.
• Implement performance tuning strategies for large-scale data processing and analytics workloads.
• Establish monitoring frameworks.
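The medallion architecture named above can be sketched in miniature. In a real Fabric or Databricks lakehouse each layer would be a Delta table; this pure-Python sketch (with made-up claims records) shows only the layering logic: Bronze holds data as ingested, Silver is cleansed and deduplicated, Gold is aggregated and analytics-ready.

```python
# Illustrative sketch of the medallion pattern (Bronze/Silver/Gold)
# using plain Python lists in place of Delta tables.

bronze = [  # raw claims records, exactly as ingested (dupes, bad rows)
    {"claim_id": "C1", "amount": "120.50", "payer": "Acme"},
    {"claim_id": "C1", "amount": "120.50", "payer": "Acme"},  # duplicate
    {"claim_id": "C2", "amount": None, "payer": "Acme"},      # bad amount
    {"claim_id": "C3", "amount": "80.00", "payer": "Zenith"},
]

# Silver: drop invalid rows, cast types, deduplicate on claim_id.
seen, silver = set(), []
for rec in bronze:
    if rec["amount"] is None or rec["claim_id"] in seen:
        continue
    seen.add(rec["claim_id"])
    silver.append({**rec, "amount": float(rec["amount"])})

# Gold: aggregate to an analytics-ready summary per payer.
gold = {}
for rec in silver:
    gold[rec["payer"]] = gold.get(rec["payer"], 0.0) + rec["amount"]

print(gold)  # {'Acme': 120.5, 'Zenith': 80.0}
```

The same Bronze→Silver→Gold steps map directly onto Spark transformations writing successive Delta tables.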
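The "metadata-driven pipelines" responsibility can also be sketched: rather than hard-coding one pipeline per source, a single generic runner is driven by a metadata table describing each feed. The source/target names and the loader stub below are hypothetical; in ADF this metadata would typically live in a control table feeding a parameterized pipeline through a ForEach activity.

```python
# Hedged sketch of a metadata-driven pipeline runner.
from typing import Callable

# Hypothetical control metadata: one row per feed.
PIPELINE_METADATA = [
    {"source": "epic_encounters", "target": "bronze.encounters", "incremental": True},
    {"source": "cerner_labs",     "target": "bronze.labs",       "incremental": False},
]

def run_feed(meta: dict, load: Callable[[str, str, bool], int]) -> dict:
    """Run one feed described by a metadata row; return a status record."""
    rows = load(meta["source"], meta["target"], meta["incremental"])
    return {"target": meta["target"], "rows_loaded": rows, "status": "ok"}

def fake_load(source: str, target: str, incremental: bool) -> int:
    # Stand-in for the real copy activity; pretend incremental loads
    # move fewer rows than full loads.
    return 10 if incremental else 100

results = [run_feed(m, fake_load) for m in PIPELINE_METADATA]
print(results)
```

Adding a new source then means adding a metadata row, not writing a new pipeline, which is the main draw of the pattern.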