Global Business Ser. 4u

Sr Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr Data Engineer with an unspecified contract length and pay rate. It requires extensive healthcare data engineering experience, skills in ADF, Snowflake, and ETL processes, and familiarity with the HL7 and FHIR standards. Remote work is available.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
October 29, 2025
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Pipeline #Oracle #Data Warehouse #SQL (Structured Query Language) #Data Governance #Data Engineering #Compliance #GitHub #Informatica #Data Privacy #Spark (Apache Spark) #Scripting #EDW (Enterprise Data Warehouse) #Informatica PowerCenter #Snowflake #FHIR (Fast Healthcare Interoperability Resources) #Azure #Datasets #Python #DevOps #SSIS (SQL Server Integration Services) #Azure Data Factory #Security #Azure DevOps #REST (Representational State Transfer) #SQL Server #ETL (Extract, Transform, Load) #Databricks #Visualization #ADF (Azure Data Factory)
Role description
Job Summary (Sr. Data Engineer - Healthcare, Remote, Contract)
• Translate healthcare business requirements into enterprise systems, applications, and process designs for large, complex data solutions.
• Work extensively with large healthcare datasets, ensuring data privacy, security, and compliance with standards such as HL7 and FHIR.
• Design, build, and optimize data pipelines and ETL processes using tools such as Azure Data Factory (ADF), Databricks, Informatica PowerCenter, and SSIS (see the illustrative sketch after this description).
• Develop and manage enterprise data warehouses, focusing on technologies including Snowflake, SQL Server, and Oracle.
• Write efficient stored procedures using PL/SQL, T-SQL, or Snowflake SQL.
• Collaborate on data governance, analytics, visualization, and information modeling within the organization's EDW (Enterprise Data Warehouse) initiatives.
• Use source control systems (Azure DevOps, GitHub, SVN) to manage code and data assets.
• Troubleshoot, optimize, and document data processes for both structured and unstructured data.
• Analyze project requirements and develop detailed ETL specifications.
• Adapt to evolving technologies and changing business needs in the healthcare domain.
• Preferred: experience with Spark, Azure Data Fabric, PowerShell/Python scripting, designing APIs (REST, RPC), and familiarity with Medicaid/Medicare data.
• Ideal candidates will have strong expertise in ADF and Snowflake; Azure certifications are a plus.
Note: Recent, relevant healthcare data engineering experience is required. Only independent visa holders are eligible. Three rounds of technical interviews will be conducted.
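For context on the day-to-day work described above, here is a minimal, illustrative Python sketch of one such pipeline step: pulling Patient resources from a FHIR REST endpoint and staging the raw JSON into Snowflake for downstream transformation. The endpoint URL, table name, and credentials are hypothetical placeholders, not the client's actual systems.

```python
"""Illustrative sketch only: fetch FHIR Patient resources over REST and
stage them into a Snowflake table. All names below are hypothetical."""
import json

import requests
import snowflake.connector

FHIR_BASE = "https://fhir.example.org/r4"   # hypothetical FHIR server
TARGET_TABLE = "STAGE.PATIENT_RAW"          # hypothetical staging table


def fetch_patients(count: int = 50) -> list[dict]:
    """Fetch one page of Patient resources via the FHIR REST search API."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient",
        params={"_count": count},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()  # search results come back as a FHIR Bundle
    return [entry["resource"] for entry in bundle.get("entry", [])]


def load_to_snowflake(patients: list[dict]) -> None:
    """Stage raw JSON into a VARIANT column; transform downstream in SQL."""
    conn = snowflake.connector.connect(
        account="my_account",  # hypothetical credentials
        user="etl_user",
        password="...",
        warehouse="ETL_WH",
        database="EDW",
    )
    try:
        cur = conn.cursor()
        cur.executemany(
            f"INSERT INTO {TARGET_TABLE} (raw) SELECT PARSE_JSON(%s)",
            [(json.dumps(p),) for p in patients],
        )
        conn.commit()
    finally:
        conn.close()


if __name__ == "__main__":
    load_to_snowflake(fetch_patients())
```

Staging the untouched JSON into a VARIANT column and transforming it later in SQL mirrors the ELT pattern commonly used with Snowflake-based enterprise data warehouses such as the one this role supports.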