TPI Global Solutions

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer contract position lasting 6+ months, offering competitive pay. Key skills required include SmileCDR, HL7 FHIR, REST APIs, Python, PySpark, and experience with high-volume ETL/ELT pipelines, preferably in healthcare data integration.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 23, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Unknown
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Spark (Apache Spark) #JSON (JavaScript Object Notation) #PySpark #Security #GCP (Google Cloud Platform) #Data Quality #Informatica #dbt (data build tool) #BigQuery #CMS (Content Management System) #Data Engineering #Trino #Deployment #Computer Science #REST (Representational State Transfer) #DevOps #Data Pipeline #FHIR (Fast Healthcare Interoperability Resources) #Apache Iceberg #JavaScript #GitLab #ETL (Extract, Transform, Load) #Compliance #REST API #Data Integration #Python #Hadoop #Cloud #Informatica BDM (Big Data Management)
Role description
Job Title: Software Engineer – Interoperability & Data Platforms
Duration: 6+ months contract with possible extension

🏢 Job Summary
We are seeking a Software Engineer to design, build, and support enterprise-scale healthcare interoperability and data integration solutions. The role focuses on HL7 FHIR-based APIs, SmileCDR, and high-volume ETL/ELT pipelines, supporting CMS/ONC and enterprise data initiatives.

🎯 Key Responsibilities
• Develop and support FHIR-based interoperability solutions using SmileCDR (US Core, Da Vinci, CMS APIs)
• Build and maintain REST APIs using JavaScript, OAuth2, and JSON
• Configure SmileCDR (FHIR endpoints, ingestion pipelines, workflows, mappings, validation)
• Design and implement large-scale ETL/ELT pipelines using Python and PySpark
• Develop data pipelines using Informatica BDM and integrate with Hadoop, Hive, Spark, and cloud platforms
• Work with modern data tools (dbt, Starburst/Trino, Apache Iceberg, GCP/BigQuery)
• Support CI/CD pipelines (GitLab) and cloud deployments (GCP)
• Ensure data quality, performance, security, and compliance
• Collaborate with product, architecture, and compliance teams; support production issues and root-cause analysis (RCA)

🧠 Required Skills
• SmileCDR and HL7 FHIR implementation experience
• REST APIs, JavaScript, OAuth2
• Python, PySpark, and high-volume ETL/ELT pipelines
• Informatica BDM (PowerCenter/IDMC preferred)
• Hadoop ecosystem (Hive, Spark)
• Data platforms: dbt, Starburst/Trino, Apache Iceberg, GCP/BigQuery
• GitLab CI/CD and DevOps practices

⭐ Preferred
• Healthcare payer/provider experience
• CMS/ONC and BCBSA regulatory exposure
• FHIR certification or hands-on implementation experience
• Experience supporting production systems and compliance reporting

🎓 Education
Bachelor's or Master's degree in Computer Science, Engineering, or equivalent experience
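The responsibilities above center on ingesting FHIR resources into ETL/ELT pipelines. As a hedged illustration for candidates (not part of the posting), the sketch below flattens a FHIR R4 Patient resource into a tabular row, the kind of transform a pipeline like this might perform before loading into a warehouse. The abbreviated sample payload and the `flatten_patient` helper are hypothetical; real SmileCDR payloads carry many more fields and US Core profile extensions.

```python
import json

# Minimal FHIR R4 Patient resource (abbreviated, hypothetical sample).
PATIENT_JSON = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"use": "official", "family": "Chalmers", "given": ["Peter", "James"]}],
  "gender": "male",
  "birthDate": "1974-12-25"
}
"""

def flatten_patient(resource: dict) -> dict:
    """Flatten a FHIR Patient into a row suitable for an ETL target table."""
    if resource.get("resourceType") != "Patient":
        raise ValueError("expected a Patient resource")
    # FHIR names are a list; prefer the entry marked use="official".
    official = next(
        (n for n in resource.get("name", []) if n.get("use") == "official"),
        {},
    )
    return {
        "patient_id": resource.get("id"),
        "family_name": official.get("family"),
        "given_names": " ".join(official.get("given", [])),
        "gender": resource.get("gender"),
        "birth_date": resource.get("birthDate"),
    }

row = flatten_patient(json.loads(PATIENT_JSON))
print(row["patient_id"], row["family_name"])  # → example Chalmers
```

In practice the same flattening logic would run as a PySpark UDF or DataFrame transform over millions of resources rather than a single dict.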