

TPI Global Solutions
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer contract position lasting 6+ months, offering competitive pay. Key skills required include SmileCDR, HL7 FHIR, REST APIs, Python, PySpark, and experience with high-volume ETL/ELT pipelines, preferably in healthcare data integration.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
April 23, 2026
Duration
More than 6 months
-
Location
Unknown
-
Contract
Fixed Term
-
Security
Unknown
-
Location detailed
United States
-
Skills detailed
#Spark (Apache Spark) #JSON (JavaScript Object Notation) #PySpark #Security #GCP (Google Cloud Platform) #Data Quality #Informatica #dbt (data build tool) #BigQuery #CMS (Centers for Medicare & Medicaid Services) #Data Engineering #Trino #Deployment #Computer Science #REST (Representational State Transfer) #DevOps #Data Pipeline #FHIR (Fast Healthcare Interoperability Resources) #Apache Iceberg #JavaScript #GitLab #ETL (Extract, Transform, Load) #Compliance #REST API #Data Integration #Python #Hadoop #Cloud #Informatica BDM (Big Data Management)
Role description
Job Title: Software Engineer - Interoperability & Data Platforms
Duration: 6+ months contract, with possible extension
Role Summary
We are seeking a Software Engineer to design, build, and support enterprise-scale healthcare interoperability and data integration solutions. The role focuses on HL7 FHIR-based APIs, SmileCDR, and high-volume ETL/ELT pipelines, supporting CMS/ONC and enterprise data initiatives.
Key Responsibilities
• Develop and support FHIR-based interoperability solutions using SmileCDR (US Core, Da Vinci, CMS APIs)
• Build and maintain REST APIs using JavaScript, OAuth2, and JSON
• Configure SmileCDR (FHIR endpoints, ingestion pipelines, workflows, mappings, validation)
• Design and implement large-scale ETL/ELT pipelines using Python and PySpark
• Develop data pipelines using Informatica BDM and integrate with Hadoop, Hive, Spark, and cloud platforms
• Work with modern data tools (dbt, Starburst/Trino, Apache Iceberg, GCP/BigQuery)
• Support CI/CD pipelines (GitLab) and cloud deployments (GCP)
• Ensure data quality, performance, security, and compliance
• Collaborate with product, architecture, and compliance teams; support production issues and root-cause analysis (RCA)
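The FHIR responsibilities above center on exchanging resources as JSON over REST. As a minimal, illustrative sketch (the resource content and the check below are hypothetical examples, not SmileCDR's API), a US Core-style Patient resource can be built and sanity-checked in Python before being POSTed to a FHIR endpoint:

```python
import json

# Hypothetical minimal Patient resource in the US Core style (illustrative only;
# real US Core conformance requires the full profile, e.g. identifiers and extensions).
patient = {
    "resourceType": "Patient",
    "id": "example-123",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1980-01-01",
}

def basic_patient_check(resource: dict) -> bool:
    """Lightweight client-side sanity check before sending to a FHIR server."""
    return (
        resource.get("resourceType") == "Patient"
        and isinstance(resource.get("name"), list)
        and len(resource["name"]) > 0
    )

# Serialize to the JSON body a FHIR REST endpoint would receive.
body = json.dumps(patient)
print(basic_patient_check(patient))  # True for this example
```

A real SmileCDR deployment would enforce full profile validation server-side; a lightweight check like this only catches obvious structural mistakes early in the pipeline.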
Required Skills
• SmileCDR and HL7 FHIR implementation experience
• REST APIs, JavaScript, OAuth2
• Python, PySpark, and ETL/ELT pipelines (high-volume data)
• Informatica BDM (PowerCenter/IDMC preferred)
• Hadoop ecosystem (Hive, Spark)
• Data platforms: dbt, Starburst/Trino, Apache Iceberg, GCP/BigQuery
• GitLab CI/CD and DevOps practices
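The ETL/ELT requirement above follows the usual extract-transform-load shape. A stdlib-only Python sketch of that flow (the field names are made up for illustration; production work of this kind would typically run the same shape on PySpark DataFrames over far larger volumes):

```python
import json

# Illustrative newline-delimited JSON source records (hypothetical fields).
raw = [
    '{"member_id": "A1", "claim_amount": "120.50"}',
    '{"member_id": "a2", "claim_amount": "75.00"}',
]

def extract(lines):
    # Parse each raw source line into a dict.
    return [json.loads(line) for line in lines]

def transform(records):
    # Normalize identifiers and cast string amounts to numbers.
    return [
        {"member_id": r["member_id"].upper(),
         "claim_amount": float(r["claim_amount"])}
        for r in records
    ]

def load(records):
    # Write to an in-memory "sink" keyed by member id.
    return {r["member_id"]: r["claim_amount"] for r in records}

result = load(transform(extract(raw)))
print(result)  # → {'A1': 120.5, 'A2': 75.0}
```

The same three-stage decomposition is what keeps large pipelines testable: each stage is a pure function that can be validated on a handful of records before being scaled out.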
Preferred
• Healthcare payer/provider experience
• CMS/ONC and BCBSA regulatory exposure
• FHIR certification or hands-on implementation experience
• Experience supporting production systems and compliance reporting
Education
Bachelor's or Master's degree in Computer Science, Engineering, or equivalent experience






