

Quantum World Technologies Inc.
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown," offering a pay rate of "unknown." Key skills include Python, SQL, Databricks, and healthcare data experience. A Bachelor's/Master's degree and 7-9 years in data-centric healthcare roles are required. Remote work location.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
February 26, 2026
Duration
Unknown
Location
Remote
Contract
Unknown
Security
Unknown
Location detailed
United States
Skills detailed
#Data Engineering #SQS (Simple Queue Service) #Cloud #Data Warehouse #PySpark #S3 (Amazon Simple Storage Service) #DevOps #Python #FHIR (Fast Healthcare Interoperability Resources) #Data Processing #Spark (Apache Spark) #Data Analysis #Data Integrity #Snowflake #SQL (Structured Query Language) #Deployment #Spark SQL #Databricks #Computer Science #Data Science #Data Extraction #GitLab #AWS (Amazon Web Services) #Agile #Big Data #ETL (Extract, Transform, Load) #Infrastructure as Code (IaC) #Terraform #Data Mapping #Datasets
Role description
Key Responsibilities:
• Data Analysis & Mapping: Collaborate with clients and implementation teams to understand data distribution requirements and perform end-to-end data mapping for client outputs.
• ETL Pipeline Management: Build and manage efficient ETL pipelines using Databricks workflows or other orchestration frameworks.
• Data Extraction: Develop data distribution extracts and scripts, ensuring they handle large volumes of healthcare data efficiently.
• Performance Optimization: Conduct query tuning and performance optimization in SQL and Spark SQL to ensure high-speed data processing.
• Stakeholder Communication: Act as a bridge between technical and non-technical stakeholders, translating business needs into technical solutions.
• Root Cause Analysis: Perform root cause analysis (RCA) on data discrepancies and script failures to ensure data integrity and system reliability.
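For candidates gauging fit: the "end-to-end data mapping for client outputs" work above boils down to translating source fields into a client's output schema. A minimal stdlib sketch, with hypothetical field names (the posting specifies no actual schema):

```python
# Hypothetical source-to-client field mapping; names are illustrative only.
SOURCE_TO_CLIENT = {
    "member_id": "patient_id",
    "dob": "date_of_birth",
    "svc_date": "service_date",
}

def map_record(source: dict) -> dict:
    """Rename source fields to the client's output schema, dropping unmapped keys."""
    return {
        client_key: source[src_key]
        for src_key, client_key in SOURCE_TO_CLIENT.items()
        if src_key in source
    }

record = {"member_id": "M001", "dob": "1980-01-01", "svc_date": "2026-02-26", "internal_flag": 1}
print(map_record(record))
```

In practice this kind of mapping would run row-wise inside a Databricks/PySpark job rather than plain Python, but the field-renaming logic is the same.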
Mandatory Technical Skills:
• Languages: Expert-level proficiency in Python and SQL (specifically Spark SQL).
• Big Data Tools: Hands-on experience with Databricks and PySpark for processing structured and semi-structured data.
• Cloud Infrastructure: Strong knowledge of AWS services (S3, SQS) or equivalent cloud providers.
• Data Warehousing: Experience with Snowflake or similar modern data warehouse platforms.
• DevOps & CI/CD: Practical experience with Terraform for Infrastructure as Code and GitLab CI/CD for automated deployments.
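The SQL query-tuning skill called out here is easy to demo in an interview setting. A stdlib sqlite3 sketch of the core idea (an index turning a full scan into an index search); table and column names are hypothetical, and production work would use Spark SQL or Snowflake instead:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE claims (claim_id INTEGER PRIMARY KEY, member_id TEXT, amount REAL)")
con.executemany(
    "INSERT INTO claims (member_id, amount) VALUES (?, ?)",
    [(f"M{i % 100:03d}", i * 1.5) for i in range(1000)],
)

def plan(sql: str) -> str:
    """Return the query planner's description of how it will run `sql`."""
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM claims WHERE member_id = 'M007'"
print(plan(query))  # before indexing: a full table scan
con.execute("CREATE INDEX idx_claims_member ON claims (member_id)")
print(plan(query))  # after indexing: an index search on member_id
```

The same discipline (inspect the plan, then restructure the query or add an index/partition) carries over to Spark SQL's `EXPLAIN`.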
Domain & Preferred Skills:
• Healthcare Experience: Must have experience working with cloud-based data services for large healthcare clients.
• FHIR Standards: Hands-on experience with FHIR (Fast Healthcare Interoperability Resources) data models.
• Data Volume: Proven track record of handling and processing high-volume datasets.
• Methodology: Strong alignment with Agile development methodologies.
• Soft Skills: Exceptional organizational skills and the ability to work independently in a remote environment.
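Hands-on FHIR work usually starts with plain JSON resources. A minimal stdlib sketch of reading a FHIR R4 Patient resource; the sample resource below is illustrative, not from the posting:

```python
import json

# Illustrative FHIR R4 Patient resource (name is a list of HumanName objects;
# `given` is a list of strings, `family` a single string).
patient_json = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
  "birthDate": "1974-12-25"
}
"""

patient = json.loads(patient_json)
assert patient["resourceType"] == "Patient"
name = patient["name"][0]
display = f'{" ".join(name["given"])} {name["family"]}'
print(display, patient["birthDate"])
```

At the data volumes this role describes, the same resources would typically arrive as NDJSON bulk exports and be parsed inside a PySpark job rather than one record at a time.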
Qualifications:
• Bachelor's/Master's degree in Computer Science, Data Science, or a related field.
• Minimum 7-9 years of experience in data-centric roles within the healthcare domain.





