REQ Solutions

Healthcare Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Healthcare Data Engineer, lasting 12+ months in Valencia, CA. It requires 5+ years in data engineering, proficiency in ETL, strong Python and SQL skills, and experience with regulated healthcare data. Hybrid work environment.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
496
🗓️ - Date
October 23, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Santa Clarita, CA
🧠 - Skills detailed
#Angular #PostgreSQL #MongoDB #Data Engineering #Programming #SQL (Structured Query Language) #NoSQL #Data Management #ETL (Extract, Transform, Load) #Data Pipeline #AWS (Amazon Web Services) #Agile #Kafka (Apache Kafka) #Data Architecture #Python #Scala #Terraform #MySQL #Spark (Apache Spark) #Hadoop #DynamoDB #FHIR (Fast Healthcare Interoperability Resources) #AWS Glue #AWS EMR (Amazon Elastic MapReduce) #AWS Lambda #Data Ingestion #Cloud #Computer Science #Data Modeling #Big Data #Lambda (AWS Lambda) #S3 (Amazon Simple Storage Service) #AWS S3 (Amazon Simple Storage Service)
Role description
Job Title: Sr. Data Engineer
Duration: 12+ Months (possible extension)
Location: Valencia, CA 91355 (Hybrid Role)

Responsibilities:
• Designs, builds, and manages the information or big data infrastructure.
• Develops the architecture that helps analyze and process data the way the organization needs it, and makes sure those systems perform smoothly.
• Provides technical guidance on data architecture, data models, and metadata management to senior IT and business leaders.
• Defines and implements data flows through and around digital products.
• Participates in data modeling and testing.
• Extracts relevant data to solve analytical problems; ensures development teams have the required data.
• Tracks analytics impact on the business.
• Maintains a market watch.
• Develops sustainable, data-driven solutions with current and new-gen data technologies to meet the needs of the organization and business customers.
• Builds data pipeline frameworks to automate high-volume and real-time data delivery to the Hadoop and streaming data hub.
• Builds robust systems with an eye toward long-term maintenance and support of the application.
• Builds data APIs and data delivery services that support critical operational and analytical applications for internal business operations, customers, and partners.
• Transforms complex analytical models into scalable, production-ready solutions.
• Continuously integrates and ships code into our on-premise and cloud production environments.
• Develops applications from the ground up using a modern technology stack such as Scala, Spark, Postgres, AngularJS, and NoSQL.
• Works directly with Product Managers and customers to deliver data products in a collaborative and agile environment.

Education/Experience:
• Bachelor's or Master's degree in Computer Science, Engineering, Bioinformatics, or a related field.
• 5+ years of experience in data engineering.
• Prior experience working with regulated healthcare data (e.g., HIPAA, FDA 21 CFR Part 11) preferred.
• ETL Development: Proficiency in building robust pipelines.
• Data Modeling: Strong knowledge of relational (PostgreSQL, MySQL) and NoSQL (MongoDB, DynamoDB) data models, especially clinical or device data schemas (FHIR, HL7, OMOP).
• Cloud Platforms: Experience with HIPAA-compliant cloud services.
• Programming: Strong Python and SQL skills; Spark or Scala is a plus.
• APIs & Integration: RESTful APIs, HL7/FHIR data ingestion, and integration with EHRs and medical devices.
• Experience with Python, Flink, Kafka, Spark, AWS Glue, AWS EMR, AWS S3, AWS Lambda, AWS ECS, and Terraform.