Advantage Technical

Software Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Software Data Engineer (Medical Data Specialist II) with a contract length of "unknown" and a pay rate of "unknown." It requires 5+ years of data engineering experience, expertise in ETL development, and familiarity with healthcare data compliance. Hybrid work location in Valencia, CA.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
November 27, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Santa Clarita, CA
-
🧠 - Skills detailed
#S3 (Amazon Simple Storage Service) #NoSQL #Programming #Metadata #Data Accuracy #Data Ingestion #Scala #Data Management #MongoDB #Compliance #AWS Glue #CRM (Customer Relationship Management) #Kafka (Apache Kafka) #PostgreSQL #ETL (Extract, Transform, Load) #Terraform #Data Modeling #Computer Science #Data Engineering #Data Pipeline #Spark (Apache Spark) #Datasets #AWS (Amazon Web Services) #Data Architecture #Cloud #Lambda (AWS Lambda) #DynamoDB #FHIR (Fast Healthcare Interoperability Resources) #Python #SQL (Structured Query Language) #MySQL #Angular #Hadoop #Agile
Role description
Medical Data Specialist II
Location: Hybrid – 3 days onsite in Valencia, CA

Position Overview
We are seeking a Medical Data Specialist II to provide technical guidance on data architecture, data models, and metadata management. This role will partner with senior IT and business leaders to define and implement data flows, support data modeling and testing, and deliver sustainable, data-driven solutions using modern technologies. The position requires strong collaboration with product managers, database teams, and business divisions to ensure accurate, compliant, and scalable data solutions.

Key Responsibilities
• Provide technical guidance on data architecture, data models, and metadata management.
• Define and implement data flows across digital products.
• Participate in data modeling and testing activities.
• Extract relevant data to solve analytical problems and ensure development teams have required datasets.
• Collaborate with business divisions to translate CRM data requirements into IT data structures and models.
• Work closely with database teams to ensure data accuracy, cleanliness, and compliance.
• Track analytics impact on business performance and monitor market trends.
• Develop sustainable, data-driven solutions using next-generation technologies.
• Build data pipeline frameworks to automate high-volume and real-time data delivery (Hadoop, streaming hubs).
• Design robust systems with long-term maintenance and support in mind.
• Build data APIs and delivery services to support operational and analytical applications.
• Transform complex analytical models into scalable, production-ready solutions.
• Continuously integrate and ship code into on-premise and cloud environments.
• Develop applications from the ground up using modern technology stacks (Scala, Spark, Postgres, AngularJS, NoSQL).
• Collaborate directly with Product Managers and customers to deliver data products in an agile environment.
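To make the pipeline-framework responsibility concrete, here is a minimal sketch of the extract/transform/load pattern the role builds on, in plain Python. All names, record shapes, and values are hypothetical illustrations, not from the posting; a production pipeline would ingest from sources like Kafka or S3 and write to a warehouse rather than returning rows.

```python
from dataclasses import dataclass
from typing import Iterable

# Hypothetical record shape for illustration only.
@dataclass
class Reading:
    patient_id: str
    value_mg_dl: float

def extract(raw_rows: Iterable[dict]) -> list[Reading]:
    """Parse raw dicts into typed records, skipping malformed rows."""
    out = []
    for row in raw_rows:
        try:
            out.append(Reading(str(row["patient_id"]), float(row["glucose"])))
        except (KeyError, ValueError, TypeError):
            continue  # a real pipeline would quarantine and log bad rows
    return out

def transform(readings: list[Reading]) -> dict[str, float]:
    """Average glucose per patient -- a stand-in for real business logic."""
    totals: dict[str, list[float]] = {}
    for r in readings:
        totals.setdefault(r.patient_id, []).append(r.value_mg_dl)
    return {pid: sum(vals) / len(vals) for pid, vals in totals.items()}

def load(aggregates: dict[str, float]) -> list[str]:
    """Stand-in for a warehouse write; returns the CSV rows it would emit."""
    return [f"{pid},{avg:.1f}" for pid, avg in sorted(aggregates.items())]

raw = [
    {"patient_id": "p1", "glucose": "110"},
    {"patient_id": "p1", "glucose": 90},
    {"patient_id": "p2", "glucose": 150},
    {"bad": "row"},  # malformed; dropped by extract()
]
rows = load(transform(extract(raw)))
print(rows)  # ['p1,100.0', 'p2,150.0']
```

The same three-stage shape carries over to Spark or Glue jobs; only the execution engine and I/O connectors change.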
Education & Experience
• Bachelor’s or Master’s degree in Computer Science, Engineering, Bioinformatics, or related field.
• 5+ years of experience in data engineering.
• Preferred: Experience working with regulated healthcare data (HIPAA, FDA 21 CFR Part 11).

Core Technical Skills
• ETL Development: Proficiency in building robust pipelines.
• Data Modeling: Strong knowledge of relational (PostgreSQL, MySQL) and NoSQL (MongoDB, DynamoDB) data models, especially clinical/device schemas (FHIR, HL7, OMOP).
• Cloud Platforms: Experience with HIPAA-compliant cloud services.
• Programming: Strong Python and SQL skills; Spark or Scala preferred.
• APIs & Integration: RESTful APIs, HL7/FHIR data ingestion, integration with EHRs and medical devices.
• AWS Tools: Python, AWS Glue, EMR, S3, Lambda, ECS.
• Additional Tools: Experience with Flink, Kafka, Spark, or Terraform.
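As a flavor of the FHIR ingestion work listed above, here is a minimal sketch that flattens a FHIR R4 Observation resource into a relational-style row using only the standard library. The resource is abridged (real Observations carry ids, timestamps, and more), and the specific values are illustrative assumptions.

```python
import json

# Abridged FHIR R4 Observation (glucose measurement); illustrative values.
observation_json = """
{
  "resourceType": "Observation",
  "status": "final",
  "code": {"coding": [{"system": "http://loinc.org", "code": "2339-0",
                       "display": "Glucose [Mass/volume] in Blood"}]},
  "subject": {"reference": "Patient/p1"},
  "valueQuantity": {"value": 110, "unit": "mg/dL"}
}
"""

def parse_observation(doc: str) -> dict:
    """Flatten an Observation into a row suitable for a relational table."""
    res = json.loads(doc)
    if res.get("resourceType") != "Observation":
        raise ValueError("expected an Observation resource")
    coding = res["code"]["coding"][0]   # first coding entry (e.g. LOINC)
    qty = res["valueQuantity"]
    return {
        "patient_ref": res["subject"]["reference"],
        "loinc_code": coding["code"],
        "value": qty["value"],
        "unit": qty["unit"],
    }

row = parse_observation(observation_json)
print(row)
```

A production ingestion service would validate resources against the FHIR schema and handle the many Observation variants (components, coded values, missing quantities) rather than assuming this single shape.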