Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with Life Science experience in San Francisco, CA, on a contract basis. It requires 10+ years of experience, expertise in Oracle and PostgreSQL, strong Python skills, and ETL development experience.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
August 21, 2025
πŸ•’ - Project duration
Unknown
🏝️ - Location type
On-site
πŸ“„ - Contract type
Unknown
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
San Francisco, CA
🧠 - Skills detailed
#Java #DevOps #Storage #Data Engineering #ETL (Extract, Transform, Load) #GCP (Google Cloud Platform) #Data Modeling #Oracle #AWS (Amazon Web Services) #Migration #Python #Databases #Visualization #Programming #Data Integrity #Cloud #Deployment #PostgreSQL #Docker #Security #Data Migration #Scala #Data Pipeline #Kubernetes
Role description
Role: Data Engineer with Life Science experience
Location: San Francisco, CA (4 days per week in office)
Contract role
Experience: 10+ years

What's in it for you: the opportunity to lead technical teams, guide architecture decisions, and influence migration strategies to AWS.

Job Description

Key Responsibilities
● Data Migration and Integration:
○ Migrate assay and compound data from legacy systems into other architectures, ensuring data integrity and security.
○ Implement robust ETL pipelines to process and integrate data efficiently from multiple sources.
● Database Development:
○ Design, develop, and maintain scalable, high-performing relational databases (e.g., Oracle, PostgreSQL) to host and manage experimental and prediction data.
○ Optimize database structures for querying, storage, and scalability.
● Visualization Tool Integration:
○ Collaborate with teams to integrate data into existing visualization tools and frameworks.
○ Develop and enhance plugins for visualization tools to deliver interactive, meaningful insights for scientific teams.
● Collaboration and Communication:
○ Work closely with pRED teams in Europe, ensuring alignment on goals and timelines.
○ Collaborate with scientists, data engineers, and software developers to define requirements and deliver user-centric solutions.

Must-Have Qualifications
● Database Expertise: Proven experience with relational databases such as Oracle and PostgreSQL, including design, optimization, and migration.
● Programming Skills: Strong proficiency in Python, especially for backend development and data handling; Java is an added advantage.
● ETL Development: Expertise in building Extract, Transform, Load (ETL) processes to process and migrate assay/compound data effectively (see the illustrative sketch at the end of this description).
● Integration Experience: Familiarity with integrating data into visualization platforms and crafting modular plugins or interfaces.
● Team Collaboration: A demonstrated ability to work with globally distributed teams across time zones to deliver complex projects.

Nice-to-Have Skills
● Data Engineering Knowledge: Skills in building data pipelines, data modeling, and working with modern cloud platforms such as AWS or GCP.
● Visualization Expertise: Experience with visualization tools such as Vortex or D360.
● Domain Knowledge: Familiarity with laboratory workflows, assay data, compound data, or related pharmaceutical/life sciences data.
● Plugin Development Tools: Experience with APIs or SDKs for creating custom visualization tool integrations.
● DevOps Practices: Familiarity with Docker, Kubernetes, or CI/CD pipelines to streamline development and deployment.

Key Attributes
● Strong analytical and problem-solving skills with attention to detail.
● Excellent communication skills to bridge gaps between scientific and technical teams.
● Highly adaptable and able to manage competing priorities in a dynamic, fast-paced environment.
● Self-motivated and able to drive progress independently while collaborating with globally distributed teams.
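For illustration only, below is a minimal Python sketch of the kind of Oracle-to-PostgreSQL ETL work this role describes. The posting does not specify tooling or schemas; the connection URLs, table names (legacy_assay_results, assay_results), and column names here are hypothetical, and the use of pandas/SQLAlchemy is an assumption, not a stated requirement.

```python
"""Illustrative ETL sketch: copy assay records from a legacy Oracle table
into PostgreSQL. All identifiers and connection details are hypothetical."""
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection URLs; real credentials would come from a secrets store.
ORACLE_URL = "oracle+oracledb://user:password@legacy-host:1521/?service_name=LIMS"
POSTGRES_URL = "postgresql+psycopg2://user:password@new-host:5432/assay_db"

def extract(oracle_engine) -> pd.DataFrame:
    # Pull legacy assay data; a real pipeline would read in chunks or incrementally.
    query = ("SELECT assay_id, compound_id, readout, measured_at "
             "FROM legacy_assay_results")
    return pd.read_sql(query, oracle_engine)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Basic integrity checks: drop rows missing keys, normalize the timestamp type.
    df = df.dropna(subset=["assay_id", "compound_id"])
    df["measured_at"] = pd.to_datetime(df["measured_at"])
    return df

def load(df: pd.DataFrame, pg_engine) -> None:
    # Append into the target PostgreSQL table; the schema is assumed to exist.
    df.to_sql("assay_results", pg_engine, if_exists="append", index=False)

if __name__ == "__main__":
    oracle_engine = create_engine(ORACLE_URL)
    pg_engine = create_engine(POSTGRES_URL)
    load(transform(extract(oracle_engine)), pg_engine)
```

In practice, a migration at this scale would add chunked or incremental extraction, validation reporting, and orchestration; this sketch only shows the extract/transform/load shape of the task.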