

Omiz Staffing Solutions (OSS)
Sr. Data Engineer / Data Scientist III
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Data Engineer / Data Scientist III in West Point, PA, with a 12-month contract at $78-$88/hr. Requires 7-8 years of experience, strong Python and AWS skills, and expertise in data engineering within life sciences.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
704
-
🗓️ - Date
October 4, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Lansdale, PA
-
🧠 - Skills detailed
#API (Application Programming Interface) #S3 (Amazon Simple Storage Service) #Data Pipeline #GIT #Cloud #Python #Dataiku #Data Engineering #SNS (Simple Notification Service) #JSON (JavaScript Object Notation) #R #YAML (YAML Ain't Markup Language) #Databases #System Testing #GitHub #SQL (Structured Query Language) #Computer Science #Logging #RDS (Amazon Relational Database Service) #Lambda (AWS Lambda) #AWS (Amazon Web Services) #ECR (Elastic Container Registry) #Data Wrangling #Pandas #Data Science #ETL (Extract, Transform, Load) #SQS (Simple Queue Service)
Role description
🚀 Now Hiring: Sr. Data Engineer / Data Scientist III (Senior) 🚀
📍 Location: West Point, PA (Hybrid – 2–3 days onsite per week)
💼 Duration: 12 months (with possible extension)
💲 Pay: $78–$88/hr (W2)
🔢 Open Positions: 2
Are you passionate about data engineering in the life sciences and excited about enabling cutting-edge drug discovery and development? We are seeking an experienced Senior Data Engineer to join our Digital Sciences team within a leading pharmaceutical R&D organization.
This role offers the opportunity to work hands-on with scientists to build data workflows, automate processes, and create pipelines that accelerate research across small molecules, biologics, vaccines, and more.
What You’ll Do
• Design, develop, and maintain ETL processes, data workflows, and pipelines in Python (a brief sketch of this kind of work follows this list).
• Collaborate with scientists to understand experimental data and help automate electronic lab notebooks.
• Partner with IT teams to integrate and deploy solutions.
• Manage projects, deliver estimates, and present progress to stakeholders.
• Contribute to predictive tools that support drug discovery and development.
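For illustration only (not code from this team), here is a minimal sketch of the kind of Python ETL work described above. The spreadsheet name, sheet name, column names, and database table are hypothetical placeholders, and SQLite stands in for whichever relational database the team actually uses:

```python
# Minimal ETL sketch: extract an exported spreadsheet of experimental
# results, tidy the columns, and load the rows into a relational table.
# File, sheet, column, and table names below are hypothetical.
import sqlite3  # stand-in for any SQL database reachable from Python

import pandas as pd

def run_pipeline(xlsx_path: str, db_path: str) -> int:
    # Extract: pandas uses openpyxl under the hood for .xlsx files
    df = pd.read_excel(xlsx_path, sheet_name="assay_results", engine="openpyxl")

    # Transform: normalize column names and drop rows with no measurement
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.dropna(subset=["measurement"])

    # Load: append the cleaned rows into a relational table
    with sqlite3.connect(db_path) as conn:
        df.to_sql("assay_results", conn, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    rows = run_pipeline("experiment_export.xlsx", "research.db")
    print(f"Loaded {rows} rows")
```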
Must-Have Skills
• 7–8 years of relevant experience with a degree in Computer Science, Chemistry, or a related field.
• Strong expertise in Python (3.9+) with packages such as Boto3, Pandas, pyodbc, and openpyxl.
• Experience with AWS cloud services (Lambda, S3, CloudFormation, RDS, ECR); a brief sketch follows this list.
• Data wrangling, ingestion, modeling, and relational databases (SQL).
• Proficiency with Git/GitHub, GitHub Actions, CI/CD, unit/system testing.
• Knowledge of file formats (XLSX, YAML, JSON, CSV, TSV).
• Excellent communication skills and the ability to work independently or in a team.
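To give a concrete feel for the Boto3/Lambda/S3 combination listed above, here is a hedged, minimal sketch of a Lambda handler that reads a CSV object from S3 and summarizes it with Pandas. It assumes the standard S3 "ObjectCreated" trigger event; the summary it returns is illustrative only:

```python
# Illustrative only: a minimal AWS Lambda handler that pulls a CSV from S3
# with Boto3 and summarizes it with Pandas.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

def handler(event, context):
    # The S3 trigger delivers bucket and object key inside event["Records"]
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Download the object and load it into a DataFrame
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    df = pd.read_csv(io.BytesIO(body))

    # Return a small JSON-serializable summary for downstream steps
    return {"rows": int(len(df)), "columns": list(df.columns)}
```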
Nice-to-Have Skills
• Additional AWS services (SQS, SNS, EventBridge, API Gateway).
• Python packages such as Cerberus, PyYAML, and logging; type hints and regular expressions (see the sketch after this list).
• Familiarity with Dataiku, Trifacta, or similar tools.
• Prior IT or data engineering experience within pharma/biotech R&D.
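Purely as an illustration of the Cerberus/PyYAML combination mentioned above (the schema and field names here are invented), a config loader might validate a YAML pipeline config before running it:

```python
# Sketch only: validate a YAML pipeline config with Cerberus before use.
# The schema and field names are hypothetical.
import yaml  # PyYAML
from cerberus import Validator

SCHEMA = {
    "source_bucket": {"type": "string", "required": True},
    "file_format": {"type": "string", "allowed": ["csv", "tsv", "xlsx", "json"]},
    "max_rows": {"type": "integer", "min": 1, "default": 100_000},
}

def load_config(path: str) -> dict:
    with open(path, "r", encoding="utf-8") as fh:
        raw = yaml.safe_load(fh)
    v = Validator(SCHEMA)
    if not v.validate(raw):
        # v.errors maps each offending field to its validation messages
        raise ValueError(f"Invalid config: {v.errors}")
    return v.normalized(raw)  # applies defaults such as max_rows
```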
Why Join
This is not a typical IT role. You’ll be embedded in a digital sciences innovation hub, working closely with scientists and digital leaders to solve real-world challenges in drug development. If you enjoy building data pipelines, working with experimental data, and driving scientific discovery through technology, this opportunity is for you.
💡 Apply today to be part of a team that is transforming the future of medicine through data-driven innovation!