

Associate Director, Data Operations
Featured Role | Apply direct with Data Freelance Hub
This role is for an Associate Director, Data Operations on a W2 contract-to-hire basis, located in Redwood City, CA. Requires 5+ years in Data Engineering, expertise in AWS/Azure, and experience in pharma/life sciences. Key skills include ETL/ELT, CI/CD, and data governance.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 18, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: San Mateo County, CA
Skills detailed: #Data Quality #Scala #MongoDB #Cloud #Automation #Big Data #Computer Science #Docker #Azure #Continuous Deployment #EC2 #ADLS (Azure Data Lake Storage) #Lambda (AWS Lambda) #Logging #AI (Artificial Intelligence) #Ansible #NoSQL #R #S3 (Amazon Simple Storage Service) #Git #Data Warehouse #ML (Machine Learning) #Storage #Kubernetes #Datadog #GDPR (General Data Protection Regulation) #Spark (Apache Spark) #Airflow #Terraform #Fivetran #SQL (Structured Query Language) #Microsoft Power BI #Metadata #Data Management #Azure DevOps #Data Engineering #Databases #Data Storage #DataOps #AWS (Amazon Web Services) #DevOps #Jenkins #Visualization #Documentation #Data Science #ETL (Extract, Transform, Load) #Monitoring #Version Control #Data Pipeline #Data Governance #Compliance #Data Processing #CRM (Customer Relationship Management) #SQL Server #Data Modeling #BI (Business Intelligence) #Kafka (Apache Kafka) #Python #GitHub #Tableau #PostgreSQL #Pandas #GitLab #Data Framework #Data Privacy #Databricks #SAS #Splunk #MS SQL (Microsoft SQL Server) #PySpark #Programming #Deployment
Role description
Associate Director, Data Operations
W2 Contract-to-Hire
Location: Redwood City, CA - Hybrid Role
Job Summary:
We are seeking an experienced Data Operations Engineer to support our Commercial and Medical Affairs team, ensuring that critical data is readily available, reliable, and secure. In this position, you will design and automate robust data pipelines that support data-driven insights and decisions in our emerging oncology biopharmaceutical company. You'll work at the intersection of data engineering and operations, managing cloud infrastructure and upholding data governance standards to maintain data quality and compliance. This role is key to empowering analysts and data scientists with timely, well-governed data, ultimately helping the organization bring life-saving cancer therapies to patients more efficiently. The ideal candidate has experience in DataOps, cloud-based data platforms, and automation, with a strong background in handling data sources such as sales performance, claims, patient data, CRM, and digital marketing metrics.
Duties and Responsibilities:
• Design, build, and automate ETL/ELT workflows to ingest, transform, and integrate data from multiple sources (sales, marketing, clinical, etc.) into our cloud data platform (a minimal pipeline sketch follows this list).
• Ensure pipelines are scalable and efficient, and minimize downtime through automation and monitoring.
• Manage and optimize cloud-based data infrastructure (AWS and Azure) for data storage and processing.
• Oversee the provisioning of resources, scheduling of jobs, and infrastructure-as-code deployments to ensure high availability and performance of data systems.
• Implement data governance best practices, including data quality checks, validation processes, and metadata management.
• Maintain data privacy and compliance with industry regulations (e.g., HIPAA, GDPR), ensuring that sensitive data is handled securely and ethically.
• Develop continuous integration/continuous deployment (CI/CD) pipelines for data workflows and analytics applications.
• Use modern DevOps tools and containerization (Docker, Kubernetes) to deploy updates to data pipelines, databases, and analytics tools rapidly and reliably.
• Work closely with data scientists, BI analysts, and business stakeholders to understand data needs and translate them into technical solutions.
• Ensure data is accessible and well-structured for analytics, machine learning models, and business intelligence dashboards.
• Set up monitoring, alerting, and logging for data pipelines and databases to proactively identify issues and improve system reliability.
• Troubleshoot and resolve data pipeline failures, data discrepancies, or performance bottlenecks in a timely manner to minimize impact on business operations.
• Create and maintain clear documentation for data pipelines, infrastructure configurations, and processes.
• Champion DataOps best practices across teams, mentoring junior engineers and guiding developers toward efficient data engineering and operational excellence.
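For illustration only, here is a minimal Python sketch of the kind of pipeline step these duties describe: extract, apply a basic data-quality gate, and load, with logging throughout. The source path, connection string, and table/column names are invented for the example and are not part of this role's actual stack.

import logging

import pandas as pd
from sqlalchemy import create_engine

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("sales_pipeline")

# Hypothetical locations; real credentials belong in a secrets manager,
# and reading s3:// paths with pandas requires the s3fs package.
SOURCE_CSV = "s3://example-bucket/raw/sales_daily.csv"
TARGET_DSN = "postgresql://etl_user:password@warehouse.example.com/analytics"

def extract() -> pd.DataFrame:
    """Pull the raw daily sales extract (illustrative source)."""
    df = pd.read_csv(SOURCE_CSV, parse_dates=["order_date"])
    log.info("extracted %d rows", len(df))
    return df

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Basic data-quality gates: required fields, uniqueness, value ranges."""
    df = df.dropna(subset=["order_id", "order_date"])
    if not df["order_id"].is_unique:
        raise ValueError("duplicate order_id values in extract")
    if (df["net_sales"] < 0).any():
        raise ValueError("negative net_sales values in extract")
    return df

def load(df: pd.DataFrame) -> None:
    """Append validated rows to a warehouse table."""
    engine = create_engine(TARGET_DSN)
    df.to_sql("fct_sales", engine, if_exists="append", index=False)
    log.info("loaded %d rows into fct_sales", len(df))

if __name__ == "__main__":
    load(validate(extract()))

In a production setting the same three steps would typically run as separate, retryable tasks under an orchestrator rather than a single script.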
Requirements and Qualifications:
• Excellent written and verbal communication skills, able to clearly explain complex data pipelines and infrastructure concepts to both technical colleagues and non-technical stakeholders.
• Strong team player who partners well with cross-functional teams on requirements and solutions, open to giving and receiving constructive feedback and sharing knowledge.
• Analytical mindset with a solution-oriented approach, capable of troubleshooting issues across the tech stack (data, code, infrastructure) and driving problems to resolution.
• Comfortable working in ambiguous environments, defining operating models, processes, roles, and responsibilities while executing and building capabilities and platforms.
• Self-motivated and accountable, with a high sense of ownership over deliverables.
• Strong experience with cloud platforms such as AWS or Azure (e.g., S3/ADLS, Lambda/Functions, EC2/VMs, Glue/Data Factory).
• Ability to architect and manage data warehouse or lakehouse solutions in the cloud (Databricks preferred).
• Proficiency in SQL for data querying and manipulation, as well as programming in Python (Pandas, PySpark, or similar data frameworks) for building pipeline logic and automation.
• Experience with containerization and orchestration tools (Docker and Kubernetes) to deploy data services and ensure reproducible environments.
• Knowledge of workflow orchestration platforms (Airflow, Airbyte, Fivetran, or similar) for scheduling and managing complex data workflows and integrations (see the DAG sketch after this list).
• Hands-on experience implementing CI/CD pipelines using tools like Jenkins, GitLab CI/CD, GitHub Actions, or Azure DevOps.
• Expertise in using infrastructure-as-code (Terraform, CloudFormation) and configuration management (Ansible, Helm) to automate deployments and environment management.
• Experience with big data processing frameworks (Spark) or streaming platforms (Kafka, Kinesis).
• Demonstrated ability to implement monitoring/logging (CloudWatch, Datadog, Splunk, or the ELK stack) for data systems.
• Familiarity with version control (Git) and collaborative development workflows.
• Experience supporting data science and AI/ML workflows, such as provisioning data for machine learning models, or knowledge of MLOps principles.
• Solid understanding of relational and NoSQL databases (e.g., PostgreSQL, SQL Server, MongoDB) and data modeling concepts.
• Bachelor's degree (or equivalent experience) in Computer Science, Data Engineering, Information Systems, or a related field.
• 5+ years of hands-on experience in Data Engineering, DevOps, or DataOps roles, with a track record of designing scalable data pipelines and infrastructure.
• Experience in pharma/life sciences.
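As a hedged illustration of the orchestration requirement above, here is a minimal Airflow DAG (using the "schedule" argument introduced in Airflow 2.4) that wires three placeholder tasks into a daily run. The DAG id and task bodies are invented for the example.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder task bodies; real tasks would import shared, tested modules.
def extract_crm(**_):
    print("pulling the CRM extract")

def transform(**_):
    print("standardizing and validating records")

def load_warehouse(**_):
    print("loading curated tables")

with DAG(
    dag_id="crm_daily_refresh",  # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    tags=["dataops"],
) as dag:
    (
        PythonOperator(task_id="extract_crm", python_callable=extract_crm)
        >> PythonOperator(task_id="transform", python_callable=transform)
        >> PythonOperator(task_id="load_warehouse", python_callable=load_warehouse)
    )

Splitting extract, transform, and load into separate tasks is what gives the orchestrator per-step retries, alerting, and backfill control.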
Preferred Qualifications:
• Experience with oncology data or commercial/medical affairs pharma data at the time of a product launch.
• Understanding of industry-specific data sources, terminology, and compliance requirements is a strong plus.
• Familiarity with regulations and standards such as HIPAA, GDPR, and GxP as they pertain to data handling and software validation in pharma.
• Ability to optimize data pipelines for analytics tools like R and SAS or visualization platforms such as Tableau and Power BI (a small sketch follows this list).
• Relevant certifications, such as AWS Certified Data Analytics or Azure Data Engineer, that validate cloud data expertise.
• Experience leading data engineering projects or initiatives, including coordinating work among team members, managing project timelines, and engaging with stakeholders to gather requirements and report progress.
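One small sketch of what optimizing pipelines for visualization platforms can mean in practice: pre-aggregating a fact table into a BI-friendly summary so Tableau or Power BI dashboards avoid expensive joins at query time. Paths and column names here are invented for the example.

import pandas as pd

# Hypothetical input: a curated fact table produced upstream by the pipeline.
orders = pd.read_parquet("warehouse/fct_sales.parquet")

# Pre-aggregate to the grain the dashboards actually query.
summary = (
    orders
    .assign(month=orders["order_date"].dt.strftime("%Y-%m"))
    .groupby(["month", "region", "product_line"], as_index=False)
    .agg(net_sales=("net_sales", "sum"), order_count=("order_id", "nunique"))
)

# Partitioned Parquet keeps dashboard refreshes incremental (requires pyarrow).
summary.to_parquet("exports/bi/sales_summary", partition_cols=["month"])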
Desired Skills and Experience
Clinical Data, Data Engineering, Information Systems, ETL/ELT, AWS, Azure, HIPAA, GDPR, CI/CD, DevOps, Docker, Kubernetes, Lambda, S3/ADLS, EC2/VMs, SQL, Databricks, Pandas, PySpark, Airflow, Airbyte, Fivetran, Terraform, CloudFormation, Azure DevOps, Ansible, Helm, Spark, Kafka, Kinesis, CloudWatch, Datadog, Splunk, ELK, NoSQL, PostgreSQL, MongoDB
Bayside Solutions, Inc. is not able to sponsor any candidates at this time. Additionally, candidates for this position must qualify as a W2 candidate.
Bayside Solutions, Inc. may collect your personal information during the position application process. Please reference Bayside Solutions, Inc.'s CCPA Privacy Policy at www.baysidesolutions.com.