TalentBurst, an Inc 5000 company

DevOps Engineer V

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a DevOps Engineer V in Sunnyvale, CA, lasting 1 year with a hybrid schedule. Key skills include Terraform, Ansible, and cloud platforms (AWS/GCP). Experience in HIPAA-compliant environments and data engineering is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
November 3, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
πŸ“ - Location detailed
Sunnyvale, CA
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Snowflake #BigQuery #Compliance #Terraform #Infrastructure as Code (IaC) #Cloud #Monitoring #Scala #AWS (Amazon Web Services) #Batch #Data Integration #Data Engineering #Databricks #Docker #DevOps #Prometheus #Automation #Redshift #Airflow #Deployment #Kubernetes #Storage #Data Quality #GCP (Google Cloud Platform) #Grafana #Ansible #Data Pipeline #dbt (data build tool)
Role description
Position: DevOps Engineer
Location: Sunnyvale, CA
Duration: 1 year
Schedule: Hybrid - Tues-Thurs onsite, Mon/Fri remote

Job Description
We are seeking a senior DevOps Engineer with a strong coding background to join our growing team. This role is ideal for a professional with a solid foundation in data infrastructure, cloud-native architecture, and automation, combined with deep experience in compliance-driven environments such as those governed by HIPAA. You'll be responsible for designing and maintaining scalable, secure, and efficient data platforms and infrastructure, while also playing a key role in DevOps automation and cloud deployments and collaborating with cross-functional teams across different operational areas.

Key Responsibilities
(Illustrative code sketches for several of these areas appear at the end of this posting.)

DevOps & Infrastructure
• Deploy and manage cloud-native infrastructure using Infrastructure as Code (Terraform, CloudFormation, Pulumi, or similar).
• Write and maintain automation tools for configuration management and environment provisioning (e.g., Ansible).
• Design and implement CI/CD pipelines to automate testing, deployment, and monitoring.
• Oversee containerization and orchestration (Docker, Kubernetes).
• Implement secure, scalable, and cost-effective cloud deployments in AWS and/or GCP.
• Monitor systems using tools like Prometheus, Grafana, or the ELK stack.

Data Engineering
• Design, build, and optimize robust, scalable data pipelines (batch and streaming).
• Develop data integration solutions across various structured and unstructured sources.
• Architect and manage cloud-based data platforms (e.g., AWS Redshift, Snowflake, BigQuery, Databricks).
• Ensure data quality, governance, and compliance, particularly with HIPAA standards where applicable.
• Build and orchestrate ETL/ELT workflows using Airflow, dbt, or custom pipelines.
• Partner with analytics and product teams to deliver clean, reliable data for business use.

Compliance & Collaboration
• Implement HIPAA-compliant infrastructure and data-handling workflows.
• Work closely with large, distributed teams, including developers, QA, product managers, and compliance officers.

Optional
• VMware experience
• Networking and storage experience
• HIPAA experience

#TB_EN
Job #: 25-44685
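To give a flavor of the Infrastructure-as-Code work described above, here is a minimal sketch using Pulumi's Python SDK, one of the tools the posting names alongside Terraform. The resource name, tags, and bucket purpose are hypothetical illustrations, not this team's actual conventions.

```python
import pulumi
import pulumi_aws as aws

# Hypothetical example: a private, versioned S3 bucket for pipeline
# artifacts. The name and tags are placeholders.
artifact_bucket = aws.s3.Bucket(
    "pipeline-artifacts",
    acl="private",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
    tags={"environment": "dev", "managed-by": "pulumi"},
)

# Export the bucket name so other stacks or CI jobs can reference it.
pulumi.export("artifact_bucket_name", artifact_bucket.id)
```

Terraform expresses the same idea declaratively in HCL; in either tool the point is version-controlled, reviewable infrastructure rather than hand-provisioned resources.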
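For the containerization and orchestration side, day-to-day tooling often includes small operational scripts against the Kubernetes API. A sketch using the official Kubernetes Python client follows; the namespace is a hypothetical placeholder.

```python
from kubernetes import client, config


def report_unhealthy_pods(namespace: str = "data-platform") -> None:
    """Print any pod in the namespace that is not in the Running phase."""
    config.load_kube_config()  # use load_incluster_config() when run inside a pod
    core = client.CoreV1Api()
    for pod in core.list_namespaced_pod(namespace).items:
        if pod.status.phase != "Running":
            print(f"{pod.metadata.name}: {pod.status.phase}")


if __name__ == "__main__":
    report_unhealthy_pods()  # "data-platform" is a hypothetical namespace
```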
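The ETL/ELT orchestration duties map naturally onto Airflow, which the posting lists. A skeletal DAG in recent Airflow 2.x style gives the shape of that work; the DAG id, schedule, and task bodies are placeholders.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull a batch of records from a source system.
    pass


def load(**context):
    # Placeholder: write validated records to the warehouse
    # (e.g., Redshift, Snowflake, or BigQuery).
    pass


with DAG(
    dag_id="example_batch_pipeline",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task
```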
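Finally, for the monitoring responsibilities, a batch process can expose metrics for Prometheus to scrape using the official prometheus_client package; Grafana would typically chart the results. The metric names and the batch step here are hypothetical.

```python
import time

from prometheus_client import Counter, Gauge, start_http_server

# Hypothetical metrics for a batch job.
ROWS_PROCESSED = Counter(
    "pipeline_rows_processed_total",
    "Rows processed by the batch pipeline",
)
LAST_RUN_SECONDS = Gauge(
    "pipeline_last_run_duration_seconds",
    "Duration of the most recent pipeline run",
)


def do_work() -> int:
    return 0  # placeholder for the real batch step


def run_batch() -> None:
    start = time.monotonic()
    rows = do_work()
    ROWS_PROCESSED.inc(rows)
    LAST_RUN_SECONDS.set(time.monotonic() - start)


if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        run_batch()
        time.sleep(60)
```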