

OSI Engineering
Senior DevOps with Strong Python
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior DevOps Engineer with strong Python skills, offering a 12-month contract in Sunnyvale, CA. Key requirements include experience with AWS/GCP, HIPAA compliance, CI/CD, data pipelines, and automation tools. Pay rate is $125.00 - $140.00 per hour.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
1120
-
🗓️ - Date
October 30, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Cupertino, CA
-
🧠 - Skills detailed
#Grafana #Computer Science #Cloud #Python #GCP (Google Cloud Platform) #Scala #VMware #"ETL (Extract, Transform, Load)" #Snowflake #Automation #Security #Storage #Terraform #Monitoring #DevOps #Redshift #Data Pipeline #Deployment #Data Engineering #Databricks #Kubernetes #Ansible #Airflow #Compliance #Data Integration #dbt (data build tool) #Data Quality #AWS (Amazon Web Services) #BigQuery #Prometheus #Batch #Docker
Role description
Description
Our client is seeking a Senior DevOps Engineer with strong coding skills and a deep understanding of data infrastructure, automation, and compliance. This role offers the opportunity to design and maintain secure, scalable, and efficient data platforms while driving automation and reliability across cloud environments.
You’ll collaborate with cross-functional engineering, product, and compliance teams to deliver innovative, compliant solutions in a high-growth, data-driven environment. This position is ideal for a professional who thrives at the intersection of DevOps, data engineering, and cloud architecture.
Key Responsibilities:
DevOps & Infrastructure
• Deploy and manage cloud-native infrastructure using Infrastructure-as-Code tools (Terraform, CloudFormation, Pulumi, or similar).
• Write and maintain automation tools for configuration management and environment provisioning (e.g., Ansible).
• Design and implement CI/CD pipelines to automate testing, deployment, and monitoring.
• Manage containerization and orchestration (Docker, Kubernetes).
• Implement secure, scalable, and cost-effective cloud deployments in AWS and/or GCP.
• Monitor and optimize systems using tools such as Prometheus, Grafana, or the ELK Stack.
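As an illustration of the monitoring work described above, here is a minimal Python sketch of a threshold-based alert rule in the spirit of a Prometheus/Grafana alert: it fires only when a metric stays above a threshold for several consecutive scrapes, which avoids flapping on brief spikes. The `AlertRule` class and sample values are illustrative, not part of any specific stack named in the posting.

```python
from dataclasses import dataclass

@dataclass
class AlertRule:
    """Illustrative threshold rule: fire only when the metric stays above
    `threshold` for `for_samples` consecutive scrapes (like the `for`
    clause in a Prometheus alerting rule)."""
    threshold: float
    for_samples: int

def evaluate(rule: AlertRule, samples: list[float]) -> bool:
    """Return True if the most recent samples all breach the rule."""
    if len(samples) < rule.for_samples:
        return False  # not enough history to fire yet
    recent = samples[-rule.for_samples:]
    return all(s > rule.threshold for s in recent)

# A hypothetical CPU-usage rule: alert after 3 consecutive scrapes above 80%.
cpu_rule = AlertRule(threshold=80.0, for_samples=3)
print(evaluate(cpu_rule, [70, 85, 90, 88]))  # sustained breach -> True
print(evaluate(cpu_rule, [70, 85, 60, 88]))  # spike recovered  -> False
```

In a real deployment this logic lives inside the monitoring system (Prometheus alerting rules, Grafana alert evaluations) rather than in application code; the sketch only shows the debouncing idea behind it.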
Data Engineering
• Design, build, and optimize robust, scalable data pipelines (batch and streaming).
• Develop data integration solutions across structured and unstructured data sources.
• Architect and manage cloud-based data platforms (AWS Redshift, Snowflake, BigQuery, Databricks).
• Ensure data quality, governance, and compliance—particularly with HIPAA standards.
• Build and orchestrate ETL/ELT workflows using Airflow, dbt, or custom frameworks.
• Partner with analytics and product teams to deliver clean, reliable data for business use.
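The batch-pipeline responsibilities above can be sketched as a minimal extract/transform/load flow in plain Python. Everything here is illustrative: the CSV layout, field names, and in-memory "sink" stand in for real sources and a warehouse table, and the data-quality check is a placeholder for a proper validation framework.

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: normalize types and drop rows failing a basic
    data-quality check (a real pipeline would quarantine them)."""
    out = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except (KeyError, ValueError):
            continue  # skip malformed rows
        out.append({"user_id": r["user_id"].strip(), "amount": round(amount, 2)})
    return out

def load(rows: list[dict], sink: list) -> None:
    """Load: append to a sink (a warehouse table in practice)."""
    sink.extend(rows)

# Hypothetical input: one clean row, one malformed row, one integer amount.
raw = "user_id,amount\n u1 ,19.99\nu2,not-a-number\nu3,7\n"
table: list[dict] = []
load(transform(extract(raw)), table)
print(table)  # only the valid, normalized rows reach the sink
```

In production the same shape would typically be expressed as Airflow tasks (or dbt models for the transform step) with the warehouse as the sink, but the extract/transform/load separation is the same.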
Compliance & Collaboration
• Design and implement HIPAA-compliant infrastructure and data handling workflows.
• Collaborate closely with distributed teams including developers, QA, product management, and compliance officers to ensure best practices in security and scalability.
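One common building block of HIPAA-aware data handling is pseudonymizing direct identifiers before data leaves a protected zone. The sketch below shows the idea with a keyed HMAC so values stay stable (joinable across tables) but are not reversible without the key. The field list and key are placeholders, and this is not a complete de-identification procedure; HIPAA's Safe Harbor method requires removing many more identifier types, and keys belong in a secrets manager, never in source code.

```python
import hashlib
import hmac

# Placeholder key for illustration only; in production this would come
# from a secrets manager (e.g. AWS Secrets Manager or GCP Secret Manager).
SECRET_KEY = b"replace-me"

# Assumed direct identifiers for this sketch.
PHI_FIELDS = {"name", "ssn", "email"}

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with a truncated keyed HMAC-SHA256
    digest: deterministic for joins, irreversible without the key."""
    out = {}
    for key, value in record.items():
        if key in PHI_FIELDS:
            digest = hmac.new(SECRET_KEY, str(value).encode(), hashlib.sha256)
            out[key] = digest.hexdigest()[:16]
        else:
            out[key] = value  # non-PHI fields pass through unchanged
    return out

patient = {"name": "Jane Doe", "ssn": "123-45-6789", "visit_count": 4}
safe = pseudonymize(patient)
print(safe["visit_count"])              # non-PHI passes through: 4
print(safe["name"] != patient["name"])  # identifier is masked: True
```

Because the HMAC is deterministic, the same patient hashes to the same token in every table, so analytics joins still work without exposing the underlying identifier.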
Additional Skills:
• VMware experience
• Networking and storage experience
• Prior hands-on HIPAA or compliance experience
Education:
Bachelor’s degree in Computer Science, Engineering, or a related technical field preferred.
Type: Contract
Duration: 12 months with an extension possible
Work Location: Sunnyvale, CA (Hybrid)
Pay Rate: $125.00 - $140.00 per hour (DOE)





