

High 5 Games
DevOps Engineer - ML & Data Infrastructure (Remote - US)
Featured Role | Apply directly with Data Freelance Hub
This role is for a DevOps Engineer - ML & Data Infrastructure (Remote - US) with a contract length of "Unknown" and a pay rate of "Unknown." Requires 3+ years in DevOps, expertise in GCP, Terraform, CI/CD, and familiarity with gaming or AI systems.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
October 30, 2025
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
United States
-
Skills detailed
#Cloud #Python #GCP (Google Cloud Platform) #Scala #Observability #Scripting #Automation #Security #Terraform #Monitoring #DevOps #Data Governance #Data Pipeline #ML Ops (Machine Learning Operations) #AI (Artificial Intelligence) #Deployment #Data Science #Kubernetes #Groovy #Ansible #Logging #Compliance #Dataflow #Langchain #BigQuery #Batch #Datadog #ML (Machine Learning) #Docker #Jenkins
Role description
We're looking for a DevOps Engineer to help design, build, and optimize the cloud infrastructure powering our machine learning operations. You'll play a key role in scaling AI models from research to production, ensuring smooth deployments, real-time monitoring, and rock-solid reliability across our Google Cloud Platform (GCP) environment.
You'll work hand-in-hand with data scientists, ML engineers, and other DevOps experts to automate workflows, enhance performance, and keep our AI systems running seamlessly for millions of players worldwide.
What You'll Do
• Manage, configure, and automate cloud infrastructure using tools such as Terraform and Ansible.
• Implement CI/CD pipelines for ML models and data workflows, focusing on automation, versioning, rollback, and monitoring with tools like Vertex AI, Jenkins, and Datadog (a deploy step of this kind is sketched after this list).
• Build and maintain scalable data and feature pipelines for both real-time and batch processing using BigQuery, Bigtable, Dataflow, Composer, Pub/Sub, and Cloud Run.
• Set up infrastructure for model monitoring and observability, detecting drift, bias, and performance issues using Vertex AI Model Monitoring and custom dashboards.
• Optimize inference performance, improving latency and cost-efficiency of AI workloads.
• Ensure overall system reliability, scalability, and performance across the ML/Data platform.
• Define and implement infrastructure best practices for deployment, monitoring, logging, and security.
• Troubleshoot complex issues affecting ML/Data pipelines and production systems.
• Ensure compliance with data governance, security, and regulatory standards, especially for real-money gaming environments.
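To make the CI/CD item above concrete, here is a minimal sketch of what an automated model-deploy step could look like using the google-cloud-aiplatform Python SDK. The project ID, region, artifact path, endpoint name, and serving image are illustrative placeholders rather than anything specified in the posting, and a real pipeline would wrap a step like this in Jenkins stages with tests, approvals, and rollback logic.

```python
"""Hypothetical Vertex AI deploy step for a CI/CD pipeline.
All names, paths, and images are placeholders for illustration."""
from google.cloud import aiplatform

PROJECT_ID = "example-gcp-project"                       # placeholder
REGION = "us-east1"                                      # placeholder
ARTIFACT_URI = "gs://example-bucket/models/recsys/v42/"  # placeholder
SERVING_IMAGE = "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"  # placeholder


def deploy_model_version() -> aiplatform.Endpoint:
    """Upload a trained model artifact and roll it out behind an endpoint,
    sending only a slice of traffic to the new version so a rollback is a
    simple traffic shift rather than a redeploy."""
    aiplatform.init(project=PROJECT_ID, location=REGION)

    model = aiplatform.Model.upload(
        display_name="recsys",
        artifact_uri=ARTIFACT_URI,
        serving_container_image_uri=SERVING_IMAGE,
    )

    # Reuse the endpoint if it already exists; otherwise create it.
    existing = aiplatform.Endpoint.list(filter='display_name="recsys-endpoint"')
    endpoint = existing[0] if existing else aiplatform.Endpoint.create(
        display_name="recsys-endpoint"
    )

    # Canary-style rollout: 10% of traffic goes to the new model version.
    model.deploy(
        endpoint=endpoint,
        machine_type="n1-standard-4",
        traffic_percentage=10,
    )
    return endpoint


if __name__ == "__main__":
    deploy_model_version()
```

In a setup like this, the monitoring items above would typically be covered by alerting on the endpoint's latency and error metrics in Datadog or Cloud Monitoring.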
What We're Looking For
• 3+ years of experience as a DevOps Engineer, ideally with a focus on ML and Data infrastructure.
• Strong hands-on experience with Google Cloud Platform (GCP), especially BigQuery, Dataflow, Vertex AI, Cloud Run, and Pub/Sub.
• Proficiency with Terraform (bonus points for Ansible).
• Solid grasp of containerization (Docker, Kubernetes) and orchestration platforms like GKE.
• Experience building and maintaining CI/CD pipelines, preferably with Jenkins.
• Strong understanding of monitoring and logging best practices for cloud and data systems (a toy drift check is sketched after this list).
• Scripting experience with Python, Groovy, or Shell.
• Familiarity with AI orchestration frameworks (LangGraph or LangChain) is a plus.
• Bonus points if you've worked in gaming, real-time fraud detection, or AI-driven personalization systems.
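On the monitoring side, the drift detection mentioned earlier can be as simple as comparing feature distributions between training data and recent serving traffic. Below is a small, self-contained Python sketch of a Population Stability Index (PSI) check of the kind that might feed a custom dashboard or alert; the data, thresholds, and function name are illustrative assumptions, not something prescribed by the role or by Vertex AI Model Monitoring.

```python
"""Toy feature-drift check: Population Stability Index (PSI) between a
training baseline and recent serving traffic. Illustrative only."""
import numpy as np


def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI over quantile bins of the baseline; larger values mean more drift.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 worth watching, > 0.25 investigate."""
    edges = np.quantile(baseline, np.linspace(0.0, 1.0, bins + 1))
    # Clip serving values into the baseline's range so every point lands in a bin.
    current = np.clip(current, edges[0], edges[-1])

    base_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_frac = np.histogram(current, bins=edges)[0] / len(current)

    # Guard against empty bins before taking logs.
    base_frac = np.clip(base_frac, 1e-6, None)
    curr_frac = np.clip(curr_frac, 1e-6, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_feature = rng.normal(0.0, 1.0, 50_000)   # baseline feature distribution
    live_feature = rng.normal(0.3, 1.2, 5_000)     # shifted serving traffic
    print(f"PSI = {population_stability_index(train_feature, live_feature):.3f}")
```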






