

MLOps Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an MLOps Engineer (GCP Specialization) on a long-term remote contract. Key skills include MLOps, GCP expertise, Python, and Terraform. Experience with large-scale ML systems and Google Cloud certifications are preferred.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
August 20, 2025
Project duration
Unknown
Location type
Remote
Contract type
Unknown
Security clearance
Unknown
Location detailed
Colorado, United States
Skills detailed
#BigQuery #PySpark #MLflow #API (Application Programming Interface) #IAM (Identity and Access Management) #Spark (Apache Spark) #Deployment #AI (Artificial Intelligence) #ML (Machine Learning) #Python #BitBucket #Data Engineering #Docker #TensorFlow #Microservices #Monitoring #Kubernetes #VPC (Virtual Private Cloud) #Model Deployment #Cloud #Security #Compliance #Data Science #Logging #Data Ingestion #Automation #GitLab #Programming #PyTorch #Airflow #Datasets #ETL (Extract, Transform, Load) #Dataflow #Data Privacy #GCP (Google Cloud Platform) #GDPR (General Data Protection Regulation) #Terraform #Data Loss Prevention #Storage #Scala #DevOps
Role description
Hi,
Our client is looking for an MLOps Engineer (GCP Specialization) for a long-term remote project; the detailed requirements are below.
Job Title: MLOps Engineer (GCP Specialization)
Location: Remote
Duration: Long-term contract
Mandatory skills: The candidate should have strong experience in MLOps and GCP.
Position Overview:
The MLOps Engineer (GCP Specialization) is responsible for designing, implementing, and maintaining infrastructure and processes on Google Cloud Platform (GCP) to enable the seamless development, deployment, and monitoring of machine learning models at scale. This role bridges data science, data engineering, and infrastructure, ensuring that machine learning systems are reliable, scalable, and optimized for GCP environments.
Key Responsibilities
• Model Deployment: Design and implement pipelines for deploying machine learning models into production using GCP services such as Vertex AI (formerly AI Platform), Cloud Run, and Cloud Composer, ensuring high availability and performance (a minimal deployment sketch follows this list).
• Infrastructure Management: Build and maintain scalable GCP-based infrastructure using services like Google Compute Engine, Google Kubernetes Engine (GKE), and Cloud Storage to support model training, deployment, and inference.
• Automation: Develop automated workflows for data ingestion, model training, validation, and deployment using GCP tools like Cloud Composer and CI/CD pipelines integrated with GitLab and Bitbucket repositories.
• Monitoring and Maintenance: Implement monitoring solutions using Google Cloud Monitoring and Logging to track model performance, data drift, and system health, and take corrective action as needed.
• Collaboration: Work closely with data science, data engineering, infrastructure, and DevOps teams to streamline the ML lifecycle and ensure alignment with business objectives.
• Versioning and Reproducibility: Manage versioning of datasets, models, and code using GCP tools like Artifact Registry or Cloud Storage to ensure reproducibility and traceability of machine learning experiments.
• Optimization: Optimize model performance and resource utilization on GCP, leveraging containerization with Docker and GKE and using cost-efficient resources such as preemptible VMs or Cloud TPUs/GPUs.
• Security and Compliance: Ensure ML systems comply with data privacy regulations (e.g., GDPR, CCPA) using GCP's security tools such as Cloud IAM, VPC Service Controls, and Data Loss Prevention (DLP).
• Tooling: Integrate GCP-native tools (e.g., Vertex AI, Cloud Composer) and open-source MLOps frameworks (e.g., MLflow, Kubeflow) to support the ML lifecycle.
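For illustration, here is a minimal sketch of the Vertex AI deployment pattern described above, assuming the google-cloud-aiplatform Python SDK, a trained model artifact already in Cloud Storage, and a prebuilt serving container; the project, bucket, and model names are placeholders.

# Minimal sketch: register a trained model in Vertex AI and deploy it to an
# endpoint for online prediction. "my-project", "gs://my-bucket/model/", and
# the serving container image are illustrative placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Upload the model artifact from Cloud Storage to the Vertex AI Model Registry.
model = aiplatform.Model.upload(
    display_name="demo-model",
    artifact_uri="gs://my-bucket/model/",
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest",
)

# Deploy to a managed endpoint; replica bounds keep autoscaling cost-predictable.
endpoint = model.deploy(
    machine_type="n1-standard-4",
    min_replica_count=1,
    max_replica_count=2,
)

# Online prediction request against the live endpoint.
print(endpoint.predict(instances=[[0.1, 0.2, 0.3, 0.4]]))

In practice these steps would typically run inside a CI/CD or Cloud Composer pipeline rather than by hand.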
Technical Skills:
• Proficiency in programming languages such as Python.
• Expertise in GCP services, including Vertex AI, Google Kubernetes Engine (GKE), Cloud Run, BigQuery, Cloud Storage, Cloud Composer (managed Airflow), and Dataproc with PySpark (a minimal Composer DAG sketch follows this list).
• Experience with infrastructure as code using Terraform.
• Familiarity with containerization (Docker, GKE) and CI/CD pipelines using GitLab and Bitbucket.
• Knowledge of ML frameworks (TensorFlow, PyTorch, scikit-learn), MLOps tools compatible with GCP (MLflow, Kubeflow), and GenAI RAG applications.
• Understanding of data engineering concepts, including ETL pipelines built with BigQuery, Dataflow, and Dataproc (PySpark).
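As a sketch of how the Cloud Composer (managed Airflow) orchestration mentioned above might look, the following minimal Airflow 2 DAG chains ingest, train, and deploy steps; the DAG id and the empty task bodies are hypothetical placeholders.

# Minimal Cloud Composer / Airflow 2 DAG sketch: a daily ingest -> train ->
# deploy chain. Task bodies are hypothetical stubs standing in for real work.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_data():
    """Stub: land raw data in BigQuery / Cloud Storage."""


def train_model():
    """Stub: launch a Vertex AI training job."""


def deploy_model():
    """Stub: upload and deploy the new model version."""


with DAG(
    dag_id="ml_training_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_data", python_callable=ingest_data)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    deploy = PythonOperator(task_id="deploy_model", python_callable=deploy_model)

    # Linear dependency: each step runs only after the previous one succeeds.
    ingest >> train >> deploy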
Soft Skills:
• Strong problem-solving and analytical skills.
• Excellent communication and collaboration abilities.
• Ability to work in a fast-paced, cross-functional environment.
Preferred Qualifications
• Experience with large-scale distributed ML systems on GCP, such as Vertex AI Pipelines, Kubeflow on GKE, and Vertex AI Feature Store.
• Exposure to Generative AI (GenAI) and Retrieval-Augmented Generation (RAG) applications and deployment strategies.
• Familiarity with GCP's model monitoring tools and techniques for detecting data drift or model degradation (a toy drift check follows this list).
• Knowledge of microservices architecture and API development using Cloud Endpoints or Cloud Functions.
• Google Cloud Professional certifications (e.g., Professional Machine Learning Engineer, Professional Cloud Architect).
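To make the drift-detection point concrete, here is a toy illustration (not a GCP API) of the kind of statistical check a drift monitor runs: a two-sample Kolmogorov-Smirnov test comparing a training-time feature distribution against recent serving data; all data here is synthetic.

# Toy drift check: compare the training-time distribution of one numeric
# feature against recent serving traffic with a two-sample KS test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # baseline data
serving_feature = rng.normal(loc=0.3, scale=1.0, size=1_000)   # mean has shifted

statistic, p_value = stats.ks_2samp(training_feature, serving_feature)

# A small p-value means the two samples are unlikely to share a distribution,
# which a monitor would surface as a data-drift alert.
if p_value < 0.01:
    print(f"Drift suspected (KS statistic={statistic:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")

Managed offerings such as Vertex AI Model Monitoring automate comparable checks across all features and route alerts through Cloud Monitoring.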