

Donyati
MLOps Architect – Google Cloud Platform (GCP)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Principal MLOps Architect with 10–15 years of experience, specializing in GCP. The contract is for an unspecified length, with a competitive pay rate. Key skills include GCP migration, MLOps frameworks, and AI/ML production systems.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 5, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#ML (Machine Learning) #Data Governance #Batch #Data Ingestion #Automation #AI (Artificial Intelligence) #Scala #Langchain #Strategy #IAM (Identity and Access Management) #Automated Testing #Model Deployment #GCP (Google Cloud Platform) #Airflow #VPC (Virtual Private Cloud) #Model Validation #Dataflow #BigQuery #PySpark #Jenkins #Python #GitHub #TensorFlow #Migration #Cloud #Data Science #Prometheus #Terraform #Spark (Apache Spark) #Security #Deployment #Leadership #PyTorch #Grafana #Monitoring #AWS (Amazon Web Services) #Observability #MLflow #Compliance #Azure #Data Engineering #Kubernetes #Databases #Microservices
Role description
Overview
We are looking for a Principal MLOps Architect with deep expertise in Google Cloud Platform (GCP) to lead the design, migration, and operationalization of enterprise AI/ML workloads. This role is responsible for defining and implementing the end-to-end MLOps architecture — from data ingestion to model deployment and monitoring — enabling scalable, secure, and automated AI delivery on GCP.
The ideal candidate brings a strong background in cloud architecture (AWS/Azure to GCP migration), MLOps strategy and tooling, and AI/ML production systems. This person will partner with data scientists, cloud engineers, and business stakeholders to modernize ML infrastructure, streamline delivery pipelines, and accelerate AI adoption across the enterprise.
Responsibilities:
• Architect and lead migration of AI/ML platforms and pipelines from AWS/Azure to GCP, ensuring performance, compliance, and cost efficiency.
• Define and implement MLOps architecture frameworks leveraging Vertex AI, GKE, Cloud Build, BigQuery, and Dataflow for training, deployment, and continuous integration of models.
• Establish standardized CI/CD pipelines for ML using GitHub Actions, Jenkins, ArgoCD, and Terraform, integrating model validation, automated testing, and deployment gating.
• Design scalable data ingestion and feature engineering pipelines using BigQuery, Dataflow, Pub/Sub, and Dataproc, supporting batch and streaming ML use cases.
• Develop and manage model registries, feature stores, and artifact repositories to ensure reproducibility and governance across ML lifecycles.
• Integrate observability and monitoring through Vertex AI Model Monitoring, Prometheus, Grafana, and OpenTelemetry, including drift detection and performance metrics.
• Enforce security and compliance best practices (IAM, KMS, VPC Service Controls, Secrets Manager) and align with Zero Trust principles.
• Collaborate with AI and Data Science teams to productionize models and LLMs, optimizing for scalability, reliability, and maintainability.
• Research and introduce Generative AI (LLM/RAG) patterns using Vertex AI Search, LangChain, and vector databases (FAISS, Pinecone) for enterprise adoption.
• Provide technical leadership, architectural guidance, and mentorship to engineering teams in adopting cloud-native and MLOps practices.
Required Skills & Experience
• 10–15 years of experience in cloud architecture, data engineering, or AI/ML engineering with 5+ years on GCP.
• Strong proficiency in Vertex AI, GKE, BigQuery, Dataflow, Cloud Run, and Cloud Build.
• Demonstrated success in migrating data and ML workloads from AWS or Azure to GCP.
• Deep understanding of MLOps frameworks (MLflow, Vertex AI Pipelines, Kubeflow, Airflow) and CI/CD automation for ML workloads.
• Expertise in Python, PySpark, TensorFlow, PyTorch, and integrating ML models into production pipelines.
• Solid grasp of data governance, security, and compliance frameworks (CIS, NIST, SOC 2, HIPAA).
• Proven experience designing scalable, containerized ML architectures using Kubernetes and microservices principles.
• Strong communication and stakeholder-management skills — able to translate business objectives into actionable MLOps architecture blueprints.
Preferred Qualifications
• GCP Professional Machine Learning Engineer or Professional Cloud Architect certification.
• Experience building LLM and RAG pipelines using Vertex AI, LangChain, and vector databases.
• Familiarity with multi-cloud and hybrid AI environments leveraging Anthos.
• Exposure to AI platform modernization, model observability, and cost optimization in enterprise settings.
