

MLOps Engineer (GCP Specialization) - Remote (Must Travel 3 Days a Month to Client Location)
Featured Role | Apply directly with Data Freelance Hub
This role is for an MLOps Engineer (GCP Specialization) on a long-term remote contract, requiring travel to the client location 3 days a month. Key skills include GCP expertise, Python proficiency, and experience with Terraform and ML frameworks.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 30, 2025
Project duration: Unknown
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: Denver, CO
Skills detailed: #Dataflow #PyTorch #Cloud #TensorFlow #Terraform #Data Science #Kubernetes #Storage #ETL (Extract, Transform, Load) #Deployment #PySpark #ML (Machine Learning) #BigQuery #GCP (Google Cloud Platform) #Data Engineering #Computer Science #Python #Scala #Programming #Airflow #BitBucket #Docker #Spark (Apache Spark) #MLflow #Monitoring #GitLab #AI (Artificial Intelligence)
Role description
Our client is looking for an MLOps Engineer (GCP Specialization) for a long-term remote project; the detailed requirements are below.
Job Title: MLOps Engineer (GCP Specialization)
Location: Remote
Duration: Long term
Position Overview:
The MLOps Engineer (GCP Specialization) is responsible for designing, implementing, and maintaining infrastructure and processes on Google Cloud Platform (GCP) to enable the seamless development, deployment, and monitoring of machine learning models at scale. This role bridges data science, data engineering, and infrastructure, ensuring that machine learning systems are reliable, scalable, and optimized for GCP environments.
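To make the deployment side of the role concrete, here is a minimal sketch of registering and deploying a model with the Vertex AI Python SDK (google-cloud-aiplatform). The project ID, region, artifact path, and serving image are illustrative placeholders, not details of this engagement.

```python
# Minimal sketch: upload a trained model to Vertex AI and deploy it to a
# managed endpoint. All identifiers are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")  # placeholder project/region

# Register the model artifact (assumed to already live in Cloud Storage).
model = aiplatform.Model.upload(
    display_name="demand-forecast",  # hypothetical model name
    artifact_uri="gs://my-bucket/models/demand-forecast/",
    serving_container_image_uri=(
        # A prebuilt scikit-learn serving image; pick one matching your framework.
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)

# Deploy to an autoscaling endpoint for online prediction.
endpoint = model.deploy(machine_type="n1-standard-4", min_replica_count=1)
print(endpoint.resource_name)
```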
Job Description:
• Bachelor's degree in Computer Science or equivalent, with a minimum of 10 years of relevant experience.
• Proficiency in programming languages such as Python.
• Expertise in GCP services, including Vertex AI, Google Kubernetes Engine (GKE), Cloud Run, BigQuery, Cloud Storage, Cloud Composer (managed Airflow), and Dataproc (PySpark); a Composer DAG sketch appears after this list.
• Strong experience with infrastructure-as-code (Terraform).
• Familiarity with containerization (Docker, GKE) and CI/CD pipelines (GitLab, Bitbucket).
• Knowledge of ML frameworks (TensorFlow, PyTorch, scikit-learn), MLOps tools compatible with GCP (MLflow, Kubeflow), and GenAI RAG applications; see the MLflow tracking sketch after this list.
• Understanding of data engineering concepts, including ETL pipelines with BigQuery, Dataflow, and Dataproc (PySpark); see the PySpark ETL sketch after this list.
• Excellent communication skills, including the ability to communicate effectively with internal and external customers.
• Ability to apply strong industry knowledge to relate to customer needs and resolve customer concerns, with a high level of focus and attention to detail.
• Strong work ethic and good time management, with the ability to work with diverse teams.