

MLOps Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for an MLOps Engineer with 8+ years of experience, offering a contract in Nashville, TN or Malvern, PA. Pay rate is competitive. Key skills include GCP services, Python, Terraform, and ML frameworks. An engineering degree and technical certifications are preferred.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 29, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Nashville, TN
π§ - Skills detailed
#Data Ingestion #Data Science #Dataflow #MLflow #DevOps #AI (Artificial Intelligence) #Data Privacy #GitLab #Datasets #Automation #PyTorch #ML (Machine Learning) #API (Application Programming Interface) #Python #Airflow #Microservices #Storage #Kubernetes #VPC (Virtual Private Cloud) #Deployment #Spark (Apache Spark) #Scala #GDPR (General Data Protection Regulation) #Logging #Security #Monitoring #Data Loss Prevention #ETL (Extract, Transform, Load) #IAM (Identity and Access Management) #Data Engineering #Compliance #Cloud #BigQuery #Docker #Model Deployment #PySpark #Terraform #BitBucket #Programming #GCP (Google Cloud Platform) #TensorFlow
Role description
Responsibilities:
• Model Deployment: Design and implement pipelines for deploying machine learning models into production using GCP services such as AI Platform, Vertex AI, Cloud Run, or Cloud Composer, ensuring high availability and performance.
• Infrastructure Management: Build and maintain scalable GCP-based infrastructure using services like Google Compute Engine, Google Kubernetes Engine (GKE), and Cloud Storage to support model training, deployment, and inference.
• Automation: Develop automated workflows for data ingestion, model training, validation, and deployment using GCP tools like Cloud Composer and CI/CD pipelines integrated with GitLab and Bitbucket repositories.
• Monitoring and Maintenance: Implement monitoring solutions using Google Cloud Monitoring and Logging to track model performance, data drift, and system health, and take corrective action as needed.
• Collaboration: Work closely with data scientists, data engineers, and infrastructure and DevOps teams to streamline the ML lifecycle and ensure alignment with business objectives.
• Versioning and Reproducibility: Manage versioning of datasets, models, and code using GCP tools like Artifact Registry or Cloud Storage to ensure reproducibility and traceability of machine learning experiments.
• Optimization: Optimize model performance and resource utilization on GCP, leveraging containerization with Docker and GKE and using cost-efficient resources such as preemptible VMs or Cloud TPUs/GPUs.
• Security and Compliance: Ensure ML systems comply with data privacy regulations (e.g., GDPR, CCPA) using GCP's security tools such as Cloud IAM, VPC Service Controls, and Data Loss Prevention (DLP).
• Tooling: Integrate GCP-native tools (e.g., Vertex AI, Cloud Composer) and open-source MLOps frameworks (e.g., MLflow, Kubeflow) to support the ML lifecycle.
• Enable successful project delivery and customer satisfaction.
• Drive project and technology goals in compliance with organizational objectives.
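The data-drift tracking mentioned in the Monitoring and Maintenance responsibility is often implemented as a population stability index (PSI) check between training and serving feature distributions. A minimal, dependency-free sketch (the function name, binning, and 0.2 threshold are illustrative conventions, not from the posting):

```python
import math
from collections import Counter

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    Values above roughly 0.2 are conventionally treated as
    significant drift between training and serving data.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # avoid zero-width bins

    def proportions(xs):
        # Histogram a sample into the expected sample's bins.
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in xs)
        return [counts.get(i, 0) / len(xs) for i in range(bins)]

    eps = 1e-6  # guard against log(0) for empty bins
    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log((ai + eps) / (ei + eps))
               for ei, ai in zip(e, a))
```

In a GCP setup like the one described, such a check would typically run on a schedule (e.g., a Cloud Composer DAG) over features logged at serving time, with alerts routed through Cloud Monitoring when the index crosses a threshold.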
Experience:
• 8+ years
Location:
• Nashville, TN or Malvern, PA
Educational Qualifications:
• Engineering degree – BE/ME/BTech/MTech/BSc/MSc.
• Technical certification in multiple technologies is desirable.
Skills:
Mandatory skills
Technical Skills:
• Proficiency in programming languages such as Python.
• Expertise in GCP services, including Vertex AI, Google Kubernetes Engine (GKE), Cloud Run, BigQuery, Cloud Storage, Cloud Composer, and Dataproc (PySpark), as well as managed Airflow.
• Experience with infrastructure-as-code using Terraform.
• Familiarity with containerization (Docker, GKE) and CI/CD pipelines (GitLab, Bitbucket).
• Knowledge of ML frameworks (TensorFlow, PyTorch, scikit-learn), MLOps tools compatible with GCP (MLflow, Kubeflow), and Generative AI (GenAI) RAG applications.
• Understanding of data engineering concepts, including ETL pipelines with BigQuery, Dataflow, and Dataproc (PySpark).
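The ETL concept in the last bullet can be made concrete with a tiny in-memory pipeline. This is purely illustrative (the record fields and function names are invented); in practice the extract and load stages would be BigQuery or Dataflow reads and writes:

```python
def extract(rows):
    """Extract: yield raw records (stand-in for a BigQuery read)."""
    yield from rows

def transform(records):
    """Transform: normalize names, coerce amounts, drop incomplete rows."""
    for r in records:
        if r.get("name") and r.get("amount") is not None:
            yield {"name": r["name"].strip().lower(),
                   "amount": float(r["amount"])}

def load(records):
    """Load: materialize the results (stand-in for a table write)."""
    return list(records)

warehouse = load(transform(extract([
    {"name": " Alice ", "amount": "10"},
    {"name": None, "amount": 5},       # dropped: missing name
])))
# warehouse == [{"name": "alice", "amount": 10.0}]
```

Keeping each stage a generator, as here, mirrors how Dataflow/Beam pipelines stream records between steps rather than buffering whole datasets.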
Soft Skills:
• Strong problem-solving and analytical skills.
• Excellent communication and collaboration abilities.
• Ability to work in a fast-paced, cross-functional environment.
Good-to-have skills:
• Experience with large-scale distributed ML systems on GCP, such as Vertex AI Pipelines, Kubeflow on GKE, or Feature Store.
• Exposure to Generative AI (GenAI) and Retrieval-Augmented Generation (RAG) applications and deployment strategies.
• Familiarity with GCP's model monitoring tools and techniques for detecting data drift or model degradation.
• Knowledge of microservices architecture and API development using Cloud Endpoints or Cloud Functions.
• Google Cloud Professional certifications (e.g., Professional Machine Learning Engineer, Professional Cloud Architect).