

Danta Technologies
MLOps Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an MLOps Engineer, remote, with a contract length of "unknown." Pay rate is $60/hr on W2 or $70/hr on C2C. Key skills include GCP, Docker, CI/CD, and ML frameworks like TensorFlow. Experience with Vertex AI is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
480
-
🗓️ - Date
November 23, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Frisco, TX
-
🧠 - Skills detailed
#Data Management #API (Application Programming Interface) #Data Quality #Data Ingestion #Storage #Datasets #GCP (Google Cloud Platform) #Cloud #ML (Machine Learning) #ML Ops (Machine Learning Operations) #TensorFlow #AI (Artificial Intelligence) #Automation #Deployment #ETL (Extract, Transform, Load) #Model Deployment #Scala #Docker
Role description
Remote
Rate - $60/hr on W2 or $70/hr on C2C
JD: We are looking for a skilled MLOps Engineer to join our team and help us build, deploy, and maintain robust and scalable machine learning systems. You will be responsible for the full lifecycle of our ML pipelines, from data ingestion to model serving. This is a hands-on role where you will design and implement automated workflows, ensure data quality, and manage model deployments in a production environment.
Responsibilities
Data and Feature Pipelines: Design, build, and manage automated data ingestion, transformation, and validation pipelines using services like Kubeflow Pipelines and Vertex AI Pipelines.
Feature Engineering: Implement and containerize feature engineering logic for diverse datasets, ensuring reusability and scalability.
Data Validation: Integrate and manage data validation processes, including leveraging advanced techniques like AI Agents and the Generative Language API to automatically detect and remediate data quality issues.
Model Training And Experimentation
Set up and maintain automated continuous training (CT) pipelines using Vertex AI Pipelines (Schedules) and Cloud Scheduler (see the sketch below).
Implement experiment tracking to log and compare model parameters, metrics, and artifacts.
Configure and execute Hyperparameter Tuning jobs using Vertex AI Training to optimize model performance.
Model Management: Establish a robust Model Versioning system to manage and store model artifacts securely in a centralized repository (Cloud Storage).
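For illustration, the sketch below shows a minimal version of this kind of scheduled continuous-training pipeline, built with the KFP SDK and submitted through the Vertex AI Python SDK. The project, region, bucket, source file, label column, and cron expression are placeholder assumptions, and the components are trimmed to stubs; a real pipeline would add data validation, experiment tracking, and model registration steps.

```python
from kfp import dsl, compiler
from google.cloud import aiplatform

# Placeholder identifiers -- substitute real project/bucket values.
PROJECT = "my-gcp-project"
REGION = "us-central1"
PIPELINE_ROOT = "gs://my-bucket/pipeline-root"


@dsl.component(base_image="python:3.10", packages_to_install=["pandas", "gcsfs"])
def ingest_data(source_uri: str, dataset: dsl.Output[dsl.Dataset]):
    """Pull raw data, drop incomplete rows, and hand a clean CSV to the next step."""
    import pandas as pd
    pd.read_csv(source_uri).dropna().to_csv(dataset.path, index=False)


@dsl.component(base_image="python:3.10", packages_to_install=["pandas", "scikit-learn"])
def train_model(dataset: dsl.Input[dsl.Dataset], model: dsl.Output[dsl.Model]):
    """Train a simple classifier; the artifact lands under the pipeline's Cloud Storage root."""
    import pickle
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    df = pd.read_csv(dataset.path)
    clf = LogisticRegression(max_iter=1000).fit(df.drop(columns=["label"]), df["label"])
    with open(model.path, "wb") as f:
        pickle.dump(clf, f)


@dsl.pipeline(name="continuous-training-pipeline", pipeline_root=PIPELINE_ROOT)
def ct_pipeline(source_uri: str = "gs://my-bucket/raw/training.csv"):
    ingest_task = ingest_data(source_uri=source_uri)
    train_model(dataset=ingest_task.outputs["dataset"])


if __name__ == "__main__":
    # Compile, submit to Vertex AI Pipelines, and attach a weekly retraining schedule.
    compiler.Compiler().compile(ct_pipeline, "ct_pipeline.json")
    aiplatform.init(project=PROJECT, location=REGION)
    job = aiplatform.PipelineJob(
        display_name="ct-pipeline",
        template_path="ct_pipeline.json",
        pipeline_root=PIPELINE_ROOT,
    )
    schedule = aiplatform.PipelineJobSchedule(pipeline_job=job, display_name="weekly-ct")
    schedule.create(cron="0 3 * * 1", max_concurrent_run_count=1)
```

Cloud Scheduler triggering the Pipelines API is an equivalent way to drive the recurring runs where the native Schedules feature is not used.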
Deployment And Serving
Containerize ML models and their dependencies using Docker and manage images with Artifact Registry.
Build and maintain CI/CD workflows for ML models, ensuring seamless and automated deployment.
Configure and manage low-latency production serving environments using Vertex AI Endpoints for real-time inference (see the sketch below).
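To ground the deployment and serving items, here is a minimal sketch of registering a containerized model and exposing it behind a Vertex AI Endpoint with the Vertex AI Python SDK. The project, artifact URI, display names, and the prebuilt TensorFlow serving image are placeholder assumptions; in practice these steps would run from the CI/CD workflow rather than interactively.

```python
from google.cloud import aiplatform

# Placeholder project/region -- substitute real values.
aiplatform.init(project="my-gcp-project", location="us-central1")

# Register the model artifact (e.g. a TensorFlow SavedModel in Cloud Storage) together with
# its serving container, pulled from Artifact Registry or a Google prebuilt prediction image.
model = aiplatform.Model.upload(
    display_name="churn-model",
    artifact_uri="gs://my-bucket/models/churn/v1",
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest",
)

# Stand up a dedicated endpoint and deploy the model behind it for online inference.
endpoint = aiplatform.Endpoint.create(display_name="churn-endpoint")
model.deploy(
    endpoint=endpoint,
    machine_type="n1-standard-4",
    min_replica_count=1,
    max_replica_count=3,
    traffic_percentage=100,
)

# Low-latency real-time prediction against the deployed endpoint.
response = endpoint.predict(instances=[[0.4, 1.2, 0.7]])
print(response.predictions)
```

Autoscaling between the replica bounds and traffic splitting across model versions are handled by the endpoint itself, which is what makes it suitable for the low-latency serving requirement above.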
Qualifications
Strong experience with Google Cloud Platform (GCP) services, specifically in the MLOps and ML domain (Vertex AI, Kubeflow, Cloud Storage, Artifact Registry).
Proven ability to design and implement end-to-end ML pipelines for data management, model training, and deployment.
Hands-on experience with containerization technologies like Docker.
Familiarity with CI/CD practices and pipeline automation.
Knowledge of ML frameworks such as TensorFlow, and experience with experiment tracking and hyperparameter tuning (see the sketch after this list).
Excellent problem-solving skills and a strong understanding of the ML lifecycle.
Experience with the Generative Language API (Gemini model) or other AI Agent integrations is a plus.
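As a concrete illustration of the tuning and tracking expectation, the sketch below submits a Vertex AI hyperparameter tuning job around a custom training container and notes the Vertex AI Experiments calls a training script might use for tracking. The image URI, metric name, parameter ranges, and experiment/run names are placeholder assumptions; the training container is expected to report the tuning metric (typically via the cloudml-hypertune helper).

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

# Placeholder project/region/bucket -- substitute real values.
aiplatform.init(project="my-gcp-project", location="us-central1",
                staging_bucket="gs://my-bucket/staging")

# Custom training job whose container reports a "val_accuracy" metric for each trial.
custom_job = aiplatform.CustomJob(
    display_name="trainer",
    worker_pool_specs=[{
        "machine_spec": {"machine_type": "n1-standard-8"},
        "replica_count": 1,
        "container_spec": {
            "image_uri": "us-central1-docker.pkg.dev/my-gcp-project/ml/trainer:latest"
        },
    }],
)

# Vertex AI searches the learning rate and batch size to maximize validation accuracy.
hp_job = aiplatform.HyperparameterTuningJob(
    display_name="trainer-hp-tuning",
    custom_job=custom_job,
    metric_spec={"val_accuracy": "maximize"},
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=1e-4, max=1e-1, scale="log"),
        "batch_size": hpt.DiscreteParameterSpec(values=[32, 64, 128], scale="linear"),
    },
    max_trial_count=20,
    parallel_trial_count=4,
)
hp_job.run()

# Experiment tracking calls belong inside the training script itself, e.g.:
# aiplatform.init(..., experiment="churn-experiments")
# aiplatform.start_run("baseline")
# aiplatform.log_params({"learning_rate": 0.01, "batch_size": 64})
# aiplatform.log_metrics({"val_accuracy": 0.91})
# aiplatform.end_run()
```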
Notes: All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.
Benefits: Danta offers all W2 employees a compensation package that is competitive in the industry. It consists of competitive pay, the option to elect healthcare insurance (dental, medical, vision), major holidays, and paid sick leave as per state law.
The rate/salary range depends on numerous factors, including qualifications, experience, and location.