

Natsoft
MLOps Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an MLOps Engineer in the SF Bay Area, offering a hybrid work environment for more than 6 months at a competitive pay rate. Key skills include Python, SQL, and experience with cloud platforms, MLOps frameworks, and containerization.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: January 6, 2026
Duration: More than 6 months
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: San Francisco Bay Area
Skills detailed: #Docker #Spark (Apache Spark) #Python #DevOps #Deployment #Scala #Cloud #Observability #AWS (Amazon Web Services) #ML (Machine Learning) #ML Ops (Machine Learning Operations) #AI (Artificial Intelligence) #TensorFlow #Documentation #SQL (Structured Query Language) #Kubernetes #MLflow #Libraries #GCP (Google Cloud Platform) #Data Science #Compliance #Data Engineering #Airflow #PyTorch #Azure #Automation #Data Exploration #Monitoring
Role description
Job Title: Data Science & ML Ops Engineer
Location: SF Bay Area - Primary: Concord, CA / Secondary: Phoenix, AZ
Overview
The Tachyon Predictive AI team is seeking a hybrid Data Science & MLOps Engineer to drive the full lifecycle of machine learning solutions, from data exploration and model development to scalable deployment and monitoring. This role bridges the gap between data science model development and production-grade MLOps engineering.
Key Responsibilities
1. Develop predictive models using structured/unstructured data across 10+ business lines, driving fraud reduction, operational efficiency, and customer insights.
2. Leverage AutoML tools (e.g., Vertex AI AutoML, H2O Driverless AI) for low-code/no-code model development, documentation automation, and rapid deployment.
3. Develop and maintain ML pipelines using tools like MLflow, Kubeflow, or Vertex AI.
4. Automate model training, testing, deployment, and monitoring in cloud environments (e.g., GCP, AWS, Azure).
5. Implement CI/CD workflows for model lifecycle management, including versioning, monitoring, and retraining.
6. Monitor model performance using observability tools and ensure compliance with model governance frameworks (MRM, documentation, explainability).
7. Collaborate with engineering teams to provision containerized environments and support model scoring via low-latency APIs.
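The monitoring-and-retraining responsibility above can be sketched in miniature. This is an illustrative, stdlib-only toy (the role would use real observability tooling); the function name, threshold, and score values are all hypothetical:

```python
import statistics

def score_drift_detected(baseline, current, threshold=0.2):
    """Flag a model for retraining review when the mean of live
    prediction scores drifts from the validation baseline by more
    than `threshold` baseline standard deviations. A toy stand-in
    for a production drift monitor."""
    base_mean = statistics.mean(baseline)
    base_sd = statistics.stdev(baseline)
    shift = abs(statistics.mean(current) - base_mean) / base_sd
    return shift > threshold

# Hypothetical scores: validation baseline vs. recent production traffic.
baseline = [0.62, 0.58, 0.65, 0.61, 0.60, 0.63]
current = [0.75, 0.78, 0.74, 0.77, 0.76, 0.79]
print(score_drift_detected(baseline, current))  # True: mean shifted well past threshold
```

In practice this check would run on a schedule (e.g., an Airflow task) and emit an alert or kick off a retraining pipeline rather than print a boolean.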
Qualifications
1. Strong proficiency in Python, SQL, and ML libraries (e.g., scikit-learn, XGBoost, TensorFlow, PyTorch).
2. Experience with cloud platforms and containerization (Docker, Kubernetes).
3. Familiarity with data engineering tools (e.g., Airflow, Spark) and MLOps frameworks.
4. Solid understanding of software engineering principles and DevOps practices.
5. Ability to communicate complex technical concepts to non-technical stakeholders.
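The Python + SQL pairing in the qualifications typically means aggregating raw records into model features. A minimal sketch using Python's built-in sqlite3 as a stand-in for a real warehouse; the table name, columns, and rows are hypothetical:

```python
import sqlite3

# In-memory stand-in for a warehouse table of transactions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (customer_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO txns VALUES (?, ?)",
    [(1, 120.0), (1, 80.0), (2, 300.0)],
)

# Per-customer aggregates, e.g. as inputs to a fraud model.
features = conn.execute(
    "SELECT customer_id, COUNT(*) AS n_txns, AVG(amount) AS avg_amount "
    "FROM txns GROUP BY customer_id ORDER BY customer_id"
).fetchall()
print(features)  # [(1, 2, 100.0), (2, 1, 300.0)]
```

The same SQL would run unchanged against most warehouses; only the connection object differs.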





