Mphasis

ML Ops Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an ML Ops Engineer with a contract length of "unknown" and a day rate of $560. It requires expertise in Python, SQL, and ML libraries, along with experience in cloud platforms and containerization. The work location is hybrid.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
560
-
πŸ—“οΈ - Date
December 31, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
San Leandro, CA
-
🧠 - Skills detailed
#TensorFlow #AWS (Amazon Web Services) #Monitoring #Data Exploration #DevOps #Spark (Apache Spark) #Data Science #Observability #Python #Compliance #Automation #PyTorch #Deployment #Libraries #Azure #Kubernetes #Data Engineering #AI (Artificial Intelligence) #Documentation #Cloud #ML Ops (Machine Learning Operations) #GCP (Google Cloud Platform) #Docker #Airflow #MLflow #ML (Machine Learning) #Scala #SQL (Structured Query Language)
Role description
Job Description: The Tachyon Predictive AI team is seeking a hybrid Data Science & ML Ops Engineer to drive the full lifecycle of machine learning solutions, from data exploration and model development to scalable deployment and monitoring. This role bridges the gap between data science model development and production-grade ML Ops engineering.

About the Role
This role involves developing predictive models and maintaining ML pipelines to enhance fraud reduction, operational efficiency, and customer insights.

Responsibilities
• Develop predictive models using structured and unstructured data across 10+ business lines, driving fraud reduction, operational efficiency, and customer insights.
• Leverage AutoML tools (e.g., Vertex AI AutoML, H2O Driverless AI) for low-code/no-code model development, documentation automation, and rapid deployment.
• Develop and maintain ML pipelines using tools like MLflow, Kubeflow, or Vertex AI (see the sketch at the end of this description).
• Automate model training, testing, deployment, and monitoring in cloud environments (e.g., GCP, AWS, Azure).
• Implement CI/CD workflows for model lifecycle management, including versioning, monitoring, and retraining.
• Monitor model performance using observability tools and ensure compliance with model governance frameworks (MRM, documentation, explainability).
• Collaborate with engineering teams to provision containerized environments and support model scoring via low-latency APIs.

Qualifications
• Strong proficiency in Python, SQL, and ML libraries (e.g., scikit-learn, XGBoost, TensorFlow, PyTorch).
• Experience with cloud platforms and containerization (Docker, Kubernetes).
• Familiarity with data engineering tools (e.g., Airflow, Spark) and ML Ops frameworks.
• Solid understanding of software engineering principles and DevOps practices.
• Ability to communicate complex technical concepts to non-technical stakeholders.

Required Skills
• Python
• SQL
• ML libraries (scikit-learn, XGBoost, TensorFlow, PyTorch)
• Cloud platforms (GCP, AWS, Azure)
• Containerization (Docker, Kubernetes)

Preferred Skills
• Data engineering tools (Airflow, Spark)
• ML Ops frameworks
• Software engineering principles
• DevOps practices
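For context, a minimal sketch of the kind of pipeline work described under Responsibilities, assuming a scikit-learn classifier and a default local MLflow tracking setup; the dataset, run name, and hyperparameters are illustrative and not taken from the posting.

# Minimal sketch: train a classifier and log parameters, the evaluation
# metric, and the model artifact with MLflow so the run can be versioned,
# monitored, and later deployed. All names and values are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the structured business data mentioned above.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

with mlflow.start_run(run_name="fraud-model-sketch"):
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42)
    model.fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    mlflow.log_params(params)                 # hyperparameters, for versioning
    mlflow.log_metric("test_auc", auc)        # evaluation metric, for monitoring
    mlflow.sklearn.log_model(model, "model")  # model artifact, for deployment

A run logged this way can then feed the CI/CD, monitoring, and retraining workflows the role describes.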