

Data Science & MLOps Engineer
Featured Role | Apply direct with Data Freelance Hub
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 9, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: San Leandro, CA
Skills detailed:
#SQL (Structured Query Language) #Azure #Documentation #Automation #AWS (Amazon Web Services) #Spark (Apache Spark) #ML (Machine Learning) #Scala #PyTorch #Python #Libraries #Compliance #MLflow #GCP (Google Cloud Platform) #Cloud #Deployment #Data Exploration #Airflow #ML Ops (Machine Learning Operations) #Docker #Observability #Data Science #Kubernetes #TensorFlow #Monitoring #Data Engineering #AI (Artificial Intelligence) #DevOps
Role description
Title: Data Science & MLOps Engineer
Location: SF Bay Area only (San Leandro), onsite
Long-term contract.
Job Description:
The Tachyon Predictive AI team is seeking a hybrid Data Science & MLOps Engineer to drive the full lifecycle of machine learning solutions, from data exploration and model development to scalable deployment and monitoring. This role bridges the gap between data science model development and production-grade MLOps engineering.
Key Responsibilities
• Develop predictive models using structured and unstructured data across 10+ business lines, driving fraud reduction, operational efficiency, and customer insights.
• Leverage AutoML tools (e.g., Vertex AI AutoML, H2O Driverless AI) for low-code/no-code model development, documentation automation, and rapid deployment.
• Develop and maintain ML pipelines using tools like MLflow, Kubeflow, or Vertex AI.
• Automate model training, testing, deployment, and monitoring in cloud environments (e.g., GCP, AWS, Azure).
• Implement CI/CD workflows for model lifecycle management, including versioning, monitoring, and retraining.
• Monitor model performance using observability tools and ensure compliance with model governance frameworks (model risk management (MRM), documentation, explainability).
• Collaborate with engineering teams to provision containerized environments and support model scoring via low-latency APIs.
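As a rough illustration of the model-monitoring responsibility above, a common drift signal is the population stability index (PSI), which compares a baseline score distribution against live scores. The function, bin count, and thresholds below are a minimal plain-Python sketch using illustrative conventions (PSI above roughly 0.25 is often read as significant drift); they are not part of the posting.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live score sample.

    Conventional reading (illustrative, not universal): < 0.1 stable,
    0.1-0.25 moderate shift, > 0.25 significant drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Smooth empty bins so the log term stays finite.
        return [(c + 0.5) / (len(sample) + 0.5 * bins) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]          # uniform model scores
shifted = [min(1.0, x + 0.3) for x in baseline]   # simulated upward drift
print(round(psi(baseline, baseline), 4))  # identical samples → ~0.0
print(psi(baseline, shifted) > 0.25)      # drifted sample trips the gate
```

In production this check would run on a schedule (e.g., in Airflow) against scored traffic, feeding the observability and retraining workflows the role describes.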
Qualifications
• Strong proficiency in Python, SQL, and ML libraries (e.g., scikit-learn, XGBoost, TensorFlow, PyTorch).
• Experience with cloud platforms and containerization (Docker, Kubernetes).
• Familiarity with data engineering tools (e.g., Airflow, Spark) and MLOps frameworks.
• Solid understanding of software engineering principles and DevOps practices.
• Ability to communicate complex technical concepts to non-technical stakeholders.
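To sketch the versioning-and-promotion side of the model lifecycle mentioned above, here is a toy in-memory registry with a quality gate, of the kind a CI/CD check might enforce before promoting a model to production. The class, method names, and AUC threshold are hypothetical illustrations, not MLflow's actual registry API.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRegistry:
    """Toy registry sketching register/promote flow (names are illustrative)."""
    versions: dict = field(default_factory=dict)  # name -> {version: metrics}
    stage: dict = field(default_factory=dict)     # name -> production version

    def register(self, name, metrics):
        # Assign the next integer version for this model name.
        v = max(self.versions.get(name, {}), default=0) + 1
        self.versions.setdefault(name, {})[v] = metrics
        return v

    def promote(self, name, version, min_auc=0.75):
        # Gate promotion on a quality threshold, as a CI/CD check might.
        if self.versions[name][version].get("auc", 0.0) < min_auc:
            return False
        self.stage[name] = version
        return True

reg = ModelRegistry()
v1 = reg.register("fraud_model", {"auc": 0.71})
v2 = reg.register("fraud_model", {"auc": 0.82})
print(reg.promote("fraud_model", v1))  # False: below the AUC gate
print(reg.promote("fraud_model", v2))  # True: promoted to production
print(reg.stage["fraud_model"])        # 2
```

Real registries (e.g., MLflow's model registry) add persistent storage, stage names, and audit history on top of this basic pattern.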