Ampstek

Data Science & ML Ops Engineer (CA Only) (W2 Contract)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Science & ML Ops Engineer (W2 Contract) in the SF Bay Area, requiring strong skills in Python, SQL, and cloud platforms. Candidates should have experience in ML Ops, predictive modeling, and containerization, with a focus on fraud reduction and operational efficiency.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
September 24, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
San Francisco Bay Area
-
🧠 - Skills detailed
#Airflow #ML Ops (Machine Learning Operations) #AI (Artificial Intelligence) #Monitoring #DevOps #Cloud #TensorFlow #Docker #Libraries #MLflow #Python #ML (Machine Learning) #Compliance #PyTorch #SQL (Structured Query Language) #Data Engineering #Azure #Automation #Data Science #Spark (Apache Spark) #Observability #Kubernetes #AWS (Amazon Web Services) #Deployment #Documentation #GCP (Google Cloud Platform)
Role description
Position: Data Science & ML Ops Engineer
Location: SF Bay Area only (San Leandro preferred), 5 days onsite
Duration: Contract (W2 candidates only)
Job Description:
The ideal candidate has strong experience with Google Cloud/Azure, Spark/Python, and ML Ops in general, and has worked in both data scientist and ML engineer roles. A strong ML engineer with fair knowledge of data science is also acceptable.
Responsibilities:
• Develop predictive models using structured and unstructured data across 10+ business lines, driving fraud reduction, operational efficiency, and customer insights.
• Leverage AutoML tools (e.g., Vertex AI AutoML, H2O Driverless AI) for low-code/no-code model development, documentation automation, and rapid deployment.
• Develop and maintain ML pipelines using tools like MLflow, Kubeflow, or Vertex AI (see the illustrative sketch after this description).
• Automate model training, testing, deployment, and monitoring in cloud environments (e.g., GCP, AWS, Azure).
• Implement CI/CD workflows for model lifecycle management, including versioning, monitoring, and retraining.
• Monitor model performance using observability tools and ensure compliance with model governance frameworks (MRM, documentation, explainability).
• Collaborate with engineering teams to provision containerized environments and support model scoring via low-latency APIs.
Required skills:
• Strong proficiency in Python, SQL, and ML libraries (e.g., scikit-learn, XGBoost, TensorFlow, PyTorch).
• Experience with cloud platforms and containerization (Docker, Kubernetes).
• Familiarity with data engineering tools (e.g., Airflow, Spark) and ML Ops frameworks.
• Solid understanding of software engineering principles and DevOps practices.
• Ability to communicate complex technical concepts to non-technical stakeholders.
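For illustration only, here is a minimal sketch of the kind of pipeline step this role involves: training a model with scikit-learn and logging it to MLflow for versioning and later deployment. The dataset, experiment name, registered model name, and hyperparameters are hypothetical assumptions rather than details from this posting, and registering the model assumes an MLflow tracking server with a model registry is available.

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical experiment name; assumes MLFLOW_TRACKING_URI points at a tracking server.
mlflow.set_experiment("fraud-scoring-demo")

# Synthetic, imbalanced stand-in for a fraud dataset.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 200, "learning_rate": 0.05, "max_depth": 3}
    model = GradientBoostingClassifier(**params).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    # Track parameters, metrics, and the model artifact so the run is
    # reproducible and the model can be versioned and deployed later.
    mlflow.log_params(params)
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="fraud-scoring-demo")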