

HMG AMERICA LLC
Machine Learning Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior MLOps Technical Lead in Cupertino, CA, or Austin, TX. Contract length and pay rate are unspecified. Key skills include 3+ years in backend systems, AI/ML integration, Python, and cloud infrastructure experience.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 21, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Austin, TX
-
🧠 - Skills detailed
#Deployment #GCP (Google Cloud Platform) #Kubernetes #Regression #TypeScript #AI (Artificial Intelligence) #Scala #Docker #Cloud #ML (Machine Learning) #AWS (Amazon Web Services) #Python #API (Application Programming Interface) #Azure
Role description
Job Title/Role: Senior MLOps Technical Lead
Location: Cupertino, CA / Austin, TX (onsite mandatory; local candidates required)
Objective:
Build an intelligent, data-driven platform. The focus is supporting the development of next-generation test analytics and test agents that enable faster insights, improved diagnostics, and scalable infrastructure for Generative AI systems connecting test stations, line-level data, and pipelines. You will build automated evaluation tools and conduct rigorous statistical analyses to ensure the reliability of both human and AI-based assessment systems.
You will benchmark, adapt, and integrate AI/ML models into existing software systems, and independently run and analyze ML experiments to deliver real improvements.
Must-Have Requirements
Backend/Systems Experience: 3+ years building production backend or distributed systems (pre-AI experience required)
Production AI Systems: has shipped AI/LLM features serving real users at scale, not just prototypes or demos
Agentic Systems: has built AI agents, skills, tools, or MCP (Model Context Protocol) integrations
Python: proficient for backend development
Secondary Language: working knowledge of Go, TypeScript, or Rust
Cloud Infrastructure: deep experience with AWS/GCP/Azure, including cost optimization and compute decisions, not just deployment
Container & Orchestration: hands-on with Docker and Kubernetes; can build, deploy, debug, and scale services themselves
LLM Integration: understands token economics, context limits, rate limiting, structured outputs, and API failure modes
LLM Evaluation: understands how to evaluate LLM outputs and the inherent challenges (non-determinism, quality measurement, regression detection)
Hands-On Engineer: not just an architect; writes code, debugs production issues, deploys their own work
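The LLM Integration requirement above names concrete failure modes: rate limiting, malformed structured outputs, and API errors. As a minimal sketch of the kind of defensive integration code this role involves, the snippet below wraps a hypothetical LLM call (`fake_llm_call` is a stub invented here, not any real vendor SDK) with exponential backoff on rate limits and JSON validation of the response. Everything except the retry/backoff pattern itself is an assumption for illustration.

```python
import json
import random
import time


class RateLimitError(Exception):
    """Stand-in for an HTTP 429 from a real LLM API."""


def fake_llm_call(prompt: str, fail_rate: float = 0.3) -> str:
    """Hypothetical stub: sometimes rate-limits, sometimes returns
    invalid JSON, to exercise the retry and validation paths."""
    roll = random.random()
    if roll < fail_rate:
        raise RateLimitError("429 Too Many Requests")
    if roll < fail_rate + 0.1:
        return "not json"  # malformed structured output
    return json.dumps({"sentiment": random.choice(["pos", "neg"])})


def call_with_retries(prompt: str, max_attempts: int = 5,
                      base_delay: float = 0.01) -> dict:
    """Exponential backoff on rate limits; retry on unparseable output."""
    for attempt in range(max_attempts):
        try:
            raw = fake_llm_call(prompt)
            return json.loads(raw)  # validate the structured output
        except RateLimitError:
            time.sleep(base_delay * (2 ** attempt))  # back off and retry
        except json.JSONDecodeError:
            continue  # bad structure: re-prompt immediately
    raise RuntimeError(f"gave up after {max_attempts} attempts")
```

In production the stub would be a real API client, the backoff would honor any `Retry-After` header, and validation would check a schema rather than just parseability, but the control flow is the same.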