The Fountain Group

Machine Learning Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Machine Learning Engineer with a contract length of "unknown" and a pay rate of $100-110/hour W2. Located onsite in Mountain View, CA, it requires 3+ years in ML engineering, proficiency in Python and PyTorch, and experience with safety models.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
880
-
πŸ—“οΈ - Date
May 13, 2026
πŸ•’ - Duration
Unknown
-
🏝️ - Location
On-site
-
πŸ“„ - Contract
W2 Contractor
-
πŸ”’ - Security
Unknown
-
πŸ“ - Location detailed
Mountain View, CA
-
🧠 - Skills detailed
#Version Control #Python #Cloud #ML (Machine Learning) #Deployment #Computer Science #AI (Artificial Intelligence) #Security #TensorFlow #PyTorch
Role description
PAY: $100-110/hour W2. Our company offers our consultants a suite of benefits after a qualification period including health, vision, dental, life and disability insurance. Onsite role in Mountain View, CA (any hybrid work will be at the manager’s discretion) W2 Candidates only Position Summary β€’ Seeking an experienced Machine Learning Engineer to lead the development of prompt injection and prompt safety models that protect downstream agentic AI systems across phone, cloud, and XR/AR. β€’ Role will design, train, and deploy classifier and guardrail models (both cloud-based and hybrid on-device) that screen agent inputs and outputs for injection attacks, unsafe content, and policy violations. A core part of the role is post-training these models with RLHF, DPO, and related optimization techniques to push detection accuracy and false-positive rates beyond what off-the-shelf solutions provide. Role and Responsibilities β€’ Design and train prompt injection detection models and prompt safety classifiers that operate on both inputs to and outputs from agentic AI systems. β€’ Build hybrid deployment pipelines that split safety inference between on-device (phone, XR/AR) and cloud, optimizing for latency, privacy, and detection coverage. β€’ Apply post-training techniques (e.g. RLHF, reward modeling, policy optimization) to optimize guardrail model performance, calibration, and robustness against adaptive adversaries. β€’ Curate and generate adversarial training data: direct and indirect prompt injections, jailbreaks, tool-use exploits, and unsafe-output cases drawn from red-teaming and production signals. β€’ Build evaluation harnesses that measure attack success rate false-positive rate latency, and on-device footprint across model iterations and threat categories. β€’ Partner with agent, device, and platform teams to integrate safety models into mobile-use agents, XR/AR assistants, and cloud agentic workflows, and to close the loop from production incidents back into training data. β€’ Work cross-functionally with security researchers, modeling teams, and product engineers; document methods and, where appropriate, contribute to patents and publications. Required Qualifications β€’ M.S. or Ph.D. in Computer Science, Machine Learning, Electrical Engineering, or a related field; or B.S. with equivalent industry experience. β€’ 3+ years of industry experience in ML engineering or applied AI research, with demonstrated ownership of production ML systems. β€’ 2+ years of industry experience in software engineering. β€’ Strong proficiency in Python and PyTorch (or JAX/TensorFlow), with solid software engineering fundamentals (version control, testing, and reproducible experimentation). β€’ Hands-on experience post-training LLMs with RLHF, DPO, RLAIF, or reward modeling including reward design, preference data curation, and training stability. β€’ Hands-on experience training and deploying classifier or guardrail models for safety, content moderation, abuse detection, or adversarial robustness. β€’ Familiarity with prompt injection, jailbreak, and agentic AI threat models, and with distributed training frameworks (DeepSpeed, FSDP, Accelerate). Preferred Qualifications β€’ Experience building safety or moderation systems for agentic AI: tool-use guardrails, indirect prompt injection defenses, or output filtering for autonomous agents. β€’ Experience with red-teaming, adversarial data generation, or automated attack pipelines. Who We Are: The Fountain Group is a nationwide staffing firm with over 80 Fortune 100-500 clients. Since 2001, TFG has maintained a consistent standard of excellence, and our work is broadly recognized every year through numerous industry performance awards. Our success is a team effort. Browse our website below for additional information on our company. The Fountain Group 3407 W Martin Luther King Jr. Dr. Tampa, FL 33607 β€œWe work in Life Sciences, Clinical, Engineering, IT, and more. Above all, we specialize in people.” By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy at Privacy Policy