

Amaze Systems
Lead Data Science with LLM, Gen AI and Agentic Architectures
Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Data Scientist specializing in LLMs, Generative AI, and Agentic Architectures. It offers a remote contract with a competitive pay rate. Candidates need 3+ years of ML experience, strong Python skills, and familiarity with LLM ecosystems.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: October 14, 2025
Duration: Unknown
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed:
#Monitoring #SQL (Structured Query Language) #R #Data Pipeline #PyTorch #ML (Machine Learning) #Data Science #Langchain #Azure #Batch #Cloud #Deployment #GCP (Google Cloud Platform) #AI (Artificial Intelligence) #Model Evaluation #SageMaker #Data Engineering #AWS (Amazon Web Services) #Python #Hugging Face #TensorFlow #Anomaly Detection #Spark (Apache Spark) #AWS SageMaker #Databases #Transformers #Databricks #Programming #Computer Science #ETL (Extract, Transform, Load) #Observability
Role description
Job Title: Lead Data Science with LLM, Gen AI and Agentic Architectures
Location: Remote / SANTA, CA
About the Role
We are seeking a skilled and forward-looking Machine Learning Engineer with expertise in Large Language Models (LLMs), Generative AI, and Agentic Architectures to join our growing R&D and Applied AI team.
This role is pivotal in helping Oversight deliver the next generation of agentic AI systems for enterprise spend management and risk controls. You will collaborate closely with AI/ML researchers, data engineers, and product teams to design, implement, and optimize intelligent systems that power autonomous exception resolution, anomaly detection, and explainable insights.
This is a hands-on engineering role where you will both build and scale ML systems and contribute to cutting-edge applied research in agentic AI.
Key Responsibilities
1. Core ML/LLM Engineering
• Design, train, fine-tune, and deploy ML/LLM models for production.
• Implement Retrieval-Augmented Generation (RAG) pipelines using vector databases (see the sketch after this list).
• Prototype and optimize multi-agent workflows using LangChain, LangGraph, and MCP.
• Develop prompt engineering, optimization, and safety techniques for agentic LLM interactions.
• Integrate memory, evidence packs, and explainability modules into agentic pipelines.
• Work with multiple LLM ecosystems, including:
• OpenAI GPT (GPT-4, GPT-4o, fine-tuned GPTs)
• Anthropic Claude (Claude 2/3 for reasoning and safety-aligned workflows)
• Google Gemini (multimodal reasoning, advanced RAG integration)
• Meta LLaMA (fine-tuned/custom models for domain-specific tasks)
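The RAG bullet above names vector databases and frameworks such as LangChain and LangGraph; the following is only a minimal sketch of the underlying pattern using the OpenAI Python SDK with an in-memory index, not the team's actual stack. The model names, sample policy snippets, and prompt are illustrative assumptions.

```python
# Minimal RAG sketch: embed documents, retrieve by cosine similarity,
# and ground the LLM answer in the retrieved context.
# Assumptions: OPENAI_API_KEY is set; model names and documents are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

DOCS = [
    "Expense reports over $5,000 require VP approval.",
    "Duplicate invoices are flagged when vendor, amount, and date match.",
    "Corporate card transactions are reconciled within 30 days.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(DOCS)  # stand-in for a real vector database

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed([query])[0]
    # cosine similarity of the query against every stored document vector
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [DOCS[i] for i in np.argsort(sims)[::-1][:k]]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer only from the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("When does an expense report need VP approval?"))
```

In a production agentic setup, the in-memory index would typically be replaced by a managed vector database and the retrieval step exposed as a tool the agent can call.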
2. Data & Infrastructure
• Collaborate with Data Engineering to build and maintain real-time and batch data pipelines supporting ML/LLM workloads.
• Conduct feature engineering, preprocessing, and embedding generation for structured and unstructured data.
• Implement model monitoring, drift detection, and retraining pipelines (a sketch follows this list).
• Utilize cloud ML platforms such as AWS SageMaker and Databricks ML for experimentation and scaling.
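The drift-detection bullet above does not prescribe a method; as a hedged sketch under simple assumptions, a two-sample Kolmogorov-Smirnov test per numeric feature is one common way to flag when the live distribution diverges from the training-time reference. The feature name, window sizes, and alpha threshold below are illustrative only.

```python
# Sketch: per-feature drift check comparing a recent serving window
# against the training reference using a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = {"amount": rng.lognormal(4.0, 1.0, 10_000)}  # training-time snapshot
live      = {"amount": rng.lognormal(4.4, 1.1, 2_000)}   # recent production window

def drifted_features(reference, live, alpha=0.01):
    """Return features whose live distribution differs significantly from reference."""
    flagged = []
    for name, ref_values in reference.items():
        result = ks_2samp(ref_values, live[name])
        if result.pvalue < alpha:
            flagged.append((name, result.statistic, result.pvalue))
    return flagged

for name, stat, p in drifted_features(reference, live):
    print(f"drift on '{name}': KS={stat:.3f}, p={p:.2e} -> consider retraining")
```

The same check can run per batch or on a schedule; PSI or embedding-distance checks are common alternatives for categorical or text features.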
3. Research & Applied Innovation
• Explore and evaluate emerging LLM/SLM architectures and agent orchestration patterns.
• Experiment with generative AI and multimodal models (text, images, structured financial data).
• Collaborate with R&D to prototype autonomous resolution agents, anomaly detection models, and reasoning engines.
• Translate research prototypes into production-ready components.
4. Collaboration & Delivery
• Work cross-functionally with R&D, Data Science, Product, and Engineering teams to deliver AI-driven business features.
• Participate in architecture discussions, design reviews, and model evaluations.
• Document experiments, processes, and results for effective knowledge sharing.
• Mentor junior engineers and contribute to best practices in ML engineering.
Education, Experience, and Skills Required
• Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or a related field.
• 3+ years of experience building and deploying ML systems.
• Strong programming skills in Python, with experience in PyTorch, TensorFlow, Scikit-learn, and Hugging Face Transformers.
• Hands-on experience with LLMs/SLMs (fine-tuning, prompt design, inference optimization).
• Demonstrated expertise in at least two of the following:
• OpenAI GPT (chat, assistants, fine-tuning)
• Anthropic Claude (safety-first reasoning, summarization)
• Google Gemini (multimodal reasoning, enterprise APIs)
• Meta LLaMA (open-source fine-tuned models)
• Familiarity with vector databases, embeddings, and RAG pipelines (see the embedding sketch after this list).
• Proficiency in handling structured and unstructured data at scale.
• Working knowledge of SQL and distributed frameworks such as Spark or Ray.
• Strong understanding of the ML lifecycle, from data prep and training to deployment and monitoring.
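As a hedged illustration of the embeddings and vector-database familiarity listed above (not a statement about the role's actual stack), the sketch below generates dense embeddings with the open-source sentence-transformers library and ranks documents by cosine similarity; the model name and sample texts are assumptions, and a production system would persist the vectors in a dedicated vector database rather than in memory.

```python
# Sketch: generate dense embeddings for unstructured text and rank by similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, widely used embedding model

corpus = [
    "Invoice 1042 appears to duplicate invoice 0998 from the same vendor.",
    "Travel expense approved by manager within policy limits.",
    "Unusual spike in card spend for office supplies in Q3.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query = "possible duplicate vendor invoice"
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine-similarity search; in production this lookup would hit a vector DB.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
```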
Preferred Qualifications
• Experience with agentic frameworks such as LangChain, LangGraph, MCP, or AutoGen.
• Knowledge of AI safety, guardrails, and explainability.
• Hands-on experience deploying ML/LLM solutions in AWS, GCP, or Azure.
• Experience with MLOps practices: CI/CD, monitoring, and observability.
• Background in anomaly detection, fraud/risk modeling, or behavioral analytics.
• Contributions to open-source AI/ML projects or applied research publications.