

Agentic AI Framework Engineer (LLM & Automation)
Featured Role | Apply direct with Data Freelance Hub
This role is for an "Agentic AI Framework Engineer (LLM & Automation)" in San Jose, CA, for 1 month (potentially extendable). Requires expertise in agentic AI frameworks, LLM deployment, Python, and microservices. W2 contract only; hybrid work preferred.
Country
United States
Currency
$ USD
-
Day rate
840
-
Date discovered
September 5, 2025
Project duration
1 to 3 months
-
Location type
Hybrid
-
Contract type
W2 Contractor
-
Security clearance
Unknown
-
Location detailed
California, United States
-
Skills detailed
#Computer Science #Python #Azure #Agile #Monitoring #Web Scraping #DevOps #GCP (Google Cloud Platform) #Langchain #Compliance #Microservices #Cloud #ETL (Extract, Transform, Load) #Scala #REST (Representational State Transfer) #Data Security #Java #Kubernetes #Automation #AWS (Amazon Web Services) #API (Application Programming Interface) #Docker #Security #GraphQL #AI (Artificial Intelligence)
Role description
Primary Skills: Agentic AI framework, Foundational Models, Python, Startup Background/Mindset
Location: San Jose, CA (hybrid; San Jose preferred, but open to San Francisco, CA, and to remote candidates)
Duration: 1 month, with potential to extend or convert to FTE
Contract Type: W2 only
Job Summary
Join our team to build a next-generation agentic AI framework that translates conversational inputs (via LLMs) into automated workflows, orchestrating tasks like scheduling, video recording, and social media posting. Using foundational models (no fine-tuning required), you'll design a scalable, modular backend system integrating APIs, RPA, and microservices, ensuring low-latency performance and robust security.
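To give a concrete feel for the kind of system described above, here is a minimal Python sketch of the dispatch layer: an LLM-produced task plan routed to pluggable handlers. Every name in it (Task, HANDLERS, the two example actions) is a hypothetical illustration under stated assumptions, not the team's actual design.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Task:
    action: str             # e.g. "schedule_meeting", "post_to_social"
    params: Dict[str, str]  # action-specific arguments extracted by the LLM

# Registry mapping an action name to the microservice/RPA handler that runs it.
HANDLERS: Dict[str, Callable[[Task], str]] = {}

def handler(action: str):
    """Register a handler function for a given task action."""
    def wrap(fn: Callable[[Task], str]) -> Callable[[Task], str]:
        HANDLERS[action] = fn
        return fn
    return wrap

@handler("schedule_meeting")
def schedule_meeting(task: Task) -> str:
    # In a real system this would call a calendar API.
    return f"scheduled '{task.params.get('title')}' at {task.params.get('time')}"

@handler("post_to_social")
def post_to_social(task: Task) -> str:
    # In a real system this would call a social platform API or drive RPA.
    return f"posted: {task.params.get('text')}"

def run_workflow(plan: List[Task]) -> List[str]:
    """Execute an ordered task plan produced by the LLM planning step."""
    results: List[str] = []
    for task in plan:
        fn = HANDLERS.get(task.action)
        results.append(fn(task) if fn else f"unknown action: {task.action}")
    return results

if __name__ == "__main__":
    plan = [
        Task("schedule_meeting", {"title": "Demo recording", "time": "Fri 10:00"}),
        Task("post_to_social", {"text": "New demo video is live!"}),
    ]
    print(run_workflow(plan))

In a registry design like this, new actions are added by registering another handler rather than touching the dispatch loop, which is roughly what the "extensible without refactoring" responsibility below is asking for.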
Responsibilities
• Develop a scalable backend framework to translate conversational instructions into automated tasks using microservices, APIs, and RPA
• Integrate LLMs to parse user intent, extract tasks, and dynamically plan workflows (a minimal parsing sketch follows this list)
• Architect modular, extensible systems to support new services and integrations without refactoring
• Optimize LLM inference and agent operations for low latency and high availability
• Manage API integrations (e.g., social platforms, calendars) and web-based tasks via RPA or headless browsers
• Implement data security, governance, and compliance best practices
• Collaborate with frontend, DevOps, and product teams to define architecture and deliver solutions
• Build monitoring and analytics for automation performance and system metrics
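As referenced in the intent-parsing bullet above, here is a hedged sketch of that step: a foundational model is prompted (no fine-tuning) to emit a JSON task plan, which is validated before anything executes. The prompt text, the allowed action names, and call_llm are placeholder assumptions, not the team's actual interface.

import json
from typing import Any, Dict, List

PLANNING_PROMPT = (
    "You are a task planner. Convert the user's request into a JSON array of "
    "tasks, each with an 'action' and a 'params' object. Allowed actions: "
    "schedule_meeting, record_video, post_to_social. Return only JSON."
)

ALLOWED_ACTIONS = {"schedule_meeting", "record_video", "post_to_social"}

def call_llm(system: str, user: str) -> str:
    # Placeholder for a hosted foundational-model call (OpenAI, Anthropic, a
    # cloud endpoint, etc.); a canned response keeps this sketch runnable.
    return json.dumps([
        {"action": "record_video", "params": {"topic": "weekly update"}},
        {"action": "post_to_social", "params": {"text": "Weekly update is live"}},
    ])

def parse_intent(user_message: str) -> List[Dict[str, Any]]:
    """Turn a conversational instruction into a validated task plan."""
    raw = call_llm(PLANNING_PROMPT, user_message)
    try:
        plan = json.loads(raw)
    except json.JSONDecodeError:
        return []  # In production: retry with a repair prompt or surface an error.
    # Reject anything outside the allow-list so malformed actions never execute.
    return [t for t in plan if isinstance(t, dict) and t.get("action") in ALLOWED_ACTIONS]

if __name__ == "__main__":
    print(parse_intent("Record the weekly update video and post it to social."))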
Required Skills and Qualifications
• Bachelor's degree or higher in Computer Science or related field
• Proven experience building agent-based frameworks (e.g., LangChain) for multi-step workflows
• Extensive experience deploying LLMs in production, optimizing for low-latency inference
• 5+ years developing high-performance backend systems (Python, Go, Java, or Node.js)
• Proficiency with REST, GraphQL, or gRPC, and integrating complex APIs (e.g., social media, scheduling)
• Experience with RPA tools, headless browsers, or web scraping for dynamic sites
• Expertise in microservices, cloud platforms (AWS, Azure, or GCP), and containerization (Docker/Kubernetes)
• Strong systems architecture skills for modular, distributed designs
• Comfortable in fast-paced, agile environments with evolving priorities