

Full Stack Python with LLM Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Full Stack Python with LLM Engineer on a contract basis, located in Atlanta, GA, Seattle, WA, or Dallas, TX. It requires expertise in fullstack Python, LLM prompt engineering, context management, and LangGraph.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 13, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Dallas, TX
Skills detailed: #Cloud #Docker #Monitoring #Deployment #FastAPI #NoSQL #SQL (Structured Query Language) #AI (Artificial Intelligence) #Databases #Data Science #Django #Python #Scala #Flask
Role description
Role: LLM/Prompt-Context Engineer – Full Stack Python (AI Agents, LangGraph, Context Engineering)
Location: Atlanta, GA/Seattle, WA/Dallas, TX
Type of Employment: Contract
Note: The selected applicant must be available to attend in person at any of the above locations.
Description: We are looking for a highly skilled LLM/Prompt-Context Engineer with a strong fullstack Python background to design, develop, and integrate intelligent systems focused on large language models (LLMs), prompt engineering, and advanced context management. In this role, you will play a critical part in architecting context-rich AI solutions, crafting effective prompts, and ensuring seamless agent interactions using frameworks like LangGraph.
Key Responsibilities:
• Prompt & Context Engineering: Design, optimize, and evaluate prompts for LLMs to achieve precise, reliable, and contextually relevant outputs across a variety of use cases.
• Context Management: Architect and implement dynamic context management strategies, including session memory, retrieval-augmented generation, and user personalization, to enhance agent performance.
• LLM Integration: Integrate, fine-tune, and orchestrate LLMs within Python-based applications, leveraging APIs and custom pipelines for scalable deployment.
• LangGraph & Agent Flows: Build and manage complex conversational and agent workflows using the LangGraph framework to support multi-agent or multi-step solutions (a minimal illustrative sketch follows this list).
• Fullstack Development: Develop robust backend services, APIs, and (optionally) front-end interfaces to enable end-to-end AI-powered applications.
• Collaboration: Work closely with product, data science, and engineering teams to define requirements, run prompt experiments, and iterate quickly on solutions.
• Evaluation & Optimization: Implement testing, monitoring, and evaluation pipelines to continuously improve prompt effectiveness and context handling.
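To make the scope of these responsibilities concrete, the sketch below wires a retrieval step and a generation step into a small LangGraph state graph. It is only an illustration of the kind of context-aware agent flow described above, not code from this engagement: the state fields, node names, and the placeholder retrieve/generate helpers are assumptions, and a real implementation would call an actual vector store and LLM API.

    from typing import List, TypedDict

    from langgraph.graph import END, StateGraph

    class AgentState(TypedDict):
        question: str
        context: List[str]
        answer: str

    def retrieve(state: AgentState) -> dict:
        # Placeholder retrieval step; a real agent would query a vector store here.
        return {"context": [f"snippet related to: {state['question']}"]}

    def generate(state: AgentState) -> dict:
        # Placeholder generation step; a real agent would call an LLM API with the
        # retrieved context injected into the prompt.
        prompt = f"Context: {state['context']}\nQuestion: {state['question']}"
        return {"answer": f"(model answer for: {prompt[:60]}...)"}

    graph = StateGraph(AgentState)
    graph.add_node("retrieve", retrieve)
    graph.add_node("generate", generate)
    graph.set_entry_point("retrieve")
    graph.add_edge("retrieve", "generate")
    graph.add_edge("generate", END)
    app = graph.compile()

    if __name__ == "__main__":
        print(app.invoke({"question": "What does this agent do?", "context": [], "answer": ""}))

The same pattern extends to the multi-agent and multi-step flows mentioned above by adding further nodes and conditional edges to the graph.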
Required Skills & Qualifications:
• Deep experience with fullstack Python development (FastAPI, Flask, Django; SQL/NoSQL databases); a minimal FastAPI sketch follows this list.
• Demonstrated expertise in prompt engineering for LLMs (e.g., OpenAI, Anthropic, open-source LLMs).
• Strong understanding of context engineering, including session management, vector search, and knowledge retrieval strategies.
• Hands-on experience integrating AI agents and LLMs into production systems.
• Proficient with conversational flow frameworks such as LangGraph.
• Familiarity with cloud infrastructure, containerization (Docker), and CI/CD practices.
• Exceptional analytical, problem-solving, and communication skills.
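As a rough illustration of the backend side of the stack, the sketch below exposes a single LLM call behind a FastAPI endpoint using the OpenAI Python SDK. The endpoint path, request/response models, and model name are assumptions made for the example, not requirements of this posting; a production service would add session memory, retrieved context, authentication, and error handling.

    from fastapi import FastAPI
    from openai import OpenAI
    from pydantic import BaseModel

    app = FastAPI()
    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    class AskRequest(BaseModel):
        question: str

    class AskResponse(BaseModel):
        answer: str

    @app.post("/ask", response_model=AskResponse)
    def ask(req: AskRequest) -> AskResponse:
        # Single-turn call; session memory and retrieved context would be folded
        # into the messages list in a fuller implementation.
        completion = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name only
            messages=[{"role": "user", "content": req.question}],
        )
        return AskResponse(answer=completion.choices[0].message.content or "")

Run locally with, for example, uvicorn app:app and POST a JSON body such as {"question": "..."} to /ask.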
Preferred:
• Experience evaluating and fine-tuning LLMs or working with RAG architectures.
• Background in information retrieval, search, or knowledge management systems.
• Contributions to open-source LLM, agent, or prompt engineering projects.