

GEN AI Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a GEN AI Engineer, requiring fullstack Python expertise and LLM/prompt-context engineering skills. Contract length is unspecified, with a pay rate of "unknown." Work location is onsite in Atlanta, GA; Dallas, TX; or Seattle, WA.
Country: United States
Currency: $ USD
Day rate: Unknown
Date discovered: September 17, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Dallas, TX
Skills detailed: #Docker #Visualization #Data Pipeline #ML (Machine Learning) #Cloud #Scala #Django #Monitoring #FastAPI #Azure #Flask #Knowledge Graph #Data Science #Deployment #Security #React #SQL (Structured Query Language) #Data Security #AI (Artificial Intelligence) #Logging #Angular #GCP (Google Cloud Platform) #Compliance #NoSQL #Databases #API (Application Programming Interface) #Python #Database Management #AWS (Amazon Web Services)
Role description
Hi,
Please let me know your interest in the positions below and share your updated resume.
3 Locations – Atlanta, GA; Dallas, TX; Seattle, WA (Day 1 onsite, no remote)
In-person interview required.
Position 1
LLM/Prompt-Context Engineer – Fullstack Python (AI Agents, LangGraph, Context Engineering)
Position Overview:
We are looking for a highly skilled LLM/Prompt-Context Engineer with a strong fullstack Python background to design, develop, and integrate intelligent systems focused on large language models (LLMs), prompt engineering, and advanced context management. In this role, you will play a critical part in architecting context-rich AI solutions, crafting effective prompts, and ensuring seamless agent interactions using frameworks like LangGraph.
Key Responsibilities:
• Prompt & Context Engineering: Design, optimize, and evaluate prompts for LLMs to achieve precise, reliable, and contextually relevant outputs across a variety of use cases.
• Context Management: Architect and implement dynamic context management strategies, including session memory, retrieval-augmented generation, and user personalization, to enhance agent performance.
• LLM Integration: Integrate, fine-tune, and orchestrate LLMs within Python-based applications, leveraging APIs and custom pipelines for scalable deployment.
• LangGraph & Agent Flows: Build and manage complex conversational and agent workflows using the LangGraph framework to support multi-agent or multi-step solutions (see the sketch after this position's Preferred list).
• Fullstack Development: Develop robust backend services, APIs, and (optionally) front-end interfaces to enable end-to-end AI-powered applications.
• Collaboration: Work closely with product, data science, and engineering teams to define requirements, run prompt experiments, and iterate quickly on solutions.
• Evaluation & Optimization: Implement testing, monitoring, and evaluation pipelines to continuously improve prompt effectiveness and context handling.
Required Skills & Qualifications:
• Deep experience with fullstack Python development (FastAPI, Flask, Django; SQL/NoSQL databases).
• Demonstrated expertise in prompt engineering for LLMs (e.g., OpenAI, Anthropic, open-source LLMs).
• Strong understanding of context engineering, including session management, vector search, and knowledge retrieval strategies.
• Hands-on experience integrating AI agents and LLMs into production systems.
• Proficient with conversational flow frameworks such as LangGraph.
• Familiarity with cloud infrastructure, containerization (Docker), and CI/CD practices.
• Exceptional analytical, problem-solving, and communication skills.
Preferred:
• Experience evaluating and fine-tuning LLMs or working with RAG architectures.
• Background in information retrieval, search, or knowledge management systems.
• Contributions to open-source LLM, agent, or prompt engineering projects.
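A minimal sketch of the kind of LangGraph flow this position describes, assuming a recent langgraph release: one node folds session memory and retrieved snippets into a prompt (context engineering), a second node would call the LLM. The LLM call is stubbed so the snippet runs without API keys; all names and the prompt template are illustrative, not taken from the posting.

# Minimal LangGraph sketch: assemble a context-rich prompt, then answer.
# The LLM call is stubbed; a real node would call an OpenAI/Anthropic client.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class AgentState(TypedDict):
    question: str
    history: list[str]    # session memory (prior turns)
    retrieved: list[str]  # snippets from a retrieval step
    prompt: str
    answer: str

def build_prompt(state: AgentState) -> dict:
    # Context engineering: fold memory and retrieved snippets into the prompt,
    # keeping only the most recent turns to stay inside the context window.
    context = "\n".join(state["retrieved"])
    memory = "\n".join(state["history"][-5:])
    prompt = (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\nRecent conversation:\n{memory}\n\n"
        f"Question: {state['question']}\nAnswer:"
    )
    return {"prompt": prompt}

def call_llm(state: AgentState) -> dict:
    # Stub standing in for an LLM call.
    return {"answer": f"[LLM response to a {len(state['prompt'])}-char prompt]"}

graph = StateGraph(AgentState)
graph.add_node("build_prompt", build_prompt)
graph.add_node("call_llm", call_llm)
graph.add_edge(START, "build_prompt")
graph.add_edge("build_prompt", "call_llm")
graph.add_edge("call_llm", END)
app = graph.compile()

if __name__ == "__main__":
    out = app.invoke({
        "question": "What does the retention policy say about logs?",
        "history": ["user: hi", "assistant: hello"],
        "retrieved": ["Logs are retained for 30 days."],
        "prompt": "",
        "answer": "",
    })
    print(out["answer"])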
Position 2
Backend/Agent Engineer – Python, AI Agents, LangGraph, MCP
Position Overview:
We are seeking an experienced Backend/Agent Engineer to join our team, specializing in designing, developing, and maintaining robust backend systems for AI-driven agent platforms. In this role, you will focus on architecting scalable, reliable backend services and integrating advanced AI agents using Python and frameworks like LangGraph and MCP (Model Context Protocol). You will work closely with multidisciplinary teams to deliver intelligent, context-aware solutions that power real-world applications.
Key Responsibilities:
• Backend System Development: Architect, implement, and maintain scalable backend services and APIs using Python (FastAPI, Flask, Django) to support AI agent operations (see the service sketch after this position's Preferred list).
• AI Agent & MCP Integration: Develop, deploy, and orchestrate autonomous AI agents, and leverage MCP to coordinate multiple AI models and agent workflows, ensuring seamless interoperability and orchestration.
• Agent Orchestration & Workflow: Utilize frameworks like LangGraph and MCP to design, compose, and manage complex agent workflows, multi-agent coordination, and state management.
• System Integration: Integrate external data sources, third-party APIs, vector databases, and machine learning models to enhance agent capabilities.
• Performance & Reliability: Optimize system performance, ensure high availability, and implement robust logging, monitoring, and error-handling solutions.
• Collaboration: Work cross-functionally with front-end developers, data scientists, and product teams to translate requirements into technical designs and deliverables.
• Security & Compliance: Implement best practices for data security, privacy, and compliance in all backend operations.
Required Skills & Qualifications:
• Strong experience with backend Python frameworks (FastAPI, Flask, Django), RESTful API design, and database management (SQL/NoSQL).
• Proven expertise in integrating and orchestrating AI agents, LLMs, or multi-agent systems in production environments.
• Hands-on experience with MCP (Model Context Protocol) for coordinating and managing AI models and agent workflows.
• Familiarity with agent workflow orchestration tools such as LangGraph.
• Experience with containerization (Docker), CI/CD pipelines, and cloud infrastructure (AWS, GCP, Azure).
• Knowledge of vector databases, data pipelines, and scalable distributed systems.
• Excellent problem-solving abilities, attention to detail, and communication skills.
Preferred:
• Background in designing backend architectures for AI or data-intensive applications.
• Experience with knowledge graphs, RAG pipelines, or advanced agent coordination.
• Open-source contributions or experience with modern agent/LLM frameworks, MCP, or similar platforms.
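As a rough illustration of the backend responsibilities above, here is a minimal FastAPI service that exposes an agent behind a REST endpoint with request validation, logging, and basic error handling. The agent call is stubbed (a real implementation would hand off to a compiled LangGraph app or an MCP-backed tool); the module, route, and field names are illustrative, not from the posting.

# Minimal FastAPI sketch: wrap an agent behind a REST endpoint.
# Run with: uvicorn agent_api:app --reload  (assuming this file is agent_api.py)
import logging
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("agent-api")

app = FastAPI(title="Agent Service")

class AgentRequest(BaseModel):
    session_id: str
    message: str

class AgentResponse(BaseModel):
    session_id: str
    reply: str

def run_agent(session_id: str, message: str) -> str:
    # Stub: a real implementation would invoke the agent runtime here.
    if not message.strip():
        raise ValueError("empty message")
    return f"echo[{session_id}]: {message}"

@app.post("/agent/invoke", response_model=AgentResponse)
def invoke_agent(req: AgentRequest) -> AgentResponse:
    logger.info("invoke session=%s", req.session_id)
    try:
        reply = run_agent(req.session_id, req.message)
    except ValueError as exc:
        # Surface bad input as a 400 instead of an unhandled 500.
        raise HTTPException(status_code=400, detail=str(exc)) from exc
    return AgentResponse(session_id=req.session_id, reply=reply)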
Position 3
Integration Engineer – Fullstack Python (AI Agents, LangGraph, Context Engineering)
Position Overview:
We are seeking a talented and driven Integration Engineer with a strong fullstack Python background to join our AI solutions team. In this role, you will be responsible for building, integrating, and optimizing intelligent systems leveraging AI agents, the LangGraph framework, and advanced context engineering techniques. You will work across the stack, collaborating with data scientists, ML engineers, and product teams to deliver scalable, adaptive, and context-aware solutions.
Key Responsibilities:
• Fullstack Development: Design, develop, and maintain end-to-end Python applications that interface with AI agents and backend services, ensuring robust, scalable, and maintainable codebases.
• AI Agent Integration: Implement and orchestrate autonomous and semi-autonomous AI agents, connecting them with APIs, data sources, and user-facing interfaces.
• LangGraph Utilization: Leverage the LangGraph framework to construct, visualize, and manage complex conversational flows and agent interactions.
• Context Engineering: Architect and implement systems for dynamic context management, memory, and prompt engineering to optimize agent behavior and user experience (see the retrieval sketch after this position's Preferred list).
• System Integration: Integrate machine learning models, vector databases, and third-party services, ensuring seamless interoperability across components.
• Collaboration: Work closely with cross-functional teams to define requirements, propose technical solutions, and drive successful project delivery.
• Testing & Optimization: Develop automated tests, monitor system performance, and continually refine integration points for reliability and efficiency.
Required Skills & Qualifications:
• Proven experience with fullstack Python development (FastAPI, Flask, Django, React/Vue/Angular, SQL/NoSQL databases).
• Hands-on experience building and integrating AI agents (LLMs, RAG, multi-agent systems) in production environments.
• Familiarity with the LangGraph framework and agent orchestration patterns.
• Deep understanding of context engineering, including retrieval-augmented generation, prompt design, and session management.
• Experience with cloud platforms (AWS, GCP, Azure), containerization (Docker), and CI/CD pipelines.
• Strong problem-solving skills, attention to detail, and a collaborative mindset.
Preferred:
• Knowledge of advanced ML model serving, vector search, or knowledge graph integration.
• Experience with modern front-end frameworks and data visualization tools.
• Contributions to open-source AI or agent frameworks.
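As a toy illustration of the retrieval-augmented context engineering this position calls for, the sketch below ranks documents by cosine similarity over bag-of-words vectors. A production system would use an embedding model and one of the vector databases named above instead; all names here are illustrative.

# Toy retrieval sketch: rank documents by cosine similarity of word counts.
# Stands in for an embedding model + vector database in a real RAG pipeline.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": lowercase bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

if __name__ == "__main__":
    corpus = [
        "Agents are orchestrated as LangGraph state machines.",
        "The backend exposes FastAPI endpoints for each agent.",
        "Vector databases store document embeddings for retrieval.",
    ]
    for doc in retrieve("How are agents orchestrated?", corpus):
        print(doc)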
Regards,
Ashish Rawat