

Tuppl
Python with GenAI :: Charlotte, NC :: Local
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Python Developer with 6-7 years of experience in Generative AI and Large Language Models, located in Charlotte, NC. Contract position with a pay rate of "XX". Requires hands-on skills in Python, LangChain, and vector databases.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 14, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Langchain #Data Science #AI (Artificial Intelligence) #Web Services #Cloud #GCP (Google Cloud Platform) #Azure #Docker #Microsoft Azure #GIT #Programming #REST API #Hugging Face #Flask #Model Evaluation #REST (Representational State Transfer) #NLP (Natural Language Processing) #Python #Kubernetes #Databases #ML (Machine Learning) #FastAPI #AWS (Amazon Web Services) #Scala
Role description
Role: Python with GenAI
Location: Charlotte, NC (local candidates only)
Duration: Contract
Face-to-face interview required
OPT candidates are eligible
Experience: 6-7 years
Job Summary
We are seeking a skilled Python Developer with hands-on experience in Generative AI (GenAI) and Large Language Models (LLMs). The ideal candidate will design, develop, and deploy AI-powered applications using Python and modern GenAI frameworks such as LangChain, LlamaIndex, and vector databases. You will work closely with data scientists, product managers, and engineering teams to build scalable AI solutions including chatbots, copilots, document intelligence systems, and Retrieval-Augmented Generation (RAG) applications.
Key Responsibilities
• Develop and maintain Python-based applications leveraging Generative AI and LLM technologies.
• Build RAG pipelines using vector databases and embedding models.
• Integrate models from OpenAI, Anthropic, Google Gemini, Meta Llama, and similar platforms.
• Design prompt engineering strategies to optimize model performance.
• Implement APIs and backend services using FastAPI or Flask.
• Work with cloud AI services such as Amazon Web Services Bedrock, Microsoft Azure OpenAI, and Google Vertex AI.
• Deploy applications using Docker and Kubernetes.
• Monitor model quality, latency, and cost.
• Collaborate with stakeholders to gather requirements and deliver production-ready AI solutions.
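To give a flavor of the RAG work described above, here is a minimal sketch of the retrieval step. The bag-of-words "embedding" and in-memory cosine ranking are toy stand-ins chosen for illustration; a production pipeline would use a real embedding model and a vector database such as Pinecone or FAISS.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words term counts (stand-in for a real model).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and return the top k;
    # a real system would issue this query against a vector database.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Python supports generators and decorators.",
    "Vector databases store embeddings for similarity search.",
    "FastAPI builds REST APIs quickly.",
]
print(retrieve("similarity search with embeddings", docs, k=1))
```

The retrieved passages would then be packed into the LLM prompt, which is the "augmented generation" half of RAG.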
Required Skills
• Strong programming experience in Python.
• Hands-on experience with Generative AI and LLMs.
• Experience with LangChain, LlamaIndex, CrewAI, or similar frameworks.
• Knowledge of RAG, embeddings, and vector search.
• Experience with vector databases such as Pinecone, Weaviate, FAISS, Chroma, or Milvus.
• Proficiency with REST APIs, FastAPI, or Flask.
• Familiarity with Git, Docker, and CI/CD pipelines.
• Understanding of prompt engineering and model evaluation.
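As an illustration of the prompt-engineering skill listed above, the sketch below assembles a grounded prompt from retrieved context. The template wording and the `build_rag_prompt` helper are illustrative assumptions, not a prescribed format.

```python
def build_rag_prompt(question: str, contexts: list[str]) -> str:
    # Assemble a grounded prompt: instruction, retrieved context, question.
    context_block = "\n".join(f"- {c}" for c in contexts)
    return (
        "Answer using only the context below. "
        "If the answer is not in the context, say so.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_rag_prompt("What stores embeddings?",
                       ["Vector databases store embeddings."]))
```

Constraining the model to the supplied context is a common tactic for reducing hallucinations in RAG applications.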
Preferred Skills
• Experience fine-tuning models using Hugging Face and LoRA.
• Knowledge of NLP and machine learning concepts.
• Experience with cloud services (AWS, Azure, or GCP).
• Familiarity with orchestration and agent frameworks.
kat@tuppl.com | (469) 351-8138






