Generative AI Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a "Generative AI Engineer" in Austin, TX on a long-term contract. Key skills include proficiency in Python, LLM prompt engineering, and hands-on experience with AI/ML libraries; experience with vector databases and familiarity with cloud platforms are also required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
September 3, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Austin, TX
🧠 - Skills detailed
#Libraries #Stories #Documentation #TensorFlow #PyTorch #ML (Machine Learning) #Hugging Face #AI (Artificial Intelligence) #DevOps #Data Science #Langchain #Monitoring #Cloud #Scala #Databases #Python
Role description
Job Title: GenAI Application Engineer (Python & LLM Prompt Engineering)
Location: Austin, TX (on-site)
Duration: Long-term contract

Job Summary:
We are seeking a highly skilled and motivated GenAI Application Engineer to join our team in Austin, TX. The ideal candidate will have hands-on experience building and deploying Generative AI applications in Python, with a strong foundation in LLM prompt engineering, RAG pipelines, and vector database integration. The role involves working closely with cross-functional teams to design, develop, and optimize AI-driven solutions that enhance user experience and operational efficiency.

Required Skills:
• Proficiency in Python and experience with AI/ML libraries (e.g., LangChain, Hugging Face, PyTorch, TensorFlow).
• Experience with agentic AI.
• Strong understanding of LLMs, prompt engineering, and RAG architecture.
• Experience with vector databases.
• Familiarity with cloud platforms.

Key Responsibilities:
• Design and develop GenAI-powered applications using Python and modern AI frameworks.
• Engineer and optimize prompts for LLMs.
• Implement Retrieval-Augmented Generation (RAG) pipelines that aggregate data from diverse sources such as conversation histories and product documentation (see the illustrative sketch below).
• Integrate and manage vector databases for semantic search and context retrieval.
• Collaborate with product managers, data scientists, and DevOps teams to deliver scalable AI solutions.
• Fine-tune pre-trained LLMs for domain-specific tasks and performance improvements.
• Ensure robust testing, monitoring, and documentation of AI models and pipelines.

If you're interested, feel free to connect!
📩 Email: Anmol@coretek.io | 📞 Phone: +1 816-445-8499
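For applicants who want a concrete picture of the RAG responsibility above, here is a minimal, library-agnostic sketch of the retrieve-then-prompt flow: embed a few documents, pick the one closest to the question by cosine similarity, and pass it to the model as context. The `embed` and `call_llm` functions are hypothetical placeholders, not part of the posting; a real implementation would use an embedding model, a managed vector database, and an LLM SDK such as the libraries named in the skills list.

```python
# Minimal illustrative RAG sketch (hypothetical; not part of the job posting).
# `embed` and `call_llm` are stand-ins for a real embedding model and LLM API.
import math


def embed(text: str) -> list[float]:
    # Toy embedding: character-frequency vector over the letters a-z.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec


def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two vectors; 0.0 if either is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM call.
    return f"[LLM answer based on a prompt of {len(prompt)} characters]"


# "Vector database" stand-in: an in-memory list of (text, embedding) pairs.
docs = [
    "Product documentation: password resets require a verified email address.",
    "Conversation history: the user asked how to export reports to CSV.",
]
index = [(doc, embed(doc)) for doc in docs]


def answer(question: str) -> str:
    # Retrieve the most similar document, then augment the prompt with it.
    q_vec = embed(question)
    context, _ = max(index, key=lambda pair: cosine(q_vec, pair[1]))
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    print(answer("How do I reset my password?"))
```

In a production pipeline, the retrieval step would typically return the top-k chunks from a vector database rather than a single document, and the prompt template would be tuned as part of the prompt-engineering work described above.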