

LLM Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for an LLM Engineer in Dallas, TX (Hybrid - 3 days onsite); the contract length and pay rate are unspecified. Key requirements include 5+ years in Machine Learning, 2+ years with LLMs, and proficiency in Python, LangChain, SQL, and cloud platforms.
Country: United States
Currency: $ USD
Day rate: Unknown
Date discovered: June 12, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: Dallas, TX
Skills detailed: #AWS (Amazon Web Services) #Azure #Scala #Deployment #GCP (Google Cloud Platform) #Cloud #AI (Artificial Intelligence) #SQL (Structured Query Language) #Programming #Python #ML (Machine Learning) #LangChain #Data Science
Role description
Job Title: LLM Engineer
Dallas, TX (Hybrid - 3 days onsite)
Required Skills:
• 5+ years of professional experience in developing Machine Learning models and systems.
• 2+ years of hands-on expertise with LLMs and Generative AI, focusing on prompt engineering, Retrieval-Augmented Generation (RAG), and AI agents.
• Proficiency in programming using Python, LangChain/LangGraph, and SQL (must-have).
• Strong understanding of cloud platforms such as Azure, GCP, or AWS.
• Excellent communication skills to collaborate effectively with business SMEs and technical teams.
Roles & Responsibilities:
• Develop & Optimize LLM-Based Solutions: Lead the design, training, fine-tuning, and deployment of LLMs using advanced techniques such as prompt engineering, RAG, and agent-based architectures.
• Codebase Ownership: Write and maintain high-quality, scalable, and efficient code in Python (LangChain/LangGraph) and SQL, ensuring reusability and performance optimization.
• Cloud Integration: Support deployment of GenAI applications on Azure, GCP, or AWS, ensuring efficient resource management and robust CI/CD processes.
• Cross-Functional Collaboration: Work closely with product owners, data scientists, and business SMEs to translate technical requirements into real-world AI-driven solutions.
• Continuous Innovation: Stay updated with the latest advancements in LLM research and Generative AI, exploring emerging techniques to enhance model performance.
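For candidates unfamiliar with the RAG pattern named in the requirements, here is a minimal sketch of the idea in plain Python: retrieve the documents most relevant to a query, then assemble an augmented prompt for the LLM. The token-overlap scoring and all function names below are illustrative stand-ins (a real system would use an embedding-based vector store and a framework such as LangChain), not the employer's implementation.

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive token overlap with the query.

    A toy stand-in for embedding similarity search in a vector store.
    """
    q_tokens = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_tokens & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the LLM."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"


docs = [
    "LangChain chains LLM calls with retrievers and tools.",
    "SQL is used to query relational databases.",
    "Prompt engineering shapes LLM behavior via instructions.",
]
query = "How does LangChain work?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)
```

The augmentation step is the key design point: the model answers from retrieved context rather than from its parametric memory alone, which is what the role's RAG experience requirement refers to.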