Senior LLM Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior LLM Engineer in Dallas, TX, for 12 months, offering a competitive pay rate. Requires a Bachelor's degree in an IT-related discipline, 8+ years in machine learning, 2+ years with LLMs, proficiency in Python and SQL, and knowledge of cloud services.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 25, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Dallas, TX
🧠 - Skills detailed
#Langchain #Azure #ML (Machine Learning) #Programming #Data Science #Scala #AI (Artificial Intelligence) #Computer Science #GCP (Google Cloud Platform) #Python #AWS (Amazon Web Services) #Cloud #Deployment #SQL (Structured Query Language)
Role description

Our client seeks a Sr. LLM Engineer for a long-term project in Dallas, TX. Below are the detailed requirements.

Title: Sr. LLM Engineer

Location: Dallas, TX (local candidates)

Duration: 12 Months

Job Description:

Bachelor's degree, preferably in Computer Science, Information Technology, Computer Engineering, or a related IT discipline, or equivalent experience, with a minimum of 12+ years of experience.

   • 8+ years of professional experience building machine learning models and systems.

   • 2+ years of hands-on experience with LLMs and generative AI techniques, particularly prompt engineering, retrieval-augmented generation (RAG), and agents.

   • Expert programming proficiency in Python (including LangChain/LangGraph) and SQL is a must.

   • Understanding of cloud services, including Azure, GCP, or AWS.

   • Excellent communication skills to effectively collaborate with business SMEs.

Roles & Responsibilities

   • Develop and optimize LLM-based solutions: Lead the design, training, fine-tuning, and deployment of large language models, leveraging techniques like prompt engineering, retrieval-augmented generation (RAG), and agent-based architectures.

   • Codebase ownership: Maintain high-quality, efficient code in Python (using frameworks like LangChain/LangGraph) and SQL, focusing on reusable components, scalability, and performance best practices.

   • Cloud integration: Aid in the deployment of GenAI applications on cloud platforms (Azure, GCP, or AWS), optimizing resource usage and ensuring robust CI/CD processes.

   • Cross-functional collaboration: Work closely with product owners, data scientists, and business SMEs to define project requirements, translate technical details, and deliver impactful AI products.

   • Continuous innovation: Stay abreast of the latest advancements in LLM research and generative AI, proposing and experimenting with emerging techniques to drive ongoing improvements in model performance.