

Crossing Hurdles
LLM Engineer | Hybrid
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an LLM Engineer with a contract duration of over 6 months, offering $150K – $400K/year. Key skills include Python, LLM deployment, multi-agent orchestration, cloud platforms (AWS GovCloud, Google GovCloud), and secure coding practices. Hybrid work location.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
1818
🗓️ - Date
March 18, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#AI (Artificial Intelligence) #Azure #Security #Data Catalog #ML (Machine Learning) #Cloud #REST API #DevOps #LangChain #Python #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #REST (Representational State Transfer) #Metadata #Documentation #Data Pipeline #Scala
Role description
Position: AI/ML Engineer
Type: Full-Time
Compensation: $150K – $400K/yr
Location: Hybrid/Onsite
Commitment: 10-40 hours/week
Role Responsibilities
• Design, implement, and optimize AI/ML models leveraging LLMs, RAG, and prompt engineering for production-grade applications.
• Develop and orchestrate multi-agent systems using frameworks such as LangGraph and LangChain.
• Integrate and deploy solutions in secure cloud environments including AWS GovCloud, Google GovCloud, Azure IL5+, Vertex AI, and AWS Bedrock.
• Build and maintain robust data pipelines, ETL processes, metadata catalogs, and ontologies to support high-quality AI training and inference.
• Develop REST APIs and SDK integrations to enable seamless interaction between models and systems.
• Collaborate with product, security, and engineering teams to ensure secure and scalable delivery aligned with DevOps and CI/CD best practices.
• Document architectural decisions and communicate technical concepts clearly to technical and non-technical stakeholders.
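To illustrate the kind of work the first responsibility describes, here is a minimal, dependency-free sketch of the retrieval step in a RAG pipeline: rank a small document store by keyword overlap with the query, then assemble the top passages into a grounded prompt for an LLM. The document texts and function names are illustrative only; a production system would use vector embeddings and a managed model endpoint such as AWS Bedrock or Vertex AI rather than word overlap.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by word overlap with the query and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble retrieved passages into a prompt that grounds the answer."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


if __name__ == "__main__":
    store = [
        "AWS GovCloud isolates workloads for US government compliance.",
        "LangGraph coordinates multiple agents as a state machine.",
        "ETL pipelines extract, transform, and load data for model training.",
    ]
    question = "What does AWS GovCloud provide?"
    print(build_prompt(question, retrieve(question, store)))
```

In a real deployment the retriever, prompt template, and model call would each be separate, testable components, which is what the pipeline and API responsibilities above amount to.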
Requirements
• Strong proficiency in Python for AI/ML development, including REST APIs and SDK integrations.
• Hands-on experience deploying LLM and RAG systems in production environments.
• Experience with multi-agent orchestration and frameworks such as LangGraph or LangChain.
• Deep familiarity with cloud AI platforms including AWS GovCloud, Google GovCloud, Azure IL5+, Vertex AI, or AWS Bedrock.
• Experience building data pipelines, ETL systems, metadata catalogs, and ontologies.
• Strong understanding of secure coding practices and modern DevOps workflows including CI/CD pipelines.
• Excellent written and verbal communication skills for cross-functional collaboration and documentation.
Application Process
• Upload resume
• Interview (15 min)
• Submit form






