

TEK NINJAS
AI/ML Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a GEN AI/ML Engineer based in Dallas, TX or Charlotte, NC, with a 12-month contract at an undisclosed pay rate. Requires 10+ years in AI, ML, and Data Science, with strong Python and MLOps skills.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
February 21, 2026
Duration
More than 6 months
-
Location
Hybrid
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Dallas, TX
-
Skills detailed
#Databases #API (Application Programming Interface) #GitHub #Kubernetes #Scala #Cloud #MLflow #Docker #Hugging Face #Lambda (AWS Lambda) #Microservices #Langchain #REST API #PyTorch #REST (Representational State Transfer) #S3 (Amazon Simple Storage Service) #Python #Data Science #AWS SageMaker #AI (Artificial Intelligence) #ML Ops (Machine Learning Operations) #Transformers #GCP (Google Cloud Platform) #Azure #ML (Machine Learning) #Deployment #TensorFlow #ETL (Extract, Transform, Load) #FastAPI #pydantic #AWS (Amazon Web Services) #Automation #JSON (JavaScript Object Notation) #SageMaker #EC2
Role description
Job Title: GEN AI/ML Engineer
Location: Dallas, TX or Charlotte, NC (Onsite/Hybrid; will consider candidates willing to relocate to the client's location)
Duration: 12 Months
Must Have Skills:
• Gen AI
• Agentic AI
• MLOps
• Python
• ML
• Data Science
• RAG
• LLM
Nice to Have Skills:
• GCP
• Prompt Engineering
Detailed Job Description:
We are seeking a highly skilled Generative AI Engineer with a strong Python background to design, develop, and deploy cutting-edge AI solutions. The ideal candidate will have hands-on experience with Large Language Models (LLMs), prompt engineering, and Gen AI frameworks, along with expertise in building scalable AI applications. Experience developing agentic AI solutions is required.
Key Responsibilities:
• Design and implement Generative AI models for text, image, or multimodal applications.
• Develop prompt engineering strategies and embedding-based retrieval systems (see the retrieval sketch after this list).
• Integrate Gen AI capabilities into web applications and enterprise workflows.
• Build agentic AI applications with context engineering and MCP (Model Context Protocol) tools.
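As a reference point for the embedding-based retrieval responsibility above, the sketch below indexes a few documents with sentence-transformers embeddings in a FAISS index and retrieves the closest passages for a query. It is a minimal illustration under assumed libraries (sentence-transformers, faiss-cpu); the model name, documents, and top_k value are placeholders, not the client's actual stack.

```python
# Minimal embedding-based retrieval sketch (assumed libraries: sentence-transformers, faiss-cpu).
# Model choice and documents are illustrative placeholders.
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Our claims API returns status codes within 200 ms.",
    "Policy renewals are processed nightly by a batch job.",
    "The fraud model is retrained weekly on labeled cases.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, widely used embedding model

# Embed and L2-normalize so inner product equals cosine similarity.
doc_vectors = model.encode(documents, convert_to_numpy=True).astype("float32")
faiss.normalize_L2(doc_vectors)

index = faiss.IndexFlatIP(doc_vectors.shape[1])
index.add(doc_vectors)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query."""
    q = model.encode([query], convert_to_numpy=True).astype("float32")
    faiss.normalize_L2(q)
    _, ids = index.search(q, top_k)
    return [documents[i] for i in ids[0]]

if __name__ == "__main__":
    for passage in retrieve("How often is the fraud model retrained?"):
        print(passage)
```

In a full RAG pipeline, the retrieved passages would be injected into the LLM prompt; that step is omitted here to keep the sketch focused on retrieval.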
Required Skills & Qualifications:
• 10+ years of hands-on experience in AI, data science, ML, and Gen AI.
• Strong hands-on experience designing and deploying Retrieval-Augmented Generation (RAG) pipelines.
• Strong MLOps/LLMOps experience with CI/CD automation.
• Extensive experience with LangChain, LangGraph, and agentic AI patterns, including routing, memory, multi-agent orchestration, guardrails, and failure recovery.
• Cloud-native engineering experience across AWS (SageMaker, Lambda, ECS/Fargate, S3, API Gateway, Step Functions) and GCP (Vertex AI) for scalable AI delivery.
• Experience developing microservices and REST APIs using FastAPI, Pydantic/JSON schemas, Docker, and Kubernetes for low-latency serving (a minimal serving sketch follows this list).
• Strong hands-on experience with vector databases and semantic search technologies, including Pinecone, FAISS, ChromaDB, and embedding lifecycle management.
• Strong proficiency in Python and AI/ML frameworks (PyTorch, TensorFlow).
• Hands-on experience using session state and memory to build multi-agent systems, along with MCP tools.
• Hands-on experience with LLMs, transformers, and the Hugging Face ecosystem.
• Knowledge of and experience with vector databases and RAG techniques for semantic search.
• Familiarity with cloud AI services (AWS SageMaker, Azure OpenAI, GCP Vertex AI).
• Understanding of MLOps practices for scalable AI deployment.
• Strong experience with LLM fine-tuning using LoRA, QLoRA, and PEFT (a fine-tuning sketch also follows this list).
• Strong experience architecting advanced RAG systems using Pinecone, FAISS, Weaviate, Chroma, hybrid retrieval, and custom embeddings.
• Strong experience designing end-to-end LLMOps/MLOps pipelines using MLflow, DVC, SageMaker Pipelines, Vertex AI Pipelines, and GitHub Actions.
• Experience running cloud-native AI systems on AWS (SageMaker, Lambda, EKS, EC2, Step Functions, S3, Glue) and GCP Vertex AI, supporting high-volume inference and secure enterprise operations.
• Experience developing multi-agent orchestration workflows using LangGraph and CrewAI for tool calling, validation agents, automated reasoning, and workflow supervision.
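For the FastAPI/Pydantic microservice requirement above, here is a minimal serving sketch: a single POST endpoint that validates a request schema and returns a stubbed generation. The endpoint path, schema fields, and the generate_answer stub are illustrative assumptions, not the client's actual service.

```python
# Minimal FastAPI serving sketch; schema fields and the stubbed generator are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI(title="genai-serving-sketch")

class GenerateRequest(BaseModel):
    prompt: str = Field(..., min_length=1)
    max_tokens: int = Field(256, ge=1, le=4096)

class GenerateResponse(BaseModel):
    completion: str
    model: str

def generate_answer(prompt: str, max_tokens: int) -> str:
    # Placeholder for a real LLM call (hosted endpoint or local model).
    return f"[stubbed completion for: {prompt[:40]}...]"

@app.post("/generate", response_model=GenerateResponse)
def generate(req: GenerateRequest) -> GenerateResponse:
    text = generate_answer(req.prompt, req.max_tokens)
    return GenerateResponse(completion=text, model="stub-llm")

# Run locally with: uvicorn app:app --reload   (assuming this file is saved as app.py)
```

Pydantic handles request validation and JSON schema generation; the same service would typically be containerized with Docker and deployed behind Kubernetes for the low-latency serving described above.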
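The LoRA/QLoRA/PEFT bullet can be illustrated with the short configuration sketch below, which wraps a small causal LM with a LoRA adapter via the peft library. The base model name, target modules, and hyperparameters are assumptions chosen only for illustration; a real fine-tune would add a dataset, a training loop, and evaluation.

```python
# Minimal LoRA adapter setup sketch using Hugging Face transformers + peft.
# Base model, target modules, and ranks are illustrative assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "gpt2"  # small model chosen only to keep the sketch lightweight

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    r=8,                        # low-rank dimension
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2; differs per architecture
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirms only the adapter weights are trainable
# Training itself (dataset, Trainer/SFTTrainer, evaluation) is omitted from this sketch.
```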





