

Artificial Intelligence Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for an Artificial Intelligence Engineer; the contract length and pay rate are unspecified. Candidates should have 10+ years in AI/ML, 3+ years in Generative AI/LLM projects, and strong Python skills with relevant frameworks.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
August 29, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Texas, United States
Skills detailed
#BERT #ETL (Extract, Transform, Load) #LangChain #Data Privacy #PyTorch #Programming #AI (Artificial Intelligence) #Databases #AWS (Amazon Web Services) #Monitoring #R #Compliance #Hugging Face #TensorFlow #Deployment #Python #GitHub #GCP (Google Cloud Platform) #ML (Machine Learning) #Scala #Cloud #Azure #Model Optimization #Data Ingestion
Role description
Job Title: Generative AI / LLM Engineer
Role Overview
We are seeking an experienced Generative AI / LLM Engineer to design, develop, and deploy advanced AI solutions leveraging large language models (LLMs) and cloud AI platforms. The ideal candidate will have deep technical expertise in AI/ML frameworks, prompt engineering, and foundation models, along with the ability to innovate and deliver enterprise-grade AI systems responsibly and ethically.
Key Responsibilities
Technical Execution
• Design, fine-tune, and deploy Generative AI models, including LLMs (e.g., GPT, LLaMA, Claude).
• Lead evaluation and integration of foundation models and APIs (e.g., OpenAI, Azure OpenAI Services).
• Architect scalable and secure AI pipelines for data ingestion, training, inference, and deployment.
• Build GenAI solutions on cloud platforms such as Azure OpenAI, AWS Bedrock, and GCP Vertex AI.
• Implement prompt engineering, model optimization, and fine-tuning techniques for domain-specific use cases.
• Leverage Python and AI/ML frameworks (TensorFlow, PyTorch, Hugging Face, LangChain, etc.) for development.
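To make the prompt-engineering responsibility above concrete, here is a minimal sketch of a few-shot prompt template for a domain-specific task. The task, examples, and labels are hypothetical; in a real pipeline the rendered prompt would be sent to a hosted model via an SDK such as the OpenAI or Azure OpenAI client.

```python
# Minimal few-shot prompt template for a domain-specific classification task.
# All examples and labels here are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class FewShotExample:
    text: str
    label: str

def build_prompt(task: str, examples: list[FewShotExample], query: str) -> str:
    """Render a few-shot classification prompt as plain text."""
    lines = [f"Task: {task}", ""]
    for ex in examples:
        lines.append(f"Input: {ex.text}")
        lines.append(f"Output: {ex.label}")
        lines.append("")
    # Leave the final Output line blank for the model to complete.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [
    FewShotExample("The wire transfer failed twice.", "payments"),
    FewShotExample("I can't reset my password.", "account-access"),
]
prompt = build_prompt(
    "Route the support ticket to a queue.",
    examples,
    "My card was charged twice for one order.",
)
```

Keeping the template as ordinary code (rather than hand-edited strings) is what makes systematic prompt iteration and A/B evaluation possible in production.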
Innovation & R&D
• Stay current with the latest AI research in multimodal models, MLOps, and emerging LLM techniques.
• Drive experimentation and POCs with new models, embeddings, and retrieval-augmented generation (RAG).
• Develop internal AI copilots, agents, or productivity tools to enhance organizational efficiency and customer engagement.
Governance & Ethics
• Ensure explainability, fairness, and compliance with data privacy and AI ethics guidelines.
• Establish monitoring systems for model drift, bias, and performance tracking in production.
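One common drift-monitoring signal is the population stability index (PSI), which compares a feature's live distribution against a training-time baseline. The sketch below is a minimal pure-Python version; the 0.1 / 0.25 thresholds are common rules of thumb, not universal standards, and the sample data is synthetic.

```python
# Population stability index (PSI) as a simple input-drift check.

import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """PSI between a baseline sample and a live sample of a numeric feature."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def fractions(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        n = len(sample)
        # Small floor avoids log(0) for empty bins.
        return [max(c / n, 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]          # uniform on [0, 1)
live_ok = [i / 100 for i in range(100)]           # same distribution
live_shift = [0.5 + i / 200 for i in range(100)]  # mass shifted right
```

By convention PSI below 0.1 is treated as stable and above 0.25 as significant drift; in production this check would run on a schedule per feature and feed an alerting system.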
Required Qualifications
• 10+ years in AI/ML, with 3+ years hands-on in Generative AI / LLM projects.
• Proven experience with Microsoft Copilot (GitHub Copilot), Azure OpenAI Services, or AWS Bedrock.
• Strong knowledge of LLMs (GPT, BERT, Claude, LLaMA, etc.) and transformer architectures.
• Expertise in prompt engineering, embeddings, RAG, fine-tuning, and model optimization.
• Strong programming skills in Python, with hands-on experience in ML frameworks (TensorFlow, PyTorch, Hugging Face).
• Proven record of delivering enterprise-grade GenAI applications.
Preferred Qualifications
• Experience with multimodal AI models (text, vision, speech).
• Familiarity with LangChain, vector databases (Pinecone, Weaviate, FAISS), and MLOps pipelines.
• Strong publication/research track record in AI/ML or LLMs.