Artificial Intelligence Engineer / GenAI/LLM Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a remote, long-term contract role for an Artificial Intelligence Engineer (GenAI/LLM Engineer). Key skills include Deep Learning, NLP, PyTorch, TensorFlow, and experience with LLMOps. Familiarity with utility-specific content and advanced fine-tuning techniques is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
August 28, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Remote
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
Oakland, CA
-
🧠 - Skills detailed
#Hugging Face #Monitoring #Mathematics #Documentation #Deep Learning #Compliance #AI (Artificial Intelligence) #PyTorch #Transformers #Databases #Data Pipeline #Libraries #ETL (Extract, Transform, Load) #TensorFlow #NLP (Natural Language Processing)
Role description
GenAI/LLM Engineer
Location: Remote
Duration: Long Term

Job Description:
• Implement and optimize advanced fine-tuning approaches (LoRA, PEFT, QLoRA) to adapt foundation models to PG&E's domain (see the fine-tuning sketch below)
• Develop systematic prompt engineering methodologies specific to utility operations, regulatory compliance, and technical documentation
• Create reusable prompt templates and libraries to standardize interactions across multiple LLM applications and use cases
• Implement prompt testing frameworks to quantitatively evaluate and iteratively improve prompt effectiveness
• Establish prompt versioning systems and governance to maintain consistency and quality across applications
• Apply model customization techniques such as knowledge distillation, quantization, and pruning to reduce memory footprint and inference costs
• Tackle memory constraints using techniques such as sharded data parallelism, GPU offloading, or hybrid CPU+GPU approaches
• Build robust retrieval-augmented generation (RAG) pipelines with vector databases, embedding pipelines, and optimized chunking strategies (see the retrieval sketch below)
• Design advanced prompting strategies including chain-of-thought reasoning, conversation orchestration, and agent-based approaches
• Collaborate with the MLOps engineer to ensure models are efficiently deployed, monitored, and retrained as needed

Required Skills:
• Deep Learning & NLP: Proficiency with PyTorch/TensorFlow, Hugging Face Transformers, DSPy, and advanced LLM training techniques
• GPU/Hardware Knowledge: Experience with multi-GPU training, memory optimization, and parallelization strategies
• LLMOps: Familiarity with workflows for maintaining LLM-based applications in production and monitoring model performance
• Technical Adaptability: Ability to interpret research papers and implement emerging techniques (without necessarily requiring PhD-level mathematics)
• Domain Adaptation: Skills in creating data pipelines for fine-tuning models with utility-specific content
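
For illustration, a minimal QLoRA-style setup along the lines named above, assuming the Hugging Face transformers, peft, and bitsandbytes stack; the base model name, target modules, and hyperparameters are placeholder assumptions, not project-specific choices.

```python
# Minimal QLoRA-style sketch: load a 4-bit quantized base model and attach
# LoRA adapters so that only a small set of parameters is trained.
# Model name, target modules, and hyperparameters are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder foundation model

# 4-bit quantization shrinks the memory footprint of the frozen base weights
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Low-rank adapters on the attention projections; only these weights are trained
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed module names for this architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: adapter weights only
```

Because only the adapter weights receive gradients while the quantized base model stays frozen, this style of fine-tuning keeps memory and compute requirements manageable on limited GPU hardware.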
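
Similarly, a bare-bones sketch of the retrieval side of a RAG pipeline, assuming sentence-transformers embeddings and a FAISS index; the chunking parameters, embedding model, and example corpus are illustrative assumptions.

```python
# Bare-bones RAG retrieval sketch: chunk documents, embed the chunks, index
# them, and pull the top-k chunks for a query. The chunking parameters,
# embedding model, and example corpus are illustrative assumptions.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    """Naive fixed-width character chunking with overlap."""
    step = size - overlap
    return [text[i : i + size] for i in range(0, max(len(text) - overlap, 1), step)]

documents = [
    "Example utility operations manual text ...",
    "Example regulatory compliance filing text ...",
]
chunks = [c for doc in documents for c in chunk_text(doc)]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
embeddings = embedder.encode(chunks, normalize_embeddings=True)

# Inner-product search over normalized vectors is equivalent to cosine similarity
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

query = "What is the inspection interval for distribution transformers?"
query_vec = embedder.encode([query], normalize_embeddings=True)
top_k = min(3, len(chunks))
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), top_k)

# The retrieved chunks become the grounding context for the LLM prompt
context = "\n\n".join(chunks[i] for i in ids[0])
```

In practice the chunking strategy, embedding model, and vector store would be tuned against the utility-specific corpus, and the retrieved context would be injected into a versioned prompt template before the LLM call.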