

Generative AI Engineer/LLM Engineer - NO C2C
Featured Role | Apply direct with Data Freelance Hub
This role is for a Generative AI Engineer/LLM Engineer, offering a long-term remote contract. Key skills required include proficiency in PyTorch, TensorFlow, and Hugging Face, along with experience in multi-GPU training and LLMOps.
Country
United States
Currency
$ USD
Day rate
Not specified
Date discovered
September 18, 2025
Project duration
More than 6 months
Location type
Remote
Contract type
Unknown (role title states NO C2C)
Security clearance
Unknown
Location detailed
Oakland, CA
Skills detailed
#Transformers #Monitoring #Libraries #NLP (Natural Language Processing) #TensorFlow #Deep Learning #PyTorch #AI (Artificial Intelligence) #Data Pipeline #Mathematics #Hugging Face #ETL (Extract, Transform, Load)
Role description
GenAI/LLM Engineer
Location: Remote
Duration: Long Term
Job Description:
Create reusable prompt templates and libraries to standardize interactions across multiple LLM applications and use cases (see the first sketch below).
• Deep Learning & NLP: Proficiency with PyTorch/TensorFlow, Hugging Face Transformers, DSPy, and advanced LLM training techniques
• GPU/Hardware Knowledge: Experience with multi-GPU training, memory optimization, and parallelization strategies (see the second sketch below)
• LLMOps: Familiarity with workflows for maintaining LLM-based applications in production and monitoring model performance
• Technical Adaptability: Ability to interpret research papers and implement emerging techniques (without necessarily requiring PhD-level mathematics)
• Domain Adaptation: Skills in creating data pipelines for fine-tuning models with utility-specific content (see the third sketch below)
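As an illustration of the prompt-template requirement, here is a minimal sketch of a reusable template library in plain Python. The registry name, template keys, and placeholder fields (PROMPT_LIBRARY, render_prompt, max_words, and so on) are hypothetical names chosen for this sketch, not part of the posting.

```python
# Minimal sketch of a reusable prompt-template library: templates are
# registered once and rendered per application/use case.
# All names here (PROMPT_LIBRARY, render_prompt, field names) are
# illustrative assumptions, not from the job posting.
from string import Template

PROMPT_LIBRARY: dict[str, Template] = {
    "summarize": Template(
        "You are an assistant for a utility company.\n"
        "Summarize the following document in at most $max_words words:\n\n"
        "$document"
    ),
    "classify": Template(
        "Classify the customer message into one of: $labels.\n\n"
        "Message: $message"
    ),
}

def render_prompt(name: str, **fields: str) -> str:
    """Look up a registered template and fill in its placeholders."""
    return PROMPT_LIBRARY[name].substitute(**fields)

if __name__ == "__main__":
    # Example use: the same template serves every summarization call site.
    print(render_prompt("summarize", max_words="50", document="Outage report ..."))
```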
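For the multi-GPU training requirement, a minimal data-parallel sketch with PyTorch's DistributedDataParallel follows, assuming a torchrun launch and a toy model standing in for an actual LLM.

```python
# Minimal multi-GPU data-parallel training sketch with PyTorch DDP.
# Launch with: torchrun --nproc_per_node=<num_gpus> train_ddp.py
# The Linear model is a stand-in for an LLM; the file name is illustrative.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main() -> None:
    dist.init_process_group(backend="nccl")           # one process per GPU
    device = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(device)

    model = torch.nn.Linear(512, 512).to(device)      # stand-in for an LLM
    model = DDP(model, device_ids=[device])           # wraps model for gradient sync
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(3):                                # dummy training steps
        batch = torch.randn(8, 512, device=device)
        loss = model(batch).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()                               # DDP all-reduces gradients here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```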
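For the domain-adaptation requirement, a fine-tuning data pipeline built on Hugging Face datasets and Trainer might look like the sketch below. The corpus file ("utility_corpus.txt"), base model ("gpt2"), and hyperparameters are placeholders to be swapped for the real ones.

```python
# Minimal sketch of a domain-adaptation pipeline: load utility-specific
# text, tokenize it, and run causal-LM fine-tuning with the HF Trainer.
# "utility_corpus.txt" and "gpt2" are placeholder assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"                                    # swap in the real base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token              # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# One training example per line of utility-specific text.
dataset = load_dataset("text", data_files={"train": "utility_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
    train_dataset=train_set,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```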