

Machine Learning Engineer / LLMOps Engineer - NO C2C
Featured Role | Apply direct with Data Freelance Hub
This role is for a Machine Learning Engineer/LLMOps Engineer, remote, long-term contract. Key skills include Kubernetes, Docker, CI/CD, Python, and cloud services (AWS/Azure/GCP). Experience with LLM-specific frameworks and performance optimization techniques is required.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: May 21, 2025
Project duration: Unknown
Location type: Remote
Contract type: Corp-to-Corp (C2C)
Security clearance: Unknown
Location detailed: San Francisco Bay Area
Skills detailed: #Ansible #Cloud #MLflow #Python #Automation #ML (Machine Learning) #GCP (Google Cloud Platform) #Kubernetes #AWS (Amazon Web Services) #Terraform #DevOps #Bash #Docker #Azure
Role description
MLOps/LLMOps Engineer
Location: Remote
Duration: Long Term
Job Description:
Key Responsibilities:
• DevOps + ML: Expertise in Kubernetes, Docker, CI/CD tools, and MLflow or similar platforms
• Cloud & Infrastructure: Understanding of GPU instance options, cloud services (AWS/Azure/GCP), and optimization techniques
• Automation: Proficiency in Python, Bash, and infrastructure-as-code tools such as Terraform or Ansible
• LLM-Specific Frameworks: Experience with tools such as TensorBoard, MLflow, or equivalent for scaling LLMs
• Performance Optimization: Knowledge of techniques to monitor and improve inference speed, throughput, and cost
• Collaboration: Ability to work effectively across technical teams while adhering to enterprise architecture standards
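To make the performance-optimization bullet concrete, here is a minimal sketch in Python of how per-call latency and throughput for an inference endpoint might be measured; the `predict` function is a hypothetical stand-in for a real model call, not part of the role description:

```python
import time
import statistics

def predict(batch):
    # Hypothetical stand-in for an LLM inference call.
    return [x * 2 for x in batch]

def benchmark(fn, batch, n_runs=50):
    """Measure median per-call latency (seconds) and throughput (items/sec)."""
    latencies = []
    for _ in range(n_runs):
        start = time.perf_counter()
        fn(batch)
        latencies.append(time.perf_counter() - start)
    p50 = statistics.median(latencies)
    throughput = len(batch) / p50 if p50 > 0 else float("inf")
    return p50, throughput

p50, tput = benchmark(predict, list(range(32)))
print(f"p50 latency: {p50:.6f}s, throughput: {tput:.0f} items/s")
```

In practice the same pattern is applied to a deployed model server, and the resulting latency/throughput/cost numbers are tracked over time (e.g. in MLflow or a monitoring stack) to guide optimization.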