

Tekgence Inc
LLMOps Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an "LLMOps Engineer" with a contract length of "unknown," offering a pay rate of "$X per hour." Requires 5+ years in LLM/ML Ops, proficiency in Python/Go, and expertise in containerization, orchestration, and monitoring tools.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 31, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Tampa, FL
-
🧠 - Skills detailed
#Deployment #Monitoring #AWS (Amazon Web Services) #Terraform #DevOps #Python #Batch #Libraries #ML (Machine Learning) #Logging #Cloud #GCP (Google Cloud Platform) #ML Ops (Machine Learning Operations) #Microservices #Grafana #Kubernetes #Model Deployment #ETL (Extract, Transform, Load) #Prometheus
Role description
Minimum Qualifications
• 5+ years of experience in LLM/ML Ops, DevOps, or infrastructure engineering with a focus on machine learning systems.
• Advanced proficiency in Python/Go, with the ability to write clean, performant, and maintainable production code.
• Deep understanding of transformer architectures, LLM tokenization, attention mechanisms, memory management, and batching strategies.
• Proven experience deploying and optimizing LLMs using multiple inference engines.
• Strong background in containerization and orchestration (Kubernetes, Helm).
• Familiarity with monitoring tools (e.g., Prometheus, Grafana), logging frameworks, and performance profiling (see the sketch after this list).
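For illustration only, here is a minimal Python sketch of the batching and monitoring concerns listed above, using Hugging Face transformers and prometheus_client. The model name, port, and metric name are assumptions made for the example, not details from the posting.

```python
# Hypothetical sketch: batched LLM generation with basic Prometheus instrumentation.
# Model name, port, and metric name are illustrative placeholders.
import torch
from prometheus_client import Histogram, start_http_server
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder checkpoint; any causal LM works here

# Latency histogram that a Prometheus scrape job could collect from /metrics.
LATENCY = Histogram("llm_batch_latency_seconds", "End-to-end latency per generation batch")

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def generate_batch(prompts: list[str], max_new_tokens: int = 32) -> list[str]:
    """Tokenize a batch of prompts with padding, generate, and record latency."""
    with LATENCY.time():
        inputs = tokenizer(prompts, padding=True, return_tensors="pt")
        with torch.no_grad():
            outputs = model.generate(
                **inputs,
                max_new_tokens=max_new_tokens,
                pad_token_id=tokenizer.eos_token_id,
            )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)


if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus
    print(generate_batch(["Hello, world.", "LLMOps engineers monitor"]))
```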
Preferred Qualifications
• Experience integrating LLMs into microservices or edge inference platforms.
• Experience with Ray distributed inference.
• Hands-on experience with quantization libraries (see the sketch after this list).
• Contributions to open-source ML infrastructure or LLM optimization tools.
• Familiarity with cloud platforms (AWS, GCP) and infrastructure-as-code (Terraform).
• Exposure to secure and compliant model deployment workflows.
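Likewise, a hedged sketch of how Ray Serve and a quantization library (bitsandbytes via transformers) might be combined to serve a 4-bit quantized model. The checkpoint, replica count, route, and GPU settings are placeholder assumptions, not requirements stated in the posting.

```python
# Hypothetical sketch: serving a 4-bit quantized causal LM behind Ray Serve.
# Checkpoint, replica count, and GPU allocation are illustrative assumptions.
import torch
from ray import serve
from starlette.requests import Request
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_NAME = "mistralai/Mistral-7B-v0.1"  # placeholder checkpoint


@serve.deployment(num_replicas=1, ray_actor_options={"num_gpus": 1})
class QuantizedLLM:
    def __init__(self) -> None:
        # Load the model in 4-bit via bitsandbytes to cut GPU memory.
        bnb_config = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_compute_dtype=torch.bfloat16,
        )
        self.tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
        self.model = AutoModelForCausalLM.from_pretrained(
            MODEL_NAME,
            quantization_config=bnb_config,
            device_map="auto",
        )

    async def __call__(self, request: Request) -> dict:
        payload = await request.json()
        inputs = self.tokenizer(payload["prompt"], return_tensors="pt").to(self.model.device)
        outputs = self.model.generate(**inputs, max_new_tokens=64)
        return {"completion": self.tokenizer.decode(outputs[0], skip_special_tokens=True)}


app = QuantizedLLM.bind()
# Deploy with the Ray Serve CLI, e.g. `serve run my_module:app`, or serve.run(app) from a driver.
```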