

Sr. AI/LLM DevOps Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. AI/LLM DevOps Engineer on a 12+ month remote contract (W2 only) for US-based candidates in CST or EST. Key skills include Kubernetes, OpenShift, AI/LLM integration, and virtualization technologies.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
August 14, 2025
Project duration
More than 6 months
Location type
Remote
Contract type
W2 Contractor
Security clearance
Unknown
Location detailed
United States
Skills detailed
#Migration #Containers #Documentation #Kubernetes #DevOps #KVM (Kernel-based Virtual Machine) #Storage #Replication #Cloud #Virtualization #Deployment #Database Replication #AI (Artificial Intelligence) #Automation #F5
Role description
AI/LLM DevOps Engineer
Duration: 12+ months / Long-Term (yearly renewals)
Location: Remote (Must reside in the US in CST or EST)
Rate type: W2 ONLY
We're seeking a highly skilled AI/LLM Toolkit Developer to join a long-term remote contract engagement focused on building a cutting-edge enablement solution for technical support teams.
This role involves developing a comprehensive toolkit that leverages AI agents and large language models (LLMs) with retrieval-augmented generation (RAG) to streamline onboarding and reduce reliance on complex documentation.
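To make the RAG piece of this description concrete, here is a minimal, self-contained sketch of the retrieve-then-prompt flow: pull the most relevant documentation snippets for a support question, then assemble them into context for an LLM. The snippet contents, embedding scheme, and prompt wording are illustrative placeholders, not part of this role's actual stack; a real toolkit would swap in a production embedding model and an LLM endpoint.

```python
# Hypothetical sketch of the RAG retrieval flow described above.
from dataclasses import dataclass
import math


@dataclass
class Snippet:
    source: str          # e.g. a runbook or docs section (placeholder)
    text: str
    vector: list[float]


def embed(text: str) -> list[float]:
    """Stand-in embedding: normalized character-frequency vector.
    A real toolkit would call an embedding model instead."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))


def retrieve(question: str, corpus: list[Snippet], k: int = 3) -> list[Snippet]:
    q = embed(question)
    return sorted(corpus, key=lambda s: cosine(q, s.vector), reverse=True)[:k]


def build_prompt(question: str, snippets: list[Snippet]) -> str:
    context = "\n\n".join(f"[{s.source}]\n{s.text}" for s in snippets)
    return (
        "Answer the support question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


if __name__ == "__main__":
    docs = [
        Snippet("pods.md", "A pod is evicted when it exceeds its memory limit.", []),
        Snippet("udn.md", "User Defined Networks isolate VM and pod traffic.", []),
    ]
    for d in docs:
        d.vector = embed(d.text)
    hits = retrieve("Why was my pod evicted?", docs, k=1)
    print(build_prompt("Why was my pod evicted?", hits))  # prompt handed to the LLM
```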
What You'll Build:
You'll design and implement a robust, AI-powered toolkit that empowers support teams to manage and troubleshoot hybrid environments with ease. The toolkit will cover:
• Kubernetes Foundations: Core concepts like pods, deployments, services, namespaces, limits, eviction policies, and network policies (a small diagnostic sketch follows this list).
• Virtualization Layer: Deep dive into OpenShift Virtualization and KVM-based VM management.
• Mixed Workloads: Best practices for managing VMs and containers on a unified platform.
• Containerization: Communication and coordination between VMs and containers, including useful container tools.
• Resource Management: Efficient handling of CPU, memory, and storage for both VMs and containers.
• Networking: Kubernetes networking, User Defined Networks (UDNs), and F5 SPK integration.
• Storage: Persistent volume frameworks, tuning storage traffic, fault tolerance, and database replication strategies.
• Live Migration: Seamless VM migration with minimal downtime, including database and storage considerations.
• Automation & Orchestration: Automating provisioning, deployment, and scaling in a Webscale environment.
• Troubleshooting & Maintenance: Diagnosing issues, analyzing logs, and applying software updates.
• Configuration & Customization: Tailored networking and storage solutions to meet specific use cases.
• High Availability: Strategies for fault tolerance and resilient application design across various workloads.
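As referenced in the Kubernetes Foundations bullet, here is a small illustrative sketch of the kind of diagnostic helper such a toolkit might expose: flag containers in a namespace that run without CPU or memory limits, a common source of eviction and noisy-neighbor issues. It assumes the official `kubernetes` Python client and kubeconfig access to a cluster; the function name and default namespace are hypothetical.

```python
# Hypothetical diagnostic helper: find containers missing resource limits.
from kubernetes import client, config


def containers_without_limits(namespace: str = "default") -> list[str]:
    config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
    core = client.CoreV1Api()
    offenders = []
    for pod in core.list_namespaced_pod(namespace=namespace).items:
        for c in pod.spec.containers:
            limits = (c.resources.limits or {}) if c.resources else {}
            if "cpu" not in limits or "memory" not in limits:
                offenders.append(f"{pod.metadata.name}/{c.name}")
    return offenders


if __name__ == "__main__":
    for name in containers_without_limits("default"):
        print("missing limits:", name)
```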
Ideal Candidate:
• Strong experience with Kubernetes, OpenShift 4.18, and virtualization technologies
• Familiarity with AI/LLM integration, especially RAG-based systems
• Proven ability to create technical enablement tools and documentation
• Comfortable working independently in a remote, contract-based environment
Work Environment:
• Remote-first: Work from anywhere
• Long-term contract: Stability with flexibility
• Impactful work: Help shape how support teams onboard and operate at scale
If you're passionate about AI, cloud-native infrastructure, and building tools that make complex systems easier to manage, we'd love to hear from you.