

DevOps Engineer / Machine Learning Operations
Featured Role | Apply direct with Data Freelance Hub
This role is for a DevOps Engineer / Machine Learning Operations in Greenwood Village, CO, offering a competitive pay rate. Key skills include Kubernetes, AWS EKS, PostgreSQL, and CI/CD pipelines. Experience with AI/ML workloads and secure deployment practices is required.
Country: United States
Currency: $ USD
Day rate: 592
Date discovered: August 8, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Englewood, CO
Skills detailed:
#API (Application Programming Interface) #Terraform #Datadog #Security #Scala #Databases #Cloud #Database Performance #ML Ops (Machine Learning Operations) #Compliance #Kubernetes #GCP (Google Cloud Platform) #Splunk #DevOps #Azure #ML (Machine Learning) #PostgreSQL #SageMaker #Docker #Storage #Deployment #Infrastructure as Code (IaC) #Monitoring #GitLab #Data Science #AI (Artificial Intelligence) #AWS (Amazon Web Services) #Microservices
Role description
Responsibilities
Kforce has a client in Greenwood Village, CO that is seeking a DevOps Engineer / Machine Learning Operations. Summary: Our client is seeking a skilled Kubernetes & ML Ops Engineer to join their fast-paced, collaborative DevOps and Data Science team. This role focuses on deploying and operating scalable AI/ML services on AWS EKS, managing vector databases, and integrating cutting-edge AI services like Azure OpenAI APIs. If you have a passion for Kubernetes mastery, AI/ML workload operationalization, and secure enterprise AI deployment, this opportunity is for you. Responsibilities:
• Deploy and manage microservices, including APIs, Docker-based services, and vector stores on Amazon EKS
• Ensure 24/7 cluster uptime and seamless service connectivity
• Validate stability and performance of deployed configurations and services
• Manage PostgreSQL with pgvector for embedding storage (sketch after this list)
• Securely expose and integrate vector databases within Kubernetes environments
• Monitor and troubleshoot database performance issues
• Implement and maintain CI/CD pipelines using GitLab
• Use Terraform for Infrastructure as Code (CloudFormation experience a plus)
• Automate build, test, and deployment workflows
• Set up monitoring dashboards and alerting via Datadog (Splunk or alternatives welcomed; sketch after this list)
• Ensure full visibility into system health, latency, and uptime metrics
• Collaborate with data science teams to operationalize ML workloads
• Support Retrieval-Augmented Generation (RAG) architectures and vector-based search
• Integrate securely with Azure OpenAI APIs, including implementing internal guardrails (sketch after this list)
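
As a rough illustration of the pgvector responsibility above, the sketch below stores and queries embeddings in PostgreSQL. It assumes the pgvector extension and the psycopg (v3) driver; the connection string, table name, and embedding dimension are hypothetical placeholders rather than details from this posting.

import psycopg

# Connection string is a hypothetical placeholder.
conn = psycopg.connect("postgresql://app:secret@db.internal:5432/appdb")

with conn.cursor() as cur:
    # Enable pgvector and create a simple table for 1536-dimensional embeddings.
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
    cur.execute(
        "CREATE TABLE IF NOT EXISTS documents ("
        " id bigserial PRIMARY KEY,"
        " content text NOT NULL,"
        " embedding vector(1536))"
    )

    # Store one chunk with its embedding; normally the vector comes from an embedding model.
    embedding = [0.01] * 1536
    literal = "[" + ",".join(str(x) for x in embedding) + "]"
    cur.execute(
        "INSERT INTO documents (content, embedding) VALUES (%s, %s::vector)",
        ("example chunk of text", literal),
    )

    # Nearest-neighbour search by cosine distance (pgvector's <=> operator).
    cur.execute(
        "SELECT id, content FROM documents ORDER BY embedding <=> %s::vector LIMIT 5",
        (literal,),
    )
    for doc_id, content in cur.fetchall():
        print(doc_id, content)

conn.commit()
conn.close()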
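For the monitoring and alerting bullet, a minimal sketch of emitting health and latency metrics through DogStatsD with the datadogpy client follows; the metric names, tags, and agent address are assumptions for illustration, not values from the client environment.

from datadog import initialize, statsd

# Points at a DogStatsD agent (typically a node agent or sidecar); host and port are assumptions.
initialize(statsd_host="127.0.0.1", statsd_port=8125)

# Illustrative metrics for health, latency, and request-volume visibility.
statsd.gauge("mlops.service.healthy", 1, tags=["service:vector-api", "env:prod"])
statsd.histogram("mlops.request.latency", 0.245, tags=["service:vector-api", "endpoint:/search"])
statsd.increment("mlops.request.count", tags=["service:vector-api", "endpoint:/search"])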
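For the RAG and Azure OpenAI bullets, the following sketch wires retrieved context into a chat completion behind a simple input guardrail. The endpoint, deployment name, API version, and blocklist are hypothetical; it assumes the openai Python SDK (v1+), and in practice the retrieved chunks would come from a pgvector query like the one above.

import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # hypothetical endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

BLOCKED_TERMS = {"ssn", "credit card"}  # illustrative guardrail, not a real policy

def guarded_answer(question: str, retrieved_chunks: list[str]) -> str:
    # Guardrail: refuse requests that match the internal blocklist before calling the model.
    if any(term in question.lower() for term in BLOCKED_TERMS):
        return "Request blocked by internal policy."

    # Ground the model on the retrieved context (the RAG step).
    context = "\n\n".join(retrieved_chunks)
    response = client.chat.completions.create(
        model="gpt-4o",  # name of the Azure deployment, not the base model; hypothetical
        messages=[
            {"role": "system", "content": "Answer only from the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

# Example usage with a chunk that would normally come from the vector search above.
print(guarded_answer("What regions does the service run in?", ["The service runs in us-east-1."]))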
Requirements
• Proven expertise managing scalable production Kubernetes workloads on EKS (or equivalent)
• Hands-on experience with PostgreSQL + pgvector; knowledge of other vector stores like Pinecone, Weaviate, or Milvus is a plus
• Solid conceptual understanding of LLMs, embeddings, retrievers, and RAG systems
• Familiarity with OpenAI services and API-based LLM workflows
• Highly collaborative DevOps/Data Science team with a strong emphasis on secure and ethical AI usage
• Fast-paced environment dedicated to pushing the boundaries of enterprise AI capabilities
Nice To Have
• Experience deploying AI/ML workloads on AWS, Azure, or GCP
• Exposure to cloud-native AI services such as SageMaker, Azure ML, or Vertex AI
• Understanding of compliance and security guardrails related to LLM deployments
• Knowledge of service meshes or API gateways for secure model exposure
The pay range represents the lowest to highest compensation we reasonably and in good faith believe we would pay for this role at the time of posting. We may ultimately pay more or less than this range. Employee pay is based on factors such as relevant education, qualifications, certifications, experience, skills, seniority, location, performance, union contract, and business needs. This range may be modified in the future.
We offer comprehensive benefits including medical/dental/vision insurance, HSA, FSA, 401(k), and life, disability & AD&D insurance to eligible employees. Salaried personnel receive paid time off. Hourly employees are not eligible for paid time off unless required by law. Hourly employees on a Service Contract Act project are eligible for paid sick leave.
Note: Pay is not considered compensation until it is earned, vested and determinable. The amount and availability of any compensation remains in Kforce's sole discretion unless and until paid and may be modified in its discretion consistent with the law.
This job is not eligible for bonuses, incentives or commissions.
Kforce is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.
By clicking "Apply Today" you agree to receive calls, AI-generated calls, text messages or emails from Kforce and its affiliates, and service providers. Note that if you choose to communicate with Kforce via text messaging the frequency may vary, and message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You will always have the right to cease communicating via text by using key words such as STOP.