

OpenKyber
DevOps/AI Ops Administrator
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a DevOps/AI Ops Administrator on a ten-month contract with an unspecified pay rate. It requires advanced Python skills, experience with AI agents and cloud platforms, and familiarity with Terraform and CI/CD practices. Remote work is available.
🌎 - Country
United States
💱 - Currency
Unknown
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 18, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Illinois
-
🧠 - Skills detailed
#Datasets #ADLS (Azure Data Lake Storage) #ML (Machine Learning) #Monitoring #Observability #SQL (Structured Query Language) #Terraform #Scala #Firewalls #Infrastructure as Code (IaC) #ETL (Extract, Transform, Load) #IAM (Identity and Access Management) #BigQuery #Scripting #VPC (Virtual Private Cloud) #AWS (Amazon Web Services) #Prometheus #Cloud #Data Management #Consulting #Grafana #Dataflow #Automation #Data Quality #Kubernetes #Power Automate #Azure #Data Engineering #Security #Snowflake #Synapse #Bash #Data Pipeline #NoSQL #DevOps #Databases #Linux #Storage #Documentation #Compliance #Data Lake #Azure DevOps #GCP (Google Cloud Platform) #AI (Artificial Intelligence) #Deployment #Python
Role description
Infrastructure Engineer Job Summary: OpenKyber is in search of an Infrastructure Engineer for a contract position in Minnetonka, MN. The engagement runs ten months with a strong chance of a long-term extension.
Position Summary: We are looking for an experienced contractor who is highly proficient in Python and has practical experience developing AI-powered systems. The ideal candidate has worked with AI agents, Model Context Protocol (MCP), modern data management techniques, and cloud platforms to create scalable, production-ready solutions.
Primary Responsibilities/Accountabilities:
Design, build, and maintain Python-based services and automation workflows
Implement MCP (Model Context Protocol) for agent communication, control, and observability (see the sketch after this list)
Build, transform, and manage data pipelines supporting AI and analytics use cases
Deploy, monitor, and optimize solutions in cloud environments
Collaborate with product, data, and engineering teams to deliver end-to-end solutions
Ensure code quality, performance, security, and maintainability
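To make the MCP responsibility above concrete, here is a minimal sketch assuming nothing beyond the Python standard library. MCP is built on JSON-RPC 2.0; this toy dispatcher registers one hypothetical tool and answers a tools/call-style request. The tool name, registry, and method strings are illustrative stand-ins, not the official MCP SDK API.

import json

TOOLS = {}

def tool(name):
    """Register a callable as a named tool (illustrative, not the SDK API)."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("get_weather")
def get_weather(city: str) -> str:
    # Hypothetical tool body; a real agent would call an external API here.
    return f"Weather for {city}: sunny"

def handle_message(raw: str) -> str:
    """Dispatch one JSON-RPC 2.0 request to a registered tool."""
    req = json.loads(raw)
    try:
        result = TOOLS[req["params"]["name"]](**req["params"]["arguments"])
        reply = {"jsonrpc": "2.0", "id": req["id"], "result": result}
    except Exception as exc:
        reply = {"jsonrpc": "2.0", "id": req["id"],
                 "error": {"code": -32603, "message": str(exc)}}
    return json.dumps(reply)

if __name__ == "__main__":
    request = json.dumps({
        "jsonrpc": "2.0", "id": 1, "method": "tools/call",
        "params": {"name": "get_weather", "arguments": {"city": "Minnetonka"}},
    })
    print(handle_message(request))

A production agent would layer transport, schema validation, and observability on top of this dispatch loop rather than hand-rolling the protocol.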
Qualifications:
Python: Advanced proficiency; production experience with APIs, async processing, and testing (a short sketch follows this list)
AI/LLM Agents: Experience designing and implementing autonomous or semi-autonomous AI agents (e.g., tool-using agents, planners, orchestrators)
MCP (Model Context Protocol): Experience with agent communication, coordination frameworks, or protocol-driven AI architectures
Data Management: Data modeling and data pipelines; working with SQL and NoSQL databases; experience with data quality, governance, and large-scale datasets
Cloud Experience: Hands-on work in at least one major cloud platform (Azure, AWS, or Google Cloud Platform); experience with cloud storage, compute, and managed services; familiarity with CI/CD and cloud-native deployment patterns
Preferred:
Experience with vector databases and embeddings
Familiarity with MLOps or LLMOps practices
Experience with streaming data or event-driven architectures
Knowledge of security and compliance considerations for AI systems
Prior work in enterprise or large-scale data management
Healthcare or other regulated-data experience preferred
Engagement Characteristics:
Contractor is expected to work independently with minimal supervision
Comfortable operating in fast-moving, evolving technical environments
Strong documentation and communication skills
Experience collaborating with remote and cross-functional teams
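As referenced in the Python qualification above, here is a minimal sketch of the async-processing-plus-testing pattern, using only the standard library; fetch_status is a hypothetical stand-in for a real HTTP call (e.g., via httpx or aiohttp).

import asyncio

async def fetch_status(service: str) -> tuple[str, str]:
    # Placeholder for network I/O; a real check would await an HTTP client.
    await asyncio.sleep(0.1)
    return service, "healthy"

async def check_all(services: list[str]) -> dict[str, str]:
    # Run all health checks concurrently instead of one at a time.
    results = await asyncio.gather(*(fetch_status(s) for s in services))
    return dict(results)

def test_check_all():
    # Minimal test: asyncio.run drives the coroutine to completion.
    statuses = asyncio.run(check_all(["api", "worker"]))
    assert statuses == {"api": "healthy", "worker": "healthy"}

if __name__ == "__main__":
    test_check_all()
    print(asyncio.run(check_all(["api", "worker", "scheduler"])))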
Technical Skills
AI/LLM Agents and MCP (Model Context Protocol): Google ADK, Copilot Studio
Cloud Experience: Google Cloud or Azure preferred
Database Knowledge: BigQuery, Firestore, Cloud SQL, etc.
Data Pipeline: Dataflow, Power Automate (a minimal ETL sketch follows this list)
Automation Tooling: UiPath, etc.
CI/CD Pipeline: Azure DevOps Pipelines
Infrastructure as Code (IaC): Terraform
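For the data pipeline and database rows above, a minimal extract-transform-load sketch; sqlite3 stands in for a managed service such as Cloud SQL or BigQuery, and the event schema is hypothetical.

import sqlite3

def extract(rows):
    # In a real pipeline this would read from an API, bucket, or queue.
    return rows

def transform(rows):
    # Normalize and filter: drop rows with no event, lowercase the name.
    return [(r["day"], r["event"].lower()) for r in rows if r.get("event")]

def load(conn, records):
    conn.execute("CREATE TABLE IF NOT EXISTS daily_counts (day TEXT, event TEXT)")
    conn.executemany("INSERT INTO daily_counts VALUES (?, ?)", records)
    conn.commit()

if __name__ == "__main__":
    raw = [{"day": "2026-02-18", "event": "Login"},
           {"day": "2026-02-18", "event": None}]
    with sqlite3.connect(":memory:") as conn:
        load(conn, transform(extract(raw)))
        print(conn.execute("SELECT COUNT(*) FROM daily_counts").fetchone())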
Role 2: Infrastructure Engineer (Terraform, CI/CD, Google Cloud Platform & Azure, Data & AI Platforms)
Position Summary: We are looking for an experienced Infrastructure Engineer to design, automate, and operate scalable cloud infrastructure supporting data platforms and AI/ML workloads across Google Cloud Platform and Azure. This role focuses on Infrastructure as Code, CI/CD automation, cloud networking, and enabling reliable, secure environments for data engineering and analytics teams.
Primary Responsibilities/Accountabilities:
Design, provision, and manage cloud infrastructure using Terraform (a CI-gate sketch follows this list)
Build and maintain CI/CD pipelines using Azure DevOps
Provision and manage Google Cloud Platform infrastructure, including compute, storage, IAM, and networking
Support and manage Azure infrastructure (VNets, networking, compute, storage)
Design and implement network provisioning (VPC/VNet architecture, routing, firewalls, load balancers, private connectivity)
Build and operate infrastructure for data platforms (data lakes, warehouses, streaming, analytics platforms)
Provision and support AI/ML infrastructure, including GPU resources and AI platforms
Implement security best practices, IAM, encryption, and compliance controls
Optimize infrastructure for performance, reliability, and cost
Collaborate with data engineering, analytics, and ML teams
Document infrastructure, architecture, standards, and operational runbooks
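As referenced in the first responsibility above, here is a sketch of a CI gate that drives the standard Terraform workflow (init, fmt -check, validate, plan) from Python. It assumes the terraform CLI is on PATH and that the module lives in ./infra (a hypothetical layout); a real Azure DevOps pipeline would add authentication and remote state configuration.

import subprocess
import sys

STEPS = [
    ["terraform", "init", "-input=false"],
    ["terraform", "fmt", "-check"],
    ["terraform", "validate"],
    ["terraform", "plan", "-input=false", "-out=tfplan"],
]

def run_gate(workdir: str = "infra") -> int:
    for cmd in STEPS:
        print("+", " ".join(cmd))
        result = subprocess.run(cmd, cwd=workdir)
        if result.returncode != 0:
            # Fail the pipeline on the first error so later steps never run
            # against unformatted or invalid configuration.
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(run_gate())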
Qualifications:
Strong experience with Terraform (Infrastructure as Code)
Experience with CI/CD pipelines, preferably Azure DevOps
Strong hands-on experience with Google Cloud Platform (GCP)
Solid understanding of cloud networking and network provisioning
Experience supporting data platforms or large-scale data workloads
Experience with AI/ML infrastructure
Strong Linux and scripting skills (Bash, Python, etc.)
Preferred:
Hands-on experience with Azure infrastructure
Experience with Kubernetes (GKE/AKS)
Experience with data services such as BigQuery, Dataflow, Dataproc, Synapse, ADLS, Snowflake
Monitoring and observability tools (Prometheus, Grafana, Cloud Monitoring); see the sketch after this list
Multi-cloud experience and relevant certifications
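For the scripting and monitoring items above, a small ops-scripting sketch that emits disk usage in the Prometheus text exposition format using only the standard library. Metric and label names are illustrative; a production exporter would more likely use the official prometheus_client library.

import shutil
import time

def disk_metrics(paths):
    # Gauge in Prometheus text format: one sample per mount point.
    lines = ["# TYPE node_disk_used_bytes gauge"]
    for path in paths:
        usage = shutil.disk_usage(path)
        lines.append(f'node_disk_used_bytes{{mount="{path}"}} {usage.used}')
    lines.append(f"# scraped_at {int(time.time())}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(disk_metrics(["/"]))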
If this role matches your background, we would be honored to receive your application! Providing consulting opportunities since 1987, we offer a host of options, including contract, contract-to-hire, and permanent placement. Let's talk!
For applications and inquiries, contact: hirings@openkyber.com






