

Excelon Solutions
MLOps Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an MLOps Engineer in Bolingbrook, IL; the contract length and pay rate are unspecified. Key skills include Python, CI/CD, Docker, Kubernetes, and experience operationalizing LLM and RAG systems.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 28, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Bolingbrook, IL
-
🧠 - Skills detailed
#Data Quality #Kubernetes #Data Lineage #Python #ML (Machine Learning) #Deployment #Dataflow #Security #IAM (Identity and Access Management) #Cloud #Batch #Automation #AutoScaling #Documentation #ETL (Extract, Transform, Load) #Monitoring #Data Engineering #Docker #Indexing #Automated Testing #AI (Artificial Intelligence) #Data Ingestion #Scala #Observability #BigQuery #Metadata
Role description
MLOps Engineer - Bolingbrook, IL [ONSITE]
Job Description:
The MLOps Engineer is responsible for operationalizing, scaling, and maintaining enterprise AI/ML systems across cloud, hybrid, and on‑premise environments. The role focuses on enabling reliable delivery of LLM workloads, retrieval‑augmented generation (RAG), document intelligence, multimodal processing, and predictive ML pipelines, supported by strong governance, observability, security, and automation.
Key Responsibilities:
• Build and automate end‑to‑end ML pipelines (data ingestion → feature engineering → training → evaluation → packaging → deployment).
• Establish model CI/CD workflows including versioning, automated testing, canary/blue‑green deployments, and rollback strategies.
• Operationalize LLM‑based and RAG systems (embedding workflows, vector indexing, latency optimization, grounding quality checks).
• Productionize document‑processing and multimodal workflows (OCR parsing, enrichment flows, batch/stream scaling).
• Implement observability (data quality, drift, safety indicators, inference latency, error conditions).
• Enforce Responsible AI controls (auditability, reproducibility, governance metadata, lineage, approval workflows).
• Maintain secure serving environments (container hardening, IAM, secrets, network isolation).
• Optimize GPU/CPU utilization, autoscaling, throughput, and cost efficiency.
• Create reusable templates, reference architectures, starter repos, and documentation.
Required Skills & Qualifications:
• Strong proficiency in Python, CI/CD, Docker, and Kubernetes.
• Experience operationalizing LLM, RAG, and predictive ML systems.
• Strong foundations in data engineering, schema governance, batch/stream pipelines.
• Security mindset (PII controls, secrets, network boundaries, auditability).
• Vertex AI (ML orchestration & CI/CD, training, tuning, deployment, model registry & monitoring).
• BigQuery / BigQuery ML (analytics & in‑warehouse ML).
• Cloud Composer + Dataflow (batch/stream ETL orchestration).
• GKE or Cloud Run (secure, scalable model serving).
• Artifact Registry + Cloud Build/Cloud Deploy (container & CI/CD).
Preferred Qualifications:
• Familiarity with agentic reasoning patterns and workflow chaining.
• Experience with LLM evaluation, grounding, bias/safety checks.
• Contributions to open-source ML/MLOps tooling.
Regards,
Gagan Rajput