

enableIT
ML Ops Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an ML Ops Engineer, contracting for an unspecified duration at a day rate of $800. Candidates must have 10+ years of Python experience, 5+ years with Kubernetes and Terraform, and expertise in deploying ML models on AWS SageMaker. Local to LA/Burbank required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
800
-
🗓️ - Date
February 4, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Burbank, CA
-
🧠 - Skills detailed
#GIT #Programming #Kafka (Apache Kafka) #Python #AWS (Amazon Web Services) #Monitoring #Model Deployment #Splunk #Data Engineering #ML Ops (Machine Learning Operations) #SageMaker #DevOps #ML (Machine Learning) #Deployment #Terraform #Version Control #Apache Kafka #Docker #AWS SageMaker #Kubernetes #Datadog #Automation #Observability #Scala #Scripting #Cloud #Data Science #Ansible
Role description
Not available for C2C engagements | Vendors marketing candidates will be blocked
Must be eligible for W2 employment without sponsorship
Must be local to the LA/Burbank area
Must have experience with:
• Python (10 years)
• Kubernetes
• Terraform
• Deploying ML models on AWS SageMaker
• CI/CD Automation
About the Role
We're building a brand-new application from the ground up and seeking an experienced MLOps Engineer to architect and operationalize our data science infrastructure. This is a greenfield opportunity to establish best practices, build scalable deployment pipelines, and bridge the gap between data science innovation and production-ready systems.
You'll work hands-on with our team of Data Scientists, an ML Ops Engineer, an Application Architect, and an Infrastructure Architect to create seamless CI/CD pipelines that deploy streaming ML models at scale.
What You'll Do
• Build & maintain cloud infrastructure for data science and machine learning workflows using infrastructure-as-code principles
• Design and implement CI/CD pipelines that operationalize data science models from development to production
• Deploy streaming ML models on AWS SageMaker and manage the full lifecycle of model deployment (a minimal deployment sketch follows this list)
• Establish infrastructure-as-code standards using Terraform to ensure reproducible, version-controlled environments
• Implement containerization strategies with Docker and Kubernetes for scalable model serving
• Set up monitoring and observability using Splunk and Datadog to ensure system reliability and performance
• Automate configuration management using Ansible for seamless deployments across environments
• Collaborate closely with data scientists to understand model requirements and translate them into robust production systems
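To make the SageMaker responsibility above concrete, here is a minimal deployment sketch using boto3. The model name, image URI, S3 path, role ARN, and instance type are illustrative placeholders, not details from this role:

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-west-2")

# Register the trained model artifact and its serving container.
sm.create_model(
    ModelName="churn-model",  # hypothetical name
    ExecutionRoleArn="arn:aws:iam::123456789012:role/SageMakerRole",
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/churn:latest",
        "ModelDataUrl": "s3://example-bucket/models/churn/model.tar.gz",
    },
)

# Define the instance fleet that will serve the model.
sm.create_endpoint_config(
    EndpointConfigName="churn-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "churn-model",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# Create the HTTPS endpoint that applications invoke for predictions.
sm.create_endpoint(
    EndpointName="churn-endpoint",
    EndpointConfigName="churn-config",
)
```

In this role, a flow like this would typically run inside a CI/CD pipeline rather than by hand, with Terraform provisioning the surrounding infrastructure.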
What You Bring
Required Experience
• 10+ years of Python programming experience with a focus on automation and infrastructure
• 5+ years of hands-on experience with Kubernetes, Terraform, and cloud infrastructure
• Proven track record deploying streaming ML models on AWS SageMaker (see the serving-loop sketch after this list)
• Deep expertise in CI/CD automation and establishing deployment pipelines from scratch
• Strong experience with containerization (Docker) and orchestration (Kubernetes)
• Infrastructure-as-Code proficiency with Terraform
• Configuration management experience with Ansible or similar tools
• Git and scripting for version control and automation workflows
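As a rough illustration of the streaming requirement, the sketch below consumes events from a Kafka topic and scores each one against an already-deployed SageMaker endpoint. The topic, broker address, payload shape, and endpoint name are hypothetical, and MSK authentication/TLS settings are omitted for brevity:

```python
import json

import boto3
from kafka import KafkaConsumer  # kafka-python; MSK speaks the standard Kafka protocol

runtime = boto3.client("sagemaker-runtime", region_name="us-west-2")

# Subscribe to a hypothetical feature-event topic.
consumer = KafkaConsumer(
    "feature-events",
    bootstrap_servers=["b-1.example-cluster.kafka.us-west-2.amazonaws.com:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Score each event as it arrives and hand the prediction downstream.
for message in consumer:
    response = runtime.invoke_endpoint(
        EndpointName="churn-endpoint",
        ContentType="application/json",
        Body=json.dumps(message.value),
    )
    prediction = json.loads(response["Body"].read())
    print(prediction)  # in production: publish to a results topic or store
```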
Preferred Skills
• Experience with MLOps practices and ML model lifecycle management
• Familiarity with Amazon Managed Streaming for Apache Kafka (MSK)
• Knowledge of Splunk and Datadog for monitoring and observability (a metrics sketch follows this list)
• Background in data engineering or data science domains
• AWS certifications or equivalent cloud expertise
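For the monitoring item above, here is a minimal sketch of emitting custom model metrics to Datadog via DogStatsD, assuming a local Datadog agent is listening on the default port (metric names and tags are hypothetical):

```python
from datadog import initialize, statsd

# Point the client at the local Datadog agent's DogStatsD listener.
initialize(statsd_host="127.0.0.1", statsd_port=8125)

# Count predictions and track latency per model; the agent forwards to Datadog.
statsd.increment("model.predictions", tags=["model:churn", "env:prod"])
statsd.histogram("model.latency_ms", 42.0, tags=["model:churn"])
```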
What Makes This Role Unique
• Greenfield project: Shape the architecture and practices from day one
• No on-call rotation: Focus on building quality systems without overnight interruptions
• Collaborative environment: Work directly with data scientists and architects to solve complex problems
• Impact-driven: Your infrastructure will directly enable groundbreaking data science work
What We're Looking For
Beyond technical skills, we value:
• Excellent communication skills to collaborate across technical and non-technical stakeholders
• Systems thinking to design for scalability, reliability, and maintainability
• Problem-solving mindset to navigate ambiguity in a new application build
• Passion for automation and eliminating manual processes
Team Structure
You'll join as an individual contributor working within a cross-functional team that includes Data Scientists, an ML Ops Engineer, an Application Architect, and an Infrastructure Architect. This role offers significant autonomy and ownership over the DevOps and infrastructure domain.