

Jobs via Dice
Senior ML Ops Engineer
Featured Role | Apply directly with Data Freelance Hub
This is a contract role for a Senior MLOps Engineer, with a strong preference for candidates in Austin, TX. Key skills include AWS SageMaker, Python, and ML frameworks. Requires experience with CI/CD, Docker, and Kubernetes. Remote work is less preferred.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: April 11, 2026
Duration: Unknown
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: Austin, TX
Skills detailed: #Deployment #Monitoring #Python #PyTorch #Batch #S3 (Amazon Simple Storage Service) #Docker #Kubernetes #Lambda (AWS Lambda) #Infrastructure as Code (IaC) #TensorFlow #AWS (Amazon Web Services) #MLflow #AWS SageMaker #ML (Machine Learning) #Scala #Cloud #Data Science #ML Ops (Machine Learning Operations) #SageMaker #A/B Testing
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Triunity Software, is seeking the following. Apply via Dice today!
Hiring: MLOps Engineer (AWS) | Contract Role
Location Preference: Austin, TX (Highly Preferred) | CST (Second Preference) | Remote, US (Last Preference)
About the Role
We are seeking a highly experienced MLOps Engineer to design, build, and manage scalable machine learning infrastructure on AWS. This role focuses on end-to-end ML lifecycle management: from automated training pipelines and experiment tracking to deployment, monitoring, and continuous retraining.
You will play a key role in bridging Data Science and Engineering, ensuring reliable and efficient delivery of ML solutions at scale using AWS-native services and tools like SageMaker, Kubeflow, and MLflow.
Key Responsibilities
Design and manage scalable AWS-based MLOps infrastructure
Build end-to-end ML pipelines using SageMaker Pipelines, Step Functions, Kubeflow
Implement model versioning, experiment tracking, and model registry
Develop and maintain CI/CD pipelines for ML workflows
Deploy models using SageMaker endpoints (real-time & batch)
Enable model monitoring, drift detection, and automated retraining
Implement A/B testing and canary deployments
Work closely with Data Scientists and Engineering teams
Monitor systems using CloudWatch, X-Ray, CloudTrail
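To illustrate the drift-detection responsibility above: a common approach is to compare production input distributions against a training-time baseline and trigger retraining when divergence crosses a threshold. The sketch below uses the Population Stability Index (PSI) in plain Python; it is a hypothetical illustration only — in this role such checks would typically run through SageMaker Model Monitor or similar managed tooling, and the 0.2 threshold is a rule of thumb, not a fixed standard.

```python
import math
from collections import Counter

def psi(baseline, current, bins=10):
    """Population Stability Index between two numeric samples.

    Bins are derived from the baseline's range; a PSI above ~0.2 is a
    common rule-of-thumb signal that input drift warrants retraining.
    """
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0

    def bucket(xs):
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in xs)
        # Smooth empty buckets to avoid log(0) and division by zero.
        return [(counts.get(i, 0) + 1e-6) / (len(xs) + bins * 1e-6)
                for i in range(bins)]

    b, c = bucket(baseline), bucket(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Identical distributions give a PSI near zero; a shifted window scores high.
stable = psi(list(range(100)), list(range(100)))
shifted = psi(list(range(100)), list(range(50, 150)))
drift_detected = shifted > 0.2
```

In a production pipeline, a check like `drift_detected` would gate an automated retraining job rather than run ad hoc.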
Required Skills
Strong experience in Python and ML frameworks (TensorFlow / PyTorch)
Hands-on with AWS SageMaker & SageMaker Pipelines
Expertise in MLflow, Kubeflow
Experience with Docker, Kubernetes (Amazon EKS)
Strong knowledge of CI/CD (CodePipeline, CodeBuild, CodeDeploy)
Proficiency in AWS services (Lambda, S3, Step Functions, Bedrock)
Experience with Infrastructure as Code (CloudFormation / CDK)
Strong understanding of Model Monitoring, Drift Detection, Model Registry
Skills Evaluated
Python | AWS SageMaker | SageMaker Pipelines | MLflow | Kubeflow | Docker | Kubernetes | Amazon EKS | CI/CD | CodePipeline | CodeBuild | MLOps | Model Registry | Model Monitoring | Drift Detection | Step Functions | CloudFormation | Infrastructure-as-Code





