

Servsys Corporation
ML/Ops Engineer (AWS & Databricks)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an ML/Ops Engineer (AWS & Databricks) on a 1-year contract, hybrid in Miramar, FL or Dallas, TX. Key skills include AWS services, Databricks, Python, CI/CD, and MLOps experience in production environments.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 17, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#GitHub #ECR (Elastic Container Registry) #Databases #Databricks #Logging #Data Ingestion #Monitoring #Automation #Airflow #Datadog #Docker #Terraform #Scala #SageMaker #DevOps #Version Control #AWS SageMaker #Data Science #A/B Testing #Prometheus #Lambda (AWS Lambda) #Shell Scripting #MLflow #Infrastructure as Code (IaC) #IAM (Identity and Access Management) #VPC (Virtual Private Cloud) #AI (Artificial Intelligence) #Python #Batch #GIT #Cloud #Deployment #Scripting #AWS (Amazon Web Services) #AWS Lambda #REST (Representational State Transfer) #ML (Machine Learning)
Role description
Job Title: ML/Ops Engineer (AWS & Databricks)
Location: Hybrid – Miramar, FL or Dallas, TX (4 days onsite per week)
Duration: 1 Year | Temp Only
MLOps Engineer (AWS & Databricks)
Primary Responsibilities
• Design, implement, and maintain CI/CD pipelines for machine learning applications using AWS CodePipeline, CodeCommit, and CodeBuild.
• Automate the deployment of ML models into production using Amazon SageMaker, Databricks, and MLflow for model versioning, tracking, and lifecycle management.
• Develop, test, and deploy AWS Lambda functions for triggering model workflows, automating pre/post-processing, and integrating with other AWS services.
• Maintain and monitor Databricks model serving endpoints, ensuring scalable and low-latency inference workloads.
• Use Airflow (MWAA) or Databricks Workflows to orchestrate complex, multi-stage ML pipelines, including data ingestion, model training, evaluation, and deployment (a minimal DAG sketch follows this list).
• Collaborate with Data Scientists and ML Engineers to productionize models and convert notebooks into reproducible and version-controlled ML pipelines.
• Integrate and automate model monitoring (drift detection, performance logging) and alerting mechanisms using tools like CloudWatch, Prometheus, or Datadog.
• Optimize compute workloads by managing infrastructure-as-code (IaC) via CloudFormation or Terraform for reproducible, secure deployments across environments.
• Ensure secure and compliant deployment pipelines using IAM roles, VPC, and secrets management with AWS Secrets Manager or SSM Parameter Store.
• Champion DevOps best practices across the ML lifecycle, including canary deployments, rollback strategies, and audit logging for model changes.
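For illustration only, here is a minimal Airflow 2.x-style DAG sketch of the kind of multi-stage pipeline described above (ingestion, training, evaluation, deployment). The DAG id, task names, and helper function bodies are hypothetical placeholders, not part of this role's actual codebase.

```python
# Minimal, illustrative Airflow DAG (MWAA-compatible) for a multi-stage ML pipeline.
# All names and helper bodies below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_data(**_):
    ...  # e.g., land raw data in the training/feature location


def train_model(**_):
    ...  # e.g., launch a Databricks job or SageMaker training job


def evaluate_model(**_):
    ...  # e.g., compare metrics against a model quality gate


def deploy_model(**_):
    ...  # e.g., promote the MLflow model version and refresh the serving endpoint


with DAG(
    dag_id="ml_training_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_data", python_callable=ingest_data)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    evaluate = PythonOperator(task_id="evaluate_model", python_callable=evaluate_model)
    deploy = PythonOperator(task_id="deploy_model", python_callable=deploy_model)

    # Linear dependency chain: ingest -> train -> evaluate -> deploy
    ingest >> train >> evaluate >> deploy
```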
Minimum Requirements
• Hands-on experience in MLOps deploying ML applications in production at scale.
• Proficient in AWS services: SageMaker, Lambda, CodePipeline, CodeCommit, ECR, ECS/Fargate, and CloudWatch.
• Strong experience with Databricks workflows and Databricks Model Serving, including MLflow for model tracking, packaging, and deployment (a minimal MLflow sketch follows this list).
• Proficient in Python and shell scripting with the ability to containerize applications using Docker.
• Deep understanding of CI/CD principles for ML, including testing ML pipelines, data validation, and model quality gates.
• Hands-on experience orchestrating ML workflows using Airflow (open-source or MWAA) or Databricks Workflows.
• Familiarity with model monitoring and logging stacks (e.g., Prometheus, ELK, Datadog, or OpenTelemetry).
• Experience deploying models as REST endpoints, batch jobs, and asynchronous workflows.
• Version control expertise with Git/GitHub and experience in automated deployment reviews and rollback strategies.
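As a rough illustration of the MLflow tracking and registry workflow referenced above, the sketch below logs a trained model, registers a new version, and tags it for promotion. The experiment name, registered model name, and toy training data are assumptions made purely for the example.

```python
# Minimal MLflow sketch: log a trained model, register it, and tag the new
# version for promotion. Experiment and model names are hypothetical.
import mlflow
from mlflow.tracking import MlflowClient
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy training data/model purely for illustration.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

mlflow.set_experiment("demand-forecasting")  # hypothetical experiment name
with mlflow.start_run() as run:
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")

# Register the run's model artifact as a new version in the Model Registry.
result = mlflow.register_model(
    model_uri=f"runs:/{run.info.run_id}/model",
    name="demand-forecaster",  # hypothetical registered model name
)

# Tag the version so a downstream deployment job can pick it up for promotion.
MlflowClient().set_model_version_tag(
    name="demand-forecaster", version=result.version, key="stage", value="candidate"
)
```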
Nice to Have
• Experience with Feature Store (e.g., AWS SageMaker Feature Store, Feast).
• Familiarity with Kubeflow, SageMaker Pipelines, or Vertex AI (if multi-cloud).
• Exposure to LLM-based models, vector databases, or retrieval-augmented generation (RAG) pipelines.
• Knowledge of Terraform or AWS CDK for infrastructure automation.
• Experience with A/B testing or shadow deployments for ML models (a minimal shadow-routing sketch follows).
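The following minimal sketch shows the shadow-deployment pattern mentioned in the last bullet: the candidate (shadow) model scores every request for offline comparison but never affects the response served to the caller. All function names, return values, and the logger are hypothetical.

```python
# Illustrative shadow-deployment sketch: serve the primary model, score the
# shadow model on the same input, and log the pair for comparison dashboards.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("shadow_compare")


def primary_predict(features: dict) -> float:
    return 0.42  # placeholder for the production model's prediction


def shadow_predict(features: dict) -> float:
    return 0.40  # placeholder for the candidate model's prediction


def handle_request(features: dict) -> float:
    primary = primary_predict(features)
    try:
        shadow = shadow_predict(features)
        # Log both scores so drift/quality dashboards can compare the models;
        # the shadow result never changes what is returned to the caller.
        logger.info("primary=%s shadow=%s delta=%s", primary, shadow, shadow - primary)
    except Exception:
        logger.exception("shadow model failed; serving primary result only")
    return primary


if __name__ == "__main__":
    print(handle_request({"feature_a": 1.0}))
```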