

FUSTIS LLC
Sr. MLOps Engineer-W2
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. MLOps Engineer on a W2 contract basis, hybrid in Dallas, TX or Miramar, FL. Pay rate is $70-$75/hr. Key skills include AWS, Databricks, CI/CD pipelines, and Python. Experience with MLOps and ML model deployment is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
🗓️ - Date
October 22, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#ML (Machine Learning) #SageMaker #Logging #Cloud #Prometheus #Automation #AI (Artificial Intelligence) #Terraform #Monitoring #REST (Representational State Transfer) #Airflow #Databricks #GIT #Deployment #GitHub #A/B Testing #Python #Scripting #MLflow #Lambda (AWS Lambda) #Docker #AWS SageMaker #AWS (Amazon Web Services) #Databases #Datadog #Shell Scripting #Batch #Version Control #ECR (Elastic Container Registry)
Role description
W2 Only
Job Title: Sr. MLOps Engineer
Location: Hybrid (Dallas, TX or Miramar, FL) - local candidates only
Visa: USC, GC, H4, L2 only (contract to hire)
Rate: $70-$75/hr on W2
MUST HAVE:
Design, implement, and maintain CI/CD pipelines for machine learning applications using AWS CodePipeline, CodeCommit, and CodeBuild.
Automate the deployment of ML models into production using Amazon SageMaker, Databricks, and MLflow for model versioning, tracking, and lifecycle management (a minimal illustration follows below).
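For context, a minimal Python sketch of the MLflow versioning and registration flow these responsibilities describe. It assumes an MLflow tracking server with a model registry backend is configured (e.g., Databricks or a SQL-backed server); the experiment name, model name, and toy training data are placeholders, not details from this role.

```python
# Hypothetical sketch only: experiment/model names are placeholders, and a
# tracking server with a model registry backend (e.g., Databricks or a
# SQL-backed MLflow server) is assumed to be configured.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

mlflow.set_experiment("churn-model-training")  # placeholder experiment name

# Toy training data stands in for a real feature pipeline.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

with mlflow.start_run() as run:
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X, y)

    # Track parameters, metrics, and the model artifact for lineage.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")

    # Registering the logged model creates a new version in the registry,
    # which a downstream SageMaker/Databricks deployment job can promote.
    model_uri = f"runs:/{run.info.run_id}/model"
    mlflow.register_model(model_uri, name="churn-model")
```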
Minimum Requirements
Hands-on experience in MLOps, deploying ML applications in production at scale.
Proficient in AWS services: SageMaker, Lambda, CodePipeline, CodeCommit, ECR, ECS/Fargate, and CloudWatch.
Strong experience with Databricks Workflows and Databricks Model Serving, including MLflow for model tracking, packaging, and deployment.
Proficient in Python and shell scripting, with the ability to containerize applications using Docker.
Deep understanding of CI/CD principles for ML, including testing ML pipelines, data validation, and model quality gates.
Hands-on experience orchestrating ML workflows using Airflow (open-source or MWAA) or Databricks Workflows (see the sketch after this list).
Familiarity with model monitoring and logging stacks (e.g., Prometheus, ELK, Datadog, or OpenTelemetry).
Experience deploying models as REST endpoints, batch jobs, and asynchronous workflows.
Version control expertise with Git/GitHub and experience with automated deployment reviews and rollback strategies.
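For context, a minimal sketch of the kind of Airflow-orchestrated ML workflow referenced in the requirements above. It assumes a recent Airflow 2.x release (or MWAA) with the TaskFlow API; the DAG name, schedule, and S3 path are placeholders.

```python
# Hypothetical sketch only: task names, schedule, and the S3 path are
# placeholders; a recent Airflow 2.x release (or MWAA) is assumed.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def ml_retraining_pipeline():
    @task
    def extract_features() -> str:
        # In practice this might pull from Databricks or S3; here it just
        # returns a placeholder artifact path.
        return "s3://example-bucket/features/latest.parquet"

    @task
    def validate_data(features_path: str) -> str:
        # Data-validation gate (schema checks, null rates) before training,
        # in the spirit of the "model quality gates" requirement above.
        print(f"validating {features_path}")
        return features_path

    @task
    def train_and_register(features_path: str) -> None:
        # Would kick off SageMaker/Databricks training and MLflow registration.
        print(f"training on {features_path}")

    train_and_register(validate_data(extract_features()))


ml_retraining_pipeline()
```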
---
Nice to Have
Experience with Feature Store (e.g., AWS SageMaker Feature Store, Feast).
Familiarity with Kubeflow, SageMaker Pipelines, or Vertex AI (if multi-cloud).
Exposure to LLM-based models, vector databases, or retrieval-augmented generation (RAG) pipelines.
Knowledge of Terraform or AWS CDK for infrastructure automation.
Experience with A/B testing or shadow deployments for ML models.
Best Regards,
Jaideep Shastri
Sr. Technical Recruiter
916-365-9533 (D) | jaideep.shastri@fustis.com