

MLOps Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for an MLOps Engineer based in Miramar, FL, on a 12+ month contract paying $75.00 - $85.00/hr. Key skills include AWS, Databricks, Python, CI/CD, and MLOps experience. US Citizenship or authorized work status is required.
Country: United States
Currency: $ USD
Day rate: $680
Date discovered: August 20, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Miramar, FL
Skills detailed: #MLflow #SageMaker #IAM (Identity and Access Management) #Databases #Databricks #Datadog #Deployment #AI (Artificial Intelligence) #ML (Machine Learning) #Python #REST (Representational State Transfer) #Docker #Prometheus #Lambda (AWS Lambda) #GIT #Monitoring #VPC (Virtual Private Cloud) #AWS Lambda #AWS (Amazon Web Services) #Cloud #Data Science #Logging #Data Ingestion #Automation #Batch #Shell Scripting #ECR (Elastic Container Registry) #Airflow #GitHub #Infrastructure as Code (IaC) #Version Control #Scripting #AWS SageMaker #Terraform #A/B Testing #Scala #DevOps
Role description
Title: MLOps Engineer
Location: Miramar, FL
Duration: 12+ months
Compensation: $75.00 - $85.00/hr
Work Requirements: US Citizens, GC Holders, or those Authorized to Work in the U.S.
MLOps Engineer (AWS & Databricks)
Primary Responsibilities
• Design, implement, and maintain CI/CD pipelines for machine learning applications using AWS CodePipeline, CodeCommit, and CodeBuild.
• Automate the deployment of ML models into production using Amazon SageMaker, Databricks, and MLflow for model versioning, tracking, and lifecycle management (a minimal sketch of this pattern follows this list).
• Develop, test, and deploy AWS Lambda functions for triggering model workflows, automating pre/post-processing, and integrating with other AWS services.
• Maintain and monitor Databricks model serving endpoints, ensuring scalable and low-latency inference workloads.
• Use Airflow (MWAA) or Databricks Workflows to orchestrate complex, multi-stage ML pipelines, including data ingestion, model training, evaluation, and deployment.
• Collaborate with Data Scientists and ML Engineers to productionize models and convert notebooks into reproducible and version-controlled ML pipelines.
• Integrate and automate model monitoring (drift detection, performance logging) and alerting mechanisms using tools like CloudWatch, Prometheus, or Datadog.
• Optimize compute workloads by managing infrastructure-as-code (IaC) via CloudFormation or Terraform for reproducible, secure deployments across environments.
• Ensure secure and compliant deployment pipelines using IAM roles, VPC, and secrets management with AWS Secrets Manager or SSM Parameter Store.
• Champion DevOps best practices across the ML lifecycle, including canary deployments, rollback strategies, and audit logging for model changes.
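To give a flavor of the MLflow lifecycle work described above, here is a minimal sketch of logging, registering, and promoting a model. The experiment name, model name, and "production" alias are invented for illustration, and the alias API assumes MLflow 2.3 or newer; none of these details come from the posting itself.

```python
# Minimal MLflow lifecycle sketch: train, log, register, and promote a model.
# The experiment/model names and the "production" alias are illustrative.
import mlflow
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=42)

mlflow.set_experiment("demo-ml-pipeline")
with mlflow.start_run():
    model = LogisticRegression(max_iter=500).fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    info = mlflow.sklearn.log_model(
        model, artifact_path="model", registered_model_name="demo-model"
    )

# Promote the newly registered version by alias, so a serving endpoint
# can resolve "models:/demo-model@production" to this version.
client = mlflow.MlflowClient()
client.set_registered_model_alias(
    "demo-model", "production", info.registered_model_version
)
```

Using an alias rather than hard-coding a version keeps rollbacks cheap: repointing the alias to a previous version is a one-line change.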
Minimum Requirements
• Hands-on experience in MLOps deploying ML applications in production at scale.
• Proficient in AWS services: SageMaker, Lambda, CodePipeline, CodeCommit, ECR, ECS/Fargate, and CloudWatch.
• Strong experience with Databricks Workflows and Databricks Model Serving, including MLflow for model tracking, packaging, and deployment.
• Proficient in Python and shell scripting, with the ability to containerize applications using Docker.
• Deep understanding of CI/CD principles for ML, including testing ML pipelines, data validation, and model quality gates.
• Hands-on experience orchestrating ML workflows using Airflow (open-source or MWAA) or Databricks Workflows (see the DAG sketch after this list).
• Familiarity with model monitoring and logging stacks (e.g., Prometheus, ELK, Datadog, or OpenTelemetry).
• Experience deploying models as REST endpoints, batch jobs, and asynchronous workflows.
• Version control expertise with Git/GitHub and experience in automated deployment reviews and rollback strategies.
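As an illustration of the orchestration requirement above, a multi-stage pipeline might be wired with Airflow's TaskFlow API (the `schedule` argument assumes Airflow 2.4+) roughly as follows. The paths, run IDs, and quality gate are placeholders, not details of this role; in practice each task body would call out to SageMaker, Databricks, or a containerized job.

```python
# Illustrative Airflow DAG: ingest -> train -> evaluate -> deploy.
# All paths, run IDs, and the quality gate are placeholders.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def ml_pipeline():
    @task
    def ingest() -> str:
        # In practice: pull raw data and land it in object storage.
        return "s3://example-bucket/features/latest"  # hypothetical path

    @task
    def train(data_path: str) -> str:
        # In practice: launch a SageMaker training job or Databricks run.
        print(f"training on {data_path}")
        return "mlflow-run-123"  # hypothetical run id

    @task
    def evaluate(run_id: str) -> bool:
        # Model quality gate: compare candidate metrics to a threshold.
        return True

    @task
    def deploy(run_id: str, passed: bool) -> None:
        if passed:
            print(f"deploying model from {run_id}")

    run_id = train(ingest())
    deploy(run_id, evaluate(run_id))

ml_pipeline()
```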
Nice to Have
• Experience with a Feature Store (e.g., AWS SageMaker Feature Store, Feast).
• Familiarity with Kubeflow, SageMaker Pipelines, or Vertex AI (if multi-cloud).
• Exposure to LLM-based models, vector databases, or retrieval-augmented generation (RAG) pipelines.
• Knowledge of Terraform or AWS CDK for infrastructure automation.
• Experience with A/B testing or shadow deployments for ML models (the shadow pattern is sketched below).
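The shadow-deployment pattern mentioned in the last item is compact enough to sketch: serve every request from the production model while mirroring the payload to a candidate model whose output is only logged for offline comparison. The model objects here are schematic stand-ins, not any particular serving framework.

```python
# Schematic shadow deployment: the production model answers the request;
# the candidate ("shadow") model sees the same payload in the background,
# and its output is logged for comparison, never returned to the caller.
import concurrent.futures
import logging

logger = logging.getLogger("shadow")
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def _compare_in_background(payload, shadow_model, prod_result):
    try:
        shadow_result = shadow_model.predict(payload)
        logger.info("prod=%s shadow=%s", prod_result, shadow_result)
    except Exception:
        # Shadow failures must never affect the user-facing path.
        logger.exception("shadow prediction failed")

def predict(payload, prod_model, shadow_model):
    result = prod_model.predict(payload)  # user-facing answer
    _pool.submit(_compare_in_background, payload, shadow_model, result)
    return result
```

Because the shadow call runs on a background thread pool, it adds no latency to the production path, which is the property that distinguishes shadowing from A/B testing.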
Our benefits package includes:
• Comprehensive medical benefits
• Competitive pay
• 401(k) Retirement plan
• …and much more!
About INSPYR Solutions
Technology is our focus and quality is our commitment. As a national expert in delivering flexible technology and talent solutions, we strategically align industry and technical expertise with our clients' business objectives and cultural needs. Our solutions are tailored to each client and include a wide variety of professional services, project, and talent solutions. By always striving for excellence and focusing on the human aspect of our business, we work seamlessly with our talent and clients to match the right solutions to the right opportunities. Learn more about us at inspyrsolutions.com.
INSPYR Solutions provides Equal Employment Opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, or genetics. In addition to federal law requirements, INSPYR Solutions complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities.