

MLOps Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an MLOps Engineer on a contract basis, offering competitive pay. Requires expertise in AWS services (SageMaker, Glue, EC2), ML model deployment, Python programming, and experience with LLMs. Relevant AWS certifications are a plus.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 19, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: South San Francisco, CA
Skills detailed: #Security #Data Pipeline #AI (Artificial Intelligence) #Programming #Data Extraction #Kubernetes #AWS CloudWatch #Logging #Model Deployment #S3 (Amazon Simple Storage Service) #Debugging #AWS (Amazon Web Services) #Docker #Metadata #Data Ingestion #Containers #Data Storage #Storage #SageMaker #RDS (Amazon Relational Database Service) #Data Processing #DynamoDB #Monitoring #Deployment #Scala #Amazon RDS (Amazon Relational Database Service) #Automation #ML (Machine Learning) #Scripting #DevOps #Cloud #Python #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #AWS Glue #EC2
Role description
• Strong understanding of machine learning concepts, algorithms, and best practices.
• Proven experience in creating, managing, and deploying ML models using core AWS services such as Amazon SageMaker (model building, training, and deployment), EC2 (compute instances), S3 (data storage), and Lambda (serverless functions).
• Experience with AWS Textract for document data extraction.
• Demonstrable experience in designing, developing, and maintaining automated data processing and ML training pipelines using AWS Glue (for ETL) and AWS Step Functions (for workflow orchestration).
• Proficiency in ensuring seamless data ingestion, transformation, and storage strategies within the AWS ecosystem.
• Experience in optimizing AWS resource usage for cost-effectiveness and efficiency in ML operations.
• Experience with Amazon Bedrock for leveraging and managing foundation models in generative AI applications.
• Knowledge of database services such as Amazon RDS or Amazon DynamoDB for storing metadata, features, or serving model predictions where applicable.
• Hands-on experience implementing monitoring, logging, and alerting mechanisms using AWS CloudWatch.
• Experience with AWS container services such as EKS (Elastic Kubernetes Service) or ECS (Elastic Container Service) for managing container orchestration.
• Experience implementing scalable and reliable ML model deployments in a production environment.
• Practical experience implementing, deploying, and optimizing Large Language Models (LLMs) for production use cases.
• Ability to monitor LLM performance, fine-tune parameters, and continuously update and refine models based on new data and performance metrics.
• Proven ability to create and experiment with effective prompt engineering strategies to improve LLM performance, accuracy, and relevance.
• Proficiency in using Docker to package ML models and applications into containers.
• Experience with Kubernetes for container orchestration, including managing deployments, scaling, and networking.
• Knowledge of best practices for container security, performance optimization, and resource utilization.
• Strong proficiency in Python programming for data processing, model training, deployment automation, and general scripting.
• Experience implementing robust testing (e.g., unit tests, integration tests) and debugging practices for Python code.
• Adherence to best practices and coding standards in Python development.
• Experience or familiarity with integrating external systems or platforms, such as Veeva PromoMats (or similar content management/regulatory systems), with ML workflows.
• Strong analytical and problem-solving skills with the ability to troubleshoot complex issues in ML systems and data pipelines.
• A proactive and results-oriented mindset with a focus on continuous improvement and innovation in MLOps practices.
• Relevant AWS certifications (e.g., AWS Certified Machine Learning - Specialty, AWS Certified DevOps Engineer) are a plus.
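To give a concrete sense of the SageMaker model-deployment work listed above: custom inference code for a SageMaker endpoint typically follows the inference toolkit's handler convention (`model_fn`, `input_fn`, `predict_fn`, `output_fn`). The sketch below is a minimal, hypothetical example of that handler shape; the stand-in "model" (a threshold-based scorer) and the payload format are illustrative assumptions, not part of this role description, and a real handler would load trained artifacts from `model_dir`.

```python
import json


def model_fn(model_dir):
    # Real deployments load trained weights from model_dir (e.g. a pickled
    # estimator unpacked from model.tar.gz). Here we return a trivial
    # stand-in "model": a classification threshold.
    return {"threshold": 0.5}


def input_fn(request_body, content_type="application/json"):
    # Deserialize the incoming request payload.
    if content_type != "application/json":
        raise ValueError(f"Unsupported content type: {content_type}")
    return json.loads(request_body)


def predict_fn(data, model):
    # Hypothetical scoring rule: label each score 1 if it meets the
    # threshold, else 0.
    return [1 if score >= model["threshold"] else 0 for score in data["scores"]]


def output_fn(prediction, accept="application/json"):
    # Serialize the prediction for the response.
    return json.dumps({"predictions": prediction})
```

Because each step is a plain function, the handler can be unit-tested locally (one of the testing practices the role calls for) before the container is pushed to an endpoint.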