
MLOps Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an MLOps Engineer on an 18-month contract, working hybrid in South San Francisco, CA. Key skills include AWS services, ML model deployment, Python proficiency, and experience with LLMs. Relevant AWS certifications are a plus.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 6, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: South San Francisco, CA
Skills detailed: #Infrastructure as Code (IaC) #AWS (Amazon Web Services) #ML (Machine Learning) #Terraform #Scala #AWS Glue #Containers #SageMaker #Model Deployment #Kubernetes #Deployment #Scripting #Security #Monitoring #Cloud #ETL (Extract, Transform, Load) #Data Ingestion #Storage #Data Processing #Data Pipeline #Metadata #S3 (Amazon Simple Storage Service) #AWS CloudWatch #AI (Artificial Intelligence) #Programming #DynamoDB #Debugging #Python #Logging #Amazon RDS (Amazon Relational Database Service) #Lambda (AWS Lambda) #Docker #DevOps #Data Governance #Automation #Data Storage #Data Extraction #EC2
Role description
Duration: 18 Months
Job Title: MLOps Engineer
Location: South San Francisco, CA (Hybrid, 3 days/week onsite)
• Strong understanding of machine learning concepts, algorithms, and best practices.
• Proven experience in creating, managing, and deploying ML models using core AWS services such as Amazon SageMaker (for model building, training, and deployment), EC2 (for compute instances), S3 (for data storage), and Lambda (for serverless functions).
• Experience with AWS Textract for document data extraction.
• Demonstrable experience in designing, developing, and maintaining automated data processing and ML training pipelines using AWS Glue (for ETL) and AWS Step Functions (for workflow orchestration).
• Proficiency in ensuring seamless data ingestion, transformation, and storage strategies within the AWS ecosystem.
• Experience in optimizing AWS resource usage for cost-effectiveness and efficiency in ML operations.
• Experience with Amazon Bedrock for leveraging and managing foundation models in generative AI applications.
• Knowledge of database services like Amazon RDS or Amazon DynamoDB for storing metadata, features, or serving model predictions where applicable.
• Hands-on experience with implementing monitoring, logging, and alerting mechanisms using AWS CloudWatch.
• Experience with AWS container services like EKS (Elastic Kubernetes Service) or ECS (Elastic Container Service) for managing container orchestration.
• Experience in implementing scalable and reliable ML model deployments in a production environment.
• Practical experience in implementing, deploying, and optimizing Large Language Models (LLMs) for production use cases.
• Ability to monitor LLM performance, fine-tune parameters, and continuously update and refine models based on new data and performance metrics.
• Proven ability to create and experiment with effective prompt engineering strategies to improve LLM performance, accuracy, and relevance.
• Proficiency in using Docker to package ML models and applications into containers.
• Experience with Kubernetes for container orchestration, including managing deployments, scaling, and networking.
• Knowledge of best practices for container security, performance optimization, and resource utilization.
• Strong proficiency in Python programming for data processing, model training, deployment automation, and general scripting.
• Experience in implementing robust testing (e.g., unit tests, integration tests) and debugging practices for Python code.
• Adherence to best practices and coding standards in Python development.
• Experience or familiarity with integrating external systems or platforms, such as Veeva Promomat (or similar content management/regulatory systems), with ML workflows.
• Strong analytical and problem-solving skills with the ability to troubleshoot complex issues in ML systems and data pipelines.
• A proactive and results-oriented mindset with a focus on continuous improvement and innovation in MLOps practices.
• Relevant AWS certifications (e.g., AWS Certified Machine Learning - Specialty, AWS Certified DevOps Engineer) are a plus.
• Experience with Infrastructure as Code (IaC) tools like AWS CloudFormation or Terraform.
• Familiarity with CI/CD pipelines and tools for automating ML workflows.
• Understanding of data governance and security best practices in the context of ML.