

MokshaaLLC
ML Engineer with AI Deployment Experience
Featured Role | Apply direct with Data Freelance Hub
This role is for a Machine Learning Engineer with AI deployment experience on AWS Cloud, offered as a W2/C2C/1099 contract at $70/hr to $75/hr. It requires expertise in AWS SageMaker, TensorFlow, and MLOps automation. The position is remote; candidates must be authorized to work in the USA.
Country: United States
Currency: $ USD
Day rate: 600
Date: November 12, 2025
Duration: Unknown
Location: Remote
Contract: 1099 Contractor
Security: Unknown
Location detailed: United States
Skills detailed:
#ETL (Extract, Transform, Load) #Batch #Lambda (AWS Lambda) #SageMaker #Data Lineage #Compliance #Data Ingestion #Data Engineering #AWS (Amazon Web Services) #Apache Spark #Monitoring #PyTorch #TensorFlow #GitHub #Deep Learning #Data Science #Data Lake #AWS Lambda #Deployment #AWS SageMaker #MLflow #Data Pipeline #Cloud #Redshift #Amazon CloudWatch #ML (Machine Learning) #Kubernetes #Model Evaluation #Scala #API (Application Programming Interface) #AI (Artificial Intelligence) #Docker #Automation #AWS Glue #Spark (Apache Spark)
Role description
Job Title: Machine Learning Engineer – AI Deployments on AWS Cloud
Location: Remote (Authorized to work in USA only)
Contract - W2/C2C/1099
Rate: $70/hr to $75/hr
Overview:
We are seeking a Machine Learning Engineer experienced in developing, deploying, and optimizing AI/ML solutions using AWS Cloud. The ideal candidate will have end-to-end ownership of the ML lifecycle – from data ingestion and model training to scalable deployment, monitoring, and continuous improvement using AWS-native services.
Key Responsibilities:
Model Development & Training
• Design, develop, and optimize machine learning and deep learning models using frameworks such as TensorFlow, PyTorch, or Scikit-learn.
• Perform data preprocessing, feature engineering, and model evaluation using AWS data and analytics services.
• Collaborate with data scientists to productionize research models into scalable, reliable cloud-based AI solutions (an illustrative training sketch follows this list).
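For illustration, a minimal training-and-evaluation sketch in scikit-learn is shown below; the CSV path, column names, and model choice are placeholder assumptions, and the same flow could equally be built with TensorFlow or PyTorch.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Placeholder dataset; in practice this would come from S3, Athena, or Redshift.
df = pd.read_csv("training_data.csv")
X, y = df.drop(columns=["label"]), df["label"]

# Preprocessing and feature engineering for numeric and categorical columns (illustrative names).
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["tenure_days", "monthly_spend"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["plan_type", "region"]),
])

model = Pipeline([
    ("preprocess", preprocess),
    ("clf", GradientBoostingClassifier(random_state=42)),
])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
model.fit(X_train, y_train)

# Model evaluation on the held-out split.
print("test ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))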
AWS Cloud AI Deployments
• Deploy and manage ML models in AWS SageMaker (training jobs, endpoints, pipelines, and model registry); a deployment sketch follows this list.
• Build serverless inference APIs using AWS Lambda, API Gateway, or ECS/Fargate.
• Implement real-time or batch inference pipelines with AWS Step Functions, Kinesis, or AWS Batch.
• Manage containerized workloads for ML inference using Docker and Amazon EKS (Kubernetes).
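One common way to host a trained model behind a managed real-time endpoint is the SageMaker Python SDK. The sketch below assumes a packaged TensorFlow SavedModel already sits in S3; the bucket, IAM role ARN, framework version, and endpoint name are placeholders.

import sagemaker
from sagemaker.tensorflow import TensorFlowModel

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder execution role

# Model artifact produced by a SageMaker training job or uploaded manually (placeholder path).
model = TensorFlowModel(
    model_data="s3://example-ml-bucket/models/churn/model.tar.gz",
    role=role,
    framework_version="2.13",
    sagemaker_session=session,
)

# Creates the SageMaker model, endpoint configuration, and real-time endpoint.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="churn-realtime",  # placeholder name
)

# Simple smoke test against the live endpoint.
print(predictor.predict({"instances": [[0.4, 1.2, 3.0]]}))

For lighter or spiky traffic, the same artifact could instead sit behind a Lambda function and API Gateway, or a SageMaker serverless endpoint.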
MLOps & Automation
• Develop CI/CD pipelines for ML using AWS CodePipeline, CodeBuild, and CodeCommit (or GitHub Actions).
• Automate data versioning, model versioning, and model retraining using SageMaker Pipelines, MLflow, or DVC (an MLflow tracking sketch follows this list).
• Monitor model performance, data drift, and prediction accuracy using Amazon CloudWatch, SageMaker Model Monitor, or Evidently AI.
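As one concrete example of the model-versioning bullet, the sketch below logs parameters, metrics, and a registered model version with MLflow; the tracking URI, experiment name, model name, and synthetic data are placeholder assumptions.

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://localhost:5000")  # placeholder tracking server
mlflow.set_experiment("churn-model")              # placeholder experiment name

# Synthetic data stands in for the real training set.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    params = {"C": 0.5, "max_iter": 500}
    clf = LogisticRegression(**params).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, clf.predict(X_test)))
    # Each run registers a new version under the given model name.
    mlflow.sklearn.log_model(clf, "model", registered_model_name="churn-classifier")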
Data Engineering Collaboration
• Work closely with data engineers to design scalable data ingestion and transformation pipelines using AWS Glue, AWS DataBrew, AWS Data Pipeline, or Apache Spark on EMR (a PySpark example follows this list).
• Ensure data lineage, quality, and compliance within AWS data lakes and Redshift environments.
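A typical transformation step on the data side might look like the PySpark job below, runnable on EMR or as a Glue Spark job; the S3 paths and column names are illustrative placeholders.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("feature-build").getOrCreate()

# Raw events landed in the data lake (placeholder path and schema).
events = spark.read.parquet("s3://example-data-lake/raw/events/")

# Aggregate per-customer features for downstream training.
features = (
    events
    .filter(F.col("event_type").isNotNull())
    .groupBy("customer_id")
    .agg(
        F.count("*").alias("event_count"),
        F.max("event_ts").alias("last_event_ts"),
    )
)

# Curated output consumable by training jobs or by external tables in Redshift Spectrum/Athena.
features.write.mode("overwrite").parquet("s3://example-data-lake/curated/customer_features/")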
Optimization & Scaling
• Optimize model performance, latency, and cost efficiency using AWS Inferentia, Elastic Inference, and Auto Scaling (an auto scaling sketch follows this list).
• Leverage GPU-based or AWS Trainium instances for high-performance training and fine-tuning tasks.
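On the scaling side, one standard approach is target-tracking auto scaling on a SageMaker endpoint variant via the Application Auto Scaling API. In the boto3 sketch below, the endpoint name, capacity bounds, and invocation target are placeholder assumptions.

import boto3

autoscaling = boto3.client("application-autoscaling")

# Endpoint variant to scale (placeholder endpoint and variant names).
resource_id = "endpoint/churn-realtime/variant/AllTraffic"

autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Scale on invocations per instance, with conservative scale-in.
autoscaling.put_scaling_policy(
    PolicyName="invocations-target-tracking",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance",
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 60,
    },
)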






