

Machine Learning Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Machine Learning Engineer on a contract basis, requiring expertise in Linux (Ubuntu), Python, PyTorch, and Hugging Face Transformers. Familiarity with AWS EC2 and GCP is essential. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
May 21, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Bronx, NY
🧠 - Skills detailed
#Shell Scripting #Cloud #Scripting #PyTorch #Python #EC2 #Linux #Programming #ML (Machine Learning) #AWS EC2 (Amazon Elastic Compute Cloud) #Model Deployment #Conda #Deployment #GCP (Google Cloud Platform) #AWS (Amazon Web Services) #Transformers #ETL (Extract, Transform, Load) #Hugging Face
Role description
Key Responsibilities:
• Set up and manage Linux-based environments (Ubuntu preferred), including shell scripting and package management.
• Deploy and configure LLMs such as LLaMA using PyTorch and Hugging Face Transformers.
• Run inference jobs using GPU resources (NVIDIA CUDA).
• Manage cloud-based GPU instances on platforms like AWS EC2, Google Cloud Platform (GCP), or Hugging Face Spaces.
• Create and maintain isolated development environments using Conda and pip.
• Collaborate with the team to ensure smooth model integration and deployment in production-ready pipelines.
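The deployment and inference responsibilities above can be sketched as a short Python routine using PyTorch and Hugging Face Transformers. This is a minimal sketch, not a prescribed implementation: the model id below is an assumption (LLaMA checkpoints are gated and your project may use a different one), and the function simply falls back to CPU when no CUDA device is present.

```python
# Minimal sketch: load an LLM with Hugging Face Transformers and run
# GPU-based inference. The model id is an illustrative assumption --
# substitute the checkpoint your project actually deploys.

def run_inference(prompt: str, model_id: str = "meta-llama/Llama-2-7b-hf") -> str:
    # Imports are local so the sketch reads without the libraries installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Prefer an NVIDIA GPU (CUDA) when available, as the role requires.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        # Half precision on GPU keeps memory use manageable for 7B+ models.
        torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    ).to(device)

    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

In a production pipeline this routine would typically load the model once at startup rather than per request, since model loading dominates latency.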
Required Qualifications:
Operating Systems & Tools:
• Proficient in Linux (Ubuntu preferred)
• Strong experience with shell scripting and package managers
Cloud Platforms:
• Familiarity with GPU-based instances (AWS EC2, GCP, Hugging Face Spaces)
Programming & Frameworks:
• Python (must-have)
• Experience with PyTorch and Hugging Face Transformers
Model Deployment & Inference:
• Hands-on experience installing and configuring LLMs (e.g., LLaMA)
• Running GPU-based inference workloads (CUDA)
• Experience setting up virtual environments and managing Python dependencies.
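The isolated-environment requirement can be sketched with Python's standard-library `venv` module plus `pip` (Conda works analogously with `conda create` / `conda install`). The helper name, environment path, and package versions here are illustrative assumptions, and the interpreter path assumes a POSIX layout (`bin/python`; Windows uses `Scripts\python.exe`).

```python
# Sketch: create an isolated virtual environment and install pinned
# dependencies into it. Names and paths are illustrative assumptions.
import subprocess
from pathlib import Path
import venv


def create_env(env_dir: str, packages: list[str]) -> Path:
    """Create a fresh virtual environment and install packages into it."""
    # clear=True rebuilds the environment from scratch if it already exists;
    # with_pip=True bootstraps pip inside it via ensurepip.
    venv.EnvBuilder(with_pip=True, clear=True).create(env_dir)

    # Use the environment's own interpreter so installs stay isolated
    # from the system Python (POSIX layout assumed).
    env_python = Path(env_dir) / "bin" / "python"
    if packages:
        subprocess.run(
            [str(env_python), "-m", "pip", "install", *packages],
            check=True,
        )
    return env_python
```

For example, `create_env("llm-env", ["torch", "transformers"])` would produce a self-contained environment for the deployment work described above; pinning exact versions in a requirements file keeps the setup reproducible across machines.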