

MyRemoteTeam Inc
Expert Rater – AI Model Evaluation & Code Quality
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Expert Rater – AI Model Evaluation & Code Quality, a remote contract position for US-based applicants. It requires a strong software engineering background, expert-level Python skills, and proficiency in at least one additional programming language. Prior code review experience is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 18, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #Ruby #Debugging #ML (Machine Learning) #Python #PyTorch #Web Scraping #TypeScript #Model Evaluation #Computer Science #NLP (Natural Language Processing) #Web Development #PHP #C++ #Documentation #Datasets #JavaScript #TensorFlow
Role description
We’re expanding our technical evaluation team and looking for Expert Raters to review AI-generated code, machine learning outputs, and technical reasoning. This contract role is ideal for software engineers, ML practitioners, and data professionals who enjoy code review, debugging, and working with advanced AI systems.
Responsibilities:
• Review AI-generated code and ML model outputs
• Identify errors, inefficiencies, and deviations from best practices
• Annotate and label code/documentation
• Compare model outputs and rank quality
• Validate datasets and ensure technical accuracy
Requirements:
• Strong background in Software Engineering, Computer Science, or ML/AI
• Expert-level Python (must-have)
• Proficiency in at least one of: JavaScript, TypeScript, Node.js, C/C++, Go, Rust, Shell, Ruby, PHP, Swift, Kotlin
• Familiarity with ML/DL frameworks (TensorFlow, PyTorch, JAX, etc.)
• Knowledge of NLP, CV, RL, or LLM-based systems
• Experience with APIs, web scraping, or web development
• Strong analytical and written communication skills
Preferred:
• Prior experience in code review, annotation, or LLM evaluation
Employment: Contract, Remote (US-based applicants preferred)
For more information, please send your resume and hourly rate to sagar.gupta@truelancer.com