

MyRemoteTeam Inc
Expert Rater – AI Model Evaluation & Code Quality
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a freelance Expert Rater for AI Model Evaluation & Code Quality, requiring 7+ years in software development/ML/AI, proficiency in Python, and familiarity with Git. It offers remote work from India or the USA, with production-based earnings.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 19, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#PHP #Java #Compliance #NLP (Natural Language Processing) #JavaScript #AI (Artificial Intelligence) #ML (Machine Learning) #C++ #Model Evaluation #GIT #Security #Python
Role description
About the Role:
We are looking for Expert AI Code Raters to evaluate AI-generated code and machine learning outputs for a global project.
This is a freelance, task-based role ideal for experienced software engineers, ML professionals, and technical evaluators who enjoy solving problems and improving AI systems.
Responsibilities:
• Evaluate AI-generated code for correctness, efficiency, and readability.
• Identify security or logic issues and ensure compliance with best practices.
• Annotate and label code snippets with precision.
• Compare and rank multiple model outputs based on performance and logic.
• Provide constructive written feedback to improve model outputs.
• Collaborate with engineering teams to refine evaluation guidelines.
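Day to day, this kind of rating work usually means scoring each model output against a rubric and then ranking the candidates. A minimal sketch in Python, where the criterion names, weights, and 1-5 scale are illustrative assumptions rather than this project's actual guidelines:

```python
from dataclasses import dataclass

# Hypothetical rubric: the criteria and weights below are assumptions
# for illustration, not the project's real evaluation guidelines.
WEIGHTS = {"correctness": 0.5, "efficiency": 0.25, "readability": 0.25}

@dataclass
class Rating:
    model_id: str
    scores: dict[str, int]  # criterion -> score on a 1-5 scale
    feedback: str           # constructive written feedback for the model team

    def weighted_score(self) -> float:
        # Combine per-criterion scores into one number using the rubric weights.
        return sum(WEIGHTS[criterion] * score
                   for criterion, score in self.scores.items())

def rank_outputs(ratings: list[Rating]) -> list[str]:
    """Rank model outputs best-first by weighted rubric score."""
    return [r.model_id
            for r in sorted(ratings, key=Rating.weighted_score, reverse=True)]
```

For example, an output that scores 5/3/4 on correctness/efficiency/readability (weighted 4.25) would rank ahead of one scoring 3/5/5 (weighted 4.0), reflecting the heavier weight on correctness.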
Requirements:
• 7+ years of experience in software development / ML / AI.
• Proficiency in Python (mandatory) + one or more: JavaScript, C++, Go, Rust, Java, PHP, or Kotlin.
• Strong foundation in ML/AI concepts (training, evaluation, NLP, or computer vision).
• Familiarity with Git, APIs, and software engineering principles.
• Excellent written English and attention to detail.
Preferred:
• Experience working with Scale AI, Outlier, Appen, RWS, or Surge AI.
• Exposure to LLMs or AI model evaluation.
What We Offer:
• Remote freelance role – work from anywhere in India or the USA.
• Paid assessment + production-based earnings.
• Opportunity to contribute to next-gen AI system development.
Email: sajid.ahmed@truelancer.com