

YO IT CONSULTING
Python Developer - AI Training
Featured Role | Apply directly with Data Freelance Hub
This role is for a Python Developer - AI Training, offering a remote contract of over 6 months. Candidates should have a BS, MS, or PhD in Computer Science, expertise in programming languages, and experience in software engineering and LLMs.
Country: United States
Currency: Unknown
Day rate: Unknown
Date: February 22, 2026
Duration: More than 6 months
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: Remote
Skills detailed: #AI (Artificial Intelligence) #JavaScript #Python #C++ #Programming #Java #Model Evaluation #Computer Science
Role description
Work Mode: Remote
Engagement Type: Independent Contractor
Schedule: Full-Time or Part-Time Contract
Language Requirement: Fluent English
Role Overview
We partner with leading AI teams to improve the quality, usefulness, and reliability of general-purpose conversational AI systems.
This project focuses specifically on evaluating and improving how AI systems reason about code, generate programming solutions, and explain technical concepts across various complexity levels.
The role involves rigorous technical evaluation of AI-generated responses in coding and software engineering contexts.
What You'll Do
- Evaluate LLM-generated responses to coding and software engineering queries for accuracy, reasoning, clarity, and completeness
- Conduct fact-checking using trusted public sources and authoritative references
- Conduct accuracy testing by executing code and validating outputs using appropriate tools (see the sketch after this list)
- Annotate model responses by identifying strengths, areas of improvement, and factual or conceptual inaccuracies
- Assess code quality, readability, algorithmic soundness, and explanation quality
- Ensure model responses align with expected conversational behavior and system guidelines
- Apply consistent evaluation standards by following clear taxonomies, benchmarks, and detailed evaluation guidelines
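As a purely illustrative sketch (the listing does not specify any tooling), accuracy testing of a model-generated solution might look like the following, assuming the response is a self-contained Python function and the reviewer supplies their own test cases. The function `model_two_sum`, the test cases, and the findings format are all hypothetical.

```python
# Hypothetical example of accuracy-testing a model-generated solution.
# The candidate function, test cases, and findings format are illustrative
# assumptions, not part of the actual project tooling.

# Model-generated answer under review (pasted verbatim by the evaluator).
def model_two_sum(nums, target):
    """Return indices of the two numbers that add up to target."""
    seen = {}
    for i, n in enumerate(nums):
        if target - n in seen:
            return [seen[target - n], i]
        seen[n] = i
    return []

# Reviewer-defined test cases, including a no-solution edge case.
test_cases = [
    (([2, 7, 11, 15], 9), [0, 1]),
    (([3, 3], 6), [0, 1]),
    (([1, 2, 3], 100), []),  # no valid pair
]

def evaluate(fn, cases):
    """Run each case, compare against the expected output, collect findings."""
    findings = []
    for args, expected in cases:
        try:
            actual = fn(*args)
            ok = actual == expected
        except Exception as exc:  # crashes are findings too
            actual, ok = repr(exc), False
        findings.append({"input": args, "expected": expected,
                         "actual": actual, "pass": ok})
    return findings

if __name__ == "__main__":
    for finding in evaluate(model_two_sum, test_cases):
        print(finding)
```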
Who You Are
- You hold a BS, MS, or PhD in Computer Science or a closely related field
- You have significant real-world experience in software engineering or related technical roles
- You are an expert in at least one relevant programming language (e.g., Python, Java, C++, JavaScript, Go, Rust)
- You are able to solve HackerRank or LeetCode Medium- and Hard-level problems independently
- You have experience contributing to well-known open-source projects, including merged pull requests
- You have significant experience using LLMs while coding and understand their strengths and failure modes
- You have strong attention to detail and are comfortable evaluating complex technical reasoning and identifying subtle bugs or logical flaws
Nice-to-Have Specialties
- Prior experience with RLHF, model evaluation, or data annotation work
- Track record in competitive programming
- Experience reviewing code in production environments
- Familiarity with multiple programming paradigms or ecosystems
- Experience explaining complex technical concepts to non-expert audiences
What Success Looks Like
- You identify incorrect logic, inefficiencies, edge cases, or misleading explanations in model-generated code, technical concepts, and system design discussions
- Your feedback improves the correctness, robustness, and clarity of AI coding outputs
- You deliver reproducible evaluation artifacts that strengthen model performance (an illustrative artifact format is sketched after this list)
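The listing does not define what an evaluation artifact contains; as a hypothetical illustration, one reproducible format is a structured record that ties a verdict to the exact response reviewed and the tests that were run. Every field name below is an assumption, not a format defined by the project.

```python
# Hypothetical reproducible evaluation record; field names are illustrative
# assumptions, not a format defined by the project.
import hashlib
import json

def make_artifact(task_id, model_response, findings, verdict, rationale):
    """Bundle an evaluation into a self-describing, reproducible record."""
    return {
        "task_id": task_id,
        # Hash of the exact response text so the review can be traced back.
        "response_sha256": hashlib.sha256(model_response.encode()).hexdigest(),
        "test_findings": findings,  # e.g., output of an evaluation harness
        "verdict": verdict,
        "rationale": rationale,
    }

if __name__ == "__main__":
    artifact = make_artifact(
        task_id="example-001",
        model_response="def model_two_sum(nums, target): ...",
        findings=[{"input": [[1, 2, 3], 100], "pass": False}],
        verdict="fails edge cases",
        rationale="Returns None instead of [] when no pair sums to target.",
    )
    print(json.dumps(artifact, indent=2))
```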






