

GSOBA
AI Trainer - $75-$100/hour <Test Job Viewable by LinkedIn Employees Only>
Featured Role | Apply direct with Data Freelance Hub
This role is for an AI Trainer offering $75-$100/hour for part-time, remote, project-based work lasting more than 6 months. Key skills include proficiency in Python and JavaScript/TypeScript, along with experience in AI/ML systems, software engineering, and testing frameworks.
Country
United States
Currency
$ USD
Day rate
800
Date
April 18, 2026
Duration
More than 6 months
Location
Remote
Contract
Unknown
Security
Unknown
Location detailed
United States
Skills detailed
#JavaScript #TypeScript #Java #XUnit #JUnit #Code Reviews #AI (Artificial Intelligence) #ML (Machine Learning) #C# #Python #Migration #Pytest
Role description
About the project
We are seeking a team of skilled Software Engineers to work as project consultants in our AI Labor Marketplace. This is not a full-time employment position; you will be engaged as an expert project consultant on a contract basis.
Location: U.S.-based experts only
Engagement: Part-time, project-based expert evaluation work
Work Type: Remote
Contributors will design and evaluate realistic software engineering tasks, including bug resolution, feature implementation, refactoring/migration, and test generation. Work includes both creating complex coding scenarios and reviewing peer submissions for quality and accuracy.
This is a project-based consultant role. Consultants will be paid on a per-project basis; hourly rates are estimates based on anticipated completion time. Consultants control their own schedule, provide their own tools, and may simultaneously provide services to other vendors/employers (subject to those vendors' allowances).
Responsibilities:
Contributors will:
β’ Design and implement multi-file coding tasks across bug fixing, feature development, refactoring, and testing
β’ Write clear natural-language specifications and reference implementations
β’ Develop and extend unit and integration test suites
β’ Review peer-generated tasks for correctness, clarity, and realism
β’ Identify edge cases, ambiguities, and potential failure modes
β’ Ensure alignment between specifications, code, and expected outputs
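As a rough illustration of the kind of deliverable described above (this example is not from the posting; the function and spec are hypothetical), a minimal task pairs a natural-language specification with a reference implementation and tests that probe edge cases:

```python
# Hypothetical evaluation task: spec, reference implementation, and tests.
# All names here are illustrative assumptions, not part of the actual work.

def normalize_username(raw: str) -> str:
    """Spec: trim surrounding whitespace, lowercase the string, and
    collapse internal runs of whitespace into single underscores."""
    parts = raw.strip().lower().split()
    return "_".join(parts)

# Tests written in pytest style (also runnable with plain asserts):
def test_normalize_username():
    assert normalize_username("  Ada Lovelace ") == "ada_lovelace"
    assert normalize_username("Grace") == "grace"
    # Edge case: whitespace-only input should yield an empty string
    assert normalize_username("   ") == ""

test_normalize_username()
```

A reviewer checking such a task would verify that the spec, the implementation, and the expected test outputs all agree, exactly the alignment work the responsibilities list calls for.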
Expected Outcomes:
β’ High-quality, production-realistic coding tasks
β’ Complete and correct reference implementations
β’ Robust test coverage and validation artifacts
β’ Structured, actionable peer review feedback
Qualifications:
β’ Advanced professional written proficiency in English
β’ 3β5 years of professional software engineering experience
β’ Strong proficiency in Python and JavaScript/TypeScript; working knowledge of Java, C#, or Go
β’ Backend or fullβstack development experience in production systems
β’ Experience with testing frameworks (e.g., pytest, Jest, JUnit, xUnit, Go testing)
β’ Proven ability to debug and navigate large, multiβfile codebases
β’ Experience with code reviews, refactoring, and production migrations
β’ Prior experience with data annotation, evaluation, or structured labeling for AI/ML systems
β’ Experience defining solution trajectories and writing evaluation rubrics
β’ Familiarity with LLM evaluation, prompt design, or benchmarking workflows
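Since the qualifications mention writing evaluation rubrics, here is a minimal sketch of what a structured rubric might look like; the criteria and weights are illustrative assumptions, not the project's actual rubric:

```python
# Hypothetical rubric for peer-reviewing a submitted coding task.
# Criteria and weights are illustrative assumptions only.
RUBRIC = {
    "correctness": 0.4,    # reference implementation satisfies the spec
    "clarity": 0.2,        # spec is unambiguous natural language
    "realism": 0.2,        # task resembles production engineering work
    "test_coverage": 0.2,  # tests exercise edge cases and failure modes
}

def score(ratings: dict) -> float:
    """Weighted average of per-criterion ratings on a 0-1 scale."""
    return sum(RUBRIC[k] * ratings[k] for k in RUBRIC)

print(score({"correctness": 1.0, "clarity": 1.0,
             "realism": 0.5, "test_coverage": 0.5}))
```

Encoding the rubric as data rather than prose keeps peer-review feedback structured and comparable across reviewers, which matches the "structured, actionable peer review feedback" outcome listed above.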



