

TransPerfect
Coding Specialist
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Coding Specialist, UK Remote, on a freelance contract starting ASAP. Requires proficiency in Python and additional languages, experience in ML/AI, and skills in data validation, code annotation, and quality assessment. Background check needed.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 4, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Compliance #SQL (Structured Query Language) #Flask #ML (Machine Learning) #HTML (Hypertext Markup Language) #Datasets #Web Scraping #GitHub #Data Science #Reinforcement Learning #NoSQL #NumPy #Documentation #Ruby #NLP (Natural Language Processing) #JavaScript #PHP #AI (Artificial Intelligence) #Quality Assurance #Debugging #Pandas #Database Management #React #TypeScript #Programming #TensorFlow #Python #Strategy #Matplotlib #Visualization #GIT #PyTorch #Deep Learning #C++ #Data Analysis
Role description
Work Location: UK Remote
Engagement Model: Freelancer / Independent Contractor
Start Date: ASAP
Qualification Requirements: Successful completion of a role-specific written assessment and a background check.
Main Responsibilities:
• Model Quality Assessment: Evaluate the quality of AI model responses involving code, machine learning, and AI content, identifying errors, inefficiencies, and non-compliance with established standards.
• Code Annotation and Labeling: Accurately generate, annotate, and label code snippets, algorithms, and technical documentation according to project-specific guidelines.
• Review and Feedback: Provide detailed, constructive feedback on model outputs and related artifacts.
• Comparative Analysis: Compare multiple outputs and rank them based on criteria such as correctness, efficiency, readability, and adherence to programming best practices.
• Data Validation: Validate and correct datasets to ensure high-quality data for model training and evaluation.
• Collaboration: Work closely with data scientists and engineers to identify new annotation guidelines, resolve ambiguities, and contribute to the overall project strategy.
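The comparative-analysis responsibility above could be sketched roughly as follows. This is a minimal illustration only: the rubric dimensions, weights, and candidate names are hypothetical assumptions, not the project's actual evaluation criteria.

```python
# Illustrative sketch: rank candidate model outputs by weighted rubric scores.
# Dimensions and weights below are assumptions, not real project guidelines.

WEIGHTS = {"correctness": 0.5, "efficiency": 0.2, "readability": 0.2, "best_practices": 0.1}

def score(rubric: dict) -> float:
    """Weighted sum of per-dimension scores (each dimension scored 0-5)."""
    return sum(WEIGHTS[dim] * rubric[dim] for dim in WEIGHTS)

def rank(candidates: dict) -> list:
    """Return candidate IDs ordered best-first by weighted score."""
    return sorted(candidates, key=lambda cid: score(candidates[cid]), reverse=True)

candidates = {
    "response_a": {"correctness": 5, "efficiency": 3, "readability": 4, "best_practices": 4},
    "response_b": {"correctness": 3, "efficiency": 5, "readability": 5, "best_practices": 5},
}
print(rank(candidates))  # ['response_a', 'response_b']
```

Here "response_a" ranks first despite lower readability because correctness carries the largest weight, which mirrors how such rubrics typically prioritize criteria.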
Job requirements
Requirements
• Programming: Proficient in Python (required) and at least one additional language such as JavaScript, Node.js, TypeScript, C/C++, or Rust. Bonus for experience with Go, Swift, Ruby, PHP, or Kotlin.
• Web Technologies: Experience with web scraping, APIs, HTML/CSS/JavaScript, and both frontend (React) and backend (Node.js, Flask) development.
• Machine Learning & AI: Knowledge of ML model development, deep learning frameworks (TensorFlow, PyTorch), NLP, reinforcement learning, computer vision, and game AI.
• Data Science & Engineering: Skilled in data analysis, visualization (Pandas, Matplotlib, NumPy), and database management (SQL, NoSQL).
• Algorithms & Math: Understanding of general and specialized algorithms, optimization, and problem-solving techniques.
• Software Engineering Practices: Familiarity with Git/GitHub, clean coding principles, software design patterns, and debugging.
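The data-validation work described above could look roughly like the sketch below. The field names, allowed languages, and rules are illustrative assumptions, not the project's real annotation schema.

```python
# Illustrative sketch: validate annotation records before they enter a
# training set. All field names and rules here are hypothetical.

REQUIRED = {"snippet", "label", "language"}
KNOWN_LANGUAGES = {"python", "javascript", "typescript", "cpp", "rust"}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record is clean."""
    problems = []
    missing = REQUIRED - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    lang = record.get("language", "").lower()
    if lang and lang not in KNOWN_LANGUAGES:
        problems.append(f"unknown language: {lang}")
    if not record.get("snippet", "").strip():
        problems.append("empty snippet")
    return problems

clean = {"snippet": "print('hi')", "label": "correct", "language": "python"}
dirty = {"snippet": "   ", "language": "COBOL"}
print(validate(clean))  # []
print(validate(dirty))  # three problems: missing label, unknown language, empty snippet
```

Returning a list of problems rather than a boolean lets a reviewer correct every issue in one pass, which suits the "validate and correct datasets" workflow.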
Preferred Qualifications:
• Experience with AI/ML concepts, particularly with large language models (LLMs) and code generation.
• Familiarity with various programming paradigms (e.g., object-oriented, functional).
• Experience with code review in a professional or academic setting.
• Experience in data annotation or similar quality assurance roles.






