Alignerr

Data Science Expert - AI Content Specialist

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a fully remote Data Science Expert - AI Content Specialist contract position, offering $[pay rate] for 10–40 hours/week. It requires a Master's or PhD in a quantitative field and strong skills in Python, R, SQL, and AI evaluation.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
640
-
🗓️ - Date
April 30, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#Python #Data Quality #Datasets #PyTorch #Data Science #TensorFlow #Computer Science #Deep Learning #Statistics #R #AI (Artificial Intelligence) #Supervised Learning #ML (Machine Learning) #Unsupervised Learning #Monitoring #Spark (Apache Spark) #Data Engineering #Libraries #Hadoop #SQL (Structured Query Language) #SQL Queries #Big Data #NLP (Natural Language Processing)
Role description
Data Science Expert – AI Content Specialist

About The Role

What if your expertise in machine learning, statistics, and data engineering could directly influence how the world's most advanced AI systems think and reason? We're partnering with leading AI research labs to find data science experts who can stress-test, evaluate, and improve cutting-edge language models, and we need people who truly know their stuff. This is a fully remote, flexible contract role designed for data scientists, ML engineers, and quantitative researchers who want to do meaningful, intellectually stimulating work on their own schedule.

• Organization: Alignerr
• Type: Hourly Contract
• Location: Remote (Global)
• Commitment: 10–40 hours/week

What You'll Do

• Design Advanced Challenges: Craft rigorous data science problems spanning hyperparameter optimization, Bayesian inference, cross-validation strategies, dimensionality reduction, and more to expose the limits of AI reasoning
• Author Ground-Truth Solutions: Develop precise, step-by-step technical solutions, including Python/R scripts, SQL queries, and mathematical derivations, that serve as the definitive benchmark for AI responses
• Audit AI-Generated Code: Critically evaluate AI outputs that use libraries such as Scikit-Learn, PyTorch, and TensorFlow, assessing correctness, efficiency, and best practices
• Identify Reasoning Failures: Spot and document logical errors in AI reasoning, such as data leakage, overfitting, or improper handling of imbalanced datasets, and provide structured, actionable feedback to improve model performance

Who You Are

• Holds or is pursuing a Master's or PhD in Data Science, Statistics, Computer Science, or a related quantitative field
• Has strong foundational knowledge across core areas: supervised/unsupervised learning, deep learning, NLP, or big data technologies (Spark, Hadoop)
• Can communicate complex algorithmic concepts and statistical findings clearly and concisely in writing
• Is highly precise and detail-oriented when reviewing code syntax, mathematical notation, and statistical conclusions
• Is self-motivated and comfortable working independently in an asynchronous environment
• No prior AI industry experience required

Nice To Have

• Experience with data annotation, data quality evaluation, or AI evaluation frameworks
• Familiarity with production-level ML workflows: MLOps, CI/CD for models, model monitoring
• Background in academic research or technical writing

Why Join Us

• Work directly with industry-leading AI models and contribute to their development at a foundational level
• Fully remote and asynchronous: work on your own schedule, from anywhere in the world
• Freelance autonomy with consistent, intellectually engaging task-based work
• Collaboration with a global network of experts contributing to the frontier of AI research
• Potential for ongoing work and contract renewals as new projects launch
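The data-leakage failure mentioned in the role description can be made concrete with a short sketch. The snippet below is purely illustrative (the dataset, model, and parameters are hypothetical, not drawn from this posting): it contrasts a leaky cross-validation setup, where a scaler is fit on the full dataset before splitting, with the correct scikit-learn Pipeline approach, where preprocessing is refit inside each training fold.

```python
# Illustrative only: a typical data-leakage bug a code auditor might flag.
# Dataset and model choices are hypothetical, not taken from the posting.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Leaky: the scaler computes statistics over ALL rows, including the
# held-out folds, before cross-validation ever splits the data.
X_scaled = StandardScaler().fit_transform(X)
leaky = cross_val_score(LogisticRegression(max_iter=1000), X_scaled, y, cv=5)

# Correct: the Pipeline refits the scaler on each training fold only.
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clean = cross_val_score(pipe, X, y, cv=5)

print(f"leaky CV accuracy: {leaky.mean():.3f}")
print(f"clean CV accuracy: {clean.mean():.3f}")
```

With plain standardization the score gap is often small, but the same pattern applied to feature selection or target encoding can inflate cross-validation scores substantially, which is exactly the kind of subtle error the audit work targets.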