Alignerr

Data Science Expert - AI Content Specialist

⭐ - Featured Role | Apply directly with Data Freelance Hub
This Data Science Expert - AI Content Specialist role offers a flexible contract for 10–40 hours/week at an hourly pay rate. Key requirements include a Master's or PhD in a quantitative field, expertise in machine learning, and strong coding skills.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
640
-
🗓️ - Date
April 18, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Sheffield, TX
-
🧠 - Skills detailed
#Libraries #Big Data #Supervised Learning #SQL Queries #Computer Science #Data Quality #AI (Artificial Intelligence) #Monitoring #Unsupervised Learning #NLP (Natural Language Processing) #Python #Statistics #Deep Learning #Data Engineering #ML (Machine Learning) #TensorFlow #SQL (Structured Query Language) #R #Data Science #Spark (Apache Spark) #Hadoop #PyTorch
Role description
Data Science Expert – AI Content Specialist

About The Role
What if your deep knowledge of machine learning, statistics, and data engineering could directly shape how the next generation of AI reasons through complex problems? We're looking for Data Science Experts to challenge, evaluate, and improve cutting-edge AI models, helping them think more rigorously, reason more accurately, and solve harder problems. This is a fully remote, flexible contract role built for working data scientists, researchers, and quantitative professionals who want to apply their expertise in a high-impact, asynchronous environment.
• Organization: Alignerr
• Type: Hourly Contract
• Location: Remote
• Commitment: 10–40 hours/week

What You'll Do
• Design Advanced Challenges: Create complex, domain-specific data science problems spanning hyperparameter optimization, Bayesian inference, cross-validation strategies, dimensionality reduction, and more
• Author Ground-Truth Solutions: Develop rigorous, step-by-step reference solutions, including Python/R scripts, SQL queries, and mathematical derivations, that serve as the gold standard for AI evaluation
• Audit AI-Generated Code: Evaluate AI outputs that use libraries such as Scikit-Learn, PyTorch, and TensorFlow for technical correctness, efficiency, and best practices
• Sharpen AI Reasoning: Identify logical failures in AI-generated analysis, such as data leakage, overfitting, or mishandled class imbalance, and provide structured feedback that improves how models think and respond

Who You Are
• A Master's (in progress or completed) or PhD in Data Science, Statistics, Computer Science, or a related quantitative field
• Strong foundational expertise in areas such as supervised/unsupervised learning, deep learning, NLP, or big data technologies (Spark, Hadoop)
• Able to communicate complex algorithmic and statistical concepts clearly in writing
• Precise and detail-oriented when reviewing code syntax, mathematical notation, and statistical conclusions
• Self-motivated and comfortable working independently in an async, task-based environment
• No prior AI or annotation experience required

Nice to Have
• Experience with data annotation, data quality, or AI evaluation workflows
• Familiarity with production-level data science practices: MLOps, CI/CD for models, model monitoring
• Background in technical writing, research, or academic publishing

Why Join Us
• Work directly with industry-leading AI research labs on frontier model development
• Fully remote and asynchronous: work when and where it suits you
• Freelance autonomy with meaningful, intellectually stimulating work
• Contribute to AI systems that will shape how data science is understood and applied at scale
• Potential for ongoing contract renewals as new projects launch
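To illustrate the kind of reasoning flaw this role asks reviewers to catch, here is a minimal, hypothetical sketch (not part of the job posting) of data leakage in cross-validation: fitting a scaler on the full dataset before splitting lets test-fold statistics influence training, while wrapping preprocessing in a scikit-learn Pipeline keeps each fold's preprocessing fit on training data only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic classification data for illustration
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Leaky pattern: the scaler is fit on ALL rows, so every
# cross-validation training fold has "seen" the held-out fold's statistics
X_leaky = StandardScaler().fit_transform(X)
leaky_scores = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=5)

# Correct pattern: the Pipeline refits the scaler inside each training fold,
# so the held-out fold never influences preprocessing
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clean_scores = cross_val_score(pipe, X, y, cv=5)

print(f"leaky CV accuracy: {leaky_scores.mean():.3f}")
print(f"clean CV accuracy: {clean_scores.mean():.3f}")
```

On small datasets or stronger preprocessing (e.g., feature selection), the leaky variant can report optimistically biased scores, which is exactly the kind of subtle error an auditor would flag in AI-generated analysis.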