Alignerr

Data Scientist (Masters)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a fully remote Data Scientist (Masters) position on an hourly contract requiring 10–40 hours/week, focused on AI model evaluation. Key skills include machine learning, statistics, and data engineering; no prior AI industry experience is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
640
🗓️ - Date
April 13, 2026
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Seattle, WA
🧠 - Skills detailed
#Computer Science #Data Engineering #R #Big Data #Data Quality #Data Science #Quality Assurance #Model Evaluation #Data Analysis #NLP (Natural Language Processing) #ML (Machine Learning) #Datasets #TensorFlow #Spark (Apache Spark) #Statistics #Hadoop #Unsupervised Learning #Libraries #AI (Artificial Intelligence) #SQL (Structured Query Language) #Supervised Learning #Python #Deep Learning #PyTorch #SQL Queries
Role description
Data Scientist (Masters) — AI Data Trainer

About The Role
What if your deep knowledge of machine learning, statistics, and data engineering could directly shape how the world's most advanced AI systems reason and solve problems? We're looking for Master's-level data scientists to challenge, audit, and refine cutting-edge AI models — exposing their blind spots and helping build smarter, more reliable systems. This is a fully remote, flexible contract role. No prior AI industry experience is needed — just rigorous domain expertise and a sharp eye for technical quality.
• Organization: Alignerr
• Type: Hourly Contract
• Location: Remote
• Commitment: 10–40 hours/week

What You'll Do
• Design Advanced Challenges: Create complex, domain-rich data science problems spanning hyperparameter optimization, Bayesian inference, cross-validation strategies, dimensionality reduction, and more — problems that genuinely stress-test AI reasoning
• Author Ground-Truth Solutions: Develop rigorous, step-by-step reference solutions, including Python/R scripts, SQL queries, and mathematical derivations, that serve as the definitive benchmark for AI outputs
• Audit AI-Generated Code: Evaluate code produced by AI models using libraries such as scikit-learn, PyTorch, and TensorFlow — assessing correctness, efficiency, and best practices
• Identify Reasoning Failures: Spot logical flaws in AI outputs such as data leakage, overfitting, improper handling of imbalanced datasets, and flawed statistical conclusions (see the sketch at the end of this description)
• Provide Structured Feedback: Document failure modes clearly and systematically so model teams can directly improve AI reasoning and reliability

Who You Are
• Pursuing or holding a Master's or PhD in Data Science, Statistics, Computer Science, or another quantitative field with a heavy emphasis on data analysis
• Strong foundational expertise in supervised/unsupervised learning, deep learning, statistical inference, or big data technologies (Spark, Hadoop, etc.)
• Able to communicate complex algorithmic concepts and statistical results clearly in writing
• Naturally detail-oriented — you catch errors in code syntax, mathematical notation, and statistical reasoning that others miss
• Self-motivated and comfortable working independently on technical tasks
• No prior AI or data annotation experience required

Nice to Have
• Prior experience with data annotation, data quality assurance, or model evaluation systems
• Proficiency in production-level data science workflows such as MLOps or CI/CD for models
• Familiarity with NLP techniques or large language model evaluation
• Background in academic research or technical writing

Why Join Us
• Work directly with industry-leading AI research labs on genuinely frontier problems
• Fully remote and flexible — work when and where it suits you
• Freelance autonomy with meaningful, intellectually stimulating, task-based work
• Make a tangible impact on how AI understands and solves complex data science problems
• Potential for ongoing work and contract extension as new projects launch
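
Illustrative Sketch
To make the audit work concrete, here is a minimal, hypothetical Python sketch of the reasoning failure named above under Identify Reasoning Failures: feature selection fit on the full dataset before cross-validation, which leaks label information from the held-out folds and inflates the reported score. The toy data, parameter values, and choice of scikit-learn are assumptions for illustration only, not requirements of the role.

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical toy data: pure-noise features and random labels, so the true
# achievable accuracy is roughly 50%.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))
y = rng.integers(0, 2, size=60)

# Flawed pattern sometimes produced by AI models: feature selection is fit on
# ALL rows before cross-validation, so label information from the held-out
# folds leaks into the chosen features and the CV score is inflated.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=5)

# Correct pattern: keep selection inside a Pipeline so it is re-fit on the
# training folds only within each split.
clean_model = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])
clean = cross_val_score(clean_model, X, y, cv=5)

print(f"CV accuracy with leakage:    {leaky.mean():.2f}")
print(f"CV accuracy without leakage: {clean.mean():.2f}")

On purely random labels, the leaky variant typically reports accuracy well above chance while the pipelined variant falls back toward 50%, which is exactly the kind of gap an auditor in this role would be expected to flag and document.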