

Swoon
Data Science Eval QC Lead
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Science Eval QC Lead contractor on a 6-month contract, offering "$X per hour". It requires 3+ years in data science or a related field, strong attention to detail, proficiency in SQL and Python, and experience with data quality and database architecture.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 23, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Data Access #Automation #Data Engineering #Data Science #Scala #SQL (Structured Query Language) #Schema Design #Python #Metadata #AI (Artificial Intelligence) #Database Architecture #Monitoring #Data Quality #Documentation
Role description
Our leading AI client is looking for its next Evaluation Consultant/Quality Lead (Data Science) contractor!
Scope of responsibilities
• Incoming Data Quality Control (QC): Review incoming tasks, deliverables, and rubrics to confirm they meet campaign standards for complexity, feasibility, completeness, clarity, and consistency.
• Review materials for domain realism, including realistic analytics or data infrastructure workflows, appropriate assumptions about data availability and schema design, reasonable timelines, and deliverable expectations that match how work is actually performed in data science and database architecture contexts.
• Identify common failure modes such as underspecified business questions, unrealistic data access assumptions, missing schema or metadata context, unclear success criteria, or deliverables that do not match the underlying task.
• Provide actionable feedback to contributors and reviewers, and track rework or resubmission through closure.
• Help ensure that rubric criteria are specific, fair, and aligned to the actual task requirements rather than generic preferences.
• Flag cases where a task depends on inaccessible systems, privileged production data, or non-portable internal knowledge unless such dependencies are explicitly intended and supported.
Data Collection Process Monitoring
• Monitor campaign health and throughput, including submission volume, rework rates, bottlenecks, turnaround time, and recurring quality issues.
• Proactively flag blocked contributors or reviewers, repeated sources of confusion, and drift from agreed standards or templates.
• Surface systematic issues to leads and propose concrete mitigations, such as checklist updates, examples, clarifications, or changes to templates and guidance.
• Maintain lightweight QC logs, issue trackers, or audit trails to support retrospectives, analysis, and process improvements.
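The monitoring duties above can be very lightweight in practice. As a sketch of what tracking rework rates from a QC log might look like, here is a few lines of Python; the log schema, status values, and 20% alert threshold are illustrative assumptions, not the client's actual format:

```python
from collections import Counter

# Hypothetical QC log: each submission records its final review status.
# Field names and statuses are assumptions for illustration only.
log = [
    {"task_id": "T-001", "status": "accepted"},
    {"task_id": "T-002", "status": "rework"},
    {"task_id": "T-003", "status": "accepted"},
    {"task_id": "T-004", "status": "rework"},
    {"task_id": "T-005", "status": "accepted"},
]

def rework_rate(entries):
    """Fraction of submissions sent back for rework."""
    counts = Counter(e["status"] for e in entries)
    return counts["rework"] / len(entries)

rate = rework_rate(log)
print(f"rework rate: {rate:.0%}")  # → rework rate: 40%
if rate > 0.20:  # assumed alert threshold
    print("flag: rework rate above threshold; escalate to leads")
```

A check like this, run weekly over the issue tracker export, is usually enough to surface the drift and bottleneck patterns the role is asked to flag.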
Contributor Support and Coordination (as needed)
• Answer contributor questions to improve first-pass quality and reduce avoidable back-and-forth.
• Coordinate with reviewers and subject matter experts on edge cases and escalate ambiguous decisions quickly.
• Provide weekly or ad hoc summaries of quality themes, common issue categories, open risks, and recommended updates to instructions or examples.
• Help calibrate contributors on what “good” looks like for tasks and rubrics in the target domain.
Technical Work (light to moderate)
• Use spreadsheets, SQL, or simple Python scripts to spot anomalies, review metadata completeness, identify broken links or missing files, and support workflow optimization or automation.
• Assist with basic data hygiene and organization of uploaded materials, naming conventions, and record completeness.
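To make the "light to moderate" technical bar concrete, a metadata-completeness check of the kind described above might look like the following simple Python script; the CSV columns and required-field list are hypothetical, standing in for whatever task-tracking format the campaign actually uses:

```python
import csv
import io

# Hypothetical task-metadata export; column names are illustrative only.
SAMPLE = """task_id,rubric_id,deliverable_url
T-001,R-12,https://example.com/t001
T-002,,https://example.com/t002
T-003,R-14,
"""

REQUIRED = ["task_id", "rubric_id", "deliverable_url"]

def incomplete_rows(text):
    """Return (task_id, missing_fields) for rows with blank required fields."""
    problems = []
    for row in csv.DictReader(io.StringIO(text)):
        missing = [f for f in REQUIRED if not row.get(f, "").strip()]
        if missing:
            problems.append((row["task_id"], missing))
    return problems

print(incomplete_rows(SAMPLE))
# → [('T-002', ['rubric_id']), ('T-003', ['deliverable_url'])]
```

Spreadsheet filters or a short SQL `WHERE col IS NULL OR col = ''` query would accomplish the same audit; the point is spotting gaps quickly, not building tooling.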
Additional tasks
• Draft or revise campaign documentation, examples, QC checklists, and “good vs. bad” examples for the target domain.
• Participate in targeted QA sweeps or audits for high-risk slices, such as new task types, new contributors, or newly introduced rubric formats.
• Assist with tagging, categorization, and structured review of tasks or deliverables for downstream analysis.
Qualifications / profile
• 3+ years of professional experience in data science, analytics, data engineering, database architecture, or a closely related function.
• Strong judgment about what realistic, high-quality work looks like in those settings.
• Excellent attention to detail and comfort applying standards and rubrics consistently.
• Able to write clear, direct feedback that improves quality quickly.
• Comfortable working in spreadsheets, SQL, and basic Python.
• Able to work in ambiguous, fast-moving environments and make pragmatic judgment calls.






