

10a Labs
Data Assistant
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a 3-month Data Assistant contract (pay rate not listed). Located remotely, it requires basic SQL and Python skills, experience with Jupyter notebooks, and comfort with sensitive content related to mental health and safety in AI systems.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 10, 2025
🕒 - Duration
3 to 6 months
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#Libraries #Automation #Data Cleaning #API (Application Programming Interface) #AI (Artificial Intelligence) #NumPy #Datasets #SQL Queries #Python #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #Pandas #Jupyter #Security
Role description
About 10a Labs: 10a Labs is an applied research and AI security company trusted by AI unicorns, Fortune 10 companies, and U.S. tech leaders. We combine proprietary technology, deep expertise, and multilingual threat intelligence to detect abuse at scale. We also deliver state-of-the-art red teaming across high-impact security and safety challenges.
Data Assistant: Help Advance AI Safety Through Precision Cleaning
At the forefront of applied AI safety, our team works to ensure that advanced AI systems protect and empower people—not harm them. We believe that safeguarding mental health and human agency is just as essential as preventing technical failure. If you're driven by real-world impact and have a keen eye for detail, this short-term role offers a chance to contribute to the foundational work that enables large-scale behavioral risk detection in AI systems.
About The Role
We are seeking a short-term contractor (3 months) to support an AI Lab safety team in preparing datasets for future automation and analysis. This role centers on data cleaning, labeling, and organization, with a focus on flagging behavioral patterns in historical AI data. Your work will directly support efforts to improve detection of safety-relevant behaviors—such as self-harm, manipulation, and emotional escalation—in large language models.
This role is ideal for someone who enjoys methodical, detail-oriented work, is comfortable navigating large datasets, and can work within a technical environment. Basic SQL and Python experience using Jupyter notebooks is required.
Please note: This work may involve reviewing text that references sensitive topics, including mental health struggles, suicidal ideation, addiction, or interpersonal violence. Candidates should be comfortable reading and labeling this kind of material.
In This Role, You Will
• Label historical conversation data: Use internal tools and clear guidelines to tag safety-relevant behaviors in AI data, including signs of harmful behavior.
• Query, clean, and organize datasets: Use basic SQL and Python to extract, clean, and review relevant data for labeling and analysis.
• Work in Jupyter notebooks: Navigate and run your work within Jupyter notebooks to support traceability and collaboration with technical teams.
• Collaborate with cross-functional partners: Work alongside analysts, researchers, and engineers to refine labeling criteria and surface edge cases.
• Support labeling tool improvements: Provide structured feedback on labeling workflows to help improve future annotation tools and processes.
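The query-clean-label loop described above can be sketched in a few lines of pandas. The column names, keyword list, and flagging rule below are purely illustrative stand-ins, not the team's actual tools or guidelines:

```python
import pandas as pd

# Hypothetical raw export of conversation turns; columns are illustrative.
raw = pd.DataFrame({
    "conversation_id": [1, 1, 2, 3],
    "text": ["I feel hopeless ", None, "What's the weather?", "  you should stop eating  "],
})

# Basic cleaning: drop empty rows and trim stray whitespace.
clean = raw.dropna(subset=["text"]).copy()
clean["text"] = clean["text"].str.strip()

# Toy keyword flag standing in for the real labeling guidelines.
SAFETY_TERMS = ["hopeless", "stop eating"]
clean["flagged"] = clean["text"].str.lower().apply(
    lambda t: any(term in t for term in SAFETY_TERMS)
)

print(clean[["conversation_id", "flagged"]])
```

In practice the keyword rule would be replaced by the team's internal labeling tools and written guidelines; the point is only that the day-to-day work lives at this level of pandas fluency.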
You Might Thrive in This Role If You…
• Have a meticulous eye for detail and enjoy repetitive but high-stakes tasks where accuracy matters.
• Are organized and systematic, with a track record of following annotation guidelines and managing large volumes of information consistently.
• Are comfortable writing and interpreting basic SQL queries to retrieve and filter relevant data.
• Understand how to use Python to clean and label data, with strong Python fundamentals and experience with libraries like pandas and NumPy. Bonus points if you’ve worked with the Google Drive API.
• Have hands-on experience using Jupyter notebooks to review data and run workflows.
• Communicate clearly and take initiative when something looks off or doesn’t align with the guidelines.
• Care about mental health, safety, and ethical tech—and want to support systems that take these seriously.
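The kind of basic retrieve-and-filter SQL the role calls for might look like the query below, shown here against an in-memory SQLite table; the `messages` schema and values are hypothetical, invented for illustration:

```python
import sqlite3

# In-memory database with a hypothetical `messages` table standing in
# for the real data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER, model TEXT, flagged INTEGER)")
conn.executemany(
    "INSERT INTO messages VALUES (?, ?, ?)",
    [(1, "model-a", 1), (2, "model-a", 0), (3, "model-b", 1)],
)

# A basic retrieve-and-filter query: flagged rows for one model.
rows = conn.execute(
    "SELECT id FROM messages WHERE model = ? AND flagged = 1",
    ("model-a",),
).fetchall()
print(rows)
```

Writing and reading simple `SELECT ... WHERE` queries like this, then handing the results off to pandas or a labeling tool, is the level of SQL comfort expected.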
This is a great opportunity to support cutting-edge safety work in AI while building experience with real-world data annotation workflows in a technical research setting.