

Alignerr
Data Security & DLP Analyst
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Security & DLP Analyst on a remote, hourly contract requiring 10–40 hours/week. It calls for 2+ years of experience in data security, hands-on familiarity with DLP tools, and knowledge of cloud security. Experience with frameworks such as NIST or GDPR is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
480
-
🗓️ - Date
April 13, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Atlanta, GA
-
🧠 - Skills detailed
#Data Security #GCP (Google Cloud Platform) #Cloud #Classification #AWS (Amazon Web Services) #GDPR (General Data Protection Regulation) #Security #SaaS (Software as a Service) #Compliance #AI (Artificial Intelligence) #Azure
Role description
Data Security & DLP Analyst (AI Training)
About The Role
Your instincts for spotting data leaks, policy gaps, and DLP failures are exactly what's needed to make AI smarter about security. At Alignerr, we partner with the world's leading AI research labs to build and train cutting-edge AI models — and we need practitioners who understand how data exposure actually happens in the real world.
This is your opportunity to apply your security expertise in a new and meaningful way: helping frontier AI systems reason accurately about sensitive information, data risk, and protection strategies.
• Organization: Alignerr
• Type: Hourly Contract
• Location: Remote
• Commitment: 10–40 hours/week
What You'll Do
• Analyze realistic data security and DLP scenarios spanning cloud, SaaS, and enterprise environments
• Classify data sensitivity levels, exposure paths, and policy violations
• Evaluate prevention, detection, and incident response strategies for accuracy and completeness
• Generate, label, and validate realistic data security cases used to train and benchmark AI systems
• Help AI models develop a nuanced understanding of how sensitive information is exposed, detected, and protected
Who You Are
• 2+ years of experience in data security, compliance, or security operations
• Hands-on familiarity with DLP tools, data classification frameworks, and privacy or regulatory requirements
• Practical understanding of how data risk plays out in real organizational environments
• Able to assess scenarios critically and communicate findings clearly in writing
• Self-directed and comfortable working asynchronously on task-based assignments
Nice to Have
• Experience with cloud security platforms (AWS, Azure, GCP) or SaaS security tooling
• Background in frameworks such as NIST, ISO 27001, SOC 2, GDPR, or HIPAA
• Prior experience in red teaming, threat modeling, or incident response
• Familiarity with AI or data annotation workflows
Why Join Us
• Work directly on frontier AI systems alongside top research labs
• Fully remote and flexible — set your own schedule and work at your own pace
• Freelance autonomy with meaningful, intellectually engaging work
• Contribute to AI safety and responsible AI development in a tangible way
• Potential for ongoing work and contract extension


