SME Careers Is Hiring A Hebrew Trust & Safety Data Trainer
Join Our Remote AI Safety Evaluation Team — Help Improve AI Model Safety and Accuracy 🌍🤖
This hourly, fully remote contractor role involves reviewing and generating AI content with a focus on safety, accuracy, and clarity. It requires fluency in Hebrew and strong English reading and interpretation skills.
Role Overview
In this position, you will:
- Review AI-generated responses and produce safety-focused evaluation content 🔍
- Assess reasoning quality and step-by-step problem-solving 🧠
- Provide expert feedback to ensure outputs are accurate, safe, and clearly explained ✅
- Annotate and evaluate content in English and Hebrew, requiring fluency in both languages 🇮🇱🇺🇸
Note: Annotations will help prevent the Large Language Model from generating unsafe, toxic, or unintended content, which may include sensitive topics such as sexual, violent, or disturbing themes.
About the Company
This role is with SME Careers, a rapidly growing AI Data Services company and a subsidiary of SuperAnnotate. We provide training data for leading AI companies and foundation-model labs to enhance the world's top AI models 🌟.
Key Responsibilities
- Curate & Label Safety Data: Create and label safety training examples, including adversarial and red-team cases, in both English and Hebrew. Cover topics such as hate speech, harassment, sexual content, self-harm, violence, bias, illegal services, malicious activity, malicious code, and misinformation, capturing nuance with C1-level English and native or near-native Hebrew proficiency.
- Model Response Evaluation: Review, score, and compare multiple model responses against established safety policies and quality rubrics. Document safety assessments and identify failure modes like evasion, normalization, escalation, or procedural issues.
- Continuous Safety Testing & Auditing: Conduct ongoing stress-tests and audits of model behavior, flag ambiguous cases, propose clearer decision rules, and help maintain consistent annotation standards across reviewers.
Your Ideal Profile
- Bachelor’s degree or higher in Communications, Linguistics, Psychology, Law/Policy, Security Studies, or related fields, or equivalent professional experience.
- Near-native or native proficiency in Hebrew (reading/writing) for precise safety labeling and cultural nuance 🇮🇱
- Minimum C1 level in English (reading/writing) for policy interpretation and documentation 🇺🇸
- Experience in Trust & Safety, content moderation, policy enforcement, risk operations, investigations, or safety evaluation 🔐
- Proven LLM red teaming expertise to probe safety boundaries and document adversarial patterns 🛡️
- Strong knowledge of safety domains such as hate & harassment, sexual content, self-harm, violence, bias, illegal activities, and misinformation ⚠️
- Emotional resilience to handle unsafe, explicit, or toxic content, including sensitive themes of a sexual, violent, or psychologically disturbing nature 🧠🛡️
- Excellent judgment under ambiguous situations with the ability to interpret policies and explain decisions clearly 📝
- Reliable work ethic as an hourly contractor, with clear documentation and responsiveness across time zones ⏰
- Previous experience with AI data annotation, training, or evaluation (preferred) 🎯
- Hands-on experience with tools like Perplexity, Gemini, ChatGPT, and others 🧰
📝 If you meet these qualifications and are eager to contribute to safer AI systems worldwide, we encourage you to apply!
