Effective Altruism Opportunities Board
Work on the world's most pressing problems. Browse jobs, fellowships, internships, courses, and more at high-impact organisations.
Routes to impact
Direct high impact on an important cause
Skill-building & career capital
Description
A senior researcher role at OpenAI for an experienced AI safety professional focused on identifying and stress-testing misalignment risks in frontier AGI systems.
- Focus area: design worst-case demonstrations and adversarial evaluations targeting deceptive behavior, scheming, reward hacking, and power-seeking in AI systems
- Scope: build automated red-teaming infrastructure and conduct research on alignment failure modes, with findings directly shaping product launches and safety strategy
- Collaboration: partner across research, engineering, policy, and legal teams; publish internal and external papers to advance industry-wide safety practice
- Compensation: $295K–$445K + equity, based in San Francisco
Apply directly at openai.com/careers
Related opportunities
Research Assistant / Associate, Evidence Exchange
Centre for the Study of Existential Risk (CSER), Cambridge, United Kingdom
1 week ago