Opportunity type
Fellowship
Cause areas
AI safety & policy
Routes to impact
Skill-building & building career capital
Learning about important cause areas
Testing your fit for a certain career path
Direct high impact on an important cause
Skill set
Conceptual & empirical research
Software engineering
Academia
Forecasting
Organization building, boosting & running
Deadline
2026-01-12
Location
London, UK; Ontario, Canada; San Francisco, CA (United States); Remote-friendly
Description
Anthropic’s AI Safety Fellows Program offers a four-month, full-time, paid opportunity for technically skilled individuals to conduct empirical AI safety research with mentorship from leading experts.
- Receive direct mentorship from Anthropic researchers and connect with the broader AI safety community
- Work on impactful projects in areas like scalable oversight, adversarial robustness, model internals, and AI welfare
- Access a weekly stipend, research funding, and shared workspaces in London or Berkeley, with remote options in the US, UK, or Canada
- Must have work authorization in the US, UK, or Canada; Anthropic does not sponsor visas for this program
Apply via Constellation’s portal: https://constellation.fillout.com/anthropicsecurityfellows