Opportunity type
Fellowship
Funding
Cause areas
AI safety & policy
Routes to impact
Direct high impact on an important cause
Skill-building & building career capital
Learning about important cause areas
Skill set
Research
Software engineering
Forecasting
Location
London, UK; Ontario, Canada; San Francisco, US; Remote-friendly
Description
The Anthropic AI Safety Fellows Program offers a four-month, full-time, paid opportunity for technically skilled individuals to conduct empirical AI safety research with mentorship from leading experts.
- Receive direct mentorship from Anthropic researchers and access to a collaborative workspace in London or Berkeley (remote options available in the US, UK, or Canada)
- Work on impactful AI safety projects using open-source models and public APIs, with funding for compute and research expenses
- Weekly stipend provided (3,850 USD / 2,310 GBP / 4,300 CAD), plus benefits and connection to the broader AI safety research community
- Must have work authorization in the US, UK, or Canada; visa sponsorship is not available for this program
Apply via the official Constellation application form: https://constellation.fillout.com/anthropicfellows