Opportunity type
Fellowship
Cause areas
AI safety & policy
Routes to impact
Direct high impact on an important cause
Skill-building & building career capital
Networking with peers around AI governance topics
Skill set
Research
Software engineering
AI governance understanding
Deadline
2026-05-03
Location
Remote or Berkeley (USA)
Description
The OpenAI Safety Fellowship is a pilot program supporting independent research on the safety and alignment of advanced AI systems, open to external researchers, engineers, and practitioners.
- Fellows conduct high-impact research in areas like safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving methods, agentic oversight, and misuse domains.
- The program runs from September 14, 2026, to February 5, 2027, offering mentorship from OpenAI, a monthly stipend, compute support, and workspace in Berkeley or remote options.
- Applicants from diverse backgrounds (computer science, social science, cybersecurity, privacy, HCI, and related fields) are encouraged to apply, with selection based on research ability and technical judgment.
- Fellows are expected to produce substantial research output (e.g., paper, benchmark, or dataset); API credits and resources are provided, but no internal system access.
For more details or to apply, visit the application form. Visa requirements are not specified in the source.