Opportunity type
Full-time
Cause areas
AI safety & policy
Routes to impact
Direct high impact on an important cause
Increasing awareness of institutional decision-making challenges in AI safety
Skill set
Research
Political & bureaucratic
AI governance understanding
Policy writing
Location
Berkeley, CA
Description
A full-time research role in Berkeley for candidates interested in shaping technical AI governance to reduce catastrophic and existential risks from advanced AI.
  • Research and policy design – Analyze AI risk scenarios, evaluate current proposals, and design technical standards and governance mechanisms that scale to frontier systems
  • Policy engagement – Contribute to government Requests for Comment, brief policymakers, and consult with governance bodies (e.g., NIST, EU, UN)
  • Evaluations and empirical work – Design model evaluations, study failure modes, and apply risk management frameworks from other high-stakes industries
  • Collaboration and writing – Produce clear internal and external reports, and work closely with researchers and external stakeholders