AGI Safety Fundamentals Program


Dates
1 January - 31 March 2022

Applications to this course have closed.

Ensuring that powerful AI systems don’t present a catastrophic threat to humanity might be among the most pressing and impactful issues we face. We created the AGI Safety Fundamentals Programme to help participants learn about important concepts in AI safety.

There are two tracks of this programme: technical AI alignment and AI governance. You can find the curriculum for the technical AI alignment track here, and the curriculum for the AI governance track here.

The alignment curriculum was designed by Richard Ngo, a former ML research engineer at DeepMind who now works on the policy team at OpenAI. The governance curriculum was designed by the organizers of the Stanford Existential Risks Initiative, in collaboration with Richard.

Over the course of eight weeks, you will have weekly 1.5-hour discussions with a cohort of 4-5 participants and one facilitator. Before attending each discussion, you will complete a set of readings (and sometimes a brief written exercise). After the eighth week, you will have several weeks to work on a project of your choice.

The programme may involve speaker events with experienced AI safety researchers and professionals, as well as networking opportunities with other participants.

This programme is free for all participants. Anyone can apply, and there is no application fee.

We highly recommend this programme if you:

  • Have a good understanding of core ideas in effective altruism.
  • Are already interested in AI safety and want to learn more about technical research or governance.
  • Are interested in pursuing a career in ensuring that future AI systems are beneficial for humanity.
  • Can commit at least 2 hours per week to readings and exercises, plus a 1.5-hour discussion.
  • Can attend at least 7 out of the 8 weekly discussion sessions.

We expect the technical track to be most useful to people with technical backgrounds (e.g. maths or computer science), though the curriculum is intended to be accessible to those who aren't familiar with machine learning, and participants will be grouped with others from similar backgrounds. We expect the governance track to be accessible to people from a broader range of academic backgrounds.