AI Governance Course [Self-Paced]
BlueDot Impact
Opportunity type
Independent project
Training program
Part-time role
Recurring opportunity
Cause areas
AI Safety & Policy
Routes to impact
💡 Direct/Increased Engagement with EA
📈 Skill-Building & Building Career Capital
📖 Learning about Important Cause Areas
🧪 Testing Your Fit for a Certain Career Path
Relevant aptitudes
Conceptual & Empirical Research
Political & Bureaucratic
Academia
Location
Remote
Description
This course will get you up to speed on extreme risks from AI and governance approaches to mitigating these risks.
In this course, we examine risks posed by increasingly advanced AI systems, ideas for addressing these risks through standards and regulation, and foreign policy approaches to promoting AI safety internationally. There is much uncertainty and debate about these issues, but the following resources offer a useful entry point for understanding how to reduce extreme risks in the current AI governance landscape.
This course was developed by subject matter experts, with input from leaders in the field, such as Ben Garfinkel, Haydn Belfield and Claire Boine. See who else is involved on our website.
By the end of this course, you should be able to understand a range of agendas in AI governance and make informed decisions about your next steps to engage with the field.
Course Overview
First, we'll examine the risks posed by advances in AI. Machine learning has advanced rapidly in recent years. Further advances threaten various catastrophic risks, including powerful AIs in conflict with human interests, AI-assisted bioterrorism, and AI-exacerbated conflict.
- Week 1: Introduction to AI and Machine Learning
- Week 2: Introduction to potential catastrophic risks from AI
- Week 3: Challenges in achieving AI safety
Standards and regulations could help address extreme risks from frontier AI. They could include model evaluations and security measures. One approach to reducing risks from states that do not regulate AI is for a cautious coalition of countries to lead in AI (e.g., through hardware export controls, infosecurity, and immigration policy) and use this lead to reduce risks. Another approach is expanding the cautious coalition, which may be doable through treaties with privacy-preserving verification mechanisms. We'll provide resources to help you examine both approaches.
- Week 4: AI standards and regulations
- Week 5: Closing regulatory gaps through non-proliferation
- Week 6: Closing regulatory gaps through international agreements
Other prominent (sometimes complementary) AI governance ideas include lab governance; mitigating AGI misuse; slowing down AI now; and a "CERN for AI." We'll briefly provide an overview of these approaches so that you know what others are discussing.
- Week 7: Additional proposals
The final week discusses ways you can contribute through policy work, strategy research, "technical governance" work, and other paths.
- Week 8: Career Advice
- Weeks 9-12: Projects (select your track)
Timing
We expect each section to take about 2 hours to engage with all the materials (~16 hours overall). Additionally, there are exercises to help you think through topics yourself and make progress in your learning about AI governance. We organise content into weeks to help you structure and pace your engagement with the curriculum.