Request for Proposals: AI Governance

Open Philanthropy
Opportunity type
Funding
Cause areas
AI Safety & Policy
Routes to impact
💡 Direct/Increased Engagement with EA
📈 Skill-Building & Building Career Capital
🤲 Direct High Impact on an Important Cause
Relevant aptitudes
Conceptual & Empirical Research
Political & Bureaucratic
Entrepreneur
Organization Building, Boosting & Running
Community Building
Software Engineering
Information Security
Academia
Communicator
Forecasting
Location
Remote
Description
AI has enormous beneficial potential if it is governed well. However, in line with a growing contingent of AI (and other) experts from academia, industry, government, and civil society, we also think that AI systems could soon (e.g. in the next 15 years) cause catastrophic harm. For example, this could happen if malicious human actors deliberately misuse advanced AI systems, or if we lose control of future powerful systems designed to take autonomous actions.[1]

To improve the odds that humanity successfully navigates these risks, we are soliciting short expressions of interest (EOIs) for funding for work across six subject areas, described below.

Strong applications might be funded by Good Ventures (Open Philanthropy’s partner organization), or by any of >20 (and growing) other philanthropists who have told us they are concerned about these risks and are interested in hearing about grant opportunities we recommend.[2] (You can indicate in your application whether we have permission to share your materials with other potential funders.)

As this is a new initiative, we are uncertain about the volume of interest we will receive. Our goal is to keep this form open indefinitely; however, we may need to temporarily pause accepting EOIs if we lack the staff capacity to properly evaluate them. We will post any updates or changes to the application process on this page.

Anyone is eligible to apply, including those working in academia, nonprofits, industry, or independently.[3] We will evaluate EOIs on a rolling basis. See below for more details.

If you have any questions, please email us. If you have any feedback about this page or program, please let us know (anonymously, if you want) via this short feedback form.

1. Eligible proposal subject areas

We are primarily seeking EOIs in the following subject areas, but will consider exceptional proposals outside of these areas, as long as they are relevant to mitigating catastrophic risks from AI:
  • Technical AI governance: Developing and vetting technical mechanisms that improve the efficacy or feasibility of AI governance interventions, or answering technical questions that can inform governance decisions. Examples include compute governance, model evaluations, technical safety and security standards for AI developers, cybersecurity for model weights, and privacy-preserving transparency mechanisms.
  • Policy development: Developing and vetting government policy proposals in enough detail that they can be debated and implemented by policymakers. Examples of policies that seem like they might be valuable (but which typically need more development and debate) include some of those mentioned e.g. here, here, and here.
  • Frontier company policy: Developing and vetting policies and practices that frontier AI companies could volunteer or be required to implement to reduce risks, such as model evaluations, model scaling “red lines” and “if-then commitments,” incident reporting protocols, and third-party audits. See e.g. here, here, and here.
  • International AI governance: Developing and vetting paths to effective, broad, and multilateral AI governance, and working to improve coordination and cooperation among key state actors. See e.g. here.
  • Law: Developing and vetting legal frameworks for AI governance, exploring relevant legal issues such as liability and antitrust, identifying concrete legal tools for implementing high-level AI governance solutions, encouraging sound legal drafting of impactful AI policies, and understanding the legal aspects of various AI policy proposals. See e.g. here.
  • Strategic analysis and threat modeling: Improving society’s understanding of the strategic landscape around transformative AI through research into potential threat models, takeoff speeds and timelines, reference class forecasting, and other approaches to gain clarity on key strategic considerations. See e.g. here and here.
Please keep in mind that while a wide range of projects could hypothetically fit into each of these subject areas and might improve outcomes from increasingly wide societal adoption of AI systems, we are focused on work that could help characterize or mitigate potential catastrophic risks from AI of the sort described above. Familiarity with the broad perspective that underpins our grantmaking in AI will likely improve applicants’ odds of success in this RFP; however, we expect that many strong applicants will hold views that differ significantly from those of most OP staff who work on AI (just as our staff disagree with each other on many points!). See footnote for potential readings.[4]

1.1 Ineligible subject areas

This program is currently not seeking EOIs on the following subject areas:
  • AI governance work that doesn’t have likely major relevance to global catastrophic risks from AI. In other words, we’re seeking proposals to address AI impacts on the scale of “transformative AI” as defined here.
  • EOIs that are a better fit for one of our other active AI-related RFPs, listed here.
To read the full details of this funding opportunity and how to apply, please visit the Open Philanthropy website via the ‘Apply Here’ button.