How sure are we about this AI stuff?

It is increasingly clear that artificial intelligence is poised to have a huge impact on the world, potentially of comparable magnitude to the agricultural or industrial revolutions. But what does that actually mean for us today? Should it influence our behavior? In this talk from EA Global 2018: London, Ben Garfinkel makes the case for measured skepticism.

Below is a transcript of Ben's talk, which we have lightly edited for clarity. You can share your thoughts about the talk on the EA Forum.

The Talk

Today, work on risks from artificial intelligence constitutes a noteworthy but still fairly small portion of the EA portfolio.

Only a small portion of donations made by individuals in the community are targeted at risks from AI. Only about 5% of the grants given out by the Open Philanthropy Project, the leading grant-making organization in the space, target risks from AI. And in surveys of community members, most do not list AI as the area that they think should be most prioritized.

At the same time though, work on AI is prominent in other ways. Leading career advising and community building organizations like 80,000 Hours and CEA often highlight careers in AI governance and safety as especially promising ways to make an impact with your career. Interest in AI is also a clear element of community culture. And lastly, I think there's also a sense of momentum around people's interest in AI. I think especially over the last couple of years, quite a few people have begun to consider career changes into the area, or made quite large changes in their careers. I think this is true more for work around AI than for most other cause areas.

So I think all of this together suggests that now is a pretty good time to take stock. It's a good time to look backwards and ask how the community first came to be interested in risks from AI. It's a good time to look forward and ask how large we expect the community's bet on AI to be: how large a portion of the portfolio we expect AI to be five or ten years down the road. It's a good time to ask, are the reasons that we first got interested in AI still valid? And if they're not still valid, are there perhaps other reasons which are either more or less compelling?

To give a brief talk roadmap, first I'm going to run through what I see as an intuitively appealing argument for focusing on AI. Then I'm going to say why this argument is a bit less forceful than you might anticipate. Then I'll discuss a few more concrete arguments for focusing on AI and highlight some missing pieces of those arguments. And then I'll close by giving concrete implications for cause prioritization.

The intuitive argument

So first, here's what I see as an intuitive argument for working on AI: the sort of "AI is a big deal" argument.

There are three concepts underpinning this argument:

  1. The future is what matters most, in the sense that if you could have an impact that carries forward and affects future generations, then this is likely to be more ethically pressing than having an impact that only affects the world today.
  2. Technological progress is likely to make the world very different in the future: that just as the world is very different than it was a thousand years ago because of technology, it's likely to be very different again a thousand years from now.
  3. If we're looking at technologies that are likely to make especially large changes, then AI stands out as especially promising among them.

So given these three premises, we have the conclusion that working on AI is a really good way to have leverage over the future, and that shaping the development of AI positively is an important thing to pursue.

I think that a lot of this argument works. I think there are compelling reasons to try and focus on your impact in the future. I think that it's very likely that the world will be very different in the far future. I also think it's very likely that AI will be one of the most transformative technologies. It seems at least physically possible to have machines that eventually can do all the things that humans can do, and perhaps do all these things much more capably. If this eventually happens, then whatever that world looks like, we can be pretty confident it will look quite different than it does today.

What I find less compelling, though, is the idea that these premises entail the conclusion that we ought to work on AI. Just because a technology will produce very large changes, that doesn't necessarily mean that working on that technology is a good way to actually have leverage over the future. Look back at the past and consider the most transformative technologies that have ever been developed: things like electricity, the steam engine, the wheel, or steel. It's very difficult to imagine what individuals early in the development of these technologies could have done to have a lasting and foreseeably positive impact. An analogy is sometimes made to the industrial revolution and the agricultural revolution. The idea is that in the future, the impacts of AI may be substantial enough to produce changes comparable to these two revolutionary periods in history.

The issue here, though, is that it's not really clear that either of these periods actually were periods of especially high leverage. If you were, say, an Englishman in 1780, and trying to figure out how to make this industry thing go well in a way that would have a lasting and foreseeable impact on the world today, it's really not clear you could have done all that much. The basic point here is that from a long-termist perspective, what matters is leverage. This means finding something that could go one way or the other, and that's likely to stick in a foreseeably good or bad way far into the future. Long-term importance is perhaps a necessary condition for leverage, but certainly not a sufficient one, and it's a sort of flawed indicator in its own right.

Three concrete cases

So now I'm going to move to three somewhat more concrete cases for potentially focusing on AI. You might have a few concerns that lead you to work in this area:

  1. Instability. You might think that there are certain dynamics around the development or use of AI systems that will increase the risk of permanently damaging conflict or collapse, for instance war between great powers.
  2. Lock-in. Certain decisions regarding the governance or design of AI systems may permanently lock in, in a way that propagates forward into the future in a lastingly positive or negative way.
  3. Accidents. It might be quite difficult to use future systems safely, and there may be accidents with more advanced systems that cause lasting harm that again carries forward into the future.

Instability

First, the case from instability. A lot of the thought here is that it's very likely that countries will compete to reap the economic and military benefits of AI applications. This is already happening to some extent. And you might think that as the applications become more significant, the competition will become greater. And in this context, you might think that this all increases the risk of war between great powers. One concern here is the potential for power transitions: changes in which countries are powerful relative to which others.

A lot of people in the field of international security think that these are conditions under which conflict becomes especially likely. You might also be concerned about changes in military technology that, for example, increase the odds of accidental escalation, or make offense more favorable compared to defense. You may also just be concerned that in periods of rapid technological change, there are greater odds of misperception or miscalculation as countries struggle to figure out how to use the technology appropriately or interpret the actions of their adversaries. Or you could be concerned that certain applications of AI will in some sense damage domestic institutions in a way that also increases instability. That rising unemployment or inequality might be quite damaging, for example. And lastly, you might be concerned about the risks from terrorism, that certain applications might make it quite easy for small actors to cause large amounts of harm.

In general, I think that many of these concerns are plausible and very clearly important. Most of them have not received very much research attention at all, and I believe they warrant much, much more attention. At the same time, though, if you're looking at things from a long-termist perspective, there are at least two reservations you could continue to have. The first is just that we don't really know how worried to be. These risks really haven't been researched much, and we shouldn't take it for granted that AI will be destabilizing. It might be, or it might not be. We just basically have not done enough research to feel very confident one way or the other.

You may also be concerned, if you're really focused on the long term, that even a lot of instability may not be sufficient to have a lasting impact that carries forward through generations. This is a somewhat callous perspective, but if you really are focused on the long term, it's not clear, for example, that a mid-sized war by historical standards would be sufficient to have a big long-term impact. So it may actually be quite a high bar to reach a level of instability that a long-termist would really be focused on.

Lock-in

The case from lock-in I'll talk about just a bit more briefly. Some of the intuition here is that certain decisions made in the past, for instance about the design of political institutions or software standards, and certain outcomes of military or economic competitions, seem to have produced outcomes that carry forward into the future for centuries. Some examples would be the design of the US Constitution, or the outcome of the Second World War. You might have the intuition that certain decisions about the governance or design of AI systems, or certain outcomes of strategic competitions, might carry forward into the future, perhaps for even longer periods of time. For this reason, you might try and focus on making sure that whatever locks in is something that we actually want.

I think that this is a somewhat difficult argument to make, or at least it's a fairly non-obvious one. The standard skeptical reply is that, with very few exceptions, we don't really see many instances of long-term lock-in, especially long-term lock-in where people really could have predicted what would be good and what would be bad. Probably the most prominent examples of lock-in are choices around major religions that have carried forward for thousands of years. But beyond those, it's quite hard to find examples that last even for hundreds of years; those seem quite few. It's also generally hard to judge what you would want to lock in. If you imagine fixing some aspect of the world as the rest of the world changes dramatically, it's really hard to guess what would actually be good under quite different circumstances in the future. My general feeling on this line of argument is that it's probably not that likely that any truly irreversible decisions around AI will be made anytime soon, even if progress is quite rapid, although other people certainly might disagree.

Accidents

Last, we have the case from accidents. The idea here is that we know there are certain safety engineering challenges around AI systems. It's actually quite difficult to design systems that you can feel confident will behave the way you want them to in all circumstances. This has been laid out most clearly in the paper 'Concrete Problems in AI Safety,' published a couple of years ago by Dario Amodei and others; I'd recommend that anyone interested in safety issues take a look at that paper. Then, given the existence of these safety challenges, and given the expectation that AI systems will become much more powerful in the future or be given much more responsibility, we might expect these safety concerns to become more serious as time goes on.

At the limit, you might worry that these safety failures could become so extreme that they could perhaps derail civilization as a whole. In fact, there is a bit of writing arguing that we should be worried about these sorts of existential safety failures. The main work arguing for this is still the book 'Superintelligence' by Nick Bostrom, published in 2014. Before this, essays by Eliezer Yudkowsky were the main source of arguments along these lines. And a number of other writers, such as Stuart Russell or, a long time ago, I. J. Good and David Chalmers, have also expressed similar concerns, albeit more briefly. The writing on existential safety accidents definitely isn't homogeneous, but there's often a similar narrative that appears in these essays expressing these concerns. There's a basic, standard disaster scenario that has a few common elements.

First, the author imagines that a single AI system experiences a massive jump in capabilities. Over some short period of time, a single system becomes much more general or much more capable than any other system in existence, and in fact than any human in existence. Then, given this system, researchers specify a goal for it. They give it some input which is meant to communicate what behavior it should engage in. The goal ends up being something quite simple, and the system goes off and single-handedly pursues this very simple goal in a way that violates the full nuances of what its designers intended.

There's a classic toy example, which is often used to illustrate this concern. We imagine that some poor paperclip factory owner receives a general superintelligent AI on his doorstep. There's a slot where you stick in a goal. He writes down the goal "maximize paperclip production," puts it into the AI system, and then lets it go off and do that. The system figures out that the best way to maximize paperclip production is to take over all the world's resources and plow them all into paperclips. And the system is so capable that the designers can do nothing to stop it, even though it's doing something that they really do not intend.
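
To make the structure of this toy example slightly more concrete, here is a minimal sketch in code. This is my own illustration, not anything from the talk or a claim about real systems: all of the names and numbers are invented. The point is just that an optimizer handed only the proxy goal "maximize paperclips" will happily trade away everything else the designers implicitly valued, because those preferences were never written down.

```python
# Toy illustration only: a greedy optimizer given a simple proxy goal
# ("maximize paperclips") versus what the designers actually valued.
# All names and numbers here are invented for the sake of the example.

def designers_true_value(state):
    # What the designers actually care about: paperclips are nice,
    # but only if everything else they value is preserved.
    return state["paperclips"] + 100 * state["everything_else"]

def proxy_objective(state):
    # The goal that was actually written down and handed to the system.
    return state["paperclips"]

def optimize(objective, state, steps=10):
    # Greedy search over a tiny action space: at each step, take whichever
    # action scores highest under the stated objective.
    for _ in range(steps):
        candidates = [
            # Make one paperclip the ordinary way.
            {**state, "paperclips": state["paperclips"] + 1},
            # Convert everything else into five extra paperclips.
            {**state, "paperclips": state["paperclips"] + 5, "everything_else": 0},
        ]
        state = max(candidates, key=objective)
    return state

initial = {"paperclips": 0, "everything_else": 1}

outcome_proxy = optimize(proxy_objective, initial)
outcome_true = optimize(designers_true_value, initial)

print(designers_true_value(outcome_proxy))  # 50: everything else was converted away
print(designers_true_value(outcome_true))   # 110: paperclips made without the side effect
```

The gap between the stated goal and the intended goal is where the damage happens, and the worry in the disaster scenario is that a sufficiently capable system would exploit that gap at a much larger scale.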

I have some general concerns about the existing writing on existential accidents. First, there's just still very little of it. It really is mostly 'Superintelligence' and essays by Eliezer Yudkowsky, and then a handful of shorter essays and talks that express very similar concerns. There's also been very little substantive written criticism of it. Many people have expressed doubts or been dismissive of it, but there's very little in the way of skeptical experts sitting down and fully engaging with it, and writing down point by point where they disagree or where they think the mistakes are. Most of the work on existential accidents was also written before large changes in the field of AI, especially before the recent rise of deep learning, and also before work like 'Concrete Problems in AI Safety,' which laid out safety concerns in a way that is more recognizable to AI researchers today.

Most of the arguments for existential accidents rely on somewhat fuzzy, abstract concepts like optimization power, general intelligence, or goals, and on toy thought experiments like the paperclipper example. Thought experiments and abstract concepts certainly do have some force, but it's not clear exactly how strong a source of evidence we should take them to be. Then lastly, although many AI researchers have expressed concern about existential accidents, for example Stuart Russell, it does seem to be the case that many, and perhaps most, AI researchers who encounter at least abridged or summarized versions of these concerns tend to bounce off them or just find them not very plausible. I think we should take that seriously.

I also have some more concrete concerns about the writing on existential accidents. You should certainly take these with a grain of salt because I am not a technical researcher, although I have talked to technical researchers who have essentially similar or even the same concerns. The general concern I have is that these toy scenarios are quite difficult to map onto something that looks more recognizably plausible. The scenarios often involve, again, massive jumps in the capabilities of a single system, but it's really not clear that we should expect such jumps or find them plausible. This is a woolly issue; I would recommend checking out writing by Katja Grace or Paul Christiano online, which lays out some concerns about the plausibility of massive jumps.

Another element of these narratives is that they often imagine some system which becomes quite generally capable and then is given a goal. In some sense, this is the reverse of the way machine learning research tends to look today. At least very loosely speaking, you tend to specify a goal or some means of providing feedback, direct the behavior of a system, and then allow it to become more capable over time, as opposed to the reverse. It's also the case that these toy examples stress the nuances of human preferences, with the idea being that because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine to understand how to obey them. But it's also the case in machine learning that we can train lots of systems to engage in behaviors that are actually quite nuanced and that we can't specify precisely. Recognizing faces from images is an example of this. So is flying a helicopter.
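
As a rough illustration of the ordering described here (and only that; this is my own minimal sketch, not something from the talk), in today's workflow the objective is typically fixed first, and the system only becomes capable by being optimized against that objective. The task below, fitting y = 3x with a squared-error loss, is an arbitrary placeholder.

```python
# Minimal sketch of the "specify the goal first, then gain capability" ordering.
# The task (fit y = 3x) and all numbers are arbitrary placeholders.

data = [(x, 3.0 * x) for x in range(1, 6)]  # feedback signal chosen by the designer

def loss(w):
    # The goal is fixed here, up front, before the model can do anything useful.
    return sum((w * x - y) ** 2 for x, y in data)

w = 0.0     # an initially incapable "model": a single parameter
lr = 0.001  # learning rate

for _ in range(1000):
    grad = sum(2 * (w * x - y) * x for x, y in data)  # gradient of the fixed loss
    w -= lr * grad                                    # capability improves via training

print(round(w, 3), round(loss(w), 8))  # w is ~3.0 and the loss is ~0
```

The contrast with the disaster narrative is just this ordering: nothing here acquires broad capability first and receives its goal afterwards.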

It's really not clear exactly why human preferences would be so fatally difficult to learn. So it's quite difficult to figure out how to map the toy examples onto something which looks more realistic.

Caveats

Some general caveats on the concerns I've expressed: none of them are meant to be decisive. I've found, for example, that many people working in the field of AI safety in fact list somewhat different concerns as explanations for why they believe the area is very important. There are many more arguments that are shared informally, or that sit inside people's heads and are currently unpublished, and I really can't speak to exactly how compelling those are. The main point I want to stress is that when it comes to the writing which has actually been published, and which is out there for analysis, I don't think it's necessarily that forceful, and at the very least it's not decisive.

So now I have some brief, practical implications, or thoughts on prioritization. You may think, from all the stuff I've just said, that I'm quite skeptical about AI safety or governance as areas to work in. In fact, I'm actually fairly optimistic. My reasoning here is that I really don't think that there are any slam-dunks for improving the future. I'm not aware of any single cause area that seems very, very promising from the perspective of offering high assurance of long-term impact. I think that the fact that there are at least plausible pathways for impact by working on AI safety and AI governance puts it head and shoulders above most areas you might choose to work in. And AI safety and AI governance also stand out for being pretty extraordinarily neglected.

Depending on how you count, there are probably fewer than a hundred people in the world working on technical safety issues or governance challenges with an eye towards very long-term impacts. And that's just truly, surprisingly small. The overall point, though, is that the exact size of the bet that EA should make on artificial intelligence, the share of the portfolio that AI should take up, will depend on the strength of the arguments for focusing on AI. And most of those arguments just aren't very fleshed out yet.

I also have some broader epistemological concerns which connect to the concerns I've expressed. I think it's possible that there are social factors within the EA community that might bias us to take an especially large interest in AI.

One thing is just that AI is especially interesting or fun to talk about, especially compared to other cause areas. It's an interesting, kind of contrarian answer to the question of what is most important to work on. It's surprising in certain ways. And it's also now the case that interest in AI is to some extent an element of community culture. People have an interest in it that goes beyond just the belief that it's an important area to work in. It definitely has a certain role in the conversations that people have casually, and in what people like to talk about. These things wouldn't necessarily be that concerning, except that I also think we can't really count on external feedback to push us back if we drift a bit.

So first, it just seems to be empirically the case that skeptical AI researchers generally will not take the time to sit down, engage with all of the writing, and then explain carefully why they disagree with our concerns. So we can't really expect much external feedback of that form. People who are skeptical or confused but who aren't AI researchers, or just generally aren't experts, may be worried about sounding ignorant or dumb if they push back, and they also won't be inclined to become experts. We should also expect generally very weak feedback loops. If you're trying to influence the very long-run future, it's hard to tell how well you're doing, just because the long-run future hasn't happened yet and won't happen for a while.

Generally, I think one thing to watch out for is justification drift. If we start to notice that the community's interest in AI stays constant, but the reasons given for focusing on it change over time, then this would be sort of a potential check engine light, or at least a sort of trigger to be especially self-conscious or self-critical, because that may be some indication of motivated reasoning going on.

Conclusion

I have just a handful of short takeaways. First, I think that not enough work has gone into analyzing the case for prioritizing AI. Existing published arguments are not decisive. There may be many other possible arguments out there, which could be much more convincing or much more decisive, but those just aren't out there yet, and there hasn't been much written criticizing the stuff that's out there.

For this reason, thinking about the case for prioritizing AI may be an especially high-impact thing to do, because it may shape the EA portfolio for years into the future. And just generally, we need to be quite conscious of possible community biases. It's possible that certain social factors that really should not be influencing us will lead us to drift in what we prioritize. And in general, if we're going to be putting substantial resources into anything as a community, we need to be especially certain that we understand why we're doing so, and we need to stay conscious that our reasons for getting interested in the first place continue to be good reasons. Thank you.

Questions

Question: What advice would you give to someone who wants to do the kind of research that you're doing here, on the case for prioritizing AI, as opposed to working on AI itself?

Ben: Something that I believe would be extremely valuable is just basically talking to lots of people who are concerned about AI and asking them precisely what reasons they find compelling. I've started to do this a little bit recently and it's actually been quite interesting that people seem to have pretty diverse reasons, and many of them are things that people want to write blog posts on, but just haven't done. So, I think this is a low-hanging fruit that would be quite valuable. Just talking to people who are concerned about AI, trying to understand exactly why they're concerned, and either writing up their ideas or helping them to do that. I think that would be very valuable and probably not that time intensive either.

Question: Have you seen any of the justification drift that you alluded to? Can you pinpoint that happening in the community?

Ben: Yeah. I think that's certainly happening to some extent. Even for myself, I believe that's happened to some extent. When I initially became interested in AI, I was especially concerned about these existential accidents. I think I now place relatively greater prominence on the case from instability, as I described it. And that's certainly one possible example of justification drift. It may be the case that this was actually a sensible way to shift emphasis, but it would be something of a warning sign. I've also spoken to technical researchers who used to be especially concerned about this idea of an intelligence explosion or recursive self-improvement, these very large jumps. I've now spoken to a number of people who are still quite concerned about existential accidents, but who make arguments that don't hinge on there being this one single massive jump in a single system.

Question: You made the analogy to the industrial revolution, and the 1780 Englishman who doesn't really have much ability to shape how the steam engine is going to be used. It seems intuitively quite right. The obvious counterpoint would be, well AI is a problem-solving machine. There's something kind of different about it. I mean, does that not feel compelling to you, the sort of inherent differentness of AI?

Ben: So I think probably the strongest intuition is that there will eventually be a point where we start turning more and more responsibility over to automated systems or machines, and that there might eventually come a point where humans have almost no control over what's happening whatsoever: we keep turning over more and more responsibility, and there's a point where machines are in some sense in control and you can't back out. So you might have some sort of irreversible juncture there. I definitely, to some extent, share the intuition that if you're looking over a very long time span, that is probably fairly plausible. The intuition I don't necessarily have is that this really irreversible juncture is coming anytime soon, unless things go quite wrong or happen in somewhat surprising ways. If, let's say, it takes a thousand years for control to be handed off, then I am not that optimistic about people today having much control over what that handoff looks like by working on things now. But I certainly am not very confident.

Question: Are there any policies that you think a government should implement at this stage of the game, in light of the concerns around AI safety? And how would you allocate resources between existing issues and possible future risks?

Ben: Yeah, I am still quite hesitant, I think, to recommend very substantive policies that I think governments should be implementing today. I currently have a lot of agnosticism about what would be useful, and I think that most current existing issues that governments are making decisions on aren't necessarily that critical. I think there's lots of stuff that can be done that would be very valuable, like having stronger expertise or stronger lines of dialogue between the public and private sector, and things like this. But I would be hesitant at this point to recommend a very concrete policy that at least I'm confident would be good to implement right now.

Question: You mentioned the concept of kind of a concrete decisive argument. Do you see concrete, decisive arguments for other cause areas that are somehow more concrete and decisive than for AI, and what is the difference?

Ben: Yeah. So I guess I tried to allude to this a little bit, but I don't think that any cause area really has an especially decisive argument for being a great way to influence the future. There are some where I think you can put a somewhat clear lower bound on how likely it is to be useful. So for example, risk from nuclear war. It's fairly clear that it's at least plausible this could happen over the next century. Nuclear war has almost happened in the past, and the climate effects are speculative but at least somewhat well understood. And then there's the question of, if there were a nuclear war, how damaging would it be? Would people eventually come back from it? That's quite uncertain, but I think it would be difficult to put above a 99% chance that people would come back from a nuclear war.

So, in that case you might have some sort of a clean lower bound on, let's say working on nuclear risk. Or, quite similarly, working on pandemics. And I think for AI it's difficult to have that sort of confident lower bound. I actually tend to think, I guess as I alluded to, that AI is probably or possibly still the most promising area based on my current credences, and its extreme neglectedness. But yeah, I don't think any cause area stands out as especially decisive as a great place to work.

Question: I'm an AI machine learning researcher PhD student currently, and I'm skeptical about the risk of AGI. How would you suggest that I contribute to the process of providing this feedback that you're identifying as a need?

Ben: Yeah, I mean I think just a combination of in-person conversations and then I think even simple blog posts can be quite helpful. I think there's still been surprisingly little in the way of just, let's say something written online that I would point someone to who wants the skeptical case. This actually is a big part of the reason I suppose I gave this talk, even though I consider myself not extremely well placed to give it, given that I am not a technical person. There's so little out there along these lines that there's low hanging fruit, essentially.

Question: Prominent deep learning experts such as Yann LeCun and Andrew Ng do not seem to be worried about risks from superintelligence. Do you think that they have essentially the same view that you have, or are they coming at it from a different angle?

Ben: I'm not sure of their specific concerns. I know the classic thing that Andrew Ng always says is to compare it to worrying about overpopulation on Mars, where the suggestion is that these risks, if they materialize, are just so far away that it's really premature to worry about them. So it seems to be an argument from timeline considerations. I'm actually not quite sure what his view is in terms of, if we were, let's say, 50 years in the future, would he think that this is a really great area to work on? I'm really not quite sure.

I actually tend to think that the line of thinking that says, "Oh, this is so far away, so we shouldn't work on it," just really isn't that compelling. It seems like we have a load of uncertainty about AI timelines, and it seems like no one can be very confident about them. So it'd be hard to put the chance that interesting things happen in the next 30 years or so below, let's say, one percent. So I'm not quite sure about the extent of his concerns, but if they're based on timelines, I actually don't find them that compelling.