Fireside Chat with Toby Ord (2018)

March 01, 2019

Toby Ord is working on a book about existential risks for a general audience. This fireside chat with Will MacAskill, from EA Global 2018: London, illuminates much of Toby’s recent thinking. Topics include: What are the odds of an existential catastrophe this century? Which risks do we have the most reason to worry about? And why should we consider printing out Wikipedia?

Below is a transcript of Toby's fireside chat, which we have lightly edited for clarity. You can discuss this talk on the EA Forum.

The Talk

Will: Toby, you're working on a book at the moment. Just to start off, tell us about that.

Toby: I've been working on a book for a couple of years now - this one is on existential risk. I think that big books are often a little bit like an iceberg, and certainly Doing Good Better was, where there's this huge amount of work that goes on before you even decide to write the book, coming up with ideas and distilling them.

I'm trying to write really the definitive book on existential risk. I think the best book so far, if you're looking for something before my book comes out, is John Leslie's The End of the World. That's from 1996. That book actually inspired Nick Bostrom, to some degree, to get into this.

I thought about writing an academic book. Certainly a lot of the ideas that are going to be included are cutting edge ideas that haven't really been talked about anywhere before. But I ultimately thought that it was better to write something at the really serious end of general non-fiction, to try to reach a wider audience. That's been an interesting aspect of writing it.

Will: And how do you define an existential risk? What counts as an existential risk?

Toby: Yeah. This is actually something that people often get wrong, even within effective altruism, because the name existential risk, which Nick Bostrom coined, is designed to be evocative of extinction. But the purpose of the idea, really, is that there's the risk of human extinction, but there are also a whole lot of other risks which are very similar in how we have to treat them. They all involve a certain common methodology for dealing with them, in that they're risks that are so serious that we can't afford to have even one of them happen. We can't learn from trial and error, so we have to have a proactive approach.

The way that I currently think about it is that existential risks are risks that threaten the destruction of humanity's long-term potential. Extinction would obviously destroy all of our potential over the long term, as would a permanent unrecoverable collapse of civilization, if we were reduced to a pre-agricultural state again or something like that, and as would various other things that are neither extinction nor collapse. There could be some form of permanent totalitarianism. If the Nazis had succeeded in a thousand-year Reich, and then maybe it went on for a million years, we might still say that that was an utter, perhaps irrevocable, disaster.

I'm not sure that at the time it would have been possible for the Nazis to achieve that outcome with existing technology, but as we get more advanced surveillance technology and genetic engineering and other things, it might be possible to have lasting terrible political states. So existential risk includes both extinction and these other related areas.

Will: In terms of what your aims are with the book, what's the change you're trying to effect?

Toby: One key aim is to introduce the idea of existential risk to a wider audience. I think that this is actually one of the most important ideas of our time. It really deserves a proper airing, trying to really get all of the framing right. And then also, as I said, to introduce a whole lot of new cutting-edge ideas - new concepts, the mathematics of existential risk, and other related ideas - along with lots of the best science, all put into one place. There's that aspect as well, so it's definitely a book for everyone on existential risk. I've learned a lot while writing it, actually.

But also, when it comes to effective altruism, I think that often we have some misconceptions around existential risk, and we also have some bad framings of it. It's often framed as if it's this really counterintuitive idea. There are different ways of doing this. A classic one involves saying "There could be 10 to the power of 53 people who live in the future, so even if there's only a very small chance..." and going from there, which makes it seem unnecessarily nerdy, where you've kind of got to be a math person to really get any pull from that argument. And even if you are a mathsy person, it feels a little bit like a trick of some sort, like some convincing argument that one equals two or something, where you can't quite see what the problem is, but you're not compelled by it.
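To make the style of argument Toby is describing concrete, here is a minimal sketch of that expected-value calculation. The 10^53 figure is the one quoted in the talk; the probability is a made-up placeholder, purely for illustration.

```python
# Illustrative only: the kind of expected-value argument Toby says can feel like a trick.
future_people = 10 ** 53   # possible future people (figure quoted in the talk)
risk_reduction = 1e-20     # hypothetical tiny reduction in extinction probability
expected_lives = future_people * risk_reduction
print(f"Expected future lives saved: {expected_lives:.1e}")  # ~1.0e+33, dwarfing any near-term figure
```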

Actually, though, I think that there's room for a really broad group of people to get behind the idea of existential risk. There's no reason that my parents or grandparents couldn't be deeply worried about the permanent destruction of humanity's long-term potential. These things are really bad, and I actually think that it's not a counterintuitive idea at all. In fact, ultimately I think that the roots of worrying about existential risk came from the risk of nuclear war in the 20th century.

My parents were out on marches against nuclear weapons. At the time, the biggest protest in US history was 2 million people in Central Park protesting nuclear weapons. It was a huge thing. It was actually the biggest thing at that time, in terms of civic engagement. And so when people can see that there's a real and present threat that could threaten the whole future, they really get behind it. That's also one of the aspects of climate change: people perceive it as a threat to continued human existence, among other things, and that's one of the reasons that motivates them.

So I think that you can have a much more intuitive framing of this. The future is so much longer than the present, so some of the ways that we could help really could be by helping this long-term future, if there are ways that we could help that whole time period.

Will: Looking to the next century, let's say, where do you see the main existential risks being? What are all the ones that we are facing, and which are the ones we should be most concerned about?

Toby: I think that there is some existential risk remaining from nuclear war and from climate change. I think that both of those are current anthropogenic existential risks. The nuclear war risk is via nuclear winter, where the soot from burning cities would rise up into the upper atmosphere, above the cloud level, so that it can't get rained out, and then would block sunlight for about eight years or so. The risk there isn't that it gets really dark and you can't see or something like that, and it's not that it's so cold that we can't survive; it's that there are more frosts, and that the temperatures are depressed by quite a lot, such that the growing season for crops is only a couple of months. There's not enough time for the wheat to germinate and so forth, and so there would be widespread famine. That's the threat there.

And then there's climate change. Climate change is a warming; nuclear winter is actually also a change in the climate, but a cooling. I think that the amount of warming that could happen from climate change is really underappreciated. The tail risk, the chance that the warming is a lot worse than we expect, is really big. Even if you set aside the serious risks of runaway climate change, of big feedbacks from the methane clathrates or the permafrost, even if you set all of those things aside, scientists say that the central estimate for what happens if you double the CO2 in the atmosphere is three degrees of warming.

But if you look at the fine print, they say it's actually from 1.5 degrees to 4.5 degrees. That's a huge range. There's a factor of three between those estimates, and that's just a 66% confidence interval. They actually think there's a one in six chance it's more than 4.5 degrees. So I think there's a very serious chance that if it doubled, it's more than 4.5 degrees, but also there's uncertainty about how many doublings will happen. It could easily be the case that humanity doubles the CO2 levels twice, in which case, if we also got unlucky on the sensitivity, there could be nine degrees of warming.
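To make that arithmetic concrete, here is a rough sketch that treats equilibrium warming as roughly the climate sensitivity per doubling times the number of CO2 doublings. This is a simplification of what Toby describes, and the function name and values are illustrative only.

```python
import math

def equilibrium_warming(co2_ratio, sensitivity_per_doubling):
    """Rough equilibrium warming: sensitivity (degrees C per doubling) times number of doublings."""
    doublings = math.log2(co2_ratio)
    return sensitivity_per_doubling * doublings

print(equilibrium_warming(2, 3.0))   # one doubling at the central estimate: ~3 degrees
print(equilibrium_warming(2, 4.5))   # one doubling at the top of the 66% range: 4.5 degrees
print(equilibrium_warming(4, 4.5))   # two doublings with unlucky sensitivity: ~9 degrees
```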

And so when you hear these things about how many degrees of warming they're talking about, they're often talking about the median of an estimate. If they're saying we want to keep it below two degrees, what they mean is that they want to keep the median below two degrees, such that there's still a serious chance that it's much higher than that. If you look into all of that, there could be very serious warming, much more serious than you get in a lot of scientific reports. But if you read the fine print in the analyses, this is in there. And so I think there's a lack of really looking into that, so I'm actually a lot more worried about it than I was before I started looking into this.

By the same token, though, it's difficult for it to be an existential risk, even if there were 10 degrees of warming or something beyond what you're reading about in the newspapers. The warming would be extremely bad, just to clarify. But I've been thinking about all these things in terms of whether they could be existential risks, rather than whether they could lead to terrible situations, which could then lead to other bad outcomes. One thing is that in both cases, both nuclear winter and climate change, coastal areas are a lot less affected. There's obviously flooding when it comes to climate change, but a country like New Zealand, which is mostly coastal, would be mostly spared the effects of either of these types of calamities. Civilization, as far as I can tell, should continue in New Zealand roughly as it does today, but perhaps without low-priced chips coming in from China.

Will: I really think we should buy some land in New Zealand.

Toby: Like as a hedge?

Will: I'm completely serious about this idea.

Toby: I mean, we definitely should not screw up with climate change. It's a really serious problem. It's just that the question I'm looking at is: is it an existential risk? Ultimately, it's probably better thought of as a change in the usable areas on the earth. They currently don't include Antarctica. They don't include various parts of Siberia and some parts of Canada, which are covered in permafrost. Effectively, with extreme climate change, the usable parts of the earth would move a bit, and they would also shrink a lot. It would be a catastrophe, but I don't see why that would be the end.

Will: Between climate change and nuclear winter, do you think climate change is too neglected by EA?

Toby: Yeah, actually, I think it probably is, although you don't see many people in EA looking at either of those. I think they're both very reasonable things to look at. In both cases, it's unclear why they would be the end of humanity, and people in nuclear winter research generally do not say that it would be. They say it would be catastrophic, and maybe 90% of people could die, but they don't say that it would kill everyone. I think in both cases, they're such large changes to the earth's environment, huge unprecedented changes, that you can't rule out that something that we haven't yet modeled happens.

I mean, we didn't even know about nuclear winter until more than 30 years after the first use of nuclear weapons. There was a whole period of time when effects like that could have happened, and we would have been completely ignorant of them if we had launched a war. So there could be other things like that. And in both cases, that's where I think most of the danger of existential risk lies: it's such a large perturbation of the earth's system that one wouldn't be shocked if it turned out to be an existential catastrophe. So there are those ones, but I think the things that are of greatest risk are things that are forthcoming.

Will: So, tell us about the risks from unprecedented technology.

Toby: Yeah. The two areas that I'm most worried about in particular are biotechnology and artificial intelligence. When it comes to biotech, there's a lot to be worried about. If you look at some of the greatest disasters in human history, in terms of the proportion of the population who died in them, great plagues and pandemics are in this category. The Black Death killed between a quarter and 60% of the people in Europe, and somewhere between 5 and 15% of the entire world's population. And there are a couple of other cases that are perhaps at a similar level, such as the spread of Afro-Eurasian germs into the Americas after Columbus crossed over, and, say, the 1918 flu, which killed about 4% of the people in the world.

So we've had some cases that were big, really big. Could they be so big that everyone dies? I don't think so, at least from natural causes. But maybe. It wouldn't be silly to be worried about that, but it's not my main area of concern. I'm more concerned with biotechnological advances that we've had. We've had radical breakthroughs recently. It's only recently that we've discovered even that there are bacteria and viruses, that we've worked out about DNA, and that we've worked out how to take parts of DNA from one organism and put them into another. How to synthesize entire viruses just based on their DNA code. Things like this. And these radical advances in technology have let us do some very scary things.

And there's also been this extreme democratization, as it's often called, of this technology. But since the technology could be used for harm, it's also a form of proliferation, and so I'm worried about that. It's very quick. You probably all remember when the Human Genome Project was first announced. That cost billions of dollars, and now a complete human genome can be sequenced for $1,000. It's kind of a routine part of PhD work, that you get a genome sequenced.

These things have come so quickly. Look at CRISPR and gene drives - really radical technologies, CRISPR for putting arbitrary genetic code from one animal into another, and gene drives for releasing it into the wild and having it proliferate. In both cases, less than two years passed between being invented by the cutting-edge labs in the world - the very smartest scientists, Nobel Prize-worthy stuff - and being replicated by undergraduates in science competitions. Just two years. So if you think about that, the pool of people who could have bad motives and who have access to the ability to do these things is increasing massively: from a select group where you might think there are only five people in the world who could do it - who have the skills, who have the money, and who have the time to do it - through to something much faster, where the pool of people is in the millions. There's just much more chance you get someone with bad motivation.

And there are also states with bioweapons programs. We often think that we're protected by things like the Biological Weapons Convention, the BWC. That is the main protection, but there are states who violate it. We know, for example, that Russia has been violating it for a long time. They had massive programs with more than 10,000 scientists working on versions of smallpox, and they had an outbreak when they did a smallpox weapons test, which has been confirmed, and they also accidentally killed a whole lot of people with anthrax when they forgot to replace a filter at their lab and blew anthrax spores out over the city the lab was based in.

There are really bad examples of biosafety there, and also the scary thing is that people are actually working on these things. The US believes that there are about six countries in violation of this treaty. Some countries, like Israel, haven't even signed up to it. And the convention itself has the budget of a typical McDonald's, and it has four employees. So that's the thing that stands between us and misuse of these technologies, and I really think that that is grossly inadequate.

Will: The Bioweapons Convention has four people working in it?

Toby: Yeah. It had three. I had to change it in my book, because a new person got employed.

Will: How does that compare to other sorts of conventions?

Toby: I don't know. It's a good question. So those are the types of reasons that I'm really worried about developments in bio.

Will: Yeah. And what would you say to the response that it's just very hard for a virus to kill literally everybody, because they have this huge bunker system in Switzerland, nuclear submarines have six-month tours, and so on? Obviously, this is an unimaginable tragedy for civilization, but still there would be enough people alive that over some period of time, populations would increase again.

Toby: Yeah. I mean, you could add to that uncontacted tribes and also researchers in Antarctica as other hard-to-reach populations. I think it's really good that we've diversified somewhat like that. I think that it would be really hard, and so I think that even if there is a catastrophe, it's likely to not be an existential disaster.

But there are reasons for some actors to try to push something to be extremely dangerous. For example, as I said, the Soviets, then Russians after the collapse of the Soviet Union, were working on weaponizing smallpox, and weaponizing Ebola. It was crazy stuff, and tens of thousands of people were working on it. And they were involved in a mutually assured destruction nuclear weapons system with a dead hand policy, where even if their command centers were destroyed, they would force retaliation with all of their weapons. There was this logic of mutually assured destruction and deterrence, where they needed to have ways of plausibly inflicting extreme amounts of harm in order to try to deter the US. So they were already involved in that type of logic, and so it would have made some sense for them to do terrible things with bioweapons too, assuming the underlying logic makes any sense at all. So I think that there could be realistic attempts to make extremely dangerous bioweapons.

I should also say that I think this is an area that's under-invested in within EA. I would say that the existential risk from bio is maybe about half that of AI, or a quarter, or something like that - a factor of two or four in how big the risk is. But if you recall, in effective altruism we're not interested in working on the problem that's biggest in size; we're interested in what marginal impact you'll have. And it's entirely possible that someone would be more than a couple of times better at working on trying to avoid bio problems than they would be at trying to avoid AI problems.
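As a toy illustration of that point, one can treat marginal impact as simply proportional to risk size times personal fit. That is a big simplification of what Toby is saying, and the fit multipliers below are entirely made up.

```python
# Toy numbers: Toby's rough risk figures, with hypothetical personal-fit multipliers.
ai_risk, bio_risk = 0.10, 0.05   # rough existential risk estimates from the talk
fit_ai, fit_bio = 1.0, 3.0       # hypothetical: this person is 3x more effective in bio

print(f"Relative impact in AI:  {ai_risk * fit_ai:.2f}")    # 0.10
print(f"Relative impact in bio: {bio_risk * fit_bio:.2f}")  # 0.15 - bio wins for this person
```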

And also, the community among EAs who are working on biosecurity is much smaller as well, so one would expect there to be good opportunities there. But work on bio-risk does require quite a different skillset, because in bio, a lot of the risk is misuse risk, either by lone individuals, small groups, or nation states. It's much more of a traditional security-type area, where working in biosecurity might involve talking a lot with national security programs and so forth. It's not the kind of area where one wants free and open discussion of everything. And one also doesn't want to just say, "Hey, let's have this open research forum where we're just on the internet throwing out ideas, like, 'How would you kill every last person? Oh, I know! What about this?'" We don't actually want that kind of discussion about it, which puts it in a bit of a different zone.

But for people who are actually able to not talk about things that they find interesting and fascinating and important - a lot of us have trouble not talking about those things - and perhaps who already have a bio background, it could be a very useful area.

Will: Okay. And so you think that EA in general, even though it takes these risks more seriously than maybe most people do, is still neglecting this relative to the rest of the EA portfolio.

Toby: I think so. And then AI, I think, is probably the biggest risk.

Will: Okay, so tell us a little bit about that.

Toby: Yeah. You may have heard more than you ever want to about AI risk. But basically, my thinking about this is that the reason humanity is in control of its destiny, and the reason we have such a large long-term potential, is that we are the species that's in control. Gorillas, for example, are not in control of their destiny. Whether they flourish or not - and I hope that they will - depends upon human choices. We're not in such a position with respect to any other species, and that's because of our intellectual abilities, both what we think of as intelligence, like problem-solving, and also our ability to communicate and cooperate.

These intellectual abilities have given us the position where we have the majority of the power on the planet, and where we have control of our destiny. If we create artificial intelligence - generally intelligent systems - and we make them smarter than humans, and also generally capable, with initiative and motivation and agency, then by default we should expect that they would be in control of our future, not us, unless we made good efforts to stop that. But the relevant professional community, the people trying to work out how to stop it - how to guarantee that such systems obey commands, or that they're motivated to help humans in the first place - think it's really hard, and they have higher estimates of the risk from AI than anyone else.

There's disagreement about the level of risk, but some of the most prominent AI researchers, including ones who are attempting to build such generally intelligent systems, are very scared about it. They aren't the whole AI community, but they are a significant part of it. There are a couple of other AI experts who say that worrying about existential risk is a really fringe position in AI, but they're actually either just lying or they're incompetently ignorant, because they should notice that Stuart Russell and Demis Hassabis are very prominently on the record saying this is a really big issue.

So I think that should give us a whole lot of reason to expect that creating a successor species could well be the last thing we do. And maybe we'd create something that is even more important than us, and it would be a great future, to create a successor - effectively our children, or our "mind children," maybe. But we don't have a very good idea of how to do that. We have even less of an idea of how to create artificial intelligence systems that themselves have moral status, have feelings and emotions, and strive to achieve greater perfections than us, and so on. More likely they would be built for some more trivial ultimate purpose. Those are the kinds of reasons I'm worried.

Will: Yeah, you hinted at this briefly, but over the next hundred years, let's say, what overall chance would you assign to some existential catastrophe, and how does that break down between the different risks you've suggested?

Toby: Yeah. I would say something like a one in six chance that we don't make it through this century. I think that there was something like a one in a hundred chance that we didn't make it through the 20th century. Overall, we've seen this dramatic trend towards humanity having more and more power, often increasing at exponential rates, depending on how you measure it. But there hasn't been a similar increase in human wisdom, and so our power has been outstripping our wisdom. The 20th century is the first one where we really had the potential to destroy ourselves. I don't see any particular reason why we wouldn't expect the 21st century, then, to have our power outstrip our wisdom even more, and indeed that seems to be the case. We also know of particular technologies through which this could happen.

And then the 22nd century, I think would be even more dangerous. I don't really see a natural end to this until we discover almost all the technologies that can be built or something, or we go extinct, or we get our act together and decide that we've had enough of that and we're going to make sure that we never suffer any of these catastrophes. I think that that's what we should be attempting to do. If we had a business-as-usual century, I don't know what I'd put the risk at for this century. A lot higher than one in six. My one in six is because I think that there's a good chance, particularly later in the century, that we get our act together. If I knew we wouldn't get our act together, it'd be more like one in two, or one in three.

Will: Okay, cool. Okay. So if we just, no one really cared, no one was really taking action, it would be more like 50/50?

Toby: Yeah, if it was pretty much like it is at the moment, with us just running forward, then yeah. I'm not sure. I haven't really tried to estimate that, but it would be something like a third or a half.

Will: Okay. And then within that one in six, how does that break down between these different risks?

Toby: Yeah. Again, these numbers are all very rough, I should clarify to everyone, but I think it's useful to try to give quantitative estimates even when they're rough, because if you just say, "I think it's tiny," and the other person says, "No, I think it's really important," you may actually both think it's the same number, like 1% or something like that. I would say AI risk is something like 10%, and bio is something like 5%.

Will: And then the others are less than a percent?

Toby: Yeah, that's right. I think that climate change and... I mean, climate change wouldn't kill us this century if it kills us, anyway. And nuclear war, definitely less than a percent. And probably the remainder would be more in the unknown risks category. Maybe I should actually have even more of the percentage in that unknown category.
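For reference, here is how those rough numbers combine. They are ballpark figures from the conversation, not precise estimates, and the "other known risks" figure below is an assumption standing in for "under a percent each".

```python
# Back-of-the-envelope check of Toby's rough breakdown for this century.
total_risk   = 1 / 6     # overall estimate, ~16.7%
ai_risk      = 0.10      # ~10% from AI
bio_risk     = 0.05      # ~5% from biotechnology
other_known  = 0.01      # assumed: nuclear war, climate change, etc., under a percent each
unknown_risk = total_risk - (ai_risk + bio_risk + other_known)
print(f"Residual for unknown risks: {unknown_risk:.3f}")  # ~0.007, i.e. under a percentage point
```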

Will: Let's talk a little bit about that. How seriously do you take unknown existential risks? I guess they are known unknowns, because we know there are some.

Toby: Yeah.

Will: How seriously do you take them, and then what do you think we should do, if anything, to guard against them?

Toby: Yeah, it's a good question. I think we should take them quite seriously. If we think backwards, and ask what risks we would have known about in the past, we had very little idea. Only two people had any idea about nuclear bombs in, let's say, 1935 or something like that, a few years before work on the bomb first began. It would have been unknown technology for almost everyone. And if you go back five more years, it was unknown to everyone. With these issues about AI and, actually, man-made pandemics, there were a few people who were talking about these things very early on, but only a couple, and it might have been hard to distinguish them from the noise.

But I think ultimately, we should expect that there are unknown risks. There are things that we can do about them. One of those things is to work on stopping war - say, avoiding great power war, as opposed to avoiding all particular wars. Some potential wars have no real chance of causing existential catastrophe, but things like World War II or the Cold War were cases that plausibly could have.

I think the way to think about this is not that war itself, or great power war, is an existential risk, but rather that it's something else, which I call an existential risk factor. I take inspiration in this from the Global Burden of Disease, which looks at different diseases and shows how much mortality and morbidity, say, heart disease causes in the world, adding up a number of disability-adjusted life years for it. They do that for all the different diseases, and then they also want to ask questions like: how much ill health does smoking cause, or alcohol?
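To illustrate the Global Burden of Disease idea Toby is borrowing, here is a toy sketch of how a risk factor's burden can be computed by summing its contribution across the diseases it affects. All of the numbers and fractions are invented for illustration, not real GBD figures.

```python
# Hypothetical DALYs (in millions) per disease, and hypothetical fractions of each
# disease's burden attributable to smoking, purely for illustration.
dalys_by_disease = {"heart disease": 100.0, "lung cancer": 40.0}
smoking_attributable_fraction = {"heart disease": 0.15, "lung cancer": 0.70}

# The risk factor's total burden is the sum of its attributable share of each disease.
smoking_burden = sum(
    dalys_by_disease[disease] * smoking_attributable_fraction[disease]
    for disease in dalys_by_disease
)
print(f"Smoking-attributable burden: {smoking_burden:.1f} million DALYs")  # 43.0 in this toy example
```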