November 18, 2016
The transcript below is lightly edited for readability.
Okay. So now I'm going to talk about how actually this is a collaborative endeavor. We're not just all, each of us individually, going, “Okay. I need to work out where the most gold is. And that's most neglected, most tractable. And then I personally am just going to go and do that.” Because there's a whole lot of people who are thinking like this, and there's more every year. I'm really excited about this. I'm really excited to have so many people here and also this idea that maybe in two years' time, we'll have a lot more again.
But then we need to work out how to cooperate. Largely we have the same view, or pretty similar views, over what to value. Maybe some people think that silver matters too - it's not just gold - but we all agree that gold matters. We're basically cooperating here. We want to be able to coordinate and make sure that we're getting people working on the things which make most sense for them.
So, there is this idea of comparative advantage. So I have Harry, Hermione and Ron, and they have three tasks that they need to do, in order to get some gold. They need to do some research, they need to mix some potions, and they need to do some wand work. Hermione is the best at everything, but she doesn't have a time turner, so she can't do everything. So we need to have some way of distributing this. And this is the idea of comparative advantage. Hermione has an absolute advantage on all of these tasks, but it would be a waste for her to go and work on the potions because Harry is not so bad at potions. And really, nobody else is at all good at doing the research in the library. So we should probably put her on this.
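The allocation logic here can be sketched as a tiny optimization problem. The skill numbers below are made up purely for illustration; the point is that the best total output comes from putting Hermione where her relative edge is largest, even though she has an absolute advantage everywhere:

```python
from itertools import permutations

# Hypothetical productivity scores (made-up numbers). Hermione is
# best at every task (absolute advantage), but team output is
# maximized by putting her where her relative edge is largest.
skill = {
    "Hermione": {"research": 10, "potions": 6, "wandwork": 7},
    "Harry":    {"research": 2,  "potions": 6, "wandwork": 5},
    "Ron":      {"research": 1,  "potions": 3, "wandwork": 4},
}

people = list(skill)
tasks = ["research", "potions", "wandwork"]

# Brute-force every one-person-per-task assignment; keep the best.
best = max(
    permutations(tasks),
    key=lambda order: sum(skill[p][t] for p, t in zip(people, order)),
)
assignment = dict(zip(people, best))
print(assignment)
```

With these numbers, the best total output has Hermione on research and Harry on potions, matching the reasoning above. With only three people a brute-force search over all assignments is fine; larger versions of this are the classic assignment problem from economics and operations research.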
And this is a tool that we can use to help guide our thinking about what we should do as individuals. If I think that some technical domain and technical work is the most valuable thing to be doing, but I would be pretty mediocre at that, and I'm a great communicator? Then maybe I should go into trying to help technical researchers in that domain communicate their work in order to get more people engaged with it, and bring in more fantastic people.
So that's applying this at the individual level. We can also apply this at the group level. We can notice that different organizations or groups may be better placed to take different opportunities.
And this is a bit more speculative, but I think we can also apply this at the time level. We can ask ourselves, “What are we, today, the people of 2016, particularly well suited to do, versus people in the past and people in the future?” Okay. We can't change what people in the past did. But we can make this comparison of what our comparative advantage is relative to people in the future. And if there are going to be a number of different possible challenges in the future that we need to meet, it makes sense that we should be working on the early ones. Because if there's a challenge coming in 2020, the people in 2025 just don't have a chance to work on that.
Another thing which might come here is that we have a position, perhaps, to influence how many future people there will be who are interested in and working on these challenges. We have more influence over that than people in that future scenario do, so should we think about whether that makes sense as a thing for us to focus on?
Another particularly important question is how to work stuff out. The world is big and complicated and messy. And we can't expect all of us, individually, to work out perfect models of it. In fact, it's too complicated for us to expect anybody to do this. So maybe we're all walking around with little ideas which, in my metaphor here, are puzzle pieces for a map to where the gold is. We want institutions for assembling these into a map. It's a bit complicated, because some people have puzzle pieces which are from the wrong puzzle, and these don't actually track where gold is. Ideally, we'd like our institutions to filter these out and only assemble the correct pieces to guide us where we want to go.
As a society, we've had to deal with this problem in a number of different domains, and we've developed a number of different institutions for doing this. So there's the peer review process in science. Wikipedia does quite a lot of work aggregating knowledge. Amazon reviews aggregate knowledge that individuals have about which products are good. Democracy lets us aggregate preferences over many different people to try and choose what's actually going to be good.
Of course, none of these institutions are perfect. And this is a challenge. This is like one of those wrong puzzle pieces which made it into the diagram. And this comes up in the other cases as well. The replication crisis in parts of psychology has been making headlines recently. Wikipedia, we all know, sometimes gets vandalized and you go and read something which is nonsense. Amazon reviews have problems with people writing fake reviews, to make their own product look good or other people's products look bad.
So, maybe it's the case that we can adapt one of these existing institutions for our purpose, which is trying to aggregate knowledge about what are the ways to go and do the most good. But maybe we want something a bit different, and maybe somebody in this room is going to do some work on coming up with valuable institutions for this. I actually think this is a really important problem. And it's one that is going to just become more important for us to deal with as a community, as the community grows.
That was all about what are our global institutions for pulling this information together and aggregating it. Another thing, which can help us to move towards getting a better picture, is trying to have good local norms. So, we tell people the ideas that we have, and then other people maybe start listening. And sometimes it might just be that they listen based on the charisma of the person who is talking, more than based on the truthiness of the puzzle piece. But we'd like to have ways of promoting the spread of good ideas, inhibiting the spread of bad ideas, and also encouraging original contributions. One way of trying to promote the spread of good ideas and inhibit bad ideas is just to rely on authority. We'll say, “Well, we've worked out this stuff. We're totally confident about this. And now we just won't accept anything else.” But that isn't going to let us actually get new stuff.
I think something to do here is to pay attention to why you believe something. Do you believe it because somebody else told you? Do you believe it because you've really actually thought this through carefully and worked it out for yourself? There's a blur between those. Often somebody tells you and they kind of give you some reasons. And you're like, “Oh, those reasons kind of check out,” but you haven't gone and deeply examined the argument yourself.
And I think it's useful to be honest with yourself about that. And then also to communicate it to other people. To let them know why it is. Is it the case that you believe this because Joe Bloggs has told you? And actually, Joe is a pretty careful guy, and he's pretty diligent about checking out his stuff, so you think it probably makes sense. You can just communicate that. Or is it that you cut out this puzzle piece yourself?
Now, cutting it out yourself doesn't necessarily mean we should have higher credence in it. I've definitely thought I'd worked things out before - thought I'd proved things - and there was a mistake in my proof. So you can separately keep track of the level of credence you have in a thing, and why you believe it.
And also our individual and collective reasons for believing things can differ. So here's this statement, that it costs about $3,500 to save a life from malaria. I think this is broadly believed across the Effective Altruism community. I think that collectively, the reason we believe this is that there have been a number of randomized controlled trials. And then some pretty smart, reasonable analysts at GiveWell have gone and looked carefully at this, and they've dived into all the counterfactuals and they've produced their analysis, and they say, “On net, it looks like it's about $3,500.”
But that isn't why I believe it. I believe it because people have told me that the GiveWell people have done this analysis and they say it's $3,500. And they say, “Oh, yeah. I read it on the website.” Actually, that was why I believed it, until I started prepping for this talk, when I went and read it on the website myself. That's a bit more work for me, but it adds a bit of value for the community, because I'm shortening the chain of Chinese whispers, of passing this message along. As things get passed along, it's more possible that mistakes enter, or that something isn't well grounded and then gets repeated. By going back and checking earlier sources in the chain, we can try to reduce that, and try to make ourselves more robustly confident in these statements.
Another thing that comes up is when you notice that you disagree with somebody. If you're sitting down talking with someone and they're saying something, and you're like, “Well, that's obviously false.” You can see perhaps that parts of their jigsaw puzzle are wrong. You could just dismiss what they have to say, but I think that's often not the most productive thing to do. Because even if part of what they have to say is wrong, maybe they have some other part that's going into their thinking process which would fill a gap in your perspective, and help you to have a better picture of what's going on.
I often do this actually when I find that someone has a perspective that I think is unlikely to be correct. I'm interested in this process of how they get there and how they think about it. Partly this is just that people are fascinating and the way that people think is fascinating so this is interesting. But I also think that it's polite and I think it's useful. I think it does help me to build a deeper picture of all the different bits of evidence that we have collectively.
Okay. So, in this section, I'm going to put the stuff I've just been talking about into action. I've told you about a whole load of different things through this talk. But I didn't tell you much about exactly what my level of competence in these is, or why I believe these. So, I'm going to do that here.
I'm aware that nobody ever goes away from a talk saying, “Oh, that was so inspiring. The way she carefully hedged all her statements.” But I think it's important. I would like people to go away from talks saying that. So I'm just going to do it.
So, heavy-tailed distributions. I think it's actually pretty robust that the kind of baseline distribution of opportunities in the world does follow something like this, a distribution with this heavy-tailed property. I think that just seeing this in many different domains and understanding some of the theory behind why it should arise makes it extremely likely. I think that there's an open empirical question to exactly how far that tail goes out. Heavy-tailedness isn't just a binary property, it's a continuum. Anders Sandberg is going to be talking more about this, I think, later today.
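As an illustration of what “heavy-tailed” means in practice, here is a small simulation. The log-normal distribution and the parameter choice are my own assumptions for illustration, not something from the talk; the point is just that a large share of the total value sits in a small fraction of the draws:

```python
import random

random.seed(0)

# Sample 10,000 hypothetical "opportunity values" from a log-normal
# distribution, a standard model for heavy-tailed quantities.
# sigma controls how heavy the tail is: heavy-tailedness is a
# continuum, not a binary property.
values = sorted(
    (random.lognormvariate(0, 2.0) for _ in range(10_000)),
    reverse=True,
)

share_top_1pct = sum(values[:100]) / sum(values)
print(f"Top 1% of opportunities hold {share_top_1pct:.0%} of total value")
```

With a heavier tail (larger sigma) the top 1% holds even more of the total; with a thin-tailed distribution like the normal, it holds only slightly more than 1%.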
But there's an important caveat here. This is the only digression I've allowed myself. There's a mechanism which might push against that, which is people seeking out and taking the best opportunities. If people are pretty good at identifying the best opportunities, and they are uniformly seeking out and taking them, then the best things that are left might not be so much better.
And this comes up in just regular markets. Ways to make money maybe start out distributed across a wide range. This is a log scale now, so this is meant to represent one of those heavy-tailed distributions. But then people who are losing money say, “Well, this sucks,” and they stop doing that thing. And they see other people doing activities which are making lots of money, and they're like, “Yeah, I'm going to go do that.” Then you get more people going into that area, and diminishing returns mean that you actually make less money than you used to by doing stuff in that area. So you end up afterwards with a much narrower distribution of the value being produced by people doing these different things than you started with.
We might get a push in that direction among opportunities to create altruistic value. I certainly don't think that we are at a properly efficient market. I'm not sure how efficient it is, how much we are curving that tail. I hope that as this community grows, as we get more people who are actively trying to choose very valuable things, that will mean the distribution does get less heavy-tailed because of this.
One of the mechanisms that leads to efficiency in regular markets is the feedback loops, where people just notice they're getting rich or that they're losing money. Another mechanism is people doing analysis, and they do this because of the feedback loops, trying to work out that actually we should put more resources there, because then we'll get richer. I think that doing that analysis is an important part of this project that we're collectively embarking on here.
So, overall, I don't think that we do have an efficient market for this. I do think we have heavy-tailed distributions. I'm not sure how extreme they are, partly because the distribution responds to the actions people are taking.
Factoring cost effectiveness, I think that basically, this is just an extremely simple point, and there isn't really space for it to be wrong. But there's an empirical question as to how much these different dimensions matter. It might be that you just have way more variation in one of the dimensions than others. Actually, I don't have that much of a view of how much the different dimensions matter. We saw that the intervention effectiveness within global health varied by three or four orders of magnitude. Area effectiveness, I think, may be more than that, but I'm not sure how much more. Organization effectiveness, I'm just not an expert and I don't want to try and claim to have much of a view on that.
Diminishing returns, I just think this is an extremely robust point. Sometimes, in some domains, there are actually increasing returns to scale, where you get efficiencies of scale and that helps you. I think that more often applies at the organization scale, or to an organization within a domain, whereas diminishing returns often applies at the domain scale. But I do know some smart people who think that I am overstating the case for diminishing returns. So although I think, personally, that there's a pretty robust case, I would add a note of caution there.
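A minimal numerical illustration of the diminishing-returns point (the square-root curve is a hypothetical choice, not a claim about any real domain):

```python
# With a concave returns curve, each extra unit of resources buys
# less than the last. sqrt is just one convenient concave example.
def value(resources: float) -> float:
    return resources ** 0.5

first_unit = value(1) - value(0)         # gain from the 1st unit
hundredth_unit = value(100) - value(99)  # gain from the 100th unit
print(first_unit, hundredth_unit)
```

The first unit of resources adds far more value than the hundredth; increasing returns to scale would correspond to a convex curve instead.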
Scale, tractability, neglectedness, I think it's obvious that they all matter. I think it's obvious, it's just trivial, that this factorization is correct as a factorization. What's less clear is whether this actually breaks it up into things that are easier to measure and whether this is a helpful way of doing it. I think it probably is. We get some evidence from the fact that it loosely matches up with an informal framework that people have been using for a few years, and have seemed to find helpful.
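One way to see why the factorization is trivially correct as a factorization is that its units telescope. A sketch with made-up numbers:

```python
# Scale, tractability and neglectedness as three factors whose
# intermediate units cancel, leaving "good done per extra dollar".
# All numbers here are hypothetical.
scale = 1_000_000           # good done per % of the problem solved
tractability = 0.5          # % of problem solved per % increase in resources
neglectedness = 1 / 10_000  # % increase in resources per extra dollar

marginal_value = scale * tractability * neglectedness
print(marginal_value)  # good done per extra dollar
```

The product is just "good done per extra dollar" again; the usefulness of the framework rests on whether the three factors are individually easier to estimate than the product.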
Absolute and marginal priority, again, at some level, is just trivial. I brought this up as a point about communication, because I think not everybody has these separate notions and we can confuse each other if we blur them.
Differential progress, I think that this argument basically checks out. It appears in a few academic papers. It's also believed by some of the smartest and most reasonable people I know, which gives me some evidence that it might be true, beyond my own introspection. It hasn't had that much scrutiny and it's a bit counterintuitive, so maybe we want to expose it to more scrutiny.
Comparative advantage, is just a pretty standard idea from economics. Normally markets try to work to push people into working in the way that utilizes their comparative advantage. We don't necessarily have that in this, when we're aiming for more altruistic value.
The application across time is also a bit more speculative. I'm one of the main people who has been trying to reason this way. I haven't had anybody push back on it, but take it with a bit more salt, because it's just less well checked out.
Aggregating knowledge, I think everyone tends to think that yes, we want institutions for this. And I think there's also pretty broad consensus that the existing institutions are not perfect. Whether we can build better institutions, I'm less certain.
Stating reasons for beliefs, this again, is something where I think that it's kind of common sense that all else equal, this is a good thing. But of course, there are costs to doing it. It slows down our communication. And it may just not sound glamorous and therefore be harder to get people on board with this. I think that at least we want to nudge people in this direction, but I don't know exactly how far in this direction. We don't want to be overwhelmingly demanding on this. I, to some extent, believe this because a load of smart, reasonable people I know, think that we want to go in this direction. And I weigh other people's opinions when I don't see a reason that I should have a particularly better perspective on it than them.
Okay. Finally, why have I been sharing all of this with you? You know, people can go and mine gold without understanding all these theoretical things about the distribution of gold in the world. But, because it's invisible, we need to be more careful about aiming at the right things. And so I think it's more important for our community to have this knowledge broadly spread. And I think that we are still in the early days of the community, and so it's particularly important to try and get this knowledge in at the foundations, and work out better versions of this. We don't want to have the kind of gold rush phenomenon where people charge off after a thing and it turns out there wasn't actually that much value there.