
November 18, 2016

In this 2016 talk, Oxford University's Owen Cotton-Barratt discusses how effective altruists can improve the world, using the metaphor of someone looking for gold. He discusses a series of key effective altruist concepts, such as heavy-tailed distributions, diminishing marginal returns, and comparative advantage.


The transcript below is lightly edited for readability. Part 2 of the transcript can be found here.

Mining for gold

The central metaphor that's going to be running through my talk is Effective Altruism as mining for gold. And I'm going to keep coming back to this metaphor to illustrate different points. Gold here is standing in for whatever it is that we actually value. So, some things we might value include making more people happy and well educated, or trying to avert a lot of suffering, or trying to increase the probability that humanity makes it out to the stars. When you see gold, take a moment to think about what you actually value. For many people it won't just be one thing that they value. But do think about what you care about, and put that in place of the gold. And then there are lots of observations we can make.

Viktor Zhdanov

So, this is a photo of Viktor Zhdanov, and I learned about him from Will MacAskill's book, Doing Good Better. He was a Ukrainian biologist who was extremely important in getting an eradication program for smallpox to actually happen. As a result, he was probably counterfactually responsible for saving tens of millions of lives.

Obviously, we don't all achieve this. So, by looking at examples like this, we can notice that some people manage to get a lot more gold, manage to achieve a lot more of whatever we altruistically value, than others. And that's reason enough to make us question what is it that gives some people better opportunities than others? How can we go and find opportunities like that?

Techniques for finding gold

Elsewhere in this conference, there are going to be treasure maps and discussions of where the gold is. I'm actually not going to do that in this talk. I'm instead going to be focusing on the tools and techniques that we can use for locating gold, rather than trying to give my view of where the things are directly.

Another thing I just want to cover here is actually, I'm giving this metaphor, I want to say a little bit about why I'm even using a metaphor, because we care about these things. We care about a lot of these big, complicated, valuable things. Why would I try and reduce that down to gold? Well, it's because of where I want the focus of this talk to be. I want the focus to be on these techniques and the tools and approaches that we can use. And if you have complex values, that we're trying to put in the background, it's just going to keep on pulling attention. But a lot of the things that we might do to try to identify where valuable things are, and how to go and achieve them, are constant, regardless of what the valuable thing is. So, by replacing them with a super simple stand-in for value, I think it helps to put the focus on this abstract layer that we're putting on top of that.

Surveying the Land

Gold is unevenly spread

So, the first thing I'm going to talk about is the fact that gold is, like literal gold is pretty unevenly spread through the world. There's loads of places with almost no gold at all, and then there's a few places where there's a big seam of gold running into the ground. This has some implications. One is that we would really like to find those seams.


Another is about sampling. For some quantities, say if I want to know roughly how tall people are, sampling five people, measuring their height and saying, “Well, the average is probably like that” is not a bad methodology. However, if I want to know on average how much gold there is in the world, sampling five random places, and measuring that, is not a great methodology, because it's quite likely that I'll find five places where there's no gold, and I'll significantly underestimate. Or possibly, one of them will have a load of gold and now I'll have a massively inflated sense of how much gold there is in the world.

Heavy-tailed distributions

So this is a statistical property that loosely gets called having a heavy tail on the distribution. This here on the left is a distribution without a heavy tail. There's a range of different amounts of gold in different places, but none of them have massively more or massively less than typical.

On the right, in contrast, is a heavy-tailed distribution. It looks similar-ish to the one on the left-hand side, but there's this long tail, getting up to very large amounts of gold, where the probabilities aren't dying off very fast. And this has implications.


So here's another way of looking at these distributions. In this case, I've arranged the places from left to right in order of increasing amounts of gold. These are the percentiles, and then I've put the amount of gold on the vertical axis. And in this case, I've colored in beneath the graph, because that quantity, that area, is meaningful: it corresponds to the total amount of gold. So, on the left, with the distribution that wasn't heavy-tailed, I can see that the gold is fairly evenly spread across lots of different places. And so, if we want to get most of the gold, what's important is getting to as many different places as possible.

Solar power is like this. Sure, some places get more sunlight than other places, but the amount of solar power you generate depends more on how many total solar panels you have, than on exactly where you place them.

Over on the right, though, we have a distribution where you can see a lot of the area is in that spike at the right-hand side. And so this just means that a lot of the gold (and, if gold is a stand-in for something that we value, a lot of what's valuable) comes in this extreme of the distribution, in things which are just unusually good.
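To make the sampling and concentration points concrete, here is a minimal simulation sketch (both distributions are assumptions chosen for illustration, not data from the talk): small surveys estimate the average well for a thin-tailed distribution of gold per site and badly for a heavy-tailed one, and in the heavy-tailed case a large share of the total sits in the top one percent of sites.

```python
# A minimal sketch contrasting a thin-tailed and a heavy-tailed "gold per site"
# distribution (both distributions are illustrative assumptions).
import random
import statistics

random.seed(0)

def thin_tailed():
    # Amounts clustered around 10 with modest spread.
    return max(random.gauss(10, 2), 0)

def heavy_tailed():
    # Lognormal with large sigma: most sites hold little, a few hold a huge seam.
    return random.lognormvariate(0, 2.5)

big_thin = [thin_tailed() for _ in range(200_000)]
big_heavy = [heavy_tailed() for _ in range(200_000)]
print("approx. true means:", round(statistics.mean(big_thin), 1),
      round(statistics.mean(big_heavy), 1))

# Five-site surveys: stable for the thin tail, wildly variable for the heavy tail.
for _ in range(5):
    thin_est = statistics.mean(thin_tailed() for _ in range(5))
    heavy_est = statistics.mean(heavy_tailed() for _ in range(5))
    print("5-site estimates:", round(thin_est, 1), round(heavy_est, 1))

# Share of all "gold" held by the top 1% of sites (the seams).
top_share = sum(sorted(big_heavy)[-2_000:]) / sum(big_heavy)
print("share of gold in the top 1% of sites:", round(top_share, 2))
```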

So literal gold, I think, is distributed like this. Disclaimer: I'm not a geologist, I actually don't know anything about gold, but I understand that this is right. We might ask, is this also true of opportunities to do good in the world? So here are a couple of bits of support for this.


First is just a general observation: when we look at the world, which is pretty complex, we do see distributions with this heavy-tail property coming up in a lot of different places. There are some theoretical reasons to expect certain types of distribution to arise, and also, empirically, if we go and look at something like income distributions around the world (again, this is that percentile version), you can see the spike.

Okay, that was if we just go and look at the world. Obviously, there are lots of things which don't have this property as well. But the more we look at things where there are complex systems and lots of interactions, the more often we see this property. And that is a big feature of lots of the ways that we try to interact with the world to improve it.

I can also just try to look explicitly at opportunities to do good. And I can see a couple of reasons why I personally am convinced that we get some of this property. One is just convincing arguments. If I care about stopping people starving, and I do care about stopping people starving, I could ask: should I be interested in direct famine relief, trying to get food to people who are starving today? I can compare this to something more speculative, and I personally have been convinced by the arguments in this book that it would be more effective to focus on doing research towards having solutions for feeding very large numbers of people in the event that agriculture collapses. It's pretty extreme. It's not something we normally think about, but I think that the argument basically checks out. And this tells me that, even just limiting myself here to trying to feed people, one of the mechanisms looks much more effective than the other.

Heavy-tailed properties in opportunities for good

I can also look at data. So, this is data from the DCP2, which tried to estimate the cost-effectiveness of lots of different developing-world health interventions. The x-axis is on a log scale, so these have been put into buckets, and each column is, on average, about 10 times more effective than the one on its left. So here the rightmost column is about 10,000 times more effective than the leftmost column. And again, this was just within one area where we have managed to get good enough data that we can actually go and estimate these things. There's just a very wide range of cost-effectiveness.

So, the implications of this are that if we want to go and get gold, we really should focus on finding seams. In some cases, it might give us the surprising conclusion that getting evidence, say discovering that something is at the 90th percentile, might make us less excited about it. Because before we knew anything, it might have been anywhere on the distribution. And if most of its possible value comes from it being up at the 99th percentile, then discovering it's only at the 90th percentile could actually be a bad thing. I mean, it's a good thing to discover, but it makes us think less well of it. Now, that's if you've got a fairly extreme distribution, but it's interesting to see how you can get these kinds of counterintuitive properties.

Another implication is that perhaps a kind of naïve empiricism, "we'll just do a load of stuff and see what comes out best", isn't going to be enough for us in judging this, because of this sampling issue. We can't sample enough times, or measure the outcomes well enough, to judge how good something is actually going to be.

Maximizing gold

Okay. So, if we actually want to get as much gold as possible, we want to go to a place where there's lots of gold, we want to have the right tools for getting the gold out, and we want to have a great team who is going to be using those tools. I think that we can port this analogy over to opportunities to do good as well. And we can roughly measure the effectiveness of the area or type of thing that we're doing; the effectiveness of the intervention that we're doing to create value in that area, relative to other interventions in the area; and the effectiveness of the team or organization that is implementing it, relative to how well other teams might implement such an intervention.

Value is roughly multiplicative

And if you have these things, then the total value that you're going to be getting is equal to the product of them. I've represented it here by volume, and we want to be maximizing the volume. That means we're going to want to do fairly well on each of the different dimensions, or at least not terribly on any of them. And so, one implication might be that if we have an area and an intervention that we're really excited about, but we can only find a kind of mediocre team working on it, it may be better not to just support them to do more of that, but to try to get somebody else working on it, or to do something to really improve that team. Similarly, we might not want to support even a great team if they're working in an area that doesn't seem important.
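As a toy illustration of that multiplicative structure (all the numbers here are invented), a package that is merely decent on every dimension can beat one that is outstanding on two dimensions but terrible on the third:

```python
# Toy comparison: total value as the product of area, intervention and team quality.
def total_value(area, intervention, team):
    return area * intervention * team

balanced = total_value(area=7, intervention=6, team=6)   # decent on everything
lopsided = total_value(area=10, intervention=9, team=1)  # great area and intervention, weak team
print(balanced, lopsided)   # 252 vs 90: the balanced package wins
```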

Locating gold

Recognising gold

Okay. So, in the next part I'm going to talk about the tools and techniques for identifying where in the world the gold is. A nice property of literal gold is that when you dig it up, you're pretty sure that you can recognize: yes, I have gold. We often have to deal with cases where we don't have this. We don't have the gold in hand, so we have to carefully try to infer its existence by using different tools. So this is like the dark matter of value.


And so that increases the importance of having good tools for trying to actually measure and assess this. It increases the importance of actually applying those tools diligently, as well. Iron pyrite also looks a lot like gold, so just because somebody says, "Hey, this is gold," doesn't mean we should always take people's word on it. It does provide some evidence, but we have motivation for wanting to have great tools for identifying particularly valuable opportunities, and for being able to differentiate and say, "Okay, actually this thing, although it has some aspects of value, maybe it's not what we want to pursue."

Running out of easy gold

Okay. If you first go to an area where nobody has been before, then the seams of gold that are running through the ground have often been eroded a little bit, and you can have little nuggets of gold just lying around on the ground, and it's extremely easy to get gold. So you have some people go in, they do this for a bit, and they run out of all the gold on the ground.


And now, if they want to get more gold, maybe more people come along, they bring some shovels, and it's a bit more work, but you can still get gold out.


And then you dig deep enough and you can't just get in with shovels anymore and so you need bigger teams and heavier machinery to get gold out. You can still get gold, but it's more work for each little bit, for each nugget that you're getting out. This is the general phenomenon of diminishing returns on work that you're putting in, and I think that this actually comes up in a lot of different places and so it's worth having an idea about.

By the way, this concept, like several of the things I'm going to be talking about, is native to economics. In some cases I'm fairly simply porting ideas across from economics, and in some cases there's a little bit more modification.

But, for instance, I think that we get this in global health. I understand that 15 or 20 years ago, mass vaccinations were extremely cost-effective and probably the best thing to be doing. And then the Gates Foundation came in and funded a lot of the mass vaccination work. And now, the most cost-effective things available are less cost-effective than mass vaccinations were. I mean, that's great, because we've taken that low-hanging fruit. Or similarly, in AI safety, writing the first book on superintelligence is a pretty big deal; writing the 101st book on superintelligence is just not going to matter as much.
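A minimal sketch of that diminishing-returns pattern (the logarithmic curve and the numbers are assumptions chosen for illustration): total output keeps rising with effort, but each additional unit of effort yields less than the one before.

```python
# Diminishing marginal returns: total output grows, marginal output falls.
import math

def total_gold(effort):
    # Assumed logarithmic returns curve; the exact shape is not from the talk.
    return 100 * math.log(1 + effort)

for effort in [1, 10, 100, 1000]:
    marginal = total_gold(effort + 1) - total_gold(effort)
    print(f"effort={effort:5d}  total={total_gold(effort):7.1f}  marginal per extra unit={marginal:.2f}")
```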

So, a minute ago, I talked about how we could factor the effectiveness of organizations into the area in which they were working, the intervention they were pursuing, and the team working on it. Now, I'm going to focus on that first one, trying to assess the area. And I'm going to give a further factorization, splitting that into three different things.

Scale

The first of these dimensions is scale. All else being equal, we would prefer to go somewhere where there is a lot of gold, rather than a little bit of gold. And probably, per unit of effort, we're going to get more gold if we do that.

Tractability

Second, tractability. We'd like to go somewhere where you make more progress per unit of work. So, somewhere where it's nice and easy to dig the ground, rather than trying to get your gold out of a swamp.

Uncrowdedness

And third is uncrowdedness. This has sometimes been called neglectedness. I think that term is a bit confusing. It's a bit ambiguous, because sometimes people use neglectedness to mean that, all things considered, this is an area we should really put more resources into. What I mean here is just that there aren't many people looking at it. All else being equal, we'd rather go to an area where people haven't already gone and picked up the nuggets of gold on the ground, rather than one where they have and now the only gold remaining is quite hard to extract.

And so, ideally, of course, we'd like to be in the world where there's loads of gold that's easy to get out and nobody has taken any of it. But we're rarely going to be in exactly that ideal circumstance. So one question is, how can we trade these off against each other? And I'm going to present one attempted way to make that precise. I've allowed myself one equation in this talk. This is it.

Scale, tractability, uncrowdedness

If you're not used to thinking in terms of derivatives, just ignore the ds here. But this on the left is the value of a little bit of extra work, so this is generally what we care about if we're trying to assess which of these different areas should we go and do more work on.

On the right is a factorization. So this is mathematically trivial, I've just taken this one expression and I've added in a load more garbage. And on the face of it, it looks like I've made things a lot worse. And I can only justify this, if it turns out that these terms I've added in, which cancel with each other, actually mean that the right hand side here is easier to interpret or easier to measure. And so I'm going to present a little bit of a case for why I think it is.
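The equation itself appears on a slide rather than in the transcript text; a plausible reconstruction, based on the descriptions of the terms that follow and using assumed notation (U for the value created, F for the fraction of the problem solved, W for the total work going into the area), is:

```latex
\frac{dU}{dW}
  = \underbrace{\frac{dU}{dF}}_{\text{scale}}
  \times \underbrace{\frac{dF}{d(\ln W)}}_{\text{tractability}}
  \times \underbrace{\frac{d(\ln W)}{dW}}_{\text{uncrowdedness}\;=\;1/W}
```

Multiplying through, the dF and d(ln W) factors cancel, recovering dU/dW, which is the sense in which the factorization is "mathematically trivial."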

So this first term here is measuring the amount of value you get for, say, getting an extra one percent of a solution. And that roughly tracks how big a deal the whole problem that you're looking at is, the whole area. And so that, I think, is a pretty good precise version of the notion of scale.

The second one is a little bit more complicated. It's an elasticity, here, which is a technical term. It's actually a pretty useful and general term (go look it up on Wikipedia, if you're interested). Here it's measuring, for a proportional increase in the amount of work that's being done, what proportion of a solution does that give you?

And then the final term actually just cancels to one over the total amount of work being done. So that's very naturally a measure of uncrowdedness.

People have talked about this kind of scale, tractability, uncrowdedness framework for a few years without having a precise version. And that means that people have given different characterizations of the different terms, and I think there have been a few different versions of tractability, not all of them lining up with this exactly. But I think that this idea, of measuring how much extra work actually gets you towards a solution, is fairly well captured by this version here.
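As a sanity check on that reading of the terms, here is a small numerical sketch (the value of a full solution, the returns curve, and the amount of existing work are all invented): multiplying scale, tractability and uncrowdedness recovers the same number as differentiating the value of work directly.

```python
# Toy check that scale x tractability x uncrowdedness = marginal value of extra work.

V_TOTAL = 1_000_000      # assumed value of a complete solution to the problem
W = 50.0                 # assumed total work already going into the area

def fraction_solved(w):
    # Assumed saturating returns: early work solves a lot, later work less.
    return w / (w + 200.0)

def derivative(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

scale = V_TOTAL                                    # dU/dF: value per extra unit of solution
tractability = W * derivative(fraction_solved, W)  # dF/d(ln W): solution per proportional extra work
uncrowdedness = 1.0 / W                            # d(ln W)/dW: proportional increase per unit of work

factored = scale * tractability * uncrowdedness
direct = derivative(lambda w: V_TOTAL * fraction_solved(w), W)

print(f"factored: {factored:.3f}")
print(f"direct:   {direct:.3f}")   # agrees with the factored version
```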

All three dimensions

And I think that all of these dimensions, again, matter. And again, that means we probably don't want to do or work on something, which does absolutely terribly on any of the dimensions. I'm not going to spend an hour helping a bee, even if nobody else is helping it and it would be pretty easy to help, because just the scale of it is pretty small. I don't think we should work on perpetual motion machines, even though basically nobody is working on it and it would be really fantastic if we succeeded. Because it seems like it's not tractable.

And there's nothing which is extremely crowded (you can't force this term that low, just because there are only seven billion people in the world), but this might give us a warning against actually working on climate change. Because at a global scale, that gets a lot of attention as a problem.

I'm going to add some more caveats to that one. One is that this is only true while we think that there are other problems which are significantly more under-resourced. And another is that you might be an exception if you have a much better way of making progress on climate change than the typical work that is being done on it.

Even so, I think maybe we should find it a bit surprising that I'm making a statement like "climate change is not a high-priority area." This just sounds controversial, and we should be skeptical of it. But I think that the term "high priority" is a little bit overloaded, and so I want to distinguish its senses a little.

Absolute and marginal priority

If we have these two places where there's gold in the ground, and we say, “Where should we send people if we want to get gold?” The answer is going to depend. Maybe we send the first person to this place on the right, where there's only a little bit of gold, but it's really easy to get out. And then we send the next 10 people to the place on the left, just because there's more total gold there. The first person will already have gotten most of the gold on the right. And we want more people total working on this place on the left. Which of these is higher priority? Well, that just depends on which question you're actually asking.


These numbers are just made up off the top of my head, but we might have some kind of distribution like this on the left, where if we ask the question “How much should the world spend on this area, total?” we get one distribution, where maybe climate change actually looks very big on this.

And if we instead ask how valuable marginal spending is, the graph might actually look quite different, because here it depends significantly on how much is already being spent. You'll see some black lines on the diagram on the left - they might represent how much is already being spent. And then the graph on the right is a function of all sorts of things: how much should be spent in total, how much is already being spent and, of course, what the marginal returns are - what the curve looks like there.
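Here is a small sketch of that distinction, with made-up numbers standing in for the ones on the slide: under an assumed diminishing-returns curve, the area that deserves the most total spending can still offer lower marginal value, simply because it already receives far more resources.

```python
# An area can merit the most spending in absolute terms while extra resources
# there do less good, because it is already well funded. All numbers invented.

def value(importance, spend, half_saturation):
    # Assumed diminishing-returns curve: half the achievable value at `half_saturation` spend.
    return importance * spend / (spend + half_saturation)

def marginal_value(importance, spend, half_saturation, step=1.0):
    return value(importance, spend + step, half_saturation) - value(importance, spend, half_saturation)

# (name, value of fully addressing the problem, current global spend) -- all hypothetical.
areas = [("climate change", 1000.0, 300.0), ("neglected area X", 200.0, 5.0)]

for name, importance, spend in areas:
    print(f"{name:16s} absolute importance={importance:7.1f} "
          f"marginal value of one more unit={marginal_value(importance, spend, 50.0):6.2f}")
```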

But I think that both of these are actually important notions, and which one we use should depend on what we're talking about. If we're having a conversation about what we as individuals or as small groups should do, I think it's appropriate to use this notion of marginal priority, of how much extra resources help. If we're talking about what we collectively as a society or as a world should do, I think it's often correct to talk about this notion of absolute priority, of how many resources ought to be invested in it in total.

Okay, for most of the things here, I've been extremely agnostic about what our view of value is. Just for this point, I'm going to start making more assumptions. I think quite a few people have the view that what we want to do is try and make as much value over the long term, as we can. Some people don't have that view, some people haven't thought about it. If you don't have that view, you can just treat this as a hypothetical: “Now I can understand what people with that view would think.” If you haven't thought about it, go away and think about it, some time. It's a pretty interesting question, and I think it's an important question, and is worth spending some time on.

But, if we do care about creating as much value in the long term as possible, in our gold metaphor, that might mean wanting to get as much gold out of the ground eventually, as possible, rather than just trying to get as much gold out of that ground this year.

Long-term gold

And maybe we have some technologies which are destructive. So we can use dynamite and dynamite gets us loads of gold now, but it also blows up some gold and now we never get that gold later. And so that could be pretty good, if you are focusing just on trying to get gold in the short term. But it could be bad from this eventual gold perspective.

If we have different technologies that we can develop, maybe we can develop some that are also efficient but less destructive. And there are going to be some people in the world who do care about creating as much gold as possible in the short term. Then they're going to use whichever technology is the most efficient for that. And so one of the major drivers of how much gold is eventually extracted is the order in which the technologies are developed, and the sequencing. If we discover the dynamite first, people are going to go and have fun with their dynamite and they're going to destroy a lot of the gold. If we discover the drill first, then by the time dynamite comes along, people will go “Well, why would we use that? We have this fantastic drill.”

So philosophers like Nick Bostrom have used this to argue for trying to develop societal wisdom and good institutions for decision making before developing technologies or progress which might threaten the long-run trajectory of civilization. And also for focusing on differentially developing technologies which enhance the safety of new developments, rather than, or before, anything that's driving risk.

Working together

Okay. So now I'm going to talk about how this is actually a collaborative endeavor. We're not just all, each of us individually, going, "Okay. I need to work out where the most gold is, and what's most neglected and most tractable, and then I personally am just going to go and do that." Because there's a whole lot of people who are thinking like this, and there are more every year. I'm really excited about this. I'm really excited to have so many people here, and also about the idea that maybe in two years' time, we'll have a lot more again.

But then we need to work out how to cooperate. Largely we have the same view, or pretty similar views, about what to value. Maybe some people think that silver matters too - it's not just gold - but we all agree that gold matters. We're basically cooperating here. We want to be able to coordinate and make sure that we're getting people working on the things which make the most sense for them.

Comparative advantage

So, there is this idea of comparative advantage. So I have Harry, Hermione and Ron, and they have three tasks that they need to do, in order to get some gold. They need to do some research, they need to mix some potions, and they need to do some wand work. Hermione is the best at everything, but she doesn't have a time turner, so she can't do everything. So we need to have some way of distributing this. And this is the idea of comparative advantage. Hermione has an absolute advantage on all of these tasks, but it would be a waste for her to go and work on the potions because Harry is not so bad at potions. And really, nobody else is at all good at doing the research in the library. So we should probably put her on this.
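A small sketch of the Harry, Hermione and Ron example (the productivity numbers are invented): brute-forcing every assignment of one person per task shows that total output is highest when Hermione takes the research, where her relative edge is biggest, even though she is better than everyone at every task.

```python
# Toy comparative-advantage example: one person per task, maximise total output.
from itertools import permutations

people = ["Harry", "Hermione", "Ron"]
tasks = ["research", "potions", "wand work"]

# Hypothetical productivity of each person at each task (Hermione is best at all three).
productivity = {
    "Harry":    {"research": 2, "potions": 7, "wand work": 6},
    "Hermione": {"research": 9, "potions": 8, "wand work": 8},
    "Ron":      {"research": 1, "potions": 4, "wand work": 5},
}

def total_output(assignment):
    return sum(productivity[person][task] for person, task in assignment.items())

best = max((dict(zip(people, perm)) for perm in permutations(tasks)), key=total_output)
print(best, "->", total_output(best))
# {'Harry': 'potions', 'Hermione': 'research', 'Ron': 'wand work'} -> 21
```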

And this is a tool that we can use to help guide our thinking about what we should do as individuals. If I think that technical work in some domain is the most valuable thing to be doing, but I would be pretty mediocre at it, and I'm a great communicator, then maybe I should go into helping technical researchers in that domain communicate their work, in order to get more people engaged with it and bring in more fantastic people.


So that's applying this at the individual level. We can also apply this at the group level. We can notice that different organizations or groups may be better placed to take different opportunities.

And this is a bit more speculative, but I think we can also apply this at the level of time. We can ask ourselves, "What are we, today, the people of 2016, particularly well suited to do, versus people in the past and people in the future?" Okay, we can't change what people in the past did. But we can make this comparison of what our comparative advantage is relative to people in the future. And if there are going to be a number of different possible challenges in the future that we need to meet, it makes sense that we should be working on the early ones. Because if there are challenges coming in 2020, the people of 2025 just don't have a chance to work on them.

Another thing which comes in here is that we are in a position, perhaps, to influence how many people there will be in the future who are interested in and working on these challenges. We have more influence over that than people in that future scenario do, so should we think about whether that makes sense as a thing for us to focus on?

Building a map together

Another particularly important question is how to work stuff out. The world is big and complicated and messy. And we can't expect all of us, individually, to work out perfect models of it. In fact, it's too complicated for us to expect anybody to do this. So, maybe we're all walking around with little ideas which, in my metaphor here, are puzzle pieces for a map to where the gold is. We want institutions for assembling these into a map. It's a bit complicated, because some people have puzzle pieces which are from the wrong puzzle and don't actually track where gold is. Ideally, we'd like our institutions to filter these out and only assemble the correct pieces to guide us where we want to go.


As a society, we've had to deal with this problem in a number of different domains, and we've developed a number of different institutions for doing this. So there's the peer review process in science. Wikipedia does quite a lot of work aggregating knowledge. Amazon reviews aggregate knowledge that individuals have about which products are good. Democracy lets us aggregate preferences over many different people to try and choose what's actually going to be good.


Of course, none of these institutions are perfect. And this is a challenge. This is like one of those wrong puzzle pieces, which made it into the dialogue. And this comes up in the other cases as well. The crisis of replication in parts of psychology has been making headlines recently. Wikipedia, we all know, sometimes gets vandalized and you go and you just read something which is nonsense. Amazon reviews have problems of people making fake reviews, to make their product look good or other people's products look bad.

So, maybe it's the case that we can adapt one of these existing institutions for our purpose, which is trying to aggregate knowledge about what are the ways to go and do the most good. But maybe we want something a bit different, and maybe somebody in this room is going to do some work on coming up with valuable institutions for this. I actually think this is a really important problem. And it's one that is going to just become more important for us to deal with as a community, as the community grows.

Good local norms

That was all about our global institutions for pulling this information together and aggregating it. Another thing which can help us move towards getting a better picture is trying to have good local norms. So, we tell people the ideas that we have, and then other people maybe start listening. And sometimes it might just be that they listen based on the charisma of the person who is talking, more than on the truthiness of the puzzle piece. But we'd like to have ways of promoting the spread of good ideas, inhibiting the spread of bad ideas, and also encouraging original contributions. One way of trying to promote the spread of good ideas and inhibit bad ideas is just to rely on authority. We'll say, "Well, we've worked out this stuff. We're totally confident about this. And now we just won't accept anything else." But that isn't going to let us actually get anything new.

Why we believe things

I think something to do here is to pay attention to why you believe something. Do you believe it because somebody else told you? Do you believe it because you've really actually thought this through carefully and worked it out for yourself? There's a blur between those. Often somebody tells you and they kind of give you some reasons. And you're like, “Oh, those reasons kind of check out,” but you haven't gone and deeply examined the argument yourself.

And I think it's useful to be honest with yourself about that, and then also to communicate it to other people, to let them know why it is that you believe it. Is it that you believe this because Joe Bloggs has told you? And actually, Joe is a pretty careful guy, and he's pretty diligent about checking out his stuff, so you think it probably makes sense. You can just communicate that. Or is it that you cut out this puzzle piece yourself?

Now, cutting it out yourself doesn't necessarily mean we should have higher credence in it. I've definitely thought I'd worked things out and proved things before, and there was a mistake in my proof. So you can separately keep track of the level of credence you have in a thing, and of why you believe it.

And also, our individual and collective reasons for believing things can differ. So here's a statement: that it costs about $3,500 to save a life from malaria. I think this is broadly believed across the Effective Altruism community. I think that, collectively, the reason we believe this is that there have been a number of randomized controlled trials, and then some pretty smart, reasonable analysts at GiveWell have gone and looked carefully at this, and they've dived into all the counterfactuals and produced their analysis, and they say, "On net, it looks like it's about $3,500."

Shortening the chain

But that isn't why I believe it. I believe it because people have told me that the GiveWell people have done this analysis and they say it's $3,500. And they say, "Oh, yeah. I read it on the website." Actually, that was why I believed it, until I started prepping for this talk, when I went and read it on the website myself. This is a bit more work for me, but it creates a bit of value for the community, because I'm shortening the chain of Chinese whispers passing this message along. As things get passed along, it becomes more likely that mistakes enter, or that something isn't well grounded and then gets repeated. By going back and checking earlier sources in the chain, we can try to reduce that, and try to make ourselves more robustly confident in these statements.

Disagreement is an opportunity to learn

Another thing that comes up is when you notice that you disagree with somebody. If you're sitting down and talking with someone and they're saying something, and you're like, "Well, that's obviously false," you can see perhaps that parts of their jigsaw puzzle are wrong. You could just dismiss what they have to say, but I think that's often not the most productive thing to do. Because even if part of what they have to say is wrong, maybe some other part that's going into their thinking process would fill a gap in your perspective, and help you to have a better picture of what's going on.

I often do this actually when I find that someone has a perspective that I think is unlikely to be correct. I'm interested in this process of how they get there and how they think about it. Partly this is just that people are fascinating and the way that people think is fascinating so this is interesting. But I also think that it's polite and I think it's useful. I think it does help me to build a deeper picture of all the different bits of evidence that we have collectively.

Retrospective - What I believe and why

Okay. So, in this section, I'm going to put the stuff I've just been talking about into action. I've told you about a whole load of different things through this talk, but I didn't tell you much about exactly how confident I am in them, or why I believe them. So, I'm going to do that here.

I'm aware that nobody ever goes away from a talk saying, “Oh, that was so inspiring. The way she carefully hedged all her statements.” But I think it's important. I would like people to go away from talks saying that. So I'm just going to do it.

Heavy-tailed distributions

So, heavy-tailed distributions. I think it's actually pretty robust that the kind of baseline distribution of opportunities in the world does follow something like this, a distribution with this heavy-tailed property. I think that just seeing this in many different domains, and understanding some of the theory behind why it should arise, makes it extremely likely. I think that there's an open empirical question as to exactly how far that tail goes out. Heavy-tailedness isn't just a binary property, it's a continuum. Anders Sandberg is going to be talking more about this, I think, later today.

Altruistic market efficiency

But there's an important caveat here. This is the only digression I've allowed myself. There's a mechanism which might push against that heavy-tailedness, which is people seeking out and taking the best opportunities. If people are pretty good at identifying the best opportunities, and they are uniformly seeking out and taking them, then the best things that are left might not be so much better.


And this comes up in just regular markets. Ways to make money maybe actually start out distributed across a wide range. This is on a log scale now, so it's meant to represent one of those heavy-tailed distributions. But then people who are losing money say, "Well, this sucks," and they stop doing that thing. And they see other people who are doing activities which are making lots of money, and they're like, "Yeah, I'm going to go do that." And then you get more people going into that area, and then diminishing returns mean that you actually make less money than you used to by doing stuff in that area. So you end up, afterwards, with a much narrower distribution of the value being produced by people doing these different things than you started with.
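Here is a toy simulation of that mechanism (the numbers and the diminishing-returns rule are invented): people keep joining whichever activity currently offers the best per-person return, and crowding then pulls those returns down, so realised returns end up spread far more narrowly than the underlying opportunities.

```python
# Toy "market efficiency" sketch: crowding into the best opportunities narrows returns.
import random

random.seed(1)

# Underlying value of 20 activities: heavy-tailed, a few are far better than the rest.
base_value = [random.lognormvariate(0, 1.5) for _ in range(20)]
counts = [0] * 20

# 200 people each join whichever activity currently offers the best per-person return,
# under an assumed crude rule that n people in an activity each get base_value / n.
for _ in range(200):
    best = max(range(20), key=lambda a: base_value[a] / (counts[a] + 1))
    counts[best] += 1

realised = [base_value[a] / counts[a] for a in range(20) if counts[a] > 0]
print("underlying values:  min %.2f  max %.2f" % (min(base_value), max(base_value)))
print("realised returns:   min %.2f  max %.2f" % (min(realised), max(realised)))
```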


We might get a push in that direction among opportunities to create altruistic value. I certainly don't think that we are at a properly efficient market. I'm not sure how efficient it is, how much we are curbing that tail. I hope that, as this community grows and we get more people who are actively trying to choose very valuable things, that will mean the distribution does get less heavy-tailed because of this.

One of the mechanisms that leads to efficiency in regular markets is the feedback loops, where people just notice they're getting rich or that they're losing money. Another mechanism is people doing analysis, and they do this because of the feedback loops, trying to work out that actually we should put more resources there, because then we'll get richer. I think that doing that analysis is an important part of this project that we're collectively embarking on here.

So, overall, I don't think that we do have an efficient market for this. I do think we have heavy-tailed distributions. I'm not sure how extreme, but that's because of the fact that it responds to actions people are taking.

Factoring cost-effectiveness

Factoring cost effectiveness, I think that basically, this is just an extremely simple point, and there isn't really space for it to be wrong. But there's an empirical question as to how much these different dimensions matter. It might be that you just have way more variation in one of the dimensions than others. Actually, I don't have that much of a view of how much the different dimensions matter. We saw that the intervention effectiveness within global health varied by three or four orders of magnitude. Area effectiveness, I think, may be more than that, but I'm not sure how much more. Organization effectiveness, I'm just not an expert and I don't want to try and claim to have much of a view on that.

Diminishing returns

Diminishing returns: I just think this is an extremely robust point. Sometimes, in some domains, there are actually increasing returns to scale, where you get efficiencies of scale and that helps you. I think that more often applies at the organization scale, to an organization within a domain, whereas diminishing returns often applies at the domain scale. But I do know some smart people who think that I am overstating the case for diminishing returns. So although I think, personally, that there's a pretty robust case, I would add a note of caution there.

Scale, tractability, uncrowdedness

Scale, tractability, neglectedness: I think it's obvious that they all matter. I think it's obvious, just trivial, that this factorization is correct as a factorization. What's less clear is whether it actually breaks things up into pieces that are easier to measure, and whether this is a helpful way of doing it. I think it probably is. We get some evidence from the fact that it loosely matches up with an informal framework that people have been using for a few years, and seem to have found helpful.

Absolute and marginal priority

Absolute and marginal priority, again, at some level, is just trivial. I brought this up as a point about communication, because I think not everybody has these as separate notions, and we can confuse each other if we blur them.

Differential progress

Differential progress: I think that this argument basically checks out. It appears in a few academic papers. It's also believed by some of the smartest and most reasonable people I know, which gives me some evidence that it might be true beyond my own introspection. It hasn't had that much scrutiny, though, and it's a bit counterintuitive, so maybe we want to expose it to more scrutiny.

Comparative advantage

Comparative advantage is just a pretty standard idea from economics. Normally, markets work to push people into working in the way that utilizes their comparative advantage. We don't necessarily have that mechanism here, when we're aiming for altruistic value.

The application across time is also a bit more speculative. I'm one of the main people who has been trying to reason this way. I haven't had anybody push back on it, but take it with a bit more salt, because it's just less well checked out.

Aggregating knowledge

Aggregating knowledge: I think everyone tends to think that yes, we want institutions for this. And I think there's also pretty broad consensus that the existing institutions are not perfect. Whether we can build better institutions, I'm less certain.

Sharing reasons for beliefs

Stating reasons for beliefs: this, again, is something where I think it's kind of common sense that, all else equal, it's a good thing. But of course, there are costs to doing it. It slows down our communication, and it may just not sound glamorous, and therefore be harder to get people on board with. I think that at least we want to nudge people in this direction, but I don't know exactly how far. We don't want to be overwhelmingly demanding about it. I, to some extent, believe this because a load of smart, reasonable people I know think that we want to go in this direction, and I weigh other people's opinions when I don't see a reason that I should have a particularly better perspective than them.

Conclusions

Okay. Finally, why have I been sharing all of this with you? People can go and mine gold without understanding all these theoretical things about the distribution of gold in the world. But because the value we're after is invisible, we need to be more careful about aiming at the right things. And so I think it's more important for our community to have this knowledge broadly spread. I think that we are still in the early days of the community, and so it's particularly important to try to get this knowledge in at the foundations, and to work out better versions of it. We don't want the kind of gold rush phenomenon where people charge off after something and it turns out there wasn't actually that much value there.


This is part of a series of articles setting out the key ideas in effective altruism.
