To solve the world’s biggest problems, we need to direct as much money and talent towards them as possible. We need a community of people who are bright, motivated, and trying to do as much good as they can. They also need the ability, time, and incentives to work on whatever seems most important. We think effective altruism could be such a community, and therefore investing in it could be very high impact.
The community has a good track record so far. More than 3,000 peoplefn-1 have pledged to donate at least 10% of their income to the most effective charities. GiveWell moved over $100 million to its top charities in 2015 alonefn-2, including $38 million to the Against Malaria Foundation (enough to buy 19 million malaria nets at current pricesfn-3) and $50 million to GiveDirectly, which transfers money directly to some of the poorest people in sub-Saharan Africa. Animal Charity Evaluators (ACE) moved nearly $3 million to its top recommended charities in 2016, and over $800,000 to its standout charities.fn-4 The Open Philanthropy Project grew out of a partnership with a large foundation, and is now directing billions of dollars to the most effective causes: funding research and advocacy to improve farm animal welfare, for example, as well as research on high-priority neglected problems such as criminal justice reform and biosecurity.fn-6 80,000 Hours has helped hundreds of people to change their career plans to have more impact, including by directing more talent towards research into risks from artificial intelligence.fn-7
This profile sets out why you might want to focus on building the effective altruism community - and why you might not. One key question is whether you think investing in the effective altruism community is better than working on the most important problems more directly. Community building is especially promising if you’re uncertain about which problems are the most pressing right now, or if you expect to change your mind about this.
Investing in the effective altruism (EA) community can mean several different things in practice:
A final area which might sometimes fall into this category is investing in cause prioritisation research. This helps us to identify which problems in the world are most pressing. Though closely related, we think the reasons for investing in cause prioritisation research are slightly different from those for EA community building, so we focus on the latter here. For a more detailed discussion of this area, see 80,000 Hours’ profile on global priorities research.
One dollar spent on building the community can cause others to donate more than one dollar, or to do more than one dollar’s worth of good. This means that your impact is “multiplied” by donating to community building. For example, Giving What We Can estimates that for every dollar it has spent on creating and growing its community, its members have already given $6 to effective charities (i.e. not including future pledged donations).fn-8
Investing in the effective altruism community is also a robust approach. It is likely to be valuable even if we change our minds about which causes are most effective. By inspiring others to do more good and think about effectiveness, we increase the pool of people and resources available to take up new challenges as they arise.
EA community building also seems neglected. There are between 25 and 100 people working on building the EA community full time, with an annual budget (as of 2016) of around $8 million.fn-9 There do seem to be good opportunities available which build on existing community-building work: effective altruism is still a relatively small movement which could grow a great deal. Moreover, it is in need of more talent: many effective altruism organisations are actively hiring.
Investing in a broad area like EA community building may also provide opportunities for moral cooperation between people who disagree about what the most important problems are.
For example, suppose Alice and Bob have the same amount to donate, and are deciding where to give. Alice thinks that poverty is the most pressing problem, and Bob thinks that animal welfare is. Both think that investing in EA community building will help their respective causes, but only around 60% as much as investing in the problems directly. Notice, though, that a donation to community building advances both causes at once: if both donate to EA community building, each cause benefits from both donations, receiving around 120% of the value of a single direct donation. So if Alice and Bob cooperate - if they both agree to donate to EA community building - then each can actually get more of what they value than if both instead donated to their preferred causes directly.
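The arithmetic behind the cooperation example can be sketched in a few lines. The 60% figure is from the example above; the variable names are just illustrative.

```python
# Payoff sketch for the Alice/Bob cooperation example.
# Each donor values a direct donation to their own cause at 1.0, and a
# donation to EA community building at 0.6 per cause (the 60% figure
# from the example). A community-building donation helps BOTH causes.

DIRECT = 1.0      # value to a donor of a direct donation to their own cause
COMMUNITY = 0.6   # value of one community-building donation, per cause

# If both donate directly, each cause receives exactly one direct donation.
value_if_both_direct = DIRECT                # 1.0 per donor

# If both donate to community building, each cause benefits from both
# donations, so each donor's cause receives 2 * 0.6 of a direct donation.
value_if_both_cooperate = 2 * COMMUNITY      # 1.2 per donor

print(value_if_both_direct, value_if_both_cooperate)
```

On these numbers, cooperation gives each donor 1.2 units of value for their preferred cause rather than 1.0, so both prefer the cooperative outcome even though each values community building less than direct work.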
The EA community itself also provides opportunities for people with different values or beliefs to work together on a shared goal.
It might seem self-serving for effective altruists to recommend growing their own community. What should we think when people recommend that we fund their own work? On the one hand, this is exactly what you would expect from someone who is interested in power rather than helping others. On the other hand, it is also exactly what you would expect from someone who genuinely believes, for good reason, that their work is high impact. In fact, it would be strange for someone in this position not to recommend that others support their cause. It is therefore difficult to assess the extent to which this is a problem in any given case.
We might be more confident that someone requesting funding is altruistically motivated if they can make a compelling case for funding them. It is also easier to judge motivation if the organisation or individual is exceptionally transparent about what they are doing and why: this gives you more information about their decision-making process, and makes trust easier. One particularly important factor here may be how they respond to feedback. Knowing some of the employees personally can also increase your confidence in their integrity (although this may bias you). Many organisations in this space are very transparent about their operations, which makes it easier to build the requisite trust. However, some degree of scepticism here is understandable.
Of course, we don’t directly care about investing in the EA community - we care about it because we think it’s likely to lead to more resources being directed towards the most important problems. But if we prioritise this too highly, we might actually make little progress on real problems. If everyone in EA focused on building the community, then we might have a strong, capable community which never actually did anything but invest in its own strengths and capabilities.
There certainly needs to be a balance between building capacity and actually solving problems. The argument given above is for why investing in community building seems promising on the margin. Given that relatively little funding currently goes into it, work here may be particularly valuable. But there will be a point where funds are better spent on other things.
Investing in the community and ideas around effective altruism is potentially very high value. But there are still a number of reasons you might choose not to work on it, or why you might choose to prioritise other cause areas.
When we invest in the EA community, we are not investing directly in what we care about. We don’t care about having a strong and capable community for its own sake. This means that getting evidence for the impact of community building requires making more assumptions, so the evidence is weaker than for more direct interventions (in global health, for example).
However, the greater uncertainty in these estimates is traded off against a potentially higher expected value overall. It’s not clear that we should prioritise proven interventions over ones which are more uncertain but have higher potential impact, so this choice needs justification.
We said that investing in the EA community is a particularly good option if you’re uncertain what object-level causes are most important. If you think that a specific cause area is incredibly important right now, and it is relatively unlikely that you will change your mind about this, then this argument from uncertainty has much less weight.
Similar reasoning might apply if you think that you’re well placed to work on a specific problem. For instance, you might have a deep understanding of an area which makes it easier for you to recognise good opportunities, or to solve certain problems. If you have a decade’s experience researching biotechnology, the best way for you to have an impact may be doing research into the risks from biotechnology. This might be true even if you think that this is not in general the highest priority problem in the world.
However, even if you’re convinced that a certain cause area is highest priority (or that you can contribute much better to a specific problem), you might still be able to multiply your impact by working in advocacy or research rather than trying to solve the problem directly. In some cases, this might mean investing in the EA community. For example, you might donate to or work for GiveWell to leverage more donations towards global poverty causes.
Building the capabilities of the EA community is similar to an individual building career capital early in their career. It often makes sense to build skills and influence before trying to make an impact directly. However, there is a tradeoff here for both individuals and the community more broadly: we want to spend some time building skills, influence, and knowledge, but we don’t want to pass up good opportunities to have a real impact. There’s a difficult judgement call to be made: how many resources do we invest in building capability relative to more direct work?
80,000 Hours generally recommends spending the early parts of your career prioritising career capital, unless you have very clear opportunities to do good more directly. So you might spend 5 to 10 years building career capital before doing good directly. But what are the “early stages” of the effective altruism movement? Are we still in the early stages now? It seems likely, especially given that effective altruism as an idea has only existed since about 2011,fn-10 and the community still seems to be growing quickly,fn-11 but there is some uncertainty here.
Related to the previous point, you might think that working on concrete problems is actually one of the best ways to build capacity.
It might be that one of the best ways to build a community that can solve the world’s most pressing problems is for people and organisations to start by trying to solve what currently look like the most pressing problems. Even if we then change our minds about which problems to work on, many of the skills learned will be transferable. The strength of this argument obviously depends on just how transferable these skills are.
Relatedly, working on concrete problems can help to ensure that the values of the community remain focused on helping others. This may help prevent people gradually investing more in themselves, and less in others, and never fulfilling their original altruistic intentions.
We also need to consider how the EA community is perceived. Even if we avoid the traps just mentioned, it might look externally like we’re over-investing in ourselves, and this would decrease the impact of EA. Arguably one reason that the EA community has been growing quickly is that it has achieved impressive things, such as identifying giving opportunities that are much more effective than average, and directing large amounts of funding towards those opportunities. If more resources were put into growing capacity relative to funding and working on concrete problems, our achievements might seem less impressive to outsiders, damaging the community’s reputation.
Worse, a community of people building up skills, connections, and influence to help the world can look suspiciously similar to a community of people doing the same things for selfish reasons. This can invite scepticism about how genuine the community’s altruism is. These kinds of concerns can be an additional reason to invest slightly more effort into direct work that yields immediate, tangible impact.