
Fin Moorhouse

January 27, 2021

'Longtermism' is the view that positively influencing the long-term future is a key moral priority of our time. Although ethical concern for the long-term future is not a new idea, only recently has a serious intellectual project emerged around it — asking whether it is legitimate, and what it implies.

Can we plausibly influence the very long-run future? If so, how important could it be that we do what we can to improve its prospects? An increasing number of people who have thought deeply about these questions are answering: "Yes we can, and very".

If we discover that we can act now to improve how things turn out very far into the future, that doing so is very important, and that we're not currently taking those opportunities, then longtermism will turn out to be an enormously important set of ideas. Not just abstractly: there might actually be ways you could begin working on some of these ideas yourself. That could be a good reason to want to get to grips with longtermist thinking.

If you're interested in effective altruism, you might also just want to know why a significant amount of EA-influenced philanthropic spending already goes toward longtermist problem areas (especially those focused on risk), even though urgent and pressing problems like extreme poverty and factory farming remain tractable, neglected, and important.

The plan for this post, then, is to explain what longtermism is, why it could be important, and what some possible objections look like.

By 'the long-run future', I mean something like 'the period from now to many thousands of years in the future, or much further' — not just this century. I'll also associate longtermism with a couple of specific claims: first, that this moment in history could be a time of extraordinary influence over the long-run future; and second, that society currently falls far short of what it could be doing to ensure the long-run future goes well. I mention some more precise definitions nearer the end.

The case for longtermism

The most popular argument for longtermism in the (tiny) extant philosophical literature is consequentialist in flavour. This means it appeals to the importance and size of the effects certain actions can have on the world as reasons for taking those actions.

The argument has roughly three parts. First, there is the observation that the goodness or badness of some outcome doesn't depend on when it happens. Second, there's the empirical claim that the future could plausibly be very big (in duration and number of lives), and very valuable (full of worthwhile lives and other good things). Third, there's the empirical claim that there are things we can do today that have a not-ridiculously-small chance of influencing that future by a not-ridiculously-small amount — and sometimes much more. Putting these together, the conclusion is that there are things we can do today that have a decent shot at making an enormously valuable difference.

Let's go through these stages.

Future effects matter just as much

Longtermists claim that, all else being equal, (far) future effects matter intrinsically just as much as more immediate effects.

This claim isn't denying that there are many practical reasons to prefer focusing on making things go well right now versus trying to make things go well 1,000 years from now. The most obvious practical reason is that it's typically far easier to reliably affect how things go in the immediate future, and typically unclear how to reliably affect what happens in the far future. After all, we know about today's pressing problems, but not what pressing problems the world might face a few centuries from now.

Another reason is that working on more immediate problems often also looks like a good way to improve the far future. For instance, helping to develop green energy technology solves pressing problems today, like energy poverty and extreme weather from climate change. It could also stand to robustly benefit the long-run future, but you don't need to focus on long-run effects to see that it could be worthwhile.

Thirdly, immediate effects sometimes compound over time to yield even more benefit later, which could give a practical reason for preferring to cause certain kinds of compounding effects now, rather than in the future.

Longtermism isn't denying any of that, because of the 'all else being equal' part. The idea is more like: suppose you can confer some fixed total amount of harms or benefits right now, or in very many years. We're imagining no knock-on effects, and you're equally confident you can confer these harms or benefits at either time. Is there something about conferring those harms or benefits in the (far) future which makes them intrinsically less valuable? In other words, might we be prepared to sacrifice less to avoid the same amount of harm if we were confident it would occur in the (far) future, rather than next year? Longtermism answers 'no'. There is no principled reason, it claims, for intrinsically valuing (far) future effects any less than more immediate effects.

Why think this? On reflection, the fact that something happens far away from us in space doesn't make it less intrinsically bad than if it happened nearer to us. And people who live on the other side of the world aren't somehow worth less just because they're far away from us. This realisation is part of a family of views called cosmopolitanism. Longtermism simply extends this view from space to time. If there's no good reason to think that things matter less as they become more spatially distant, why think things matter less if they occur further out in time?

There are two broad objections to the claim that the value of an outcome is the same no matter when it occurs. The first is that we might want to 'discount' the intrinsic value of future welfare, and other things that matter; in fact, governments do employ such a 'social discount rate' when they do cost-benefit analyses. The second is that outcomes in the future affect people who don't exist today. I'll tackle the first objection now and the second (thornier) one in the 'objections' section.

Discounting

The simplest version of a social discount rate scales down the value of costs and benefits by a fixed proportion every successive year. Very few moral philosophers have defended such crude discount rates.

In fact, the reason governments employ a social discount rate may be more about electoral politics than deep ethical reflection. On the whole, voters prefer to see slightly smaller benefits soon compared to slightly larger benefits a long time from now. Plausibly, this gives governments a democratic reason to employ a social discount rate. But the question of whether governments should respond to this public opinion is separate from the question of whether that opinion is right.

Economic analyses sometimes yield surprising conclusions when a social discount rate isn't included, which is sometimes noted as a reason for using one. But longtermists could suggest that this gets things the wrong way around: since a social discount rate doesn't make much independent sense, maybe we should take those surprising recommendations more seriously.

Here's an argument against 'pure time discounting' from Parfit (1984). Suppose you bury some glass in a forest. In one case, a child steps on the glass in 10 years' time, and hurts herself. In the other case, a child steps on the glass in 110 years' time, and hurts herself just as much. If we discounted welfare by 5% every year, we would have to think that the second case is over 100 times less bad than the first — which it obviously isn't. Since discounting is exponential, this shows that even a modest-seeming annual discount rate implies wildly implausible differences in intrinsic value over large enough timescales. Applying even a 3% yearly discount rate implies that outcomes now are more than 2 million times more intrinsically valuable than outcomes 500 years from now.
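
To make vivid how quickly exponential discounting compounds, here is a minimal sketch of the arithmetic behind the figures above. The rates and time horizons are just the ones quoted in the paragraph, and the function name is my own; this is an illustration, not anyone's official methodology.

```python
def discount_factor(annual_rate: float, years: float) -> float:
    """How many times less an outcome counts when delayed by `years`,
    under a constant annual discount rate."""
    return (1 + annual_rate) ** years

# Parfit's buried glass: a 5% rate makes the same harm, delayed by a
# further 100 years, count roughly 130 times less.
print(discount_factor(0.05, 110 - 10))   # ~131.5

# A 3% rate makes outcomes 500 years away count millions of times less.
print(discount_factor(0.03, 500))        # ~2.6 million
```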

Consider how a decision-maker from the past would have weighed our interests in applying such a discount rate. Does it sound right that a despotic ruler from the year 1500 could have justified some slow ecological disaster on the grounds that we matter millions of times less than his contemporaries?

The long-run future could be enormously valuable

By 'future', we mean 'our future' — the future we are able to affect, home to our descendants and the lives they lead. You could use the phrase 'humanity's future', but note that 'humanity' doesn't just need to mean 'the species Homo sapiens'. We're interested in how valuable the future could be, and whether we might have some say over that value. Neither of those things depends on the biological species of our descendants.

How long could our future last? To begin with, a typical mammalian species survives for an average of 500,000 years. Since Homo sapiens has been around for roughly 300,000 years, we might expect to have about 200,000 years left. This is not an upper bound: one way in which Homo sapiens is not a typical mammalian species could be that humans are able to anticipate and prevent potential causes of our own extinction.

It might also be instructive to look to the future of the Earth. Assuming we don't somehow change things, it looks like our planet will remain habitable for about another billion years (before it gets sterilised by the sun). If humanity survived for just 1% of that time, and similar numbers of people lived per-century as in the recent past, then we should expect at least a quadrillion future human lives.
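
For the curious, here is a rough sketch of the arithmetic behind that quadrillion figure. The figure of about ten billion births per century is my own illustrative stand-in for 'similar numbers of people per century as in the recent past'; treat the whole thing as a back-of-the-envelope calculation, not a forecast.

```python
# Rough, illustrative arithmetic only; births_per_century is an assumed
# stand-in for 'similar numbers of people per century as in the recent past'.
earth_habitable_years = 1_000_000_000   # ~1 billion years before the sun sterilises Earth
survival_fraction = 0.01                # humanity survives just 1% of that time
births_per_century = 10_000_000_000     # assumed: ~10 billion births per century

centuries = earth_habitable_years * survival_fraction / 100
future_lives = centuries * births_per_century
print(f"{future_lives:.0e} future lives")   # ~1e+15, i.e. about a quadrillion
```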

If we care about the overall scale of the future, duration isn't the only thing that matters. We are also interested in how many people (or animals or other beings) could inhabit the future, and its potential duration is only a rough guide to that number.

Since we're interested in the size of the future, it's important to consider that humanity could choose to spread far beyond Earth. We could settle other planets, or even construct vast life-sustaining structures, and early investigations suggest this really could be practically feasible.

If humanity does spread beyond Earth, it could spread unimaginably far and wide, choosing from a wide range of extraordinary futures. The number of stars in the affectable universe, and the number of years over which we may be able to harness their energy, are literally astronomical. And since we would become dispersed over huge stretches of space, we would become less vulnerable to certain one-shot 'existential catastrophes'.

You shouldn't take all these numbers too seriously, and it's right to be a bit suspicious of crudely extrapolating from past trends like 'humans per century'. What matters is that multiple signs point towards the enormous potential — even likely — size of humanity's future.

Metaphors do the same thing in a more creative way. For instance, Greaves and MacAskill analogise humanity's story to a book-length tale, and note that we may well be on the very first page. This makes sense — about 100 billion humans have been born on Earth, and we've just seen that at least a thousand times as many might yet be born. Here's another metaphor from Roman Krznaric's book The Good Ancestor: if the distance from your face to your outstretched hand represents the age of the earth, one stroke of a nail file would erase human history.

However, my favourite metaphor comes from physicist James Jeans. Imagine a postage stamp on a single coin. If the thickness of the coin and stamp combined represents our lifetime as a species, then the thickness of the stamp alone represents the extent of recorded human civilisation. Now imagine placing the coin on top of a 20-metre tall obelisk. If the stamp represents the entire sweep of human civilisation, the obelisk represents the age of the Earth. Now we can consider the future. A 5-metre tree placed atop the obelisk represents the Earth's habitable future. And behind this arrangement, the height of the Matterhorn mountain represents the habitable future of the universe.

The value of the future

If all goes well, the future could also be full of things that matter — overwhelmingly more good than bad. To begin with, technological progress could continue to eliminate material scarcity and improve living conditions. We have already made staggering progress: the fraction of people living in extreme poverty fell from around 90% in 1820 to less than 10% in 2015. Over the same period, child mortality fell from over 40% to less than 5%, and the share of people living in a democracy rose from less than 1% to a majority of the world's population. Arguably it would be better to be born into a middle-class American family in the 1980s than to be born as the king of France around the year 1700. Looking forward, a person born in the further future might enjoy opportunities and capabilities which a present-day billionaire can only dream of.

The lives of nonhuman animals matter morally, and their prospects could also improve a great deal. For instance, advances in protein alternatives could come close to eliminating the world's reliance on farming sentient animals on an industrial scale.

Beyond quality of life, you might also care intrinsically about art and beauty, the reach of justice, or the pursuit of truth and knowledge for their own sakes. Along with material abundance and more basic human development, all of these things could flourish and multiply beyond levels they have ever reached before.

Of course, the world has a long way to go before it is free of severe, pressing problems like extreme poverty, disease, animal suffering, and destruction from climate change. Pointing out the scope of positive futures open to us should not mean ignoring today's problems: problems don't get fixed just by assuming someone else will fix them. In fact, fully appreciating how good things might eventually be could be an extra motivation for working on them — it means the pressing problems of our time needn't be perennial: if things go well, solving them now could come close to solving them for good.

Again, longtermism is not claiming to know what humanity's future, or even its most likely future, will be. What matters is that the future could potentially be enormously large, and full of things that matter.

We can do things to significantly and reliably affect that future

What makes the enormous potential of our future morally significant is the possibility that we might be able to influence it.

This isn't at all obvious: most people from history were not really in a position to shape the long-run future. Yet, there's a compelling case for thinking that this moment in history could be a point of extraordinary influence.

First, I'll briefly consider the kind of 'influence' that would end up being morally significant.

There is a trivial sense in which almost anything we do affects the far future, of course. Things we do now cause future events. Those events become causes of further events, and so on. And small causes sometimes amount to tipping points for effects much larger in scale than the immediate or intended effect of your initial action. As such, the effects of your decisions today ripple forward unpredictably and chaotically. The far future is just the result of all those ripples we send into the future.

The mere fact that our actions affect the far future is poetic but not itself important. What would matter is if we can do so reliably, and in ways that matter (morally).

Suppose you're deciding whether to help an elderly person across the road. The short-term effects are clear enough, and they're best when you help. What about longer-run effects? Well, choosing to help could hold up traffic and lead to a future dictator being conceived later this evening. The reason this consideration doesn't weigh against helping is that symmetrical considerations cut the other way — not helping could just as easily play a role in precipitating some future catastrophe. Since you don't have any reason to believe either outcome is more likely for either option, these worries 'cancel' or 'wash out'. So you're not reliably influencing the likelihood of future dictators by helping.

On the other hand, there are some things we can do which can reliably influence the future, but only by influencing it in ways that are (morally) unimportant. Such actions achieve reliability at the cost of significance. For example, you could carve your initials into a stone, or bury your favourite pie recipe in a time capsule. Both actions could reach people hundreds of years in the future — but those people aren't likely to care very much.

What would matter enormously is if we could identify some actions we could take now which could (i) reliably influence how the future plays out over the long run in (ii) significant, nontrivial ways. One class of actions might involve effects which persist, or even compound, across time. Another class might help tip the balance from a much worse to a much better 'attractor' state — some civilisational outcome which is easy to fall into and potentially very difficult to leave.

The strongest example of an 'attractor' state is an 'existential catastrophe' — a permanent curtailment of humanity's potential. The most salient kind of existential catastrophe is extinction. Despite what I said earlier about the possibility of surviving a long time as a species, we also know that human extinction is possible. One obvious reason is that we can witness the rapid extinction of other animal species in real-time, and often humans are responsible for those extinctions.

Human extinction might have a natural cause, like an asteroid collision. It might also be caused by human inventions — through an engineered pandemic, unaligned artificial intelligence, or even some mostly unforeseen technology.

Other than extinction, we could also end up 'locked in' to a bad and close-to-inescapable political regime, facilitated by widespread surveillance. Such outcomes could be just as bad as extinction, maybe even worse, so they too would count as existential catastrophes.

Might there be anything we can do to reliably decrease the likelihood of some of these existential catastrophes? It doesn't seem ridiculous to think there is.

We might, for instance, investigate ways to make advanced artificial intelligence safer through governance or technical solutions; we could strengthen the infrastructure that regulates experimentation on dangerous artificial human pathogens; or we could make existing political regimes more robust to totalitarian takeover (e.g. by thinking about mechanisms for recovering from such takeovers). Few of us are in a position to do that work directly right now, but many of us are in a position to encourage such initiatives with our support, or even to shift our careers toward that work.

What about changes which improve the future more incrementally? Some surprising econometric evidence suggests that differences in cultural attitudes sometimes persist for centuries. For instance, the anthropologist Joseph Henrich suggests that the spread of the Protestant church explains why the Industrial Revolution occurred in Europe, which in turn explains much of Europe's subsequent cultural and economic success. You might also identify moments in history where the values a political regime or culture settled on were highly sensitive to discussions between a few people in a narrow window of time, but where those values ended up influencing enormous stretches of history, for better or worse. For instance, the belief system of Confucianism has shaped much of Chinese history, but it was only one of several belief systems (including Legalism and Daoism) which could have taken hold. It became dominant only once the reigning Han emperor was persuaded to embrace it, after which point it remained something like the official ideology of the state for an almost continuous two millennia.

Of course, it is one thing to notice these causal differences ex post, and another thing to guess what changes now could have similar effects in the far future. But it's significant that long stretches of history have been determined by some fairly contingent decisions about values and institutions. In other words, history is not entirely dictated by unstoppable trends beyond the control of particular people or groups.

Some people have suggested that this century could be a time of unusual influence over the long-run future — a time where some contingent decisions could get made which might persist for a very long time. The main argument for this is that we appear to be living during a period of unprecedented and unsustainably rapid technological change. For this reason, Holden Karnofsky argues that we could be living in the most important century ever.

Putting it together

Plausibly, there are some things we can do to reliably and significantly influence how the future turns out over long timescales. Granting this, what makes these things so valuable or important? On this argument, the answer is straightforward: the future is such a vast repository of potential value, that anything we can do now to improve or protect it by even a modest fraction could end up being proportionally valuable. In other words, the stakes look astonishingly high.

That said, in other instances you don't need philosophical arguments about the size of the future to realise we should really do something about potentially catastrophic risks from (for example) nuclear weapons, engineered pandemics, or powerful and unaligned AI. On those fronts, the cumulative risk between now and the time our own grandchildren grow up already looks unacceptably high; and it looks like we can do a lot to reduce the level of risk.

In both cases (risk mitigation and influencing which values get "locked-in"), actions which reliably and lastingly influence the future derive their importance from how much value the future might contain, and also from the fact that the extent and quality of our future could depend on things we do now.

You might consider an analogy to the prudential case — swapping out the entire future for your future, and only considering what would make your life go best. Suppose that one day you learned that your ageing process had stopped. Maybe scientists identified the gene for ageing, and found that your ageing gene was missing. This amounts to learning that you now have much more control over how long you live than previously, because there's no longer a process imposed on you from outside that puts a guaranteed ceiling on your lifespan. If you die in the next few centuries, it will most likely be due to an avoidable accident. What should you do?

To begin with, you might try a bit harder to avoid those avoidable risks to your life. If previously you had adopted a nonchalant attitude to wearing seatbelts and helmets, you'd probably want some new habits. You might also begin to spend more time and resources on things which compound their benefits over the long run. If keeping up your smoking habit is likely to lead to lingering lung problems which are very hard or costly to cure, you might care much more about kicking that habit soon. And you might begin to care more about 'meta' skills, which require some work now but pay off over the rest of your life, like learning how to learn. You might want to set up checks against any slide into madness, boredom, destructive behaviour, or joining a cult — all of which living so long could make more likely.

Once you're confident it's much less likely you'll be killed or permanently disabled by a stupid accident any time soon, you might set aside a long period of time to reflect on how you actually want to spend your indefinitely long life. What should your guiding values be? You have some guesses, but you know your values have changed in the past, and it wouldn't have made sense to set your entire life according to the values of your 12-year-old self. After all, there are so many things you could achieve in your life which your childhood self couldn't even imagine — so maybe the best plans for your life are ones you haven't yet conceived of. So, on hearing that you don't have the ageing gene, you put those safety measures in place, and begin your period of reflection.

When you learned that your future could contain far more value than you originally thought, certain behaviours and actions became far more important than before. Yet, most of those behaviours were sensible things to do anyway. Far from diminishing their importance, this fact should only underline them.

Lastly, note that you don't need to believe that the future is likely to be overwhelmingly good, or good at all, to care about actions that improve its trajectory. If anything, you might think that it's more important to reach out a helping hand to future people living difficult lives, than to improve the lives of future people who are already well-off. If you noticed that you were personally on a trajectory towards ruinous addiction, the appropriate response would likely be to try steering away from that trajectory to the best of your ability, rather than to give up or even choose to end your life.

For instance, some people think that unchecked climate change could eventually make life worse for most people than it is today. Recognising the size of that diminished future only provides an additional reason to care about doing something now to make climate change less bad.

Finally, note that longtermism is normally understood as a claim about what kinds of actions happen to be incredibly good or important at the current margin. One thing that makes these actions look like they stand to do so much good is the fact that so little thought and effort has so far been directed towards improving and safeguarding the lives of future people in sensible, effective ways.

Suppose every member of your tribe will go terribly thirsty without water tonight, but one trip to the river to get water will be enough for everyone. Getting water should be a priority for your tribe, but only while nobody has bothered to get it. It doesn't need to be a priority in the sense that everyone should make it a priority to go off and get water. And once somebody brings water, it wouldn't make sense to keep prioritising it. Similarly, longtermism is rarely understood as an absolute claim to the effect that things would go best if all or even most people started worrying about improving humanity's long-run future.

Neglectedness

Some of the causes sketched above seem exceptionally neglected. This makes them look even more worthwhile, since the most valuable opportunities for working on a problem are typically taken first, followed by slightly less valuable ones, and so on. In a neglected area, then, the best opportunities are likely still available.

For instance, the philosopher Toby Ord estimates that bioweapons, including engineered pandemics, pose about 10 times more existential risk than the combined risk from nuclear war, runaway climate change, asteroids, supervolcanoes, and naturally arising pandemics. The Biological Weapons Convention is the international body responsible for the continued prohibition of these bioweapons — and its annual budget is less than that of the average McDonald's restaurant.

If actions aimed at improving or safeguarding the long-run future are so important, why expect them to be so neglected? One major reason is that future people don't have a voice, because they don't presently exist. It isn't possible to cry out so loudly that your voice reaches back into the past. Obviously enfranchisement makes a difference for how decisions about the enfranchised group get made. When women were gradually enfranchised across most of the world, policies and laws were enacted that made women better off.

But women fought for their vote in the first place by making themselves heard, through protest and writing. By contrast, because future people don't exist yet, they are strictly unable to participate in politics. They are 'voiceless', but they still deserve moral status: they can still be seriously harmed or helped by political decisions.

Another perspective is Peter Singer's concept of an 'expanding moral circle', which is partly an observation about the past and partly a guiding principle for the future.

Singer holds that human society has come to recognise the interests of progressively widening circles. Here's the oversimplified version: for much of history, we lived in small tribal groups, and we were reluctant to help someone from a different tribe. As nation-states formed, we extended our sympathies to conationals, even outside our tribes. Today, our sympathies extend still further: many people act to help complete strangers on the other side of the world, or restrict their diets to avoid harming animals (which would have baffled our earliest ancestors).

And yet, expanding the circle to include nonhuman animals doesn't mark the last possible stage — our circles could also contain people, animals, and other beings living in the (long-run) future.

Another explanation for the neglectedness of future people has to do with 'externalities' — consequences of our actions, positive or negative, that affect people other than ourselves.

If we act out of self-interest, we will ignore externalities and focus on actions that benefit us personally. Think about how a country might invest its funds: for some spending (e.g. infrastructure), the benefits stay within the country, helping only the people who live there. For other spending (e.g. medical research), the benefits could extend to people all over the world (a "positive externality"). But these benefits are shared, while the costs remain tied to a single country — which means it may not be in the country's self-interest to invest in "global public goods" like research. Conversely, countries may overinvest in activities that benefit them but impose costs on other countries ("negative externalities", like the impacts of pollution).

The externalities from actions which stand to influence the long-run future not only extend to people living in other countries, but also to people living in the future who are not alive today. As such, the positive externalities from improvements to the long-run future are enormous, and so are the negative externalities from increasing risks to that future. If we think of governments as self-interested decision makers looking after their present-day citizens, we won't be surprised to see them neglect long-term actions, even if the total benefits of longtermist decisions stood to massively outweigh their total costs. Such actions aren't just global public goods, they're intergenerational global public goods.

Going Further

Other framings

Above, I described a broadly consequentialist argument for longtermism. In short: the future holds a vast amount of potential, and there are (plausibly) things we can do now to improve or safeguard that future. Given its potential size, even small proportional changes to how well the future goes could amount to enormous total changes. Further, we should expect some of the best such opportunities to remain open, because improving the long-run future is a public good, and because future people are disenfranchised. Because of the enormous difference such actions can make, they would be very good things to do.

Note that you don't need to be a card-carrying consequentialist to believe that long-term-oriented actions are very good things to do, as long as you care about consequences at all. And any reasonable ethical view should care about consequences to some degree.

That said, there are other ways of framing longtermism. One argument invokes obligations to tradition and the past. Our forebears made sacrifices in the service of projects they hoped would far outlive them — such as advancing science or opposing authoritarianism. Many things are only worth doing in light of a long future, during which people can enjoy what you achieved. No cathedrals would have been built if the religious sects which built them didn't expect to be around long after they were completed. As such, to drop the ball now wouldn't only eliminate all of the future's potential — it would also betray those efforts from the past.

Another framing invokes 'intergenerational justice'. Just as you might think that it's unjust that so many people live in extreme poverty when so many other people enjoy outrageous wealth, you might also think that it would be unfair or unjust that people living in the far future were much worse off than us, or unable to enjoy things we take for granted. Just as there's something we can do to make extremely poor people better off at comparably insignificant cost to ourselves, there's something we can do to improve the lives of future people. At least, there are things we can do now to preserve valuable resources for the future. The most obvious examples are environmental: protecting species, natural spaces, water quality, and the climate itself. Interestingly, this thought has been operationalised as the 'intergenerational solidarity index' — a measure of "how much different nations provide for the wellbeing of future generations".

Finally, from an astronomical perspective, life looks to be exceptionally rare. We may very well represent the only intelligent life in our galaxy, or even the observable universe. Perhaps this confers on us a kind of 'cosmic significance' — a special responsibility to keep the flame of consciousness alive beyond this precipitous period of adolescence for our species.

Other definitions and stronger versions

By 'longtermism', I had in mind the fairly simple idea that we may be able to positively influence the long-run future, and that aiming to achieve this is extremely good and important.

Some 'isms' are precise enough to come with a single, undisputed, definition. Others, like feminism or environmentalism, are broad enough to accommodate many (often partially conflicting) definitions. I think longtermism is more like environmentalism in this sense. But it might still be worth looking at some more precise definitions.

The philosopher Will MacAskill suggests the following:

Longtermism is the view that: (i) Those who live at future times matter just as much, morally, as those who live today; (ii) Society currently privileges those who live today above those who will live in the future; and (iii) We should take action to rectify that, and help ensure the long-run future goes well.

This means that longtermism would become false if society ever stopped privileging present lives over future lives.

Other minimal definitions don't depend on changeable facts about society. Hilary Greaves suggests something like:

The view that the (intrinsic) value of an outcome is the same no matter what time it occurs.

If that were true today, it would be true always.

Some versions of longtermism from the proper philosophical literature go further than the above, and make some claim about the high relative importance of influencing the long-run future over other kinds of morally relevant activities.

Nick Beckstead's 'On the Overwhelming Importance of Shaping the Far Future' marks the first serious discussion of longtermism in analytic philosophy. His thesis:

From a global perspective, what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions, billions, and trillions of years.

More recently, Hilary Greaves and Will MacAskill have made a tentative case for what they call 'strong longtermism'.

Axiological strong longtermism: In the most important decision situations facing agents today, (i) Every option that is near-best overall is near-best for the far future; and (ii) Every option that is near-best overall delivers much larger benefits in the far future than in the near future.

Deontic strong longtermism: In the most important decision situations facing agents today, (i) One ought to choose an option that is near-best for the far future; and (ii) One ought to choose an option that delivers much larger benefits in the far future than in the near future.

Roughly speaking, 'axiology' has to do with which outcomes or options are best, and 'deontology' has to do with what we ought to do. That's why Greaves and MacAskill give two versions of strong longtermism — it could be the case that actions which stand to influence the very long-run future are best in terms of their expected outcomes, but not the case that we must always take them (you might think that would be unreasonably demanding).

The idea being expressed is therefore something like the following: in cases such as figuring out what to do with your career, or where to spend money to make a positive difference, you'll often have a lot of choices. Perhaps surprisingly, the best options are pretty much always among the (relatively few) options directed at making the long-run future go better. Indeed, the reason they are the best options overall is almost always because they stand to improve the far future by so much.

Please note that you do not have to agree with strong longtermism in order to believe in longtermism! Similarly, it's perfectly fine to call yourself a feminist without buying into a single, precise, and strong statement of what feminism is supposed to be.

Objections

Some people find longtermism hard to get intuitively excited about. There are many pressing problems that affect people now, and we have a lot of robust evidence about how best to address them. In many cases, those solutions also turn out to be remarkably cost-effective. On the other hand, longtermism holds that other actions may be just as good, if not better, because they stand to influence the very long-run future. By their nature, these activities are backed by less robust evidence, and primarily protect the interests of people who don't yet exist. Could these more uncertain, future-focused activities really be the best option for someone who wants to do good?

If a proponent of longtermism wants to defend their approach, they first need to make a convincing case for why this intuition — that we ought to focus on present-focused work with strong evidentiary backing — is common, despite being misguided. Then they need to respond to the particular objections raised by skeptics of longtermism. I will leave the first question open, and focus on the particular objections.

Person-affecting views

'Person-affecting' moral views try to capture the intuition that an act can only be good or bad if it is good or bad for someone (or some people). In particular, many people think that while it may be good to make somebody happier, it can't be as good to literally make an extra (equivalently) happy person. Person-affecting views make sense of that intuition: making an extra happy person doesn't benefit that person, because in the case where you didn't make them, they don't exist, so there's no person who could have benefited from being created.

One upshot of person-affecting views is that failing to bring about the vast number of people which the future could contain isn't the kind of tragedy which longtermism makes it out to be, because there's nobody to complain of not having been created in the case where you don't create them — nobody is made worse off. Failing to bring about those lives would not be morally comparable to ending the lives of an equivalent number of actually existing people; rather, it would be an entirely victimless crime.

Person-affecting views have another, subtler, consequence. In comparing the longer-run effects of some options, you will be considering their effects on people who are not yet born. But it's almost certain that the identities of these future people will be different between options. That's because the identity of a person depends on their genetic material, which depends (among other things) on the result of a race between tens of millions of sperm cells. So basically anything you do today is likely to have 'ripple effects' which change the identities of nearly everyone on Earth born after, say, 2050.

Assuming the identity of nearly every future person is different between options, then it cannot be said of almost any future person that one option is better or worse than the other for that person. But again, person-affecting views claim that acts can only be good or bad if they are good or bad for particular people. As a result, they find it harder to explain why some acts are much better than others with respect to the long-run future.

Perhaps this is a weakness of person-affecting views — or perhaps it reflects a real complication with claims about 'improving' the long-run future. To give an alternative to person-affecting views, the longtermist still needs to explain how some futures are better than others, when the outcomes contain totally different people. And it's not entirely straightforward to explain how one outcome can be better, if it's better for nobody in particular!

Progress and self-contradiction

Some people interpret longtermism as implying that we should ignore pressing problems in favour of trying to foresee longer-term problems. In particular, they see longtermism as recommending that we try to slow down technological progress to allow our 'wisdom' to catch up, in order to minimise existential risk. However, these people point out, virtually all historical progress has been made by solving pressing problems, rather than trying to peer out beyond those problems into the further future. And many technological risks are mitigated not by slowing progress on their development, but by continuing progress: inventing new fixes. For instance, early aviation was expensive and accident-prone, and now it's practically the safest mode of travel in the United States.

Similarly, longtermists sometimes join with 'degrowth' environmentalists in sounding the alarm about 'unsustainable' practices like resource extraction or pollution. But this too underestimates the capacity for human ingenuity to reliably outrun anticipated disasters. A paradigm example is Paul Ehrlich's 1968 The Population Bomb, which predicted that "[i]n the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now". Julian Simon, a business professor, made a wager with Ehrlich that "the cost of non-government-controlled raw materials (including grain and oil) will not rise in the long run." Ehrlich chose five metals, and the bet was made in 1980 to be resolved in 1990. When 1990 came around, the price of every one of these metals had fallen. In this way, the modern world is a kind of growing patchwork of temporary solutions — but that's the best and only way things could be.

By discouraging this kind of short-term problem solving, longtermism will slow down the kind of progress that matters. Further, by urging moderation and belt-tightening in the name of 'sustainability', longtermism is more likely to harm, rather than benefit, the longer-run future. In short, longtermism may be self-undermining.

This objection shares a structure with the following objection to utilitarianism: utilitarianism says that the best actions are those actions which maximise well-being. But if we only acted on that basis, without regard for values like truth-telling, autonomy, and rights — then people would become distrustful, anxious, and insignificant. Far from maximising well-being, this would likely make people worse off. Therefore, utilitarianism undermines itself.

Without assuming either longtermism or utilitarianism are true, it should be clear that this kind of 'self-undermining' argument doesn't ultimately work. If the concrete actions utilitarianism seems to recommend fail to maximise well-being on reflection, then utilitarianism doesn't actually recommend them. Analogously, if the actions apparently recommended by longtermism stand to make the long-run future worse, then longtermism doesn't actually recommend them. At best, what these self-undermining arguments show is that the naive versions of the thesis, or naive extrapolations from them, need to be revised. But maybe that's still an important point, and one worth taking seriously.

Risk aversion and recklessness

As explained, many efforts at improving the long-run future are highly uncertain, deriving their 'expected' value from multiplying small probabilities by enormous payoffs. This is especially relevant for existential risks, where it may be good in expectation to move large amounts of money and resources towards mitigating those risks, even when the risks themselves are very unlikely to transpire. A natural response is to call this 'reckless': it seems like something's gone wrong when what we ought to do is determined by outcomes with very low probabilities.

Demandingness

Some critics point out that if we took strong versions of longtermism seriously, we would end up morally obligated to make real sacrifices. For instance, we might forego some very large economic gains from risky technologies in order to build and implement ways of making them safer. Moreover, these sacrifices would have to be shared by everyone — including people who aren't convinced by longtermism. On this line of criticism, the sacrifices demanded of us by longtermism are unreasonably big; and it certainly can't be right that everyone is forced to feel their effects.

To be sure, there are some difficult hypothetical questions about precisely how much a society should be prepared to sacrifice to ensure its own survival, or look after the interests of future members. But we are so far away from making those kinds of demanding sacrifices in the actual world that such questions look mostly irrelevant. As the philosopher Toby Ord points out, "we can state with confidence that humanity spends more on ice cream every year than on ensuring that the technologies we develop do not destroy us". Meeting the 'ice cream bar' for spending wouldn't be objectionably demanding at all, but we're not even there yet.

Epistemic uncertainty: do we even know what to do?

If we were confident about the long-run effects of our actions, then probably we should care a great deal about those effects. However, in the real world, we're often close to entirely clueless about the long-run effects of our actions. And we're not only clueless in the 'symmetric' sense that I explained when considering whether you should help the elderly person across the road. Sometimes, we're clueless in a more 'complex' way — namely, when there are reasons to expect our actions to have systematic effects on the long-run future, but it's very hard to know whether those effects are likely to be good or bad. Perhaps it's so hard to predict long-run effects that the expected value of our options isn't much determined by the long-run future at all. Relatedly, perhaps it's just far too difficult to reliably influence the long-run future in a positive way, making it better to focus on present-day impact.

Alexander Berger, co-CEO of Open Philanthropy, makes the point another way:

"When I think about the recommended interventions or practices for longtermists, I feel like they either quickly become pretty anodyne, or it’s quite hard to make the case that they are robustly good".

One way of categorising longtermist interventions is to distinguish between 'broad' and 'targeted' approaches. Broad approaches focus on improving the world in ways that seem robustly good for the long-run future, such as reducing the likelihood of a conflict between great powers. Targeted approaches zoom in on more specific points of influence, such as by focusing on technical research to make sure the transition to transformative artificial intelligence goes well. On one hand, Berger is saying that you don't especially need the entire longtermist worldview to see that 'broad' interventions are sensible things to do. On the other hand, the more targeted interventions often look like they're focusing on fairly speculative scenarios — we need to be correct on a long list of philosophical and empirical guesses for such interventions to end up being important.

I think these are some of the strongest worries about longtermism. How might a longtermist respond?

  • Because longtermism is a claim about what's best to do on the margin, it only requires that we can identify some cases where the long-run effects of our actions are predictably good — there's no assumption about how predictable long-run effects are in general. And it does seem hard to deny that some things we can do now can predictably influence the long-run future in fairly straightforward, positive ways. The best examples may be efforts to mitigate existential risk, such as boosting the tiny amount of funding the Biological Weapons Convention currently receives.
  • Alternatively, the longtermist could concede that we are currently very uncertain about how concretely to influence the long-run future, but this doesn't mean giving up. Instead, it could mean turning our efforts towards becoming less uncertain, by doing empirical and theoretical research.

Totalitarianism

On the logic of longtermism, small sacrifices now can be justified by much larger anticipated benefits over the long-run future. Since the benefits could be very large, some people object that there could be no ceiling to the sacrifices longtermism would recommend we make today. Further, this line of objection continues, the importance longtermism places on safeguarding the long-term future suggests that certain freedoms might justifiably be curtailed in order to guide us through this especially risky period of the human story. But that sounds similar to historical justifications of totalitarianism. Since totalitarianism is very bad, this counts as a mark against longtermism.

The liberal philosopher Isaiah Berlin pithily summarises this kind of argument:

To make mankind just and happy and creative and harmonious forever - what could be too high a price to pay for that? To make such an omelette, there is surely no limit to the number of eggs that should be broken.

One response is to point out that the totalitarian regimes of the past failed so horrendously not because of their willingness to make sacrifices for a better future, but because they were wrong in a much more straightforward way — wrong that revolutionary violence and actual totalitarianism in practice make the world better in the short or long term.

Another response is to note that much discussion within longtermism is focused on reducing the likelihood of great power conflict, improving institutional decision-making, and spreading good (liberal) political norms — in other words, securing an open society for our descendants.

But perhaps it would be arrogant to entirely dismiss the worry that longtermist ideas could be used to justify political harms in the future, if those ideas end up getting twisted or misunderstood. We know that even very noble aspirations can eventually lead to terrible consequences in the hands of normal, fallible people. If that worry is legitimate, it becomes very important to handle the idea of longtermism with care.

Conclusion

Arthur C. Clarke once wrote: “Two possibilities exist: either we are alone in the Universe or we are not. Both are equally terrifying”. Similarly, two broad possibilities exist for the long-run future: either humanity flourishes far into the future, or it does not. The second possibility is terrifying, and the first is so rarely discussed in normal conversation that it's easy to forget it's an option.

But both possibilities make clear the importance of safeguarding and improving humanity's long-run future, because there are things we can do now to make the first possibility more likely.

Very important ethical ideas don't come around often. Think of feminism, environmentalism, socialism, or neoliberalism. None emerge from a vacuum — they all grow from deep historical roots. Then fringe thinkers, working together or alone, systematise the idea. Books are published — A Vindication of the Rights of Woman, Silent Spring, Capital, The Road to Serfdom. The ideas percolate through wider and wider circles, and reach a kind of critical mass. Then, for better or worse, the ideas lead to change.

Longtermism could be one of those ideas. If that's true, then learning about it becomes especially important. I hope I've given a fair impression of what longtermism is, plus key motivations and objections.

Further Reading