The Long, Long, Long Run

Can we think so far into the future that we end up giving short shrift to the present?

By Rob Reich and Lorenzo Manuali

Tagged: Book Reviews, Effective Altruism, Longtermism, Philanthropy, William MacAskill

What We Owe the Future by William MacAskill • Basic Books • 2022 • 352 pages • $32

Every writer of nonfiction aims at persuasive communication. This is especially true for philosophers, who are not merely trying to interpret or explain some aspect of the world but, as Marx famously put it, to change it. What, however, should we make of the philosopher who is driven by public relations concerns and compromises the logic of an argument in order to make it palatable, rather than ridiculous or offensive, to a wider audience? Should philosophers be salespeople?

This is the question that confronts us with philosopher William MacAskill’s What We Owe the Future. The book landed in the summer with a blast of coverage—the cover of Time magazine, a profile in The New Yorker, Twitter praise from Elon Musk—that amounted to the splashiest arrival of a philosophy book in recent memory. The goal of the book is straightforward. MacAskill aims to introduce us to the idea of “longtermism,” an argument for why we currently living humans should act with extraordinary concern for the very far future. MacAskill summarizes longtermism in the following simple and catchy argument: “Future people count. There could be a lot of them. We can make their lives go better.”

The first two parts of the argument are simple and unassailable. No one assigns zero value to future generations. Or, put differently, everyone cares to some degree that humanity not perish. And if we look into the far future, barring extinction, there will indeed be a great many future people. MacAskill illustrates this point visually with four pages of small stick figures, each representing ten billion people. Everyone who has ever lived (around 100 billion people) is represented by just ten of these figures. Using ordinary estimates, MacAskill argues that the future of humanity could fill 20,000 pages of these figures. The future of humanity could very well be astronomically huge.
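A rough, back-of-the-envelope version of that scale (the figures-per-page count here is our own assumption for illustration; the book does not fix it in the text):

\[
\underbrace{10^{10}}_{\text{people per figure}} \times \underbrace{1{,}000}_{\text{figures per page (assumed)}} \times \underbrace{20{,}000}_{\text{pages}} \approx 2 \times 10^{17} \ \text{future people},
\]

or, on that assumption, roughly two million times the number of people who have ever lived.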

Since no one believes that future generations count for nothing, and since we can all agree that, sure, there could be lots of future generations, MacAskill focuses much of his argument on the third point. Is it possible for current humans to have a positive effect on the lives of future humans, especially those hundreds of generations from now?

This is the most interesting part of the book. To persuade the reader of the agency we have today to affect the far future, MacAskill makes two basic claims. First, history (and particularly moral progress) is not deterministic or teleological but highly contingent. That is, history is neither set on a particular course by larger forces beyond human control, nor imbued with some purpose toward which it is inexorably striving. But the mere fact of contingency doesn’t mean we can affect the future in a predictable and tractable way. Contingency allows for at least two possibilities: the path of history could be filled with randomness, or human agency could alter the course of human affairs in predictable ways. MacAskill’s second claim is that this latter view is correct. He believes that, while some historical events may be inevitable, others depend on “a small number of specific actions”—and it is those highly contingent ones that he wants us to focus on. As an example, he points to the moral claims of abolitionists, rather than some broader economic calculus, as leading to the end of slavery in the British Empire—and contributing, ultimately, to abolition around the world.

MacAskill is a professor at Oxford and, along with philosopher Peter Singer, is generally acknowledged as one of the intellectual progenitors and most influential promoters of effective altruism (EA)—the idea that, informed by the best available evidence, individuals should strive to maximize the good they do in the world. Effective altruists are known for distinctive views about philanthropy and career choices, such as “earning to give,” which is the idea that one should choose a high-paying job in the financial or tech sector, rather than a low-paying one in a do-gooder nonprofit, in order to donate much of one’s income to effective charities. Unlike most academic philosophers, he is also entrepreneurial and practical in orientation. MacAskill is the co-founder of several organizations, including the Centre for Effective Altruism and the career-advice nonprofit 80,000 Hours. And he is involved in the philanthropic side of EA, including, until recently, as part of the core team of fallen crypto billionaire Sam Bankman-Fried’s FTX Future Fund. MacAskill’s new book puts him at the intellectual vanguard of changing the EA community’s focus from global health and well-being to the long-term future of humanity.

It’s a trivial observation that predicting exactly how our actions will affect the future is extremely difficult. This is true when we think about what might happen next year, let alone thousands of years from now. But, MacAskill argues, the key is to focus on a few crucial considerations so potentially monumental that they yield extremely high “expected value” (loosely, the probability of an outcome multiplied by its value or magnitude) for our actions today.
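To see how this arithmetic can swamp everything else, consider a stylized comparison (the numbers are our illustrative assumptions, not MacAskill’s): a tiny chance of an astronomically large payoff can dominate a near-certain but modest one.

\[
\underbrace{10^{-4}}_{\text{chance of success}} \times \underbrace{10^{16}}_{\text{future lives at stake}} = 10^{12} \ \text{lives in expectation},
\]

against, say, $10^{6}$ lives improved with near certainty by conventional aid. On these assumed numbers, the speculative far-future bet comes out ahead by six orders of magnitude, a feature of longtermist reasoning we return to below.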

What are these considerations? One is what MacAskill calls “value lock-in.” MacAskill believes that we are living in a plastic and consequential moment in history, when moral values can still change easily before they harden and persist into the future. Even worse than mere persistence, values could become “locked in,” a condition MacAskill regards as inherently dangerous: There is little reason to think that our values are the best ones. MacAskill’s chief example is the possibility that powerful forms of artificial intelligence could develop in ways that undermine human interests and even threaten to subjugate humanity to machines. Unless we act to develop AI systems that are safe for humanity, current efforts at bringing about artificial general intelligence—AI that could learn and perform the same tasks as humans, just as well or even better—could place humanity on an irreversible path, with values antithetical to humanity locked into machine intelligence. Investments in AI safety and harm reduction, then, yield high expected value: They could affect the entire future of humanity.

Another consideration involves interventions that could stave off human extinction. What existential risks does humanity face? MacAskill offers up some of the usual suspects, such as nuclear war, and he includes bioterrorism (e.g., engineered pandemics) and asteroid impacts. Investing funds and directing human talent toward preventing extinction events have high expected value, even if the probability of those events is very low. After all, preventing extinction preserves the possibility of good lives for untold numbers of people.

The risk of societal collapse that falls short of extinction is another consideration. Although MacAskill is bullish about humanity’s ability to rebuild after a global societal collapse (mostly due to the resources that would remain even after a catastrophe, such as libraries), one unusual upshot of his argument is an additional reason not to exhaust our supply of fossil fuels: We must preserve a sufficient amount so that, facing collapse, humanity can restart the Industrial Revolution.

Humans, MacAskill argues, have the capacity to affect value lock-in, make meaningful efforts to avoid extinction, and diminish the possibility of societal collapse. MacAskill concludes that we should make decisions and allocate resources in accordance with longtermism, both at the individual level (making certain career choices, engaging in political activism, spreading longtermist ideas, and making donations) and the societal level (writing and passing longtermist public policy). MacAskill aims to spur action, writing that “figures from ‘history’ . . . can seem different from you and me. But they weren’t different: they were everyday people, with their own problems and limitations, who nevertheless decided to try to shape the history they were a part of, and who sometimes succeeded. You can do this, too. Because if not you, who? And if not now, when?”

Much here is compelling. The argument has force, and given the stakes involved, there’s never any doubt about the importance of the topic. But though powerful, the logic of longtermism risks being totalizing, swamping any effort we might make to promote the interests of currently living people. What’s more, need we adopt a longtermist perspective in order to reach some of these conclusions? Almost certainly not.

To see why, it helps to examine MacAskill’s carefully salesy construction of his argument and root it in his wider scholarship. In What We Owe the Future, MacAskill argues that he is “just claiming that future people matter significantly. Just as caring more about our children doesn’t mean ignoring the interests of strangers, caring more about our contemporaries doesn’t mean ignoring the interests of our descendants.” If the case for longtermism amounts to something this commonsensical—even trivial, one might worry—do we really need a new ethical view to tell us that we should care about future people? We can care very deeply about the well-being of humans in the future—about moral progress, avoiding extinction, and preventing societal collapse—without resorting to longtermism. For generations, philosophers, economists, and most ordinary human beings have held that future people deserve consideration in moral decision-making. As other reviewers have noted, there’s a vast literature on the demands of intergenerational justice. Climate activism among people of all ages—with or without children—demonstrates that living people take future people to merit significant moral weight. Put more strongly, given the extraordinary advances in AI and bioengineering and the familiar challenges posed by climate change, all of which threaten humanity’s very existence in some way, caring about the current generation and near future provides enough argumentative resources to generate the same conclusions about personal and policy changes reached by MacAskill.

So what’s the practical difference between commonsense views and the alleged “moral revolution” that MacAskill trumpets with longtermism? MacAskill doesn’t give an adequate answer to this question. However, some of his recent academic work does lead us to different practical conclusions. In a 2019 paper, MacAskill and another philosopher, Hilary Greaves, offer a case for what they call “strong longtermism.” This stance argues that for the purpose of guiding action, as MacAskill and Greaves write, “we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.” (They removed this section in a later draft of the paper, but not for philosophical reasons: MacAskill told a reporter that they were afraid it was “misleading” for readers.) If adopted, MacAskill and Greaves note with dry understatement, “much of what we prioritise in the world today would change.” 

So indeed it would. This position would counsel against investing time, talent, and resources in anything that primarily benefits currently living people, absent a showing that it has more powerful effects on the far future. Contrary to the views of early effective altruists, there’s little room for buying anti-malaria bed nets if “the most important decision situations facing agents” require taking actions that are “near-best for the far future.” The project of avoiding preventable disease and alleviating global poverty that was so core to effective altruism is entirely recast, and indeed tossed out the window, by strong longtermism. Global poverty, as some odiously put it, is nothing more than a “rounding error” when compared to the importance of avoiding an AI dystopia. A moral revolution penned by philosophers and funded by technologists concludes that the morally best action for a talented young person is not to work on global poverty but to become a philosopher or a technologist and work on AI (or to make money to donate to AI-related research and causes).

In an appendix to What We Owe the Future, MacAskill acknowledges that his book departs from strong longtermism. He distinguishes “longtermism, the view that positively influencing the longterm future is one of the key moral priorities of our time” from “strong longtermism, the view that positively influencing the longterm future is the moral priority of our time—more important, right now, than anything else.” It is only the former that he embraces in the book.

Intellectual consistency would require some deeper explanation of what is afoot here, given the stark differences between the two views. Either MacAskill endorses strong longtermism, a genuinely radical position that few would be likely to embrace, or he endorses a watered-down version of longtermism that, as we suggested earlier, delivers commonsense conclusions that do not require such intellectual trappings. Our conclusion is that MacAskill was driven not by intellectual considerations but by branding and PR concerns. This is perhaps understandable, but not intellectually forthright. Or, we dare say, respectable.

There is some evidence that MacAskill’s writing was informed by a sensitivity to public relations rather than an honest accounting of the logical implications of his philosophical argument. As scholar Émile P. Torres has documented, effective altruists are obsessed with their brand. Torres’s analysis of scores of effective altruist discussion forum posts concerning the movement’s marketing and PR efforts makes this clear. For example, Torres cites MacAskill discussing the long, drawn-out process of choosing the name “effective altruism” with an eye toward its public appeal.

The most charitable way to interpret this is that, at its core, effective altruism—including its longtermist strain—aims to be a community and social movement. MacAskill concludes What We Owe the Future with the idea that “[t]here’s no better time for a movement that will stand up . . . for all those who are yet to come.” This aspiration is laudable. Effective altruism’s earlier aim to make clear, evidence-based recommendations for how individuals can promote human well-being, primarily by alleviating desperate poverty and preventable illness, deserves praise. See the well-known work of GiveWell, which assesses the effectiveness of charities in a manner comparable to a financial return on investment—that is, human welfare improvement per dollar donated. (Note: co-author Reich served on GiveWell’s board from 2013 to 2019.) GiveWell has changed the giving habits of many and influenced organized philanthropy.

Strong longtermism is different. As we have stated, it suggests that any potential course of action should be evaluated on the basis of how it could affect the very distant future, largely ignoring its effects on currently living humans. When you hold that view and are writing a book to stimulate a moral revolution, philosophical precision comes second to PR considerations. In turn, this can lead to an attempt to mislead—even deceive—the reader.

 In a 2019 post to an EA forum, MacAskill talks about how the term “longtermism” could apply to three different concepts. He writes that it could mean:

(i) longtermism, which designates an ethical view that is particularly concerned with ensuring long-run outcomes go well;

(ii) strong longtermism, which . . . is the view that long-run outcomes are the thing we should be most concerned about; 

(iii) very strong longtermism, the view on which long-run outcomes are of overwhelming importance.

MacAskill comes to the conclusion that longtermism without a modifier should apply to (i)—and not (ii)—because “[t]he first concept is intuitively attractive to a significant proportion of the wider public (including key decision-makers like policymakers and business leaders); my guess is that most people would find it intuitively attractive. In contrast, the second concept is widely regarded as unintuitive, including even by proponents of the view.” Besides, “we’d achieve most of what we want to achieve” if most people accept (i). It is this version of (weak) longtermism he offers in What We Owe the Future. While we admire the openness with which effective altruists conduct these conversations, it seems that MacAskill is more concerned with longtermism’s public-facing packaging than with its intellectual integrity.

The logical structure of strong longtermism is totalizing, sweeping aside any effort to judge action on the basis of predicted effects on currently living people. What We Owe the Future is the marketable version of the argument, watered down to common sense. “I see the term ‘longtermism’ creating value,” MacAskill wrote in the same blog post, “if it results in more people taking action to help ensure that the long-run future goes well.” It should be no surprise, then, that upon examination, What We Owe the Future’s longtermism fails to offer unique guidance for how we ought to weigh the interests of future people. It’s not meant to be philosophically precise. It hides its radical ideas and implications. Instead, MacAskill aims for his public-facing philosophy to feel comfortable, even familiar, to the reader.

To be clear: It’s not a problem for philosophical positions to have PR versions. Nor do we believe that philosophy should remain locked up in the ivory tower—quite the contrary. But PR versions of philosophical positions are a problem when they hide the actual logic of the underlying argument. We thus can’t avoid the conclusion that the longtermism of What We Owe the Future is a misshapen mishmash of brand management and philosophical argumentation. Strong longtermism is hidden from view—though not disavowed—in the book. Weak longtermism is offered up instead, but its conclusions tend toward the commonsensical and do not depend on longtermist claims. 

Despite all this, What We Owe the Future shines a spotlight on high-stakes considerations that are too often underdiscussed—value lock-in, AI safety and harm, extinction risk—viewed from the perspective of the future of humanity. In that sense, longtermists (the community) have performed a useful service, while longtermism (the philosophical view) has not. Is the far future of humanity the only perspective that matters? No, far from it. Do we need longtermism for these concerns to matter? Not in the slightest. Still, MacAskill has rallied attention to these concerns, and that’s a useful thing.

Rob Reich is a professor of political science at Stanford University. He is the author of Just Giving: Why Philanthropy Is Failing Democracy and How It Can Do Better (2018), and he serves as director of the Center for Ethics in Society and co-director of the Center on Philanthropy and Civil Society.

Lorenzo Manuali is a pre-doctoral fellow at Stanford University’s Center for Ethics in Society.
