Is Existential Risk a Useless Category? Could the Concept Be Dangerous?

post by philosophytorres · 2020-03-31T16:55:10.210Z · score: -8 (19 votes) · EA · GW · 25 comments

Please don't just downvote this. I welcome comments, criticisms, feedback, and so on. Where am I wrong? Do you disagree that utopianism has, historically, led to bad outcomes? Do you think that S2 really is as bad as S1? Is Olle Häggström's scenario or Pinker's statement off-base?

https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_33466a921b2646a7a02482acb89b07b8.pdf

This paper offers a number of reasons for why the Bostromian notion of existential risk is useless. On the one hand, it is predicated on a highly idiosyncratic techno-utopian vision of the future that few would find appealing. On the other, its “worst-case outcomes” for humanity lump together outcomes ranging from the atrocious to the benign. What matters, on Bostrom’s view, is not human extinction per se, but any event that would permanently prevent current or future people from attaining technological Utopia. I then consider the question of whether the Bostromian paradigm could be dangerous. My answer is affirmative: this perspective combines utopianism and utilitarianism, which has historically proven to be a highly combustible mix. When the ends justify the means, and when the end is paradise, groups or individuals may feel justified in contravening any number of moral constraints on human behavior, including those that proscribe violent actions. Although I believe that studying low-probability, high-impact risks is extremely important, I urge scholars to abandon the Bostromian concept of existential risk.

25 comments

Comments sorted by top scores.

comment by ælijah · 2020-04-01T17:15:40.500Z · score: 12 (9 votes) · EA(p) · GW(p)

I tried to make this comment before, but for some reason it isn't visible, so I'm reposting it.

I think this is an interesting paper. I gave it an upvote.

One comment: It is misleading to say that on total utilitarianism + longtermism "the axiological difference between S1 and S2 is negligible". It may be negligible compared to the difference between either and utopia, but that doesn't mean it's negligible in absolute terms. Saying that the disvalue of a single terrible thing happening to one person is "negligible" compared to the total disvalue in the world over the course of ten years doesn't necessarily mean one is callous about the former.

comment by MichaelStJules · 2020-04-02T08:26:13.544Z · score: 10 (4 votes) · EA(p) · GW(p)

Besides the risks of harm by omission and of focusing on the wrong things, which I agree with others here is a legitimate place for debate in cause prioritization, there are risks of contributing to active harm, which is a slightly different concern (not fundamentally different for a consequentialist, although it might carry greater reputational costs for EA). I think this passage is illustrative:

For example, consider the following scenario from Olle Häggström (2016); quoting him at length:
"Recall … Bostrom’s conclusion about how reducing the probability of existential catastrophe by even a minuscule amount can be more important than saving the lives of a million people. While it is hard to find any flaw in his reasoning leading up to the conclusion [note: the present author objects], and while if the discussion remains sufficiently abstract I am inclined to accept it as correct, I feel extremely uneasy about the prospect that it might become recognized among politicians and decision-makers as a guide to policy worth taking literally. It is simply too reminiscent of the old saying “If you want to make an omelet, you must be willing to break a few eggs,” which has typically been used to explain that a bit of genocide or so might be a good thing, if it can contribute to the goal of creating a future utopia. Imagine a situation where the head of the CIA explains to the US president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders."
Häggström offers several reasons why this scenario might not occur. For example, he suggests that “the annihilation of Germany would be bad for international political stability and increase existential risk from global nuclear war by more than one in a million.” But he adds that we should wonder “whether we can trust that our world leaders understand [such] points.” Ultimately, Häggström abandons total utilitarianism and embraces an absolutist deontological constraint according to which “there are things that you simply cannot do, no matter how much future value is at stake!” But not everyone would follow this lead, especially when assessing the situation from the point of view of the universe; one might claim that, paraphrasing Bostrom, as tragic as this event would be to the people immediately affected, in the big picture of things—from the perspective of humankind as a whole—it wouldn’t significantly affect the total amount of human suffering or happiness or determine the long-term fate of our species, except to ensure that we continue to exist (thereby making it possible to colonize the universe, simulate vast numbers of people on exoplanetary computers, and so on).

I think you don't need Bostromian stakes or utilitarianism for these types of scenarios, though. Consider torture, collateral civilian casualties in war, or the bombings of Hiroshima and Nagasaki. Maybe you could argue in many cases that more civilians will be saved, so the trade seems more comparable: actual lives for actual lives, not actual lives for extra lives (extra in number, not in identity, on a wide person-affecting view). But it seems act consequentialism is susceptible to making similar trades generally.

I think one partial solution is to just not promote act consequentialism publicly unless you preface it with important caveats. Another is to correct naive act consequentialist analyses in high-stakes scenarios as they come up (as Phil is doing here, but also in response to individual comments).

comment by zdgroff · 2020-03-31T20:35:16.584Z · score: 10 (11 votes) · EA(p) · GW(p)

I think the concerns about utopianism are well-placed and merit more discussion in effective altruism. I'm sad to see the post getting downvoted.

comment by willbradshaw · 2020-03-31T21:37:25.043Z · score: 29 (15 votes) · EA(p) · GW(p)

I downvoted it based on things like calling John Halstead and Nick Beckstead white supremacists (based on extremely shaky argumentation) and apparently taking it as obvious that rejecting person-affecting views is morally monstrous.

I might make longer, more substantive comments later, but there are reasons to downvote this other than wanting to squash discussion of fanaticism.

comment by Halstead · 2020-04-01T09:12:34.755Z · score: 33 (10 votes) · EA(p) · GW(p)

It may be noted that in the thing I wrote on climate change I don't actually defend long-termism or even avow belief in it.

For those who find it confusing that I, at best a mid-table figure in EA, get dragged into this stuff, the reason is that I once publicly criticised a post on Pinker that Phil wrote on Facebook (my critique was about three sentences). Phil has since then borne a baffling and persistent grudge against me, including persistently sending me messages on Facebook, name-checking me while making some rape allegations against some famous person I have never heard of, and then calling me a white supremacist. Hopefully, this gives some insight into Phil's psychology and what is actually driving posts such as the one linked to here.

comment by philosophytorres · 2020-04-02T22:50:05.644Z · score: 3 (2 votes) · EA(p) · GW(p)

John: Do I have your permission to release screenshots of our exchange? You write: "... including persistently sending me messages on Facebook." I believe that this is very misleading.

comment by Halstead · 2020-04-03T08:57:26.679Z · score: 6 (3 votes) · EA(p) · GW(p)

please do

comment by trammell · 2020-03-31T21:46:58.482Z · score: 28 (14 votes) · EA(p) · GW(p)

Thanks for pointing that out!

For those who might worry that you're being hyperbolic, I'd say that the linked paper doesn't say that they are white supremacists. But it does claim that a major claim from Nick Beckstead's thesis is white supremacist. Here is the relevant quote, from pages 27-28:

"As he [Beckstead] makes the point,

>> saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards, at least by ordinary enlightened humanitarian standards, saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.

This is overtly white-supremacist."

The document elsewhere clarifies that it is using the term white supremacism to refer to systems that reinforce white power, not only to explicit, conscious racism. But I agree that this is far enough from how most people use the terminology that it doesn't seem like a very helpful contribution to the discussion.

comment by willbradshaw · 2020-03-31T22:34:06.888Z · score: 35 (15 votes) · EA(p) · GW(p)

Thanks, I agree with this clarification.

I actually find the argument that those arguing against prioritising climate change are aiding white supremacy[1] more alarming than the attack on Beckstead, even though the accusations there are more oblique.

While Beckstead's argumentation here seems basically true to me, it is clearly somewhat incendiary in its implications and likely to make many people uncomfortable – it is a large bullet to bite, even if I think that calling it "overtly white-supremacist" is bad argumentation that risks substantially degrading the discourse[2].

Conversely, claiming that anyone who doesn't explicitly prioritise a particular cause area is aiding white supremacy seems like extremely pernicious argumentation to me – an attempt to actively suppress critical prioritisation between cause areas and attack those trying to work out how to make difficult-but-necessary trade-offs. I think this style of argumentation makes good-faith disagreement over difficult prioritisation questions much harder, and contributes exceedingly little in return.


  1. "Hence, dismissing climate change because it does not constitute an obstacle for creating Utopia reinforces unjust racial dynamics, and thus supports white supremacy." (p. 27) ↩︎

  2. The document also claims (in footnote 13) that "the prevalence of such tendencies" (by which I assume is meant "overtly white-supremacist" tendencies, since the footnote is appended directly to that accusation) in EA longtermism "may be somewhat unsurprising" given EA's racial make-up. I would find it quite surprising if many EAs were secretly harbouring white-supremacist leanings, and would require much stronger (or indeed any) evidence that this were the case before casting such aspersions. ↩︎

comment by trammell · 2020-04-01T10:23:14.578Z · score: 17 (9 votes) · EA(p) · GW(p)

Yeah, agreed that using the white supremacist label needlessly poisons the discussion in both cases.

For whatever it’s worth, my own tentative guess would actually be that saving a life in the developing world contributes more to growth in the long run than saving a life in the developed world. Fertility in the former is much higher, and in the long run I expect growth and technological development to be increasing in global population size (at least over the ranges we can expect to see).

Maybe this is a bit off-topic, but I think it’s worth illustrating that there’s no sense in which the longtermist discussion about saving lives necessarily pushes in a so-called “white supremacist” direction.

comment by MichaelStJules · 2020-04-02T05:47:00.407Z · score: 2 (1 votes) · EA(p) · GW(p)

For whatever it’s worth, my own tentative guess would actually be that saving a life in the developing world contributes more to growth in the long run than saving a life in the developed world. Fertility in the former is much higher, and in the long run I expect growth and technological development to be increasing in global population size (at least over the ranges we can expect to see).

Is this taking more immediate existential risks into account? And if so, to what degree, and how do people in the developing and developed worlds affect them?

comment by trammell · 2020-04-02T07:36:59.050Z · score: 4 (2 votes) · EA(p) · GW(p)

No.

comment by zdgroff · 2020-03-31T23:33:47.674Z · score: 13 (10 votes) · EA(p) · GW(p)

Yeah, I agree the facile use of "white supremacy" here is bad, and I do want to keep ad hominems out of EA discourse. Thanks for explaining this.

I guess I still think it makes important enough arguments that I'd like to see engagement, though I agree it would be better said in a more cautious and less accusatory way.

comment by willbradshaw · 2020-04-01T09:35:04.053Z · score: 6 (5 votes) · EA(p) · GW(p)

I'm sad to see this comment get downvoted.

comment by willbradshaw · 2020-04-01T09:39:27.922Z · score: 8 (7 votes) · EA(p) · GW(p)

Much though I dislike important conversations happening on Facebook rather than some more public forum, it's probably worth it for people considering engaging here to read the pre-existing Facebook discussion here and here. At the very least we can avoid re-treading old ground.

comment by evelynciara · 2020-04-01T16:24:51.536Z · score: 7 (6 votes) · EA(p) · GW(p)

I think the paper title is clickbaity and misleading, given that you argue narrowly against Bostrom's conception of existential risk rather than the broader idea of x-risk itself.

comment by evelynciara · 2020-03-31T21:44:55.826Z · score: 5 (3 votes) · EA(p) · GW(p)

technological development proceeds from the time of this writing (in 2020) for another decade. Cures for pathologies like Alzheimer’s, diabetes, and heart disease are discovered. New strategies for preventing large-scale outbreaks of infectious disease are developed, and life expectancy around the world increases to 95 years old. The human population stabilizes at around 8 billion people... But at the end of this decade, technological progress stalls permanently: the conditions realized at the end of the decade are the conditions that hold for the next 1 billion years, at which point Earth becomes uninhabitable due to the sun’s growing luminosity. Nonetheless, many trillions and trillions of humans will come to exist in these conditions, with more opportunities for self-actualization than ever before. (pp. 13-14)

I agree that this is not an existential catastrophe, at least on timescales of less than a billion years, provided that humanity is not permanently prevented from leaving Earth. To me, an "existential catastrophe" is an event that causes humanity's welfare or the quality of its moral values to permanently fall far below present-day levels, e.g. to pre-industrial levels. At most, I'd be disappointed if technology plateaued at a level above the present day's technological progress.

However, I'd consider it an existential catastrophe if humanity permanently lost the ability to settle outer space, because that would make our eventual extinction inevitable.

comment by Khorton · 2020-03-31T19:40:59.496Z · score: 4 (9 votes) · EA(p) · GW(p)

I think the key message a lot of people will take away from this post is "Your entire philosophy and way of life is wrong - it doesn't matter if everyone dies."

What is the key message you actually want people to take away from this post?

comment by ælijah · 2020-03-31T23:44:59.481Z · score: 2 (4 votes) · EA(p) · GW(p)

If they read superficially, yes. Would you prefer he explicitly say in the abstract "I think it's bad if everyone dies"?

comment by Aaron Gertler (aarongertler) · 2020-04-01T06:45:08.979Z · score: 4 (8 votes) · EA(p) · GW(p)

ælijah: If you're going to accuse other users of having read something superficially, please explain your views in more detail. What do you think the paper's key message is, and what sections/excerpts make you believe this? 

I'll note that Khorton didn't suggest that "it doesn't matter if everyone dies" was what the post's author actually meant to convey - instead, she expressed concern that it could be read in that way, and asked the author to clarify. 

 

Also, speaking as a Forum moderator: the tone of your comment wasn't really in keeping with the Forum's rules. We discourage even mildly abrasive language if it doesn't contain enough detail for people to be able to respond to your points.

comment by ælijah · 2020-04-01T17:11:03.037Z · score: 4 (3 votes) · EA(p) · GW(p)

I apologize. I meant my comment to say that the paper wouldn't be misunderstood in that way by its readership as a whole if it were read carefully.

On further thought, I think it could be reasonably argued that the abstract actually should explicitly say "I think it's bad if everyone dies".

comment by Aaron Gertler (aarongertler) · 2020-04-01T22:55:46.896Z · score: 4 (2 votes) · EA(p) · GW(p)

Thanks for clarifying. This topic has generally been contentious, so I want to be careful to keep the discussion focused on substantive engagement with Torres' ideas or specific wording.

comment by throwaway23984724087 · 2020-04-01T21:29:41.980Z · score: -10 (5 votes) · EA(p) · GW(p)

In light of these concerns, will you be changing your Twitter handle, 'xriskology', which I think you registered when you were a big supporter of the concept?

Or maybe you'll edit your website xriskology.com, which among other things promotes your book 'Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks'?

comment by MichaelStJules · 2020-04-02T05:55:49.182Z · score: 10 (4 votes) · EA(p) · GW(p)

The author still cares about x-risks, just not in the Bostromian way. Here's the first sentence from the abstract:

This paper offers a number of reasons for why the Bostromian notion of existential risk is useless.

Weird that you made a throwaway just to leave a sarcastic and misguided comment.