What are the leading critiques of "longtermism" and related concepts

post by AlasdairGives · 2020-05-30T10:54:50.075Z · score: 40 (21 votes) · EA · GW · 14 comments

This is a question post.


By longtermism I mean "the view that the most important determinant of the value of our actions today is how those actions affect the very long-run future."

I want to clarify my thoughts around longtermism as an idea, and to better understand why some aspects of how it is used within EA make me uncomfortable despite my general support of the idea.

I'm doing a literature search, but because this is primarily an EA concept that I know from within EA, I'm mostly familiar with work by advocates of the position (e.g. Nick Beckstead). I'd also like to understand the leading challenges and critiques of this position (if any). I know of some from within the EA community (Kaufmann), but not what the critical position is in academic work or outside the EA community.

Thanks!

Answers

answer by Denis Drescher · 2020-05-30T21:57:12.292Z · score: 22 (10 votes) · EA(p) · GW(p)

“The Epistemic Challenge to Longtermism” by Christian Tarsney is perhaps my favorite paper on the topic.

Longtermism holds that what we ought to do is mainly determined by effects on the far future. A natural objection is that these effects may be nearly impossible to predict—perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present options is mainly determined by short-term considerations. This paper aims to precisify and evaluate (a version of) this epistemic objection. To that end, I develop two simple models for comparing “longtermist” and “short-termist” interventions, incorporating the idea that, as we look further into the future, the effects of any present intervention become progressively harder to predict. These models yield mixed conclusions: If we simply aim to maximize expected value, and don’t mind premising our choices on minuscule probabilities of astronomical payoffs, the case for longtermism looks robust. But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these “Pascalian” probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.

“How the Simulation Argument Dampens Future Fanaticism” by Brian Tomasik has also influenced my thinking, but it has a narrower focus.

Some effective altruists assume that most of the expected impact of our actions comes from how we influence the very long-term future of Earth-originating intelligence over the coming ~billions of years. According to this view, helping humans and animals in the short term matters, but it mainly only matters via effects on far-future outcomes.

There are a number of heuristic reasons to be skeptical of the view that the far future astronomically dominates the short term. This piece zooms in on what I see as perhaps the strongest concrete (rather than heuristic) argument why short-term impacts may matter a lot more than is naively assumed. In particular, there's a non-trivial chance that most of the copies of ourselves are instantiated in relatively short-lived simulations run by superintelligent civilizations, and if so, when we act to help others in the short run, our good deeds are duplicated many times over. Notably, this reasoning dramatically upshifts the relative importance of short-term helping even if there's only a small chance that Nick Bostrom's basic simulation argument is correct.

My thesis doesn't prove that short-term helping is more important than targeting the far future, and indeed, a plausible rough calculation suggests that targeting the far future is still several orders of magnitude more important. But my argument does leave open uncertainty regarding the short-term-vs.-far-future question and highlights the value of further research on this matter.

Finally, you can also conceive of yourself as one instantiation of a decision algorithm that probably has close analogs at different points throughout time, which makes Caspar Oesterheld’s work relevant to the topic. There are a few summaries linked from that page. I think it’s an extremely important contribution but a bit tangential to your question.

comment by Milan_Griffes · 2020-06-04T20:54:03.030Z · score: 6 (3 votes) · EA(p) · GW(p)

My essay on consequentialist cluelessness is also about this: What consequences?

comment by AlasdairGives · 2020-05-31T21:58:57.733Z · score: 1 (1 votes) · EA(p) · GW(p)

Thanks! The top paper seems very relevant in particular.

answer by Benjamin_Todd · 2020-05-30T23:04:56.558Z · score: 20 (8 votes) · EA(p) · GW(p)

This is not exactly what you're looking for, but the best summary of objections I'm aware of is from the Strong Longtermism paper by Greaves and MacAskill.

comment by AlasdairGives · 2020-05-31T22:00:00.005Z · score: 1 (1 votes) · EA(p) · GW(p)

Thanks - I’ve read the summaries of this but hadn’t twigged that it had been developed into a full paper.

answer by Denise_Melchin · 2020-05-30T17:42:48.816Z · score: 11 (8 votes) · EA(p) · GW(p)

Most people don't value not-yet-existing people as much as people already alive. I think it is the EA community holding the fringe position here, not the other way around. Neither is total utilitarianism a majority view among philosophers. (You might want to look into critiques of utilitarianism.)

If you pair this value judgement with a belief that, for affecting people this century, existential risk is less valuable to work on than other issues, you will probably want to work on "non-longtermist" problems.

comment by Benjamin_Todd · 2020-06-02T15:59:24.380Z · score: 11 (5 votes) · EA(p) · GW(p)

I don't think longtermism depends on either (i) valuing future people equally to presently alive people or (ii) total utilitarianism (or utilitarianism in general), so I don't think these are great counterarguments unless further fleshed out. Instead it depends on something much more general like 'whatever is of value, there could be a lot more of it in the future'.

comment by Max_Daniel · 2020-06-04T08:18:54.072Z · score: 15 (6 votes) · EA(p) · GW(p)

[Not primarily a criticism of your comment, I think you probably agree with a lot of what I say here.]

Instead it depends on something much more general like 'whatever is of value, there could be a lot more of it in the future'.

Yes, but in addition your view in normative ethics needs to have suitable features, such as:

  • A sufficiently aggregative axiology. Else the belief that there will be much more of all kinds of stuff in the future won't imply that the overall goodness of the world mostly hinges on its long-term future. For example, if you think total value is a bounded function of whatever the sources of value are (e.g. more happy people are good up to a total of 10 people, but additional people add nothing), longtermism may not go through.
  • [Only for 'deontic longtermism':] A sufficiently prominent role of beneficence, i.e. 'doing what has the best axiological consequences', in the normative principles that determine what you ought to do. For example, if you think that keeping some implicit social contract with people in your country trumps beneficence, longtermism may not go through.

(Examples are to illustrate the point, not to suggest they are plausible views.)

I'm concerned that some presentations of "non-consequentialist" reasons for longtermism sweep under the rug an important difference: between the actual longtermist claim that improving the long-term future is of particular concern relative to other goals, and the weaker claim that improving or preserving the long-term future is one ethical consideration among many, with it being underdetermined how the two trade off against each other.

So for example, sure, if we don't prevent extinction we are uncooperative toward previous generations because we frustrate their 'grand project of humanity'. That might be a good, non-consequentialist reason to prevent extinction. But without specifying the full normative view, it is really unclear how much to focus on this relative to other responsibilities.

Note that I actually do think that something like longtermist practical priorities follow from many plausible normative views, including non-consequentialist ones. Especially if one believes in a significant risk of human extinction this century. But the space of such views is vast, and which views are and aren't plausible is contentious. So I think it's important to not present longtermism as an obvious slam dunk, or to only consider (arguably implausible) objections that completely deny the ethical relevance of the long-term future.

comment by Denise_Melchin · 2020-06-03T17:50:14.880Z · score: 3 (2 votes) · EA(p) · GW(p)

That's very fair, I should have been a lot more specific in my original comment. I have been a bit disappointed that within EA longtermism is so often framed in utilitarian terms - I have found the collection of moral arguments in favour of protecting the long-term future brought forth in The Precipice a lot more compelling and wish they would come up more frequently.

comment by Benjamin_Todd · 2020-06-03T22:04:22.512Z · score: 2 (1 votes) · EA(p) · GW(p)

I agree!

comment by Max_Daniel · 2020-06-04T08:26:10.913Z · score: 5 (3 votes) · EA(p) · GW(p)

I also like the arguments in The Precipice. But per my above comment, I'm not sure if they are arguments for longtermism, strictly speaking. As far as I recall, The Precipice argues for something like "preventing existential risk is among our most important moral concerns". This is consistent with, but neither implied nor required by longtermism: if you e.g. thought that there are 10 other moral concerns of similar weight, and you choose to mostly focus on those, I don't think your view is 'longtermist' even in the weak sense. This is similar to how someone who thinks that protecting the environment is somewhat important but doesn't focus on this concern would not be called an environmentalist.

comment by Benjamin_Todd · 2020-06-04T12:35:03.620Z · score: 6 (3 votes) · EA(p) · GW(p)

Yes, I agree with that too - see my comments later in the thread. I think it would be great to be clearer that the arguments for xrisk and longtermism are separate (and neither depends on utilitarianism).

14 comments

Comments sorted by top scores.

comment by MarisaJurczyk · 2020-05-30T16:20:03.189Z · score: 16 (8 votes) · EA(p) · GW(p)

Not academic or outside of EA, but this Forum comment and this Facebook post may be good starting points if you haven't seen them already.

comment by RandomEA · 2020-06-02T01:13:18.678Z · score: 29 (14 votes) · EA(p) · GW(p)

As an update, I am working on a full post that will excerpt 20 arguments against working to improve the long-term future and/or working to reduce existential risk as well as responses to those arguments. The post itself is currently at 26,000 words and there are six planned comments (one of which will add 10 additional arguments) that together are currently at 11,000 words. There have been various delays in my writing process but I now think that is good because there have been several new and important arguments that have been developed in the past year. My goal is to begin circulating the draft for feedback within three months.

comment by Pablo_Stafforini · 2020-06-02T13:47:52.738Z · score: 11 (3 votes) · EA(p) · GW(p)

Judging from the comment, I expect the post to be a very valuable summary of existing arguments against longtermism, and am looking forward to reading it. One request: as Jesse Clifton notes, some of the arguments you list apply only to x-risk (a narrower focus than longtermism), and some apply only to AI risk (a narrower focus than x-risk). It would be great if your post could highlight the scope of each argument.

comment by Benjamin_Todd · 2020-06-02T16:08:06.181Z · score: 12 (4 votes) · EA(p) · GW(p)

Strongly agree - I think it's really important to disentangle longtermism from existential risk from AI safety. I might suggest writing separate posts.

I'd also be keen to see more focus on which arguments seem best, rather than having such a long list (including many that have a strong counter, or are no longer supported by the people who first suggested them), though I appreciate that might take longer to write. A quick fix would be to link to counterarguments where they exist.

comment by RandomEA · 2020-06-02T22:19:48.568Z · score: 1 (1 votes) · EA(p) · GW(p)

Thanks Pablo and Ben. I already have tags below each argument for what I think it is arguing against. I do not plan on doing two separate posts as there are some arguments that are against longtermism and against the longtermist case for working to reduce existential risk. Each argument and its response are presented comprehensively, so the amount of space dedicated to each is based mostly on the amount of existing literature. And as noted in my comment above, I am excerpting responses to the arguments presented.

comment by Benjamin_Todd · 2020-06-03T16:17:09.739Z · score: 10 (6 votes) · EA(p) · GW(p)

FWIW I'd still favour two posts (or, if you were only going to do one, focusing on longtermism). I took a quick look at the original list, and I think the arguments divide up pretty well, so you wouldn't end up with many reasons that should appear on both lists. I also think it would be fine to have some arguments appear on both lists.

In general, I think conflating the case for existential risk with the case for longtermism has caused a lot of confusion, and it's really worth pushing against.

For instance, many arguments that undermine existential risk actually imply we should focus on (i) investing & capacity building (ii) global priorities research or (iii) other ways to improve the future, but instead get understood as arguments for working on global health.

comment by RandomEA · 2020-06-04T22:11:19.980Z · score: 2 (2 votes) · EA(p) · GW(p)

Thanks Ben. There is actually at least one argument in the draft for each alternative you named. To be honest, I don't think you can get a good sense of my 26,000 word draft from my 570 word comment from two years ago. I'll send you my draft when I'm done, but until then, I don't think it's productive for us to go back and forth like this.

comment by Aaron Gertler (aarongertler) · 2020-07-20T14:49:04.683Z · score: 4 (2 votes) · EA(p) · GW(p)

Any updates on how this post is going? I'm really curious to see a draft!

comment by Sean_o_h · 2020-07-20T15:59:03.055Z · score: 2 (1 votes) · EA(p) · GW(p)

+1!

comment by RandomEA · 2020-09-02T03:50:02.487Z · score: 1 (1 votes) · EA(p) · GW(p)

While I have made substantial progress on the draft, it is still not ready to be circulated for feedback.

I have shared the draft with Aaron Gertler to show that it is a genuine work in progress.

comment by AlasdairGives · 2020-06-02T20:57:22.535Z · score: 1 (1 votes) · EA(p) · GW(p)

That sounds fantastic. I'd love to read the draft once it is circulated for feedback.

comment by Davis_Kingsley · 2020-06-01T02:51:38.020Z · score: 8 (3 votes) · EA(p) · GW(p)

Hmm, I remember seeing a criticism somewhere in the EA-sphere that went something like:

"The term 'longtermism' is misleading because in practice 'longtermism' means 'concern over short AI timelines', and in fact many 'longtermists' are concerned with events on a much shorter time scale than the rest of EA."

I thought that was a surprising and interesting argument, though I don't recall who initially made it. Does anyone remember?

comment by MichaelStJules · 2020-06-02T02:39:40.400Z · score: 18 (6 votes) · EA(p) · GW(p)

This sounds like a misunderstanding to me. Longtermists concerned with short AI timelines are concerned with them because of AI's long lasting influence into the far future.

comment by MichaelStJules · 2020-05-31T07:37:59.009Z · score: 2 (1 votes) · EA(p) · GW(p)

See also "If you value future people, why do you consider near term effects?" by Alex HT, and especially the comments.