Confusion about implications of "Neutrality against Creating Happy Lives"

post by David Reber (derber) · 2021-04-11T15:54:34.503Z · EA · GW · 6 comments

This is a question post.

As I understand it, the following two positions are widely accepted in the EA community:

  1. Temporal position should not impact ethics (hence longtermism)
  2. Neutrality against creating happy lives

But if we are time-agnostic, then neutrality against creating happy lives seems to imply a preference for extinction over any future where even a tiny amount of suffering exists.

So am I missing something here? (Perhaps "neutrality against creating happy lives" can't be expressed in a way that's temporally agnostic?)

Answers

answer by jackmalde · 2021-04-11T17:14:40.264Z · EA(p) · GW(p)

My short answer is that 'neutrality against creating happy lives' is not a mainstream position in the EA community. Some do hold that view, but I think it's a minority. Most think that creating happy lives is good.

comment by Alex HT · 2021-04-11T18:28:32.635Z · EA(p) · GW(p)

I agree with this answer. Also, lots of people do think that temporal position (or something similar, like already being born) should affect ethics.

But yes OP, accepting time neutrality and being completely indifferent about creating happy lives does seem to me to imply the counterintuitive conclusion you state. You might be interested in this excellent emotive piece [EA · GW] or section 4.2.1 of this philosophy thesis. They both argue that creating happy lives is a good thing.

answer by Larks · 2021-04-11T19:01:26.810Z · EA(p) · GW(p)

I have never seen a survey on this, but I think most people here adopt a totalist view on which creating new happy people is good, because of e.g. the classic transitivity argument. So you were correct to be confused!

answer by MichaelPlant · 2021-04-12T08:41:06.510Z · EA(p) · GW(p)

I want to focus on the following because it seems to be a problematic misunderstanding:

"1. Temporal position should not impact ethics (hence longtermism)"

This genuinely does seem to be a common view in EA, namely, that when someone exists doesn't (in itself) matter, and that, given impartiality with respect to time, longtermism follows. Longtermism is the view that we should be particularly concerned with ensuring long-run outcomes go well.

The reason this understanding is problematic is that probably the two strongest objections to longtermism (in the sense that, if these objections hold, they rob longtermism of its practical force) have nothing to do with temporal position in itself. I won't say whether these objections are, all things considered, plausible; I'll merely set out what they are.

First, there is the epistemic objection to longtermism (sometimes called the 'tractability', 'washing-out', or 'cluelessness' objection): in short, that we can't be confident enough about the impact our actions will have on the long-run future to make it the practical priority. See this for recent discussion and references: https://forum.effectivealtruism.org/posts/z2DkdXgPitqf98AvY/formalising-the-washing-out-hypothesis#comments [EA · GW]. Note this has nothing to do with people having different value due to their position in time.

Second, there is the ethical objection that appeals to person-affecting views in population ethics and has the implication that creating (happy) lives is neutral.* What's the justification for this implication? One justification could be 'presentism', the view that only presently existing people matter. This is a justification based on temporal position per se, but it is (I think) highly implausible.

An alternative justification, which does not rely on temporal position in itself, is 'necessitarianism', the view that the only people who matter are those who exist necessarily (i.e. in all outcomes under consideration). The motivation for this is (1) outcomes can only be better or worse if they are better or worse for someone (the 'person-affecting restriction') and (2) existence is not comparable to non-existence for someone ('non-comparativism'). In short, it isn't better to create lives, because it's not better for the people who get created. (I am quite sympathetic to this view and think too many EAs dismiss it too quickly, often without understanding it.)

The further thought is that our actions change the specific individuals who get created (e.g. think whether any particular individual alive today would exist if Napoleon had won Waterloo). The result is that our actions, which aim to benefit (far) future people, cause different people to exist. This isn't better for either the people who would have existed, or the people who will actually exist. This is known as the 'non-identity problem'. Necessitarians might explain that, although we really want to help (far) future people, we simply can't. There is nothing, in practice, we can do to make their lives better. (Rough analogy: there is nothing, in practice, we can do to make trees' lives go better - only sentient entities can have well-being.)

Note, crucially, this has nothing to do with temporal position in itself either. It's the combination of only necessary lives mattering and our actions changing which people will exist. Temporal position is ethically relevant (i.e. instrumentally important), but not ethically significant (i.e. doesn't matter in itself).

*You can have symmetric person-affecting views (creating lives is neutral). You can also have asymmetric person-affecting views (creating happy lives is neutral, creating unhappy lives is bad). Asymmetric PAVs may, or may not, have concern for the long term, depending on what the future looks like and whether they think adding happy lives can compensate for adding unhappy lives. I don't want to get into this here as this is already long enough.

answer by MichaelStJules · 2021-04-12T03:44:53.298Z · EA(p) · GW(p)

I agree with Jack that neutrality about creating happy lives is (probably) a minority view within EA, although I'm not sure. 80% of EAs are consequentialist according to the most recent EA survey, and most of those probably reject neutrality: https://www.rethinkpriorities.org/blog/2019/12/5/ea-survey-2019-series-community-demographics-amp-characteristics

The conclusion in favour of extinction doesn't necessarily follow, though, depending on the exact framing of the asymmetry and neutrality (although I think it would follow from the views CLR defends, and I don't even think everyone at CLR agrees with those views). See the soft asymmetry and conclusion here: https://globalprioritiesinstitute.org/teruji-thomas-the-asymmetry-uncertainty-and-the-long-term/

Note that this view does satisfy transitivity, but not the independence of irrelevant alternatives, i.e. whether A is better than B can depend on what other options are available. I think standard intuitions about the repugnant conclusion, which the soft asymmetry avoids (if I recall correctly), do not satisfy the independence of irrelevant alternatives. There are other cases where independence is violated by common intuitions: https://forum.effectivealtruism.org/posts/HyeTgKBv7DjZYjcQT/the-problem-with-person-affecting-views?commentId=qPDNPCsWuCF86hsqi [EA(p) · GW(p)]
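To make the menu-dependence concrete, here is a toy sketch (mine, not from any of the linked papers) of a necessitarian-flavoured comparison rule that violates the independence of irrelevant alternatives: whether outcome A is better than outcome B flips when a third option changes who counts as a "necessary" person. All names and welfare numbers are invented for illustration.

```python
# Toy sketch: a menu-dependent (person-affecting) betterness comparison
# that violates the independence of irrelevant alternatives (IIA).
# Outcomes are dicts mapping person -> welfare; people and numbers are invented.

def necessary_people(menu):
    """People who exist in every outcome on the menu."""
    people = set(menu[0])
    for outcome in menu[1:]:
        people &= set(outcome)
    return people

def better(x, y, menu):
    """x is better than y iff it gives the menu's necessary people more
    total welfare; ties are broken by total welfare of everyone."""
    nec = necessary_people(menu)
    nx = sum(x[p] for p in nec)
    ny = sum(y[p] for p in nec)
    if nx != ny:
        return nx > ny
    return sum(x.values()) > sum(y.values())

A = {"ann": 5}              # only Ann exists
B = {"ann": 4, "bob": 10}   # Ann slightly worse off, but happy Bob is added
C = {"bob": 3}              # a third option in which Ann never exists

print(better(A, B, [A, B]))     # True: Ann is necessary, and A treats her better
print(better(A, B, [A, B, C]))  # False: nobody is necessary, so totals decide
```

The A-versus-B verdict changes purely because C joined the menu, even though C itself is chosen by no one: that is exactly the IIA failure described above, and the rule is still transitive within any fixed menu.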

For what it's worth, this view was only recently put forward, so it's likely few people know about it, but I suspect it's closest to a temporally impartial version of most people's moral intuitions.

There's also the possibility of s-risks by omission, like failing to help aliens (causally or acausally), which extinction would exacerbate, although I'm personally skeptical that we would find and help aliens. Some discussion here: https://centerforreducingsuffering.org/s-risk-impact-distribution-is-double-tailed/

Personally, I basically agree with the views in that article by CLR, the asymmetry in particular is one of my strongest intuitions (the hard version, additional happy lives aren't good), and I think that an empty future would be optimal because of the asymmetry. I do not find this counterintuitive.

answer by jushy · 2021-04-12T10:59:41.103Z · EA(p) · GW(p)

Like others have said, I suspect that neutrality on making happy people isn't the majority view amongst EAs.

But I am neutral on making happy people, which means I am not particularly worried about extinction. Even so, I still think EA work surrounding extinction is a priority, because almost all of this work also helps to prevent other 'worst-case scenarios' that do not necessarily involve extinction (https://forum.effectivealtruism.org/posts/nz26sqMNf7kfFDg8y/longtermism-which-doesn-t-care-about-extinction-implications [EA · GW]).

I think a preference for extinction over a future with small amounts of suffering only holds if, on top of being 'time-agnostic' and neutral on making happy people, you are a strict negative utilitarian (you only care about reducing suffering, not about increasing pleasure), and the small amount of suffering cannot be eliminated at a later point in time.
