Why do you find the Repugnant Conclusion repugnant?

post by Will Bradshaw (willbradshaw) · 2021-12-17T10:00:44.822Z · EA · GW · 63 comments

This is a question post.

The Repugnant Conclusion has always seemed straightforwardly and unobjectionably true to me. I've always been confused by its alleged repugnance, or why such an anodyne-seeming conclusion merits such a dramatic name.

This isn't like the other standard objections to utilitarianism. I'm not persuaded by concerns about utility monsters or trolley problems, but I feel the sting of those objections – they feel like bullets I need to bite. Whereas the Repugnant Conclusion just seems like a non-problem to me.

I say all this not to argue against concerns about the Repugnant Conclusion, but to motivate my question here. I'd like to have a better understanding of the intuitions that lead people to see this as such a serious problem, and whether I'm missing something that might cause me to put more weight on these sorts of concerns. I'm less interested in technical philosophical arguments here than in intuition pumps – simple thought experiments, or real-world scenarios, or related problems that might help me feel the sting of the objections a bit more.

Answers

answer by MichaelStJules · 2021-12-17T17:32:58.558Z · EA(p) · GW(p)

I have asymmetric person-affecting intuitions, and I think the Repugnant Conclusion is a clear example of treating individuals as mere vessels/receptacles for value. Sacrificing the welfare of just one person so that another could be born — even if they would be far better off than the first person — seems wrong to me, ignoring other effects. That I could have an obligation to bring people into existence just for their own sake and at an overall personal cost seems wrong to me. The RC just seems like a worse and more extreme version of this.

In a hypothetical world where I'm the only one around, I feel I basically should be allowed to do whatever I want, as long as no one else will come into existence, and I should have no reason to bring them into existence. In my world, I should do whatever I want. If no one is born, I'm not harming anyone else or failing in my obligations to others, because they don't and won't exist to be able to experience harm (or experience an absence of benefit or worse benefits).

That I should make sacrifices to prevent people with bad lives from being born or to help future people who would exist anyway (including ensuring better off people are born instead of worse off people) does seem right to me. If and because these people will exist, I can harm them or fail to prevent harm to them, and that would be bad.

I have some more writing on the asymmetry here [EA(p) · GW(p)].

comment by Pablo (Pablo_Stafforini) · 2021-12-17T18:15:14.681Z · EA(p) · GW(p)

I'm confused by your answer.

  • You say that "sacrificing the welfare of just one person so that another could be born... seems wrong". But the Repugnant Conclusion is a claim about the relative value of two possible populations, neither of which is assumed to be actual. So I don't understand how you reach the conclusion that, in judging that one of these populations is more valuable, by bringing it about you'd be "sacrificing" the welfare of the possible people in the other population. The situation seems perfectly symmetrical, so either you are "sacrificing" people no matter what you do, or (what seems more plausible) talk of "sacrificing" doesn't really make sense in this context.
  • Even ignoring the above, I'm confused about why you think that "the Repugnant Conclusion is a clear example of treating individuals as mere vessels/receptacles for value" given your endorsement of asymmetrical views. How are you not treating individuals as mere vessels/receptacles for value when, in deciding between two worlds both of which contain suffering but differ in the number of people they contain, you bring about the world that contains less suffering? What do you tell the person whom you subject to a life of misery so that some other person, who would have been even more miserable, is not born?
  • You have said that you don't share the intuition that positive welfare has intrinsic value. But lacking this intuition, how can you intuitively compare the value of two worlds that differ only in how much positive welfare they contain?
  • The Repugnant Conclusion arises also at the intrapersonal level, so it would be very surprising if the reason we find it counterintuitive, insofar as we do, at the interpersonal level has to do with factors—such as treating people as mere receptacles of value or sacrificing people—that are absent at the intrapersonal level.
Replies from: Lumpyproletariat, MichaelStJules
comment by Lumpyproletariat · 2021-12-17T21:53:04.074Z · EA(p) · GW(p)

This comment seems to me to be requesting clarification in good faith. Might someone who downvoted it explain why, if it wouldn't take too much time or effort? I'm fairly new to the forum and would like a more complete view of the customs.

Edited to add: Perhaps because it was perceived as lower effort than the parent comment, and required another high-effort post in response, which might have been avoided by a closer reading?

Replies from: MichaelStJules
comment by MichaelStJules · 2021-12-18T08:02:04.613Z · EA(p) · GW(p)

I never downvoted his comments, and have (just now) instead upvoted them.

However, I would interpret all of Pablo's points in his response not just as requesting clarification but also as objections to my answer, in a post that's only asking for people's reasons to object to the RC and is explicitly not about technical philosophical arguments (although it's not clear this should extend to replies to answers), just basic intuitions.

I don't personally mind, and these are interesting points to engage with. However, I can imagine others finding it too intimidating/adversarial/argumentative.

Replies from: Lumpyproletariat
comment by Lumpyproletariat · 2021-12-18T08:30:52.858Z · EA(p) · GW(p)

Thank you for the explanation! 

comment by MichaelStJules · 2021-12-17T19:36:57.791Z · EA(p) · GW(p)

(I've made a bunch of edits to the following comment within 2 hours of posting it.)

You say that "sacrificing the welfare of just one person so that another could be born... seems wrong". But the Repugnant Conclusion is a claim about the relative value of two possible populations, neither of which is assumed to be actual. So I don't understand how you reach the conclusion that, in judging that one of these populations is more valuable, by bringing it about you'd be "sacrificing" the welfare of the possible people in the other population. The situation seems perfectly symmetrical, so either you are "sacrificing" people no matter what you do, or (what seems more plausible) talk of "sacrificing" doesn't really make sense in this context.

If you're a consequentialist whose views are transitive and complete, and satisfy the independence of irrelevant alternatives, then the RC implies what I wrote (ignoring other effects and opportunity costs). The situation is not necessarily symmetrical in practice if you hold person-affecting views, which typically require the rejection of the independence of irrelevant alternatives. I'd recommend the "wide, hard view" in The Asymmetry, Uncertainty, and the Long Term by Teruji Thomas as the view closest to common sense that satisfies the intuitions of my answer above (that I'm aware of), and the talk is somewhat accessible, although the paper can get pretty technical. This view allows future contingent good lives to make up for (but not outweigh) future contingent bad lives, but, as a "hard" view, not to make up for losses to "necessary" people, who would exist regardless. Because it's "wide", it "solves" the Nonidentity problem. The wide version would still reject the RC even if we're choosing between two disjoint contingent populations, I think because "excess" (in number) contingent people with good lives wouldn't count in this particular pairwise comparison. Another way to think about it would be like matching counterparts [EA · GW] across worlds, and then we can talk about sacrifices as the differences in welfare between an individual and their counterpart, although I'm not sure the view entails something equivalent to this.

My own views are much more asymmetric than the views in Thomas's work, and I lean towards negative utilitarianism, since I don't think future contingent good lives can make up for future contingent bad lives at all.

How are you not treating individuals as mere vessels/receptacles for value when, in deciding between two worlds both of which contain suffering but differ in the number of people they contain, you bring about the world that contains less suffering? What do you tell the person whom you subject to a life of misery so that some other person, who would be even more miserable, is not born?

I tell them that I did it to prevent a greater harm that would have otherwise been experienced. The forgoing of benefit caused by someone never being born would not be experienced by that non-existent person. I have some short writing on the asymmetry here [EA(p) · GW(p)] that I think can explain this better.

You have said that you don't share the intuition that positive welfare has intrinsic value. But lacking this intuition, how can you compare the value of two worlds that differ only in how much positive welfare they contain?

Lives most people consider good overall can still involve disappointment or suffering, so the RC doesn't necessarily differ only in how much positive welfare there is, depending on how exactly we're imagining it. If we're only talking about positive welfare and no negative welfare, preferences aren't more frustrated/less satisfied than otherwise, and everyone is perfectly content in the "repugnant" world, then I wouldn't object. If I had to make a personal sacrifice to bring someone into existence, I would probably not be perfectly content, possibly unless I thought it was the right thing to do (although I might feel some dissatisfaction either way, and less if I'm doing what I think is the right thing).

Plus, it's worth sharing my more general objection regardless of my denial of positive welfare, since it may reflect others' views, and they can upvote or comment to endorse it if they agree.

The Repugnant Conclusion arises also at the intrapersonal level, so it would be very surprising if the reason we find it counterintuitive, insofar as we do, at the interpersonal level has to do with factors—such as treating people as mere receptacles of value or sacrificing people—that are absent at the intrapersonal level.

Assuming intrapersonal and interpersonal tradeoffs should be treated the same (ignoring indirect effects), yes. It's not obvious that they should be, and I think common sense ethics does not treat them the same.

But even then, the intrapersonal version (+welfarist consequentialism) also violates autonomy and means I shouldn't do whatever I want in my world, so my objection is similar. I think "preference-affecting" views (person-affecting views applied at the level of individual preferences/desires, especially Thomas's "hard, wide view") would likely fare better here for structurally similar reasons, so the "solution" could be similar or even the same.

Symmetric total preference utilitarianism and average preference utilitarianism would imply that it's good for a person to create enough sufficiently strong satisfied preferences in them, even if it means violating their consent and the preferences they already have or will have. Classical utilitarianism implies involuntary wireheading (done right) is good for a person. Preference-affecting views and antifrustrationism (negative preference utilitarianism) would only endorse violating consent or preferences for a person's own sake in ways that depend on preferences they would have otherwise or anyway, so you violate consent/some preferences to respect others (although I think antifrustrationism does worse than asymmetric preference-affecting views for respecting preferences/consent, and deontological constraints or limiting aggregation would likely do even better).

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2021-12-17T21:01:33.071Z · EA(p) · GW(p)

[ETA: You say you've made edits to your post, so it's possible some of my replies are addressed by your revisions. I am always responding to the text I'm quoting, which may differ from the final version of your comment.]

If you're a consequentialist whose views are transitive, complete and satisfy the independence of irrelevant alternatives, the RC implies what I wrote (ignoring other effects and opportunity costs). The situation is not symmetrical if you hold person-affecting views, which typically require the rejection of the independence of irrelevant alternatives. I'd recommend the "wide, hard view" in The Asymmetry, Uncertainty, and the Long Term by Teruji Thomas as the view closest to common sense that satisfies the intuitions of my answer above (that I'm aware of), and the talk is somewhat accessible, although the paper can get pretty technical. This view allows future contingent good lives to make up for (but not outweigh) future contingent bad lives, but, as a "hard" view, not losses to "necessary" people, who would exist regardless. Because it's "wide", it "solves" the Nonidentity problem.

I don't have time to look into this right now, but I also feel that this probably won't provide an answer to the question I meant to ask. (Apologies if my wording was unclear.) Call the world with few, very happy people, A, and the world with lots of mildly happy people, Z. The question is, then, simply: "If bringing about Z sacrifices people in A, why doesn't bringing about A sacrifice people in Z?" You say that you'd be sacrificing someone "even if they would be far better off than the first person", which seems to commit you to the claim that you would indeed be sacrificing people in Z by bringing about A.

I tell them that I did it to prevent a greater harm that would have otherwise been experienced. The forgoing of benefit caused by someone never being born would not be experienced by that non-existent person. I have some short writing on the asymmetry here that I think can explain this better.

I don't understand how this answer explains why you are not treating the person as a value receptacle, given that you believe this is what the total utilitarian does in the Repugnant Conclusion. I can see why a negative utilitarian and/or a person-affecting theorist would treat these two cases differently. What I don't understand is why the difference is supposed to consist in that people are being treated as value receptacles in one case, but not in the other. This just seems to misdiagnose what's going on here.

The comment you shared helps me understand the Asymmetry, but not your claim about value receptacles.

Lives most people consider good overall can still involve disappointment or suffering, so the RC doesn't necessarily differ only in how much positive welfare there is, depending on how exactly we're imagining it.

I agree that you can have people with lifetime wellbeing just above neutrality either because they live their entire lives at that level or because they have lots of ups and downs that almost perfectly cancel each other out (and anything in between). I think discussions of the Repugnant Conclusion sometimes make the stronger assumption that people's lives are continuously just above neutrality ("muzak and potatoes"), and that people may respond to the thought experiment differently depending on whether or not this assumption is made.

For a negative utilitarian, it seems that whether the assumption is made is in fact crucial, since the "muzak and potatoes" life is as good as it can be (it lacks any unpleasantness) whereas lives in other Repugnant Conclusion scenarios could contain huge amounts of suffering. I hadn't appreciated this point when I wrote my previous comment, but now that I do, I feel even more confused.

Assuming intrapersonal and interpersonal tradeoffs should be treated the same (ignoring indirect effects), yes. It's not obvious that they should be, and I think common sense ethics does not treat them the same.

Oh, I wasn't saying they should be treated the same. It's pretty clear that commonsense morality treats them differently.

My point is that the phenomenology of the intuitions at the interpersonal and intrapersonal levels is essentially the same, which strongly suggests that the same factor is triggering those intuitions in both cases. Any explanation of the counterintuitiveness of the Repugnant Conclusion in terms of factors that are specific to the interpersonal case is therefore implausible.

Although I'm not sure I'm understanding you correctly, you then seem to be suggesting that your views can in fact vindicate the claim that people would also in some sense be sacrificed in the intrapersonal case. Is this what you are claiming? It would help me if you describe what you yourself believe, as opposed to discussing the implications of a wide variety of views.

[Of course, feel free to ignore any of this if you aren't interested, etc.]

Replies from: MichaelStJules
comment by MichaelStJules · 2021-12-18T07:39:21.394Z · EA(p) · GW(p)

(FWIW, I never downvoted your comments and have upvoted them instead, and I appreciate the engagement and thoughtful questions/pushback, since it helps me make my own views clearer. Since I spent several hours on this thread, I might not respond quickly or at all to further comments.)

The question is, then, simply: "If bringing about Z sacrifices people in A, why doesn't bringing about A sacrifice people in Z?" You say that you'd be sacrificing someone "even if they would be far better off than the first person", which seems to commit you to the claim that you would indeed be sacrificing people in Z by bringing about A.

Sorry, I tried to respond to that in an edit you must have missed, since I realized I didn't after posting my reply. In short, a wide person-affecting view means that Z would involve "sacrifice" and A would not, if both populations are completely disjoint and contingent, roughly because the people in A have worse off "counterparts" in Z, and the excess positive welfare people in Z without counterparts don't compensate for this. No one in Z is better off than anyone in A, so none are better off than their counterparts in A, so there can't be any sacrifice in a "wide" way in this direction. The Nonidentity problem would involve "sacrifice" in one way only, too, under a wide view.

(If all the people in Z already exist, and none of the people in A exist, then going from Z to A by killing everyone in Z could indeed mean "sacrificing" the people in Z for those in A, under some person-affecting views, and be bad under some such views.

Under a narrow view (instead of a wide one), with disjoint contingent populations, we'd be indifferent between A and Z, or they'd be incomparable, and both or neither would involve "sacrifice".)

 

 

On value receptacles, here's a quote by Frick (on his website), from a paper in which he defends the procreation asymmetry:

For another, it feeds a common criticism of utilitarianism, namely that it treats people as fungible and views them in a quasi-instrumental fashion. Instrumental valuing is an attitude that we have towards particulars. However, to value something instrumentally is to value it, in essence, for its causal properties. But these same causal properties could just as well be instantiated by some other particular thing. Hence, insofar as a particular entity is valued only instrumentally, it is regarded as fungible. Similarly, a teleological view which regards our welfare-related reasons as purely state-regarding can be accused of taking a quasi-instrumental approach towards people. It views them as fungible receptacles for well-being, not as mattering qua individuals.29 Totalist utilitarianism, it is often said, does not take persons sufficiently seriously. By treating the moral significance of persons and their well-being as derivative of their contribution to valuable states of affairs, it reverses what strikes most of us as the correct order of dependence.30 Human wellbeing matters because people matter – not vice versa.

I haven't thought much about this particular way of framing the receptacle objection, and what I have in mind is basically what Frick wrote later: 

any reasons to confer well-being on a person are conditional on the fact of her existence.

This is a bit vague: what do we mean by "conditional"? But there are plausible interpretations that symmetric person-affecting views, asymmetric person-affecting views and negative axiologies satisfy, while the total view, reverse asymmetric person-affecting views and positive axiologies don't really seem to have such plausible interpretations (or have fewer and/or less plausible interpretations).

I have two ways in mind that seem compatible with the procreation asymmetry, but not the total view:

First, in line with my linked shortform comment about the asymmetry, a person's interests should only direct us from outcomes in which they (the person, or the given interests) exist or will exist to the same or other outcomes (possibly including outcomes in which they don't exist), and all reasons with regards to a given person are of this form. I think this is basically an actualist argument (which Frick discusses and objects to in his paper). Having reasons regarding an individual A in an outcome in which they don't exist that direct us towards an outcome in which they do exist would not seem conditional on A's existence. It's more "conditional" if the reasons regarding a given outcome come from that outcome than from other outcomes.

Second, there's Frick's approach. Here's a simplified evaluative version: 

All of our reasons with regards to persons should be of the following form:

It is in one way better that the following is satisfied: if person A exists, then P(A),

where P is a predicate that depends terminally only on A's interests.

Setting P(A)="A has a life worth living" would give us reason to prevent lives not worth living. Plus, there's no P(A) we could use that would imply that a given world with A is in one way better (due to the statement with P(A)) than a given world without A. So, this is compatible with the procreation asymmetry, but not the total view.

It could be "wide" and solve the Nonidentity problem, since we can find P such that P would be satisfied for B but not A, if B would be better off than A, so we would have more reasons for A not to exist than for B not to exist.

It's also compatible with antifrustrationism and negative utilitarianism in a few ways:

  1. If we apply it to preferences instead of whole persons, with predicates like P(A)="A is satisfied"
  2. If we use predicates like "P(A)=if A has interest y, then y is satisfied at least to degree d"
  3. If we use predicates like "P(A)=A has welfare at least w", allowing for the possibility of positive welfare being better than less in an existing individual, but being perfectionistic about it, so that anything worse than the best is worse than nonexistence.

I think part of what follows in Frick's paper is about applying/extending this in a way that isn't basically antinatalist.

 

For a negative utilitarian, it seems that whether the assumption is made is in fact crucial, since the "muzak and potatoes" life is as good as it can be (it lacks any unpleasantness) whereas other lives could contain huge amounts of suffering.

Ya, this seems right to me.

 

My point is that the phenomenology of the intuitions at the interpersonal and intrapersonal levels is essentially the same, which strongly suggests that the same factor is triggering those intuitions in both cases.

What do you mean by "the phenomenology of the intuitions" here?

One important difference between the interpersonal and intrapersonal cases is that in the intrapersonal case, people may (or may not!) prefer to live much longer overall, even sacrificing their other interests. It's not clear they're actually worse off overall or even at each moment in something that might "look" like Z, once we take the preference(s) for Z over A into account. We might be miscalculating the utilities before doing so. For something similar to happen in the interpersonal case, the people in A would have to prefer Z, and then similarly, Z wouldn't seem so objectionable.

 

Although I'm not sure I'm understanding you correctly, you then seem to be suggesting that your views can in fact vindicate the claim that you'd be sacrificing your future selves or treating them as value receptacles. Is this what you are claiming? It would help me if you describe what you yourself believe, as opposed to discussing the implications of a wide variety of views.

It's more about my interests/preferences than my future selves, and not sacrificing them or treating them as value receptacles. I think respect for autonomy/preferences requires not treating our preferences as mere value receptacles that you can just make more of to get more value and make things go better, and this can rule out both the interpersonal RC and the intrapersonal RC. This is in principle, ignoring other reasons, indirect effects, etc., so not necessarily in practice.

I have moral uncertainty, and I'm sympathetic to multiple views, but what they have in common is that I deny the existence of terminal goods (whose creation is good in itself, or that can make up for bads or for other things that matter going worse than otherwise) and that I recognize the existence of terminal bads. They're all versions of negative prioritarianism/utilitarianism or very similar.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2021-12-20T01:55:51.571Z · EA(p) · GW(p)

Thanks for the detailed reply. For now, I will only address your comments at the end, since I haven't read the sources you cite and haven't thought about this much beyond what I wrote previously. (As a note of color, Johann and I did the BPhil together and used to meet every week for several hours to discuss philosophy, although he kept developing his views about population ethics after he moved to Harvard; you have rekindled my interest in reading his dissertation.)

What do you mean by "the phenomenology of the intuitions" here?

I mean that the intuitions triggered by the interpersonal and the intrapersonal cases feel very similar from the inside. For example, if I try to describe why the interpersonal case feels repugnant, I'm inclined to say stuff like "it feels like something would be missing" or "there's more to life than that"; and this is exactly what I would also say to describe why the intrapersonal case feels repugnant. How these two intuitions feel also makes me reasonably confident that fMRI scans of people presented with both cases would show very similar patterns of brain activity.

One important difference between the interpersonal and intrapersonal cases is that in the intrapersonal case, people may (or may not!) prefer to live much longer overall, even sacrificing their other interests. It's not clear they're actually worse off overall or even at each moment in something that might "look" like Z, once we take the preference(s) for Z over A into account. We might be miscalculating the utilities before doing so. For something similar to happen in the interpersonal case, the people in A would have to prefer Z, and then similarly, Z wouldn't seem so objectionable.

I think that supposed difference is ruled out by the way the intrapersonal case is constructed. In any case, what I regard as the most interesting intrapersonal version is one where it is analogous to the interpersonal version in this respect. Of course, we can discuss a scenario of the sort you describe, but then I would no longer say that my intuitions about the two cases feel very similar, or that we can learn much by comparing the two cases.
 

I have moral uncertainty, and I'm sympathetic to multiple views, but what they have in common is that I deny the existence of terminal goods (whose creation is good in itself, or that can make up for bads or for other things that matter going worse than otherwise) and that I recognize the existence of terminal bads. They're all versions of negative prioritarianism/utilitarianism or very similar.

Makes sense. Thanks for the clarification.
 

comment by Will Bradshaw (willbradshaw) · 2021-12-18T13:56:24.653Z · EA(p) · GW(p)

Thanks, I appreciated reading this. I think you and I think about morality very differently, which means this doesn't update me very much, but it's still good to get a more emotional grasp of what people feel about these questions.

answer by Jack Malde (jackmalde) · 2021-12-17T13:54:06.055Z · EA(p) · GW(p)

I'll try to help you understand why (I think) some people feel the sting of the repugnant conclusion (RC), but why I think they are ultimately wrong to do so. I should say that I personally don't find the repugnant conclusion repugnant so what I'm about to say might be completely missing the point. I am slightly stung by the "very repugnant conclusion [EA · GW]", but that might be for another time.

In short, I think some people find RC repugnant based on a misunderstanding of what a life "barely worth living" would mean in practice. I think most people imagine such a life to be quite "bad" on the whole, but I think this is a mistake.

Note that the vast majority of people on earth want to continue living. This would include the vast majority of people who live in extreme poverty or who are undergoing horrific abuse. It would also include people who constantly consider suicide to end their pain but never go through with it. In normal parlance we would say these people live "bad" lives. However, we might conclude that these people are living lives worth living if they don't want their life to end / don't choose to end their life. So my guess is people imagine "a life barely worth living" to be a pretty "bad" one. The actual wording of "a life barely worth living" is inherently negative in how it is framed anyway. So RC would amount to a load of people with pretty "bad" lives by intuitive standards, being better than a smaller number of people with absolutely amazing lives. Accepting RC would be like creating another Africa with all its poverty and hardship instead of creating another Norway with all its happiness. Or creating loads of people attending daily suicide support groups rather than a smaller number of people living the best lives we can imagine. Most people would find these repugnant things to do and I personally would feel the sting here.

The problem with the above reasoning becomes clear when we think more carefully about "a life barely worth living". Firstly, to state what should be obvious, such a life is worth living by definition. So to be put off by the existence of such lives doesn't really make logical sense, unless you deny the theoretical existence of positive lives in the first place. This doesn't negate people's feeling of repugnance, but I think it should cause them to question it.

Where does this leave us with people attending daily suicide support groups? Well my preferred way forward is to question if these people do in fact have lives worth living, or at least to question if we have any idea on the matter. As is pointed out by Dasgupta (2016), the idea that someone who wants to continue living must be living a life of positive welfare ignores the badness of death. It is certainly possible for someone to be living a life of negative welfare, but be reluctant to end it because the subjective badness of death exceeds the badness of continuing to live. Death is indeed a horrible prospect for most when you consider factors such as religious prohibition, fear of the process of dying, the thought that one would be betraying family and friends, the deep resistance to taking one's own life that has been built into us through selection pressure (which would cause even someone in deep misery to balk), and the revelation of one's misery to others when one wants it to remain undisclosed even after death.

In light of this, Dasgupta puts forward the "creation test" as a way to determine the zero level of wellbeing: what is the worst life that you would willingly create? Dasgupta says that should be the zero level. Most altruists wouldn't create more people living in extreme poverty, or people with constant thoughts of suicide, implying these people probably live negative lives. I personally would only create a life that most of us would say is very good!

I'm not saying Dasgupta's creation test is perfect - I'm undecided on how useful it is. This paper argues that we have no sufficiently clear sense of what a minimally good life is like. If this is indeed true, as the paper argues, the RC loses its probative force, because we cannot judge lives "barely worth living" as "bad" when we don't really have a clue how good they are.

So to sum up my rather lengthy response: I think that many people who find the RC repugnant assume that "lives barely worth living" are those we would call "bad" in common parlance, which can lead to an understandable feeling of repugnance. I think they are wrong - either "lives barely worth living" are much better than "bad", in which case the RC loses its repugnance, or we don't know how good "lives barely worth living" are and the RC doesn't even get off the ground at all.

comment by sawyer · 2021-12-22T19:56:03.800Z · EA(p) · GW(p)

This is exactly my intuition. When I think about "lives barely worth living" I imagine someone who is constantly on the edge of suicide. Then I think, well that seems really bad to me, but who am I to say that that person's life is not worth living? If I can't look that person in the eye and say, "your life is not worth living" (which I almost certainly can't do), then how can I say that my world of "lives barely worth living" is made up of people with better lives than them?

Your paraphrasing of Dasgupta's insights is helpful, and I think incorporating the negativity of death may alleviate some of my perceived Repugnancy of the aforementioned Conclusion.

comment by Florian Habermacher (FlorianH) · 2021-12-18T13:13:53.889Z · EA(p) · GW(p)

Interesting suggestion! It sounds plausible that "barely worth living" might intuitively be mistaken as something more akin to 'so bad, they'd almost want to kill themselves, i.e. might well have even net negative lives' (which I think would be a poignant way to say what you write).

comment by Will Bradshaw (willbradshaw) · 2021-12-17T14:01:42.162Z · EA(p) · GW(p)

While I appreciate you sharing your thoughts, I don't think replying to a post asking people to talk about why they dislike the repugnant conclusion with a lengthy argument about why those people are making a basic mistake is really going to help me achieve my goal here.

I don't want to litigate these intuitions here, I want to understand them. We can do the litigation elsewhere.

Replies from: jackmalde
comment by Jack Malde (jackmalde) · 2021-12-17T14:17:17.402Z · EA(p) · GW(p)

You say "I'd like to have a better understanding of the intuitions that lead people to seeing this as such a serious problem, and whether I'm missing something that might cause me to put more weight on these sorts of concerns", in which case I think my whole comment should be of relevance, and I am confused by your pushback - unless of course you are only interested in the opinions of people who find the RC repugnant, in which case I apologise.

Replies from: willbradshaw
comment by Will Bradshaw (willbradshaw) · 2021-12-18T14:01:36.843Z · EA(p) · GW(p)

I am also interested in the intuitions of people who find the RC intuitively problematic, even if they ultimately feel it is less bad than the alternatives.

I'm not interested (here) in arguments about why people who do take serious issue with the RC are wrong, and I think spending significant time on those here is actively counterproductive to what I'm trying to achieve.

There's an intermediate case of "asking people who report being bothered by the RC pointed questions" – this is good insofar as it comes from sincere curiosity and helps uncover more information about those intuitions, and bad insofar as it (deliberately or accidentally) makes those people feel attacked or forced to defend themselves. You've been responding to several other answers here in the latter kind of way, and I wish you'd stop.

Replies from: jackmalde
comment by Jack Malde (jackmalde) · 2021-12-18T14:23:02.870Z · EA(p) · GW(p)

OK, it's your thread and I will leave, despite only good intentions. I'm very surprised to have had this pushback. If anyone I have responded to has felt attacked by me, I apologise.

I am also interested in the intuitions of people who find the RC intuitively problematic, even if they ultimately feel it is less bad than the alternatives.

Below is the relevant text from my original comment. Feel free to ignore the rest of it.

Note that the vast majority of people on earth want to continue living. This would include the vast majority of people who live in extreme poverty or who are undergoing horrific abuse. It would also include people who constantly consider suicide to end their pain but never go through with it. In normal parlance we would say these people live "bad" lives. However, we might conclude that these people are living lives worth living if they don't want their life to end / don't choose to end their life. So my guess is that people imagine "a life barely worth living" to be a pretty "bad" one. The actual wording of "a life barely worth living" is inherently negative in how it is framed anyway. So RC would amount to a load of people with pretty "bad" lives by intuitive standards being better than a smaller number of people with absolutely amazing lives. Accepting RC would be like creating another Africa with all its poverty and hardship instead of creating another Norway with all its happiness. Or creating loads of people attending daily suicide support groups rather than a smaller number of people living the best lives we can imagine. Most people would find these repugnant things to do, and I personally would feel the sting here.
 

Replies from: willbradshaw
comment by Will Bradshaw (willbradshaw) · 2021-12-18T15:31:49.027Z · EA(p) · GW(p)

Below is the relevant text from my original comment. Feel free to ignore the rest of it.

Yep, I appreciated this part! I also agree that intuitions about the set point seem key here.

answer by Derek Shiller · 2021-12-17T17:09:45.528Z · EA(p) · GW(p)

I find your attitude somewhat surprising. I'm much less sympathetic to trolley problems or utility monsters than to the repugnant conclusion. I can see why some people aren't moved by it, but I have a hard time seeing how someone couldn't get what is moving about it. Since it is a rather basic intuition, it's not super easy to pump. But I wonder: what do you think about this alternative, which seems to draw on similar intuitions for me?

Suppose that you could right now, at this moment, choose between continuing to live your life, with all its ups and downs and complexity, or going into a state of near-total suspended animation. In the state of suspended animation, you will have no thoughts and no feelings, except you will have a sensation of sucking on a rather disappointing but not altogether bad cough drop. You won't be able to meditate on your existence, or focus on the different aspects of the flavor. You won't feel pain or boredom. Just the cough drop. If you continue your life, you'll die in 40 years. If you go into the state of suspended animation, it will last for 40,000 years (or 500,000, or 20 million, whatever number it takes). Is it totally obvious that the right thing to do is to opt for the suspended animation (at least, from a selfish perspective)?

comment by Will Bradshaw (willbradshaw) · 2021-12-19T15:27:25.543Z · EA(p) · GW(p)

Thanks for trying to come up with a thought experiment that targets your intuitions here! That's exactly what I was hoping people would do.

For me, this thought experiment feels like it raises more "value of complexity" questions than the canonical RC. Though from the comments it seems like complexity vs homogeneity intuitions are contributing to quite a few people's anti-RC feelings, so it's not bad to have a thought experiment that targets that.

In any case, I think there probably is a sufficiently large number of years at which I would take the cough drop, all else equal. Certainly I don't feel extremely strong resistance to the idea of doing so. However, I'm a slightly non-optimal person to pose this thought experiment to, in that I'm not at all sure that my life so far has been good for me on net.

comment by Jack Malde (jackmalde) · 2021-12-18T14:24:51.459Z · EA(p) · GW(p)

By the way I apologise for implying you should "remove" something from your comment which I didn't literally mean. What I should have said is I think the words led to an unhelpful characterisation of the life being lived in the thought experiment. The OP doesn't appreciate my contributions so I am going to leave this post.

comment by Jack Malde (jackmalde) · 2021-12-18T06:39:17.094Z · EA(p) · GW(p)

In the state of suspended animation, you will have no thoughts and no feelings, except you will have a sensation of sucking on a rather disappointing but not altogether bad cough drop.

Firstly remove the words "rather disappointing". Remember there is nothing bad in this world and terms like that don't help people put themselves in the situation.

You won't feel pain or boredom.

I for one find this very difficult to imagine, and perhaps counterproductive to the RC. A Buddhist might say not feeling pain or boredom is akin to living an enlightened life, which is of the highest possible quality. It's for this reason that I personally don't find this thought experiment very helpful - it's just way too difficult to imagine what such a cough drop life would be like.

EDIT: I regret implying you should "remove" something from your comment which I don't literally mean. What I should have said is I think the words led to an unhelpful characterisation

Replies from: Derek Shiller
comment by Derek Shiller · 2021-12-18T14:45:44.458Z · EA(p) · GW(p)

There is a challenge here in making the thought experiment specific, conceivable, and still compelling for the majority of people. I think a marginally positive experience like sucking on a cough drop is easy to imagine (even if it is hard to really picture doing it for 40,000 years) and intuitively just slightly better than non-existence minute by minute.

Someone might disagree. There are some who think that existence is intrinsically valuable, so simply having no negative experiences might be enough to have a life well worth living. But it is hard to paint a clear picture of a life that is definitely barely worth living and involves some mix of ups and downs, because you then have to make sure that the ups and downs balance each other out, and this is more difficult to imagine and harder to gauge.

answer by Pablo · 2021-12-17T13:10:08.930Z · EA(p) · GW(p)

The term 'repugnant' is unfortunate; I think it's best to focus on whether there's anything morally problematic or deficient about such a world, irrespective of whether it elicits emotions of moral repugnance.

Personally, when I reflect on a universe that only contains experiences of "muzak and potatoes", I feel there's something missing from it, no matter how many such experiences it contains. I'm still willing to bite the bullet and conclude that my feeling is non-veridical, but I do experience the feeling.

One can also consider the parallel situation at the intrapersonal level. Parfit asks us to compare a "Century of Ecstasy" with a "Drab Eternity". I definitely feel the appeal of the former, even if, on reflection, I'd probably opt for the latter. (Though note that Parfit's wording here is also tendentious; a better name for the second option would be a "Mildly Pleasant Eternity".)

I'm not sure I can describe this feeling more clearly or accurately, though, so this isn't really an answer to your question.

comment by Will Bradshaw (willbradshaw) · 2021-12-17T13:12:39.288Z · EA(p) · GW(p)

The term 'repugnant' is unfortunate; I think it's best to focus on whether there's anything morally problematic or deficient about such a world, irrespective of whether we'd call it repugnant.

Yeah, I agree, I'll tone down the title. Thanks!

answer by MichaelStJules · 2021-12-19T19:11:10.386Z · EA(p) · GW(p)

Adding another answer, although I think it's basically pretty similar to my first.

I can imagine myself behind a veil of ignorance, comparing the two populations, even on a small scale, e.g. 2 vs 3 people. In the smaller population with higher average welfare, compared to the larger one with lower average welfare, I imagine myself either

  1. as having higher welfare and finding that better, or
  2. never existing at all and not caring about that fact, because I wouldn't be around to ever care.

So, overall, the smaller population seems better.

 

I can make it more concrete, too: optimal family size. A small-scale RC could imply that the optimal family size is larger than the parents and older siblings would prefer (ignoring indirect concerns), and so the parents should have another child even if it means they and their existing children would be worse off and would regret it. That seems wrong to me, because if those extra children are not born, they won't be wronged/worse off, but others will be worse off than otherwise.

In the long run, everyone would become contingent people, too, but then you can apply the same kind of veil of ignorance intuition pump. People can still think a world where family sizes are smaller would have been better, even if they know they wouldn't have personally existed, since they imagine themselves either

  1. as someone else (a "counterpart") in that other world, and being better off, or
  2. not existing at all (as an "extra" person) in their own world, which doesn't bother them, since they wouldn't have ever been around in the other world to be bothered.

Naively, at least, this seems to have illiberal implications for contraceptives, abortion, etc.

 

There's also an average utilitarian veil of ignorance intuition pump: imagine yourself as a random person in each of the possible worlds, and notice that your welfare would be higher in expectation in the world with fewer people, and that seems better. (I personally distrust this intuition pump, since average utilitarianism has other implications that seem very wrong to me.)

comment by Will Bradshaw (willbradshaw) · 2021-12-20T08:32:43.820Z · EA(p) · GW(p)

Thanks. We of course run here into the standard total-vs-person-affecting dispute, namely that I would prefer to exist with positive welfare than not exist, and all this "not around to care" stuff feels like a very odd way to compare scenarios to me.

answer by antimonyanthony · 2021-12-18T17:41:08.270Z · EA(p) · GW(p)

It depends on the formulation. I don't find Parfit's version of the RC, where the people with muzak-and-potatoes lives "never suffer," repugnant. But according to total (symmetric) utilitarianism, that RC is morally equivalent to another version, which I find highly repugnant. Imagine (A) as large and blissful a utopia as you like. Now imagine (Z) a world where many more people than in this utopia each have the following life: for a million years, they endure constant, unbearable torture. After that, they eat potatoes and listen to muzak peacefully for a sufficiently large number of years.

I just don't see how the latter experiences, no matter how many of them, could be considered morally significant in a way that outweighs the torture. You can chalk this up to scope neglect if you want, but (1) my intuitions are definitely not scope-neglectful when comparing suffering to suffering, and (2) I have the same intuition about milder cases where the amount of happiness a classical utilitarian would (probably) accept as outweighing is practically imaginable. e.g. Each person is born experiencing 1 day of depression, then eats potatoes for a normal human lifespan (~30,000 days).

answer by Matt Ball · 2021-12-22T19:47:58.207Z · EA(p) · GW(p)

I have serious doubts about inter-personal trade-offs.
https://www.mattball.org/2021/12/note-and-more-on-ethics-including-case.html
which follows
https://www.mattball.org/2021/12/ethics-is-not-simple-math-problem.html
 

answer by Jackson Wagner · 2021-12-17T11:24:00.996Z · EA(p) · GW(p)

To answer on the level of imagery and associations rather than trying to make a strong philosophical argument: The Repugnant Conclusion makes me think of the dire misery of extremely poor places, like Haiti or Congo. People in extreme poverty are often malnourished; they have to put up with health problems and live in terrible conditions. On top of all those miseries, they have to get through it all with very limited education / access to information, and very limited freedom / agency in life. (But I agree with jackmalde that their lives are nevertheless worth living vs nonexistence -- I would still prefer to live if I was in their situation.)

Compared to an Earth with 10 Billion people living at developed-world standards, it just seems crazy to me that anyone would prefer a world with, say, 1 Trillion people eking out their lives in a trash-strewn Malthusian wasteland. The latter seems like a static world with no variety and no future, without the slack necessary for individuals to appreciate life or for civilization as a whole to grow, explore, learn, and change.

This image leads to various wacky political objections, which are not philosophically relevant since nobody said the Repugnant Conclusion was supposed to apply to the actual situation of Earth in 2021 (as opposed to, say, a hypothetical comparison between 10 Billion rich people vs 3^^^3 lives barely worth living). But emotionally and ideologically, the Repugnant Conclusion brings to mind appropriately aversive images like:

  • That EA should pivot away from interventions like GiveDirectly or curing diseases, and instead become all about boosting birthrates in whatever way possible. (New cause area: "Family Disempowerment Media"?)
  • That things like the invention of the birth control pill and the broader transition away from strict pro-fertility hierarchical gender norms (starting in the industrial revolution) were some of the worst events in history.
  • That almost all human values (art, love, etc) should be sacrificed in favor of supporting a higher total carrying capacity of optimized pure replicators, a la the essay "Meditations on Moloch".

So, in the practical world, the idea that humanity should aim to max out the Earth's carrying capacity without regard to quality of life seems insane, and the Repugnant Conclusion will therefore always seem like a bizarre idea totally opposed to ordinary moral reasoning, even if it's technically correct when you use sufficiently big numbers.

Separately from all the above, I also feel that there would be an extreme "samey-ness" to all of these barely-worth-living lives. It seems farfetched to me that you are still adding moral value linearly when you create the quadrillionth person to complete your low-quality-of-life population -- how could their repetitive overlapping experiences match up to the richness and diversity of qualia experienced by a smaller crew of less-deprived humans?

comment by Will Bradshaw (willbradshaw) · 2021-12-19T15:13:17.021Z · EA(p) · GW(p)

Thanks, this is one of my favourite responses here. I appreciated your sharing your mental imagery and listing out some consequences of that imagery. I think I am more inclined than you to say that many people alive today have lives not worth living, but you address confusion about that point in another comment. And while I'm more pro-hedonium than you I also wonder about "tiling" issues.

Do your intuitions about this stay consistent if you reverse the ordering? That is, as I think another comment on this post said elsewhere, if you start with a large population of just-barely-happy people, and then replace them with a much smaller population of very happy people, does that seem like a good trade to you?

Replies from: Jackson Wagner
comment by Jackson Wagner · 2021-12-21T05:13:48.862Z · EA(p) · GW(p)

Yes, my intuition stays the same if the ordering is reversed; population A seems better than population Z and that's that. (For instance, if the population of an isolated valley had grown so much, and people had subdivided their farmland, to the point that each plot of land was barely enough for subsistence and the people regularly suffered conflict and famine, in most situations I would think it good if those people voluntarily made a cultural change towards having fewer children, such that over a few generations the population would reduce to say 1/3 the original level, and everyone had enough buffer that they could live in peace with plenty to eat and live much happier lives. Of course I would have trouble "wishing people into nonexistence" depending on how much the metaphysical operation seemed to resemble snuffing out an existing life... I would always be inclined to let people live out their existing lives.)

Furthermore, I could even be tempted into accepting a trade of Population A (whose lives are already quite good, much better than barely-worth-living) for a utility-monster style even-smaller population of extremely good lives. But at this point I should clarify that although I might be a utilitarian, I am not a "hedonic" utilitarian and I find it weird that people are always talking about positive emotional valence of experience rather than a more complex basket of values. I already mentioned how I value diversity of experience. I also highly value something like intelligence or "developedness of consciousness":

  • It seems silly to me that the ultimate goal becomes Superhappy states of incredible joy and ecstasy. Perhaps this is a failure of my imagination, since I am incapable of really picturing just how good Superhappy states would be. Or perhaps I have cultural blinders that try to ward me off of wireheading (via drug addiction, etc) by indoctrinating me to believe statements like "life isn't all about happiness; being connected to reality & other people is important, and having a deep understanding of the universe is better than just feeling joyful".

  • Imagine the following choice: "Take the blue pill and you'll experience unimaginable joy for the rest of your life (not just one-note heroin-esque joy, but complex joy that cycles through the multiple different shades of positive feeling that the human mind can experience). Take the red pill, and you'll experience a massive increase in the clarity of your consciousness together with a gigantic boost in IQ to superhuman levels, allowing you to have many complex experiences that are currently out of reach for you, just like how rats are incapable of using language, understanding death, etc. But despite all those revelations, your net happiness level will be totally similar to your current life." Obviously the joy has its appeal -- both are great options! -- but I would take the red pill.

  • Although I care about the suffering of animals like chimps and factory-farmed chickens and would incorporate it into my utilitarian calculus, I also think that there is a sense in which no number of literal-rats-on-heroin could fully substitute for a human. If you offered me to trade 1 human life for creating a planet with 1 quadrillion rats on heroin, I'd probably take that deal over and over for the first few thousand button-presses. But I wouldn't just keep going until Earth ran out of people, because I'd never trade away the last complex, intelligent human life to just get one more planet of blissed-out lower life forms.

  • By contrast, I'd have far fewer qualms going the other way, and trading Earth's billions of humans for a utopian super-civilization with mere millions of super-enhanced, godlike transhuman intelligences.

Even with my basket of Valence + Diversity-of-experience + Level-of-consciousness, I still expect that utilitarianism of any kind is more like a helpful guide for doing cost-benefit calculations than a final moral theory where we can expect all its assumptions (for instance that moral value scales absolutely linearly forever when you add more lives to the pile) to robustly hold in extreme situations. I think this belief is compatible with being very very utilitarian compared to most ordinary people -- just like how I believe that GDP growth is an imperfect proxy for what we want from our civilization, but I am still very very pro economic growth moreso than ordinary people.

comment by Charlie_Guthmann (Charles_Guthmann) · 2021-12-18T02:23:06.823Z · EA(p) · GW(p)

"The latter seems like a static world with no variety and no future, without the slack necessary for individuals to appreciate life or for civilization as a whole to grow, explore, learn, and change."

If you're a total utilitarian, you don't care about these things except insofar as they serve as tools for utility. By the structure of the repugnant conclusion, there is no amount of appreciating life that will make the total utility in the smaller world greater than the total utility in the bigger world.
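To make the arithmetic behind this structure concrete, here is a minimal sketch of total utilitarianism's bookkeeping; the welfare and population figures are invented purely for illustration and come from nowhere in the thread:

```python
# Total utilitarianism scores a world as (population) x (average welfare).
# All numbers below are made up for illustration only.

def total_utility(population: float, avg_welfare: float) -> float:
    """Total welfare of a world under total utilitarianism."""
    return population * avg_welfare

world_a = total_utility(10e9, 100.0)  # 10 billion people with very good lives
world_z = total_utility(2e12, 1.0)    # 2 trillion lives "barely worth living"

# However much "appreciating life" raises average welfare in world A,
# a sufficiently large population Z always scores higher on this metric:
print(world_z > world_a)  # True
```

Any fixed boost to the smaller world's average welfare can be outrun by simply increasing Z's population, which is exactly the point being made here.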

Replies from: Jackson Wagner
comment by Jackson Wagner · 2021-12-18T11:07:04.881Z · EA(p) · GW(p)

Certainly. Some of those values I mentioned might be counted as direct forms of utility, and some might be counted as necessary means to the end of greater total utility later. And the repugnant conclusion can always win by turning up the numbers a bit and making Population Z's lives pretty decent compared to the smaller Population A.

Partially I am just trying to describe the imagery that occurs to me when I look at the "population A vs population Z" diagram.

I guess I am also using the repugnant conclusion to point out a complaint I have against varieties of utilitarianism that endorse stuff like "tiling the universe with rats on heroin". To me, once you start talking about very large populations, diversity of experiences is just as crucial as positive valence. That's because without lots of diversity I start doubting that you can add up all the positive valence without double-counting. For example, if you showed me a planet filled with one million supercomputers all running the exact same emulation of a particular human mind thinking a happy thought, I would be inclined to say, "that's more like one happy person than like a million happy people".

Replies from: Charles_Guthmann
comment by Charlie_Guthmann (Charles_Guthmann) · 2021-12-18T18:30:12.020Z · EA(p) · GW(p)

I have the same feeling. I have an aversion to utility tiling as you describe it, but I can't exactly pinpoint why, other than that I guess I am not a utilitarian. As consequentialists, perhaps we should focus more on the ends themselves, i.e. aesthetically how much we like the look of future potential universes, rather than looking at the expected utility of said universes. E.g. Star Wars is prettier than an expansive VN probe network to me, so I should prefer that. Of course this is just rejecting utilitarianism again.

comment by Jack Malde (jackmalde) · 2021-12-18T06:27:36.723Z · EA(p) · GW(p)

(But I agree with JackM that their lives are nevertheless worth living vs nonexistence -- I would still prefer to live if I was in their situation.)

You have misunderstood my comment. Perhaps I have not been clear enough. Feel free to have another read and I would be happy to answer any questions.

Replies from: Jackson Wagner
comment by Jackson Wagner · 2021-12-18T11:39:16.821Z · EA(p) · GW(p)

Yes, I guess it would've been more accurate to say "I'm one of those confused people jackmalde was referring to, who intellectually thinks that very deprived lives are still worth living but nevertheless feels uncomfortable and conflicted about the obvious logical implications of that."

Potential sources of this conflictedness:

  • Maybe my mental picture of a deprived but barely worth-it life is cartoonishly exaggerated in its badness. Poor people that I have met IRL in rural India did not have the best lives, but most of them were basically happy such that it does seem like a moral boon rather than repugnant to imagine creating trillions of others like them.

  • Maybe I am still having difficulty extricating myself from practical/political concerns. In the real world, if a new continent magically appeared full of many new barely-worth-living people, we would feel morally obligated to share with them and help improve their lives. This is a good instinct which is at the core of EA itself, but the inevitability of this empathetic response does mean that the appearance of new large barely-worth-it populations seems like a threat to the ongoing wellbeing of Population A. But of course in the thought experiment the populations are totally separate.

  • I am definitely (and understandably) uncertain about how to figure what kind of life is barely worth living. I am strongly anti-death to a greater extent than you are in your comment, but even I would not endorse things like "tortured forever" as being necessarily better than nothing, so I do want to set a threshold somewhere. (But again maybe this is just political concerns and my own personal spoiledness?? If I was God deciding whether to create the universe, and it was either going to be torture-hell or no universes whatsoever, maybe I'd create hell rather than there being nothing at all. But if I got to create a normal happy universe first, I'd definitely stick with happy universe plus nothing else rather than happy + hell.) On the other hand, the "creation test" seems suspicious to me -- wouldn't everyone just benchmark off their own quality of life? I'd be happy to create educated rich-world citizens, but immortal cyberhumans from the 23rd century would probably say that life isn't worth creating if you're not immortal and at the very least 50% cyber.

answer by Teo Ajantaival · 2021-12-18T11:31:12.118Z · EA(p) · GW(p)

For me (currently with minimalist [EA · GW] intuitions), the repugnance depends on whether the lives in the larger population are assumed to never suffer (cf. this section [EA · GW]). Judging from the different answers here, people seem to indeed have wildly different interpretations about what those lives feel like.

At one extreme, they could contain absolutely no craving for change and be simply lacking in additional bliss; at the other, they could be roller coaster lives in which extreme craving is assumed to be slightly positively counterbalanced by some of their other moments.

As a practical example, I deny that factory farms could be net positive (all else being equal [EA · GW]) regardless of how much bliss the victims could be induced to experience.

answer by lincolnq · 2021-12-17T14:48:20.903Z · EA(p) · GW(p)

A world which supports the maximum number of people has no slack. I instinctively shy away from wanting to be in a world with resource limits that tight.

comment by antimonyanthony · 2021-12-17T16:33:44.017Z · EA(p) · GW(p)

I think the point of the RC is to assume away these kinds of practical contingencies - suppose you know for certain that the muzak-and-potatoes lives would never drop into the territory of more suffering than happiness.

answer by MichaelStJules · 2021-12-20T19:41:54.104Z · EA(p) · GW(p)

(Another answer...)

In humans, fertility rates have been declining while average quality of life has been increasing. Considering only human life until now, the RC might suggest things would have been better had fertility rates and average quality of life remained constant, since we'd have far more people with lives worth living. It can undermine the story of human progress, and suggest past trajectories would have been better.

We could also ask whether lifting people out of poverty is good, in case it would lead to lower populations. In general, as incomes increase, people have more access to contraceptives and other family planning services, even if we aren't directly funding such things. (Life-saving interventions would likely not lead to lower populations than otherwise, and would likely lead to higher ones at least in some places, according to research by David Roodman for GiveWell (GiveWell blog post).)

From https://ourworldindata.org/future-population-growth

https://en.wikipedia.org/wiki/List_of_countries_by_population_growth_rate

https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependencies_by_total_fertility_rate

answer by Larks · 2021-12-18T20:43:17.966Z · EA(p) · GW(p)

A slightly tongue-in-cheek response: the thought experiment is often introduced by name, and calling it 'repugnant' is priming people to consider it bad, in a way that 'the trolley problem' does not.

answer by Frank_R · 2021-12-18T09:26:33.438Z · EA(p) · GW(p)

I suggest the following thought experiment. Imagine wild animal suffering can be solved. Then it would be possible to populate a square mile with millions of happy insects instead of a few happy human beings. If the repugnant conclusion were true, the best world would be populated with as many insects as possible and only a few human beings who make sure that there is no wild animal suffering. 

Even more radical, the best thing to do would be to fill as much of the future light cone as possible with hedonium. Both scenarios do not match the moral intuitions of most people.

If you believe the opposite, namely that a world with fewer individuals with higher cognitive functions is more valuable, you may arrive at the conclusion that a world populated by a few planet-sized AIs is best.  

As other people have said, all kinds of population ethics lead to some counter-intuitive conclusions. The most conservative solution is to aim for outcomes that are not bad according to many ethical theories. 

answer by Tessa (tessa) · 2021-12-17T15:50:15.616Z · EA(p) · GW(p)

In the maximally repugnant world, no one's life is all that good. I feel the sting of that. It's hard for me to get excited about a world in which all of the people I know personally have barely-net-positive lives full of suffering and struggle, even if that world contains more people.

The Wikipedia page you linked gives a pretty not-upsetting version of the paradox: 

From Wikipedia, the four situations, A, A+, B-, and B of the Mere Addition Paradox, illustrated as bars of different widths and heights with "water" between (in the case of A+ and B-), following Parfit's book Reasons and Persons, chapter 19.

whereas the thing that people find repugnant looks more like:
 

From the Stanford Encyclopedia page on the repugnant conclusion. 


I accept the conclusion, but it feels like I am biting a bullet when I say that World Z is worth fighting for.

comment by Jack Malde (jackmalde) · 2021-12-18T06:44:10.218Z · EA(p) · GW(p)

It's hard for me to get excited about a world in which all of the people I know personally have barely-net-positive lives full of suffering and struggle, even if that world contains more people.

I'd imagine they must have lots of brilliant and amazing experiences to make up for the suffering, in order to leave them with a net-positive life.

Replies from: tessa
comment by Tessa (tessa) · 2021-12-18T15:32:50.201Z · EA(p) · GW(p)

Is this necessary? I feel like many people judge their lives as worth living even though their day-to-day experiences contain mostly pain. I wonder if we're imagining different definitions for "barely-net-positive". Maybe you mean "adding up the magnitude of moment-to-moment negative or positive qualia over someone's entire life" (hedonistic utilitarianism) whereas I am usually imagining something more like "on reflection, the person judges their life as worth living" (kinda preference utilitarian).

Replies from: jonathan-mustin
comment by Jonathan Mustin (jonathan-mustin) · 2021-12-20T21:18:36.206Z · EA(p) · GW(p)

My sense is that people choose to weather currently-net-negative lives for at least two reasons that they might endorse on reflection:

  1. The negative parts of their life may be solvable, such that the EV of their future is plausibly positive
  2. Ending their life has a few terrible externalities, e.g. the impact it would have on their close loved ones

Eliminating those considerations, I would expect the bar for lives in World Z to be much higher than that set by the worst lives people reflectively consider worth living today.
 

answer by Florian Habermacher · 2021-12-18T13:45:23.096Z · EA(p) · GW(p)

Might a big portion of status-quo bias and/or omission bias (here both with similar effect) simply be at play, helping to explain the typical classification of the conclusion as repugnant?

I think this might be the case when I ask myself whether many people who classify the conclusion as repugnant would not also have classified the 'opposite' conclusion as just as repugnant, had they instead been offered the same experiment 'the other way round':

Start with a world containing huge numbers of lives worth-living-even-if-barely-so, and propose to destroy them all for the sake of making a very few people really rich and happy! (Obviously with the nuance that the net happiness of the rich few is slightly larger than the sum of the others'.) It is just a gut feeling, but I'd guess this would evoke similar feelings of repugnance very often (maybe even more so than in the original RC experiment?)! A sort of Repugnant Conclusion 2.

comment by MichaelStJules · 2021-12-19T20:48:53.345Z · EA(p) · GW(p)

I think the killing would probably explain the intuitive repugnance of RC2 most of the time, though.

Replies from: FlorianH, willbradshaw
comment by Florian Habermacher (FlorianH) · 2021-12-22T16:31:43.091Z · EA(p) · GW(p)

Fair point, even if my personal feeling is that it would be the same even without the killing (though the killing alone would indeed suffice too).

We can amend the RC2 attempt to avoid the killing: Start with the world containing the seeds for huge numbers of lives worth-living-even-if-barely-so, and propose to destroy that world for the sake of creating a world for a very few people who are really rich and happy! (Obviously with the nuance that the net happiness of the rich few is slightly larger than the sum of the others'.)

My gut feeling does not change about this RC2 still feeling repugnant to many, though I admit I'm less sure and might also be biased now, as in not wanting to feel different, oops.

comment by Will Bradshaw (willbradshaw) · 2021-12-20T08:27:21.777Z · EA(p) · GW(p)

I moderately agree, but I do think there is commonly an ordering effect here, arising both from the phrasing of the RC and the way people often discuss it.

answer by dominicroser · 2021-12-18T04:40:49.666Z · EA(p) · GW(p)

There was a somewhat unusual short philosophical paper this year, signed by lots of philosophers, which claimed that avoidance of the repugnant conclusion should not be seen as a necessary condition for an adequate population ethics. I guess it's driven by a concern similar to the one you have here: the repugnant conclusion is much less obviously repugnant than its name makes it seem.     

63 comments

Comments sorted by top scores.

comment by Will Bradshaw (willbradshaw) · 2021-12-19T15:18:55.995Z · EA(p) · GW(p)

One approach I was expecting someone to try here, but haven't seen, is trying to motivate the intuition at a smaller scale – e.g. comparing a small number of very happy people to a large-but-easily-imaginable number of slightly happy people.

If the intuitions underlying aversion to the Repugnant Conclusion only kick in for extremely large populations, then I'm more confidently inclined to say they are a mistake arising from an inability to imagine at that scale. But given that the original argument for the RC is based on infinite regress, it seems like the issues that make people averse to it should start to kick in much sooner. Yet most commenters here have focused entirely on the vast-population case.

Replies from: MichaelStJules, antimonyanthony
comment by MichaelStJules · 2021-12-19T21:45:29.077Z · EA(p) · GW(p)

I thought my first answer [EA(p) · GW(p)] already did what you're asking for, and it has (right now) the most upvotes, which may reflect endorsement. Are you looking for something more concrete or that isn't tied to people who would exist anyway being worse off? I added another answer.

The ways to avoid the RC, AFAIK, should fall under at least one of the following, and so intuitions/thought experiments should match:

  1. Have some kind of threshold (a critical level, a sufficientarian threshold or a lexical threshold), and marginally good lives fall below it while the very good lives are above. It could be a "vague" threshold.
  2. Non-additive (possibly aggregating in some other way, e.g. with decreasing marginal returns to additional people, average utilitarianism, maximin or softer versions like rank-discounted utilitarianism which strongly prioritize the worst off, or strongly prioritizing better lives, like geometrism).
  3. Person-affecting.
  4. Carry in other assumptions/values and appeal to them, e.g. more overall bad in the larger population.

See also:

https://plato.stanford.edu/entries/repugnant-conclusion/#EigWayDeaRepCon

comment by antimonyanthony · 2021-12-19T16:44:09.181Z · EA(p) · GW(p)

This is a fair point. For what it's worth, I do honestly think a world of 10 people with utopian lives (of normal length) is better than a world with 10 billion people with lives like the ones I described in my answer [EA(p) · GW(p)]. I guess it depends on the details of "utopian" - it seems plausible that, for me and many others to endorse this claim, such lives need not be so imaginably awesome that a classical utilitarian would agree the total utility of the 10-billion-person world is lower.

comment by Eric Chen · 2021-12-17T15:35:57.135Z · EA(p) · GW(p)

Do you also find the Reverse Repugnant Conclusion to be straightforwardly and unobjectionably true? (This would help tailor an intuition pump that gets at the repugnance)

Replies from: willbradshaw
comment by Will Bradshaw (willbradshaw) · 2021-12-17T16:01:18.667Z · EA(p) · GW(p)

Yes.

Replies from: Teo Ajantaival
comment by Teo Ajantaival · 2021-12-17T16:22:22.909Z · EA(p) · GW(p)

Ditto for Creating Hell to Please the Blissful?

Replies from: willbradshaw
comment by Will Bradshaw (willbradshaw) · 2021-12-17T16:41:16.075Z · EA(p) · GW(p)

I think any scenario that involves hypothetical vast populations in a very simple abstract universe isn't going to change my views here. I can't actually imagine that scenario (a flaw with many thought experiments), so I'm forced to fall back on small-scale intuitions + intellectual beliefs. The latter say such a thing would be the right thing to do, given a sufficiently large blissful population and all the caveats and restrictions that always apply in these thought experiments. 

I think trying to convince the former might be more tractable, but big abstract thought experiments like this don't do that, because they are so unimaginable and unrealistic. That's (one framing of) why I'm looking for something less abstract. This is what I was trying to get at in the OP, though I accept I wasn't super clear about what exactly I was & wasn't looking for.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2021-12-17T19:09:39.297Z · EA(p) · GW(p)

I thought the OP was clear. Sorry that most of the answers, including mine, do not actually answer your question.

Given what you say, maybe the reason you don't find the Repugnant Conclusion counterintuitive is that you have already internalized that you can't adequately represent the thought experiment in imagination, so your brain doesn't generate the relevant intuitions in the first place. Whereas I personally agree, on reflection, that my internal representation of the thought experiment is inadequate, but this doesn't prevent me from feeling the intuitive appeal of the less populous world. This might also explain why you do feel the sting of trolley problems, which generally involve small numbers of people. (However, you also say that you find utility monsters counterintuitive, which would be inconsistent with this explanation. Interestingly, in Reasons and Persons Parfit dismisses the force of Nozick's thought experiment on the grounds that it's impossible to properly imagine a utility monster. But he doesn't take this same approach for dealing with the Repugnant Conclusion.)

Replies from: willbradshaw
comment by Will Bradshaw (willbradshaw) · 2021-12-18T10:24:46.320Z · EA(p) · GW(p)

Yeah, I do think that "I can't actually realistically represent this scenario in my imagination, and if I try I'll just deceive myself, so I won't" has become a pretty deep intuition for me over the years.

I think it's more thoroughly internalised for scenarios that are unimaginably large (many people, very long stretches of time) than scenarios that are small but weird. Possibly because the intuition for size has been trained by a lot of real-world experiences – I don't think a human can really imagine even a million people, so there are many real-world cases where the correct response is to back off from visual imagination and shut up and multiply [? · GW].

Utility monsters (and the Fat Man trolley problem variant) are small but weird, so it's more difficult for me to accept that my intuitive imagination of the scenario is likely to be misleading. I've seen fictional representations of utility monsters, and in general when I try to imagine a single sentient being it's difficult not to imagine something like a human. So even though I believe that a real utility monster would in fact be a profoundly alien and hard-to-imagine being, when I think about the scenario my brain conjures up a human tyrant and it seems really bad.

Whereas for the RC my brain sees the words "unimaginably vast" and decides not to try and imagine.