Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

post by Ardenlk · 2021-01-05T02:18:27.901Z · EA · GW · 29 comments


I originally wrote this summary in my capacity as a researcher at 80,000 Hours to save time for the rest of the team. The view presented in the paper (and person-affecting views in general) would have important implications for prioritizing among problems. (Listen to the first Hilary Greaves 80,000 Hours podcast episode to hear more about why.) Thus it seemed important for us to engage with the paper as a contemporary and respected representative of such views. Having written the summary, it seemed worthwhile to post here too (especially once Aaron Gertler offered to help me format it - thanks Aaron!).

This is a very spare but pretty comprehensive summary of Christopher Meacham's 2012 paper, "Person-Affecting Views and Saturating Counterpart Relations." Hilary Greaves of the Global Priorities Institute at Oxford said a couple years ago that the view presented in this paper was "the best theory that [she's] seen in person-affecting spirit."

Although presenting my own views is not the purpose of this post, I wrote out my reaction for 80,000 Hours when I did the summary, and have shared it at the end of this post as well. In short: although I share some of the intuitive motivations of the view, and I thought the view was impressive and interesting, I don't find it particularly plausible. This decreased my confidence that person-affecting views can be made to work.

(Read the full paper.)

Introduction

Aim of paper:

(i) To create a person-affecting view (i.e., a view according to which, in a slogan, what's good is 'making people happy, not making happy people') that avoids or solves:

  1. The non-identity problem
  2. The repugnant conclusion
  3. The absurd conclusion
  4. The mere-addition paradox
  5. The various problems of other person-affecting views.

(all these phrases are explained in their respective sections, below)

(ii) To do (1) - (5) by using a new way of identifying subjects across outcomes, i.e., saying who counts as the same subject for moral decision-making purposes in different possible worlds.

Motivation:

There are some moral intuitions, such as the ‘procreation asymmetry’ (illustrated in the ‘central illustration’ below) that only a person-affecting view can capture. But, Meacham thinks, other person-affecting views are inadequate.

Proposal:

A person-affecting view called the “harm minimizing view” (HMV) + a way of identifying subjects across outcomes called “saturating counterpart relations”.

The view

HMV:

I. We ought to pick the outcome in which the sum of harm to subjects in that world is minimized.

II. Harm is done to a subject in a world if and only if she exists in that world and her welfare there is lower than her welfare in an alternate world.

III. In worlds where a subject doesn’t exist, we treat her welfare as if it is equal to 0 (but again, she cannot be harmed in that world).

Central Illustration of HMV:

You can either bring about possible world 1 or possible world 2. Which should you bring about?

table 1

HMV says you should bring about world 2. In world 1, harm is done to Bob because his wellbeing at world 1 is lower than his wellbeing at world 2 (where it is counted as 0), whereas at world 2 no harm is done to either, because neither exists. Thus if we want to minimize the sum of harms, we pick world 2.
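Since the original table didn't survive formatting, here is a minimal Python sketch of HMV's harm calculation on the central illustration. The wellbeing numbers (Amy +5, Bob -3) are hypothetical stand-ins for the missing table, not Meacham's own figures.

```python
# Hedged sketch of the Harm Minimizing View (HMV) on the central
# illustration. Wellbeing numbers are made up for illustration only.

# Each world maps subjects to wellbeing levels; absent subjects are
# simply missing from the dict (their wellbeing counts as 0 when they
# serve as a comparison point, but they cannot be harmed there).
world1 = {"Amy": 5, "Bob": -3}   # hypothetical numbers
world2 = {}                      # neither Amy nor Bob exists

def harm_in(world, alternatives):
    """Total harm done in `world`: for each subject who exists there,
    her welfare shortfall relative to each alternative world (where
    non-existence counts as wellbeing 0), summed. With just two worlds
    there is a single alternative to compare against."""
    total = 0
    for alt in alternatives:
        for subject, welfare in world.items():
            alt_welfare = alt.get(subject, 0)  # 0 if she doesn't exist
            total += max(0, alt_welfare - welfare)
    return total

# HMV: pick the world whose total harm is minimal.
worlds = [world1, world2]
harms = [harm_in(w, [alt for alt in worlds if alt is not w]) for w in worlds]
# world1 harms Bob by 3 (his -3 vs the 0 of non-existence);
# world2 contains no subjects, so it does no harm, and HMV picks it.
```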

A note I wrote in the course of reading the paper:

This intuition feels more compelling when we don't imagine Amy and Bob are the only (possible) people in the world. Instead, say they're your possible children, but otherwise the world is populated just about the way it is in reality. Many people have the intuition that whereas you have at least some moral reason not to bring about world 1 ("poor bob!"), you have no moral reason to bring about world 2 ("It's not morally good to bring people into existence -- we shouldn't commend people for having kids."). The combination of these intuitions is the ‘procreation asymmetry’, i.e., the intuition that you ought not bring people with negative wellbeing into existence, but you have no moral reason to bring people with positive wellbeing into existence.

I should also note that although Meacham says the idea that you should bring about World 2 rather than World 1 all else equal is the 'central illustration', it is not necessarily the main motivation for the view. That may be, e.g., avoiding the repugnant conclusion.

The non-identity problem for HMV

HMV as it stands has a problem: it says these worlds are equally good (neither contains any harm), but intuitively the right-hand world is better.

table 2

This is the 'non-identity problem', which afflicts many person-affecting views.

Meacham's solution is to also posit "saturating counterpart relations."

Saturating Counterpart Relations

In short: we say that child(1) and child(2) are "counterparts" of each other, i.e., count for the purposes of moral reasoning about this case as the same subject.

But how do we say who is a counterpart of whom? We have to define a counterpart relation. Meacham doesn’t actually define a unique counterpart relation, but he says that any of a set of counterpart relations that satisfies the following conditions will be fine:

With respect to a particular choice that would result in either some worlds 1 or 2, the counterpart relation for subjects in those worlds is such that:

  1. It’s a one-to-one mapping from W(orld)1 to W(orld)2.
  2. Subjects who are qualitatively identical in W1 and W2 before the choice is made are mapped to one another.
  3. As many subjects in W1 are mapped to subjects in W2 as possible. (That's the 'saturating' part.)
  4. There is no other mapping that satisfies the first 3 conditions but which results in W1 having lower harm when combined with HMV.

Note the absence of conditions of symmetry and transitivity.

Any relation that satisfies (1)-(4) is a ‘saturating counterpart relation’, and will satisfy our intuition in the non-identity problem. The mothers in the above situation are counterparts because they are qualitatively identical before the relevant choice. The children who end up being born in each world are counterparts basically because of condition (3).
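Meacham states conditions a counterpart relation must satisfy rather than an algorithm, but in small cases one can find a saturating counterpart relation by brute force. Below is a hedged Python sketch of the non-identity case above; the names and wellbeing levels are made up for illustration.

```python
from itertools import permutations

# Non-identity case: the mother exists either way; a different child is
# born depending on which choice is made. Wellbeing levels are invented.
w1 = {"mother": 10, "child1": 1}
w2 = {"mother": 10, "child2": 8}

# Condition (2): subjects qualitatively identical before the choice are
# paired with each other.
fixed = {"mother": "mother"}

def harm(world, counterpart_welfare):
    # Harm in `world`: each existing subject's welfare shortfall relative
    # to the welfare of her counterpart in the other world.
    return sum(max(0, counterpart_welfare[s] - w) for s, w in world.items())

rest1 = [s for s in w1 if s not in fixed]
rest2 = [s for s in w2 if s not in fixed.values()]

best = None
# Conditions (1) and (3): try every one-to-one pairing of the remaining
# subjects, pairing as many as possible; condition (4): keep the pairing
# that minimizes W1's harm under HMV.
for perm in permutations(rest2, len(rest1)):
    mapping = dict(fixed, **dict(zip(rest1, perm)))
    h1 = harm(w1, {s: w2[mapping[s]] for s in mapping})
    if best is None or h1 < best[0]:
        best = (h1, mapping)

# Under the resulting relation child1 and child2 are counterparts, W1
# harms child1 by 7 (wellbeing 1 vs 8), and HMV now prefers W2.
```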

Results

Avoiding the Repugnant Conclusion:

The 'repugnant conclusion' is the idea that a world in which there is a large number of very happy people is a worse world than one in which there is a much larger number of people just above subsistence level (i.e., whose lives are 'barely worth living'). 'Totalist' views imply the repugnant conclusion, because they simply 'total up' the welfare of everyone in each world; add enough people with lives barely worth living to one of the worlds and it will be able to surpass the other. This is one of the main things many people don't like about totalist views.

HMV by itself avoids the repugnant conclusion proper, but only by saying that neither the world with a large number of very happy people nor the world with tons more barely subsisting people does any harm; thus bringing about either is permissible. The anti-repugnant-conclusion intuition many people have is that one should bring about the former.

Combining HMV with a saturating counterpart relation allows us to get this result. All of the well-off people in the first world get matched to someone in the second world, and since they’re much better off in the first, the second gets counted as causing lots of harm. The rest of the people in the second world are not harmed, but neither are they in the first world since they don’t exist there. Thus HMV tells us we should bring about the first world—the one with lots of very happy people—rather than the world with a much larger number of barely subsisting people.

Avoiding the absurd conclusion:

The less well-known 'absurd conclusion' is that there would be a moral difference between two worlds with the same distributions of wellbeing across the same number of people, but in which the subjects live concurrently instead of consecutively. (This may be a problem for various views, though I'm not familiar enough with the literature to say which).

Meacham’s view avoids this conclusion, since (3) requires that each subject in one world gets mapped to one in the other (they have the same number), and (4) requires that they get mapped to subjects with the same well-being, since otherwise there will be another mapping that will minimize the harm of each option. HMV + a saturating counterpart relation thus says these worlds are equally valuable, giving the intuitively right result.

The independence of irrelevant alternatives?

Meacham’s view violates the 'independence of irrelevant alternatives' requirement when it is transmuted into a requirement on the rationality of moral decisions. The independence of irrelevant alternatives is a principle many philosophers have found attractive.

chart 3

The independence of irrelevant alternatives requirement says (roughly) that W2 is preferable to W1 in choice situation 1 if and only if W2 is preferable to W1 in choice situation 2.

But in choice situation 1, Meacham’s view says that either is permissible (neither world does any harm); while in choice situation 2 W1 is preferable to W2, because in that situation W2 does harm and W1 still doesn’t.

Meacham responds:

  1. One could think this argument fails because it’s metaphysically impossible to have identical possible worlds in which agents face different choice situations, and you can't draw conclusions from a metaphysically impossible thought experiment.
  2. If we decide to define possible worlds in a “coarse-grained” way such that the above is metaphysically possible, then violating the independence of irrelevant alternatives is actually necessary for escaping the "mere-addition paradox", and is a strength of the view (explained below).

The mere addition paradox:

chart 4

Paradox:

  1. Intuitively: W1 ≥ W3, W3 > W2, W2 ≥ W1.
  2. Thus: W2 ≥ W1 ≥ W3 > W2.

That's a contradiction.

How does Meacham’s view help?

Well, at first it seems to have the same problem. In the first choice situation, Meacham’s view says that W1 and W2 are both permissible, because no harm is done in either. This is consistent with the intuition. In the second, Meacham’s view says that W3 is better than W2, consistent with the intuition. In the third, Meacham’s view says that W1 is better than W3, since W3 contains harm and W1 doesn’t.

However, when all three options are available in a single choice situation, Meacham’s view resolves the paradox (exactly by violating ‘independence of irrelevant alternatives’).

table 4

Meacham’s view says that in this choice situation, W2 does 40 units of harm and W3 does 10, while W1 does none. Thus we should bring about W1.

In other words, we block the inference from

  1. W1 ≥ W3, W3 > W2, W2 ≥ W1 to
  2. W2 ≥ W1 ≥ W3 > W2

in the argument for the paradox, because we say that it matters what other choices you have available to you when you’re comparing options.

So Meacham’s view requires the violation of the independence of irrelevant alternatives, but in a plausible-seeming way – it says that in moral contexts sometimes it does matter what kind of choice situation you’re in, i.e., that outcomes can’t be ranked in terms of moral permissibility in a totally situation-independent way.

(I thought this part of the paper was pretty cool.)

Ways of rejecting Meacham’s view

Meacham outlines 5 ways someone might reject his view (the fifth is further down):

  1. Reject counterpart relations
  2. Reject focus on wellbeing
  3. Go for the repugnant conclusion
  4. Relatedly, reject the procreation asymmetry

Meacham spends some time on (4). He asks: does his view imply that it’s better to let humanity go extinct, rather than procreate, since otherwise we’ll inevitably end up creating some unhappy people?

He suggests that our intuitions here are muddled by things like the sense that people have a right to procreate, and that issues with subjective probabilities make this tricky. He also writes:

[we won’t get that result] in realistic cases. Consider: why think that your choice to procreate will result in the existence of individuals whose lives are not worth living? The thought might be this: "The effects of your choice to procreate will ripple outward, and change a great many things. And it may result in some individuals being harmed relative to their counterparts in the outcome that results from a different choice." But this is just as true of the choice not to procreate. And there's no reason to think that the decision to procreate will lead to more harm, all things considered, than the decision not to procreate.

(I return to this response in my reaction below.)

One more way Meacham points out of rejecting his view:

  5. You could think that both W2 and W3 are permissible in the following case:

table 5

Meacham’s view says we should bring about W1 because both W2 and W3 cause one unit of harm; but intuitively this is wrong. W2 and W3 also seem at least permissible.

Meacham's response: you just have this intuition because both W2 and W3 would be permissible if considered just against W1 in isolation (in two different choice situations), and his view agrees with that.

My reaction

This is an interesting and coherent paper, and the view Meacham defends seems relatively sensible to me as these sorts of things go.

I find the rejection of the independence of irrelevant alternatives somewhat compelling in this case, or at least not that bad. I buy the idea that it needn't necessarily be a requirement on moral choices (even if other forms of it make sense as requirements on things like preferences over outcomes). Bringing about W2 in choice situation 2 does seem like it could be wrong (we might say “If you were gonna do that, why didn’t you bring about W3?”), even if it doesn't seem wrong in choice situation 1. This violation seems less bad to me than the mere addition paradox, so avoiding the mere addition paradox by sacrificing the independence of irrelevant alternatives seems reasonable. (I do wonder, however, whether Meacham’s view is necessary to get that solution.)

That said, I don't buy the view overall.

First of all, some aspects of the view seem unmotivated to me; in particular, the saturating counterpart relations feel somewhat ad hoc.

More importantly in my mind, though it's debatable whether this should count as an objection, the view has extremely radical implications.

Despite Meacham's protests that his view doesn't imply that you shouldn't procreate, it obviously does imply that if you could cause the world to stop existing all at once, you probably should. This seems at least as bad to me.

For example:

Should we, as members of the 2100 Council of the United Federation of Nuclear Powers, use our combined firepower to bomb life on this planet into nonexistence?

(These are of course silly made-up numbers, and they should be read as in-expectation.)

table 5

The bombed planet world does (20)(6 billion) + (9)(1 billion) = 129 billion units of harm. We get this by matching the 7 billion currently existing people in the first world to the people they’re qualitatively identical with before the time of the decision in the second world and summing the differences in their wellbeing levels between the worlds.

The no-bomb world, on the other hand, does 1 trillion units of harm, to b(10-trillion-and-1) through b(11 trillion). We compare their welfare in the no-bomb world to 0, and the benefits to the other 10 trillion b's don't do anything to compensate. (Indeed, these lives would do nothing to compensate no matter how good they were.)
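As a sanity check on these (made-up) totals, the arithmetic can be spelled out; the assumption that each badly-off future person is at -1 welfare is mine, chosen to match the stated 1-trillion-unit total.

```python
# Sanity check of the made-up harm totals in the extinction example.
# Bombed world: 6 billion people are each 20 units worse off than their
# counterparts, and 1 billion are each 9 units worse off.
bombed_harm = 20 * 6_000_000_000 + 9 * 1_000_000_000   # 129 billion

# No-bomb world: roughly 1 trillion future people exist with negative
# welfare, assumed here to be -1 each, compared against the 0 they'd
# have by never existing. The 10 trillion well-off lives add no credit.
no_bomb_harm = 1 * 1_000_000_000_000                   # 1 trillion

# HMV therefore favors bombing: 129 billion < 1 trillion units of harm.
```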

Of course, this should not come as a surprise "gotcha" objection—views like Meacham's are designed to care about people being worse off than 0 and not people being better off than 0 when they come into existence. But this conclusion just seems so, so wrong to me.

Addendum:

In reviewing this writeup, Michelle Hutchinson [EA · GW] pointed out another, plausibly even worse problem: you don’t even need the possibility of people having negative welfare for Meacham's view to imply that it could be not just as good but better for us to go extinct.

Return to the fifth way of rejecting Meacham's view [EA · GW]. Michelle commented:

[Couldn't there be] tonnes of cases where you have the option of a more egalitarian and a less egalitarian future, such that then bringing either future into existence is impermissible (as long as you can somehow prevent future people)?

Hence, it seems like the view is likely to imply that it's basically never permissible to bring future happy people into existence, because there's likely a more or less egalitarian distribution of happy people you could have brought into existence instead. This seems like a reason why the counterpart relation really runs him into trouble compared to other [person-affecting] views. On other such views, bringing into existence happy people seems basically always fine, whereas due to the counterparts in this case it basically never is.

Note that this will only be the case if, as in the situation discussed in (5), you have >2 options in a single choice situation.

I think Meacham's best response here would be to basically deny that there ever are choice situations like this in the real world, where there are genuinely >2 options. Choices in the real world always look like either doing x or not. And then maybe if you decide not to you can consider doing y. You will never genuinely be faced with a tripartite situation where you can either bring about one of three situations in a single action. This would also allow him to get out of denying the independence of irrelevant alternatives, by saying it is a nonsensical requirement anyway. Though it also seems like it'd undermine his response to the mere addition paradox.

29 comments

Comments sorted by top scores.

comment by Halstead · 2021-01-05T17:15:10.677Z · EA(p) · GW(p)

Second comment, on your critique of Meacham...

As a (swivel-eyed) totalist, I'm loath to stick up for a person-affecting view, but I don't find your 'extremely radical implications' criticism of the view compelling and I think it is an example of an unpromising way of approaching moral reasoning in general. The approach I am thinking of here is one that  selects theories by meeting intuitive constraints rather than by looking at the deeper rationales for the theories. 

I think a good response for Meacham would be that if you find the rationale for his theory compelling, then it is simply correct that it would be better to stop everyone existing. Similarly, totalism holds that it would be good to make everyone extinct if there is net suffering over pleasure (including among wild animals). Many might also find this counter-intuitive. But if you actually believe the deeper theoretical arguments for totalism, then this is just the correct answer. 

I agree that Meacham's view on extinction is wrong, but that is because of the deeper theoretical reasons - I think adding happy people to the world makes that world better, and I don't see an argument against that in the paper. 

The Impossibility Theorems show formally that we cannot have a theory that satisfies people's intuitions about cases. So, we should not use isolated case intuitions to select theories. We should instead focus on deeper rationales for theories. 

Replies from: MichaelPlant, Ardenlk
comment by MichaelPlant · 2021-01-05T22:31:13.679Z · EA(p) · GW(p)

Strong upvote. I thought this was a great reply: not least because you finally came clean about your eyes, but because I think the debate in population ethics is currently too focused on outputs and unduly disinterested in the rationales for those outputs.

comment by Ardenlk · 2021-01-05T23:56:28.304Z · EA(p) · GW(p)

Yeah, I mean you're probably right, though I have a bit more hope in the 'does this thing spit out the conclusions I independently think are right' methodology than you do. Partly that's because I think some of the intuitions that are, jointly, impossible to satisfy a la impossibility theorems are more important than others -- so I'm ok trying to hang on to a few of them at the expense of others. Partly it's because I feel unsure of how else to proceed -- that's part of why I got out of the game!

I also think there's something attractive in the idea that what moral theories are are webs of implications, and the things to hold on to are the things you're most sure are right for whatever reason, and that might be the implications rather than the underlying rationales. I think whether that's right might depend on your metaethics -- if you think the moral truth is determined by your moral committments, then being very committed to a set of outputs could make it the case that the theories that imply them are true. I don't really think that's right as a matter of metaethics, though I'm not sure.

Replies from: jackmalde
comment by jackmalde · 2021-01-06T19:10:21.451Z · EA(p) · GW(p)

I think it's important to ask why you think it's horrible to bomb the planet into non-existence. Whatever reason you have, I suspect it probably just simplifies down to you disagreeing with the core rationale of person-affecting views.

For example, perhaps you're concerned that bombing the planet will prevent a future that you expect to be good. In this case you're just disagreeing with the very core of person-affecting views: that adding happy people can't be good.

Or perhaps you're concerned by the suffering caused by the bombing. Note that Meacham's person-affecting view thinks that the suffering is 'harmful' too; it just thinks that the bombing will avoid a greater quantity of harm in the future. Also note that many people, including totalists, hold intuitions that it is OK to cause some harm to prevent greater harm. So really what you're probably disagreeing with in this case is the claim that you would actually be avoiding a greater harm by bombing. This is probably because you disagree that adding some happy future people can't ever outweigh the harm of adding some unhappy future people. In other words, once again, you're simply disagreeing with the very core of person-affecting views: that adding happy people can't be good.

Or perhaps you don't like the bombing for deontological reasons i.e. you just can't countenance that such an act could be OK. In this case you don't want a moral view that is purely consequentialist without any deontological constraints. So you're disagreeing with another core of person-affecting views: pure consequentialism.

I could probably go on, but my point is this: I do believe you find the implication horrible, but my guess is that this is because you fundamentally don't accept the underlying rationale.

comment by Chris Meacham · 2021-01-12T17:41:58.143Z · EA(p) · GW(p)

Thanks for the write up and interesting commentary Arden.

I had one question about the worry in the Addendum that Michelle Hutchinson raised, and the thought that “This seems like a reason why the counterpart relation really runs him into trouble compared to other [person-affecting] views. On other such views, bringing into existence happy people seems basically always fine, whereas due to the counterparts in this case it basically never is.”

I take this to be the kind of extinction case Michelle has in mind (where for simplicity I’m bracketing currently existing people and assuming they’ll have the same level of wellbeing in every outcome). Suppose you have a choice between three options:

W1-Inegalitarian Future

a(1): +1; a(2): +2; a(3): +3

W2-Egalitarian Future

b(1): +2; b(2): +2; b(3): +2

W3-Unpopulated Future

Since both W1 and W2 will yield harm while W3 won’t, it looks like W3 will come out obligatory.

I can see why one might worry about this. But I wasn't sure how counterpart relations were playing an interesting role here. Suppose we reject counterpart theory, and adopt HMV and cross-world identity (where a(1)=b(1), a(2)=b(2), and a(3)=b(3)). Then won’t we get precisely the same verdicts (i.e., that W3 is obligatory)?

Replies from: jackmalde, Ardenlk
comment by jackmalde · 2021-01-12T18:41:09.280Z · EA(p) · GW(p)

Chris Meacham? I'm starstruck!

In all seriousness, good point, I think you're right but I would be interested to see what Arden/Michelle say in response.

Since both W1 and W2 will yield harm while W3 won’t, it looks like W3 will come out obligatory. I can see why one might worry about this.

I thought I'd take this opportunity to ask you: do you hold the person-affecting view you outlined in the paper and, if so, do you then in fact see ensuring extinction as obligatory?

Replies from: Chris Meacham
comment by Chris Meacham · 2021-01-13T18:14:38.663Z · EA(p) · GW(p)

Sadly, I don’t have a firm stance on what the right view is. Sometimes I’m attracted to the kind of view I defend in this paper, sometimes (like when corresponding with Melinda Roberts) I find myself pulled toward a more traditional person-affecting view, and sometimes I find myself inclined toward some form of totalism, or some fancy variant thereof.

Regarding extinction cases, I’m inclined to think that it’s easy to pull in a lot of potentially confounding intuitions. For example, in the blowing up the planet example Arden presents, in addition to well-being considerations, we have intuitions about violating people’s rights by killing them without their consent, intuitions about the continuing existence of various species (which would all be wiped out), intuitions about the value of various artwork (which would be destroyed if we blew up the planet), and so on. And if one thinks that many of these intuitions are mistaken (as many Utilitarians will), or that these intuitions bring in issues orthogonal to the particular issues that arise in population ethics (as many others will), then one won’t want to rest one’s evaluation of a theory on cases where all of these intuitive considerations are in play.

Here’s a variant of Arden’s case which allows us to bracket those considerations. Suppose our choice is between:

Option 1: Create a new planet in which 7 billion humans are created, and placed in an experience machine in which they live very miserable lives (-10).

Option 2: Create a new planet in which 11.007 trillion humans are created, and placed in experience machines, where 1.001 trillion are placed in experience machines in which they live miserable lives (-1), 10 trillion are placed in experience machines in which they live great lives (+50), and 0.006 trillion are placed in experience machines in which they live good lives (+10).

This allows us to largely bracket many of the above intuitions — humanity and the others species will still survive on our planet regardless of which option we choose, no priceless art is being destroyed, no one is being killed against their will, etc.

In this case, the position that option 1 is obligatory doesn’t strike me as that bad. (My folk intuition here is probably that option 2 is obligatory. But my intuitions here aren’t that strong, and I could easily be swayed if other commitments gave me reason to say something else in this case.)

Replies from: jackmalde
comment by jackmalde · 2021-01-16T11:39:52.362Z · EA(p) · GW(p)

Thanks for this, that’s an interesting idea. It certainly seems like a useful approach to bracket possibly confounding intuitions!

comment by Ardenlk · 2021-01-13T02:50:33.134Z · EA(p) · GW(p)

Well hello thanks for commenting, and for the paper!

Seems right that you'll get the same objection if you adopt cross-world identity. Is that a popular alternative for person-affecting views? I don't actually know a lot about the literature. I figured the most salient alternative was to not match the people up across worlds at all, which was why people say that e.g. it's not good for a(3) that W1 was brought about.

Replies from: Chris Meacham
comment by Chris Meacham · 2021-01-13T15:09:11.261Z · EA(p) · GW(p)

I guess the two alternatives that seem salient to me are (i) something like HMV combined with pairing individuals via cross-world identity, or (ii) something like HMV combined with pairing individuals who currently exist (at the time of the act) via cross-world identity, and not pairing individuals who don’t currently exist. (I take it (ii) is the kind of view you had in mind.)

If we adopt (ii), then we can say that all of W1-W3 are permissible in the above case (since all of the individuals in question don’t currently exist, and so don’t get paired with anyone). But this kind of person-affecting view has some other consequences that might make one squeamish. For example, suppose you have a choice between three options:

Option 1: Don’t have a child.

Option 2: Have a child, and give them a great life.

Option 3: Have a child, and give them a life barely worth living.

(Suppose, somewhat unrealistically, that our choice won’t bear on anyone else’s well-being.)

According to (ii), all three options are permissible. That entails that option 3 is permissible — it’s permissible to have a child and give them a life barely worth living, even though you could have (at no cost to yourself or anyone else) given that very same person a great life. YMMV, but I find that hard to square with person-affecting intuitions!

Replies from: Ardenlk
comment by Ardenlk · 2021-01-13T17:51:01.131Z · EA(p) · GW(p)

This is helpful.

For what it's worth I find the upshot of (ii) hard to square with my (likely internally inconsistent) moral intuitions generally, but easy to square with the person-affecting corners of them, which is I guess to say that insofar as I'm a person-affector I'm a non-identity-embracer.

comment by jackmalde · 2021-01-05T10:43:14.852Z · EA(p) · GW(p)

Thanks for sharing Arden. I strongly upvoted because I think considering alternative views in population ethics is important and thought this write-up was interesting and clear (this applies to both your explanation of the paper and then your subsequent reaction). I'm also not sure if I ever would have got around to reading the Meacham paper myself, but I'm glad I now understand it. Overall I would be happy if you were to share similar write-ups in the future!

To give some more specific thoughts:

  • I found your discussion about how rejecting IIA may not be that weird in certain situations quite interesting and not something I have really considered before. Having said that I still think (counter to you) that I prefer accepting the mere addition paradox over rejecting IIA, but I now want to think about that more and it's possible I could change my mind on this
  • I think I agree with your 'ad hoc' point about Meacham's saturating counterpart relations. It all seems a bit contrived to me
  • Having said all that I don't think I find the 'radical implications' counterargument compelling. I don't really trust my intuitions on these things that much and it's worth noting that some people don't find the idea that ending the world may be a good thing to be counterintuitive (I actually used to feel quite strongly that it would be good!). Plus maybe there is a better way to do this than by bombing everything. Instead of rejecting things because of 'radical implications' I prefer to just factor in moral uncertainty to my decision-making, which can then lead to not wanting to bomb the planet even if one really likes a person-affecting view (EDIT: I agree with Halstead's comment on this and think he has put the argument far better than I have)

So thanks for giving me some things to think about and I hope to see more of these in the future. For now I remain a (slightly uneasy) totalist.

comment by Halstead · 2021-01-05T16:53:27.689Z · EA(p) · GW(p)

Thanks a lot for taking the time to do this Arden, I found it useful. I have a couple of comments

Firstly, on the repugnant conclusion. I have long found the dominant dialectic in population ethics a bit strange. We (1) have this debate about whether merely possible future people are worthy of our ethical consideration and then (2) people start talking about a conclusion that they find repugnant because of aggregation of low quality lives. The repugnance of the repugnant conclusion in no way stems from the fact that the people involved are in the future; it is rather from the way totalism aggregates low quality lives. This repugnance is irrelevant to questions of population ethics. It's a bit like if we were talking about the totalist view of population ethics, and then people started talking about the experience machine or other criticisms of hedonism: this may be a valid criticism of totalism but it is beside the point - which is whether merely possible future people matter. 

Related to this:

(1) There are current generation perfect analogues of the repugnant conclusion. Imagine you could provide a medicine that provides a low quality life to billions of currently existing people or provide a different medicine to a much smaller number of people giving them brilliant lives. The literature on aggregation also discusses the 'headaches vs death' case which seems exactly analogous.

(2) For this reason, we shouldn't expect person-affecting views to avoid the repugnant conclusion. For one thing, some impartialist views, like critical level utilitarianism, avoid the repugnant conclusion. For another thing, the A population and the Z population are merely possible future people, so most person-affecting theories will say that they are incomparable.

Meacham's view avoids this with its saturating relation in which possible future people are assigned counterparts. But (1) there are current generation analogues to the RC as discussed above, so this doesn't actually solve the (debatable) puzzle of the RC. 

(2) Meacham's view would imply that if the people in the much larger population had on average lives only slightly worse than people in the small population (A), then the smaller population would still be better. Thus, Meacham's view solves the repugnant conclusion but only by discounting aggregation of high quality lives, in some circumstances. This is not the solution to the repugnant conclusion that people wanted.

Replies from: MichaelPlant, Lukas_Finnveden
comment by MichaelPlant · 2021-01-05T22:41:38.079Z · EA(p) · GW(p)

I think you're right to point out that we should be clear about exactly what's repugnant about the repugnant conclusion.  However, Ralf Bader's answer (not sure I have a citation, I think it's in his book manuscript) is that what's objectionable about moving from world A (taken as the current world) to world Z is that creating all those extra lives isn't good for the new people, but it is bad for the current population, whose lives are made worse off.  I share this intuition. So I think you can cast the repugnant conclusion as being about population ethics.

FWIW, I share your intuition that, in a fixed population, one should just maximise the average. 

comment by Lukas_Finnveden · 2021-01-05T20:41:15.713Z · EA(p) · GW(p)

The repugnance of the repugnant conclusion in no way stems from the fact that the people involved are in the future.

It doesn't? That's not my impression. In particular:

There are current generation perfect analogues of the repugnant conclusion. Imagine you could provide a medicine that provides a low quality life to billions of currently existing people or provide a different medicine to a much smaller number of people giving them brilliant lives.

But people don't find these cases intuitively identical, right? I imagine that in the current-generation case, most people who oppose the repugnant conclusion instead favor egalitarian solutions, granting small benefits to many (though I haven't seen any data on this, so I'd be curious if you disagree!). Whereas when debating who to bring into existence, people who oppose the repugnant conclusion aren't just indifferent about what happens to these merely-possible people; they actively think that the happy, tiny population is better. 

So the tricky thing is that people intuitively support granting small benefits to many already existing people above large benefits to a few already existing people, but don't want to extend this to creating many barely-good lives above creating a few really good ones.

Replies from: Halstead
comment by Halstead · 2021-01-05T22:03:32.642Z · EA(p) · GW(p)

Hi, the A population and the Z population are both composed of merely possible future people, so person-affecting intuitions can't ground the repugnance. Some impartialist theories (critical level utilitarianism) are explicitly designed to avoid the repugnant conclusion.

The case is analogous to the debate in aggregation about whether one should cure a billion headaches or save someone's life. 

Replies from: Lukas_Finnveden
comment by Lukas_Finnveden · 2021-01-06T01:14:44.650Z · EA(p) · GW(p)

When considering whether to cure a billion headaches or save someone's life, I'd guess that people's prioritarian intuition would kick in, and say that it's better to save the single life. However, when considering whether to cure a billion headaches or to increase one person's life from ok to awesome, I imagine that most people prefer to cure a billion headaches. I think this latter situation is more analogous to the repugnant conclusion. Since people's intuitions differ in this case and in the repugnant conclusion, I claim that "The repugnance of the repugnant conclusion in no way stems from the fact that the people involved are in the future" is incorrect. The fact that the repugnant conclusion is about merely possible people clearly matters for people's intuitions in some way.

I agree that the repugnance can't be grounded by saying that merely possible people don't matter at all. But there are other possible mechanics that treat merely possible people differently from existing people and that can ground the repugnance. For example, the paper we're discussing here!

comment by MichaelPlant · 2021-01-05T13:37:16.433Z · EA(p) · GW(p)

I found this post very thought-provoking (I want to write a paper in this area at some point) so might pop back with a couple more thoughts.

Arden, you said this decreased your confidence that person-affecting views can be made to work, but I'm not sure I understand your thinking here. 

To check, was this just because you thought the counterpart stuff was fishy, or because you thought it has radical implications? I'm assuming it's the former, because it wouldn't make sense to decrease one's confidence in a view on account of its more or less obvious implications:  the gist of person-affecting views is that they give less weight to merely possible lives than impersonal views do.  Also, please show me a view in population ethics without (according to someone) 'radical implications'!

(Nerdy aside I'm not going to attempt to put in plain English: FWIW, I also think counterpart relations are fishy.  It seems you can have de re or de dicto person-affecting views (I think this is the same as the 'narrow' vs 'wide' distinction). On the former, what matters is the particular individuals who do or will exist (whatever we do). On the latter, what matters is the individuals who do or will exist, whomsoever they happen to be.  Meacham's is of the latter camp. For a different view, which takes de dicto lives as what matters, see Bader (forthcoming).)

It seems to me that, if one is sympathetic to person-affecting views, it is because one finds these two theses plausible 1. only personal value is morally significant - things can only be good or bad if they are good or bad for someone and 2. non-comparativism, that is, that existence can not be better or worse for someone than non-existence. But if one accepts (1) and (2) it's obvious lives de re matter, but unclear why one would care about lives de dicto. What makes counterpart relations fishy is that they are unmotivated by what seem to be the key assumptions in the area. 

Replies from: Ardenlk
comment by Ardenlk · 2021-01-05T23:29:37.856Z · EA(p) · GW(p)

You're right that radical implications are par for the course in population ethics, and that this isn't that surprising. However, I guess this is even more radical than was obvious to me from the spirit of the theory, since the premature deaths of the presently existing people can be so easily outweighed. I also agree, although a bit begrudgingly in this case, that "I strongly dislike the implications!" isn't a valid argument against something.

I did also think the counterpart relations were fishy, and I like your explanation as to why! The de dicto/de re distinction isn't something I'd thought about in this context.

comment by antimonyanthony · 2021-01-06T04:45:31.074Z · EA(p) · GW(p)

There are some moral intuitions, such as the ‘procreation asymmetry’ (illustrated in the ‘central illustration’ below) that only a person-affecting view can capture.

I don't think this is exactly true. The procreation asymmetry is also consistent with any form of negative consequentialism. I wouldn't classify such views as "person-affecting," since the reason they don't consider it obligatory to create happy people is that they reject the premise that happiness is intrinsically morally valuable, rather than that they assign special importance to badness-for-someone. These views do still have some of the implications you consider problematic in this post, but they're not vulnerable to, for example, Parfit's critiques based on reductionism about personal identity.

comment by MichaelStJules · 2021-01-14T19:39:53.703Z · EA(p) · GW(p)

One might also consider condition (iii) of HMV (that is, in worlds where a subject doesn’t exist, we treat her welfare as if it is equal to 0) to be ad hoc. So we treat her as having welfare 0, but only for the purposes of comparing it to her welfare in other worlds. But we don’t actually think she has welfare 0 at that world, because she doesn’t exist. It feels a bit tailor made.

 

You might think people who exist can compare their own lives to nonexistence (or there are comparisons to be made on their behalf, since they have interests), but an individual who doesn't exist can't make any such comparisons (and there are no comparisons to make on their behalf, since they have no interests). From her point of view in the worlds where she exists, she does have welfare 0 in the worlds where she doesn't exist, but in the worlds where she doesn't exist, she has no point of view and is not a moral patient.

Or, existing people can have nonexisting counterparts, but nonexisting people do not get counterparts at all, since they're not moral patients.

comment by MichaelStJules · 2021-01-14T19:24:03.690Z · EA(p) · GW(p)

Condition (4) in the definition of a saturating counterpart relation (that is, there is no other mapping that satisfies the first 3 conditions but which results in W1 having lower harm when combined with HMV) seems to be a bit ad hoc and designed to get him out of various situations, like the absurd conclusion, without having independent appeal.

 

One way to motivate this is that it's a generalization of symmetry. Counterparts are chosen so that their welfares match as closely as possible (after any personal identity-preservation constraints, which could be dropped), where the distance between two worlds is roughly measured in additive terms (rather than, say, by minimizing the maximum harm), which matches our additive aggregation for calculating harm.

If you took one world, and replaced all the identities with a disjoint set of identities of the same numbers, while preserving the distribution of welfare, adding condition (4) to the other conditions makes these worlds morally equivalent. If you switched the identities and changed exactly one welfare, then the mapping of identities would be one of the permissible mappings under condition (4). It picks out the intuitively correct mappings in these cases. Maybe condition (4) is unjustifiably strong for this, though.

Another way to look at it is that the harm in a given world is the minimum harm under all mappings satisfying conditions 1-3. Mappings which satisfy the minimum in some sense make the worlds most similar (under the other constraints and definition of harm).

Furthermore, if you were doing infinite ethics and didn't have any other way to match identities between worlds (e.g. locations) or had people left over who weren't (yet) matched (after meeting identity constraints), you could do something like this, too.  Pairwise, you could look at mappings of individual identities between the two worlds, and choose the mappings that lead to the minimum (infimum) absolute aggregate of differences in welfares, where the differences are taken between the mapped counterparts. So, this is choosing the mappings which make the two worlds look as similar as possible in terms of welfare distributions. The infimum might not actually be attained, but we're more interested in the number than the mappings, anyway. If, within some distance of the infimum (possibly 0, so an attained minimum), all the mappings lead to the same sign for the aggregate (assuming the aggregate isn't 0), then we could say one world is better than the other.

comment by MichaelStJules · 2021-01-14T20:15:02.793Z · EA(p) · GW(p)

(EDIT: Chris Meacham came up with a similar example here [EA(p) · GW(p)]. I missed that comment before writing this one.)

On the Addendum, here's an example with three options, with four individuals with welfares 1 through 4 split across the first two worlds.

  1. Two extra people exist, with welfares 1 and 4.
  2. Two extra people exist, with welfares 2 and 3.
  3. No extra people exist.

In world 1, the welfare-4 person will be at their peak under any counterpart relation, and the welfare-1 person will not be at their peak under any counterpart relation, since their counterpart will have higher welfare (2 or 3 > 1) in world 2. In world 2, the welfare-2 and welfare-3 people can't both be at their peaks simultaneously, since one will have a counterpart with higher welfare (4 > 2, 3) in world 1. Therefore, both world 1 and world 2 cause harm, while world 3 is harmless, so only world 3 is permissible.

(EDIT: the following would also work, by the same argument:

  1. No extra people exist.)

The same conclusion follows with any identity constraints, since this just rules out some mappings.

In this way, I see the view as very perfectionist. The view is after all essentially that anything less than a maximally good life is bad (counts against that life), with some specification of how exactly we should maximize. This is similar to minimizing total DALYs, but DALYs use a common reference for peak welfare for everyone, 80-90 years at perfect health (and age discounting?).
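To make the condition-(4) minimisation concrete, here is a brute-force toy sketch of an example with worlds whose welfares are {1, 4}, {2, 3}, and {} (entirely my own reconstruction, not Meacham's formalism: worlds are bare lists of welfares, I ignore trans-world identity constraints, and I compare worlds pairwise, which happens to suffice here):

```python
from itertools import permutations

def min_harm(world, alt):
    """Harm done in `world` relative to `alt`, minimised over all maximal
    one-to-one counterpart mappings (a stand-in for condition (4))."""
    if not world:
        return 0  # nobody exists, so nobody is harmed
    # Pad with non-existent counterparts (welfare 0) only when `alt` is
    # smaller, so every mapping stays "saturating" (maximal).
    candidates = list(alt) + [0] * max(0, len(world) - len(alt))
    return min(
        sum(max(0, c - x) for x, c in zip(world, perm))
        for perm in permutations(candidates, len(world))
    )

def total_harm(world, alternatives):
    # A person's harm is judged against her best-off counterpart, so the
    # binding comparison is the largest of the pairwise minima.
    return max(min_harm(world, alt) for alt in alternatives)

w1, w2, w3 = [1, 4], [2, 3], []
worlds = [w1, w2, w3]
harms = [total_harm(w, [a for a in worlds if a is not w]) for w in worlds]
print(harms)  # [1, 1, 0]: w1 and w2 each cause harm, only w3 is permissible
```

The interesting step is the padding: a mapping may only send someone to a non-existent counterpart when the other world has too few people, which is what keeps the relation "saturating" and reproduces the verdict in the example above.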

comment by MichaelPlant · 2021-01-05T12:34:50.191Z · EA(p) · GW(p)

Thanks a lot for writing this up! I confess I'd had a crack at Meacham's paper some time ago and couldn't really work out what was going on, so this is helpful. One comment.

I don't think the view implies what you say it implies in the Your Reaction part. We have only two choices and all those people who exist in one outcome (i.e. the future people) have their welfare ignored on this view - they couldn't have been better off. So we just focus on the current people - who do exist in both "bomb" and "not-bomb". Their lives go better in "not-bomb". Hence, the view says we shouldn't blow up the world, not - as you claim - that we should. Did I miss something?

Replies from: jackmalde, jackmalde
comment by jackmalde · 2021-01-05T13:55:18.138Z · EA(p) · GW(p)

Deleted my previous comment because I got the explanation wrong. 

In "not bomb" there will be various people who go on to exist in the future. In "bomb" these people won't exist and so will be given wellbeing level 0. So all you need is for one future person in the "not bomb" world to have negative welfare and there is harm. If you bomb everyone then there will be no one that can be harmed in the future.

This is why world 2 is better than world 1 here (see the 'central illustration of HMV' section).

It's quite possible I've got this wrong again and should only talk about population ethics when I've got enough time to think about it carefully!

Replies from: MichaelPlant
comment by MichaelPlant · 2021-01-05T16:16:50.913Z · EA(p) · GW(p)

Right. So, looking at how HMV was specified up top - parts II and III - people who exist in only one of two outcomes count for zero even if they have negative well-being in the world where they exist. That was how I interpreted the view as working in my comment.

One could specify a different view on which creating net-negative lives, even if they couldn't have had a higher level of welfare, is bad, rather than neutral.  This would need a fourth condition.

(My understanding is that people who like HMVs tend to think that creating uniquely existing negative lives is bad, rather than neutral, as that captures the procreative asymmetry.)

Replies from: jackmalde
comment by jackmalde · 2021-01-05T16:40:56.313Z · EA(p) · GW(p)

II. Harm is done to a subject in a world if and only if she exists in that world and her welfare there is lower than her welfare in an alternate world.

III. In worlds where a subject doesn’t exist, we treat her welfare as if it is equal to 0 (but again, she cannot be harmed in that world).

Given this:

  • If a person exists in only one of two outcomes and they have negative wellbeing in the outcome where they exist, then they have been harmed.
  • If a person exists in only one of two outcomes and they have positive wellbeing in the outcome where they exist, then there is no harm to anyone.

So creating net negative lives is bad under Meacham's view. 

It's possible I'm getting something wrong, but this is how I'm reading it. I find thinking of 'counting for zero' confusing so I'm framing it differently.
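The two bullets above can be turned into a tiny toy calculation (my own sketch, not Meacham's formalism; I assume each person's "peak" is her best welfare across the available worlds, with non-existence counted as 0 per condition III):

```python
# Toy sketch of HMV's harm rules (conditions II and III above).
# A world is a dict mapping person -> welfare; a person absent from
# the dict doesn't exist in that world.

def welfare(world, person):
    # Condition III: treat non-existence as welfare 0 for comparisons.
    return world.get(person, 0)

def harm(world, worlds):
    total = 0
    everyone = set().union(*worlds)  # all people across all worlds
    for person in everyone:
        if person not in world:
            continue  # Condition II: the non-existent can't be harmed.
        peak = max(welfare(w, person) for w in worlds)
        total += max(0, peak - world[person])
    return total

# Creating a net-negative life is harmful; creating a net-positive
# life harms no one -- matching the two bullets.
sad, glad, empty = {"ann": -2}, {"bob": 3}, {}
print(harm(sad, [sad, empty]))    # 2: ann is below her "welfare 0" in empty
print(harm(empty, [sad, empty]))  # 0: ann doesn't exist, so no harm
print(harm(glad, [glad, empty]))  # 0: bob is at his peak
```

On this reading, the view then says to pick the available world with the least total harm, which is why the sad creation is impermissible but the glad one is fine.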

Replies from: MichaelPlant
comment by MichaelPlant · 2021-01-05T17:56:58.242Z · EA(p) · GW(p)

Ah, I see. No, you've got it right. I'd somehow misread it, and the view works the way I had thought it was supposed to: with non-existence counted as zero, non-existence can be compared to existence in terms of welfare levels.

comment by jackmalde · 2021-01-05T13:10:27.854Z · EA(p) · GW(p)

We have only two choices and all those people who exist in one outcome (i.e. the future people) have their welfare ignored on this view - they couldn't have been better off.

Good challenge.

I'm not sure if I'm right here as I don't have time to think about this in much depth, but I think it depends on your interpretation of "possible worlds". If we just consider the possible worlds to be "bomb" and "not bomb" I think you're right.

If you allow for there to be a whole range of possible "not bomb" worlds, then not bombing will result in a great deal of harm (as you would be able to compare counterparts across all these possible worlds), whereas bombing will ensure you minimise harm to zero.

It's not clear to me that just because we are making a choice between bombing and not bombing, that we can then consider only two possible worlds, but I'm not sure about this and need to think about this more.