Is Effective Altruism fundamentally flawed?

post by Jeffhe · 2018-03-13T02:18:35.250Z · score: 2 (18 votes) · EA · GW · Legacy · 129 comments

Contents

  A. Some background:
  B. The problem I see with effective altruism: 
  C. Some likely objections and my responses:
  Objection 1:
  My response:
  Objection 1.2:
  My response: 
  Objection 1.3:
  My response:
  Objection 2:
  My response:
  Objection 3:
  My response:
  D. Conclusion:
  E. One final objection:
  My response:

Update on Mar 21: I have completely reworked my response to Objection 1 to make it more convincing to some and hopefully more clear. I would also like to thank everyone who has responded thus far, in particular brianwang712, Michael_S, kbog and Telofy for sustained and helpful discussions.

Update on Apr 10: I have added a new objection (Objection 1.1) that captures an objection that kbog and Michael_S have raised to my response to Objection 1.  I'd also like to thank Alex_Barry for a sustained and helpful discussion.

Update on Apr 24: I have removed Objection 1.1 temporarily. It is undergoing revision to be more clear.  

 

Hey everyone,

This post is perhaps unlike most on this forum in that it questions the validity of effective altruism rather than assumes it.

A. Some background:

I first heard about effective altruism when Professor Singer gave a talk on it at my university a few years ago, while I was an undergrad. I was intrigued by the idea. At the time, I had already decided that I would donate the vast majority of my future income to charity because I thought that preventing and/or alleviating the intense suffering of others is a much better use of my money than spending it on personal luxuries. However, the idea of donating my money to effective charities was a new one to me. So, I considered effective altruism for some time, but soon came to see a problem with it that to this day I cannot resolve. And so I am not an effective altruist (yet).

Right now, my stance is that the problem I've identified is a very real problem. However, given that so many intelligent people endorse effective altruism, there is a good chance I have gone wrong somewhere. I just can’t see where. I'm currently working on a donation plan and completing the plan requires assessing the merits of effective altruism. Thus, I would greatly appreciate your feedback. 

Below, I state the problem I see with effective altruism, some likely objections and my responses to those objections.

Thanks in advance for reading! 

 

B. The problem I see with effective altruism:

Suppose we find ourselves in the following choice situation: With our last $10, we can either help Bob avoid an extremely painful disease by donating our $10 to a charity working in his area, or we can help Amy and Susie each avoid an equally painful disease by donating our $10 to a more effective charity working in their area, but we cannot help all three. Who should we help?

Effective altruism would say that we should help the group consisting of Amy and Susie since that is the more effective use of our $10. Insofar as effective altruism says this, it effectively denies Bob (and anyone else in his place) any chance of being helped. But that seems counter to what reason and empathy would lead me to do.

Yes, Susie and Amy are two people, and two is more than one, but were they to suffer (as would happen if we chose to help Bob), it is not like any one of them would suffer more than what Bob would otherwise suffer. Indeed, were Bob to suffer, he would suffer no less than either Amy or Susie. Susie’s suffering would be felt by Susie alone. Amy’s suffering would be felt by Amy alone. And neither of their suffering would be greater than Bob’s suffering. So why simply help them over Bob rather than give all of them an equal chance of being helped by, say, tossing a coin? (footnote 1)

Footnote 1: A philosopher named John Taurek first discussed this problem and proposed this solution in his paper "Should the Numbers Count?" (1977) 

 

C. Some likely objections and my responses:

Objection 1:

One might reply that two instances of suffering are morally worse than one instance of the same kind of suffering, and that since we should prevent the morally worse case (i.e., the two instances of suffering), we should help Amy and Susie.

My response:

I don’t think two instances of suffering, spread across two people (e.g. Amy and Susie), is a morally worse case than one instance of the same kind of suffering had by one other person (e.g. Bob). I think these two cases are just as bad, morally speaking. Why’s that? Well, first of all, what makes one case morally worse than another? Answer: Morally relevant factors (i.e. things of moral significance, things that matter). Ok, and what morally relevant factors are present here? Well, experience is certainly one - in particular the severe pain that either Bob would feel or Susie and Amy would each feel, if not helped (footnote 2). Ok. So we can say that a case in which Amy and Susie would each suffer said pain is morally worse than a case in which only Bob would suffer said pain just in case there would be more pain or greater pain in the former case than in the latter case (i.e. iff Amy’s pain and Susie’s pain would together be experientially worse than Bob’s pain.)

Footnote 2: In my response to Objection 2, it will become clear that I think something else matters too: the identity of the sufferer. In other words, I don't just think suffering matters, I also think who suffers it matters. However, unlike the morally relevant factor of suffering, I don't think it's helpful for our understanding to understand this second morally relevant factor as having an effect on the moral worseness of a case, although one could understand it this way. Rather, I think it's better for our understanding to accommodate its force via the denial that we should always prevent the morally worst case (i.e. the case involving the most suffering). If you find this result deeply unintuitive, then maybe it's better for your understanding to understand this second morally relevant factor as having an effect on the moral worseness of a case, which allows you to say that what we should always do is prevent the morally worse case. In any case, ignore the morally relevant factor of identity for now as I haven't even argued for why it is morally relevant.

Here, it's helpful to keep in mind that more/greater instances of pain do not necessarily mean more/greater pain. For example, 2 very minor headaches are more instances of pain than 1 major headache is, but they need not involve more pain than a major headache (i.e., they need not be experientially worse than a major headache). Thus, while there would clearly be more instances of pain in the former case than in the latter case (i.e. 2 vs 1; Amy's and Susie's vs Bob's), that does not necessarily mean that there would be more pain.

So the key question for us then is this: Are 2 instances of a given pain, spread across two people (e.g. Amy and Susie), experientially worse (i.e. do they involve more/greater pain) than one instance of the same pain had by one person (e.g. Bob)? If they are (call this thesis “Y”), then a case in which Amy and Susie would each suffer a given pain is morally worse than a case in which only Bob would suffer the given pain. If they aren’t (call this thesis “N”), then the two cases are morally just as bad, in which case Objection 1 would fail, even if we agreed that we should prevent the morally worse case.

Here’s my argument against Y:

Suppose that 5 instances of a certain minor headache, all experienced by one person, are experientially worse than a certain major headache experienced by one person. That is, suppose that any person in the world who has an accurate idea/appreciation of what 5 instances of this certain minor headache feels like and of what this certain major headache feels like would prefer to endure the major headache over the 5 minor headaches if put to the choice. Under this supposition, someone who holds Y must also hold that 5 minor headaches, spread across 5 people, are experientially worse than a major headache had by one person. Why? Because, at bottom, someone who holds Y must also hold that 5 minor headaches spread across 5 people are experientially just as bad as 5 minor headaches all had by one person.

So let's assess whether 5 minor headaches, spread across 5 people, really are experientially worse than a major headache had by one person. Given the supposition above, consider first what makes a single person who suffers 5 minor headaches experientially worse off than a person who suffers just 1 major headache, other things being equal.

Well, imagine that we were this person who suffers 5 minor headaches. We suffer one minor headache one day, suffer another minor headache sometime after that, then another after that, etc. By the end of our 5th minor headache, we will have experienced what it’s like to go through 5 minor headaches. After all, we went through 5 minor headaches! Note that the what-it’s-like-of-going-through-5-headaches consists simply in the what-it’s-like-of-going-through-the-first-minor-headache then the what-it’s-like-of-going-through-the-second-minor-headache  then the what-it’s-like-of-going-through-the-third-minor-headache, etc. Importantly, the what-it’s-like-of-going-through-5-headaches is not whatever we experience right after having our 5th headache (e.g. exhaustion that might set in after going through many headaches or some super painful headache that is the "synthesis" of the intensity of the past 5 minor headaches). It is not a singular/continuous feeling like the feeling we have when we're experiencing a normal pain episode. It is simply this: the what-it’s-like of going through one minor headache, then another (some time later), then another, then another, then another. Nothing more. Nothing less.

Now, by the end of our 5th minor headache, we might have long forgotten about the first minor headache because, say, it happened so long ago. So, by the end of our 5th minor headache, we might not have an accurate appreciation of what it’s like to go through 5 minor headaches even though we have in fact experienced what it’s like to go through 5 minor headaches. As a result, if someone asked us whether we’ve been through more pain from our minor headaches or from a major headache that, say, we recently experienced, we would likely, but incorrectly, answer the latter.

But, if we did have an accurate appreciation of what it’s like to go through 5 minor headaches, say, because we experienced all 5 minor headaches rather recently, then there will be a clear sense to us that going through them was experientially worse than the major headache. The 5 minor headaches would each be “fresh in our mind”, and thus the what-it’s-like-of-going-through-5-minor-headaches would be “fresh in our mind”. And with that what-it’s-like fresh in mind, it seems clear to us that it caused us more pain than the major headache did.

Now, a headache being “fresh in our mind” does not mean that the headache needs to be so fresh that it is qualitatively the same as experiencing a real headache. Being fresh in our mind just means we have an accurate appreciation/idea of what it feels like, just as we have some accurate idea of what our favorite dish tastes like.

Because we have appreciations of our past pains (to varying degrees of accuracy), we sometimes compare them and have a clear sense that one set of pains is worse than another. But it is not the comparison and the clear sense we have of one set of pains being worse than another that ultimately makes one set of pains worse than another. Rather, it is the other way around: it is the what-it’s-like-of-having-5-minor-headaches that is worse than the what-it’s-like-of-having-a-major-headache. And if we have an accurate appreciation of both what-it’s-likes, then we will conclude the same. But, when we don’t, then our own conclusions could be wrong, like in the example provided earlier of a forgotten minor headache.

So, at the end of the day, what makes a person who has 5 minor headaches worse off than a person who has 1 major headache is the fact that he experienced the what-it’s-like-of-going-through-5-minor-headaches. 

But, in the case where the 5 minor headaches are spread across 5 people, there is no longer the what-it’s-like-of-going-through-5-minor-headaches because each of the 5 headaches is experienced by a different person. As a result, the only what-it’s-like that is present is the what-it’s-like-of-experiencing-one-minor-headache. Five different people each experience this what-it’s-like, but no one experiences what-it’s-like-of-going-through-5-minor-headaches. Moreover, the what-it’s-like of each of the 5 people cannot be linked to form the what-it’s-like-of-experiencing-5-minor-headaches because the 5 people are experientially independent beings.

Now, it's clearly the case that the what-it’s-like-of-going-through-1-minor-headache is not experientially worse than the what-it’s-like-of-going-through-a-major-headache. Given what I said in the previous paragraph, therefore, there is nothing present that could be experientially worse than the what-it’s-like-of-going-through-a-major-headache in the case where the 5 minor headaches are spread across 5 people. Therefore, 5 minor headaches, spread across 5 people, cannot be (and thus are not) worse, experientially speaking, than one major headache.

Indeed, five independent what-it's-likes-of-going-through-1-minor-headache is very different from a single what-it's-like-of-going-through-5-minor-headaches. And given a moment's reflection, one thing should be clear: only the latter what-it's-like can plausibly be experientially worse than a major headache. 

Thus, one should not treat 5 minor headaches spread across 5 people as being experientially just as bad as 5 minor headaches all had by 1 person. The latter is experientially worse than the former. The latter involves more/greater pain. 

We can thus make the following argument against Y:

P1) If Y is true, then 5 minor headaches spread across 5 people are experientially just as bad as 5 minor headaches all had by 1 person.

P2) But that is not the case (since 5 minor headaches all had by 1 person are experientially worse than 5 minor headaches spread across 5 people).

C) Therefore Y is false. And therefore Objection 1 fails, even if it's granted that we should prevent the morally worse case.

Objection 1.1: (temporarily removed for revision; see the Apr 24 update above)

Objection 1.2:

One might reply that experience is a morally relevant factor, but when the amount of pain in each case is the same (i.e. when the cases are experientially just as bad), the number of people in each case also becomes a morally relevant factor. Since the case in which Amy and Susie would each suffer involves more people, therefore, it is still the morally worse case. 

My response:

I will respond to this objection in my response to Objection 2.

Objection 1.3:

One might reply that the number of people involved in each case is a morally relevant factor in and of itself (i.e. completely independent of the amount of pain in each case). That is, one might say that the inherent moral relevance of the number of people involved in each case must be reconciled with the inherent moral relevance of the amount of pain in each case, and that therefore, in principle, a case in which many people would each suffer a relatively lesser pain can be morally worse than a case in which one other person would suffer a relatively greater pain, so long as there are enough people on the side of the many. For example, between helping a million people avoid depression and one other person avoid very severe depression, one might have the intuition that we should help the million, i.e. that a case in which a million people would suffer depression is morally worse.

My response:

I don’t deny that many people have this intuition, but I think this intuition is based on a failure to recognize and/or appreciate some important facts. In particular, I think that if you really kept in the forefront of your mind the fact that not one of the million would suffer worse than the one, and the fact that the million of them together would not suffer worse than the one (assuming my response to Objection 1 succeeds), then your intuition would not be as it is (footnote 3).

Nevertheless, you might still feel that the million people should still have a chance of being helped. I agree, but this is not because of the sheer number of them involved. Rather, it is because which individual suffers matters. (Please see my response to Objection 2.)

Footnote 3: For those familiar with Derk Pereboom’s position in the free will debate, he makes an analogous point. He doesn’t think we have free will, but admits that many have the intuition that we do. But he points out that this is because we are generally not aware of the deterministic psychological/neurological/physical causes of our actions. But once we become aware of them – once we have them in the forefront of our minds – our intuition would not be that we are free. See pg 95 of “Free Will, Agency, and Meaning in Life” (Pereboom, 2014)

 

Objection 2:

One might reply that we should help Amy and Susie because either of their suffering neutralizes/cancels out Bob’s suffering, leaving the other’s suffering to carry the day in favor of helping them over Bob.

My response:

I don’t think one person’s suffering can neutralize/cancel out another person’s suffering because who suffers matters. Which individual it is that suffers matters because it is the sufferer who bears the complete burden of the suffering. It is the particular person who ends up suffering that feels all the suffering. This is an obvious fact, but it is also a very significant fact when properly appreciated, and I don’t think it is properly appreciated.

Think about it. The particular person(s) who suffers has to bear everything. If we save Amy and Susie, it is Bob - that particular vantage point on the world - who has to feel all of the suffering (which, it bears remembering, is suffering that would be no less painful than the suffering Amy and Susie would each otherwise endure). The same, of course, is true of each of Amy and Susie were we to save Bob.

I fear that saying any more might make the significance of the fact I’m pointing to less clear. For those who appreciate the significance of what I’m getting at, it should be clear that neither Amy’s nor Susie’s suffering can be used to neutralize/cancel out Bob’s suffering, and vice versa. Yes, it’s the same kind of suffering, but it’s importantly different whether Amy and Susie each experience it or Bob experiences it, because, again, whoever experiences it is the one who has to bear all of it.

Notice that this response to objection 2 is importantly compatible with empathizing with every individual involved (e.g., Amy, Susie and Bob). Indeed, to empathize with only select individuals is biased. Yet, it seems to me that many people are in fact likely to forget to empathize with the group containing the fewer number. Note that as I understand it, to empathize with someone is to imagine oneself in their shoes and to care about that imagined perspective.

Also, notice that this response to Objection 2 deals with Objection 1.2 as well, since it argues against what seems to me the only plausible way in which the number of people involved might be thought to be relevant when the amount of pain in each case is the same: the thought that, in such cases, one person's pain can neutralize or cancel out another person's pain. For example, the suffering Amy would feel might be thought to neutralize or cancel out the suffering Bob would feel, leaving only the suffering that Susie would feel in play, and therefore making the case in which Amy and Susie would suffer morally worse than the case in which Bob would suffer. But if my response to Objection 2 is right, then this thought is wrong.

Just to be clear, this is not to say that I think one person’s suffering cannot balance (or, in the case of greater suffering, outweigh) another person’s equal (or lesser) suffering, such that the reasonable and empathetic thing to do is to give the person who would face the greater suffering a higher chance of being helped. In fact, I think it can. But balancing is not the same as neutralizing/canceling out. Bob’s suffering balances out Amy’s suffering, and it also independently balances out Susie’s suffering, precisely because Bob’s suffering does not get neutralized/cancelled out by either of their suffering.

My own view is that we should give the person who would face the greater suffering a higher chance of being saved in proportion to how much greater his suffering would be relative to the suffering that the other person(s) would each otherwise face. We shouldn't automatically help him just because he would face a greater suffering if not helped. After all, who suffers matters, and this includes those who would be faced with the lesser suffering if not helped (footnote 4).

Footnote 4: My own view is slightly more complicated than this, but those details aren't important given the simple sorts of choice situations discussed in this essay.

Going back to Objection 1.3, this then explains why I agree that we should still give those who would each suffer a less serious depression a chance of being helped, even though the one other person would suffer more if not saved. Importantly, the number of people who would each suffer the less serious depression is irrelevant. I would give them a chance of being saved whether they are 2 persons or a million or a billion. How high of a chance would I give them? In proportion to how their depression compares in suffering to the single person’s severe depression. So, if it involves slightly less suffering, I would give them around a 48% chance of being helped. If it involves a lot less suffering, then I would give them a much lower chance (footnote 5).

Footnote 5: Notice that with certain types of pain episodes, such as a torture episode vs a minor headache, there is such a big gap in the amount of suffering between them that any clear-headed person in the world would rather endure an infinite number of minor headaches (i.e. live with very frequent minor headaches in an immortal life) than endure the torture episode. This would explain why, in a choice situation in which we can either save a person from torture or x number of persons from a minor headache (or 1 person from x minor headaches), we would just save the person who would be tortured rather than give the other(s) even the slightest chance of being helped. And I think this accords well with our intuition.
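The proportional-chance lottery described above can be sketched as a small program. This is only an illustration of the decision rule, not anything the post commits to formally: the numeric "severity" scores below are hypothetical stand-ins for how bad each party's suffering would be, and, per the post's view, a group's score is what any ONE member would suffer, never a sum over members.

```python
import random

def help_chances(severities):
    """Return each party's chance of being helped, proportional to the
    suffering that party would otherwise face.

    Note: on the post's view, a group's severity is what any ONE member
    would suffer - it is NOT summed over members, so group size is
    irrelevant. The scores themselves are hypothetical illustrations.
    """
    total = sum(severities.values())
    return {party: s / total for party, s in severities.items()}

def run_lottery(severities, rng=random):
    """Actually pick one party to help, with those chances."""
    parties = list(severities)
    weights = [severities[p] for p in parties]
    return rng.choices(parties, weights=weights, k=1)[0]

# Equal suffering on both sides -> a fair coin flip, no matter how many
# people are in the second group.
print(help_chances({"Bob": 100, "Amy and Susie": 100}))

# The depression example: slightly less serious suffering for the many
# -> roughly a 48% chance for them, as in the post.
print(help_chances({"one person (severe)": 52, "the million (less severe)": 48}))
```

Note that under this rule a million people on one side get exactly the same collective chance as two people would, which is precisely the post's point that the numbers don't count.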

 

Objection 3:

One might reply that from “the perspective of the universe” or “moral perspective” or “objective perspective”, either of their suffering neutralizes/cancels out Bob’s suffering, leaving the other’s suffering to carry the day in favor of helping them over Bob.

My response:

As I understand it, the perspective of the universe is the impartial or unbiased perspective, one from which personal biases are excluded from consideration. As a result, such a perspective entails that we should give equal weight to equal suffering. For example, whereas I would give more weight to my own suffering than to the equal suffering of others (due to the personal bias involved in my everyday personal perspective), if I took on the perspective of the universe, I would have to at least intellectually admit that their equal suffering matters the same amount as mine. Of course, it doesn’t matter the same amount as mine from my perspective. It matters the same amount as mine from the perspective of the universe that I have taken on. We might say it matters the same amount as mine, period. However, none of this entails that, from the perspective of the universe, which individual suffers doesn’t matter – that whether it is I who suffer X or someone else who suffers X doesn’t matter. Clearly it does matter, for the reason I gave earlier. Giving equal weight to equal suffering does not entail that who suffers said suffering doesn’t matter. It is precisely because it matters that, in a choice situation in which we can either save person A from suffering X or person B from suffering X, we think we should flip a coin to give each an equal chance of being saved, rather than, say, choosing one of them to save on a whim. This is our way of acknowledging that A suffering is importantly different from B suffering – that who suffers matters.

Even if I'm technically wrong about what the perspective of the universe - as understood by utilitarians - amounts to, all that shows is that the perspective of the universe, so understood, is not the moral perspective. For who suffers matters (assuming my response to Objection 2 is correct), and so the moral perspective must be one from which this fact is acknowledged. Any perspective from which it isn't therefore cannot be the moral perspective. 

  

D. Conclusion:

I therefore think that, according to reason and empathy, Bob should be given the same chance of being helped (say, via flipping a coin) as Amy and Susie. This conclusion holds regardless of the number of people that are added to Amy and Susie’s group, as long as the kind of suffering remains the same. So, for example, if with a $X donation we can either help Bob avoid an extremely painful disease or help a million other people avoid the same painful disease, but not all of them, reason and empathy would say to flip a coin – a conclusion that is surely against effective altruism.

 

E. One final objection:

One might say that this conclusion is too counter-intuitive to be correct, and that therefore something must have gone wrong in my reasoning, even though it may not be clear what that something is.

My response:

But is it really all that counter-intuitive when we bear in mind all that I have said? Importantly, let us bear in mind three facts:

1) Were we to save the million people instead of Bob, Bob would suffer in a way that is no less painful than any one of the million others otherwise would. Indeed, he would suffer in a way that is just as painful as any one among the million. Conversely, were we to save Bob, no one among the million suffering would suffer in a way that is more painful than Bob would otherwise suffer. Indeed, the most any one of them would suffer is the same as what Bob would otherwise suffer.

2) The suffering of the million would involve no more pain than the pain Bob would feel (assuming my response to Objection 1 is correct). That is, a million instances of the given painful disease, spread across a million people, would not be experientially worse - would not involve more pain or greater pain - than one instance of the same painful disease had by Bob. (Again, keep in mind that more/greater instances of a pain does not necessarily mean more/greater pain.)

3) Were we to save the million and let Bob suffer, it is he – not you, not me, and certainly not the million of others – who has to bear that pain. It is that particular person, that unique sentient perspective on the world who has to bear it all.

In such a choice situation, reason and empathy tell me to give him an equal chance of being saved. To just save the million seems to me to completely neglect what Bob has to suffer, whereas my approach seems to neglect no one.

129 comments

Comments sorted by top scores.

comment by brianwang712 · 2018-03-13T14:27:53.567Z · score: 10 (10 votes) · EA(p) · GW(p)

One additional objection that one might have is that if Bob, Susie, and Amy all knew beforehand that you would end up in a situation where you could donate $10 to alleviate the suffering of either two of them or one of them, but they didn't know beforehand which two people would be pitted against which one person (e.g., it could just as easily be alleviating Bob + Susie's suffering vs. alleviating Amy's suffering, or Bob + Amy's suffering vs. Susie's suffering, etc.), then they would all sign an agreement directing you to send a donation such that you would alleviate two people's suffering rather than one, since this would give each of them the best chance of having their suffering alleviated. This is related to Rawls' veil of ignorance argument.

And if Bob, Susie, Amy, and a million others were to sign an agreement directing your choice to donate $X to alleviate one person's suffering or a million peoples' suffering, again all of them behind a veil of ignorance, none of them would hesitate for a second to sign an agreement that said, "Please donate such that you would alleviate a million people's suffering, and please oh please don't just flip a coin."

More broadly speaking, given that we live in a world where people have competing interests, we have to find a way to effectively cooperate such that we don't constantly end up in the defect-defect corner of the Prisoner's Dilemma. In the real world, such cooperation is hard; but in an ideal world, such cooperation would essentially look like people coming together to sign agreements behind a veil of ignorance (not necessarily literally, but at least people acting as if they had done so). And the upshot of such signed agreements is generally to make the interpersonal-welfare-aggregative judgments of the type "alleviating two people's suffering is better than one", even if everyone agrees with the theoretical arguments that the suffering of two people on opposite sides don't literally cancel out, and that who's suffering matters.

Bob, Susie, Amy, and the rest of us all want to live in a world where we cooperate, and therefore we'd all want to live in a world where we make these kinds of interpersonal welfare aggregations, at the very least during the kinds of donation decisions in your thought experiments.

(For a much longer explanation of this line of reasoning, see this Scott Alexander post: http://slatestarcodex.com/2014/08/24/the-invisible-nation-reconciling-utilitarianism-and-contractualism/)

comment by Jeffhe · 2018-03-13T22:03:42.774Z · score: 2 (4 votes) · EA(p) · GW(p)

Hi Brian,

Thanks for your comment and for reading my post!

Here's my response:

Bob, Susie and Amy would sign the agreement to save the greater number if they assumed that they each had an equal chance of being in any of their positions. But, is this assumption true? For example, is it actually the case that Bob had an equal chance to be in Amy's or Susie's position? If it is the case, then saving the greater number would in effect give each of them a 2/3 chance of being saved (the best chance as you rightly noted). But if it isn't, then why should an agreement based on a false assumption have any force? Suppose Bob, in actuality, had no chance of being in Amy's or Susie's position, then is it really in accordance with reason and empathy to save Amy and Susie and give Bob zero chance?
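The chances being compared in this exchange can be made explicit with a small sketch. This is only the arithmetic behind brianwang712's argument, computed under the equal-chance-of-occupying-any-position assumption that this reply goes on to question; nothing here is from the post itself.

```python
from fractions import Fraction

def chance_of_being_helped(positions_helped, positions_total):
    """A given person's chance of being helped, ASSUMING (as the
    veil-of-ignorance argument does, and as this reply disputes) that
    they are equally likely to occupy any of the positions."""
    return Fraction(positions_helped, positions_total)

# "Save the greater number": 2 of the 3 positions get helped, so behind
# the veil each person's chance of being helped is 2/3.
save_greater_number = chance_of_being_helped(2, 3)

# Coin flip between the two sides: each side wins half the time, so each
# individual (Bob, Amy, or Susie) is helped with chance 1/2.
coin_flip = Fraction(1, 2)

# This is why all three would sign the agreement - IF the equal-chance
# assumption holds. If Bob in fact had no chance of being in Amy's or
# Susie's position, his real chance under the agreement is 0, not 2/3.
print(save_greater_number, coin_flip, save_greater_number > coin_flip)
```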

Intuitively, for Bob to have had an equal chance of being in Amy's position or Susie's position or his actual position, he must have had an equal chance of living Amy's life or Susie's life or his actual life. That's how I intuitively understand a position: as a life position. To occupy someone's position is to be in their life circumstances - to have their life. So understood, what would it take for Bob to have had an equal chance of being in Amy's position or Susie's position or his own? Presumably, it would have had to be the case that Bob was just as likely to have been born to Amy's parents or Susie's parents or his actual parents. But this seems very unlikely because the particular “subject-of-experience” or “self” that each of us is is probably biologically linked to our ACTUAL parents' cells. Thus another parent could not give birth to us, even though they might give birth to a subject-of-experience that is qualitatively very similar to us (e.g. same personality, same skin complexion, etc.).

Of course, being in someone's position need not be understood in this demanding (though intuitive) way. For example, maybe being in Amy's position just requires being in her actual location with her actual disease, but not, e.g., being of the same sex as her or having her personality. But insofar as we are biologically linked to our actual parents, and parents are spread all over the world, it is highly unlikely that Bob had an equal chance of being in his actual position (i.e. a certain location with a certain disease) and of being in Amy's position (i.e. a different location with an equally painful disease). Think also about all the biological/personality traits that make a person more or less likely to be in a given position. I, for example, certainly had zero chance of being in an NBA position, given my height. Of course, as we change in various ways, our chances of being in certain positions change too, but even so, it is extremely unlikely that any given person, at any given point in time, had an equal chance of being in any of the positions of a trade off situation that he is later involved in.

UPDATE (ADDED ON MAR 18): I have added the above two paragraphs to help first-time readers better understand how I understand "being in someone's position" and why I think it is most unlikely that Bob actually had an equal chance of being in Amy's or Susie's position. These two paragraphs have replaced a much briefer paragraph, which you can find at the end of this reply. UPDATE (ADDED ON MAR 21): Also, no need to read past this point since someone (kbog) made me realize that the question I ask in the paragraph below rests on a misunderstanding of the veil-of-ignorance approach.

Also, what would the implications of this objection be for cases where the pains involved in a choice situation are unequal? Presumably, EA favors saving a billion people each from a fairly painful disease than a single person from the excruciating pain of being burned alive. But is it clear that someone behind the veil of ignorance would accept this?

-

Original paragraph that was replaced: "Similarly, is it actually the case that each of us had an equal chance of being in any one of our positions? I think the answer is probably no because the particular “subject-of-experience” or “self” that each of us are is probably linked to our parents' cells."

comment by brianwang712 · 2018-03-14T05:22:03.475Z · score: 3 (3 votes) · EA(p) · GW(p)

I do think Bob has an equal chance to be in Amy's or Susie's position, at least from his point of view behind the veil of ignorance. Behind the veil of ignorance, Bob, Susie, and Amy don't know any of their personal characteristics. They might know some general things about the world, like that there is this painful disease X that some people get, and there is this other equally painful disease Y that the same number of people get, and that a $10 donation to a charity can in general cure two people with disease Y or one person with disease X. But they don't know anything about their own propensities to get disease X or disease Y. Given this state of knowledge, Bob, Susie, and Amy all have the same chance as each other of getting disease X vs. disease Y, and so signing the agreement is rational. Note that it doesn't have to be actually true that Bob has an equal chance as Susie and Amy to have disease X vs. disease Y; maybe a third party, not behind the veil of ignorance, can see that Bob's genetics predispose him to disease X, and so he shouldn't sign the agreement. But Bob doesn't know that; all that is required for this argument to work is that Bob, Susie, and Amy all have the same subjective probability of ending up with disease X vs. disease Y, viewing from behind the veil of ignorance.

Regarding your second point, I don't think EA's are necessarily committed to saving a billion people each from a fairly painful disease vs. a single person being burned alive. That would of course depend on how painful the disease is, vs. how painful being burned alive is. To take the extreme cases, if the painful disease were like being burned alive, except just with 1% less suffering, then I think everybody would sign the contract to save the billion people suffering from the painful disease; if the disease were rather just like getting a dust speck in your eye once in your life, then probably everyone would sign the contract to save the one person being burned alive. People's intuitions would start to differ with more middle-of-the-road painful diseases, but I think EA is a big enough tent to accommodate all those intuitions. You don't have to think interpersonal welfare aggregation is exactly the same as intrapersonal welfare aggregation to be an EA, as long as you think there is some reasonable way of adjudicating between the interests of different numbers of people suffering different amounts of pain.

comment by Jeffhe · 2018-03-14T20:24:46.605Z · score: 0 (0 votes) · EA(p) · GW(p)

It would be a mistake to conclude, from a lack of knowledge about one's position, that one has an equal chance of being in anyone's position. Of course, if a person is behind the veil of ignorance and thus lacks relevant knowledge about his/her position, it might SEEM to him/her that he/she has an equal chance of being in anyone's position, and he/she might thereby be led to make this mistake and consequently choose to save the greater number.

In any case, what I just said doesn't really matter because you go on to say,

"Note that it doesn't have to be actually true that Bob has an equal chance as Susie and Amy to have disease X vs. disease Y; maybe a third party, not behind the veil of ignorance, can see that Bob's genetics predispose him to disease X, and so he shouldn't sign the agreement. But Bob doesn't know that; all that is required for this argument to work is that Bob, Susie, and Amy all have the same subjective probability of ending up with disease X vs. disease Y, viewing from behind the veil of ignorance."

Let us then suppose that Bob, in fact, had no chance of being in either Amy's or Susie's position. Now imagine Bob asks you why you are choosing to save Amy and Susie and giving him no chance at all, and you reply, "Look, Bob, I wish I could help you too, but I can't help everyone. And the reason I'm not giving you any chance is that if you, Amy and Susie were all behind the veil of ignorance and were led to assume that each of you had an equal chance of being in anyone else's position, then all of you (including you, Bob) would have agreed to the principle of saving the greater number in the kind of case you find yourself in now."

Don't you think Bob can reasonably reply, "But Brian, whether or not I make that assumption under the veil of ignorance is irrelevant. The fact of the matter is that I had no chance of being in Amy's or Susie's position. What you should do shouldn't be based on what I would agree to in a condition where I'm imagined as making a false assumption. What you should do should be based on my actual chance of being in Amy's or Susie's position. It should be based on the facts, and the fact is that I NEVER had a chance to be in any of their positions. Look, Brian, I'm really scared. I'm going to suffer a lot if you choose to save Amy and Susie - no less than any one of them would suffer. I can imagine that they must be very scared too, for each of them would suffer just as much as me were you to save me instead. In this case, seeing that we each have the same amount to suffer, shouldn't you give each of us an equal chance of being helped, or at least give me some chance and not 0?"

How would you reply? I personally think that Bob's reply shows the clear limits of this hypothetical contractual approach to determining what we should do in real life.

UPDATE (ADDED ON MAR 21): No need to read past this point since another person (kbog) made me realize that the paragraph below rests on a misunderstanding of the veil-of-ignorance approach.

Regarding the second point, I think what any person would agree to behind the veil of ignorance (even assuming the truth of the assumption that each has an equal chance of being in anybody's position) is highly dependent on their risk-aversion to the severest potential pain. Towards the extreme ends that you described, people of varying risk-aversion would perhaps be able to form a consensus. But it gets less clear as we consider "middle-of-the-road" cases. As you said, people's intuitions here start to differ (which I would peg to varying degrees of risk-aversion to the severest potential pain). But the question then is whether this hypothetical contractual approach can serve as a "reasonable way of adjudicating between the interests of different numbers of people suffering different amounts of pain," since your intuition might not be the same as that of the person whose fate might rest in your hands. Is it really reasonable to decide his fate using your intuition and not his?

comment by brianwang712 · 2018-03-17T07:00:58.273Z · score: 2 (2 votes) · EA(p) · GW(p)

Regarding the first point, signing hypothetical contracts behind the veil of ignorance is our best heuristic for determining how best to collectively make decisions such that we build the best overall society for all of us. Healthy, safe, and prosperous societies are built from lots of agents cooperating; unhappy and dangerous societies arise from agents defecting. And making decisions as if you were behind the veil of ignorance is a sign of cooperation; on the contrary, Bob's argument that you should give him a 1/3 chance of being helped even though he wouldn't have signed on to such a decision behind the veil of ignorance, simply because of the actual position he finds himself in, is a sign of defection. This is not to slight Bob here -- of course it's very understandable for him to be afraid and to want a chance of being helped given his position. Rather, it's simply a statement that if everybody argued as Bob did (not just regarding charity donations, but in general), we'd be living in a much unhappier society.

If you're unmoved by this framing, consider this slightly different framing, illustrated by a thought experiment: Let's say that Bob successfully argues his case to the donor, who gives Bob a 1/2 chance of being helped. For the purpose of this experiment, it's best to not specify who in fact gets helped, but rather to just move forward with expected utilities. Assuming that his suffering was worth -1 utility point, consider that he netted 1/2 of an expected utility point from the donor's decision to give everyone an equal chance. (Also assume that all realized painful incidents hereon are worth -1 utility point, and realized positive incidents are worth +1 utility point.)

The next day, Bob gets into a car accident, putting both him and a separate individual (say, Carl) in the hospital. Unfortunately, the hospital is short on staff that day, so the doctors + nurses have to make a decision. They can either spend their time helping Bob and Carl with their car accident injuries, or they can spend their time helping one other individual with a separate yet equally painful affliction, but they cannot do both. They also cannot split their time between the two choices. They have read your blog post on the EA forum and decide to flip a coin. Bob once again gets a 1/2 expected utility point from this decision.

Unfortunately, Bob's hospital stay cost him all his savings. He and his brother Dan (who has also fallen on hard times) go to their mother Karen to ask for a loan to get them back on their feet. Karen, however, notes that her daughter (Bob and Dan's sister) Emily has also just asked for a loan for similar reasons. She cannot give a loan to Bob and Dan and still have enough left over for Emily, and vice versa. Bob and Dan note that if they were to get the loan, they could both split that loan and convert it into +1 utility point each, whereas Emily would require the whole loan to get +1 utility point (Emily was used to a more lavish lifestyle and requires more expensive consumption to become happier). Nevertheless, Karen has read your blog post on the EA forum and decides to flip a coin. Bob nets a 1/2 expected utility point from this decision.

What is the conclusion from this thought experiment? Well, if decisions were made according to your decision rule, giving each individual an equal chance of being helped in each situation, then Bob nets 1/2 + 1/2 + 1/2 = 3/2 expected utility points. Following the more conventional decision rule of always helping the greater number when everyone is suffering similarly (a decision rule that would've been agreed upon behind a veil of ignorance), Bob would get 0 (no help from the original donor) + 1 (definite help from the doctors + nurses) + 1 (definite help from Karen) = 2 expected utility points. Under this particular set of circumstances, Bob would've benefitted more from the veil-of-ignorance approach.
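The tally above can be sketched in a few lines of Python (the scenario names and utility values are just the hypothetical ones assumed in the thought experiment, with being helped worth +1 utility point):

```python
# Bob's chance of being helped in each scenario, under the two decision rules:
# (equal-chance rule, save-the-greater-number rule). Hypothetical values only.
scenarios = {
    "donor":    (1/2, 0),  # Bob alone vs. Amy + Susie: Bob is in the minority
    "hospital": (1/2, 1),  # Bob + Carl vs. one other patient: Bob is in the majority
    "loan":     (1/2, 1),  # Bob + Dan vs. Emily: Bob is in the majority again
}

# Being helped is worth +1 utility point, so expected utility = chance of help.
equal_chance_total = sum(eq for eq, _ in scenarios.values())    # 3/2
greater_number_total = sum(gn for _, gn in scenarios.values())  # 2

print(equal_chance_total, greater_number_total)  # 1.5 2
```

The point of the sketch is just that once Bob finds himself in the majority more often than in the minority, the greater-number rule gives him the larger lifetime expectation, even though it gave him nothing in the one case where he was in the minority.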

You may reasonably ask whether this set of seemingly fantastical scenarios has been precisely constructed to make my point rather than yours. After all, couldn't Bob have found himself in more situations like the donor case rather than the hospital or loan cases, which would shift the math towards favoring your decision rule? Yes, this is certainly possible, but unlikely. Why? For the simple reason that any given individual is more likely to find themselves in a situation that affects more people than a situation that affects few. In the donor case, Bob had a condition where he was in the minority; more often in his life, however, he will find himself in cases where he is in the majority (e.g., hospital case, loan case). And so over a whole lifetime of decisions to be made, Bob is much more likely to benefit from the veil-of-ignorance-type approach.

Based on your post, it seems you are hesitant to aggregate utility over multiple individuals; for the sake of argument here, that's fine. But the thought scenario above doesn't require that at all; just aggregating utility over Bob's own life, you can see how the veil-of-ignorance approach is expected to benefit him more. So if we rewind the tape of Bob's life all the way back to the original donor scenario, where the donor is mulling over whether they want to donate to help Bob or to help Amy + Susie, the donor should consider that in all likelihood Bob's future will be one in which the veil-of-ignorance approach will work out in his favor moreso than the everyone-gets-an-equal-chance approach. So if this donor and other donors in similar situations are to commit to one of these two decision rules, they should commit to the veil of ignorance approach; it would help Bob (and Amy, and Susie, and all other beneficiaries of donations) the most in terms of expected well-being.

Another way to put this is that, even if you don't buy that Bob should put himself behind a veil of ignorance because he knows he doesn't have an equal chance of being in Amy's and Susie's situation, and so shouldn't decide to sign a cooperative agreement with Amy and Susie, you should buy that Bob is in effect behind a veil of ignorance regarding his own future, and therefore should sign the contract with Amy and Susie because this would be cooperative with respect to his future selves. And the donor should act in accord with this hypothetical contract.

I would respond to the second point, but this post is already long enough, and I think what I just laid out is more central.

I will also be bowing out of the discussion at this point – not because of anything you said or did, but simply since it took me much more time to write up my thoughts than I would have liked. I did enjoy the discussion and found it useful to lay out my beliefs in a thorough and hopefully clear manner, as well as to read your thoughtful replies. I do hope you decide that EA is not fatally flawed and to stick around the community :)

comment by Jeffhe · 2018-03-18T21:53:22.129Z · score: 0 (2 votes) · EA(p) · GW(p)

Hey Brian,

No worries! I've enjoyed our exchange as well - your latest response is both creative and funny. In particular, when I read "They have read your blog post on the EA forum and decide to flip a coin", I literally laughed out loud (haha). It's been a pleasure : ) If you change your mind and decide to reply, definitely feel welcome to.

Btw, for the benefit of first-time readers, I've updated a portion of my very first response in order to provide more color on something that I originally wrote. In good faith, I've also kept in the response what I originally wrote. Just wanted to let you know. Now onto my response.

You write, "In the donor case, Bob had a condition where he was in the minority; more often in his life, however, he will find himself in cases where he is in the majority (e.g., hospital case, loan case). And so over a whole lifetime of decisions to be made, Bob is much more likely to benefit from the veil-of-ignorance-type approach."

This would be true if Bob had an equal chance of being in any of the positions of a given future trade off situation. That is, Bob would have a higher chance of being in the majority in any given future trade off situation if he had an equal chance of being in any of its positions. Importantly, just because there are more positions on the majority side of a trade off situation, it does not automatically follow that Bob has a higher chance of being among the majority. His probability or chance of being in each of the positions is crucial. I think you were implicitly assuming that Bob has an equal chance of being in any of the positions of a future trade off situation because he doesn't know his future. But, as I mentioned in my previous post, it would be a mistake to conclude, from a lack of knowledge about one's position, that one has an equal chance of being in anyone's position. So, just because Bob doesn't know anything about his future, it does not follow that he has an equal chance of being in any of the positions in the future trade off situations that he is involved in.

In my original first response to you, I very briefly explained why I think people in general do not have an equal chance of being in anybody's position. I have since expanded that explanation. If what I say there is right, then it is not true that "over a whole lifetime of decisions to be made, Bob [or anyone else] is much more likely to benefit from the veil-of-ignorance-type approach [than from the equal-chance approach]."

All the best!

comment by kbog · 2018-03-19T21:24:29.329Z · score: 1 (1 votes) · EA(p) · GW(p)

It would be a mistake to conclude, from a lack of knowledge about one's position, that one has an equal chance of being in any one's position

It's a stipulation of the Original Position, whether you look at Rawls' formulation or Harsanyi's. It's not up for debate.

comment by Jeffhe · 2018-03-19T22:24:06.807Z · score: 0 (0 votes) · EA(p) · GW(p)

Hey kbog,

Thanks for your comment. I never said it was up for debate. Rather, given that it is stipulated, I question whether agreements reached under such stipulations have any force or validity on reality, given that the stipulation is, in fact, false.

Please read my second response to brianwang712 where I imagine that Bob has a conversation with him. I would be curious how you would respond to Bob in that conversation.

comment by kbog · 2018-03-20T00:44:34.043Z · score: 0 (0 votes) · EA(p) · GW(p)

I never said it was up for debate. Rather, given that it is stipulated, I question whether agreements reached under such stipulations have any force or validity on reality, given that the stipulation is, in fact, false.

The reason that the conclusions made in such a scenario have a bearing on reality is that the conclusions are necessarily both fair and rational.

Please read my second response to brianwang712 where I imagine that Bob has a conversation with him. I would be curious how you would respond to Bob in that conversation.

My reply to Bob would be to essentially restate brianwang's original comment, and explain how the morally correct course of action is supported by a utilitarian principle of indifference argument, and that none of the things he says (like the fact that he is not Amy or Susie, or the fact that he is scared) are sound counterarguments.

comment by Jeffhe · 2018-03-20T01:46:42.865Z · score: 0 (0 votes) · EA(p) · GW(p)

1) The reason that the conclusions made in such a scenario have a bearing on reality is that the conclusions are necessarily both fair and rational.

The conclusions are rational under the stipulation that each person has an equal chance of being in anybody's position. But it is not actually rational given that the stipulation is false. So you can't just say that the conclusions have a bearing on reality because they are necessarily rational. They are rational under the stipulation, but not when you take into account what is actually the case.

And I don't see how the conclusion is fair to Bob when the conclusion is based on a false stipulation. Bob is a real person. He shouldn't be treated like he had an equal chance of being in Amy's or Susie's position, when he in fact didn't.

2) "My reply to Bob would be to essentially restate brianwang's original comment..."

Sorry, can you quote the part you're referring to?

3) "...and explain how the morally correct course of action is supported by a utilitarian principle of indifference argument."

Can you explain what this "utilitarian principle of indifference argument" is?

4) "and that none of the things he says (like the fact that he is not Amy or Susie, or the fact that he is scared) are sound counterarguments."

Please don't distort what I said. I had him say, "The fact of the matter is that I had no chance of being in Amy's or Susie's position," which is very different from saying that he was not Amy or Susie. If he wasn't Amy or Susie, but actually had an equal chance of being either of them, then I would take the veil of ignorance approach more seriously.

I added the part about him being scared because I wanted the dialogue to sound realistic. It is uncharitable to assume that it forms part of my argument.

comment by kbog · 2018-03-20T07:14:40.065Z · score: 1 (1 votes) · EA(p) · GW(p)

The conclusions are rational under the stipulation that each person has an equal chance of being in anybody's position. But it is not actually rational given that the stipulation is false.

The argument of both Rawls and Harsanyi is not that it just happens to be rational for everybody to agree to their moral criteria; the argument is that the morally rational choice for society is a universal application of the rule which is egoistically rational for people behind the veil of ignorance. Of course it's not egoistically rational for people to give anything up once they are outside the veil of ignorance, but then they're obviously making unfair decisions, so it's irrelevant to the thought experiment.

And I don't see how the conclusion is fair to Bob when the conclusion is based on a false stipulation

Stipulations can't be true or false - they're stipulations. It's a thought experiment for epistemic purposes.

Bob is a real person. He shouldn't be treated like he had an equal chance of being in Amy's or Susie's position, when he in fact didn't.

The reason we look at what they would agree to from behind the veil of ignorance as opposed to outside is that it ensures that they give equal consideration to everyone, which is a basic principle that appeals to us as a cornerstone of any decent moral system.

Also, to be clear, the Original Position argument doesn't say "imagine if Bob had an equal chance of being in Amy's or Susie's position, see how you would treat them, and then treat him that way." If it did, then it would simply not work, because the question of exactly how you should actually treat him would still be undetermined. Instead, the argument says "imagine if Bob had an equal chance of being in Amy's or Susie's position, see what decision rule they would agree to, and then treat them according to that decision rule."

Sorry, can you quote the part you're referring to?

The first paragraph of his first comment.

Can you explain what this "utilitarian principle of indifference argument" is?

This very idea, originally argued by Harsanyi (http://piketty.pse.ens.fr/files/Harsanyi1975.pdf).

comment by Jeffhe · 2018-03-22T01:32:28.461Z · score: 0 (0 votes) · EA(p) · GW(p)

Hey Brian,

I just wanted to note that another reason why you might not want to use the veil-of-ignorance approach to justify why we should save the greater number is that it would force you to conclude that, in a trade off situation where you can either save one person from an imminent excruciating pain (i.e. being burned alive) or another person from the same severe pain PLUS a third person from a very minor pain (e.g. a sore throat), we should save the second and third person and give 0 chance to the first person.

I think it was F. M. Kamm who first raised this objection to the veil-of-ignorance approach in her book Morality, Mortality, Vol. 1 (I haven't actually read the book). Interestingly, kbog - another person I've been talking with on this forum - accepts this result. But I wonder if others like yourself would. Imagine Bob, Amy and Susie were in a trade off situation of the kind I just described, and imagine that Bob never actually had a chance to be in Amy's or Susie's position. In such a situation, do you think you should just save Amy and Susie?

comment by brianwang712 · 2018-03-23T14:39:21.250Z · score: 0 (0 votes) · EA(p) · GW(p)

Yes, I accept that result, and I think most EAs would (side note: I think most people in society at large would, too; if this is true, then your post is not so much an objection to the concept of EA as it is to common-sense morality). It's interesting that you and I have such different intuitions about such a case – I see it as in the category of "being so obvious to me that I wouldn't even have to hesitate to choose." But obviously you have different intuitions here.

Part of what I'm confused about is what the positive case is for giving everyone an equal chance. I know what the positive case is for the approach of automatically saving two people vs. one: maximizing aggregate utility, which I see as the most rational, impartial way of doing good. But what's the case for giving everyone an equal chance? What's gained from that? Why prioritize "chances"? I mean, giving Bob a chance when most EAs would probably automatically save Amy and Susie might make Bob feel better in that particular situation, but that seems like a trivial point, and I'm guessing is not the main driver behind your reasoning.

One way of viewing "giving everyone an equal chance" is to give equal priority to different possible worlds. I'll use the original "Bob vs. a million people" example to illustrate. In this example, there's two possible worlds that the donor could create: in one possible world Bob is saved (world A), and in the other possible world a million people are saved (world B). World B is, of course, the world that an EA would create every time. As for world A, well: can we view this possible world as anything but a tragedy? If you flipped a coin and got this outcome, would you not feel that the world is worse off for it? Would you not instantly regret your decision to flip the coin? Or even forget flipping the coin, we can take donor choice out of it; wouldn't you feel that a world where a hurricane ravaged and destroyed an urban community where a million people lived is worse than a world where that same hurricane petered out unexpectedly and only destroyed the home of one unlucky person?

If so, then why give tragic world A any priority at all, when we can just create world B instead? I mean, if you were asked to choose between getting a delicious chocolate milkshake vs. a bee sting, you wouldn't say "I'll take a 50% chance of each, please!" You would just choose the better option. Giving any chance, no matter how small, to the bee sting would be too high. Similarly, giving any priority to tragic world A, even 1 in 10 million, would be too high.

comment by Jeffhe · 2018-03-23T16:35:44.843Z · score: 0 (2 votes) · EA(p) · GW(p)

Hi Brian,

I think the reason why you have such a strong intuition of just saving Amy and Susie in a choice situation like the one I described in my previous reply is that you believe Amy's burning to death plus Susie's sore throat involves more or greater pain than Bob's burning to death. Since you think minimizing aggregate pain (i.e. maximizing aggregate utility) is what we should do, your reason for just saving Amy and Susie is clear.

But importantly, I don't share your belief that Amy's burning to death and Susie's sore throat involves more or greater pain than Bob's burning to death. On this note, I have completely reworked my response to Objection 1 a few days ago to make clear why I don't share this belief, so please read that if you want to know why. On the contrary, I think Amy's burning to death and Susie's sore throat involves just as much pain as Bob's burning to death.

So part of the positive case for giving everyone an equal chance is that the suffering on either side would involve the same LEVEL/AMOUNT of pain (even though the suffering on Amy's and Susie's side would clearly involve more INSTANCES of pain: i.e. 2 vs 1.)

But even if the suffering on Amy's and Susie's side would involve slightly greater pain (as you believe), there is a positive case for giving Bob some chance of being saved, rather than 0. And that is that who suffers matters, for the reason I offered in my response to Objection 2. I think that response provides a very powerful reason for giving Bob at least some chance, and not no chance at all, even if his pain would be less great than Amy's and Susie's together. (My response to Objection 3 makes clear that giving Bob some chance is not in conflict with being impartial, so that response is relevant too if you think doing so is being partial)

At the end of the day, I think one's intuitions are based on one's implicit beliefs and what one implicitly takes into consideration. Thus, if we shared the same implicit beliefs and implicitly took the same things into consideration, then we would share the same intuitions. So one way to view my essay is that it tries to achieve its goal by doing two things:

1) Challenging a belief (e.g. that Amy's burning to death plus Susie's sore throat involves more pain than Bob's burning to death) that in part underlies the differences in intuition between me and people like yourself.

2) Reminding people of another important moral fact that should figure in their implicit thought processes (and thus be reflected in their intuitions): that who suffers matters. This moral fact is often forgotten, which skews people's intuitions. Once it is seriously taken into account, I bet people's intuitions would not be the same. Importantly, I bet the vast majority of people (including yourself) would feel that giving Bob some chance of being saved is more appropriate than none, EVEN IF you still thought that Amy's and Susie's pains together involve slightly more pain than Bob's.

comment by Michael_S · 2018-03-13T03:30:33.731Z · score: 7 (7 votes) · EA(p) · GW(p)

Choice situation 3: We can either save Al, and four others each from a minor headache or Emma from one major headache. Here, I assume you would say that we should save Emma from the major headache

I think you're making a mistaken assumption here about your readers. Conditional on agreeing 5 minor headaches in one person is worse than 1 major headache in one person, I would feel exactly the same if it were spread out over 5 people. I expect the majority of EAs would as well.

comment by Jeffhe · 2018-03-13T23:52:54.758Z · score: -1 (1 votes) · EA(p) · GW(p)

Hi Michael,

Thanks very much for your response.

UPDATE (ADDED ON MAR 16):

I have shortened the original reply as it was a bit repetitive and made improvements in its clarity. However, it is still not optimal. Thus I have written a new reply so that first-time readers can better appreciate my position. You can find the somewhat improved original reply at the end of this new reply (if interested):

To be honest, I just don't get why you would feel the same if the 5 minor headaches were spread across 5 people. Supposing that 5 minor headaches in one person is (experientially) worse than 1 major headache in one person (as you request), consider WHAT MAKES IT THE CASE that the single person who suffers 5 minor headaches is worse off than a person who suffers just 1 major headache, other things being equal.

Well, imagine that we were this person who suffers 5 minor headaches. We suffer one minor headache one day, suffer another minor headache sometime after that, then another after that, etc. By the end of our 5th minor headache, we will have experienced what it’s like to go through 5 minor headaches. After all, we went through 5 minor headaches! Note that the what-it’s-like-of-going-through-5-headaches consists simply in the what-it’s-like-of-going-through-the-first-minor-headache then the what-it’s-like-of-going-through-the-second-minor-headache then the what-it’s-like-of-going-through-the-third-minor-headache, etc. Importantly, the what-it’s-like-of-going-through-5-headaches is NOT whatever we experience right after having our 5th headache (e.g. exhaustion that might set in after going through many headaches or some super painful headache that is the "synthesis" of the intensity of the past 5 minor headaches). It is NOT a singular/continuous feeling like the feeling we have when we're experiencing a normal pain episode. It is simply this: the what-it’s-like of going through one minor headache, then another (sometime later), then another, then another, then another. Nothing more. Nothing less.

Now, by the end of our 5th minor headache, we might have long forgotten about the first minor headache because, say, it happened so long ago. So, by the end of our 5th minor headache, we might not have an accurate appreciation of what it’s like to go through 5 minor headaches even though we in fact have experienced what it’s like to go through 5 minor headaches. As a result, if someone asked us whether we’ve been through more pain due to our minor headaches or more pain through a major headache that, say, we recently experienced, we would likely incorrectly answer the latter.

But, if we did have an accurate appreciation of what it’s like to go through 5 minor headaches, say, because we experienced all 5 minor headaches rather recently, then there will be a clear sense to us that going through them was (experientially) worse than the major headache. The 5 minor headaches would each be “fresh in our mind”, and thus the what-it’s-like-of-going-through-5-minor-headaches would be “fresh in our mind”. And with that what-it’s-like fresh in mind, it seems clear to us that it caused us more pain than the major headache did.

Now, a headache being “fresh in our mind” does not mean that the headache needs to be so fresh that it is qualitatively the same as experiencing a real headache. Being fresh in our mind just means we have an accurate appreciation/idea of what it felt like, just as we have some accurate idea of what our favorite dish tastes like.

Because we have appreciations of our past pains (to varying degrees of accuracy), we sometimes compare them and have a clear sense that one set of pains is worse than another. But it is not the comparison and the clear sense we have of one set of pain being worse than another that ultimately makes one set of pains worse than another. Rather, it is the other way around. It is the what-it’s-like-of-having-5-minor-headaches that is worse – more painful – than the what-it’s-like-of-having-a-major-headache. And if we have an accurate appreciation of both what-it’s-likes, then we will conclude the same. But, when we don’t, then our own conclusions could be wrong, like in the example provided earlier of a forgotten minor headache.

So, at the end of the day, what makes a person who has 5 minor headaches worse off than a person who has 1 major headache is the fact that he experienced what-it’s-like-of-going-through-5-minor-headaches.

But, in the case where the 5 minor headaches are spread across 5 people, there is no longer the what-it’s-like-of-going-through-5-minor-headaches because each of the 5 headaches is experienced by a different person. As a result, the only what-it’s-like present is the what-it’s-like-of-experiencing-one-minor-headache. Five different people each experience this what-it’s-like, but no one experiences the what-it’s-like-of-going-through-5-minor-headaches. Moreover, the what-it’s-like of each of the 5 people cannot be linked to form the what-it’s-like-of-experiencing-5-minor-headaches because the 5 people are experientially independent beings.

Now, it's clearly the case that the what-it’s-like-of-going-through-1-minor-headache is not worse than the what-it’s-like-of-going-through-a-major-headache. Given what I said in the previous paragraph, therefore, there is nothing present that could be worse than the what-it’s-like-to-go-through-a-major-headache in the case where the 5 minor headaches are spread across 5 people. Therefore, 5 minor headaches, spread across 5 people, cannot be (and thus is not) worse (experientially speaking) than one major headache.

Therefore, "conditional on agreeing 5 minor headaches in one person is worse than 1 major headache in one person, ... [one should not] feel exactly the same if it were spread out over 5 people"!

Finally, since 5 headaches, spread across 5 people, is not EXPERIENTIALLY worse than another person's single major headache, the case in which Emma would suffer a major headache is MORALLY worse than the case in which 5 different people would each suffer a minor headache. (If you disagree with this, please see Objection 1.2 and my response to it.) Therefore what I said in choice situation 3 holds.

-

The somewhat improved though sub-optimal original reply:

To be honest, I just don't get why you would feel the same if the pains were spread out over 5 people. I mean, when the 5 minor headaches occur in a single person, then FOR that person, there is a very clear sense of how the 5 headaches are worse to endure than 1 major headache. But once the 5 minor headaches are spread across 5 different people, that clear sense is lost because each of the 5 people only experiences at most 1 minor headache. In each experiencing only 1 minor headache, NOT ONE of the 5 people experiences something worse than a major headache (e.g., what Emma would go through). So none of them would individually be worse off than Emma. Are you really ready to say that the 5 of them together are worse off than Emma? But in what sense? Certainly not in any experiential sense (since none of them individually experiences anything worse than a major headache and they are experientially independent of each other). But then I don't see what other sense there is that matters.

comment by Michael_S · 2018-03-14T01:55:51.670Z · score: 4 (4 votes) · EA(p) · GW(p)

If a small headache is worth 2 points of disutility and a large headache is worth 5, the total amount of pain is worse because 2*5>5. It's a pretty straightforward total utilitarian interpretation. I find it irrelevant whether there's one person who's worse off; the total amount of pain is larger.

I'll also note that I find the concept of personhood to be incoherent in itself, so it really shouldn't matter at all whether it's the same "person". But while I think an incoherent personhood concept is sufficient for saying there's no difference if it's spread out over 5 people, I don't think it's necessary. Simple total utilitarianism gets you there.

comment by Jeffhe · 2018-03-14T03:33:17.188Z · score: 0 (0 votes) · EA(p) · GW(p)

I assume we agree that we determine the points of disutility of the minor and major headache by how they each feel to someone. Since the major headache hurts more, it's worth more points (5 in this case).

But, were a single person to suffer all 5 minor headaches, he would end up having felt what it is like to go through 5 headaches - a feeling that would make him say things like "Going through those 5 minor headaches is worse/more painful than a major headache" or "There was more/greater/larger pain in going through those 5 minor headaches than a major headache".

We find these statements intelligible. But that is because we're at a point in life where we too have felt what it is like to go through multiple minor pains, and we too can consider (i.e. hold before our mind) a major pain in isolation, and compare these feelings: the what-it's-like of going through multiple minor pains vs the what-it's-like of going through a major pain.

But once the situation is that the 5 minor headaches are spread across 5 people, there is no longer the what-it's-like-of-going-through-5-minor-headaches, just 5 independent what-it's-likes-of-going-through-1-minor-headache. As a result, in this situation, when you say "the total amount of pain [involved in 5 minor headaches] is worse [than one major headache]", or that "the total amount of pain [involved in 5 minor headaches] is larger [than one major headache]", there is nothing to support their intelligibility.

So, I honestly don't understand these statements. Sure, you can use numbers to show that 10 > 5, but there is no reality that that maps on to (i.e. describes). I worry that representing pain in numbers is extremely misleading in this way.

Regarding personhood, I think my position just requires me to be committed to there being a single subject-of-experience (is that what you meant by person?) who extends through time to the extent that it can be the subject of more than one pain episode. I must admit I know very little about the topic of personhood. On that note, any further comments that help your position and question mine would be helpful. Thanks.

comment by Michael_S · 2018-03-14T13:31:24.173Z · score: 1 (1 votes) · EA(p) · GW(p)

I think this is confusing means of estimation with actual utils. You can estimate that 5 headaches are worse than one by asking someone to compare five headaches vs. one. You could also produce an estimate by just asking someone who has received one small headache and one large headache whether they would rather receive 5 more small headaches or one more large headache. But there's no reason you can't apply these estimates more broadly. There's real pain behind the estimates that can be added up.

comment by Jeffhe · 2018-03-14T19:15:53.580Z · score: 0 (0 votes) · EA(p) · GW(p)

I agree with the first half of what you said, but I don't agree that "there's no reason you can't apply these estimates more broadly" (e.g. to a situation where 5 minor headaches are spread across 5 persons).

Sure, a person who has felt only one minor headache and one major headache can say "If put to the choice, I think I'd rather receive another major headache than 5 more minor headaches", but he says this as a result of imagining roughly what it would be like for him to go through 5 of this sort of minor headache and comparing that to what it was like for him to go through the one major headache.

Importantly, what is supporting the intelligibility of his statement is STILL the what-it's-like-of-going-through-5-minor-headaches, except that this time (unlike in my previous reply), the what-it's-like-of-going-through-5-minor-headaches is imagined rather than actual.

But in the situation where the 5 minor headaches are spread across 5 people, there isn't a what-it's-like-of-going-through-5-minor-headaches, imagined or actual, to support the intelligibility of the claim that 5 minor headaches (spread across 5 people) are worse or more painful than a major headache. What there are, instead, are five independent what-it's-likes-of-going-through-1-minor-headache, since

1) the 5 people are obviously experientially independent of each other (i.e. each of them can only experience their own pain and no one else's), and

2) each of the 5 people experiences just one minor headache.

But these five independent what-it's-likes can't support the intelligibility of the above claim. None of these what-it's-likes are individually worse or more painful than the major headache. And they cannot collectively be worse or more painful than the major headache because they are experientially independent of each other.

The what-it's-like-of-going-through-5-minor-headaches is importantly different from five independent what-it's-like-of-going-through-1-minor-headache, and only the former can support the intelligibility of a claim like 5 minor headaches are worse than a major headache. But since the former what-it's-like can only occur in a single subject-of-experience, that means that, more specifically, the former what-it's-like can only support the intelligibility of a claim like 5 minor headaches, all had by one person, is worse than a major headache. It cannot support a claim like 5 minor headaches, spread across 5 people, are worse than a major headache.

comment by Michael_S · 2018-03-15T04:17:22.488Z · score: 2 (2 votes) · EA(p) · GW(p)

It's the same 5 headaches. It doesn't matter if you're imagining one person going through it on five days or imagining five different people going through it on one day. You can still imagine 5 headaches. You can imagine what it would be like to, say, live the lives of 5 different people for one day with and without a minor headache. Just as you can imagine living the life of one person for 5 days with and without a headache. The connection to an individual is arbitrary and unnecessary.

Now this goes into the meaninglessness of personhood as a concept, but what would even count as the individual in your view? For simplicity, let's say 2 modest headaches in one person are worse than one major headache. What if between the two headaches, the person gets a major brain injury and their personality is completely altered (as has happened in real life). Let's say they also have no memory of their former self. Are they no longer the same person? Under your view, is it no longer possible to say that the two modest headaches are worse than the major headache? If it still is, why is it possible after this radical change in personality with no memory continuity but impossible between two different people?

comment by Jeffhe · 2018-03-16T01:46:55.419Z · score: 0 (0 votes) · EA(p) · GW(p)

If I'm understanding you correctly, you essentially deny that there is a metaphysical difference (i.e. a REAL difference) between

A. One subject-of-experience experiencing 5 headaches over 5 days (say, one headache per day), and

B. Five independent subjects-of-experience each experiencing 1 headache over 5 days (say, each subject has their 1 headache on a different day, such that on any given day, only one of them has a headache).

And you deny this BECAUSE you think that, in case A for example, there simply is no fact of the matter as to how many subjects-of-experience there were over those 5 days IN THE FIRST PLACE, and NOT because you think one subject-of-experience going through 5 headaches IS IDENTICAL to five independent subjects-of-experience each going through 1 headache.

Also, you are not simply saying that we don't KNOW how many subjects of experience there were over those 5 days in case A, but that there actually isn't an answer to how many there were. The indeterminate-ness is "built into the world" so to speak, and not just existing in our state of mind.

You therefore think it is arbitrary to say that one subject-of-experience experienced all 5 headaches over the 5 days or that 5 subjects-of-experience each experienced 1 headache over the 5 days.

But importantly, IF there is a fact of the matter as to how many subjects-of-experience there are in any given time period, you would NOT continue to think that there is no metaphysical difference between case A and B. And this is because you agree that one subject-of-experience going through 5 headaches is not identical to five independent subjects-of-experience each going through 1 headache. You would say, "Obviously they are not identical. The problem, however, is that - in case A, for example - there simply is no fact of the matter as to how many subjects-of-experience there were over those 5 days IN THE FIRST PLACE, so saying that one subject-of-experience experienced all 5 headaches is arbitrary."

I hope that was an accurate portrayal of your view.

Let us then try to build some consensus from the ground up:

First, there is surely experience. That there is experience, whether it be pain experience or color experience or whatever, is the most obvious truth there is. I assume you don't deny that. Ok, so we agree that

1) there is experience.

Second, well, each experience is clearly SOMEONE'S experience - it is experience FOR SOMEONE. Suppose there is a pain experience - a headache. Someone IN PARTICULAR experiences that headache. Let's suppose you're not experiencing it and that I am. Then I am that particular someone. I assume you don't deny any of that. Ok, so we agree that

2) there is not just experience, but that for every experience, there is also a particular subject-of-experience who experiences it, whether or not a particular subject-of-experience can also extend through time and be the subject of multiple experiences.

That's all the consensus building I want to do right now.

Now, let me report something about myself (for the sake of argument, just assume it's true): I felt 5 headaches over the past 5 days. Here (just as in case A) you would say that there is no fact of the matter whether one subject-of-experience felt those 5 headaches or five different subjects-of-experience felt those 5 headaches, even though the “I” in “I just felt 5 headaches” makes it SOUND LIKE there was only one subject-of-experience.

If I then say, "no no, there was just one subject-of-experience who felt those 5 headaches", your question (and challenge) to me is what my criterion is for saying that there was just one subject-of-experience and not five. More specifically, you ask whether memory-continuity and personality-continuity are necessary conditions for being the same subject-of-experience over the 5 days, “same” in the sense of being numerically identical and not qualitatively identical.

Here’s my answer:

I’m sure philosophers have tried to come up with various criteria. Presumably that’s what philosophers engaged in the field called “personal identity” in part do, though I don’t know much about that field. Anyways, presumably they are all trying to come up with a criterion that would neatly accommodate all our intuitive judgements in specific (perhaps imagined) cases concerning personal identity (e.g., split brain cases). A criterion that succeeded in doing that would presumably be regarded as the “true” or “correct” criterion. In other words, the ONLY way philosophers have for testing their criteria is presumably to see if their criteria would yield results that accord with our intuitions. Moreover, if the “correct” criterion is found, philosophers are presumably going to say that it is correct not merely in the sense that it accurately describes the implicit/sub-conscious assumptions that we hold about personal identity which have led us to have the intuitions we have. Indeed, presumably, they are going to say that the criterion is correct in the stronger sense that it accurately describes the conditions under which a subject-of-experience IN REALITY is the same numerical subject over time. Insofar as they would say this, philosophers are assuming that our intuitive judgements represent the truth (i.e. the way things actually are). For only if the intuitions represented the truth would it be the case that a criterion that accommodated all of them would thereby be a criterion that described reality.

But then the question is, do our intuitions represent the truth? I don’t know, and so even if I were able to give you a criterion that accommodated all our intuitions, and even if, according to this criterion, there was only one subject-of-experience who experienced all 5 headaches over those 5 days, I would not have, in any convincing way, demonstrated that there was in fact only one subject-of-experience who experienced all 5 headaches over those 5 days, instead of 5 independent subjects-of-experience who each experienced 1 headache. For you can always ask what reasons I have for taking our intuitions to represent the truth. I don’t think there is a convincing answer. So I don’t think presenting you with a criterion will ultimately satisfy you, at least I don’t think it should.

Of course, that’s not to say that we wouldn’t know what would have to be the case for it to be true that one subject-of-experience experienced all 5 headaches over the 5 days: That would be true just in case one subject-of-experience IN FACT experienced all 5 headaches over the 5 days. We just don’t know if that is the case. And I have just argued above that providing a criterion that accords with all our intuitions won’t really help us to know if that is the case either.

So, what reason can I give for believing that there really was just one subject-of-experience who experienced all 5 headaches over those 5 days? Well, what reason can YOU give for saying that there isn’t a fact of the matter as to whether there was one subject-of-experience who experienced all 5 headaches over those 5 days or five independent subjects-of-experience who each experienced only 1 headache over those 5 days?

Are we at a standstill? We would be if neither of us can provide reasons for our views. Your view attributes a fundamental indeterminate-ness to the world itself, and I wonder what reason you have for such a view.

I have a reason for believing my view. But this reply is already very long, so before I describe my reason, I would just like some confirmation that we’re on the same page. Thanks.

P.S. I'll just add (as a more direct response to the first paragraph of your response): Yes, I can imagine 5 headaches by either imagining myself in the shoes of one person for 5 days or imagining myself in the shoes of 5 different people for one day each. In both cases, I imagine 5 headaches. True. BUT. When I imagine myself in the shoes of 5 different people for one day each, what is going on is that one subject-of-experience (i.e. me) takes on the independent what-it's-likes (i.e. experiences) associated with the 5 different people, and IN DOING SO, LINKS THESE what-it's-likes - which in reality are experientially independent of each other - TOGETHER IN ME. So ultimately, when I imagine myself in the shoes of 5 different people for one day each, I am, in effect, imagining what it's like to go through 5 headaches. But in reality, there is no such what-it's-like among the 5 different people. The only what-it's-like present is the what-it's-like-of-going-through-1-headache, which each of the 5 different people would experience.

In essence, what I am saying is that when you or I imagine ourselves in the shoes of 5 different people for a day each, we do end up with the (imagined) what-it's-like-of-going-through-5-headaches, but there is no such what-it's-like in reality among those different 5 people. But there needs to be in order for their 5 independent headaches to be worse than a major headache. I hope that made sense. If it didn't, then I guess you can ignore these last two paragraphs.

P.P.S. As a more direct response to your questions in the second paragraph of your response: it would still be possible IF the person is still the same subject-of-experience after the radical change in personality and loss of memory. It is impossible between two different people because they are numerically different subjects-of-experience.

comment by Michael_S · 2018-03-17T02:21:08.400Z · score: 0 (0 votes) · EA(p) · GW(p)

I'd say I'm making two arguments:

1) There is no distinct personal identity; rather, it's a continuum. The you today is different from the you yesterday. The you today is also different from the me today. These differences are matters of degree. I don't think there is clearly a "subject of experience" that exists across time. There are too many cases (e.g. brain injuries that change personality) that the single consciousness theory can't account for.

2) Even if I agreed that there was a distinct difference in kind that represented a consistent person, I don't think it's relevant to the moral accounting of experiences. I.e. I don't see why it matters whether experiences are "independent" or not. They're real experiences of pain.

comment by Jeffhe · 2018-03-17T03:31:30.737Z · score: 0 (0 votes) · EA(p) · GW(p)

1) I agree that the me today is different from the me yesterday, but I would say this is a qualitative difference, not a numerical difference. I am still the numerically same subject-of-experience as yesterday's me, even though I may be qualitatively different in various physical and psychological ways from yesterday's me. I also agree that the me today is different from the you today, but here I would say that the difference is not merely qualitative, but numerical too. You and I are numerically different subjects-of-experience, not just qualitatively different.

Moreover, I would agree that our qualitative differences are a matter of degree and not of kind. I am not a chair and you a subject-of-experience. We are both embodied subjects-of-experience (i.e. of that kind), but we differ to various degrees: you might be taller or lighter-skinned, etc.

I have thus agreed with all your premises and shown that they are compatible with the existence of a subject-of-experience that extends through time. So I don't quite see a convincing argument against the existence of such a subject-of-experience.

2) So here you're granting me the existence of a subject-of-experience that extends through time, but you're saying that it makes no moral difference whether one subject-of-experience suffers 5 minor headaches or 5 numerically different subjects-of-experience each experience 1 minor headache, and that therefore, we should just focus on the number of headaches.

Well, as I tried to explain in previous replies, when there is one subject-of-experience who extends through time, it is possible for him to experience what it's like of going through 5 minor headaches, since after all, he experiences all 5 minor headaches (whether he remembers experiencing them or not). Moreover, it is ONLY the what-it's-like-of-going-through-5-minor-headaches that can plausibly be worse or more painful than the what-it's-like-of-going-through-a-major-headache.

In contrast, when the 5 minor headaches are spread across 5 people, each of the 5 people experiences only what it's like to go through 1 minor headache. Moreover, the what-it's-like-of-going-through-1-headache CANNOT plausibly be worse or more painful than the what-it's-like-of-going-through-a-major-headache.

Thus it matters whether the 5 headaches are experienced all by a single subject-of-experience (i.e. experienced together) or spread across five experientially independent subject-of-experiences (i.e. experienced independently). It matters because, again, ONLY when the 5 headaches are experienced together can there be the what-it's-like-of-going-through-5-minor-headaches and ONLY that can plausibly be said to be worse or more painful than the what-it's-like-of-going-through-a-major-headache.

P.S. I have extensively edited my very first reply to you, so that it is more clear and detailed for first-time readers. I would recommend giving it a read if you have the time. Thanks.

comment by Michael_S · 2018-03-17T15:30:17.704Z · score: 0 (0 votes) · EA(p) · GW(p)

1) I'd like to know what your definition of "subject-of-experience" is.

2) For this to be true, I believe you would need to posit something about "conscious experience" that is entirely different from everything else in the universe. If, say, factory A produces 15 widgets, factory B produces 20 widgets, and factory C produces 15 widgets, I believe we'd agree that the number of widgets in A+C is greater than the number of widgets produced by B, no matter how independent the factories are. Do you disagree with this?

Similarly, I'd say if 15 neural impulses occur in brain A, 20 in brain B, and 15 in brain C, the # of neural impulses is greater in A+C than in B. Do you disagree with this?

Conscious experiences are a product of such neural chemical reactions. Do you disagree with this?

Given this, it seems odd to then postulate that even though all the ingredients are the same and are additive between individuals, the conscious product is not. It seems arbitrary and unnecessary to explain anything, and there is no reason to believe it is true.

comment by Jeffhe · 2018-03-17T19:14:47.889Z · score: 0 (0 votes) · EA(p) · GW(p)

1) A subject of experience is just something which "enjoys" or has experience(s), whether that be certain visual experiences, pain experiences, emotional experiences, etc... In other words, a subject of experience is just something for whom there is a "what-it's-like". A building, a rock or a plant is not a subject of experience because it has no experience(s). That is, for example, why we don't feel concerned when we step on grass: it doesn't feel pain or feel anything. On the other hand, a cow is a subject-of-experience - it presumably has visual experiences and pain experience and all sorts of other experiences. Or more technically, a subject-of-experience (or multiple) may be realized by a cow's physical system (i.e. brain). There would be a single subject-of-experience if all the experiences realized by the cow's physical system are felt by a single subject. Of course, it is possible that within the cow's physical system's life span, multiple subjects-of-experience are realized. This would be the case if not all of the experiences realized by the cow's physical system are felt by a single subject.

2) But when we say that 5 minor headaches is "worse" or "more painful" than a major pain, we are not simply making a "greater than, less than, or equal to" number comparison like 5 minor headaches is more headaches than 1 major headache.

Clearly 5 minor headaches, whether they are spread across 5 persons or not, is more headaches than 1 major headache. But that is irrelevant. Because the claim you're making is that 5 minor headaches, whether they are spread across 5 persons or not, is WORSE or MORE PAINFUL than 1 major headache. And this is where I disagree.

I am saying that for 5 minor headaches to be plausibly worse than a major headache, it must be the case that there is a what-it's-like-of-going-through-5-minor-headaches, because only THAT KIND of experience can be plausibly worse or more painful than a major headache. But, for there to be THAT KIND of experience, it must be the case that all 5 minor headaches are felt by a single subject of experience and not spread among 5 experientially independent subjects of experience. For when the 5 minor headaches are spread, there are only 5 experientially independent what-it's-likes-of-going-through-a-minor-headache, and no what-it's-like-of-going-through-5-minor-headaches.

Sorry for the caps btw, I have no other way of placing emphasis.

comment by Michael_S · 2018-03-17T21:14:48.543Z · score: 0 (0 votes) · EA(p) · GW(p)

Of course, it is possible that within the cow's physical system's life span, multiple subjects-of-experience are realized. This would be the case if not all of the experiences realized by the cow's physical system are felt by a single subject.

That's what I'm interested in a definition of. What makes it a "single subject"? How is this a binary term?

I am making a greater than/less than comparison. That comparison is with pain which results from the neural chemical reactions. There is more pain (more of these chemical-reaction-based experiences) in the 5 headaches than there is in the 1, whether or not they occur in a single subject. I don't see any reason to treat this differently than the underlying chemical reactions.

No problem on the caps.

comment by Jeffhe · 2018-03-19T00:06:45.028Z · score: 0 (0 votes) · EA(p) · GW(p)

REVISED TO BE MORE CLEAR ON MAR 19:

You also write, "There is more pain (more of these chemical-reaction-based experiences) in the 5 headaches than there is in the 1, whether or not they occur in a single subject. I don't see any reason to treat this differently than the underlying chemical reactions."

Well, to me the reason is obvious: when we say that 5 minor pains in one person is greater than (i.e. worse than) a major pain in one person, we are using "greater than" in an EXPERIENTIAL sense. On the other hand, when we say that 10 neural impulses in one person is greater than 5 neural impulses in one person, we are using "greater than" in a QUANTITATIVE/NUMERICAL sense. These two comparisons are very different in their nature. The former is about the relative STRENGTH of the pains, the latter is about the relative QUANTITIES of neural impulses.

So just because 10 neural impulses is greater than 5 neural impulses in the numerical sense, whether the 10 impulses take place in 1 brain or 5 brains, that does NOT mean that 5 minor pains is greater than 1 major headache in the experiential sense, whether the 5 minor pains are realized in 1 brain or 5 brains.

This relates back to why I said it can be very misleading to represent pain comparisons in numerals like 5*2>5. Such representations do not distinguish between the two senses described above, and thus can easily lead one to conflate them.

comment by Jeffhe · 2018-03-18T23:34:48.234Z · score: 0 (0 votes) · EA(p) · GW(p)

Just to make sure we're on the same page here, let me summarize where we're at:

In choice situation 2 of my paper, I said that supposing that any person would rather endure 5 minor headaches of a certain sort than 1 major headache of a certain sort when put to the choice, then a case in which Al suffers 5 such minor headaches is morally worse than a case in which Emma suffers 1 such major headache. And the reason I gave for this is that Al's 5 minor headaches is more painful (i.e. worse) than Emma's major headache.

In choice situation 3, however, the 5 minor headaches are spread across 5 different people: Al and four others. Here I claim that the case in which Emma suffers a major headache is morally worse than a case in which the 5 people each suffer 1 minor headache. And the reason I gave for this is that Emma's major headache is more painful (i.e. worse) than each of the 5 people's minor headache.

Against this, you claim that if the supposition from choice situation 2 carries over to choice situation 3 - the supposition that any person would rather endure 5 minor headaches than 1 major headache if put to the choice - then the case in which the 5 people each suffer 1 minor headache is morally worse than Emma suffering a major headache. And your reason for saying this is that you think 5 minor headaches spread across the 5 people is more painful (i.e. worse) than Emma's major headache.

THAT is what I took you to mean when you wrote: "Conditional on agreeing 5 minor headaches in one person is worse than 1 major headache in one person, I would feel exactly the same if it were spread out over 5 people."

As a result, this whole time, I have been trying to explain why it is that 5 minor headaches spread across five people CANNOT be more painful (i.e. worse) than a major headache, even while the same 5 minor headaches all had by one person can (and would be, under the supposition).

Importantly, I never took myself to be disagreeing with you on whether 5 instances of a minor headache is more than 1 instance of a major headache. Clearly, 5 instances of a minor headache is more than 1 instance of a major headache, regardless of whether the 5 instances were all experienced by a single subject-of-experience or spread across 5.

I took our disagreement to be about whether 5 instances of a minor headache, when spread across 5 people, is more painful (i.e. worse) than an instance of a major headache.

My view is that only when the 5 headaches are all had by one subject-of-experience could they be more painful (i.e. worse) than a major headache. Moreover, my view is that it literally makes no sense to say (or that it is at least false to say, even if it made sense) that the 5 headaches, when spread across 5 people, is more painful (i.e. worse) than a major headache, under the supposition.

If I am right, then in choice situation 3, the morally worse case should be the case in which Emma suffers one major headache, not the case in which 5 people each suffer one minor headache.

In response to your question, "What makes a single subject 'a single subject'?", here is another stab: Within any given physical system that can realize subjects of experience (e.g. a cow's brain), the subject-of-experience at t-1 (S1) is numerically identical to the subject-of-experience at t-2 (S2) if and only if an experience at t-1 (E1) and an experience at t-2 (E2) are both felt by S1. That is, S1 = S2 iff S1 feels E1 and E2.

That in conjunction with the definition I provided earlier is probably the best I can do to communicate what I take a subject-of-experience to be, and what makes a particular subject-of-experience the numerically same subject-of-experience over time.

comment by Michael_S · 2018-03-20T02:07:50.927Z · score: 0 (0 votes) · EA(p) · GW(p)

To your first comment, I disagree. I think it's the same thing. Experiences are the result of chemical reactions. Are you advocating a form of dualism where experience is separated from the physical reactions in the brain?

I think there is more total pain. I'm not counting the # of headaches. I'm talking about the total amount of pain.

Can you define S1?

We may not, as these discussions tend to go. I'm fine calling it.

I think we have to get closer to defining a subject of experience (S1); I think I would need this to go forward. But here's my position on the issue: I think moral personhood doesn't make sense as a binary concept (the mind from a brain is different at different times, sometimes vastly different, such as in the case of a major brain injury). The matter in the brain is also different over time (ship of Theseus). I don't see a good reason to call these the same person in a moral sense in a way that two minds of two coexisting brains wouldn't be. The conscious experiences are different at different times and across different brains; I see this as a matter of degree of similarity.

comment by Jeffhe · 2018-03-21T03:27:08.441Z · score: 0 (0 votes) · EA(p) · GW(p)

Hi Michael,

I removed the comment about worrying that we might not reach a consensus because I worried that it might give you the wrong idea (i.e. that I don't want to talk anymore). It's been tiring, I have to admit, but also enjoyable and helpful. Anyways, you clearly saw my comment before I removed it. But yeah, I'm good with talking on.

I agree that experiences are the result of chemical reactions, however the nature of the relations "X being experientially worse than Y" and "X being greater in number than Y" are relevantly different. Someone by the name of "kbog" recently read my very first reply to you (the updated edition) and raised basically the same concern as you have here, and I think I have responded to him pretty aptly. So if you don't mind, can you read my discussion with him:

http://effective-altruism.com/ea/1lt/is_effective_altruism_fundamentally_flawed/dmu

I would have answered you here, but I'm honestly pretty drained from replying to kbog, so I hope you can understand. Let me know what you think.

Regarding defining S1, I don't think I can do better than to say that S1 is a thing that has, or is capable of having, experience(s). I add the phrase 'or is capable of having' this time because it has just occurred to me that when I am in dreamless sleep, I have no experiences whatsoever, yet I'd like to think that I am still around - i.e. that the particular subject-of-experience that I am is still around. However, it's also possible that a subject-of-experience exists only when it is experiencing something. If that is true, then the subject-of-experience that I am is going out of and coming into existence several times a night. That's spooky, but perhaps true.

Anyways, I can't seem to figure out why you need any better of a definition of a subject-of-experience than that. I feel like my definition sufficiently distinguishes it from other kinds of things. Moreover, I have provided you with a criterion for identity over time. Shouldn't this be enough?

You write, "I think moral personhood doesn't make sense as a binary concept (the mind from a brain is different at different times, sometimes vastly different such as in the case of a major brain injury) The matter in the brain is also different over time (ship of Theseus)."

I agree with all of this, but I would insist those NEED NOT BE numerical differences, just qualitative differences. A mind can be very qualitatively different (e.g. big personality change) from one moment to the next, but that does not necessarily mean that it is a numerically different mind. Likewise, a brain can be very qualitatively different (e.g. big change in shape) from one moment to the next, but that does not necessarily mean that it is a numerically different brain.

You then write, "I don't see a good reason to call these the same person in a moral sense in a way that two minds of two coexisting brains wouldn't be."

Well, if a particular mind is the numerically same mind before and after a big qualitative change (e.g., due to a brain injury), then clearly there is reason to call it the same mind/person in a way that two minds of two coexisting brains wouldn't be. After all, it's the numerically same mind, whereas two minds of two coexisting brains are clearly two numerically different minds.

You might agree that there is a literal reason to call it the same mind, but deny that there is a moral reason that wouldn't be true of two minds of two coexisting brains. But I think the literal reason constitutes or provides the moral reason: if a mind is numerically the same mind before and after a big qualitative change (e.g. big personality change), then that means whatever experiences are had by that mind before and after the change are HAD BY THAT NUMERICALLY SAME MIND. So if that particular mind suffered a headache before the radical change and then suffered a headache after the change, it is THAT PARTICULAR MIND THAT SUFFERS BOTH. That is enough reason to also call that mind the same mind in a moral sense that wouldn't also be true of two numerically different minds of two coexisting brains.

I didn't quite understand the sentences after that.

comment by Michael_S · 2018-03-22T02:32:20.342Z · score: 0 (0 votes) · EA(p) · GW(p)

FYI, I'm pretty busy over the next few days, but I'd like to get back to this conversation at one point. If I do, it may be a bit though.

comment by Jeffhe · 2018-03-22T03:05:38.698Z · score: 0 (0 votes) · EA(p) · GW(p)

No worries!

comment by kbog · 2018-03-20T08:02:36.583Z · score: 0 (0 votes) · EA(p) · GW(p)

To be honest, I just don't get why you would feel the same if the 5 minor headaches were spread across 5 people

Because I don't have any reason to feel different. Imagine if I said, "5 headaches among tall people would be better than 5 headaches among short people." And then you said, "no, it's the same either way. Height is irrelevant." And then I replied, "I just don't get why you would feel the same if the people are tall or short!" In that case, clearly I wouldn't be giving you a response that carries any weight. If you want to show that the cases are different in a relevant way, then you need to spell it out. In the absence of reasons to say that there is a difference, we assume by default that they're similar.

Now, it's clearly the case that the what-it’s-like-of-going-through-1-minor-headache is not worse than the what-it’s-like-of-going-through-a-major-headache. Given what I said in the previous paragraph, therefore, there is nothing present that could be worse than the what-it’s-like-to-go-through-a-major-headache in the case where the 5 minor headaches are spread across 5 people. Therefore, 5 minor headaches, spread across 5 people, cannot be (and thus is not) worse (experientially speaking) than one major headache.

The third sentence does not follow from the second. This is like saying "there is nothing present in a Toyota Corolla that could make it weigh more than a Ford F-150, therefore five Toyota Corollas cannot weigh more than a Ford F-150." Just because there is no one element in a set of events that is worse than a bad thing doesn't mean that the set of events is not worse than the bad thing. There are lots of events where badness increases with composition, even without using aggregative utilitarian logic. E.g.: it is okay to have sex with Michelle, and it is okay to marry Tiffany, but it is not okay to do both.

comment by Jeffhe · 2018-03-20T19:00:50.361Z · score: 0 (2 votes) · EA(p) · GW(p)

1) "Because I don't have any reason to feel different."

Ok, well, that comes as a surprise to me. In any case, I hope after reading my first reply to Michael_S, you at least sort of see how it could be possible that someone like me would be surprised by that, even if you don't agree with my reasoning. In other words, I hope you at least sort of see how it could be possible that someone who would clearly agree with you that, say, 5 minor headaches all had by 1 tall person is experientially just as bad as 5 minor headaches all had by 1 short person, might still disagree with you that 5 minor headaches all had by 1 person is experientially just as bad as 5 minor headaches spread across 5 people.

2) "If you want to show that the cases are different in a relevant way, then you need to spell it out. In the absence of reasons to say that there is a difference, we assume by default that they're similar."

That's what my first reply to Michael_S, in effect, aimed to do.

3) "The third sentence does not follow from the second. This is like saying "there is nothing present in a Toyota Corolla that could make it weigh more than a Ford F-150, therefore five Toyota Corollas cannot weigh more than a Ford F-150." Just because there is no one element in a set of events that is worse than a bad thing doesn't mean that the set of events is not worse than the bad thing. There are lots of events where badness increases with composition, even without using aggregative utilitarian logic. E.g.: it is okay to have sex with Michelle, and it is okay to marry Tiffany, but it is not okay to do both."

Your reductio-by-analogy (I made that phrase up) doesn't work, because your analogy is relevantly different. In your analogy, we are dealing with the relation of _ being heavier than _, whereas I'm dealing with the relation of _ being experientially worse than _. These relations are very different in nature: one is quantitative in nature, the other is experiential in nature. You might insist that this is not a relevant difference, but I think it is when one really slows down to think about exactly what it is that makes 5 minor headaches experientially worse than a major headache.

As I mentioned, the answer is the what-it's-like-of-going-through-5-minor-headaches. That is, the what-it's-like of going through one minor headache, then another (sometime later), then another, then another, then another. It's THAT SPECIFIC WHAT-IT'S-LIKE that can plausibly be experientially worse than a major headache. It's THAT SPECIFIC WHAT-IT'S-LIKE that can plausibly be "shittier" or "suckier" than a major headache.

However, when the 5 minor headaches are spread across 5 people, there are just 5 what-it's-likes-of-going-through-1-minor-headache, and no single what-it's-like-of-going-through-5-minor-headaches. Why? Because each of the minor headaches in this situation would be felt by a numerically different subject-of-experience (i.e. one of 5 different people), and numerically different subjects-of-experience cannot have their experiences "linked". Otherwise, they would not be numerically different.

Therefore, only when 5 minor headaches are all had by one subject-of-experience (i.e. one person) can they be experientially worse than one major headache. And therefore, 5 minor headaches, when all had by one person, is experientially worse than 5 minor headaches, spread across 5 people.

I think what I just said above shows clearly how the relation of _ being experientially worse than _ is impacted by whether the 5 minor headaches are all had by one person or spread across 5 different people. Whereas the relation of _ being heavier than _ is not similarly affected. So that is the relevant difference.

I hope you can really consider what I'm saying here. Thanks.

comment by kbog · 2018-03-20T21:33:55.029Z · score: 0 (0 votes) · EA(p) · GW(p)

I hope you at least sort of see how it could be possible that someone who would clearly agree with you that, say, 5 minor headaches all had by 1 tall person is experientially just as bad as 5 minor headaches all had by 1 short person, might still disagree with you that 5 minor headaches all had by 1 person is experientially just as bad as 5 minor headaches spread across 5 people.

Well I can see how it is possible for someone to believe that. I just don't think it is a justified position, and if you did embrace it you would have a lot of problems. For instance, it commits you to believing that it doesn't matter how many times you are tortured if your memory is wiped each time. Because you will never have the experience of being tortured a second time.

In your analogy, we are dealing with the relation of _ being heavier than _, whereas I'm dealing with the relation of _ being experientially worse than _. These relations are very different in nature: one is quantitative in nature, the other is experiential in nature.

There are two rooms, painted bright orange inside. One person goes into the first room for five minutes, five people go into the second for one minute. If we define orange-perception as the phenomenon of one conscious mind's perception of the color orange, the amount of orange-perception for the group is the same as the amount of orange-perception for the one person.

Something being experiential doesn't imply that it is not quantitative. We can clearly quantify experiences in many ways, e.g. I had two dreams, I was awake for thirty seconds, etc. Or me and my friends each saw one bird, and so on.

However, when the 5 minor headaches are spread across 5 people, there is just 5 what-it's-likes-of-going-through-1-minor-headache, and no single what-it's-like-of-going-through-5-minor-headaches.

Yes, but the question here is whether 5 what-it's-likes-of-going-through-1-minor-headache is 5x worse than 1 minor headache. We can believe this moral claim without believing that the phenomenon of 5 separate headaches is phenomenally equivalent to 1 experience of 5 headaches. There are lots of cases where A is morally equivalent to B even though A and B are physically or phenomenally different.

comment by Jeffhe · 2018-03-20T22:39:04.658Z · score: 0 (0 votes) · EA(p) · GW(p)

1) "Well I can see how it is possible for someone to believe that. I just don't think it is a justified position, and if you did embrace it you would have a lot of problems. For instance, it commits you to believing that it doesn't matter how many times you are tortured if your memory is wiped each time. Because you will never have the experience of being tortured a second time."

I disagree. I was precisely trying to guard against such thoughts by enriching my first reply to Michael_S with a case of forgetfulness. I wrote, "Now, by the end of our 5th minor headache, we might have long forgotten about the first minor headache because, say, it happened so long ago. So, by the end of our 5th minor headache, we might not have an accurate appreciation of what it’s like to go through 5 minor headaches EVEN THOUGH we in fact have experienced what it’s like to go through 5 minor headaches." (I added the caps here for emphasis)

The point I was trying to make in that passage is that if one person (i.e. one subject-of-experience) experienced all 5 minor headaches, then whether he remembers them or not, the fact of the matter is that HE felt all of them, and insofar as he has, he is experientially worse off than someone who only felt a major headache. Of course, if you asked him at the end of his 5th minor headache whether HE thinks he's had it worse than someone with a major headache, he may say "no" because, say, he has forgotten about some of the minor headaches he's had. But that does NOT MEAN that, IN FACT, he did not have it worse. After all, the what-it's-like-of-going-through-5-minor-headaches is experientially worse than one major headache, and HE has experienced the former, whether he remembers it or not.

So, if my memory is wiped each time after getting tortured, of course it still matters how many times I'm tortured. Because I WILL have the experience of being tortured a second time, whether or not I VIEW that experience as such.

2) "There are two rooms, painted bright orange inside. One person goes into the first room for five minutes, five people go into the second for one minute. If we define orange-perception as the phenomenon of one conscious mind's perception of the color orange, the amount of orange-perception for the group is the same as the amount of orange-perception for the one person.

Something being experiential doesn't imply that it is not quantitative. We can clearly quantify experiences in many ways, e.g. I had two dreams, I was awake for thirty seconds, etc. Or me and my friends each saw one bird, and so on."

My point wasn't that we can't quantify experience in various ways, but that relations of an experiential nature, like the relation of X being experientially worse than Y, behave in relevantly different ways from relations of a quantitative - maybe 'non-experiential' might have been a better word - nature, like the relation of X being heavier than Y. As I tried to explain, the "experientially-worse-than" relation is impacted by whether the X (e.g. 5 minor headaches) are spread across 5 people or all had by one person, whereas the "heavier-than" relation is not impacted by whether X (e.g. 100 tons) are spread across 5 objects or true of 1 object.

3) "Yes, but the question here is whether 5 what-it's-lies-of-going-through-1-minor-headache is 5x worse than 1 minor headache. We can believe this moral claim without believing that the phenomenon of 5 separate headaches is phenomenally equivalent to 1 experience of 5 headaches. There are lots of cases where A is morally equivalent to B even though A and B are physically or phenomenally different."

The moral question here is whether a case in which 5 minor headaches are all had by one person is morally equivalent to (i.e. morally just as bad as) a case in which 5 minor headaches are spread across 5 people. You think it is, and I think it isn't. Instead, I think the former case is morally worse than the latter case.

And the ONLY reason why I think this is because I think 5 headaches all had by one person is experientially worse than 5 headaches spread across 5 people. As I said before, I think experience is the only morally relevant factor.

Since I don't think anything other than experience matters, I would deny the existence of cases in which A and B are morally just as bad/good where A and B differ phenomenally.

comment by kbog · 2018-03-24T21:11:42.520Z · score: 0 (0 votes) · EA(p) · GW(p)

I disagree. I was precisely trying to guard against such thoughts by enriching my first reply to Michael_S with a case of forgetfulness. I wrote, "Now, by the end of our 5th minor headache, we might have long forgotten about the first minor headache because, say, it happened so long ago. So, by the end of our 5th minor headache, we might not have an accurate appreciation of what it’s like to go through 5 minor headaches EVEN THOUGH we in fact have experienced what it’s like to go through 5 minor headaches." (I added the caps here for emphasis)

But I don't have an accurate appreciation of what it's like to be 5 people going through 5 headaches either. So I'm missing out on just as much as the amnesiac. In both cases people's perceptions are inaccurate.

My point wasn't that we can't quantify experience in various ways, but that relations of an experiential nature, like the relation of X being experientially worse than Y, behave in relevantly different ways from relations of a quantitative - maybe 'non-experiential' might have been a better word - nature, like the relation of X being heavier than Y. As I tried to explain, the "experientially-worse-than" relation is impacted by whether the X (e.g. 5 minor headaches) are spread across 5 people or all had by one person, whereas the "heavier-than" relation is not impacted by whether X (e.g. 100 tons) are spread across 5 objects or true of 1 object

Of course you can define a relation to have that property, but merely defining it that way gives us no reason to think that it should be the focus of our moral concern.

If I were to define a relation to have the property of being the target of our moral concern, it wouldn't be impacted by how it were spread across multiple people.

As I said before, I think experience is the only morally relevant factor.

Well, so do I. The point is that the mere fact that 5 headaches in one person is worse for one person doesn't necessarily imply that it is worse overall than 5 headaches among 5 people.

comment by Jeffhe · 2018-03-27T20:10:28.034Z · score: 0 (0 votes) · EA(p) · GW(p)

Hi kbog, glad to hear back from you.

1) "But I don't have an accurate appreciation of what it's like to be 5 people going through 5 headaches either. So I'm missing out on just as much as the amnesiac. In both cases people's perceptions are inaccurate."

I don't quite understand how this is a response to what I said, so let me retrace some things:

You first claimed that if I believed that 5 minor headaches all had by one person is experientially worse than 5 minor headaches spread across 5 people, then I would be committed to "believing that it doesn't matter how many times you are tortured if your memory is wiped each time. Because you will never have the experience of being tortured a second time" and this is a problem.

I replied that it does matter how many times I get tortured because even if my memory is wiped each time, it is still ME (as opposed to a numerically different subject-of-experience, e.g. you) who would experience torture again and again. If my memory is wiped, I will incorrectly VIEW each additional episode of torture as the first one I've ever experienced, but it would not BE the first one I've ever experienced. I would still experience what-it's-like-of-going-through-x-number-of-torture-episodes even if after each episode, my memory was wiped. Since it's the what-it's-like-of-going-through-x-number-of-torture-episodes (and not my memory of it) that is experientially worse than something else, and since X is morally worse than Y when X is experientially worse (i.e. involves more pain) than Y, therefore, it does matter how many times I'm tortured irrespective of my memory.

Now, the fact that you said that I "will never have the experience of being tortured a second time" suggests that you think that memory-continuity is necessary to being the numerically same subject-of-experience (i.e. person). If this were true, then every time a person's memory is wiped, a numerically different person comes into existence and so no person would experience what-it's-like-of-going-through-2-torture-episodes if a memory wipe happens after each torture episode. But I don't think memory-continuity is necessary to being the numerically same subject-of-experience. I think a subject-of-experience at time t1 (call this subject "S1") and a subject-of-experience at some later time t2 (call this subject "S2") are numerically identical (though perhaps qualitatively different) just in case an experience at t1 (call this experience E1) and an experience at t2 (call this experience E2) are both felt by S1. In other words, I think S1 = S2 iff E1 and E2 are both felt by S1. S1 may have forgotten about E1 by t2 (due to a memory wipe), but that doesn't mean it wasn't S1 who also felt E2.

In a nutshell, memory (and thus how accurate we appreciate our past pains) is not morally relevant since it does not prevent a person from actually experiencing what-it's-like-of-going-through-multiple-pains, and it is this latter thing that is morally relevant. So I don't quite see the point of your latest reply.

2) "Of course you can define a relation to have that property, but merely defining it that way gives us no reason to think that it should be the focus of our moral concern.

If I were to define a relation to have the property of being the target of our moral concern, it wouldn't be impacted by how it were spread across multiple people."

I am not simply defining a relation here. We both agree that experience is morally relevant and that therefore pain is morally bad, and that therefore an outcome that involves more pain than another outcome is morally worse than the latter outcome. That is, we agree X is morally worse than Y iff X involves more pain than Y. But how are we to understand the phrase 'involves more pain than'? I understand it as meaning "is experientially worse than", which is why I ultimately think that 5 minor headaches all had by one person is morally worse than 5 minor headaches spread across 5 people. You seem to agree with me that the former is experientially worse than the latter, yet you deny that the former is morally worse than the latter. Thus, you have to offer another plausible account of the phrase 'involves more pain than' on which 5 minor headaches all had by one person involves just as much pain as 5 minor headaches spread across 5 people. IMPORTANTLY, this account has to be one according to which 5 minor headaches all had by one person can involve more pain than 1 major headache, and not merely in an experientially worse sense. Can you offer such an account?

I mean, how can 5 minor headaches all had by one person involve more pain than 1 major headache if not in an experientially worse sense? You might try to use math to help illustrate your point of view. You might say, well suppose each minor headache represents a pain of a magnitude of 2, and the major headache represents a pain of a magnitude of 6. You might further clarify that the 2 doesn't just signify the INTENSITY of the minor pain since how shitty a pain episode is doesn't just depend on its intensity but also on its duration. Thus, you might clarify that the 2 represents the overall shittiness of the pain - the disutility of it, so to speak. Next, you might say that insofar as there are 5 such minor headaches, they represent 10 disutility, and 10 is bigger than 6. Therefore 5 minor headaches all had by one person involves more pain than a major headache.

But then I would ask you: what is the reality underpinning the number 10? Is it not some overall shittiness that is experientially worse than the overall shittiness from experiencing one major headache? Is it not the overall shittiness of what-it's-like-of-going-through-5-minor-headaches? If it is, then we haven't departed from my "is experientially worse than" interpretation of 'involves more pain than'. If it isn't, then what is it?

To see the problem even more clearly, consider when the 5 minor headaches are spread across 5 people. Here again, you will say that the 5 minor headaches represent 10 disutility and 10 is greater than 6, therefore 5 minor headaches spread across 5 people involve more pain than one major headache. This conclusion is easy to arrive at when one just focuses on the math: 2 x 5 = 10 and 10 > 6. But we must not forget to ask ourselves what the "10" might signify in reality. Is it meant to signify an overall shittiness that is shittier than the experience of 1 major headache? Ok, but where in reality is this overall shittiness? I certainly don't see it. I don't see the presence of this overall shittiness because there is no experience of it.

(Thus, I find that using math to show that 5 minor headaches spread across 5 people involve more pain than 1 major headache is very misleading: yes, mathematically, you can easily portray it. But, at bottom, the '10' maps onto nothing in reality.)

So in conclusion, I don't see any other plausible interpretation of 'involves more pain than' than "is experientially worse than". If that is the case, then not only is it the case that I haven't arbitrarily defined a relation, but it's also the case that this relation is the only plausible morally relevant relation.

3) "Well, so do I. The point is that the mere fact that 5 headaches in one person is worse for one person doesn't necessarily imply that it is worse overall for 5 headaches among 5 people."

We need to distinguish between experientially worse and morally worse. You agree that 5 headaches in one person is experientially worse than 5 headaches spread across 5 people, yet you insist that that doesn't mean the former is morally worse than the latter. Well, again, this requires you to show that there is another plausible interpretation of 'involves more pain than' on which the former involves just as much pain as the latter.

Also, I should note that I was too hasty when I said that I think experience is the ONLY morally relevant factor. Actually, I also think who suffers is a morally relevant factor, but that doesn't affect our discussion here.

comment by kbog · 2018-03-28T01:51:58.485Z · score: 0 (0 votes) · EA(p) · GW(p)

In a nutshell, memory (and thus how accurate we appreciate our past pains) is not morally relevant since it does not prevent a person from actually experiencing what-it's-like-of-going-through-multiple-pains, and it is this latter thing that is morally relevant. So I don't quite see the point of your latest reply.

The point is that the subject has the same experiences as that of having one headache five times, and therefore has the same experiences as five headaches among five people. There isn't any morally relevant difference between these experiences, as the mere fact that the latter happens to be split among five people isn't morally relevant. So we should suppose that they are morally similar.

But how are we to understand the phrase 'involves more pain than'?

You think it should be "involves more pain for one person than". But I think it should be "involves more pain total", or in other words I take your metric, evaluate each person separately with your metric, and add up the resulting numbers.

Thus, you have to offer another plausible account of the phrase 'involves more pain than' on which 5 minor headaches all had by one person involves just as much pain as 5 minor headaches spread across 5 people.

It's just plain old cardinal utility: the sum of the amount of pain experienced by each person.

IMPORTANTLY, this account has to be one according to which 5 minor headaches all had by one person can involve more pain than 1 major headache and not merely in an experientially worse sense

Why?

I mean, how can 5 minor headaches all had by one person involve more pain than 1 major headache if not in an experientially worse sense?

In the exact same way that you think they can.

then we haven't departed from my "is experientially worse than" interpretation of 'involves more pain than'.

Correct, we haven't, because we're not yet doing any interpersonal comparisons.

But we must not forget to ask ourselves what the "10" might signify in reality. Is it meant to signify an overall shittiness that is shittier than the experience of 1 major headache? Ok, but where in reality is this overall shittiness?

It is distributed - 20% of it is in each of the 5 people who are in pain.

comment by Jeffhe · 2018-03-28T03:46:43.365Z · score: 0 (0 votes) · EA(p) · GW(p)

1) "The point is that the subject has the same experiences as that of having one headache five times, and therefore has the same experiences as five headaches among five people."

One subject-of-experience having one headache five times = the experience of what-it's-like-of-going-through-5-headaches. (Note that the symbol is an equal sign in case it's hard to see.)

Five headaches among five people = 5 experientially independent experiences of what-it's-like-of-going-through-1-headache. (Note the 5 experiences are experientially independent of each other because each is felt by a numerically different subject-of-experience, rather than all by one subject-of-experience.)

The single subject-of-experience does not, therefore, "have the same experiences as five headaches among five people."

2) "You think it should be "involves more pain for one person than". But I think it should be "involves more pain total", or in other words I take your metric, evaluate each person separately with your metric, and add up the resulting numbers."

Ok, and after adding up the numbers, what does the final resulting number refer to in reality? And in what sense does the referent (i.e. the thing referred to) involve more pain than a major headache?

Consider the case in which the 5 minor headaches are spread across 5 people, and suppose each minor headache has an overall shittiness score of 2 and a major headache has an overall shittiness score of 6. If I asked you what '2' refers to, you'd easily answer: the shitty feeling characteristic of what it's like to go through a minor headache. And you would say something analogous for '6' if I asked you what it refers to.

You then add up the five '2's and get 10. Ok, now, what does the '10' refer to? You cannot answer the shitty feeling characteristic of what it's like to go through 5 minor headaches, for this what-it's-like is not present since no individual feels all 5 headaches. The only what-it's-likes present are 5 experientially independent what-it's-likes-of-going-through-1-minor-headache. Ok, so what does '10' refer to? 5 of these shitty feelings? Ok, and in what sense do 5 of these shitty feelings involve more pain than 1 major headache? Clearly not in an experiential sense, for only the what-it's-like-of-going-through-5-minor-headaches is plausibly experientially worse than a major headache. So in what sense does the referent involve more pain than a major headache?

THIS IS THE CRUX OF OUR DISAGREEMENT. I CANNOT SEE HOW 5 what-it's-like-of-going-through-1-minor-headache involves more pain than 1 major headache. YES, mathematically, you can show me '10 > 6' all day long, but I don't see any reality onto which it maps!

3) "It's just plain old cardinal utility: the sum of the amount of pain experienced by each person."

Yes, but I don't see how that "sum of pain" can involve more pain than 1 major headache because what that "sum of pain" is, ultimately speaking, are 5 what-it's-likes-of-going-through-1-minor-pain, and NOT 1 what-it's-like-of-going-through-5-minor-pains.

4) "Why?"

Because ultimately you'll need an account of 'involves more pain than' on which 5 minor headaches spread across 5 people can involve more pain than 1 major headache. And in that situation, it is clearly the case that the 5 minor headaches are not experientially worse than the 1 major headache (for only the what-it's-like-of-going-through-5-minor-headaches can plausibly be experientially worse than 1 major headache).

My point was just that you'll need an account of 'involves more pain than' that can make sense of how 5 experientially independent what-it's-likes-of-going-through-1-minor-headache can involve more pain than 1 major headache, for my account (i.e. "is experientially worse than") certainly cannot make sense of it.

5) "It is distributed - 20% of it is in each of the 5 people who are in pain."

But when it's distributed, you won't have an overall shittiness that is shittier than the experience of 1 major headache, at least not when we understand "is shittier than" as meaning "is experientially worse than". For 5 experientially independent what-it's-likes-of-going-through-1-minor-headache are not experientially worse than 1 major headache: only the what-it's-like-of-going-through-5-minor-headaches can plausibly be experientially worse than 1 major headache.

Your task, again, is to provide a different account of 'involves more pain than' or 'shittier than' on which, somehow, 5 experientially independent what-it's-likes-of-going-through-1-minor-headache can somehow involve more pain than 1 major headache.

comment by kbog · 2018-03-28T04:46:13.003Z · score: 0 (0 votes) · EA(p) · GW(p)

Five headaches among five people = 5 experientially independent experiences of what-it's-like-of-going-through-1-headache. (Note the 5 experiences are experientially independent of each other because each is felt by a numerically different subject-of-experience, rather than all by one subject-of-experience.)

The fact that they are separate doesn't mean that their content is any different from the experience of the one person. Certainly, the amount of pain they involve isn't any different.

Ok, and after adding up the numbers, what does the final resulting number refer to in reality?

The total amount of suffering. Or, the total amount of well-being.

And in what sense does the referent (i.e. the thing referred to) involve more pain than a major headache?

Because there are multiple people and each of them has their own pain.

You then add up the five '2's and get 10. Ok, now, what does the '10' refer to?

The amount of pain experienced among five people.

Ok, and in what sense do 5 of these shitty feelings involve more pain than 1 major headache?

In the sense that each of them involves more than 1/5 as much pain, and the total pain among 5 feelings is the sum of pain in each of them.

Clearly not in an experiential sense for only the what-it's-like-of-going-through-5-minor-headaches is plausibly experientially worse than a major headache

Sure it's experiential, all 10 of the pain is experienced. It's just not experienced by the same person.

I CANNOT SEE HOW 5 what-it's-like-of-going-through-1-minor-headache involves more pain than 1 major headache

In the same way that there are more sheep apparitions among five people, each of them dreaming of two sheep, than for one person who is dreaming of six sheep.

I don't see how that "sum of pain" can involve more pain than 1 major headache because what that "sum of pain" is, ultimately speaking, are 5 what-it's-likes-of-going-through-1-minor-pain, and NOT 1 what-it's-like-of-going-through-5-minor-pains.

But as far as cardinal utility is concerned, both quantities involve the same amount of pain. That's just what you get from the definition of cardinal utility.

Because ultimately you'll need an account of 'involves more pain than' on which 5 minor headaches spread across 5 people can involve more pain than 1 major headache. And in that situation, it is clearly the case that the 5 minor headaches are not experientially worse than the 1 major headache

That just means I need a different account of "involves more pain than" (which I have) when interpersonal comparisons are being made, but it doesn't mean that my account can't be the same as your account when there is only one person.

But when it's distributed, you won't have an overall shittiness that is shittier than the experience of 1 major headache, at least not when we understand "is shittier than" as meaning "is experientially worse than".

But as I have been telling you this entire time, I don't follow your definition of "experientially worse than".

Your task, again, is to provide a different account of 'involves more pain than' or 'shittier than' on which, somehow, 5 experientially independent what-it's-likes-of-going-through-1-minor-headache can somehow involve more pain than 1 major headache.

Well, I already did. But it's really just the same as what utilitarians have been writing for centuries so it's not like I had to provide it.

comment by Jeffhe · 2018-03-29T00:54:11.223Z · score: 0 (2 votes) · EA(p) · GW(p)

The fact that they are separate doesn't mean that their content is any different from the experience of the one person. Certainly, the amount of pain they involve isn't any different.

Yes, each of the 5 minor headaches spread among the 5 people is phenomenally or qualitatively the same as each of the 5 minor headaches of the one person. The fact that the headaches are spread does not mean that any of them, in themselves, feels any different from any of the 5 minor headaches of the one person. A minor headache feels like a minor headache, irrespective of who has it.

Now, each such minor headache constitutes a certain amount of pain, so 5 such minor headaches constitute five such pain contents, and in THAT sense, five times as much pain. Moreover, since there are 5 such minor headaches in each case (i.e. the 1 person case and the 5 people case), each case involves the same amount of pain. This is so even if 5 minor headaches all had by one person (i.e. the what-it's-like-of-going-through-5-minor-headaches) is experientially different from 5 minor headaches spread across 5 people (5 experientially independent what-it's-likes-of-going-through-1-minor-headache).

Analogously, a visual experience of the color orange constitutes a certain amount of orange-ish feel, so 5 such visual experiences constitute 5 such orange-ish feels, and in THAT sense, 5 times as much orange-ish feel. If one person experienced 5 such visual experiences one right after another and we recorded these experiences on an "experience recorder", and did the same with 5 such visual experiences spread among 5 people (where they each have their visual experience one right after the other), and then we played back both recordings, the playbacks viewed from the point of view of the universe would be identical: if each visual experience was 1 minute long, then both playbacks would be 5 minutes long of the same content. In this straightforward sense, 5 such visual experiences had by one person involve just as much orange-ish feel as 5 such visual experiences spread among 5 people. This is so even if the what-it's-like-of-going-through-5-such-visual-experiences is not experientially the same as 5 experientially independent what-it's-likes-of-going-through-1-such-visual-experience.

Right? I assume this is what you have in mind.

I thus understand your alternative account or sense of 'involves more pain than'. I can see how according to it, 5 minor headaches had by 1 person involves the same amount of pain as 5 minor headaches spread among 5 people.

But again, consider 5 minor headaches spread among 5 people vs 1 major headache. Here you claim that the 5 minor headaches involve more pain than 1 major headache, and I asked you to explain in what sense. Why did I do this? Because it is clearest here how your account fails to achieve what you think it can achieve.

So let's carefully think about this for a second. Each minor headache constitutes a certain amount of pain - the amount of pain determined by how shitty it feels in absolute terms. The same is true of the major headache. Since a major headache feels a lot shittier in absolute terms, we might use '6' to represent the amount of pain it constitutes, and a '2' to represent the amount of pain a single minor headache constitutes. IMPORTANTLY, both numbers - and the amount of pain they each represent - are determined by how shitty the major headache and the minor headache respectively FEEL. (Note: As I mentioned in an earlier reply, how shitty a pain episode feels is a function of both its intensity and duration.)

Ok. Now, we have 5 experientially independent minor headaches. We have 5 such pain contents, and in THAT sense, 5 times as much pain. (The duration of the playback would be 5 times as long compared to the playback of 1 minor headache.) Ok, but do we have something that we can appropriately call a 10? Well, these numbers are meant to represent the amount of pain there is, and we just said that the amount of pain is determined by how shitty something feels.

The question then is: Do 5 experientially independent minor headaches somehow collectively constitute an amount of pain that feels like a 10? Clearly they don't, because only the what-it's-like-of-going-through-5-minor-headaches can plausibly feel like a 10, and 5 experientially independent what-it's-likes-of-going-through-1-minor-headache is not experientially the same as 1 what-it's-like-of-going-through-5-minor-headaches.

You might reply that 5 experientially independent minor headaches collectively constitute a 10 in that each minor headache constitutes an amount of pain represented by 2 and there are 5 such headaches. In other words, the duration of the playback is 5 times as long. There is, in that sense, 5 times the amount of pain, which is 10.

Yes, there is 5 times the amount of pain in THAT sense, which is why I would agree that 5 minor headaches all had by one person involves just as much pain as 5 minor headaches spread among 5 people in THAT sense. BUT, notice that only the number 2 is experientially determined. The 5 is not. The 5 is the number of instances of the minor headaches. As a result, the number 10 is not experientially determined. So, the number 10 simply signifies a certain amount of pain (2) repeated 5 times. It does NOT signify an amount of pain that feels like a 10.

You might not disagree. You might ask, what is the problem here? The problem is that while you can compare a 10 and a 10 that are both determined in this non-purely experiential way, which in effect is what you do to get the result that 5 minor headaches had by one person involves just as much pain as 5 minor headaches spread among 5 people, you CANNOT compare a 10 and a 6 when the 10 is determined in this non-purely experiential way and the 6 is determined in a purely experiential way. For when the numbers are determined in different ways, they signify different things, and are thus incommensurate.

I can make the same point by talking in terms of pain, rather than in terms of numbers. When you say that 5 minor headaches all had by one person involves the same amount of pain as 5 minor headaches spread among 5 people, you are USING 'amount of pain' in a non-purely experiential sense. The amount of pain, so used, is determined by a certain amount of pain used in a purely experiential sense (i.e. an amount of pain determined by how shitty a minor headache feels) x how many minor headaches there are. While you can compare two amounts of pains, so used, with each other, you cannot compare an amount of pain, so used, with a certain amount of pain used in a purely experiential sense (i.e. an amount of pain determined by how shitty a major headache feels).

Of course, how many minor headaches there are will affect the amount of pain there is (used in a purely experiential sense) when the headaches all occur in one person. For 5 minor headaches all had by one person result in the what-it's-like-of-going-through-5-minor-headaches, which feels shittier (i.e. is experientially worse) than a major headache and thus constitutes more pain than a major headache. Thus, when I say 5 minor headaches all had by one person involve an amount of pain that is more than the amount of pain of a major headache, I am using both instances of "amount of pain" in a purely experiential sense. I am comparing apples to apples. But when you say that 5 minor headaches spread among 5 people involve an amount of pain that is more than the amount of pain of a major headache, you are using the former "amount of pain" in a non-purely experiential sense (the one I described in the previous paragraph) and the latter "amount of pain" in a purely experiential sense. You are comparing apples to oranges.

In this response, I've tried very hard to make clear why it is that even though your account of 'involves more pain than' can work for 5 minor headaches all had by one person vs 5 minor headaches spread across 5 people (and get the result you want: i.e. that the amount of pain in each case is the same), your account cannot work for 5 minor headaches spread across 5 people vs 1 major headache. Thus, your account cannot achieve what you think it can achieve.

I worry that I haven't been as clear as I wish to be (despite my efforts), so if any part of it comes off unclear, I hope you can be as charitable as you can and make an effort to understand what I'm saying, even if you disagree with it.

comment by Alex_Barry · 2018-03-29T22:59:36.870Z · score: 0 (0 votes) · EA(p) · GW(p)

I just wanted to say I thought this comment did a good job explaining the basis behind your moral intuitions, which I had not really felt a strong motivation for before now. I still don't find it particularly compelling myself, but I can understand why others could find it important.

Overall I find this post confusing though, since the framing seems to be "Effective Altruism is making an intellectual mistake" whereas you actually just seem to have a different set of moral intuitions from those involved in EA, which are largely incompatible with effective altruism as it is currently practiced. Whilst you could describe moral differences as intellectual mistakes, this does not seem to be a standard or especially helpful usage.

The comments etc. then seem mostly to have been people explaining why they don't find compelling your moral intuition that 'non-purely experientially determined' and 'purely experientially determined' amounts of pain cannot be compared. Since we seem to have reached a point of fundamental disagreement about considered moral values, it does not seem that attempting to change each other's minds is very fruitful.

I think I would have found this post more conceptually clear if it had been structured:

  1. EA conclusions actually require an additional moral assumption/axiom - and so if you don't agree with this assumption then you should not obviously follow EA advice.

  2. (Optionally) Why you find the moral assumption unconvincing/unlikely

  3. (Extra Optionally) Tentative suggestions for what should be done in the absence of the assumption.

Where throughout the assumption is the commensurability of 'non-purely experientially determined' and 'purely experientially determined' experience.

In general I am not very sure what you had in mind as the ideal outcome of this post. I'm surprised if you thought most EAs agreed with you on your moral intuition, since so much of EA is predicated on its converse (as is much of established consequential thinking etc.). But equally I am not sure what value we can especially bring to you if you feel very sure in your conviction that the assumption does not hold.

(Note I also made this as a top level comment so it would be less buried, so it might make more sense to respond (if you would like to) there)

comment by kbog · 2018-03-29T06:45:08.773Z · score: 0 (0 votes) · EA(p) · GW(p)

Just because two things are different doesn't mean they are incommensurate. It is easy to compare apples and oranges: for instance, the orange is healthier than the apple, the orange is heavier than the apple, the apple is tastier than the orange. You also compare two different things, by saying that a minor headache is less painful than torture, for instance. You think that different people's experiences are incommensurable, but I don't see why.

In fact, there is good reason to think that any two values are necessarily commensurable. For if something has value to an agent, then it must provide motivation to them should they be perceiving, thinking and acting correctly, for that is basically what value is. If something (e.g. an additional person's suffering) does not provide additional motivation, then either I'm not responding appropriately to it or it's not a value. And if my motivation is to follow the axioms of expected utility theory then it must be a function over possible outcomes where my motivation for each outcome is a single number. And if my motivation for an outcome is a single number, then it must take the different values associated with that outcome and combine them into one figure denoting how valuable I find it overall.

comment by Jeffhe · 2018-03-29T18:18:23.990Z · score: 1 (1 votes) · EA(p) · GW(p)

Just because two things are different doesn't mean they are incommensurate.

But I didn't say that. As long as two different things share certain aspects/dimensions (e.g. the aspect of weight, the aspect of nutrition, etc...), then of course they can be compared on those dimensions (e.g. the weight of an orange is more than the weight of an apple, i.e., an orange weighs more than an apple).

So I don't deny that two different things that share many aspects/dimensions may be compared in many ways. But that's not the problem.

The problem is that when you say that the amount of pain involved in 5 minor headaches spread among 5 people is more than the amount of pain involved in 1 major headache (i.e., 5 minor headaches spread among 5 people involves more pain than 1 major headache), you are in effect saying something like the WEIGHT of an orange is more than the NUTRITION of an apple. This is because the former "amount of pain" is used in a non-purely experiential sense while the latter "amount of pain" is used in a purely experiential sense. When I said you are comparing apples to oranges, THIS is what I meant.

comment by kbog · 2018-03-30T05:25:38.457Z · score: 0 (0 votes) · EA(p) · GW(p)

you are in effect saying something like the WEIGHT of an orange is more than the NUTRITION of an apple.

No, I am effectively saying that the weight of five oranges is more than the weight of one orange.

This is because the former "amount of pain" is used in a non-purely experiential sense while the latter "amount of pain" is used in a purely experiential sense

That is wrong. In both cases I evaluate the quality of the experience multiplied by the number of subjects. It's the same aspect for both cases. You're just confused by the fact that, in one of the cases but not the other, the resulting quantity happens to be the same as the number provided by your "purely experiential sense". If I said "this apple weighs 100 grams, and this orange weighs 200 grams," you wouldn't tell me that I'm making a false comparison merely because both the apple and the orange happen to have 100 calories. There is nothing philosophically noteworthy here; you have just stumbled upon the fact that any number multiplied by one is still that number.

As if that isn't decisive enough, imagine for instance that it was a comparison between two sufferers and five, rather than between one and five. Then you would obviously have no argument at all, since my evaluation of the two people's suffering would obviously not be in the "purely experiential sense" that you talk about. So clearly I am right whenever more than one person is involved. And it would be strange for utilitarianism to be right in all those cases, but not when there was just one person. So it must be right all the time.

comment by Jeffhe · 2018-03-30T21:17:49.297Z · score: 1 (1 votes) · EA(p) · GW(p)

You'll need to read to the very end of this reply before my argument seems complete.

In both cases I evaluate the quality of the experience multiplied by the number of subjects. It's the same aspect for both cases. You're just confused by the fact that, in one of the cases but not the other, the resulting quantity happens to be the same as the number provided by your "purely experiential sense".

Case 1: 5 minor headaches spread among 5 people

Case 2: 1 major headache had by one person

Yes, I understand that in each case, you are multiplying a certain amount of pain (determined solely by how badly something feels) by the number of instances to get a total amount of pain (determined via this multiplication), and then you are comparing the total amount of pain in each case.

For example, in Case 1, you are multiplying the amount of pain of a minor headache (determined solely by how badly a minor headache feels) by the number of instances to get a total amount of pain (determined via this multiplication). Say each minor headache feels like a 2, then 2 x 5 = 10. Call this 10 “10A”.

Similarly, in Case 2, you are multiplying the amount of pain of a major headache (determined solely by how badly a major headache feels) by the number of instances, in this case just 1, to get a total amount of pain (determined via this multiplication). Say the major headache feels like a 6, then 6 x 1 = 6. Call this latter 6 “6A”.

You then compare the 10A with the 6A. Moreover, since the amounts of pain represented by 10A and 6A are both gotten by multiplying one dimension (i.e. amount of pain, determined purely experientially) by another dimension (instances), you claim that you are comparing things along the same dimension, namely, A. But this is problematic.

To see the problem, consider

Case 3: 5 minor headaches all had by 1 person.

Here, like in Case 1, we can multiply the amount of pain of a minor headache (determined purely experientially) by the number of instances to get a total amount of pain (determined via this multiplication). 2 x 5 = 10. This 10 is the 10A sort.

OR, unlike in Case 1, we can determine the final amount of pain not by multiplying those things, but instead in the same way we determine the amount of pain of a single minor headache, namely, by considering how badly the 5 minor headaches feels. We can consider how badly the what-it's-like-of-going-through-5-minor-headaches feels. It feels like a 10, just as a minor headache feels like a 2, and a major headache feels like a 6. Call these 10E, 2E and 6E respectively. The ‘E’ signifies that the numbers were determined purely experientially.

Ok. I'm sure you already understand all that. Now here's the problem.

You insist that there is no problem with comparing 10A and 6A. After all, they are both determined in the same way: multiplying an experience by its instances.

I am saying there is a problem with that. The problem is that saying 10A is more than 6A makes no sense. Why not? Because, importantly, what goes into determining the 10A and 6A are 2E and 6E respectively: 2E x 5 = 10A. 6E x 1 = 6A. So what?

Well think about it. 2E x 5 instances is really just 2E, 2E, 2E, 2E, 2E.

And 6E x 1 instance is really just 6E.

So when you assert 10A is more than 6A, you are really just asserting that (2E, 2E, 2E, 2E, 2E) is more than 6E.

But then notice that, at bottom, you are still working with the dimension of experience (E) - the dimension of how badly something feels. The problem for you, then, is that the only intelligible form of comparison on this dimension is the "is experientially more bad than" (i.e. is experientially worse than) comparison.

(Of course, there is also the dimension of instances, and an intelligible form of comparison on this dimension is the “is more in instances than” comparison. For example, you can say 5 minor headaches is more in instances than 1 major headache (i.e. 5 > 1). But obviously, the comparison we care about is not merely a comparison of instances.)

Analogously, when you are working with the dimension of weight - the dimension of how much something weighs -, the only intelligible form of comparison is "weighs more than".

Now, you keep insisting that there is an analogy between

1) your way of comparing the amounts of pain of various pain episodes (e.g. 5 minor headaches vs 1 major headache), and

2) how we normally compare the weights of various things (e.g. 5 small oranges vs 1 big orange).

For example, you say,

No, I am effectively saying that the weight of five oranges is more than the weight of one orange.

So let me explain why they are DIS-analogous. Consider the following example:

Case 1: Five small oranges, 2lbs each. (Just like 5 minor headaches, each feeling like a 2).

Case 2: One big orange, 6lbs. (Just like 1 major headache that feels like a 6).

Now, just as the 2 of a minor headache is determined by how badly it feels, the 2 of a small orange is determined by how much it weighs. So just as we write, 2E x 5 = 10A, we can similarly write 2W x 5 = 10A. And just as we write, 6E x 1 = 6A, we can similarly write 6W x 1 = 6A.

Now, if you assert that (the total amount of weight represented by) 10A is more than 6A, I would have NO problem with that. Why not? Because the comparison "is more than" still occurs on the dimension of weight (W). You are saying 5 small oranges WEIGH more than 1 big orange. The comparison thus occurs on the SAME dimension that was used to determine the numbers 2 and 6 (numbers that in turn determined 10A and 6A): A small orange was determined to be 2 by how much it WEIGHED. Likewise with the big orange. And when you say 10A is more than 6A, the comparison is still made on that dimension.

By contrast, when you assert that (the total amount of pain represented by) 10A is more than 6A, the "is more than" does not occur on the dimension of experience anymore. It does not occur on the dimension of how badly something feels anymore. You are not saying that 5 minor headaches spread among 5 people is EXPERIENTIALLY WORSE than 1 major headache had by 1 person. You are saying something else. In other words, the comparison does NOT occur on the same dimension that was used to determine the numbers 2 and 6 (numbers that in turn determined 10A and 6A): A minor headache was determined to be 2 by how EXPERIENTIALLY BAD IT FELT. Likewise with the major headache. Yet, when you say 10A is more than 6A, you are not making a comparison on that dimension anymore.

So I hope you see how your way of comparing the amounts of pain between various pain episodes is disanalogous to how we normally compare the weights between various things.

Now, just as the dimension of weight (i.e. how much something weighs) and the dimension of instances (i.e. how many instances there are) do not combine to form some substantive third dimension on which to compare 5 small oranges with a big orange, the dimension of experience (i.e. how badly something feels) and the dimension of instances do not combine to form some substantive third dimension on which to compare 5 minor headaches spread among 5 people and 1 major headache had by one person.

At best, they combine to form a trivial third dimension consisting in their collection/conjunction, on which one can intelligibly compare, say, 32 minor headaches with 23 minor headaches, irrespective of how the 32 and 23 minor headaches are spread. This trivial dimension is the dimension of "how many instances (i.e. how much) of a certain pain there is". On this dimension, 5 minor headaches spread among 5 people cannot be compared with a MAJOR headache, because they are different pains, but 5 minor headaches spread among 5 people can be compared with 5 minor headaches all had by 1 person. Moreover, the result of such a comparison would be that they are the same on this dimension (as I allowed in an earlier reply). But this is a small victory given that this dimension won't allow any comparisons between different pains (e.g. 5 minor headaches and a major headache).

comment by kbog · 2018-03-30T22:40:57.140Z · score: 0 (0 votes) · EA(p) · GW(p)

But then notice that, at bottom, you are still working with the dimension of experience (E) - the dimension of how badly something feels. The problem for you, then, is that the only intelligible form of comparison on this dimension is the "is experientially more bad than" (i.e. is experientially worse than) comparison

What I am working with "at bottom" is irrelevant here, because I'm not making a comparison with it. There are lots of things we compare that involve different properties "at bottom".

But obviously, the comparison we care about is not merely a comparison of instances

And obviously the comparison we care about is not merely a comparison of how bad it feels for any given person.

The comparison thus occurs on the SAME dimension

No it doesn't. That is, if I were to apply the same logic to oranges that you do to people, I would say that there is Mono-Orange-Weight, defined as the most weight that is ever present in one of a group of oranges, and Multi-Orange-Weight, defined as the total weight that is present in a group of oranges, and insist that you cannot compare one to the other, so one orange weighs the same as five oranges.

Of course that would be nonsense, as it's true that you can compare orange weights. But you can see how your argument fails. Because this is all you are doing; you are inventing a distinction between "purely experiential" and "non-purely experiential" badness and insisting that you cannot compare one against the other by obfuscating the difference between applying either metric to a single entity.

A minor headache was determined to be 2 by how EXPERIENTIALLY BAD IT FELT

But that isn't how I determined that one person with a minor headache has 2 units of pain total.

Yet, when you say 10A is more than 6A, you are not making a comparison on that dimension anymore

You are right, I am comparing one person's "non purely experiential" headache to five people's "non purely experiential" headaches.

So I hope you see how your way of comparing the amounts of pain between various pain episodes is disanalogous to how we normally compare the weights between various things.

It's not reasonable to expect me to change my mind when you're repeating the exact same argument that you gave before while ignoring the second argument I gave in my comment.

comment by Jeffhe · 2018-03-31T00:56:11.382Z · score: 0 (0 votes) · EA(p) · GW(p)

hey kbog, I didn't anticipate you would respond so quickly... I was editing my reply while you replied... Sorry about that. Anyways, I'm going to spend the next few days slowly re-reading and sitting on your past few replies in an all-out effort to understand your point of view. I hope you can do the same with just my latest reply (which I've edited). I think it needs to be read to the end for the full argument to come through.

Also, just to be clear, my goal here isn't to change your mind. My goal is just to get closer to the truth as cheesy as that might sound. If I'm the one in error, I'd be happy to admit it as soon as I realize it. Hopefully a few days of dwelling will help. Cheers.

comment by kbog · 2018-03-31T12:48:14.270Z · score: 0 (0 votes) · EA(p) · GW(p)

just as the dimension of weight (i.e. how much something weighs) and the dimension of instances (i.e. how many instances there are) do not combine to form some substantive third dimension on which to compare 5 small oranges with a big orange,

What?

It's the dimension of weight, where the weight of 5 oranges can be more than the weight of one big orange. Weight is still weight when you are weighing multiple things together. If you don't believe me, put 5 oranges on a scale and tell me what you see. The prior part of your comment doesn't have anything to change this.

comment by Jeffhe · 2018-04-10T21:07:27.261Z · score: 0 (0 votes) · EA(p) · GW(p)

Hi kbog,

Sorry for taking a while to get back to you – life got in the way... Fortunately, the additional time made me realize that I was the one who was confused, as I now see very clearly the utilitarian sense of “involves more pain than” that you have been in favor of.

Where this leaves us is with two senses of “involves more pain than” and with the question of which of the two senses is the one that really matters. In this reply, I outline the two senses and then argue for why the sense that I have been in favor of is the one that really matters.

The two senses:

Suppose, for purposes of illustration, that a person who experiences 5 minor toothaches is experientially just as badly off as someone who experiences a major toothache. This supposition, of course, makes use of my sense of “involves more pain than” – the sense that analyzes “involves more pain than” as “is experientially worse than”. This sense compares two what-it’s-likes (e.g., the what-it’s-like-of-going-through-5-minor-toothaches vs the what-it’s-like-of-going-through-a-major-toothache) and compares them with respect to their what-it’s-like-ness – their feel. On this sense, 5 minor toothaches all had by one person involves the same amount of pain as 1 major toothache had by one person in that the former is experientially just as bad as the latter.

On your sense (though not on mine), if these 5 minor toothaches were spread across 5 people, they would still involve the same amount of pain as 1 major toothache had by one person. This is because having 1 major toothache is experientially just as bad as having 5 minor toothaches (i.e. using my sense), which entitles one to claim that the 1 major toothache is equivalent to 5 minor toothaches, since they give rise to distinct what-it’s-likes that are nevertheless experientially just as bad. At this point, it’s helpful to stipulate that one minor toothache = one base unit of pain. That is, let’s suppose that the what-it’s-like-of-going-through-one-minor-toothache is experientially as bad as any of the least experientially bad experience(s) possible. Now, since there are in effect 5 base units of pain in both cases, therefore the cases involve the same amount of pain (in your sense). It is irrelevant that the 5 base units of pain are spread among 5 people in one case. This is because it is irrelevant how those 5 base units of pain feel when experienced together since we are not comparing the cases with respect to their what-it’s-like-ness – their feel. Rather, we are comparing the cases with respect to their quantity of the base unit of pain.
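The contrast between the two senses can be put in a toy calculation. This is only an illustrative sketch of how I understand the two positions: the `sum`/`max` aggregation rules and the base-unit values are my stand-ins, not anything either party has formalized.

```python
# A scenario is a list of per-person pain levels, in base units.
# Per the supposition above, 1 minor toothache = 1 base unit, and a
# major toothache is experientially as bad as 5 minor ones, so = 5.

def total_base_units(scenario):
    # kbog's sense (as I read it): add up base units across everyone.
    return sum(scenario)

def worst_experience(scenario):
    # Jeffhe's sense (as I read it): compare the worst single
    # what-it's-like that anyone undergoes.
    return max(scenario)

spread = [1, 1, 1, 1, 1]   # 5 minor toothaches among 5 people
one_person = [5]           # 1 major toothache (or 5 minor) for 1 person

print(total_base_units(spread) == total_base_units(one_person))  # True
print(worst_experience(spread) == worst_experience(one_person))  # False
```

The disagreement in the thread is over which of these two comparisons is the one that morally matters, not over the arithmetic itself.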

Which is the sense that really matters?

I believe the sense I am in favor of is the one that really matters, and that this becomes clear when we remind ourselves why we take pain to matter in the first place.

We take pain to matter because of its negative felt character – because of how it feels. I argue that we should favor my sense of “involves more pain than” because it fully respects this fact, whereas the sense you’re in favor of goes against the spirit of this fact.

According to your sense, 5 minor toothaches spread among 5 people involves the same amount of pain as one major toothache had by one person. But doesn't this clearly go against the spirit of the fact that pain matters solely because of how it feels? None of the 5 people feels anything remotely bad. There is simply no experience of anything remotely bad on their side of the equation. They each feel a very mild pain – unpleasant enough to be perceived to be experientially bad, but that’s it. That’s the worst what-it's-like on their side of the equation. Yet, a bundle of 5 of these mild what-it's-likes somehow involves the same amount of pain as one major toothache. That can only be acceptable if the felt character of the major toothache (and of pain in general) is not as important to you as the sheer quantity of very mild pains (i.e. of base units of pain). But this is against the spirit of why pain matters.

comment by kbog · 2018-04-11T04:20:25.983Z · score: 0 (0 votes) · EA(p) · GW(p)

The 5000 pains are only worse if 5000 minor pains experienced by one person is equivalent to one excruciating pain. If so, then 5000 minor pains for 5000 people being equivalent to one excruciating pain doesn't go against the badness of how things feel; at least it doesn't seem counterintuitive to me.

Maybe you think that no amount of minor pains can ever be equally important as one excruciating pain. But that's a question of how we evaluate and represent an individual's well-being, not a question of interpersonal comparison and aggregation.

comment by Jeffhe · 2018-04-12T00:41:53.334Z · score: 0 (0 votes) · EA(p) · GW(p)

Hey kbog, if you don't mind, let's ignore my example with the 5000 pains because I think my argument can more clearly be made in terms of my toothache example since I have already laid a foundation for it. Let me restate that foundation and then state my argument in terms of my toothache example. Thanks for bearing with me.

The foundation:

Suppose 5 minor toothaches had by one person is experientially just as bad as 1 major toothache had by one person.

Given the supposition, you would claim: 5 minor toothaches spread among 5 people involves the same amount of pain as 1 major toothache had by one person.

Let me explain what I think is your reasoning step by step:

P1) 5 minor toothaches had by one person and 1 major toothache had by one person give rise to two different what-it's-likes that are nevertheless experientially JUST AS BAD. (By above supposition) (The two different what-it's-likes are: the what-it's-like-of-going-through-5-minor-toothaches and the what-it's-like-of-going-through-1-major-toothache.)

P2) Therefore, we are entitled to say that 5 minor toothaches had by one person is equivalent to 1 major toothache had by one person. (By P1)

P3) 5 minor toothaches spread among 5 people is 5 minor toothaches, just as 5 minor toothaches had by one person is 5 minor toothaches, so there is the same quantity of minor toothaches (or same quantity of base units of pain) in both cases. (Self-evident)

P4) Therefore, we are entitled to say that 5 minor toothaches spread among 5 people is equivalent to 5 minor toothaches had by one person. (By P3)

P5) Therefore, we are entitled to claim that 5 minor toothaches spread among 5 people is equivalent to 1 major toothache had by one person. (By P2 and P4)

C) Therefore, 5 minor toothaches spread among 5 people involves the same amount of pain as 1 major toothache had by one person. (By P5)

As the illustrated reasoning shows, 5 minor toothaches spread among 5 people involves the same amount of pain as 1 major toothache had by one person (i.e. C) only if 5 minor toothaches had by ONE person is equivalent to 1 major toothache (i.e. P2). You agree with this.

Moreover, as the illustrated reasoning also shows, the reason why 5 minor toothaches had by one person is equivalent to 1 major toothache (i.e. P2) is because they give rise to two different what-it's-likes that are nevertheless experientially just as bad (i.e. P1). I presume you agree with this too. Call this reason "Reason E", E for "experientially just as bad")

Furthermore, as the illustrated reasoning shows, the reason why 5 minor toothaches spread among 5 people is equivalent to 5 minor toothaches had by one person is DIFFERENT from the reason why 5 minor toothaches had by one person is equivalent to 1 major toothache had by one person. That is, 5 minor toothaches spread among 5 people is equivalent to 5 minor toothaches had by one person (i.e. P4) because they share the same quantity of base units of pain, namely 5, irrespective of how the 5 base units of pain are spread (i.e. P3), and NOT because they give rise to two what-it's-likes that are experientially just as bad (as they clearly don't). Call this reason (i.e. P3) "Reason S", S for "same quantity of base units of pain".

Argument:

So there are these two different types of reasons underlying your equivalence claims (I will use "=" to signify "is equivalent to"):

5 MiTs/5 people = 5 MiTs/1 person (by Reason S)

5 MiTs/1 person = 1 MaT/1 person (by Reason E)

Now, never mind the transitivity problem that Reasons S and E create for your reasoning. Indeed, that's not the problem I want to raise for your sense of "involves more pain."

The problem with your sense of "involves more pain" is that it admits of Reason S as a basis for saying X involves more pain than Y. But Reason S, unlike Reason E, is against the spirit of why we take pain to matter. We take pain to matter because of the badness of how it feels, as you rightly claim. But Reason S doesn't give a crap about how bad the pains on the two sides of the equation FEEL; it doesn't care that 5 MiTs/1 person constitutes a pain that feels a whole lot worse than anything on the other side of the equation. It just cares about how many base units of pain there are on each side. And, obviously, more base units of pain does not mean there is experientially worse pain, precisely because the base units of pain can be spread out among many different people.

Maybe you think that no amount of minor pains can ever be equally important as one excruciating pain.

This is an interesting question. Perhaps the what-it's-like-of-going-through-an-INFINITE-number-of-a-very-mild-sort-of-pain cannot be experientially worse than the what-it's-like-of-suffering-one-instance-of-third-degree-burns. If so, then I would think that 1 third-degree burn/1 person is morally worse than infinite mild pains/1 person. In any case, I don't think what I think here is relevant to my argument against your utilitarian sense of "involves more pain than".

comment by kbog · 2018-04-16T17:34:59.888Z · score: 0 (0 votes) · EA(p) · GW(p)

the reason why 5 minor toothaches spread among 5 people is equivalent to 5 minor toothaches had by one person is DIFFERENT from the reason why 5 minor toothaches had by one person is equivalent to 1 major toothache had by one person.

No, both equivalencies are justified by the fact that they involve the same amount of base units of pain.

But Reason S doesn't give a crap about how bad the pains on the two sides of the equation FEEL

Sure it does. The presence of pain is equivalent to feeling bad. Feeling bad is precisely what is at stake here, and all that I care about.

In any case, I don't think what I think here is relevant to my argument against your utilitarian sense of "involves more pain than".

Yes, that's what I meant when I said "that's a question of how we evaluate and represent an individual's well-being, not a question of interpersonal comparison and aggregation."

comment by Jeffhe · 2018-04-23T16:42:54.031Z · score: 0 (0 votes) · EA(p) · GW(p)

the reason why 5 minor toothaches spread among 5 people is equivalent to 5 minor toothaches had by one person is DIFFERENT from the reason why 5 minor toothaches had by one person is equivalent to 1 major toothache had by one person.

No, both equivalencies are justified by the fact that they involve the same amount of base units of pain.

So you're saying that just as 5 MiTs/5 people is equivalent to 5 MiTs/1 person because both sides involve the same amount of base units of pain, 5 MiTs/1 person is equivalent to 1 MaT/1 person because both sides involve the same amount of base units of pain (and not because both sides give rise to what-it's-likes that are experientially just as bad).

My question to you then is this: On what basis are you able to say that 1 MaT/1 person involves 5 base units of pain?

But Reason S doesn't give a crap about how bad the pains on the two sides of the equation FEEL

Sure it does. The presence of pain is equivalent to feeling bad. Feeling bad is precisely what is at stake here, and all that I care about.

Reason S cares about the amount of base units of pain there are because pain feels bad, but in my opinion, that doesn't sufficiently show that it cares about pain-qua-how-it-feels. It doesn't sufficiently show that it cares about pain-qua-how-it-feels because 5 base units of pain all experienced by one person feels a whole heck of a lot worse than anything felt when 5 base units of pain are spread among 5 people, yet Reason S completely ignores this difference. If Reason S truly cared about pain-qua-how-it-feels, it cannot ignore this difference.

I understand where you're coming from though. You hold that Reason S cares about the quantity of base units of pain precisely because pain feels bad, and that this fact alone sufficiently shows that Reason S is in harmony with the fact that we take pain to matter because of how it feels (i.e. that Reason S cares about pain-qua-how-it-feels).

However, given what I just said, I think this fact alone is too weak to show that Reason S is in harmony with the fact that we take pain to matter because of how it feels. So I believe my objection stands.

Have we hit bedrock?

comment by kbog · 2018-04-23T22:44:02.176Z · score: 0 (0 votes) · EA(p) · GW(p)

On what basis are you able to say that 1 MaT/1 person involves 5 base units of pain?

Because you told me that it's the same amount of pain as five minor toothaches and you also told me that each minor toothache is 1 base unit of pain.

5 base units of pain all experienced by one person feels a whole heck of a lot worse than anything felt when 5 base units of pain are spread among 5 people, yet Reason S completely ignores this difference. If Reason S truly cared about pain-qua-how-it-feels, it cannot ignore this difference.

If you mean that it feels worse to any given person involved, yes it ignores the difference, but that's clearly the point, so I don't know what you're doing here other than merely restating it and saying "I don't agree."

On the other hand, you do not care how many people are in pain, and you do not care how much pain someone experiences so long as there is someone else who is in more pain, so if anyone's got to figure out whether or not they "care" enough it's you.

Have we hit bedrock?

You've pretty much been repeating yourself for the past several weeks, so, sure.

comment by Jeffhe · 2018-04-24T03:26:21.316Z · score: 0 (0 votes) · EA(p) · GW(p)

Because you told me that it's the same amount of pain as five minor toothaches and you also told me that each minor toothache is 1 base unit of pain.

Where in the supposition or the line of reasoning that I laid out earlier (i.e. P1) through to P5)) did I say that 1 major toothache involves the same amount of pain as 5 minor toothaches?

I attributed that line of reasoning to you because I thought that was how you would get to C) from the supposition that 5 minor toothaches had by one person is experientially just as bad as 1 major toothache had by one person.

But you then denied that that line of reasoning represents your line of reasoning. Specifically, you denied that P1) is the basis for asserting P2). When I asked you what your basis for P2) was, you asserted that I told you that 1 major toothache involves the same amount of pain as five minor toothaches. But where did I say this?

In any case, it would certainly help if you described your actual step by step reasoning from the supposition to C), since, apparently, I got it wrong.

If you mean that it feels worse to any given person involved, yes it ignores the difference, but that's clearly the point, so I don't know what you're doing here other than merely restating it and saying "I don't agree."

I'm not merely restating the fact that Reason S ignores this difference. I am restating it as part of a further argument against your sense of "involves more pain than" or "involves the same amount of pain as". The argument in essence goes:

P1) Your sense relies on Reason S.

P2) Reason S does not care about pain-qua-how-it-feels (because it ignores the above stated difference).

P3) We take pain to matter because of how it feels.

C) Therefore, your sense is not in harmony with why pain matters (or at least why we take pain to matter).

I had to restate that Reason S ignores this difference as my support for P2, so it was not merely stated.

On the other hand, you do not care how many people are in pain, and you do not care how much pain someone experiences so long as there is someone else who is in more pain, so if anyone's got to figure out whether or not they "care" enough it's you.

Both accusations are problematic.

The first accusation is not entirely true. I don't care about how many people are in pain only in situations where I have to choose between helping, say, Amy and Susie or just Bob (i.e. situations where a person in the minority party does not overlap with anyone in the majority party). However, I would care about how many people are in pain in situations where I have to choose between helping, say, Amy and Susie or just Amy (i.e. situations where the minority party is a mere subset of the majority party). This is due to the strict Pareto principle, which would make Amy and Susie each suffering morally worse than just Amy suffering, but would not make Amy and Susie suffering morally worse than Bob suffering. I don't want to get into this at this point because it's not very relevant to our discussion. Suffice it to say that it's not entirely true that I don't care about how many people are in pain.

The second accusation is plain false. As I made clear in my response to Objection 2 in my post, I think who suffers matters. As a result, if I could either save one person from suffering some pain or another person from suffering a slightly lesser pain, I would give each person a chance of being saved in proportion to how much each has to suffer. This is what I think I should do. Ironically, your second accusation against me is precisely true of what you stand for.
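The proportional-chance procedure described here can be sketched as a weighted lottery. This is a minimal sketch of my reading of the proposal; the names and pain weights are illustrative assumptions.

```python
import random

def proportional_chance_rescue(stakes):
    """Pick one person to help, with probability proportional to how
    much each stands to suffer (the procedure described above)."""
    people = list(stakes)
    weights = [stakes[p] for p in people]
    # random.choices draws one winner with the given relative weights.
    return random.choices(people, weights=weights, k=1)[0]

# Hypothetical stakes: one person faces a pain of 6, the other a 5.
stakes = {"Amy": 6, "Bob": 5}
winner = proportional_chance_rescue(stakes)  # Amy with p = 6/11, Bob with p = 5/11
```

Note the contrast with the expected-value rule, which would always help the person with the larger stake rather than run the lottery.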

You've pretty much been repeating yourself for the past several weeks, so, sure.

In my past few replies, I have:

1) Outlined in explicit terms a line of reasoning that got from the supposition to C), which I attributed to you.

2) Highlighted that that line of reasoning appealed to Reason S.

3) On that basis, argued that your sense of "involves the same amount of pain as" goes against the spirit of why pain matters.

If that comes across to you as "just repeating myself for the past several weeks", then I can only think that you aren't putting enough effort into trying to understand what I'm saying.

comment by JanBrauner · 2018-03-13T09:02:02.927Z · score: 5 (5 votes) · EA(p) · GW(p)

You think aggregating welfare between individuals is a flawed approach, such that you are indifferent between alleviating an equal amount of suffering for 1 or each of a million people.

You conclude that these values recommend giving to charities that directly address the sources of the most intense individual suffering, and that between them, one should not choose by cost-effectiveness, but randomly. One should not give to, say, GiveDirectly, which does not directly tackle the most intense suffering.

This conclusion seems correct only for clear-cut textbook examples. In the real world, I think, your values fail to recommend anything. You can never know for certain how many people you are going to help. Everything is probabilities and expected value:

Say, for the sake of the argument, you think that severe depression is the cause of the most intense individual suffering. You could give your $10,000 to a mental health charity, and they will in expectation prevent 100 people (made-up number) from getting severe depression.

However, if you give $10,000 to GiveDirectly, that will certainly affect the recipients strongly, and maybe in expectation prevent 0.1 cases of severe depression.

Actually, if you take your $10,000 and buy that sweet, sweet Rolex with it, there is a tiny chance that this will prevent the jewelry store owner from going bankrupt, being dumped by their partner and, well, developing severe depression. $10,000 to the jeweller prevents an expected 0.0001 cases of severe depression.

So, given your values, you should be indifferent between those.

Even worse, all three actions also harbour tiny chances of causing severe depression. Even the mental health charity, for every 100 patients it prevents from developing depression, will maybe cause depression in 1 patient (because interventions sometimes have adverse effects, ...). So if you decide between burning the money or giving it to the mental health charity, you decide between preventing 100 or 1 episodes of depression. A decision that you are, given your stated values, indifferent between.
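The expected-value comparison being pressed here can be made explicit. This sketch just uses the made-up figures from the comment; all numbers are illustrative.

```python
# Expected severe-depression cases prevented per $10,000, using the
# made-up figures above (illustrative only, as in the comment).
options = {
    "mental health charity": 100,
    "GiveDirectly": 0.1,
    "Rolex purchase": 0.0001,
}

# On an expected-value view the options are ranked by these numbers;
# on a view that only asks whether there is *some* chance of helping
# a worst-off person, all three options qualify alike.
best = max(options, key=options.get)
print(best)  # mental health charity
```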

Further arguments why approaches that try to avoid interpersonal welfare aggregation fail in the real world can be found here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1781092

comment by Jeffhe · 2018-03-13T23:41:04.752Z · score: 0 (2 votes) · EA(p) · GW(p)

Hi Jan,

Thanks a lot for your response.

I wonder if it is too big of a concession to make to say that "This conclusion seems correct only for clear-cut textbook examples." My argument against effective altruism was an attempt to show that it is theoretically/fundamentally flawed, even if (per your objection) I can't criticize the actual pattern of donation it is responsible for (e.g. pushing a lot of funding to GiveDirectly), although I will offer a response to your objection.

I remember listening to a podcast featuring professor MacAskill (one of the presumed founders of EA) where he was recounting a debate he had with someone (can't remember who). That someone raised (if I remember correctly) the following objection: If there was a burning house and you could either save the boy trapped inside or a painting hanging on the wall which you could sell and use that money to save 100 kids in a third world country from a similar pain that the boy would face, you should obviously save the boy. But EA says to save the painting. Therefore EA is false. Professor MacAskill's response (if I remember correctly) was to bite the bullet and say that while it might be hard to stomach, that is really what we should do.

If professor MacAskill's view represents EA's position, then I assume that if you concede that we should flip a coin in such a case, then there is an issue.

Regarding whether my argument recommends anything in the real world, I think it does.

First, just to be clear, since we cannot give each person a chance of being helped that is proportionate to what they have to suffer, I said that I personally would choose to use my money to help anyone among the class of people who stand to suffer the most (see Section F.). I wouldn't try to give each of the people among this class an equal chance, because that is equally impossible. I would simply choose to help those whom I come across or know about, I guess. Note that I didn't explain why I would choose to help this class of people, but the reason is simply that were it possible to give each person a chance of being helped proportional to their suffering, those who stand to suffer the most have the highest chance of winning. (I have since updated the post to include this explanation, thanks.)

I think, now that I have clarified my position, it should be clear that my way of things can recommend actions. There are many opportunities where donating almost certainly prevents or alleviates a certain extreme suffering to someone. Maybe depression is not one of those cases, but I would imagine that severe malnutrition is very painful. So is torture (which oftentimes can be prevented if a ransom is paid). Since the pattern of donation that EA promotes is likely very different from the pattern of donation that arises from my way of things, my way of things provides a real alternative practically speaking (but maybe up to a limit before the patterns of donations would converge).

Btw, I would not be absolutely against giving to GiveDirectly if there is a statistically good chance that they will save at least one person from one of the worst kinds of suffering AND there wasn't any other cheaper practical way to help that very person (which is likely the case because we don't even know who that person is). However, I would personally donate to charities where there is a near certainty of prevention or alleviation, simply because at the end of the day my donation actually helped someone, whereas a statistically good chance may not pan out, in which case I haven't helped the worst off.

Yes, by doing so, I perhaps end up allowing someone to suffer in one of the worst ways who otherwise wouldn't have suffered had I (and everyone else) given to GiveDirectly. But, as I made clearer in Section F, there is no way to give each person an appropriate chance of being helped, not even if we just considered those people who stand to suffer the worst. And so, at the end of the day, I am forced to make a choice to help a particular person anyway.

comment by G Gordon Worley III (gworley3) · 2018-03-13T19:19:39.346Z · score: 4 (4 votes) · EA(p) · GW(p)

I think you are conflating EA with utilitarianism/consequentialism. To be fair, this is totally understandable, since many EAs are consequentialists and consequentialist EAs may not be careful to make or even see such a distinction. But as someone who is closest to being a virtue ethicist (although my actual metaethics are way more complicated), I see EA as being mainly about intentionally focusing on effectiveness, rather than just doing what feels good, in our altruistic endeavors.

comment by Jeffhe · 2018-03-19T18:15:45.618Z · score: 0 (0 votes) · EA(p) · GW(p)

Hey gworley3,

Here's the comment I made about the difference between effective-altruism and utilitarianism (if you're interested): http://effective-altruism.com/ea/1ll/cognitive_and_emotional_barriers_to_eas_growth/dij

comment by Jeffhe · 2018-03-14T00:00:32.255Z · score: 0 (0 votes) · EA(p) · GW(p)

Hi gworley3,

Thanks for your comment.

I don't think I'm conflating EA with utilitarianism. In fact, I made a comment a few days ago specifically pointing out how they might differ under the post "Cognitive and emotional barriers to EA's growth". If you still think I'm conflating things, please point out what in specific so I can address it. Thanks.

comment by kbog · 2018-03-30T06:11:10.842Z · score: 0 (0 votes) · EA(p) · GW(p)

That EA and utilitarianism are different is precisely the point being made here: you have given an argument against utilitarianism, but EA is not utilitarianism, so the argument wouldn't demonstrate that EA is flawed.

comment by Jeffhe · 2018-03-31T02:09:01.903Z · score: 0 (0 votes) · EA(p) · GW(p)

Only my response to Objection 1 is more or less directed to the utilitarian. My response to Objection 2 is meant to defend against other justifications for saving the greater number, such as leximin or cancelling strategies. In any case, I think most EAs (even the non-utilitarians) will appeal to utilitarian reasoning to justify saving the greater number, so addressing utilitarian reasoning is important.

comment by kbog · 2018-03-31T12:31:35.556Z · score: 0 (0 votes) · EA(p) · GW(p)

It's not about responses to objections, it's about the thesis itself.

comment by Evan_Gaensbauer · 2018-03-14T00:33:18.350Z · score: 3 (3 votes) · EA(p) · GW(p)

If you think PETA is the best bet for reducing suffering, you might want to check out other farm animal advocacy organizations at Animal Charity Evaluators' website. The Organization to Prevent Intense Suffering (OPIS) is an EA-aligned organization which has a more explicit focus on advancing projects which directly mitigate abject and concrete suffering. You might also be interested in their work.

comment by Jeffhe · 2018-03-14T00:54:52.608Z · score: 1 (3 votes) · EA(p) · GW(p)

Wow, their name says it all. I didn't know about OPIS - I'll definitely check them out. Will potentially be very useful for my own charitable activities.

Also, thanks for the link to Animal Charity Evaluators - I didn't know about them either. Although, given that the numbers don't matter to me in trade-off cases, I don't know if it will make a difference. It would if it showed me that donating to another animal charity would help the EXACT same animals I'd help via donating to PETA AND then some (i.e. even more animals). If donating to another animal charity helped different animals (e.g. a different cow than a cow I would have helped by donating to PETA), then even if I could help more animals by donating to this other charity, I would have no overwhelming reason to, because the cow whom I would thereby be neglecting would end up suffering no less than any one of the other animals otherwise would, and as I argued in response to Objection 2, who suffers matters.

Thanks for both suggestions though, Evan!

Note, I have since removed PETA from my post because the point of my post was just to question EA and not to suggest charities to donate to. Thanks for making me realize this.

comment by Denis Drescher (Telofy) · 2018-03-13T19:20:08.838Z · score: 3 (3 votes) · EA(p) · GW(p)

I think Brian Tomasik has addressed this briefly and Nick Bostrom at greater length.

What I’ve found most convincing (quoting myself in response to a case that hinged on the similarity of the two or many experiences):

If you don’t care much more about several very similar beings suffering than about one of them suffering, then you would also not care more about them when they’re your own person moments, right? You’re extremely similar to your version from a month or several months ago, probably more similar than you are to any other person in the whole world. So if you’re suffering for just a moment, it would be no better than suffering for an hour, a day, a month, or any longer multiple of that moment. And if you’ve been happy for just a moment sufficiently recently, then close to nothing more can be done for you for a long time.

I imagine that fundamental things like that are up to the subjectivity of moral feelings – so close to the axioms, it’s hard to argue with even more fundamental axioms. But I for one have trouble empathizing with a nonaggregative axiology at least.

comment by Jeffhe · 2018-03-13T22:43:44.944Z · score: 0 (2 votes) · EA(p) · GW(p)

Hi Telofy,

Thanks for your comment, and quoting oneself is always cool (haha).

In response, if I understand you correctly, you are saying that if I don't prefer saving many similar, though distinct, people each from a certain pain to saving another person from the same pain, then I have no reason to prefer saving myself from many of those pains rather than from just one of them.

I certainly wouldn't agree with that. Were I to suffer many pains, I (just me) would suffer all of them, in such a way that there is a very clear sense in which they, cumulatively, are worse to endure than just one of them. Thus, I find intra-personal aggregation of pains intelligible. I mean, when an old man reminiscing about his past says to us, "The single worst pain I had was that one time when I got shot in the foot, but if you asked me whether I'd go through that again or all those damned headaches I've had over my life, I would certainly ask for the bullet," we get it. Anyway, I think the clear sense I mentioned supports the intra-personal aggregation of pains, and if pains aggregate intra-personally, then more instances of the same pain will be worse than just one instance, and so I have reason to prefer saving myself from more of them.

However, in the case of the many vs one other (call him "C"), the pains are spread across distinct people rather than aggregating in one person, so they cannot in the same sense be worse than the pain that C goes through. And so even if I show no preference in this case, I still have reason to show preference in the former case.

comment by Denis Drescher (Telofy) · 2018-03-15T12:32:26.563Z · score: 0 (0 votes) · EA(p) · GW(p)

Okay, curious. What is to you a “clear experiential sense” is just as clear or unclear to me no matter whether I think about the person moments of the same person or of different people.

It would be interesting if there’s some systematic correlation between cultural aspects and someone’s moral intuitions on this issue – say, a more collectivist culture leading to more strongly discounted aggregation and a more individualist culture leading to more linear aggregation… or something of the sort. The other person I know who has this intuition is from an Eastern European country, hence that hypothesis.

comment by Jeffhe · 2018-03-16T03:36:34.931Z · score: 0 (2 votes) · EA(p) · GW(p)

Imagine you have 5 headaches, each 1 minute long, occurring just 10 seconds apart. From imagining this, you will have an imagined sense of what it's like to go through those 5 headaches.

And, of course, you can imagine yourself in the shoes of 5 different friends, each of whom, we can suppose, has a single 1-minute-long headache of the same kind as above. From imagining this, you will again have an imagined sense of what it's like to go through 5 headaches.

If that's what you mean when you say that "the clear experiential sense is just as clear or unclear to me no matter whether I think about the person moments of the same person or of different people", then I agree.

But when you imagine yourself in the shoes of those 5 friends, what is going on is that one subject-of-experience (i.e. you) takes on the independent what-it's-likes (i.e. experiences) associated with your 5 friends, and IN DOING SO, LINKS THOSE what-it's-likes - which in reality would be experientially independent of each other - TOGETHER IN YOU. So ultimately, when you imagine yourself in the shoes of your 5 friends, you are, in effect, imagining what it's like to go through 5 headaches. But in reality, there would be no such what-it's-like among your 5 friends. The only what-it's-like that would be present would be the what-it's-like-of-going-through-1-headache, which each of your friends would experience. No one would experience the what-it's-like of going through 5 headaches. But that is what is needed for it to be the case that 5 such headaches can be worse than a headache that is worse than any one of them.

Please refer to my conversation with Michael_S for more info.

comment by Denis Drescher (Telofy) · 2018-03-16T22:43:32.380Z · score: 1 (1 votes) · EA(p) · GW(p)

Argh, sorry, I haven’t had time to read through the other conversation yet, but to clarify, my prior was the other one – not that there is something linking the experiences of the five people, but that there is very little, and nothing that seems very morally relevant, linking the experiences of the one person. Generally, people talk about continuity, intentions, and memories linking the person moments of a person, such that we think of them as the same one even though all the atoms of their bodies may’ve been exchanged for different ones.

In your first reply to Michael, you indicate that the third one, memories, is important to you, but I don’t feel that memories in themselves confer moral importance in this sense. What you may mean, though, is that five repeated headaches are more than five times as bad as one because of some sort of exhaustion or exasperation that sets in. I certainly feel that, in my case especially with itches, and I think I’ve read that some estimates of DALY disability weights also take that into account.

But I model that as some sort of ability of a person to “bear” some suffering, which gets worn down over time by repeated suffering without sufficient recovery in between or by too extreme suffering. That leads to a threshold that makes suffering below and above seem morally very different to me. (But I recognize several such thresholds in my moral intuitions, so I seem to be some sort of multilevel prioritarian.)

So when I imagine what it is like to suffer headaches as bad as five people suffering one headache each, I imagine them far apart, with plenty of time to recover, no regularity to them, etc. I’ve had more than five headaches in my life, with no connection between them and nothing pathological, so I don’t even need to rely on my imagination. (Having five attacks of a frequently recurring migraine must be noticeably worse.)

comment by Jeffhe · 2018-03-17T02:12:42.613Z · score: 0 (0 votes) · EA(p) · GW(p)

Hi Telofy,

Thanks for this lucid reply. It has made me realize that it was a mistake to use the phrase "clear experiential sense" because that misleads people into thinking that I am referring to some singular experience (e.g. some feeling of exhaustion that sets in after the final headache). In light of this issue, I have written a "new" first reply to Michael_S to try to make my position clearer. I think you will find it helpful. Moreover, if you find any part of it unclear, please do let me know.

What I'm about to say overlaps with some of the content in my "new" reply to Michael_S:

You write that you don't see anything morally relevant linking the person moments of a single person. Are you concluding from this that there is not actually a single subject-of-experience who feels, say, 5 pains over time (even though we talk as if there is)? Or, are you concluding from this that even if there is actually just a single subject-of-experience who feels all 5 pains over time, it is morally no different from 5 subjects-of-experience who each feels 1 pain of the same sort?

What matters to me at the end of the day is whether there is a single subject-of-experience who extends through time and thus is the particular subject who feels all 5 pains. If there is, then this subject experiences the what-it's-like of going through 5 pains (since, in fact, this subject has gone through 5 pains, whether he remembers going through them or not). Importantly, the what-it's-like-of-going-through-5-pains is just the collection of the past 5 singular pain episodes, not some singular/continuous experience like a feeling of exhaustion or some super-intense pain from the synthesis of the intensity of the 5 past pains. It is this what-it's-like that can plausibly be worse than the what-it's-like of going through a major pain. Since there could only be this what-it's-like when there is a single subject who experiences all 5 pains, 5 pains spread across 5 people cannot be worse than a major pain (since, at best, there would only be 5 experientially independent what-it's-likes-of-going-through-1-minor-headache).

My latest reply to Michael_S focuses on the question whether there could be a single subject-of-experience who extends through time, and thus capable of feeling multiple pains.

comment by Denis Drescher (Telofy) · 2018-03-25T15:05:07.194Z · score: 1 (1 votes) · EA(p) · GW(p)

Hi Jeff!

To just briefly answer your question, “Are you concluding from this that there is not actually a single subject-of-experience”: I don’t have an intuition for what a subject-of-experience is – if it is something defined along the lines of the three characteristics of continuous person moments from my previous message, then I feel that it is meaningful but not morally relevant, but if it is defined along the lines of some sort of person essentialism then I don’t believe it exists on Occam’s razor grounds. (For the same reason, I also think that reincarnation is metaphysically meaningless because I think there is no essence to a person or a person moment besides their physical body* until shown otherwise.)

* This is imprecise but I hope it’s clear what I mean. People are also defined by their environment, culture, and whatnot.

comment by Jeffhe · 2018-03-27T21:47:00.264Z · score: 0 (0 votes) · EA(p) · GW(p)

Hi Telofy, nice to hear from you again :)

You say that you have no intuition for what a subject-of-experience is. So let me say two things that might make it more obvious:

1. Here is how I defined a subject-of-experience in my exchange with Michael_S:

"A subject of experience is just something which "enjoys" or has experience(s), whether that be certain visual experiences, pain experiences, emotional experiences, etc... In other words, a subject of experience is just something for whom there is a "what-it's-like". A building, a rock or a plant is not a subject of experience because it has no experience(s). That is why we don't feel concerned when we step on grass: it doesn't feel pain or feel anything. On the other hand, a cow is a subject-of-experience: it presumably has visual experiences and pain experience and all sorts of other experiences. Or more technically, a subject-of-experience (or multiple) may be realized by a cow's physical system (i.e. brain). There would be a single subject-of-experience if all the experiences realized by the cow's physical system are felt by a single subject. Of course, it is possible that within the cow's physical system's life span, multiple subjects-of-experience are realized. This would be the case if not all of the experiences realized by the cow's physical system are felt by a single subject."

I later enriched the definition a bit as follows: "A subject-of-experience is a thing that has, OR IS CAPABLE OF HAVING, experience(s). I add the phrase 'or is capable of having' this time because it has just occurred to me that when I am in dreamless sleep, I have no experiences whatsoever, yet I'd like to think that I am still around - i.e. that the particular subject-of-experience that I am is still around. However, it's also possible that a subject-of-experience exists only when it is experiencing something. If that is true, then the subject-of-experience that I am is going out of and coming into existence several times a night. That's spooky, but perhaps true."

2. Having offered a definition to Michael, I then explained to him WHAT MAKES a particular subject-of-experience the numerically same subject-of-experience over time:

"Within any given physical system that can realize subjects of experience (e.g. a cow's brain), a subject-of-experience at time t-1 (call this subject "S1") is numerically identical to a subject-of-experience at some later time t-2 (call this subject "S2") if and only if an experience at t-1 (call this experience "E1") and an experience at t-2 (call this experience "E2") are both felt by S1. That is, S1 = S2 iff S1 feels both E1 and E2."

Let me just add: A particular subject-of-experience can obviously be qualitatively different over time, which would happen when his personality changes or memory changes (or is erased) etc... But that doesn't imply there is any numerical difference. I assume the distinction between numerical identity and qualitative identity is a familiar one to you. In any case, here is an example to illustrate the distinction: Two perfectly matching coins are qualitatively the same, yet they are numerically distinct insofar as they are not one and the same coin.

I hope what I have said here helps!

comment by jonathancourtney · 2018-03-16T18:53:39.014Z · score: 2 (2 votes) · EA(p) · GW(p)

Hey Jeffhe - the position you put forward looks structurally really similar to elements of Scanlon's, and you discuss a dilemma that is often discussed in the context of his work (the lifeboat/the rocks example). It also seems, given your reply to Objection 3, that you might really like its approach (if you are not familiar with it already). Subsection 7 of this SEP article (https://plato.stanford.edu/entries/contractualism/) gives a good overview of the case that is tied to the one you discuss. The idea of the separateness of persons, and the idea that one person's pain can't cancel out another person's pain, is well represented in Scanlon's work.

I also wonder whether the right way of representing an 'equal chance of being helped' in this model is not to flip a coin for each group, but to roll an N-sided die, where N is the total number of people who could be helped, and then to choose whichever group contains the person whose number is rolled. That way everyone, in some sense, has a chance to be saved, and that chance is, in some sense, equal - without leading to the worrying conclusion that the lives of Bob and a million people ought to be settled through a coin flip. (The coin-flipping decision theory could also be abused by dividing up groups differently: I can always re-describe the world so that a person in extreme pain whom I could help is in one group and all other people are in a different group, but then I can simply re-describe the world to move that person into the 'all other people' category and select another person. This seems to mean we can arbitrarily increase the odds of any one person being the right person to help simply by moving them between the categories - which seems wrong.)
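For concreteness, the difference between the two randomization schemes can be sketched with a quick calculation using the Bob vs. (Amy and Susie) case from the post (an illustrative sketch only; the names and group sizes are the post's, the code is not):

```python
# Contrast two schemes for the Bob vs. (Amy, Susie) case:
# (1) a fair coin between the two groups;
# (2) an N-sided die over individuals, where the rolled person's whole
#     group is helped.

from fractions import Fraction

groups = [["Bob"], ["Amy", "Susie"]]
n = sum(len(g) for g in groups)          # N = 3 individuals in total

# Coin flip: each GROUP is chosen with probability 1/2,
# so each person's chance equals their group's chance.
coin = {p: Fraction(1, 2) for g in groups for p in g}

# Die roll: a person is helped iff any member of their group is rolled,
# so their chance is (group size) / N.
die = {p: Fraction(len(g), n) for g in groups for p in g}

print(coin["Bob"], die["Bob"])           # 1/2 1/3
print(coin["Amy"], die["Amy"])           # 1/2 2/3
```

Under the die scheme, an individual's chance scales with the size of their group, which is what avoids settling Bob vs. a million people by a 50/50 flip.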

comment by Jeffhe · 2018-03-16T22:17:28.080Z · score: 0 (0 votes) · EA(p) · GW(p)

Hi Jonathan,

Thanks for directing me to Scanlon's work. I am adequately familiar with his view on this topic, at least the one that he puts forward in What We Owe to Each Other. There, he tried to put forward an argument to explain why we should save the greater number in a choice situation like the one involving Bob, Amy and Susie, which respected the separateness of persons, but his argument has been well refuted by people like Michael Otsuka (2000, 2006).

Regarding your second point, what reason can you give for giving each person less than the maximum equal chance possible (e.g. 50%), aside from wanting to sidestep a conclusion that is worrying to you? Suppose I choose to give Bob, Amy and Susie each a 1% chance of being saved, instead of each a 50% chance, and I say to them, "Hey, none of you have anything to complain about, because I'm technically giving each of you an equal chance, even though most likely none of you will be saved." Each of them can reasonably protest that doing so does not treat them with the appropriate level of concern. Say, then, I give each of them a 1/3 chance of being saved (as you propose) and again I say to them, "Hey, none of you have anything to complain about, because I'm technically giving each of you an equal chance." Don't you think they can reasonably protest in the same way until I give them each the maximum equal chance (i.e. 50%)?

Regarding your third point, I don't see how I can divide up the groups differently. They come to me as given. For example, I can't somehow switch Bob and Amy's place such that the choice situation is one of either helping Amy or helping Bob and Susie. How would I do that?

comment by Kaj_Sotala · 2018-03-16T13:02:36.817Z · score: 2 (2 votes) · EA(p) · GW(p)

The following is roughly how I think about it:

If I am in a situation where I need help, then for purely selfish reasons, I would prefer people-who-are-capable-of-helping-me to act in such a way that has the highest probability of helping me. Because I obviously want my probability of getting help, to be as high as possible.

Let's suppose that, as in your original example, I am one of three people who need help, and someone is thinking about whether to act in a way that helps one person, or to act in a way that helps two people. Well, if they act in a way that helps one person, then I have a 1/3 chance of being that person; and if they act in a way that helps two people, then I have a 2/3 chance of being one of those two people. So I would rather prefer them to act in a way that helps as many people as possible.

I would guess that most people, if they need help and are willing to accept help, would also want potential helpers to act in such a way that maximizes their probability of getting help.

Thus, to me, reason and empathy would say that the best way to respect the desires of people who want help, is to maximize the amount of people you are helping.
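The ex-ante arithmetic behind this argument can be made explicit (an illustrative sketch using the three-person case from the post, not part of the original comment):

```python
# Ex-ante chance of being helped for a random person among the three
# (one person alone vs. a group of two), under three helping policies.

from fractions import Fraction

n_total = 3                   # three people who need help
solo, pair = 1, 2             # group sizes

# P(you are in a given group) = group size / total
p_in_solo = Fraction(solo, n_total)
p_in_pair = Fraction(pair, n_total)

# Policy 1: always help the larger group
p_helped_maximize = p_in_pair                                            # 2/3

# Policy 2: flip a fair coin between the groups
p_helped_coin = Fraction(1, 2) * p_in_solo + Fraction(1, 2) * p_in_pair  # 1/2

# Policy 3: always help the single person
p_helped_solo = p_in_solo                                                # 1/3

print(p_helped_maximize, p_helped_coin, p_helped_solo)
```

From behind this veil of ignorance, the help-the-most policy gives each person the best odds (2/3), the coin flip gives 1/2, and always helping the single person gives 1/3.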

comment by Jeffhe · 2018-03-16T18:11:47.047Z · score: 0 (0 votes) · EA(p) · GW(p)

Hi Kaj,

Thanks for your response. Please refer to my conversation with brianwang712. It addresses this objection!

comment by RandomEA · 2018-03-13T03:52:45.778Z · score: 2 (2 votes) · EA(p) · GW(p)

I used to think that a large benefit to a single person was always more important than a smaller benefit to multiple people (no matter how many people experienced the smaller benefit). That's why I wrote this post asking others for counterarguments. After reading the comments on that post (one of which linked to this article), I became persuaded that I was wrong.

Here's an additional counterargument. Let's say that I have two choices:

A. I can save 1 person from a disease that decreases her quality of life by 95%; or

B. I can save 5 people from a disease that decreases their quality of life by 90%.

My intuition is that it is better to save the 5. Now let's say I get presented with a second dilemma:

B. I can save 5 people from a disease that decreases their quality of life by 90%; or

C. I can save 25 people from a disease that decreases their quality of life by 85%.

My intuition is that it is better to save the 25. Now let's say I get presented with a third dilemma.

C. I can save 25 people from a disease that decreases their quality of life by 85%; or

D. I can save 125 people from a disease that decreases their quality of life by 80%.

My intuition is that it is better to save the 125. This cycle continues until the seventeenth dilemma:

Q. I can save 152,587,890,625 people from a disease that decreases their quality of life by 15%; or

R. I can save 762,939,453,125 people from a disease that decreases their quality of life by 10%.

My intuition is that it is better to save the 762,939,453,125.

Since I prefer R over Q and Q over P and P over O and so on and so forth all the way through preferring C over B and B over A, it follows that I should prefer R over A.

In other words, our intuition that providing a large benefit to one person is less important than providing a slightly smaller benefit to several people conflicts with our intuition that providing a very large benefit to one person is more important than providing a very small benefit to an extremely large number of people. Given scope insensitivity, I think the former intuition is probably more reliable.
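The chain of dilemmas above follows a simple mechanical pattern, which can be checked with a short sketch (the numbers are taken from the comment itself; the code is only an illustration):

```python
# At each step in the chain, the number of people multiplies by 5 and
# the quality-of-life loss drops by 5 percentage points.

people, loss = 1, 95          # dilemma A: save 1 person from a 95% loss

steps = []
for _ in range(17):           # A -> B -> C -> ... -> Q -> R
    people *= 5
    loss -= 5
    steps.append((people, loss))

print(steps[0])               # dilemma B: (5, 90)
print(steps[-2])              # dilemma Q: (152587890625, 15)
print(steps[-1])              # dilemma R: (762939453125, 10)
```

Running this confirms that the seventeenth dilemma pits 152,587,890,625 people at a 15% loss against 762,939,453,125 people at a 10% loss, as stated.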

One last point. I think that EA has a role even under your worldview. It can help identify the worst possible forms of suffering (such as being boiled alive at a slaughterhouse) and the most effective ways to prevent that suffering.

comment by Jeffhe · 2018-03-14T00:25:00.510Z · score: 0 (2 votes) · EA(p) · GW(p)

Hi RandomEA,

First of all, awesome name! And secondly, thanks for your response.

My view is that we should give each person a chance of being helped that is proportionate to what they each have to suffer. It is irrelevant to me how many people there are who stand to suffer the lesser pain. So, for example, in the first choice situation you described, my intuition is to give the single person slightly over a 50% chance of being saved and the others slightly under a 50% chance. This is because the single person would suffer slightly worse than any one of the others, so the single person gets a slightly higher chance. It is irrelevant to me how many people have 90% to lose in quality of life, whether it be 5 or 5 billion.

So if 760 billion people have 10% to lose where the single person has 90% to lose, my intuition is to give the single person roughly a 90% chance of being saved and the other 760 billion a 10% chance of being saved.

In my essay, I in effect argued that everyone would have this intuition if they properly appreciated the following two facts:

  1. That were the 760 billion people to suffer, none of them would suffer anywhere near the amount the single person would. Conversely, were the single person to suffer, he/she would suffer so much more than any one of the 760 billion.
  2. Which individual suffers matters because it is the particular individual who suffers that bears all the suffering.

I assume that we should accept the intuitions that we have when we keep all the relevant facts at the forefront of our mind (i.e. when we properly appreciate them). I believe the intuitions I mentioned above (i.e. my intuitions) are the ones people would have when they do this.

Regarding your second point, I have to think a little more about it!

comment by RandomEA · 2018-03-14T03:08:55.730Z · score: 0 (0 votes) · EA(p) · GW(p)

Let's say that you have $100,000,000,000,000.

For every $1,000,000,000,000 you spend on buying medicine A, the person in scenario A (from my previous comment) will have an additional 1% chance of being cured of disease A.

For every $200,000,000,000 you spend on buying medicine B, a person in scenario B (from my previous comment) will have an additional 1% chance of being cured of disease B.

For every $40,000,000,000 you spend on buying medicine C, a person in scenario C (from my previous comment) will have an additional 1% chance of being cured of disease C.

...

For every $1.31 you spend on buying medicine R, a person in scenario R (from my previous comment) will have an additional 1% chance of being cured of disease R.

Now consider a situation where you have to spend your $100,000,000,000,000 on helping one person with disease A and 5 people with disease B. Based on your response to my comment, it sounds like you would spend $51,355,000,000,000 on the person with disease A (giving her a 51.36% chance of survival) and $9,729,000,000,000 on each person with disease B (giving each of them a 48.64% chance of survival). Is that correct?

Next consider a situation where you have to spend your $100,000,000,000,000 on helping one person with disease A and 762,939,453,125 people with disease R. Based on your response to my comment, it sounds like you would spend $90,476,000,000,000 on the person with disease A (giving her a 90.48% chance of surviving) and $12.48 on each person with disease R (giving each of them a 9.53% chance of surviving). Is that correct?
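For what it's worth, the dollar figures above can be checked against the proportional-chance rule with a short computation (an illustrative sketch; the budget and per-1%-chance medicine prices are the hypothetical numbers from this comment, and the rounding differs from the quoted figures by at most a hundredth of a percentage point):

```python
# Check the allocations implied by the rule "give each party a chance of
# being helped proportional to what they stand to suffer".

BUDGET = 100e12                      # $100 trillion

def proportional_chances(sev_one, sev_many):
    """Chances (in %) for the single person and for each of the many,
    proportional to the severity each side stands to suffer."""
    total = sev_one + sev_many
    return 100 * sev_one / total, 100 * sev_many / total

# Scenario 1: 1 person with disease A (95% loss) vs 5 with disease B (90% loss)
chance_a, chance_b = proportional_chances(95, 90)   # ~51.35%, ~48.65%
spend_a = chance_a * 1e12            # medicine A: $1T per 1% chance
spend_b = chance_b * 0.2e12          # medicine B: $200B per 1% chance, each
assert abs(spend_a + 5 * spend_b - BUDGET) < 1e3    # exhausts the budget

# Scenario 2: 1 person with disease A vs 762,939,453,125 with disease R
# (10% loss each)
chance_a2, chance_r = proportional_chances(95, 10)  # ~90.48%, ~9.52%
price_r = 1e12 / 5**17               # ~$1.31 per 1% chance of medicine R
spend_r = chance_r * price_r         # ~$12.48 per person with disease R
print(round(chance_a2, 2), round(spend_r, 2))
```

The computed splits (51.35%/48.65% and 90.48%/9.52%, with about $12.48 per disease-R person) line up with the spending figures in the comment.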

comment by Jeffhe · 2018-03-14T21:17:29.238Z · score: 0 (0 votes) · EA(p) · GW(p)

The situations I focus on in my essay are trade-off choice situations, meaning that I can only choose one party to help, and not all parties to various degrees. Thus, if you have an objection to my argument, it is important that we focus on such kinds of situations. Thanks!

comment by RandomEA · 2018-03-14T23:20:44.249Z · score: 1 (1 votes) · EA(p) · GW(p)

Yes but the situations that EAs face are much more analogous to my second set of hypotheticals. So if you want your argument to serve as an objection to EA, I think you have to explain how it applies to those sorts of cases.

comment by Jeffhe · 2018-03-14T23:49:39.945Z · score: 0 (2 votes) · EA(p) · GW(p)

Not true. Trade-off situations are literally everywhere. Whenever you donate to some charity, it is at the expense of another charity working in a different area, and thus at the expense of the people whom the other charity would have helped. Even with malaria, if you donate to a certain charity, you are helping the people whom that charity helps at the expense of the people whom another anti-malaria charity helps. That's the reality.

And if you're thinking, "Well, can't I donate some to each malaria-fighting charity?", the answer is yes, but whatever money you donate to the other malaria-fighting charity comes at the expense of helping the people whom the original malaria-fighting charity would have been able to help had they got all of your donation and not just part of it. The trade-off choice situation would be between either helping some of the people residing in the area of the other malaria-fighting charity or helping some additional people residing in the area of the original malaria-fighting charity. You cannot help all.

In principle, as long as one doesn't have enough money to help everyone, one will always find oneself in a trade-off choice situation when deciding where to donate.

comment by RandomEA · 2018-03-15T01:51:20.933Z · score: 0 (0 votes) · EA(p) · GW(p)

I think the second set of hypotheticals does involve trade-offs. When I say that a person has an additional 1% chance of being cured, I mean that they have an additional 1% chance of receiving a medicine that will definitely cure them. If you spend more money on medicines to distribute among people with disease Q (thus increasing the chance that any given person with disease Q will be cured), you will have less money to spend on medicines to distribute among people with disease R (thus decreasing the chance that any given person with disease R will be cured).

The reason I think that the second set of hypotheticals is more analogous to the situations EAs face is that there are typically already many funders in the space, meaning that potential beneficiaries often have some chance of being helped even absent your donation. It's quite rare that you choosing to fund one person over another will result in the other person having no chance at all of being helped.

comment by Jeffhe · 2018-03-15T02:27:31.084Z · score: 0 (2 votes) · EA(p) · GW(p)

My apologies. After re-reading your second set of hypotheticals, I think I can answer your questions.

In the original choice situation contained in my essay, the device I used to capture the amount of chance each group would be given of being helped was independent of the donation amount. For example, in the choice situation between Bob, Amy, and Susie, the donation was $10 and the device used to give each a 50% chance of being saved from a painful disease was a coin.

However, it seems like in your hypotheticals, the donation is used as the device too. That confused me at first. But yeah, at the end of the day, I would give person A roughly a 90% chance of being saved from his/her suffering and roughly a 10% chance to each of the billions of others, regardless of what the dollar breakdown would look like. So, if I understand your hypotheticals correctly, then my answer would be yes to both your original questions.

I don't, however, see the point of using the donation to also act as the device. It seems to unnecessarily overcomplicate the choice situations.

If your goal is to try to create a choice situation in which I have to give a vast amount of money to give person A around a 90% chance of surviving, and the objection you're thinking of raising is that it is absurd to give that much to give a single person around a 90% chance of being helped, then my response is:

1) Who suffers matters

2) What person A stands to suffer is far worse than what any one of the people from the competing group stands to suffer.

I think if we really appreciate those two facts, our intuition is to give person A 90% and each of the others a 10%, regardless of the $ breakdown that involves. Thanks.

Just noticed you expanded your comment. You write, "It's quite rare that you choosing to fund one person over another will result in the other person having no chance at all of being helped." This is not true. There will always be a person in line who isn't helped, but who would have been helped had you funded the charity working in his area. He may not be the first in line, but he is somewhere in the line waiting to be helped by that charity.

comment by RandomEA · 2018-03-15T15:43:18.489Z · score: 1 (1 votes) · EA(p) · GW(p)

Just noticed you expanded your comment. You write, "It's quite rare that you choosing to fund one person over another will result in the other person having no chance at all of being helped." This is not true. There will always be a person in line who isn't helped, but who would have been helped had you funded the charity working in his area. He may not be the first in line, but he is somewhere in the line waiting to be helped by that charity.

I was simply noting the difference between our two examples. In your example, Bob has no chance of receiving help if you choose the other person. In the real world, me choosing one charity over another will not cause a specific person to have no ex-ante chance of being helped. Instead, it means that each person in the potential beneficiary population has a lower chance of being helped. I wanted my situation to be more analogous to the real world because I want to see how your principle works in practice. It's the same reason I introduced different prices into the example.

Also, my comment was expanded very shortly after it was originally posted. It's possible that you saw the original one and while you were writing your response to it I posted my edit.

comment by Jeffhe · 2018-03-17T17:24:13.645Z · score: 0 (0 votes) · EA(p) · GW(p)

Hey RandomEA,

Sorry for the late reply. Well, say I'm choosing between the World Food Programme (WFP) and some other charity, and I have $30 to donate. According to WFP, $30 can feed a person for a month (if I remember correctly). If I donate to the other charity, then WFP in its next operation will have $30 less to spend on food, meaning someone who otherwise would have been helped won't be receiving help. Who that person is, we don't know. All we know is that he is the person who was next in line, the first to be turned away.

Now, you disagree with this. Specifically you disagree that it could be said of any SPECIFIC person that, if I don't donate to WFP, that it would be true of THAT person that he won't end up receiving help that he otherwise would have. And this is because:

1) HE - that specific person - still had a chance of being helped by WFP even if I didn't donate the $30. For example, he might have gotten in line sooner than I'm supposing he has. And you will say that this holds true for ANY specific person. Therefore, the phrase "he won't end up receiving help" is not guaranteed.

2) Moreover, even if I do donate the $30 to WFP, there isn't any guarantee that he would be helped. For example, HE might have gotten in line way too late for an additional $30 to make a difference for him. And you will say that this holds true for ANY specific person. Therefore, the phrase "that he otherwise would have" is also not guaranteed.

In the end, you will say, all that can be true of any SPECIFIC person is that my donation of $30 would raise THAT person's chance of being helped.

Therefore, in the real world, you will say, there's rarely a trade-off choice situation between specific people.

I am tempted to agree with that, but two points:

1) There still seems to be a trade-off choice situation between specific groups of people: i.e. the group helped by WFP and the group helped by the other charity.
2) I think, at least in refugee camps, there is already a list of all the refugees and a document specifying who specifically is next in line to receive a given service/aid. In these cases, we will be faced with a trade-off choice situation between a specific individual (who we would be helping if we donated to the refugee camp) and whatever group of people would be helped by donating to another charity. I wonder what percentage of real-life situations are like this. Moreover, if you're looking for real-life trade-off situations between some specific person(s) and some other specific person or specific group, they are clearly not hard to find. For example, you can help a specific homeless man vs. whoever. Or you can help a specific person avoid torture by helping pay off a ransom vs. whoever else by helping a charity. Or you can fund a specific person's cancer treatment vs. whoever. Etc.

My overall point is that trade-off situations of the kind I describe in my paper are very real and everywhere EVEN IF it is true that there are trade-off situations of the nature you describe.

Thanks.

comment by bejaq · 2018-03-30T16:20:47.489Z · score: 1 (1 votes) · EA(p) · GW(p)

I agree that aggregating suffering of different people is problematic. By necessity, it happens on a rather abstract level, divorced from the experiential. I would say that can lead to a certain impersonal approach which ignores the immediate reality of the human condition. Certainly we should be aware of how we truly experience the world.

However, I think here we transcend ethics. We can't hope to resolve deep issues of suffering within ethics, because we are somewhat egocentric beings by nature. We see only through our own eyes and feel only our own body. I don't see that ethics can really address that level meaningfully; it requires us to abstract from that existential reality.

For me the alternative is a more pragmatic ethical framework. It acknowledges we are not just ethical beings, but that ethics is important on an interpersonal level. From that point of view helping more people can be the right thing because we are aware we generally cannot truly resolve others suffering on an individual basis. So we are in effect helping the greater system of society or humanity. In that case there's no problem helping a group instead of an individual. We are not trying to help "at the root" - which we may only be able to do for ourselves or perhaps people close to us - but contribute to society in a meaningful way. And on that level there's a practical difference between helping one person or many.

In practice, for me that means I do take effective altruism into account, but also acknowledge its limitations. I'd say everyone does that implicitly or explicitly.

comment by Jeffhe · 2018-03-31T01:38:58.946Z · score: 1 (1 votes) · EA(p) · GW(p)

Hi bejaq,

Thanks for your thoughtful comment. I think your first paragraph captures well why I think who suffers matters. The connection between suffering and who suffers it is too strong for the former to matter and the latter not to. Necessarily, pain is pain for someone, and ONLY for that someone. So it seems odd for pain to matter, yet for it not to matter who suffers it.

I would also certainly agree that there are pragmatic considerations that push us towards helping the larger group outright, rather than giving the smaller group a chance.

comment by Alex_Barry · 2018-03-29T23:13:16.104Z · score: 1 (3 votes) · EA(p) · GW(p)

(Posted as a top-level comment as I had some general things to say; it was originally a response here)

I just wanted to say I thought this comment did a good job explaining the basis behind your moral intuitions, which I had not really felt a strong motivation for before now. I still don't find it particularly compelling myself, but I can understand why others could find it important.

Overall I find this post confusing though, since the framing seems to be 'Effective Altruism is making an intellectual mistake', whereas you actually just seem to have a different set of moral intuitions from those involved in EA, which are largely incompatible with effective altruism as it is currently practiced. Whilst you could describe moral differences as intellectual mistakes, this does not seem to be a standard or especially helpful usage.

The comments then seem to have mostly been people explaining why they don't find compelling your moral intuition that 'non-purely experientially determined' and 'purely experientially determined' amounts of pain cannot be compared. Since we seem to have reached a fundamental disagreement about considered moral values, it does not seem that attempting to change each other's minds is very fruitful.

I think I would have found this post more conceptually clear if it had been structured:

  1. EA conclusions actually require an additional moral assumption/axiom - and so if you don't agree with this assumption then you should not obviously follow EA advice.

  2. (Optionally) Why you find the moral assumption unconvincing/unlikely

  3. (Extra Optionally) Tentative suggestions for what should be done in the absence of the assumption.

Where throughout, the assumption is the commensurability of 'non-purely experientially determined' and 'purely experientially determined' experience.

In general I am not very sure what you had in mind as the ideal outcome of this post. I would be surprised if you thought most EAs agreed with you on your moral intuition, since so much of EA is predicated on its converse (as is much of established consequentialist thinking). But equally I am not sure what value we can especially bring to you if you feel very sure in your conviction that the assumption does not hold.

comment by kbog · 2018-03-30T05:45:38.593Z · score: 0 (0 votes) · EA(p) · GW(p)

Little disagreement in philosophy comes down to a matter of bare differences in moral intuition. Sometimes people are just confused.

comment by Jeffhe · 2018-03-31T01:55:42.982Z · score: 1 (1 votes) · EA(p) · GW(p)

Hey Alex, thanks for your comment!

I didn't know what the source of my disagreement with EAs would be, so I hope you can understand why I couldn't structure my post in a way that would have already taken into account all the subsequent discussions. But thanks for your suggestion. I may write another post with a much simpler structure if my discussion with kbog reaches a point where either I realize I'm wrong or he realizes he's wrong. If I'm wrong, I hope to realize it asap.

Also, I agree with kbog. I think it's much likelier that one of us is just confused. Either kbog is right that there is an intelligible sense in which 5 minor headaches spread among 5 people can involve more pain than 1 major headache had by one person or he is not.

After figuring that out, there is the question of which sense of "involves more pain than" is more morally important: is it the "is experientially worse than" sense or kbog's sense? Perhaps that comes down to intuitions.

comment by Alex_Barry · 2018-03-31T16:08:00.627Z · score: 0 (0 votes) · EA(p) · GW(p)

Thanks for your reply - I'm extremely confused if you think there is no 'intelligible sense in which 5 minor headaches spread among 5 people can involve more pain than 1 major headache had by one person', since (as has been discussed in these comments) if you view/define total pain as being measured by the intensity-weighted number of experiences, this gives a clear metric that matches consequentialist usage.

I had assumed you were arguing at the 'which is morally important' level, which I think might well come down to intuitions.

I hope you manage to work it out with kbog!

comment by Jeffhe · 2018-04-10T21:14:32.725Z · score: 1 (1 votes) · EA(p) · GW(p)

Hey Alex,

Thanks for your reply. I can understand why you'd be extremely confused because I think I was in error to deny the intelligibility of the utilitarian sense of "more pain".

I have recently replied to kbog acknowledging this mistake, outlining how I understand the utilitarian sense of "more pain", and then presenting an argument for why my sense of "more pain" is the one that really matters.

I'd be interested to know what you think.

comment by Alex_Barry · 2018-04-12T13:13:34.150Z · score: 1 (1 votes) · EA(p) · GW(p)

Thanks for getting back to me. I've read your reply to kbog, but I don't find your argument especially different from those you laid out previously (which, given that I always thought you were trying to make the moral case, should maybe not be surprising). Again I see why there is a distinction one could care about, but I don't find it personally compelling.

(Indeed I think many people here would explicitly embrace the assumption that is your P3 in your second reply to kbog, typically framed as 'two people experiencing the same pain is twice as bad as one person experiencing that pain' (there is some change from discussing 'total pain' to 'badness' here, but I think it still fits with our usage).)

A couple of brief points in favour of the classical approach:

  • It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem).
  • As discussed in other comments, it also has other pleasing properties, such as support from the veil of ignorance.

One additional thing to note is that dropping the comparability of 'non-purely experientially determined' and 'purely experientially determined' experiences (henceforth 'Comparability') does not seem to naturally lead to a specific way of evaluating different situations or weighing them against each other.

For example, you suggest in your post that without Comparability the morally correct course of action would be to give each person a chance of being helped in proportion to their suffering, but this does not necessarily follow. One could imagine others who also disagreed with Comparability, but thought the appropriate solution was to always help the person suffering the most, and not care at all about anyone else. To take things to the opposite extreme, someone could also deny Comparability but think that the most important thing was minimizing the number of people suffering at all and not take into account intensity whatsoever (although they would likely justify rejecting Comparability on different grounds to you).

comment by Jeffhe · 2018-04-12T22:37:12.772Z · score: 0 (0 votes) · EA(p) · GW(p)

Hey Alex,

Thanks again for taking the time to read my conversation with kbog and replying. I have a few thoughts in response:

(Indeed I think many people here would explicitly embrace the assumption that is your P3 in your second reply to kbog, typically framed as 'two people experiencing the same pain is twice as bad as one person experiencing that pain' (there is some change from discussing 'total pain' to 'badness' here, but I think it still fits with our usage).)

When you say that many people here would embrace the assumption that "two people experiencing the same pain is twice as bad as one person experiencing that pain", are you using "bad" to mean "morally bad?"

I ask because I would agree if you meant morally bad IF the single person was a subset of the two people. For example, I would agree that Amy and Susie each suffering is twice as morally bad as just Amy suffering. However, I would not agree IF the single person was not a subset of the two (e.g., if the single person was Bob). If the single person was Bob, I would think the two cases are morally just as bad.

Now, one basic premise that kbog and I have been working with is this: if two people suffering involves more pain than one person suffering, then two people suffering is morally worse than (i.e. twice as morally bad as) one person suffering.

However, based on my preferred sense of "more pain", two people suffering involves the same amount of pain as one person suffering, irrespective of whether the single person is a subset or not.

Therefore, you might wonder how I am able to arrive at the different opinions above. More specifically, if I think Amy and Susie each suffering involves the same amount of pain as just Amy suffering, shouldn't I be committed to saying that the former is morally just as bad as the latter, rather than twice as morally bad (which is what I want to say?)

I don't think so. I think the Pareto principle provides an adequate reason for taking Amy and Susie each suffering to be morally worse than just Amy's suffering. As Otsuka (a philosopher at Harvard) puts it, the Pareto principle states that "One distribution of benefits over a population is strictly Pareto superior to another distribution of benefits over that same population just in case (i) at least one person is better off under the former distribution than she would be under the latter and (ii) nobody is worse off under the former than she would be under the latter." Since just Amy suffering (i.e. Susie not suffering) is Pareto superior to Amy and Susie each suffering, just Amy suffering is morally better than Amy and Susie each suffering. In other words, Amy and Susie each suffering is morally worse than just Amy suffering. Notice, however, that if the single person were Bob, condition (ii) would not be satisfied, because Bob would be made worse off. The Pareto principle is based on the appealing idea that we shouldn't begrudge another person an improvement that costs us nothing. Amy shouldn't begrudge Susie an improvement that costs her nothing.

Anyways, I just wanted to make that aspect of my thinking clear. So I would agree with you that more people suffering is morally worse than fewer people suffering as long as the smaller group of people is a subset of the larger group, due to the Pareto principle. But I would not agree with you that more people suffering is morally worse than fewer people suffering if those fewer people are not a subset of the larger group, since the Pareto principle is not a basis for it, nor is there more pain in the former case than the latter case on my preferred sense of "more pain". And since I think my preferred sense of "more pain" is the one that ultimately matters because it respects the fact that pain matters solely because of how it feels, I think others should agree with me.

A couple of brief points in favour of the classical approach: It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem).

I'm not sure I see the advantage here, or what the alleged advantage is. I don't see why my view commits me to pay any attention towards people who I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.
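Jeffhe's "proportional chance" rule can be made concrete as a weighted lottery. The sketch below is a minimal illustration under assumed, made-up suffering scores (the names and numbers are hypothetical, not drawn from the discussion): each candidate is selected for help with probability proportional to how much they stand to suffer.

```python
import random

def proportional_chance_lottery(candidates):
    """Pick one candidate to help, with probability proportional
    to the suffering each stands to endure (higher score = worse)."""
    names = list(candidates)
    weights = [candidates[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# Hypothetical scores: one person facing severe suffering, one facing mild.
candidates = {"A (severe)": 90, "B (mild)": 10}
chosen = proportional_chance_lottery(candidates)
# "A (severe)" is selected about 90% of the time, "B (mild)" about 10%.
```

Note the contrast with straightforward aggregation: on this rule the person facing milder suffering retains a real (10%) chance of being helped rather than none.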

As discussed in other comments, it also has other pleasing properties, such as support from the veil of ignorance.

The veil of ignorance approach at minimum supports a policy of helping the greater number (given the stipulation that each person has an equal chance of occupying anyone's position). However, as I argued, this stipulation is not true OF the real world, because each of us didn't actually have an equal chance of being in any of our positions, and what we should do should be based on the facts, not on a stipulation.

In kbog's latest reply to me regarding the veil of ignorance, he seems to argue that the stipulation should determine what we ought to do (irrespective of whether it is true in the actual world) because "The reason we look at what they would agree to from behind the veil of ignorance as opposed to outside is that it ensures that they give equal consideration to everyone, which is a basic principle that appeals to us as a cornerstone of any decent moral system." I have yet to respond to this latest reply because I have been too busy arguing about our senses of "more pain", but if I were to respond, I would say this: "I agree that we should give equal consideration to everyone, which is why I believe we should give each person a chance of being helped proportional to the suffering they face. The only difference is that this is giving equal consideration to everyone in a way that respects the facts of the world."

Anyways, I don't want to say too much here, because kbog might not see it and it wouldn't be fair if you only heard my side. I'll respond to kbog's reply eventually (haha) and you can follow the discussion there if you wish.

Let me just add one thing: based on Singer's intro to utilitarianism, Harsanyi argued that the veil of ignorance also entails a form of utilitarianism on which we ought to maximize average utility, as opposed to Rawls' claim that it entails giving priority to the worst off. If this is right, then the veil of ignorance approach doesn't support classical utilitarianism, which says we ought to maximize total utility, not average utility.

One could imagine others who also disagreed with Comparability, but thought the appropriate solution was to always help the person suffering the most, and not care at all about anyone else.

Yes, they could, but I also argued that who suffers matters in my response to Objection 2, and to simply help the person suffering the most is to ignore this fact. Thus, even if one person suffering a lot is experientially worse (and thus morally worse) than many others each suffering something less, I believe we should give the others some chance of being helped. That is to say, in light of the fact that who suffers matters, I believe it is not always right to prevent the morally worse case.

To take things to the opposite extreme, someone could also deny Comparability but think that the most important thing was minimizing the number of people suffering at all and not take into account intensity whatsoever (although they would likely justify rejecting Comparability on different grounds to you).

While this is a possible position to hold, it is not a plausible one, because it effectively entails that the numbers matter in themselves. That is, such a person thinks he should save the many over one other person not because he thinks the many suffering involves more pain than the one suffering (for he denies that a non-purely experientially determined amount of pain can be compared with a purely experientially determined amount of pain). Rather, he thinks he should save the many solely because they are many. But it is hard to see how numbers could matter in themselves.

comment by Alex_Barry · 2018-04-13T10:09:42.831Z · score: 1 (1 votes) · EA(p) · GW(p)

A couple of brief points in favour of the classical approach: It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem).

I'm not sure I see the advantage here, or what the alleged advantage is. I don't see why my view commits me to pay any attention towards people who I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.

The argument is that if:

  • The amount of 'total pain' is determined by the maximum amount of suffering experienced by any given person (which I think is what you are arguing)
  • There could be an alien civilization containing a being experiencing more suffering than any human is capable of experiencing (you could also just use a human being tortured if you liked for a less extreme but clearly applicable case)
  • In this case, then the amount of 'total pain' is always at least that very large number, such that none of your actions can change it at all.
  • Thus (and you would disagree with this implication due to your adoption of the Pareto principle) since the level of 'total pain' is the morally important thing, all of your possible actions are morally equivalent.

As I mention I think you escape this basic formulation of the problem by your adoption of the Pareto principle, but a more complicated version causes the same issue:

This is essentially just applying the non-identity problem to the example above. (Weirdly enough, I think the best explanation I've seen of the non-identity problem is the second half of the 'The Future' section of Derek Parfit's Wikipedia page.)

The argument goes something like:

  • D1 Suppose 'total pain' is the maximal pain experienced by any person whose amount of pain we can affect (an attempt to incorporate the Pareto principle into the definition for simplicity's sake).
  • A1 At some point in the far future there is almost certainly going to be someone experiencing extreme pain. (Even if humanity is wiped out, so most of the future has no one in it, that wiping out is likely to involve extreme pain for some).
  • A2 Due to the chaotic nature of the world, and the strong dependence of personal identity on birth timing (if the circumstances of one's conception change even very slightly, then one's identity will almost certainly be completely different), any actions taken in the world now will within a few generations result in a completely different set of people existing.
  • C1 Thus by A1 the future is going to contain someone experiencing extreme pain, but by A2 exactly who this person is will vary with any different course of action; thus by D1 the 'total pain' is in all cases uniformly very high.

This is similar to the point made by JanBrauner; however, I did not find that your response to their comment particularly engaged with the core point of the extreme unpredictability of the maximum pain caused by an act.

After your most recent comment I am generally unsure exactly what you are arguing for in terms of moral theories. When arguing about which form of pain is morally important, you seem to make a strong case that one should measure the 'total pain' in a situation solely by the most extreme pain involved. However, when discussing moral recommendations, you don't focus on this completely. Thus I'm not sure whether this comment and its examples miss the mark completely.

(There are also more subtle defenses, such as those relating to how much one cares about future people etc., which have thus far been left out of the discussion.)

comment by Jeffhe · 2018-04-13T23:43:01.030Z · score: 0 (0 votes) · EA(p) · GW(p)

Thanks for the exposition. I see the argument now.

You're saying that, if we determined "total pain" by my preferred approach, then all possible actions will certainly result in states of affairs in which the total pains are uniformly high, with the only difference between the states of affairs being the identity of those who suffer it.

I've since made clear to you that who suffers matters to me too, so if the above is right, then according to my moral theory, what we ought to do is assign an equal chance to any possible action we could take, since each possible action gives rise to the same total pain, just suffered by different individuals.

Your argument would continue: Any moral theory that gave this absurd recommendation cannot be correct. Since the root of the absurdity is my preferred approach to determining total pain, that approach to determining total pain must be problematic too.

My response:

JanBrauner, if I remember correctly, was talking about extreme unpredictability, but your argument doesn't seem to be based on unpredictability. If A1 and A2 are true, then each possible action seems more or less inevitably to result in a different person suffering maximal pain.

Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).

I think there is no more absurdity to assigning each possible action an equal chance (assuming A1 and A2 are true) than there is in, say, flipping a coin between saving a million people on one island from being burned alive and saving one other person on another island from being burned alive. Since I don't find the latter absurd at all (keeping in mind that none of the million will suffer anything worse than the one, i.e. that the one would suffer no less than any one of the million), I would not find the former absurd either. Indeed, giving each person an equal chance of being saved from being burned alive seems to me like the right thing to do given that each person has the same amount to suffer. So I would feel similarly about assigning each possible action an equal chance (assuming A1 and A2 are true).

comment by Alex_Barry · 2018-04-17T13:38:57.734Z · score: 0 (0 votes) · EA(p) · GW(p)

I was trying to keep the discussions of 'which kind of pain is morally relevant' and of your proposed system of giving people a chance to be helped in proportion to their suffering separate. It might be that they are so intertwined as to make this unproductive, but I think I would like you to respond to my comment about the latter before we discuss it further.

You're saying that, if we determined "total pain" by my preferred approach, then all possible actions will certainly result in states of affairs in which the total pains are uniformly high, with the only difference between the states of affairs being the identity of those who suffer it.

Given that you were initially arguing (with kbog etc.) for this definition of total pain, independent of any other identity considerations, this seems very relevant to that discussion.

Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).

But this seems extremely far removed from any day-to-day intuitions we would have about morality, no? If you flipped a coin to decide whether you should murder each person you met (a very implementable approximation of this result), I doubt many would find this justified on the basis that someone in the future is going to be suffering much more than them.

I think there is no more absurdity to assigning each possible action an equal chance (assuming A1 and A2 are true) than there is in, say, flipping a coin between saving a million people on one island from being burned alive and saving one other person on another island from being burned alive.

The issue is that this also applies to the case of deciding whether to set the island on fire at all.

comment by Jeffhe · 2018-04-22T23:37:16.627Z · score: 0 (0 votes) · EA(p) · GW(p)

I was trying to keep the discussions of 'which kind of pain is morally relevant' and of your proposed system of giving people a chance to be helped in proportion to their suffering separate. It might be that they are so intertwined as for this to be unproductive, but I think I would like you to respond to my comment about the latter before we discuss it further.

I think I see the original argument you were going for. The argument against my approach-minus-the-who-suffers-matters-bit is that it renders all resulting states of affairs equally bad, morally speaking, because all resulting states of affairs would involve the same total pain. Given that we should prevent the morally worst case, this means that my approach would have it that we shouldn't take any action, and that's just absurd. Therefore, my way of determining total pain is problematic. Here "a resulting state of affairs" is broadly understood as the indefinite span of time following a possible action, as opposed to any particular point in time following a possible action. On this broad understanding, it seems undeniable that each possible action will result in a state of affairs with the same total maximal pain, since there will surely be someone who suffers maximally at some point in time in each indefinite span of time.

Well, if who suffered didn't matter, then I think leximin should be used to determine which resulting state of affairs is morally worse. According to leximin, we determine which state of affairs is morally better as follows:

Step 1: From each state of affairs, select a person among the worst off in that state of affairs. Compare these people. If one of them is better off than the rest, then his/her state of affairs is morally better than all the others. If they are all just as badly off, then move on to Step 2.

Step 2: From each state of affairs, select a person among the worst off in that state of affairs, excluding anyone who has already been selected. Compare these people. If one of them is better off than the rest, then his/her state of affairs is morally better than all the others. If they are all just as badly off, then move on to Step 3.

And so forth...

According to this method, even though all resulting states of affairs will involve the same total pain, certain resulting states of affairs will be morally better than others, and we should act so as to realize them.
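For concreteness, leximin can be sketched as a simple comparison procedure, assuming (purely for illustration) that each person's position in a state of affairs is summarized as a numeric welfare score, with higher meaning better off:

```python
def leximin_better(state_a, state_b):
    """Compare two states of affairs by leximin.

    Each state is a list of per-person welfare scores (higher = better off),
    one score per person. Returns "A" if state_a is morally better,
    "B" if state_b is, or "tie" if they are equally good.
    """
    # Sort each state from worst off to best off.
    a = sorted(state_a)
    b = sorted(state_b)
    # Compare the worst off, then the second-worst off, and so forth.
    for wa, wb in zip(a, b):
        if wa > wb:
            return "A"  # A's person at this step is better off
        if wb > wa:
            return "B"
    return "tie"
```

For example, leximin_better([1, 5, 5], [1, 1, 9]) returns "A": the worst off are tied at 1, but A's second-worst off (5) is better off than B's (1), so the comparison ends there.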

My appeal to leximin is not ad hoc because it takes an individual's suffering seriously, which is in line with my approach. Notice that leximin can be used to justify saving Susie and Amy over Bob. I don't actually endorse leximin because leximin does not take an individual's identity seriously (i.e. it doesn't treat who suffers as morally relevant, whereas I do; I think who suffers matters).

So that is one response I have to your argument: it grants you that the total pain in each resulting state of affairs would be the same and then argues that this does not mean that all resulting state of affairs would be morally just as bad.

Another response I have is that, most probably, different states of affairs will involve different amounts of pain, and so some states of affairs will be morally worse than others just based on total pain involved. This becomes more plausible when we keep in mind what the maximum amount of pain is on my approach. It is not the most intense pain, e.g. a torture session. It is not the longest pain, e.g. a minor headache that lasts one's entire life. Rather, it is the most intense pain over the longest period of time. The person who suffers maximum pain is the person who suffers the most intense pain for the longest period of time. Realizing this, it is unlikely that each possible action will lead to a state of affairs involving this. (Note that this is to deny A1.)

Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).

But this seems extremely far removed from any day-to-day intuitions we would have about morality, no? If you flipped a coin to decide whether you should murder each person you met (a very implementable approximation of this result), I doubt many would find this justified on the basis that someone in the future is going to be suffering much more than them.

To give each possible action an equal chance is certainly not to flip a coin between murdering someone or not. At any given moment, I have thousands (or perhaps an infinite number) of possible actions I could take. Murdering the person in front of me is but one. (There are many complexities here that make the discussion hard like what counts as a distinct action.)

However, I understand that the point of your objection is that my approach can allow the murder of an innocent. In this way, your objection is like that classical argument against utilitarianism. Anyways, I guess, like effective altruism, I can recognize rules that forbid murdering etc. I should clarify that my goal is not to come up with a complete moral theory as such. Rather it is to show that we shouldn't use the utilitarian way of determining "total pain", which underlies effective altruism.

I have argued for this by

1) arguing that the utilitarian way of determining "total pain" goes against the spirit of why we take pain to matter in the first place. In response, you have suggested a different framing of utilitarianism on which they are determining a "total moral value" based on people's pains, which is different from determining a total pain. I still need to address this point.

2) responding to your objection against my way of determining "total pain" (first half of this reply)

comment by Alex_Barry · 2018-04-13T09:03:31.803Z · score: 0 (0 votes) · EA(p) · GW(p)

are you using "bad" to mean "morally bad?"

Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)

I think thinking in terms of 'total pain' is not normally how this is approached; instead one thinks about converting each person's experience into 'utility' (or 'moral badness' etc.) on a personal level, but then aggregates all the different personal utilities into a global figure. I don't know if you find this formulation more intuitively acceptable (it in some sense feels like it respects your reason for caring about pain more).

I bring this up since you are approaching this from a different angle than the usual, which makes people's standard lines of reasoning seem more complex.

A couple of brief points in favour of the classical approach: It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem.)

I'm not sure I see the advantage here, or what the alleged advantage is. I don't see why my view commits me to pay any attention towards people who I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.

I'll discuss this in a separate comment since I think it is one of the strongest argument against your position.

I don't know much about the veil of ignorance, so I am happy to give you that it does not support total utilitarianism.

I believe it is not always right to prevent the morally worse case.

Then I am really not sure at all what you are meaning by 'morally worse' (or 'right'!). In light of this, I am now completely unsure of what you have been arguing the entire time.

comment by Jeffhe · 2018-04-13T19:58:31.261Z · score: 0 (0 votes) · EA(p) · GW(p)

Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)

I think thinking in terms of 'total pain' is not normally how this is approached; instead one thinks about converting each person's experience into 'utility' (or 'moral badness' etc.) on a personal level, but then aggregates all the different personal utilities into a global figure. I don't know if you find this formulation more intuitively acceptable (it in some sense feels like it respects your reason for caring about pain more).

So you're suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:

  1. Assign a moral value to each person's experiences based on its overall what-it's-like. For example, if someone is to experience 5 headaches, we are to assign a single moral value to his 5 headaches based on how experientially bad the what-it's-like-of-going-through-5-headaches is. If going through 5 such headaches is about experientially as bad as going through 1 major headache, then we would assign the same moral value to someone's 5 minor headaches as we would to someone else's 1 major headache.

  2. We then add up the moral value assigned to each person's experiences to get a global moral value, and compare this moral value to the other global values corresponding to the other states of affairs we could bring about.

This approach reminds me of trade-off situations that involve saving lives instead of saving people from suffering. For example, suppose we can either save Amy's and Susie's life or Bob's life, but we cannot save all. Who do we save? Most people would reason that we should save Amy's and Susie's life because each life is assigned a certain positive moral value, so 2 lives is twice the moral value as 1 life. I purposely avoided talking about trade-off situations involving saving lives because I don't think a life has moral value in itself, yet I anticipated that people would appeal to life having some sort of positive moral value in itself and I didn't want to spend time arguing about that. In any case, if life does have positive moral value in itself, then I think it makes sense to add those values just as it makes sense to add the dollar values of different merchandise. This would result in Amy's and Susie's death being a morally worse thing than Bob's death, and so I would at least agree that what we ought to do in this case wouldn't be to give everyone a 50% chance.

In any case, if we assign a moral value to each person's experience in the same way that we might assign a moral value to each person's life, then I can see how people reach the conclusion that more people suffering a given pain is morally worse than fewer people suffering the given pain (even if the fewer are other people). Moreover, given step 1, I agree that this approach, at least prima facie, respects [the fact that pain matters solely because of how it FEELS] more than the approach that I've attributed to kbog. (I added the "[...]" to make the sentence structure more clear.) As such, this is an interesting approach that I would need to think more about, so thanks for bringing it up. But, even granting this approach, I don't think what we ought to do is to OUTRIGHT prevent the morally worse case; rather, we ought to give a higher chance to preventing the morally worse case, proportional to how much morally worse it is than the other case. I will say more about this below.

Then I am really not sure at all what you are meaning by 'morally worse' (or 'right'!). In light of this, I am now completely unsure of what you have been arguing the entire time.

Please don't be alarmed (haha). I assume you're aware that there are other moral theories that recognize the moral value of experience (just as utilitarianism does), but also recognizes other side constraints such that, on these moral theories, the right thing to do is not always to OUTRIGHT prevent the morally worst consequence. For example, if a side constraint is true of some situation, then the right thing to do would not be to prevent the morally worst consequence if doing so violates the side constraint. That is why these moral theories are not consequentialist.

You can think of my moral position as like one of these non-consequentialist theories. The one and only side constraint that I recognize is captured by the fact that who suffers matters. Interestingly, this side constraint arises from the fact that experience matters, so it is closer to utilitarianism than other moral theories in this respect. Here's an example of the side constraint in action: Suppose I can either save 100 people from a minor headache or 1 other person from a major headache. Going by my sense of "more pain" (i.e. my way of quantifying and comparing pains), the single person suffering the major headache is morally worse than the 100 people each suffering a minor headache because his major headache is experientially worse than any of the other people's minor headache. But in this case, I would not think the right thing to do is to OUTRIGHT save the person with the major headache (even though his suffering is the morally worse case). I would think that the right thing to do is to give him a higher chance of being saved proportional to how much worse his suffering is experientially speaking than any one of the others (i.e. how much morally worse his suffering is relative to the 100's suffering).

Similarly, if we adopted the approach you outlined above, maybe the 100 people each suffering a minor headache would be the morally worse case. If so, given the side constraint, I would still similarly think that it would not be right to OUTRIGHT save the 100 from their minor headaches. I would again think that the right thing to do would be to give the 100 people a higher chance of being saved proportional to how much morally worse their suffering is relative to the single person's suffering.

I hope that helps.

comment by Alex_Barry · 2018-04-13T21:46:19.785Z · score: 2 (2 votes) · EA(p) · GW(p)

On 'people should have a chance to be helped in proportion to how much we can help them' (versus just always helping whoever we can help the most).

(Again, my preferred usage of 'morally worse/better' is basically defined so as to mean one should always pick the 'morally best' action. You could do that in this case, by saying cases are morally worse than one another if people do not have chances of being helped in proportion to how badly off they are. This however leads directly into my next point... )

How much would you be willing to trade off helping people versus the help being distributed fairly? E.g. if you could either have a 95% chance of helping people in proportion to their suffering, but a 5% chance of helping no one, versus a 100% chance of only helping the person suffering the most.

In your reply to JanBrauner you are very willing to basically completely sacrifice this principle in response to practical considerations, so it seems possible that you are not willing to trade off any amount of 'actually helping people' in favour of it, but then it seems strange that you argue for it so forcefully.

As a separate point, this form of reasoning seems rather incompatible with your claims about 'total pain' being morally important, and also determined solely by whoever is experiencing the most pain. Thus, if you follow your approach and give some chance of helping people not experiencing the most pain, in the case when you do help them, the 'total pain' does not change at all!

For example:

  • Suppose Alice is experiencing 10 units of suffering (by some common metric)
  • 10n people (call them group B) are experiencing 1 units of suffering each
  • We can help exactly one person, and reduce their suffering to 0

In this case your principle says we should give Alice a 10/(10+10n) = 1/(n+1) chance of being helped, and each person in group B a 1/(10+10n) chance of being helped. But in the case we help someone from group B the level of 'total pain' remains at 10 as Alice is not helped.

This means that n/(n+1) proportion of the time the 'total pain' remains unchanged, i.e. we can make the chance of actually affecting the thing you say is morally important arbitrarily small. It seems strange to say your morality is motivated by x if your actions are so distanced from it that your chance of actually affecting x can go to zero.
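The arithmetic of this example can be checked with a short sketch (the suffering figures are the illustrative ones assumed above; the choice of n = 5 is arbitrary):

```python
from fractions import Fraction

def proportional_chances(sufferings):
    """Give each person a chance of being helped proportional to their suffering."""
    total = sum(sufferings)
    return [Fraction(s, total) for s in sufferings]

# Alice suffers 10 units; 10n people (group B) suffer 1 unit each.
n = 5
sufferings = [10] + [1] * (10 * n)
chances = proportional_chances(sufferings)

alice_chance = chances[0]      # 10 / (10 + 10n) = 1 / (n + 1)
assert alice_chance == Fraction(1, n + 1)

# Probability that Alice is NOT helped, so 'total pain' stays at 10:
unchanged = 1 - alice_chance   # n / (n + 1), which tends to 1 as n grows
assert unchanged == Fraction(n, n + 1)
```

With n = 5, Alice's chance is 1/6 and the 'total pain' is left unchanged 5/6 of the time; raising n pushes that proportion arbitrarily close to 1.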

Finally I find the claim that this is actually the fairer or more empathetic approach unconvincing. I would argue that whatever fairness you gain by letting there be some chance you help the person experiencing the second-most suffering is outweighed by your unfairness to the person suffering the most.

Indeed, for another example:

  • Say a child (child A) is about to be tortured for the rest of their life, which you can prevent for £2.
  • However another child (child B) has just dropped their ice cream, which has slightly upset them (although not much, they are just a little sad). You could buy them another ice cream for £2, which would cheer them up.

You only have £2, so you can only help one of the children. Under your system you would give some (admittedly (hopefully!) very small) chance that you would help child B. However, in the case that you rolled your 3^^^3-sided die and it came up in favour of B, as you started walking over to the ice cream van it seems like it would be hard to say you were acting in accordance with "reason and empathy".

(This was perhaps a needlessly emotive example, but I wanted to hammer home how completely terrible it could be to help the person not suffering the most. If you have a choice between not rolling a die, and rolling a die with a chance of terrible consequences, why take the chance?)

comment by Jeffhe · 2018-04-22T18:21:31.568Z · score: 1 (1 votes) · EA(p) · GW(p)

Hey Alex! Sorry for the super late response! I have a self-control problem and my life got derailed a bit in the past week >< Anyways, I'm back :P

How much would you be willing to trade off helping people versus the help being distributed fairly? E.g. if you could either have a 95% chance of helping people in proportion to their suffering, but a 5% chance of helping no one, versus a 100% chance of only helping the person suffering the most.

This is an interesting question, adding another layer of chance to the original scenario. As you know, if (there was a 100% chance) I could give each person a chance of being saved in proportion to his/her suffering, I would do that instead of outright saving the person who has the worst to suffer. After all, this is what I think we should do, given that suffering matters, but who suffers also matters. Here, there seems to me a nice harmony between these two morally relevant factors – the suffering and the identity of who suffers, where both have a sufficient impact on what we ought to do: we ought to give each person a chance of being saved because who suffers matters, but each person’s chance ought to be in proportion to what he/she has to suffer because suffering also matters.

Now you’re asking me what I would do if there was only a 95% chance that I could give each person a chance of being saved in proportion to his/her suffering with a 5% chance of not helping anyone at all: would I accept the 95% chance or outright save the person who has the worst to suffer?

Well, what should I do? I must admit it’s not clear. I think it comes down to how much weight we should place on the morally relevant factor of identity. The more weight it has, the more likely the answer is that we should accept the 95% chance. I think it’s plausible that it has enough weight such that we should accept a 95% chance, but not a 40% chance. If one is a moral realist, one can accept that there is a correct objective answer yet not know what it is.

One complication is that you mention the notion of fairness. On my account of what matters, the fair thing to do – as you suggest - seems to be to give each person a chance in proportion to his/her suffering. Fairness is often thought of as a morally relevant factor in of itself, but if what the fair thing to do in any given situation is grounded in other morally relevant factors (e.g. experience and identity), then its moral relevance might be derived. If so, I think we can ignore the notion of fairness.

For example:

• Suppose Alice is experiencing 10 units of suffering (by some common metric)

• 10n people (call them group B) are experiencing 1 units of suffering each

• We can help exactly one person, and reduce their suffering to 0

In this case your principle says we should give Alice a 10/(10+10n) = 1/(n+1) chance of being helped, and each person in group B a 1/(10+10n) chance of being helped. But in the case we help someone from group B the level of 'total pain' remains at 10 as Alice is not helped.

This means that n/(n+1) proportion of the time the 'total pain' remains unchanged, i.e. we can make the chance of actually affecting the thing you say is morally important arbitrarily small. It seems strange to say your morality is motivated by x if your actions are so distanced from it that your chance of actually affecting x can go to zero.

This is a fantastic objection. This objection is very much in the spirit of the objection I was raising against utilitarianism: both objections show that the respective approaches can trivialize suffering given enough people (i.e. given that n is large enough). I think this objection shows a serious problem with giving each person a chance of being saved proportional to his/her suffering insofar as it shows that doing so can lead us to give a very very small chance to someone who has a lot to suffer when it intuitively seems to me that we should give him a much higher chance of being saved given how much more he/she has to suffer relative to any other person.

So perhaps we ought to outright save the person who has the most to suffer. But this conclusion doesn’t seem right either in a trade-off situation involving him and one other person who has just a little less to suffer, but still a whole lot. In such a situation, it intuitively seems that we should give one a slightly higher chance of being saved than the other, just as it intuitively seems that we should give each an equal chance of being saved in a trade-off situation where they each have the same amount to suffer.

I also have an intuition against utilitarianism. So if we use intuitions as our guide, it seems to leave us nowhere. Maybe one or more of these intuitions can be “evolutionarily debunked”, sparing one of the three approaches, but I don’t really have an idea of how that would go.

Indeed, for another example:

• Say a child (child A) is about to be tortured for the rest of their life, which you can prevent for £2.

• However another child (child B) has just dropped their ice cream, which has slightly upset them (although not much, they are just a little sad). You could buy them another ice cream for £2, which would cheer them up.

You only have £2, so you can only help one of the children. Under your system you would give some (admittedly (hopefully!) very small) chance that you would help child B. However, in the case that you rolled your 3^^^3-sided die and it came up in favour of B, as you started walking over to the ice cream van it seems like it would be hard to say you were acting in accordance with "reason and empathy".

I had anticipated this objection when I wrote my post. In footnote 4, I wrote:

“Notice that with certain types of pain episodes, such as a torture episode vs a minor headache, there is such a big gap in amount of suffering between them that any clear-headed person in the world would rather endure an infinite number of minor headaches (i.e. live with very frequent minor headaches in an immortal life) than to endure the torture episode. This would explain why in a choice situation in which we can either save a person from torture or x number of persons from a minor headache (or 1 person from x minor headaches), we would just save the person who would be tortured rather than give the other(s) even the slightest chance of being helped. And I think this accords with our intuition well.”

Admittedly, there are two potential problems with what I say in my footnote.

1) It’s not clear that any clear-headed person would do as I say, since it seems possible that the what-it’s-like-of-going-through-infinite-minor-headaches can be experientially worse than the what-it’s-like-of-going-through-a-torture-session.

2) Even if any clear-headed person would do as I say, it’s not clear that this can yield the result that we should outright save the one person from torture. It depends on how the math works out, and I’m terrible at math lol. Does 1/infinity = 0? If so, then it seems we ought to give the person who would suffer the minor headache a 0% chance (i.e. we ought to outright save the other person from torture).

But the biggest problem is that even if what I say in my footnote can adequately address this objection, it cannot adequately address your previous objection. This is because in your previous example concerning Alice, I think she should have a high chance of being saved (e.g. around 90%) no matter how big n is, and what I say in footnote 4 cannot help me get that result.

All in all, your previous objection shows that my own approach leads to a result that I cannot accept. Thanks for that (haha). However, I should note that it doesn’t make the utilitarian view more plausible to me because, as I said, your previous objection is very much in the spirit of my own objection against utilitarianism.

I wonder if dropping the idea that we should give each person a chance of being saved proportional to his/her suffering requires dropping the idea that who suffers matters... I used the latter idea to justify the former idea, but maybe the latter idea can also be used to justify something weaker - something more acceptable to me... (although I feel doubtful about this).

comment by Alex_Barry · 2018-04-13T20:43:32.040Z · score: 0 (0 votes) · EA(p) · GW(p)

So you're suggesting that most people aggregate different people's experiences as follows:

Well most EAs, probably not most people :P

But yes, I think most EAs apply this 'merchandise' approach weighed by conscious experience.

In regard to your discussion of moral theories and side constraints: I know there is a range of moral theories that can have rules etc. My objection was that if you were not in fact arguing that total pain (or whatever) is the sole determiner of what action is right, then you should make this clear from the start (and ideally bake it into what you mean by 'morally worse').

Basically I think sentences like:

"I don't think what we ought to do is to OUTRIGHT prevent the morally worse case"

are sufficiently far from standard usage (at least in EA circles) that you should flag up that you are using 'morally worse' in a nonstandard way (and possibly use a different term). I have the intuition that if you say "X is the morally relevant factor" then which actions you say are right will depend solely on how they affect X.

Hence if you say 'what is morally relevant is the maximal pain being experienced by someone', then I expect all I need to tell you about actions for you to decide between them is how they affect the maximal pain being experienced by someone.

Obviously language is flexible but I think if you deviate from this without clear disclaimers it is liable to cause confusion. (Again, at least in EA circles).

I think your argument that people should have a chance to be helped in proportion to how much we could help them is completely separate from your point about Comparability, and we should keep the discussions separate to avoid the chance of confusion. I'll make a separate comment to discuss it.

comment by Jeffhe · 2018-04-13T22:01:35.715Z · score: 0 (0 votes) · EA(p) · GW(p)

So you're suggesting that most people aggregate different people's experiences as follows:

FYI, I have since reworded this as "So you're suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:"

I think it is a more precise formulation. In any case, we're on the same page.

Basically I think sentences like:

"I don't think what we ought to do is to OUTRIGHT prevent the morally worse case"

are sufficiently far from standard usage (at least in EA circles) that you should flag up that you are using 'morally worse' in a nonstandard way (and possibly use a different term). I have the intuition that if you say "X is the morally relevant factor" then which actions you say are right will depend solely on how they affect X.

The way I phrased Objection 1 was as follows: "One might reply that two instances of suffering is morally worse than one instance of the same kind of suffering and that we should prevent the morally worse case (e.g., the two instances of suffering), so we should help Amy and Susie."

Notice that this objection in argument form is as follows:

P1) Two people suffering a given pain is morally worse than one other person suffering the given pain.

P2) We ought to prevent the morally worst case.

C) Therefore, we should help Amy and Susie over Bob.

My argument with kbog concerns P1). As I mentioned, one basic premise that kbog and I have been working with is this: If two people suffering involves more pain than one person suffering, then two people suffering is morally worse (i.e. twice as morally bad) as one person suffering.

Given this premise, I've been arguing that two people suffering a given pain does not involve more pain than one person suffering the given pain, and thus P1) is false. And kbog has been arguing that two people suffering a given pain does involve more pain than one person suffering the given pain, and thus P1) is true. Of course, both of us are right on our respective preferred sense of "involves more pain than". So I recently started arguing that my sense is the sense that really matters.

Anyways, notice that P2) has not been debated. I understand that consequentialists would accept P2). But other moral theorists would not, because not all the things they take to matter (i.e., to be morally relevant, to have moral value, etc.) can be baked into/captured by the moral worseness/goodness of a state of affairs. Thus, it seems natural for them to talk of side constraints, etc. For me, two things matter: experience matters, and who suffers it matters. I think the latter morally relevant thing is best captured as a side constraint.

However, you are right that I should make this aspect of my work more clear.

comment by Alex_Barry · 2018-04-13T22:14:03.684Z · score: 0 (0 votes) · EA(p) · GW(p)

Some of your quotes are broken in your comment, you need a > for each paragraph (and two >s for double quotes etc.)

I know for most of your post you were arguing with standard definitions, but that made it all the more jarring when you switched!

I actually think most (maybe all?) moral theories can be baked into goodness/badness of states of affairs. If you want to incorporate a side-constraint, you can just define any state of affairs in which you violate that constraint as being worse than all other states of affairs. I do agree this can be less natural, but the formulations are not incompatible.

In any case as I have given you plenty of other comment threads to think about I am happy to leave this one here - my point was just a call for clarity.

comment by Jeffhe · 2018-04-13T23:51:40.349Z · score: 1 (1 votes) · EA(p) · GW(p)

I certainly did not mean to cause confusion, and I apologize for wasting any of your time that you spent trying to make sense of things.

By "you switched", do you mean that in my response to Objection 1, I gave the impression that only experience matters to me, such that when I mentioned in my response to Objection 2 that who suffers matters to me too, it seems like I've switched?

And thanks, I have fixed the broken quote. Btw, do you know how to italicize words?

comment by Alex_Barry · 2018-04-14T07:54:28.537Z · score: 0 (0 votes) · EA(p) · GW(p)

Yes, "switched" was a bit strong, I meant that by default people will assume a standard usage, so if you only reveal later that actually you are using a non-standard definition people will be surprised. I guess despite your response to Objection 2 I was unsure in this case whether you were arguing in terms of (what are at least to me) conventional definitions or not, and I had assumed you were.

To italicize words, put *s on either side, like *this* (when you are replying to a comment there is a 'show help' button that explains some of these things).

comment by Jeffhe · 2018-04-22T23:50:19.311Z · score: 1 (1 votes) · EA(p) · GW(p)

I see the problem. I will fix this. Thanks.

comment by kbog · 2018-03-19T21:15:55.146Z · score: 0 (2 votes) · EA(p) · GW(p)

But that seems counter to what reason and empathy would lead me to do.

What? It seems to be exactly what reason and empathy would lead one to do. Reason and empathy don't tell you to arbitrarily save fewer people. At best, you could argue that empathy pulls you in neither direction, while conceding that it's still more reasonable to save more rather than fewer. You've not written an argument, just a bald assertion. You're dressing it up to look like a philosophical argument, but there is none.

P1. The degree of suffering in the case of Amy and Susie would be the same as in the case of Bob, even though the number of instances of suffering would differ (e.g., 2:1).

This is because it was stipulated from the outset that Amy, Susie and Bob would each suffer from an equally painful disease if we didn’t help them. Relatedly, and as suggested earlier, it’s not like Amy and Susie would each somehow suffer more than Bob would suffer just because there would be two of them suffering; they would each suffer what they would each suffer (which is no more than what Bob would suffer) and no more. They surely can’t – and therefore wouldn’t – suffer each other’s pain too. For example, Amy cannot, on top of her own suffering, also suffer Susie’s pain, because Susie’s pain cannot be transferred to Amy, and vice versa.

This doesn't answer the objection. There is more suffering when it happens to two people, and more suffering is morally worse. The fact that the level of suffering in each person is the same doesn't imply that they are morally equivalent outcomes. It's like if I said, "safer cars will reduce the number of car fatalities," and then you protested "but EACH CAR FATALITY WILL BE JUST AS BAD", totally ignoring the point that I'm making.

Here, I assume you would say that we should save Emma from the major headache

This is a textbook case of begging the question. No one you're arguing with will grant that we should act differently for cases 2 and 3.

comment by Jeffhe · 2018-03-19T23:17:29.487Z · score: 0 (2 votes) · EA(p) · GW(p)

1) "Reason and empathy don't tell you to arbitrarily save fewer people."

I never said they tell me to arbitrarily save fewer people. I said that they tell us to give each person an equal chance of being saved.

2) "This doesn't answer the objection."

That premise (as indicated by "P1."), plus my support for that premise, was not meant to answer an objection. It was just the first premise of an argument that was meant to answer objection 1.

3) "There is more suffering when it happens to two people, and more suffering is morally worse."

Yes, there are more instances of suffering. But as I have tried to argue, x instances of suffering spread across x people is just as morally bad as 1 instance of the same kind of suffering had by one other person. If by 'more suffering' you meant worse suffering in an experiential sense, then please see my first response to Michael.

4) "The fact that the level of suffering in each person is the same doesn't imply that they are morally equivalent outcomes."

I didn't say it was implied. If I thought it was implied, then my response to Objection 1 would have been much shorter.

5) "This is a textbook case of begging the question."

I don't see how my assumption is anywhere near what I want to conclude. It seems to me like an assumption that is plausibly shared by all. That's why I assumed it in the first place: to show that my conclusion can be arrived at from shared assumptions.

6) "No one you're arguing with will grant that we should act differently for cases 2 and 3."

I would hesitate to use "No one". If this were true, then I would have expected more comments along those lines. More importantly, I wonder why one wouldn't grant that we should act differently in choice situations 2 and 3. If the reason boils down to the thought that 5 minor pains are experientially worse than 1 major pain, regardless of whether the 5 minor pains are all felt by one person or spread across 5 different people, then I would point you to my conversation with Michael_S.

Finally, I just want to say that all the people I've conversed with on this forum so far have been very friendly and not dismissive, despite perhaps some differences in view. I wasn't surprised by that because (presumably) most people on here are effective altruists, and it would seem rather odd for an effective altruist - someone who identifies with helping the less fortunate - to be unfriendly or dismissive. Anyways, I do hope to remain unsurprised by that. I think only in a friendly and non-dismissive atmosphere can the interlocutors benefit from their conversation.

comment by kbog · 2018-03-20T00:33:32.027Z · score: 0 (2 votes) · EA(p) · GW(p)

I never said they tell me to arbitrarily save fewer people. I said that they tell us to give each person an equal chance of being saved

But that involves arbitrarily saving fewer people. I mean, you could call that non-arbitrary, since you have some kind of reason for it, but it's fewer people all the same, and it's not clear how reason or empathy would generally lead one to do this. So there is no prima facie case for the position that you're defending.

Yes, there are more instances of suffering. But as I have tried to argue, x instances of suffering spread across x people is just as morally bad as 1 instance of the same kind of suffering had by one other person.

But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people, which presupposes that more total suffering does not necessarily imply worseness in such gedanken.

I didn't say it was implied.

But you need to defend such an implication if you wish to claim that it is not morally worse for more people to suffer an equal amount.

I don't see how my assumption is anywhere near what I want to conclude.

Because anyone who buys the basic arguments for helping more people rather than fewer will often prefer to alleviate five minor headaches rather than one major one, regardless of whether they happen to different people or not.

It seems to me like an assumption that is plausibly shared by all.

OK, well: it's not.

More importantly, I wonder why one wouldn't grant that we should act differently in choice situations 2 and 3.

Because there is no reason for the distribution of certain wrongs across different people to affect the badness of those wrongs, as our account of the badness of those wrongs does not depend on any facts about the particular people to whom they occur.

I would hesitate to use "No one". If this were true, then I would have expected more comments along those lines.

brianwang712's response based on the Original Position implies that the decision to not prevent 5 minor headaches is wrong, even though he didn't take the time to spell it out.

If the reason boils down to the thought that 5 minor pains are experientially worse than 1 major pain, regardless of whether the 5 minor pains are all felt by one person or spread across 5 different people, then I would point you to my conversation with Michael_S

Look, your comments towards him are very long and convoluted. I'm not about to wade through it just to find the specific 1-2 sentences where you go astray. Especially when you stuff posts with "updates" alongside copies of your original comments, I find it almost painful to look through.

Finally, I just want to say that all the people I've conversed with on this forum so far have been very friendly and not dismissive, despite perhaps some differences in view. I wasn't surprised by that because (presumably) most people on here are effective altruists, and it would seem rather odd for an effective altruist - someone who identifies with helping the less fortunate - to be unfriendly or dismissive. Anyways, I do hope to remain unsurprised by that. I think only in a friendly and non-dismissive atmosphere can the interlocutors benefit from their conversation.

I don't see why identifying with helping the less fortunate (something which almost everybody does, in some fashion or other) implies that we should hold philosophical arguments to gentle standards. The time and knowledge of people who help the less fortunate is particularly valuable, so one should be willing and able to credibly signal the occasional times when one is confident that the people who help the less fortunate ought to be focusing elsewhere. Conversations needn't be friendly to be informative, and I'm really not being dismissive about anything you write which I do have the time to read.

comment by Jeffhe · 2018-03-20T03:35:47.580Z · score: 0 (2 votes) · EA(p) · GW(p)

1) "But that involves arbitrarily saving fewer people. I mean, you could call that non-arbitrary, since you have some kind of reason for it, but it's fewer people all the same, and it's not clear how reason or empathy would generally lead one to do this. So there is no prima facie case for the position that you're defending."

To arbitrarily save fewer people is to save them on a whim. I am not suggesting that we should save them on a whim. I am suggesting that we should give each person an equal chance of being saved. They are completely different ideas.

2) "But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people, which presupposes that more total suffering does not necessarily imply worseness in such gedanken."

Please show me where I supposed that 5 minor headaches are MORALLY worse when they happen to one person than when they happen to multiple people. In both choice situations 2 and 3, I provided REASONS for saying

A) why 5 minor headaches all had by one person is morally worse than 1 major headache had by one person, and

B) why 1 major headache had by one person is morally worse than 5 minor headaches spread across 5 people.

From A. and B., you can infer that I believe 5 minor headaches all had by one person is morally worse than 5 minor headaches spread across 5 persons, but don't say that I supposed this. I provided reasons. You can reject those reasons, but that is a different story.

If you meant that I supposed that 5 minor headaches are EXPERIENTIALLY worse when they happen to one person than when they happen to multiple people, sure, it can be inferred from what I wrote that I was supposing this. But importantly, to make this assumption is not a stretch, as it seems (at least to me) like an assumption plausibly shared by many. But it turns out that Michael_S disagreed, at which point I was glad to defend this assumption. More importantly, even if I made this supposition (as we have to start from somewhere), it does not mean that by doing so, I was simply assuming and not arguing for what you quoted.

3) "But you need to defend such an implication if you wish to claim that it is not morally worse for more people to suffer an equal amount."

If you don't see an argument in my response to Objection 1, I'll live with that since I put a lot of time into writing that essay and no one else has said the same.

4) "Because anyone who buys the basic arguments for helping more people rather than fewer will often prefer to alleviate five minor headaches rather than one major one, regardless of whether they happen to different people or not."

By basic arguments, I presume you mean utilitarian arguments. First off, I was not writing this for a utilitarian audience. I was writing this for an audience that finds it intuitive to save Amy and Susie instead of Bob, and I was trying to show how other (perhaps more basic) intuitions that I assumed were commonly held (i.e., that we should save one person from a major headache instead of 5 people each from a minor headache) could provide the ingredients for showing that we should provide each of them with an equal chance of being helped.

If I was writing strictly for a utilitarian audience, I would have taken a different approach, which would have included explaining why 5 pains all had by one person is experientially worse than 5 pains spread across 5 people.

Many people who are effective altruists have reasons for helping people, such as the pond argument, but not reasons for helping the many over the few. So it is uncharitable of you to simply assume that my audience are all utilitarians.

5) "brianwang712's response based on the Original Position implies that the decision to not prevent 5 minor headaches is wrong, even though he didn't take the time to spell it out."

Not true. It is not clear what the conclusion from the original position would be when the levels of pain for the people involved differ. Some people are extremely risk-averse to extreme pains, and may not agree to a policy of helping the greater number when what is at stake for the few is really bad pain.

6) "Look, your comments towards him are very long and convoluted. I'm not about to wade through it just to find the specific 1-2 sentences where you go astray. Especially when you stuff posts with "updates" alongside copies of your original comments, I find it almost painful to look through."

I'm sorry you find them convoluted. I updated the very first replies to Brian and Michael_S in order to try to make my position more clear for first-time readers like you. I spent a lot of time trying to make my replies more clear because I don't want to waste readers' time. If I failed to do that, I can only say I tried.

7) "I don't see why identifying with helping the less fortunate (something which almost everybody does, in some fashion or other) implies that we should hold philosophical arguments to gentle standards."

I never asked for gentle standards. I asked for a non-dismissive and friendly attitude.

8) "The time and knowledge of people who help the less fortunate is particularly valuable, so one should be willing and able to credibly signal the occasional times when one is confident that the people who help the less fortunate ought to be focusing elsewhere."

I didn't quite understand the latter half, but yes, their time is valuable, which is why I've tried to be as clear I can. In any case, it is a good thing to critically examine one's own views from time to time, no matter how vital one's time seems under the supposition of that view. So - if I understood the latter part correctly - you needn't worry so much about saving other people's time from my post.

9) "Conversations mustn't be friendly to be informative, and I'm really not being dismissive about anything you write which I do have the time to read."

A person (at least speaking for myself) is much more receptive to the content of another's comment when it is put in a friendly (though demanding) manner. Thus, friendliness helps make conversation more informative.

Whereas dismissive and unfriendly comments like "I'm not about to wade through it just to find the specific 1-2 sentences where you go astray." or "I find it almost painful to look through." do not.

P.S. I will not be replying to any more of your comments that I feel are either uncharitable, dismissive or show a lack of effort spent on understanding my position.

Oops, I just noticed I missed a comment you made:

10) "Because there is no reason for the distribution of certain wrongs across different people to affect the badness of those wrongs, as our account of the badness of those wrongs does not depend on any facts about the particular people to whom they occur."

As I see it, a case or state of affairs in which 5 minor headaches are all felt by one person is MORALLY WORSE than a case in which 5 minor headaches are spread across 5 persons because 5 minor headaches all felt by one person is EXPERIENTIALLY WORSE than 5 minor headaches spread across 5 persons.

I take experience to be the only morally relevant factor, and in this way, I am a moral singularist (as opposed to pluralist). For why I think the former is experientially worse than the latter, please at least read my first reply to Michael_S. Thanks.

comment by kbog · 2018-03-20T07:53:15.650Z · score: 0 (0 votes) · EA(p) · GW(p)

From A. and B., you can infer that I believe 5 minor headaches all had by one person is morally worse than 5 minor headaches spread across 5 persons, but don't say that I supposed this. I provided reasons.

You simply assert that we would rather save Emma's major headache rather than five minor ones in case 3. But if you've stipulated that people would rather endure one big headache than five minor ones, then the big headache has more disutility. Just because the minor ones are split among different people doesn't change the story. I just don't follow the argument here.

If you don't see an argument in my response to Objection 1, I'll live with that since I put a lot of time into writing that essay and no one else has said the same.

My whole point here is that your response to Objection 1 doesn't do any work to convince us of your premises regarding the headaches. Yeah there's an argument, but its premise is both contentious and undefended.

Many people who are effective altruists have reasons for helping people, such as the pond argument, but not reasons for helping the many over the few. So it is uncharitable of you to simply assume that my audience are all utilitarians.

I'm not just speaking for utilitarians, I'm speaking for anyone who doesn't buy the premise for choice 3. I expect that lots of non-utilitarians would reject it as well.

Not true. It is not clear what the conclusion from the original position would be when the levels of pain for the people involved differ. Some people are extremely risk-adverse to extreme pains, and may not agree to a policy of helping the greater number when what is at stake for the few is really bad pain

The original position argument is not an empirical prediction of what humans would choose in such-and-such circumstances; it's an analysis of what we would expect of them as the rational thing to do. So the hedonist utilitarian points out that risk aversion violates the axioms of expected utility theory and that it would be rational of people not to make that choice, whereas the preference utilitarian just calibrates the utility scale to people's preferences anyway, so that there isn't any dissonance between what people would select and what utilitarianism says.

comment by Jeffhe · 2018-03-20T17:59:02.321Z · score: 0 (0 votes) · EA(p) · GW(p)

1) "You simply assert that we would rather save Emma's major headache rather than five minor ones in case 3. But if you've stipulated that people would rather endure one big headache than five minor ones, then the big headache has more disutility. Just because the minor ones are split among different people doesn't change the story. I just don't follow the argument here."

I DO NOT simply assert this. In case 3, I wrote, "Here, I assume you would say that we should save Emma from the major headache or at least give her a higher chance of being saved because a major headache is morally worse than 5 minor headaches spread across 5 persons and it's morally worse BECAUSE a major headache hurts more (in some non-arbitrary sense) than the 5 minor headaches spread across 5 people. Here, the non-arbitrary sense is straightforward: Emma would be hurting more than any one of the 5 others who would each experience only 1 minor headache." (I capped 'because' for emphasis here)

You would not buy that reason I gave (because you believe 5 minor headaches, spread across 5 people, is experientially worse than a major headache), but that is a different story. If you were more charitable and patient while reading my post, thinking about who my audience is (many of whom aren't utilitarians and don't buy into interpersonal aggregation of pains) etc, I doubt you would be leveling all the accusations you have against me. It wastes both your time and my time to have to deal with them.

2) "My whole point here is that your response to Objection 1 doesn't do any work to convince us of your premises regarding the headaches. Yeah there's an argument, but its premise is both contentious and undefended."

I was just using your words. You said "But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people." As I said, I assumed a premise that I thought the vast majority of my audience would agree with (i.e., at bottom, that 5 minor headaches all had by one person is experientially worse than 5 minor headaches spread across 5 people). If YOU find that premise contentious, great, we can have a discussion about it, but please don't make it sound like my argument doesn't do any work for anyone.

3) "I'm not just speaking for utilitarians, I'm speaking for anyone who doesn't buy the premise for choice 3. I expect that lots of non-utilitarians would reject it as well."

Well, I don't, which is why I assumed the premise in the first place. I mean I wouldn't assume a premise that I thought the majority of my audience will disagree with. It's certainly not obvious to me that 5 minor headaches all had by one person is experientially just as bad as 5 minor headaches spread across 5 people.

4) "The original position argument is not an empirical prediction of what humans would choose in such-and-such circumstances, it's an analysis of what we would expect of them as the rational thing to do, so the hedonist utilitarian points out that risk aversion violates the axioms of expected utility theory and it would be rational of people to not make that choice, whereas the preference utilitarian just calibrates the utility scale to people's preferences anyway so that there isn't any dissonance between what people would select and what utilitarianism says."

Sorry, I'm not familiar with the axioms of expected utility theory or with preference utilitarianism. But perhaps I can understand your position by asking 2 questions:

1) According to you, would it be rational behind the veil of ignorance to agree to a policy that said: in a trade-off situation between saving a person from torture or saving another person from torture AND saving a third person from a minor headache, the latter two are to be saved?

2) In an actual trade-off situation of this kind, would you think we ought to save the latter two?

comment by kbog · 2018-03-20T21:12:57.925Z · score: 0 (0 votes) · EA(p) · GW(p)

Well, I don't, which is why I assumed the premise in the first place. I mean I wouldn't assume a premise that I thought the majority of my audience will disagree with. It's certainly not obvious to me that 5 minor headaches all had by one person is experientially just as bad as 5 minor headaches spread across 5 people.

But if anyone did accept that premise then they would already believe that the number of people suffering doesn't matter, just the intensity. In other words, the only people to whom this argument applies are people who would agree with you in the first place that Amy and Susie's suffering is not a greater problem than Bob's suffering. So I can't tell if it's actually doing any work. If not, then it's just adding unnecessary length. That's what I mean when I say that it's too long. Instead of adding the story with the headaches in a separate counterargument, you could have just said all the same things about Amy and Susie and Bob's diseases in the first place, making your claim that Amy and Susie's diseases are not experientially worse than Bob's disease and so on.

Sorry, I'm not familiar with the axioms of expected utility theory or with preference utilitarianism.

PU says that we should assign moral value on the basis of people's preferences for them. So if someone thinks that being tortured is really really really bad, then we say that it is morally really really really bad. We give the same weight to things that people do. If you say that someone is being risk-averse, that means (iff you're using the term correctly) that they're putting so much effort into avoiding a risk that they are reducing their expected utility. That means that they are breaking at least one of the axioms of the Von Neumann-Morgenstern Utility Theorem, which (one would argue, or assert) means that they are being irrational.

1) According to you, would it be rational behind the veil of ignorance to agree to a policy that said: In a trade off situation between saving a person from torture or saving another person from torture AND saving a third person from a minor headache, the latter two are to be saved. 2) In an actual trade off situation of this kind, would you think we ought to save the latter two?

Yes to both.

comment by Jeffhe · 2018-03-21T01:14:02.294Z · score: 0 (0 votes) · EA(p) · GW(p)

1) "But if anyone did accept that premise then they would already believe that the number of people suffering doesn't matter, just the intensity. In other words, the only people to whom this argument applies are people who would agree with you in the first place that Amy and Susie's suffering is not a greater problem than Bob's suffering. So I can't tell if it's actually doing any work. If not, then it's just adding unnecessary length. That's what I mean when I say that it's too long. Instead of adding the story with the headaches in a separate counterargument, you could have just said all the same things about Amy and Susie and Bob's diseases in the first place, making your claim that Amy and Susie's diseases are not experientially worse than Bob's disease and so on."

The reason why I discussed those three cases was to answer the basic question: what makes one state of affairs morally worse than another? Indeed, given my broad audience, some of whom have no philosophy background, I wanted to start from the ground up.

From that discussion, I gathered two principles that I used to support premise 2 of my argument against Objection 1. I say "gathered" and not "deduced" because you actually don't disagree with those two principles, even though you disagree with an assumption I made in one of the cases (i.e. case 3). What your disagreement with that assumption indicates is a disagreement with premise 1 of my argument against Objection 1.

P1. read: "The degree of suffering in the case of Amy and Susie would be the same as in the case of Bob, even though the number of instances of suffering would differ (e.g., 2:1)."

You disagree because you think Amy's and Susie's pains would together be experientially worse than Bob's pain.

All this is to say that I don't think the discussion of the 3 cases was unnecessary, because it served the important preliminary goal of establishing what makes one state of affairs morally worse than another.

However, it seems like I really should have defended P1 of my argument (and similarly my assumption in case 3) more thoroughly. So I do admit that my post is lacking in this respect, which I appreciate your pointing out. I'm also sure there are ways to make it more clear and concise. I will consider your suggested approach during future editing sessions.

Update (Mar 21): After thinking through what you said some more, I've decided I'm going to re-do my response to Objection 1 along the lines of what you're suggesting. Thanks for motivating this improvement.

2) "PU says that we should assign moral value on the basis of people's preferences for them. So if someone thinks that being tortured is really really really bad, then we say that it is morally really really really bad. We give the same weight to things that people do. If you say that someone is being risk-averse, that means (iff you're using the term correctly) that they're putting so much effort into avoiding a risk that they are reducing their expected utility. That means that they are breaking at least one of the axioms of the Von Neumann-Morgenstern Utility Theorem, which (one would argue, or assert) means that they are being irrational."

Thanks for that explanation. I see where I went wrong in my previous reply now, so I concede this point.

3) "Yes to both."

Ok, interesting. And, just out of curiosity, you don't consider this as biting a bullet? I mean there are people who have given up on the veil-of-ignorance approach specifically because they think it is morally unacceptable to not give the single person ANY chance of being saved from torture just because it comes with the additional, and relatively trivial, benefit of relieving a minor headache.

P.S. I will reply to your other comment after I've read the paper you linked me to. But, I do want to note that you were being very uncharitable in your reply that "Stipulations can't be true or false - they're stipulations. It's a thought experiment for epistemic purposes." Clearly stipulations/suppositions cannot be false relative to the thought experiment. But surely they can be false relative to reality - to what is actually the case.

comment by kbog · 2018-03-24T21:29:37.170Z · score: 0 (0 votes) · EA(p) · GW(p)

I don't think the discussion of the 3 cases was unnecessary, because it served the important preliminary goal of establishing what makes one state of affairs morally worse than another.

But you are trying to argue about what makes one state of affairs morally worse than another. That is what you are trying to do in the first place. So it's not, and cannot be, preliminary. And if you started from the ground up then it would have contained something that carried force to utilitarians for instance.

If you disagree, try to sketch out a view (that isn't blatantly logically inconsistent) where someone would have agreed with you on Amy/Susan/Bob but disagreed on the headaches.

Ok, interesting. And, just out of curiosity, you don't consider this as biting a bullet?

How is it biting a bullet to prefer to save one person being tortured AND one person with a headache, compared to simply saving one person being tortured?

I struggle to see how anyone might find that position counterintuitive. Rather, accepting the converse choice seems like biting the bullet.

I mean there are people who have given up on the veil-of-ignorance approach specifically because they think it is morally unacceptable to not give the single person ANY chance of being saved from torture just because it comes with the additional, and relatively trivial, benefit of relieving a minor headache.

Making the other choice also gives someone no chance of being saved from torture, and it also gives someone no chance of being saved from a headache, so I don't see what could possibly lead one to prefer it.

And merely having a "chance" of being saved is morally irrelevant. Chances are not things that exist in physical or experiential terms the way that torture and suffering do. No one gives a shit about merely having a chance of being saved; someone who had a chance of being saved and yet is not saved is no better off than someone who had no chance of being saved from the beginning. The reason that we value a chance of being saved is that it may lead to us actually being saved. We don't sit on the mere fact of the chance and covet it as though it were something to value on its own.

comment by Jeffhe · 2018-03-27T22:54:58.595Z · score: 0 (0 votes) · EA(p) · GW(p)

1) "But you are trying to argue about what makes one state of affairs morally worse than another. That is what you are trying to do in the first place. So it's not, and cannot be, preliminary. And if you started from the ground up then it would have contained something that carried force to utilitarians for instance.

If you disagree, try to sketch out a view (that isn't blatantly logically inconsistent) where someone would have agreed with you on Amy/Susan/Bob but disagreed on the headaches."

Arguing for what factors are morally relevant in determining whether one case is morally worse than another is preliminary to arguing that some specific case (i.e. Amy and Susie suffering) is morally just as bad as another specific case (i.e. Bob suffering). My 3 cases were only meant to do the former. From the 3 cases, I concluded:

  1. That the amount of pain is a morally relevant factor in determining whether one case is morally worse than another.

  2. That the number of instances of pain is a morally relevant factor only to the extent that it affects the amount of pain at issue (i.e. the number of instances of pain is not morally relevant in itself).

I take that to be preliminary work. Where I really dropped the ball was in my lackluster argument for P1 (and, likewise, for my assumption in case 3). No utilitarian would have found it convincing, and thus I would not have succeeded in convincing them that the outcome in which Amy and Susie both suffer is morally just as bad as the outcome in which only Bob suffers, even if they agreed with 1. and 2., which they do.

Anyways, to the extent that you think my argument for P1 sucked to the point where it was like I was begging the question against the utilitarian, I'm happy to concede this. I have since reworked my response to Objection 1 as a result, thanks to you.

2) "How is it biting a bullet to prefer to save one person being tortured AND one person with a headache, compared to simply saving one person being tortured?

I struggle to see how anyone might find that position counterintuitive. Rather, accepting the converse choice seems like biting the bullet."

Because you effectively deny the one person ANY CHANCE of being helped from torture SIMPLY BECAUSE you can prevent an additional minor headache - a very very very minor one - by helping the two. Anyways, a lot of people think that is pretty extreme. If you don't think so, that's perhaps mainly because you don't believe WHO SUFFERS MATTERS. If that's the case, then I would encourage you to reread my response to Objection 2, where I make the case that who suffers is of moral significance.

3) "Making the other choice also gives someone no chance of being saved from torture, and it also gives someone no chance of being saved from a headache, so I don't see what could possibly lead one to prefer it."

You do give each party a 50% chance of being saved by choosing to flip a coin, instead of choosing to just help one party over the other. I prefer giving a 50% chance to each party because

A) I don't think the case in which the two would suffer involves more pain than the case in which the one would (given our discussion under Michael_S's post),

B) I believe who suffers matters (given my response to Objection 2)

Even if you disagree with me on A), I think if you agreed with me on B), you would at least give the one person a 49% chance of being helped, and the other two a 51% chance of being helped.

It is true that once the coin has been flipped, one party still ends up suffering at the end of the day. But that does not mean that they didn't at one point actually have a 50% chance of being helped.
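The weighted-lottery idea floated above can be sketched in code. This is a hypothetical illustration of the procedure being proposed, not anything from the discussion itself; the option labels and the 51/49 weights are placeholders.

```python
import random

# Sketch of a weighted lottery: rather than deterministically helping the
# larger group, give the two-person group a 51% chance of being helped and
# the single person a 49% chance. Labels and weights are hypothetical.

def weighted_lottery(weights, rng=random.random):
    """Pick an option with probability proportional to its weight."""
    total = sum(weights.values())
    r = rng() * total
    for option, w in weights.items():
        r -= w
        if r <= 0:
            return option
    return option  # fallback for floating-point edge cases

choice = weighted_lottery({"help Amy and Susie": 51, "help Bob": 49})
print(choice)  # one of the two options, chosen with 51/49 odds
```

Over many draws, the larger group is helped about 51% of the time, so each party retains a real chance of being the one saved.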

4) "And merely having a "chance" of being saved is morally irrelevant. Chances are not things that exist in physical or experiential terms the way that torture and suffering do. No one gives a shit about merely having a chance of being saved; someone who had a chance of being saved and yet is not saved is no better off than someone who had no chance of being saved from the beginning. The reason that we value a chance of being saved is that it may lead to us actually being saved. We don't sit on the mere fact of the chance and covet it as though it were something to value on its own."

I agree that the only reason that we value a chance of being saved is that it may lead to us actually being saved, and in that sense, we don't value it in itself. But I don't get why that entails that giving each party a 50% chance of being saved is not what we should do.

Btw, sorry I haven't replied to your response below brian's discussion yet. I haven't found the time to read that article you linked. I do plan to reply sometime soon.

Also, can you tell me how to quote someone's text in the way that you do in your responses to me? It is much cleaner than my number listing and quotations. Thanks.

comment by kbog · 2018-03-28T04:24:44.276Z · score: 0 (0 votes) · EA(p) · GW(p)

Because you effectively deny the one person ANY CHANCE of being helped from torture

Your scenario didn't say that probabilistic strategies were a possible response, but suppose that they are. Then it's true that, if I choose a 100% strategy, the other person has 0% chance of being saved, whereas if I choose a 99% strategy, the other person has a 1% chance of being saved. But you've given no reason to think that this would be any better. It is bad that one person has a 1% greater chance of torture, but it's good that the other person has 1% less chance of torture. As long as agents simply have a preference to avoid torture, and are following the axioms of utility theory (completeness, transitivity, substitutability, decomposability, monotonicity, and continuity) then going from 0% to 1% is exactly as good as going from 99% to 100%.
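The linearity claim in the paragraph above can be illustrated with a small numeric sketch. The utility values below are hypothetical placeholders, chosen only to show that under expected-utility theory a 1% shift in the chance of being saved changes expected utility by the same amount at either end of the probability scale.

```python
# Hypothetical utilities: 0 for being saved, -100 for being tortured.
U_SAVED = 0.0
U_TORTURE = -100.0

def expected_utility(p_saved):
    """Expected utility given probability p_saved of being saved."""
    return p_saved * U_SAVED + (1 - p_saved) * U_TORTURE

# Going from a 0% to a 1% chance of being saved...
gain_low = expected_utility(0.01) - expected_utility(0.00)
# ...is worth the same as going from a 99% to a 100% chance.
gain_high = expected_utility(1.00) - expected_utility(0.99)

print(gain_low, gain_high)  # both gains are equal (up to float rounding)
```

Because expected utility is linear in probability, the value of a percentage point of "chance" does not depend on where on the scale it sits, which is what the argument above relies on.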

SIMPLY BECAUSE you can prevent an additional minor headache - a very very very minor one - by helping the two.

That's not true. I deny the first person any chance of being helped from torture because it denies the second person any chance of being tortured and it saves the 3rd person from an additional minor pain.

Anyways, a lot of people think that is pretty extreme.

I really don't see it as extreme. I'm not sure that many people would.

A) I don't think the case in which the two would suffer involves more pain than the case in which the one would (given our discussion under Michael_S's post),

B) I believe who suffers matters (given my response to Objection 2)

First, I don't see how either of these claims imply that the right answer is 50%. Second, for B), you seem to be simply claiming that interpersonal aggregation of utility is meaningless, rather than making any claims about particular individuals' suffering being more or less important. The problem is that no one is claiming that anyone's suffering will disappear or stop carrying moral force, rather we are claiming that each person's suffering counts for a reason while two reasons pointing in favor of a course of action are stronger than one reason.

Even if you disagree with me on A), I think if you agreed with me on B), you would at least give the one person a 49% chance of being helped, and the other two a 51% chance of being helped.

Again I cannot tell where you got these numbers from.

It is true that once the coin has been flipped, one party still ends up suffering at the end of the day. But that does not mean that they didn't at one point actually have a 50% chance of being helped.

But it does mean that they don't care.

But I don't get why that entails that giving each party a 50% chance of being saved is not what we should do.

If agents don't have special preferences over the chances of the experiences that they have then they just have preferences over the experiences. Then, unless they violate the von Neumann-Morgenstern utility theorem, their expected utility is linear with the probability of getting this or that experience, as opposed to being suddenly higher merely because they had a 'chance.'

Also, can you tell me how to quote someone's text in the way that you do in your responses to me?

use >
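For example, assuming the forum uses standard markdown, prefixing a line with `>` renders it as an indented quote block:

```
> Because you effectively deny the one person ANY CHANCE of being helped from torture
```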