Posts

Efforts to develop the caring emotion and intellectual virtues in people rank where on EA's priority list? 2020-03-14T22:01:15.901Z
Is Effective Altruism fundamentally flawed? 2018-03-13T02:18:35.250Z

Comments

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-04-24T03:26:21.316Z · EA · GW

Because you told me that it's the same amount of pain as five minor toothaches and you also told me that each minor toothache is 1 base unit of pain.

Where in the supposition or the line of reasoning that I laid out earlier (i.e. P1) through to P5)) did I say that 1 major toothache involves the same amount of pain as 5 minor toothaches?

I attributed that line of reasoning to you because I thought that was how you would get to C) from the supposition that 5 minor toothaches had by one person is experientially just as bad as 1 major toothache had by one person.

But you then denied that that line of reasoning represents your line of reasoning. Specifically, you denied that P1) is the basis for asserting P2). When I asked you what your basis for P2) is, you asserted that I told you that 1 major toothache involves the same amount of pain as five minor toothaches. But where did I say this?

In any case, it would certainly help if you described your actual step-by-step reasoning from the supposition to C), since, apparently, I got it wrong.

If you mean that it feels worse to any given person involved, yes it ignores the difference, but that's clearly the point, so I don't know what you're doing here other than merely restating it and saying "I don't agree."

I'm not merely restating the fact that Reason S ignores this difference. I am restating it as part of a further argument against your sense of "involves more pain than" or "involves the same amount of pain as". The argument in essence goes:

P1) Your sense relies on Reason S.

P2) Reason S does not care about pain-qua-how-it-feels (because it ignores the above stated difference).

P3) We take pain to matter because of how it feels.

C) Therefore, your sense is not in harmony with why pain matters (or at least why we take pain to matter).

I had to restate that Reason S ignores this difference as my support for P2, so it was not merely stated.

On the other hand, you do not care how many people are in pain, and you do not care how much pain someone experiences so long as there is someone else who is in more pain, so if anyone's got to figure out whether or not they "care" enough it's you.

Both accusations are problematic.

The first accusation is not entirely true. It is only in situations where I have to choose between helping, say, Amy and Susie or just Bob (i.e. situations where a person in the minority party does not overlap with anyone in the majority party) that I don't care about how many people are in pain. I would care about how many people are in pain in situations where I have to choose between helping, say, Amy and Susie or just Amy (i.e. situations where the minority party is a mere subset of the majority party). This is due to the strict Pareto principle, which would make Amy and Susie each suffering morally worse than just Amy suffering, but would not make Amy and Susie suffering morally worse than Bob suffering. I don't want to get into this at this point because it's not very relevant to our discussion. Suffice it to say that it's not entirely true that I don't care about how many people are in pain.

The second accusation is plain false. As I made clear in my response to Objection 2 in my post, I think who suffers matters. As a result, if I could either save one person from suffering some pain or another person from suffering a slightly lesser pain, I would give each person a chance of being saved in proportion to how much each has to suffer. This is what I think I should do. Ironically, your second accusation against me is precisely true of what you stand for.

You've pretty much been repeating yourself for the past several weeks, so, sure.

In my past few replies, I have:

1) Outlined in explicit terms a line of reasoning that got from the supposition to C), which I attributed to you.

2) Highlighted that that line of reasoning appealed to Reason S.

3) On that basis, argued that your sense of "involves the same amount of pain as" goes against the spirit of why pain matters.

If that comes across to you as "just repeating myself for the past several weeks", then I can only think that you aren't putting enough effort into trying to understand what I'm saying.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-04-23T16:42:54.031Z · EA · GW

the reason why 5 minor toothaches spread among 5 people is equivalent to 5 minor toothaches had by one person is DIFFERENT from the reason for why 5 minor toothaches had by one person is equivalent to 1 major toothache had by one person.

No, both equivalencies are justified by the fact that they involve the same amount of base units of pain.

So you're saying that just as 5 MiTs/5 people is equivalent to 5 MiTs/1 person because both sides involve the same amount of base units of pain, 5 MiTs/1 person is equivalent to 1 MaT/1 person because both sides involve the same amount of base units of pain (and not because both sides give rise to what-it's-likes that are experientially just as bad).

My question to you then is this: On what basis are you able to say that 1 MaT/1 person involves 5 base units of pain?

But Reason S doesn't give a crap about how bad the pains on the two sides of the equation FEEL

Sure it does. The presence of pain is equivalent to feeling bad. Feeling bad is precisely what is at stake here, and all that I care about.

Reason S cares about the amount of base units of pain there are because pain feels bad, but in my opinion, that doesn't sufficiently show that it cares about pain-qua-how-it-feels. It doesn't sufficiently show this because 5 base units of pain all experienced by one person feel a whole heck of a lot worse than anything felt when 5 base units of pain are spread among 5 people, yet Reason S completely ignores this difference. If Reason S truly cared about pain-qua-how-it-feels, it could not ignore this difference.

I understand where you're coming from though. You hold that Reason S cares about the quantity of base units of pain precisely because pain feels bad, and that this fact alone sufficiently shows that Reason S is in harmony with the fact that we take pain to matter because of how it feels (i.e. that Reason S cares about pain-qua-how-it-feels).

However, given what I just said, I think this fact alone is too weak to show that Reason S is in harmony with the fact that we take pain to matter because of how it feels. So I believe my objection stands.

Have we hit bedrock?

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-04-22T23:50:19.311Z · EA · GW

I see the problem. I will fix this. Thanks.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-04-22T23:37:16.627Z · EA · GW

I was trying to keep the discussions of 'which kind of pain is morally relevant' and of your proposed system of giving people a chance to be helped in proportion to their suffering separate. It might be that they are so intertwined as for this to be unproductive, but I think I would like you to respond to my comment about the latter before we discuss it further.

I think I see the original argument you were going for. The argument against my approach-minus-the-who-suffers-matters-bit is that it renders all resulting states of affairs equally bad, morally speaking, because all resulting states of affairs would involve the same total pain. Given that we should prevent the morally worst case, this means that my approach would have it that we shouldn't take any action, and that's just absurd. Therefore, my way of determining total pain is problematic. Here "a resulting state of affairs" is broadly understood as the indefinite span of time following a possible action, as opposed to any particular point in time following a possible action. On this broad understanding, it seems undeniable that each possible action will result in a state of affairs with the same total maximal pain, since there will surely be someone who suffers maximally at some point in time in each indefinite span of time.

Well, if who suffered didn't matter, then I think leximin should be used to determine which resulting state of affairs is morally worse. According to leximin, we determine which state of affairs is morally better as follows:

Step 1: From each state of affairs, select a person among the worst off in that state of affairs. Compare these people. If there is a person who is better off than the rest, then that state of affairs is morally better than all the others. If these people are all just as badly off, then move on to Step 2.

Step 2: From each state of affairs, select a person among the worst off in that state of affairs, except for the person who has already been selected. Compare these people. If there is a person who is better off than the rest, then that state of affairs is morally better than all the others. If these people are all just as badly off, then move on to Step 3.

And so forth...

According to this method, even though all resulting states of affairs will involve the same total pain, certain resulting states of affairs will be morally better than others, and we should act so as to realize them.
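For concreteness, here is a minimal sketch of the leximin comparison just described. It assumes, purely for illustration, that each person's welfare can be summarized as a number (higher = better off); the function name and the example values are hypothetical, not anything from the original discussion.

```python
def leximin_better(state_a, state_b):
    """Return the morally better state of affairs under leximin,
    or None if they are equally good at every rank.

    Compare the worst off in each state; if they are tied, compare
    the next worst off, and so forth (Step 1, Step 2, ... above).
    """
    for a, b in zip(sorted(state_a), sorted(state_b)):
        if a > b:
            return state_a  # state_a's person at this rank is better off
        if b > a:
            return state_b
    return None  # equally good at every rank

# Hypothetical welfare levels: 0 = left suffering, 10 = helped.
save_amy_and_susie = [10, 10, 0]  # Amy and Susie helped; Bob left suffering
save_bob = [0, 0, 10]             # Bob helped; Amy and Susie left suffering
print(leximin_better(save_amy_and_susie, save_bob))  # -> [10, 10, 0]
```

As the example shows, leximin favors the state in which Amy and Susie are helped, which is the justification mentioned below.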

My appeal to leximin is not ad hoc because it takes an individual's suffering seriously, which is in line with my approach. Notice that leximin can be used to justify saving Susie and Amy over Bob. I don't actually endorse leximin because leximin does not take an individual's identity seriously (i.e. it doesn't treat who suffers as morally relevant, whereas I do: I think who suffers matters).

So that is one response I have to your argument: it grants you that the total pain in each resulting state of affairs would be the same and then argues that this does not mean that all resulting states of affairs would be morally just as bad.

Another response I have is that, most probably, different states of affairs will involve different amounts of pain, and so some states of affairs will be morally worse than others just based on the total pain involved. This becomes more plausible when we keep in mind what the maximum amount of pain is on my approach. It is not the most intense pain, e.g. a torture session. It is not the longest pain, e.g. a minor headache that lasts one's entire life. Rather, it is the most intense pain over the longest period of time. The person who suffers maximum pain is the person who suffers the most intense pain for the longest period of time. Once we realize this, it seems unlikely that each possible action will lead to a state of affairs involving this. (Note that this is to deny A1.)

Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).

But this seems extremely far removed from any day to day intuitions we would have about morality, no? If you flipped a coin to decide whether you should murder each person you met (a very implementable approximation of this result), I doubt many would find this justified on the basis that someone in the future is going to be suffering much more than them.

To give each possible action an equal chance is certainly not to flip a coin between murdering someone or not. At any given moment, I have thousands (or perhaps an infinite number) of possible actions I could take. Murdering the person in front of me is but one. (There are many complexities here that make the discussion hard like what counts as a distinct action.)

However, I understand that the point of your objection is that my approach can allow the murder of an innocent. In this way, your objection is like that classical argument against utilitarianism. Anyways, I guess, like effective altruism, I can recognize rules that forbid murdering etc. I should clarify that my goal is not to come up with a complete moral theory as such. Rather it is to show that we shouldn't use the utilitarian way of determining "total pain", which underlies effective altruism.

I have argued for this by

1) arguing that the utilitarian way of determining "total pain" goes against the spirit of why we take pain to matter in the first place. In response, you have suggested a different framing of utilitarianism on which they are determining a "total moral value" based on people's pains, which is different from determining a total pain. I still need to address this point.

2) responding to your objection against my way of determining "total pain" (first half of this reply)

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-04-22T18:21:31.568Z · EA · GW

Hey Alex! Sorry for the super late response! I have a self-control problem and my life got derailed a bit in the past week >< Anyways, I'm back :P

How much would you be willing to trade off helping people versus the help being distributed fairly? E.g., if you could either have a 95% chance of helping people in proportion to their suffering, but a 5% chance of helping no one, versus a 100% chance of only helping the person suffering the most.

This is an interesting question, adding another layer of chance to the original scenario. As you know, if (there was a 100% chance) I could give each person a chance of being saved in proportion to his/her suffering, I would do that instead of outright saving the person who has the worst to suffer. After all, this is what I think we should do, given that suffering matters, but who suffers also matters. Here, there seems to me to be a nice harmony between these two morally relevant factors – the suffering and the identity of who suffers, where both have a sufficient impact on what we ought to do: we ought to give each person a chance of being saved because who suffers matters, but each person’s chance ought to be in proportion to what he/she has to suffer because suffering also matters.

Now you’re asking me what I would do if there was only a 95% chance that I could give each person a chance of being saved in proportion to his/her suffering with a 5% chance of not helping anyone at all: would I accept the 95% chance or outright save the person who has the worst to suffer?

Well, what should I do? I must admit it’s not clear. I think it comes down to how much weight we should place on the morally relevant factor of identity. The more weight it has, the more likely the answer is that we should accept the 95% chance. I think it’s plausible that it has enough weight such that we should accept a 95% chance, but not a 40% chance. If one is a moral realist, one can accept that there is a correct objective answer yet not know what it is.

One complication is that you mention the notion of fairness. On my account of what matters, the fair thing to do – as you suggest – seems to be to give each person a chance in proportion to his/her suffering. Fairness is often thought of as a morally relevant factor in and of itself, but if the fair thing to do in any given situation is grounded in other morally relevant factors (e.g. experience and identity), then fairness's moral relevance might be derived. If so, I think we can ignore the notion of fairness.

For example:

• Suppose Alice is experiencing 10 units of suffering (by some common metric)

• 10n people (call them group B) are experiencing 1 unit of suffering each

• We can help exactly one person, and reduce their suffering to 0

In this case your principle says we should give Alice a 10/(10+10n) = 1/(n+1) chance of being helped, and each person in group B a 1/(10+10n) chance of being helped. But in the case where we help someone from group B, the level of 'total pain' remains at 10, as Alice is not helped.

This means that n/(n+1) proportion of the time the 'total pain' remains unchanged, i.e. we can make the chance of actually affecting the thing you say is morally important arbitrarily small. It seems strange to say your morality is motivated by x if your actions are so distanced from it that your chance of actually affecting x can go to zero.

This is a fantastic objection. This objection is very much in the spirit of the objection I was raising against utilitarianism: both objections show that the respective approaches can trivialize suffering given enough people (i.e. given that n is large enough). I think this objection shows a serious problem with giving each person a chance of being saved proportional to his/her suffering insofar as it shows that doing so can lead us to give a very, very small chance to someone who has a lot to suffer, when it intuitively seems to me that we should give that person a much higher chance of being saved given how much more he/she has to suffer relative to any other person.
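To make the trivialization concrete, here is a minimal sketch of the arithmetic from the Alice example above (the function name is hypothetical):

```python
def chance_of_helping_alice(n):
    """Alice suffers 10 units; the 10n people in group B suffer 1 unit each.

    Under the proportional-chance rule, Alice's chance of being helped is
    10 / (10 + 10n) = 1 / (n + 1).
    """
    return 10 / (10 + 10 * n)

for n in (1, 10, 100, 10_000):
    print(n, chance_of_helping_alice(n))
# Alice's chance: 0.5, then roughly 0.091, 0.0099, 0.0001, ...
```

As n grows, Alice's chance of being helped (and with it the chance of reducing the 'total pain') becomes arbitrarily small, which is precisely the trivialization worried about here.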

So perhaps we ought to outright save the person who has the most to suffer. But this conclusion doesn’t seem right either in a trade-off situation involving him and one other person who has just a little less to suffer, but still a whole lot. In such a situation, it intuitively seems that we should give one a slightly higher chance of being saved than the other, just as it intuitively seems that we should give each an equal chance of being saved in a trade-off situation where they each have the same amount to suffer.

I also have an intuition against utilitarianism. So if we use intuitions as our guide, it seems to leave us nowhere. Maybe one or more of these intuitions can be “evolutionarily debunked”, sparing one of the three approaches, but I don’t really have an idea of how that would go.

Indeed, for another example:

• Say a child (child A) is about to be tortured for the rest of their life, which you can prevent for £2.

• However another child (child B) has just dropped their ice cream, which has slightly upset them (although not much, they are just a little sad). You could buy them another ice cream for £2, which would cheer them up.

You only have £2, so you can only help one of the children. Under your system you would give some (admittedly (hopefully!) very small) chance that you would help child B. However in the case that you rolled your 3^^^3-sided die and it came up in favour of B, as you started walking over to the ice cream van it seems like it would be hard to say you were acting in accordance with "reason and empathy".

I had anticipated this objection when I wrote my post. In footnote 4, I wrote:

“Notice that with certain types of pain episodes, such as a torture episode vs a minor headache, there is such a big gap in amount of suffering between them that any clear-headed person in the world would rather endure an infinite number of minor headaches (i.e. live with very frequent minor headaches in an immortal life) than to endure the torture episode. This would explain why in a choice situation in which we can either save a person from torture or x number of persons from a minor headache (or 1 person from x minor headaches), we would just save the person who would be tortured rather than give the other(s) even the slightest chance of being helped. And I think this accords with our intuition well.”

Admittedly, there are two potential problems with what I say in my footnote.

1) It’s not clear that any clear-headed person would do as I say, since it seems possible that the what-it’s-like-of-going-through-infinite-minor-headaches can be experientially worse than the what-it’s-like-of-going-through-a-torture-session.

2) Even if any clear-headed person would do as I say, it’s not clear that this can yield the result that we should outright save the one person from torture. It depends on how the math works out, and I’m terrible at math lol. Does 1/infinity = 0? If so, then it seems we ought to give the person who would suffer the minor headache a 0% chance (i.e. we ought to outright save the other person from torture).
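For what it is worth, the math here can be made precise with a limit rather than a literal "1/infinity" (a hedged aside; $h$ and $T$ are hypothetical badness values for the minor headache and the torture episode):

$$\lim_{T \to \infty} \frac{h}{h + T} = 0$$

So if the torture's badness exceeds the headache's without bound, the proportional-chance rule does assign the headache sufferer a chance tending to 0%, i.e. in the limit we would outright save the person facing torture.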

But the biggest problem is that even if what I say in my footnote can adequately address this objection, it cannot adequately address your previous objection. This is because in your previous example concerning Alice, I think she should have a high chance of being saved (e.g. around 90%) no matter how big n is, and what I say in footnote 4 cannot help me get that result.

All in all, your previous objection shows that my own approach leads to a result that I cannot accept. Thanks for that (haha). However, I should note that it doesn’t make the utilitarian view more plausible to me because, as I said, your previous objection is very much in the spirit of my own objection against utilitarianism.

I wonder if dropping the idea that we should give each person a chance of being saved proportional to his/her suffering requires dropping the idea that who suffers matters... I used the latter idea to justify the former idea, but maybe the latter idea can also be used to justify something weaker - something more acceptable to me... (although I feel doubtful about this).

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-04-13T23:51:40.349Z · EA · GW

I certainly did not mean to cause confusion, and I apologize for wasting any of your time that you spent trying to make sense of things.

By "you switched", do you mean that in my response to Objection 1, I gave the impression that only experience matters to me, such that when I mentioned in my response to Objection 2 that who suffers matters to me too, it seems like I've switched?

And thanks, I have fixed the broken quote. Btw, do you know how to italicize words?

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-04-13T23:43:01.030Z · EA · GW

Thanks for the exposition. I see the argument now.

You're saying that, if we determined "total pain" by my preferred approach, then all possible actions will certainly result in states of affairs in which the total pains are uniformly high, with the only difference between the states of affairs being the identity of those who suffer it.

I've since made clear to you that who suffers matters to me too, so if the above is right, then according to my moral theory, what we ought to do is assign an equal chance to any possible action we could take, since each possible action gives rise to the same total pain, just suffered by different individuals.

Your argument would continue: Any moral theory that gave this absurd recommendation cannot be correct. Since the root of the absurdity is my preferred approach to determining total pain, that approach to determining total pain must be problematic too.

My response:

JanBrauner, if I remember correctly, was talking about extreme unpredictability, but your argument doesn't seem to be based on unpredictability. If A1 and A2 are true, then each possible action more-or-less seems to inevitably result in a different person suffering maximal pain.

Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).

I think there is no more absurdity to assigning each possible action an equal chance (assuming A1 and A2 are true) than there is in, say, flipping a coin between saving a million people on one island from being burned alive and saving one other person on another island from being burned alive. Since I don't find the latter absurd at all (keeping in mind that none of the million will suffer anything worse than the one, i.e. that the one would suffer no less than any one of the million), I would not find the former absurd either. Indeed, giving each person an equal chance of being saved from being burned alive seems to me like the right thing to do given that each person has the same amount to suffer. So I would feel similarly about assigning each possible action an equal chance (assuming A1 and A2 are true).

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-04-13T22:01:35.715Z · EA · GW

So you're suggesting that most people aggregate different people's experiences as follows:

FYI, I have since reworded this as "So you're suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:"

I think it is a more precise formulation. In any case, we're on the same page.

Basically I think sentences like:

"I don't think what we ought to do is to OUTRIGHT prevent the morally worse case"

are sufficiently far from standard usage (at least in EA circles) that you should flag up that you are using 'morally worse' in a nonstandard way (and possibly use a different term). I have the intuition that if you say "X is the morally relevant factor" then which actions you say are right will depend solely on how they affect X.

The way I phrased Objection 1 was as follows: "One might reply that two instances of suffering is morally worse than one instance of the same kind of suffering and that we should prevent the morally worse case (e.g., the two instances of suffering), so we should help Amy and Susie."

Notice that this objection in argument form is as follows:

P1) Two people suffering a given pain is morally worse than one other person suffering the given pain.

P2) We ought to prevent the morally worst case.

C) Therefore, we should help Amy and Susie over Bob.

My argument with kbog concerns P1). As I mentioned, one basic premise that kbog and I have been working with is this: If two people suffering involves more pain than one person suffering, then two people suffering is morally worse than (i.e. twice as morally bad as) one person suffering.

Given this premise, I've been arguing that two people suffering a given pain does not involve more pain than one person suffering the given pain, and thus P1) is false. And kbog has been arguing that two people suffering a given pain does involve more pain than one person suffering the given pain, and thus P1) is true. Of course, both of us are right on our respective preferred sense of "involves more pain than". So I recently started arguing that my sense is the sense that really matters.

Anyways, notice that P2) has not been debated. I understand that consequentialists would accept P2). But for other moral theorists, they would not because not all things that they take to matter (i.e. to be morally relevant, to have moral value, etc) can be baked into/captured by the moral worseness/goodness of a state of affairs. Thus, it seems natural for them to talk of side constraints, etc. For me, two things matter: experience matters, and who suffers it matters. I think the latter morally relevant thing is best captured as a side constraint.

However, you are right that I should make this aspect of my work more clear.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-04-13T19:58:31.261Z · EA · GW

Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)

I think thinking in terms of 'total pain' is not normally how this is approached; instead one thinks about converting each person's experience into 'utility' (or 'moral badness' etc.) on a personal level, but then aggregates all the different personal utilities into a global figure. I don't know if you find this formulation more intuitively acceptable (it in some sense feels like it respects your reason for caring about pain more).

So you're suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:

  1. Assign a moral value to each person's experiences based on their overall what-it's-like. For example, if someone is to experience 5 headaches, we are to assign a single moral value to his 5 headaches based on how experientially bad the what-it's-like-of-going-through-5-headaches is. If going through 5 such headaches is about experientially as bad as going through 1 major headache, then we would assign the same moral value to someone's 5 minor headaches as we would to someone else's 1 major headache.

  2. We then add up the moral value assigned to each person's experiences to get a global moral value, and compare this moral value to the other global values corresponding to the other states of affairs we could bring about.
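In code, a minimal sketch of this two-step procedure might look as follows. The per-person badness numbers are hypothetical (they follow the 2-for-a-minor-headache, 6-for-a-major-headache convention used elsewhere in this thread), and the function name is mine, not the commenter's.

```python
def global_moral_value(per_person_badness):
    """Step 1 is assumed done: each person's overall what-it's-like has
    already been assigned a single moral (dis)value. Step 2 sums these
    personal values into one global figure for the state of affairs."""
    return sum(per_person_badness)

# Five people with one minor headache each (badness 2 apiece)
# versus one person with a major headache (badness 6):
print(global_moral_value([2, 2, 2, 2, 2]))  # -> 10
print(global_moral_value([6]))              # -> 6
```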

This approach reminds me of trade-off situations that involve saving lives instead of saving people from suffering. For example, suppose we can either save Amy's and Susie's life or Bob's life, but we cannot save all. Who do we save? Most people would reason that we should save Amy's and Susie's life because each life is assigned a certain positive moral value, so 2 lives have twice the moral value of 1 life. I purposely avoided talking about trade-off situations involving saving lives because I don't think a life has moral value in itself, yet I anticipated that people would appeal to life having some sort of positive moral value in itself and I didn't want to spend time arguing about that. In any case, if life does have positive moral value in itself, then I think it makes sense to add those values just as it makes sense to add the dollar values of different merchandise. This would result in Amy's and Susie's deaths being a morally worse thing than Bob's death, and so I would at least agree that what we ought to do in this case wouldn't be to give everyone a 50% chance.

In any case, if we assign a moral value to each person's experience in the same way that we might assign a moral value to each person's life, then I can see how people reach the conclusion that more people suffering a given pain is morally worse than fewer people suffering the given pain (even if the fewer are other people). Moreover, given step 1., I agree that this approach, at least prima facie, respects [the fact that pain matters solely because of how it FEELS] more than the approach that I've attributed to kbog. (I added the "[...]" to make the sentence structure more clear.) As such, this is an interesting approach that I would need to think more about, so thanks for bringing it up. But, even granting this approach, I don't think what we ought to do is to OUTRIGHT prevent the morally worse case; rather we ought to give a higher chance to preventing the morally worse case proportional to how much morally worse it is than the other case. I will say more about this below.

Then I am really not sure at all what you are meaning by 'morally worse' (or 'right'!). In light of this, I am now completely unsure of what you have been arguing the entire time.

Please don't be alarmed (haha). I assume you're aware that there are other moral theories that recognize the moral value of experience (just as utilitarianism does), but also recognize other side constraints such that, on these moral theories, the right thing to do is not always to OUTRIGHT prevent the morally worst consequence. For example, if a side constraint is true of some situation, then the right thing to do would not be to prevent the morally worst consequence if doing so violates the side constraint. That is why these moral theories are not consequentialist.

You can think of my moral position as like one of these non-consequentialist theories. The one and only side constraint that I recognize is captured by the fact that who suffers matters. Interestingly, this side constraint arises from the fact that experience matters, so it is closer to utilitarianism than other moral theories in this respect. Here's an example of the side constraint in action: Suppose I can either save 100 people from a minor headache or 1 other person from a major headache. Going by my sense of "more pain" (i.e. my way of quantifying and comparing pains), the case of the single person suffering the major headache is morally worse than that of the 100 people each suffering a minor headache, because his major headache is experientially worse than any of the other people's minor headaches. But in this case, I would not think the right thing to do is to OUTRIGHT save the person with the major headache (even though his suffering is the morally worse case). I would think that the right thing to do is to give him a higher chance of being saved proportional to how much worse his suffering is, experientially speaking, than any one of the others' (i.e. how much morally worse his suffering is relative to the 100's suffering).

Similarly, if we adopted the approach you outlined above, maybe the 100 people each suffering a minor headache would be the morally worse case. If so, given the side constraint, I would still similarly think that it would not be right to OUTRIGHT save the 100 from their minor headaches. I would again think that the right thing to do would be to give the 100 people a higher chance of being saved proportional to how much morally worse their suffering is relative to the single person's suffering.

I hope that helps.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-04-12T22:37:12.772Z · EA · GW

Hey Alex,

Thanks again for taking the time to read my conversation with kbog and replying. I have a few thoughts in response:

(Indeed I think many people here would explicitly embrace the assumption that is your P3 in your second reply to kbog, typically framed as 'two people experiencing the same pain is twice as bad as one person experiencing that pain' (there is some change from discussing 'total pain' to 'badness' here, but I think it still fits with our usage).)

When you say that many people here would embrace the assumption that "two people experiencing the same pain is twice as bad as one person experiencing that pain", are you using "bad" to mean "morally bad?"

I ask because I would agree if you meant morally bad IF the single person was a subset of the two people. For example, I would agree that Amy and Susie each suffering is twice as morally bad as just Amy suffering. However, I would not agree IF the single person was not a subset of the two (e.g., if the single person was Bob). If the single person was Bob, I would think the two cases are morally just as bad.

Now, one basic premise that kbog and I have been working with is this: If two people suffering involves more pain than one person suffering, then two people suffering is morally worse than (i.e. twice as morally bad as) one person suffering.

However, based on my preferred sense of "more pain", two people suffering involves the same amount of pain as one person suffering, irrespective of whether the single person is a subset or not.

Therefore, you might wonder how I am able to arrive at the different opinions above. More specifically, if I think Amy and Susie each suffering involves the same amount of pain as just Amy suffering, shouldn't I be committed to saying that the former is morally just as bad as the latter, rather than twice as morally bad (which is what I want to say)?

I don't think so. I think the Pareto principle provides an adequate reason for taking Amy and Susie each suffering to be morally worse than just Amy's suffering. As Otsuka (a philosopher at Harvard) puts it, the Pareto principle states that “One distribution of benefits over a population is strictly Pareto superior to another distribution of benefits over that same population just in case (i) at least one person is better off under the former distribution than she would be under the latter and (ii) nobody is worse off under the former than she would be under the latter." Since just Amy suffering (i.e. Susie not suffering) is Pareto superior to Amy and Susie each suffering, therefore just Amy suffering is morally better than Amy and Susie each suffering. In other words, Amy and Susie each suffering is morally worse than just Amy suffering. Notice, however, that if the single person were Bob, condition (ii) would not be satisfied because Bob would be made worse off. The Pareto principle is based on the appealing idea that we shouldn't begrudge another person an improvement that costs us nothing. Amy shouldn't begrudge Susie an improvement that costs her nothing.
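For concreteness, here is a minimal sketch of the strict Pareto comparison in the Otsuka quote. The representation of a distribution as a mapping from people to benefit levels, and all names and numbers, are illustrative assumptions.

```python
def strictly_pareto_superior(dist_a, dist_b):
    """dist_a is strictly Pareto superior to dist_b just in case
    (i) at least one person is better off under dist_a, and
    (ii) nobody is worse off under dist_a."""
    someone_better = any(dist_a[p] > dist_b[p] for p in dist_b)
    nobody_worse = all(dist_a[p] >= dist_b[p] for p in dist_b)
    return someone_better and nobody_worse

# Hypothetical benefit levels: 0 = suffering, 1 = not suffering.
just_amy_suffers = {"Amy": 0, "Susie": 1, "Bob": 1}
amy_and_susie_suffer = {"Amy": 0, "Susie": 0, "Bob": 1}
just_bob_suffers = {"Amy": 1, "Susie": 1, "Bob": 0}

print(strictly_pareto_superior(just_amy_suffers, amy_and_susie_suffer))  # True
print(strictly_pareto_superior(just_bob_suffers, amy_and_susie_suffer))  # False
```

The second call returning False mirrors the point about condition (ii): trading Amy's and Susie's suffering for Bob's fails the Pareto test because Bob is made worse off.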

Anyways, I just wanted to make that aspect of my thinking clear. So I would agree with you that more people suffering is morally worse than fewer people suffering as long as the smaller group of people is a subset of the larger group, due to the Pareto principle. But I would not agree with you that more people suffering is morally worse than fewer people suffering if those fewer people are not a subset of the larger group, since the Pareto principle is not a basis for it, nor is there more pain in the former case than the latter case on my preferred sense of "more pain". And since I think my preferred sense of "more pain" is the one that ultimately matters because it respects the fact that pain matters solely because of how it feels, I think others should agree with me.

A couple of brief points in favour of the classical approach: It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem).

I'm not sure I see the advantage here, or what the alleged advantage is. I don't see why my view commits me to pay any attention towards people who I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.

As discussed in other comments, it also has other pleasing properties, such as the veil of ignorance.

The veil of ignorance approach at minimum supports a policy of helping the greater number (given the stipulation that each person has an equal chance of occupying anyone's position). However, as I argued, this stipulation is not true OF the real world because each of us didn't actually have an equal chance of being in any of our position, and what we should do should be based on the facts, and not on a stipulation. In kbog's latest reply to me regarding the veil of ignorance, he seems to argue that the stipulation should determine what we ought to do (irrespective of whether it is true in the actual world) because "The reason we look at what they would agree to from behind the veil of ignorance as opposed to outside is that it ensures that they give equal consideration to everyone, which is a basic principle that appeals to us as a cornerstone of any decent moral system." I have yet to respond to this latest reply because I have been too busy arguing about our senses of "more pain", but if I were to respond, I would say this: "I agree that we should give equal consideration to everyone, which is why I believe we should give each person a chance of being helped proportional to the suffering they face. The only difference is that this is giving equal consideration to everyone in a way that respects the facts of the world." Anyways, I don't want to say too much here, because kbog might not see it and it wouldn't be fair if you only heard my side. I'll respond to kbog's reply eventually (haha) and you can follow the discussion there if you wish.

Let me just add one thing: Based on Singer's introduction to utilitarianism, Harsanyi argued that the veil of ignorance also entails a form of utilitarianism on which we ought to maximize average utility, as opposed to Rawls' claim that it entails giving priority to the worst off. If this is right, then the veil of ignorance approach doesn't support classical utilitarianism, which says we ought to maximize total utility, not average utility.

One could imagine others who also disagreed with Comparability, but thought the appropriate solution was to always help the person suffering the most, and not care at all about anyone else.

Yes, they could, but I also argued that who suffers matters in my response to Objection 2, and to simply help the person suffering the most is to ignore this fact. Thus, even if one person suffering a lot is experientially worse (and thus morally worse) than many others each suffering something less, I believe we should give the others some chance of being helped. That is to say, in light of the fact that who suffers matters, I believe it is not always right to prevent the morally worse case.

To take things to the opposite extreme, someone could also deny Comparability but think that the most important thing was minimizing the number of people suffering at all and not take into account intensity whatsoever (although they would likely justify rejecting Comparability on different grounds to you).

While this is a possible position to hold, it is not a plausible one, because it effectively entails that the numbers matter in themselves. That is, such a person thinks he should save the many over one other person not because he thinks the many suffering involves more pain than the one suffering (for he denies that a non-purely experientially determined amount of pain can be compared with a purely experientially determined amount of pain). Rather, he thinks he should save the many solely because they are many. But it is hard to see how numbers could matter in themselves.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-04-12T00:41:53.334Z · EA · GW

Hey kbog, if you don't mind, let's ignore my example with the 5000 pains because I think my argument can more clearly be made in terms of my toothache example since I have already laid a foundation for it. Let me restate that foundation and then state my argument in terms of my toothache example. Thanks for bearing with me.

The foundation:

Suppose 5 minor toothaches had by one person is experientially just as bad as 1 major toothache had by one person.

Given the supposition, you would claim: 5 minor toothaches spread among 5 people involves the same amount of pain as 1 major toothache had by one person.

Let me explain what I think is your reasoning step by step:

P1) 5 minor toothaches had by one person and 1 major toothache had by one person give rise to two different what-it's-likes that are nevertheless experientially JUST AS BAD. (By above supposition) (The two different what-it's-likes are: the what-it's-like-of-going-through-5-minor-toothaches and the what-it's-like-of-going-through-1-major-toothache.)

P2) Therefore, we are entitled to say that 5 minor toothaches had by one person is equivalent to 1 major toothache had by one person. (By P1)

P3) 5 minor toothaches spread among 5 people is 5 minor toothaches, just as 5 minor toothaches had by one person is 5 minor toothaches, so there is the same quantity of minor toothaches (or same quantity of base units of pain) in both cases. (Self-evident)

P4) Therefore, we are entitled to say that 5 minor toothaches spread among 5 people is equivalent to 5 minor toothaches had by one person. (By P3)

P5) Therefore, we are entitled to claim that 5 minor toothaches spread among 5 people is equivalent to 1 major toothache had by one person. (By P2 and P4)

C) Therefore, 5 minor toothaches spread among 5 people involves the same amount of pain as 1 major toothache had by one person. (By P5)

As the illustrated reasoning shows, 5 minor toothaches spread among 5 people involves the same amount of pain as 1 major toothache had by one person (i.e. C) only if 5 minor toothaches had by ONE person is equivalent to 1 major toothache (i.e. P2). You agree with this.

Moreover, as the illustrated reasoning also shows, the reason why 5 minor toothaches had by one person is equivalent to 1 major toothache (i.e. P2) is because they give rise to two different what-it's-likes that are nevertheless experientially just as bad (i.e. P1). I presume you agree with this too. Call this reason "Reason E" (E for "experientially just as bad").

Furthermore, as the illustrated reasoning shows, the reason why 5 minor toothaches spread among 5 people is equivalent to 5 minor toothaches had by one person is DIFFERENT from the reason for why 5 minor toothaches had by one person is equivalent to 1 major toothache had by one person. That is, 5 minor toothaches spread among 5 people is equivalent to 5 minor toothaches had by one person (i.e. P4) because they share the same quantity of base units of pain, namely 5, irrespective of how the 5 base units of pain are spread (i.e. P3), and NOT because they give rise to two what-it's-likes that are experientially just as bad (as they clearly don't). Call this reason (i.e. P3) "Reason S" (S for "same quantity of base units of pain").

Argument:

So there are these two different types of reasons underlying your equivalence claims (I will use "=" to signify "is equivalent to"):

5 MiTs/5 people = 5 MiTs/1 person (by Reason S)

5 MiTs/1 person = 1 MaT/1 person (by Reason E)

Now, never mind the transitivity problem that Reasons S and E create for your reasoning. Indeed, that's not the problem I want to raise for your sense of "involves more pain."

The problem with your sense of "involves more pain" is that it admits of Reason S as a basis for saying X involves more pain than Y. But Reason S, unlike Reason E, is against the spirit of why we take pain to matter. We take pain to matter because of the badness of how it feels, as you rightly claim. But Reason S doesn't give a crap about how bad the pains on the two sides of the equation FEEL; it doesn't care that 5 MiTs/1 person constitutes a pain that feels a whole lot worse than anything on the other side of the equation. It just cares about how many base units of pain there are on each side. And, obviously, more base units of pain does not mean there is experientially worse pain, precisely because the base units of pain can be spread out among many different people.

Maybe you think that no amount of minor pains can ever be equally important as one excruciating pain.

This is an interesting question. Perhaps the what-it's-like-of-going-through-an-INFINITE-number-of-a-very-mild-sort-of-pain cannot be experientially worse than the what-it's-like-of-suffering-one-instance-of-third-degree-burns. If so, then I would think that 1 third-degree burn/1 person is morally worse than infinite mild pains/1 person. In any case, I don't think what I think here is relevant to my argument against your utilitarian sense of "involves more pain than".

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-04-10T21:14:32.725Z · EA · GW

Hey Alex,

Thanks for your reply. I can understand why you'd be extremely confused because I think I was in error to deny the intelligibility of the utilitarian sense of "more pain".

I have recently replied to kbog acknowledging this mistake, outlining how I understand the utilitarian sense of "more pain", and then presenting an argument for why my sense of "more pain" is the one that really matters.

I'd be interested to know what you think.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-04-10T21:07:27.261Z · EA · GW

Hi kbog,

Sorry for taking a while to get back to you – life got in the way... Fortunately, the additional time made me realize that I was the one who was confused, as I now see very clearly the utilitarian sense of “involves more pain than” that you have been in favor of.

Where this leaves us is with two senses of “involves more pain than” and with the question of which of the two senses is the one that really matters. In this reply, I outline the two senses and then argue for why the sense that I have been in favor of is the one that really matters.

The two senses:

Suppose, for purposes of illustration, that a person who experiences 5 minor toothaches is experientially just as badly off as someone who experiences a major toothache. This supposition, of course, makes use of my sense of “involves more pain than” – the sense that analyzes “involves more pain than” as “is experientially worse than”. This sense compares two what-it’s-likes (e.g., the what-it’s-like-of-going-through-5-minor-toothaches vs the what-it’s-like-of-going-through-a-major-toothache) and compares them with respect to their what-it’s-like-ness – their feel. On this sense, 5 minor toothaches all had by one person involves the same amount of pain as 1 major toothache had by one person in that the former is experientially just as bad as the latter.

On your sense (though not on mine), if these 5 minor toothaches were spread across 5 people, they would still involve the same amount of pain as 1 major toothache had by one person. This is because having 1 major toothache is experientially just as bad as having 5 minor toothaches (i.e. using my sense), which entitles one to claim that the 1 major toothache is equivalent to 5 minor toothaches, since they give rise to distinct what-it’s-likes that are nevertheless experientially just as bad. At this point, it’s helpful to stipulate that one minor toothache = one base unit of pain. That is, let’s suppose that the what-it’s-like-of-going-through-one-minor-toothache is experientially as bad as any of the least experientially bad experience(s) possible. Now, since there are in effect 5 base units of pain in both cases, the cases involve the same amount of pain (in your sense). It is irrelevant that the 5 base units of pain are spread among 5 people in one case. This is because it is irrelevant how those 5 base units of pain feel when experienced together since we are not comparing the cases with respect to their what-it’s-like-ness – their feel. Rather, we are comparing the cases with respect to their quantity of the base unit of pain.

Which is the sense that really matters?

I believe the sense I am in favor of is the one that really matters, and that this becomes clear when we remind ourselves why we take pain to matter in the first place.

We take pain to matter because of its negative felt character – because of how it feels. I argue that we should favor my sense of “involves more pain than” because it fully respects this fact, whereas the sense you’re in favor of goes against the spirit of this fact.

According to your sense, 5 minor toothaches spread among 5 people involves the same amount of pain as one major toothache had by one person. But doesn't this clearly go against the spirit of the fact that pain matters solely because of how it feels? None of the 5 people feels anything remotely that bad. There is simply no experience of anything remotely that bad on their side of the equation. They each feel a very mild pain – unpleasant enough to be perceived as experientially bad, but that’s it. That’s the worst what-it’s-like on their side of the equation. Yet a bundle of 5 of these mild what-it’s-likes somehow involves the same amount of pain as one major toothache. That can only be acceptable if the felt character of the major toothache (and of pain in general) is not as important to you as the sheer quantity of very mild pains (i.e. of base units of pain). But this is against the spirit of why pain matters.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-31T02:09:01.903Z · EA · GW

Only my response to Objection 1 is more or less directed to the utilitarian. My response to Objection 2 is meant to defend against other justifications for saving the greater number, such as leximin or cancelling strategies. In any case, I think most EAs (even the non-utilitarians) will appeal to utilitarian reasoning to justify saving the greater number, so addressing utilitarian reasoning is important.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-31T01:55:42.982Z · EA · GW

Hey Alex, thanks for your comment!

I didn't know what the source of my disagreement with EAs would be, so I hope you can understand why I couldn't structure my post in a way that would have already taken into account all the subsequent discussions. But thanks for your suggestion. I may write another post with a much simpler structure if my discussion with kbog reaches a point where either I realize I'm wrong or he realizes he's wrong. If I'm wrong, I hope to realize it asap.

Also, I agree with kbog. I think it's much likelier that one of us is just confused. Either kbog is right that there is an intelligible sense in which 5 minor headaches spread among 5 people can involve more pain than 1 major headache had by one person or he is not.

After figuring that out, there is the question of which sense of "involves more pain than" is more morally important: is it the "is experientially worse than" sense or kbog's sense? Perhaps that comes down to intuitions.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-31T01:38:58.946Z · EA · GW

Hi bejaq,

Thanks for your thoughtful comment. I think your first paragraph captures well why I think who suffers matters. The connection between suffering and who suffers it is too strong for the former to matter and for the latter not to. Necessarily, pain is pain for someone, and ONLY for that someone. So it seems odd for pain to matter, yet for it not to matter who suffers it.

I would also certainly agree that there are pragmatic considerations that push us towards helping the larger group outright, rather than giving the smaller group a chance.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-31T00:56:11.382Z · EA · GW

hey kbog, I didn't anticipate you would respond so quickly... I was editing my reply while you replied... Sorry about that. Anyways, I'm going to spend the next few days slowly re-reading and sitting on your past few replies in an all-out effort to understand your point of view. I hope you can do the same with just my latest reply (which I've edited). I think it needs to be read to the end for the full argument to come through.

Also, just to be clear, my goal here isn't to change your mind. My goal is just to get closer to the truth as cheesy as that might sound. If I'm the one in error, I'd be happy to admit it as soon as I realize it. Hopefully a few days of dwelling will help. Cheers.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-30T21:17:49.297Z · EA · GW

You'll need to read to the very end of this reply before my argument seems complete.

In both cases I evaluate the quality of the experience multiplied by the number of subjects. It's the same aspect for both cases. You're just confused by the fact that, in one of the cases but not the other, the resulting quantity happens to be the same as the number provided by your "purely experiential sense".

Case 1: 5 minor headaches spread among 5 people

Case 2: 1 major headache had by one person

Yes, I understand that in each case, you are multiplying a certain amount of pain (determined solely by how badly something feels) by the number of instances to get a total amount of pain (determined via this multiplication), and then you are comparing the total amount of pain in each case.

For example, in Case 1, you are multiplying the amount of pain of a minor headache (determined solely by how badly a minor headache feels) by the number of instances to get a total amount of pain (determined via this multiplication). Say each minor headache feels like a 2, then 2 x 5 = 10. Call this 10 “10A”.

Similarly, in Case 2, you are multiplying the amount of pain of a major headache (determined solely by how badly a major headache feels) by the number of instances, in this case just 1, to get a total amount of pain (determined via this multiplication). Say the major headache feels like a 6, then 6 x 1 = 6. Call this latter 6 “6A”.

You then compare the 10A with the 6A. Moreover, since the amounts of pain represented by 10A and 6A are both gotten by multiplying one dimension (i.e. amount of pain, determined purely experientially) by another dimension (instances), you claim that you are comparing things along the same dimension, namely, A. But this is problematic.

To see the problem, consider

Case 3: 5 minor headaches all had by 1 person.

Here, like in Case 1, we can multiply the amount of pain of a minor headache (determined purely experientially) by the number of instances to get a total amount of pain (determined via this multiplication). 2 x 5 = 10. This 10 is the 10A sort.

OR, unlike in Case 1, we can determine the final amount of pain not by multiplying those things, but instead in the same way we determine the amount of pain of a single minor headache, namely, by considering how badly the 5 minor headaches feel. We can consider how badly the what-it's-like-of-going-through-5-minor-headaches feels. It feels like a 10, just as a minor headache feels like a 2, and a major headache feels like a 6. Call these 10E, 2E and 6E respectively. The ‘E’ signifies that the numbers were determined purely experientially.
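Since this bookkeeping is easy to lose track of in prose, here is a minimal sketch of it in Python (the scores 2, 6 and 10 are just the illustrative values used above, and the names are mine):

    # Illustrative experiential scores (the E-numbers), determined purely
    # by how badly something feels.
    MINOR_E = 2         # one minor headache (2E)
    MAJOR_E = 6         # one major headache (6E)
    FIVE_IN_ONE_E = 10  # the what-it's-like-of-going-through-5-minor-headaches (10E)

    def a_number(per_episode_e, instances):
        # An A-number: an experiential score multiplied by a count of instances.
        return per_episode_e * instances

    case_1 = a_number(MINOR_E, 5)  # 5 minor headaches spread among 5 people -> 10A
    case_2 = a_number(MAJOR_E, 1)  # 1 major headache had by one person -> 6A
    case_3 = a_number(MINOR_E, 5)  # 5 minor headaches all had by 1 person -> 10A

    # case_3's A-number happens to coincide with the purely experiential 10E,
    # whereas case_1's A-number corresponds to no experiential score at all.
    print(case_1, case_2, case_3, FIVE_IN_ONE_E)  # 10 6 10 10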

Ok. I'm sure you already understand all that. Now here's the problem.

You insist that there is no problem with comparing 10A and 6A. After all, they are both determined in the same way: multiplying an experience by its instances.

I am saying there is a problem with that. The problem is that saying 10A is more than 6A makes no sense. Why not? Because, importantly, what goes into determining the 10A and 6A are 2E and 6E respectively: 2E x 5 = 10A. 6E x 1 = 6A. So what?

Well think about it. 2E x 5 instances is really just 2E, 2E, 2E, 2E, 2E.

And 6E x 1 instance is really just 6E.

So when you assert 10A is more than 6A, you are really just asserting that (2E, 2E, 2E, 2E, 2E) is more than 6E.

But then notice that, at bottom, you are still working with the dimension of experience (E) - the dimension of how badly something feels. The problem for you, then, is that the only intelligible form of comparison on this dimension is the "is experientially more bad than" (i.e. is experientially worse than) comparison.

(Of course, there is also the dimension of instances, and an intelligible form of comparison on this dimension is the “is more in instances than” comparison. For example, you can say 5 minor headaches is more in instances than 1 major headache (i.e. 5 > 1). But obviously, the comparison we care about is not merely a comparison of instances.)

Analogously, when you are working with the dimension of weight (the dimension of how much something weighs), the only intelligible form of comparison is "weighs more than".

Now, you keep insisting that there is an analogy between

1) your way of comparing the amounts of pain of various pain episodes (e.g. 5 minor headaches vs 1 major headache), and

2) how we normally compare the weights of various things (e.g. 5 small oranges vs 1 big orange).

For example, you say,

No, I am effectively saying that the weight of five oranges is more than the weight of one orange.

So let me explain why they are DIS-analogous. Consider the following example:

Case 1: Five small oranges, 2lbs each. (Just like 5 minor headaches, each feeling like a 2).

Case 2: One big orange, 6lbs. (Just like 1 major headache that feels like a 6).

Now, just as the 2 of a minor headache is determined by how badly it feels, the 2 of a small orange is determined by how much it weighs. So just as we write 2E x 5 = 10A, we can similarly write 2W x 5 = 10A. And just as we write 6E x 1 = 6A, we can similarly write 6W x 1 = 6A.

Now, if you assert that (the total amount of weight represented by) 10A is more than 6A, I would have NO problem with that. Why not? Because the comparison "is more than" still occurs on the dimension of weight (W). You are saying 5 small oranges WEIGH more than 1 big orange. The comparison thus occurs on the SAME dimension that was used to determine the numbers 2 and 6 (numbers that in turn determined 10A and 6A): A small orange was determined to be 2 by how much it WEIGHED. Likewise with the big orange. And when you say 10A is more than 6A, the comparison is still made on that dimension.

By contrast, when you assert that (the total amount of pain represented by) 10A is more than 6A, the "is more than" does not occur on the dimension of experience anymore. It does not occur on the dimension of how badly something feels anymore. You are not saying that 5 minor headaches spread among 5 people is EXPERIENTIALLY WORSE than 1 major headache had by 1 person. You are saying something else. In other words, the comparison does NOT occur on the same dimension that was used to determine the numbers 2 and 6 (numbers that in turn determined 10A and 6A): A minor headache was determined to be 2 by how EXPERIENTIALLY BAD IT FELT. Likewise with the major headache. Yet, when you say 10A is more than 6A, you are not making a comparison on that dimension anymore.

So I hope you see how your way of comparing the amounts of pain between various pain episodes is disanalogous to how we normally compare the weights between various things.

Now, just as the dimension of weight (i.e. how much something weighs) and the dimension of instances (i.e. how many instances there are) do not combine to form some substantive third dimension on which to compare 5 small oranges with a big orange, the dimension of experience (i.e. how badly something feels) and the dimension of instances do not combine to form some substantive third dimension on which to compare 5 minor headaches spread among 5 people and 1 major headache had by one person. At best, they combine to form a trivial third dimension consisting in their collection/conjunction, on which one can intelligibly compare, say, 32 minor headaches with 23 minor headaches, irrespective of how the 32 and 23 minor headaches are spread. This trivial dimension is the dimension of "how many instances (i.e. how much) of a certain pain there is". On this dimension, 5 minor headaches spread among 5 people cannot be compared with a MAJOR headache, because they are different pains, but 5 minor headaches spread among 5 people can be compared with 5 minor headaches all had by 1 person. Moreover, the result of such a comparison would be that they are the same on this dimension (as I allowed in an earlier reply). But this is a small victory given that this dimension won't allow any comparisons between different pains (e.g. 5 minor headaches and a major headache).

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-29T18:18:23.990Z · EA · GW

Just because two things are different doesn't mean they are incommensurate.

But I didn't say that. As long as two different things share certain aspects/dimensions (e.g. the aspect of weight, the aspect of nutrition, etc...), then of course they can be compared on those dimensions (e.g. the weight of an orange is more than the weight of an apple, i.e., an orange weighs more than an apple).

So I don't deny that two different things that share many aspects/dimensions may be compared in many ways. But that's not the problem.

The problem is that when you say that the amount of pain involved in 5 minor headaches spread among 5 people is more than the amount of pain involved in 1 major headache (i.e., 5 minor headaches spread among 5 people involves more pain than 1 major headache), you are in effect saying something like the WEIGHT of an orange is more than the NUTRITION of an apple. This is because the former "amount of pain" is used in a non-purely experiential sense while the latter "amount of pain" is used in a purely experiential sense. When I said you are comparing apples to oranges, THIS is what I meant.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-29T00:54:11.223Z · EA · GW

The fact that they are separate doesn't mean that their content is any different from the experience of the one person. Certainly, the amount of pain they involve isn't any different.

Yes, each of the 5 minor headaches spread among the 5 people is phenomenally or qualitatively the same as each of the 5 minor headaches of the one person. The fact that the headaches are spread does not mean that any of them, in itself, feels any different from any of the 5 minor headaches of the one person. A minor headache feels like a minor headache, irrespective of who has it.

Now, each such minor headache constitutes a certain amount of pain, so 5 such minor headaches constitute five such pain contents, and in THAT sense, five times as much pain. Moreover, since there are 5 such minor headaches in each case (i.e. the 1 person case and the 5 people case), each case involves the same amount of pain. This is so even if 5 minor headaches all had by one person (i.e. the what-it's-like-of-going-through-5-minor-headaches) is experientially different from 5 minor headaches spread across 5 people (5 experientially independent what-it's-likes-of-going-through-1-minor-headache).

Analogously, a visual experience of the color orange constitutes a certain amount of orange-ish feel, so 5 such visual experiences constitute 5 such orange-ish feels, and in THAT sense, 5 times as much orange-ish feel. Suppose one person had 5 such visual experiences one right after another and we recorded them on an "experience recorder", and suppose we did the same with 5 such visual experiences spread among 5 people (where each has their visual experience one right after the other). If we then played back both recordings, the playbacks viewed from the point of view of the universe would be identical: if each visual experience was 1 minute long, then both playbacks would be 5 minutes of the same content. In this straightforward sense, 5 such visual experiences had by one person involve just as much orange-ish feel as 5 such visual experiences spread among 5 people. This is so even if the what-it's-like-of-going-through-5-such-visual-experiences is not experientially the same as 5 experientially independent what-it's-likes-of-going-through-1-such-visual-experience.

Right? I assume this is what you have in mind.

I thus understand your alternative account or sense of 'involves more pain than'. I can see how, according to it, 5 minor headaches had by 1 person involves the same amount of pain as 5 minor headaches spread among 5 people.

But again, consider 5 minor headaches spread among 5 people vs 1 major headache. Here you claim that the 5 minor headaches involve more pain than 1 major headache, and I asked you to explain in what sense. Why did I do this? Because it is clearest here how your account fails to achieve what you think it can achieve.

So let's carefully think about this for a second. Each minor headache constitutes a certain amount of pain - the amount of pain determined by how shitty it feels in absolute terms. The same is true of the major headache. Since a major headache feels a lot shittier in absolute terms, we might use '6' to represent the amount of pain it constitutes, and a '2' to represent the amount of pain a single minor headache constitutes. IMPORTANTLY, both numbers - and the amount of pain they each represent - are determined by how shitty the major headache and the minor headache respectively FEEL. (Note: As I mentioned in an earlier reply, how shitty a pain episode feels is a function of both its intensity and duration).

Ok. Now, we have 5 experientially independent minor headaches. We have 5 such pain contents, and in THAT sense, 5 times as much pain. (The duration of the playback would be 5 times as long compared to the playback of 1 minor headache.) Ok, but do we have something that we can appropriately call 10? Well, these numbers are meant to represent the amount of pain there is, and we just said that the amount of pain is determined by how shitty something feels.

The question then is: Do 5 experientially independent minor headaches somehow collectively constitute an amount of pain that feels like a 10? Clearly they don't, because only the what-it's-like-of-going-through-5-minor-headaches can plausibly feel like a 10, and 5 experientially independent what-it's-likes-of-going-through-1-minor-headache are not experientially the same as 1 what-it's-like-of-going-through-5-minor-headaches.

You might reply that 5 experientially independent minor headaches collectively constitute a 10 in that each minor headache constitutes an amount of pain represented by 2 and there are 5 such headaches. In other words, the duration of the playback is 5 times as long. There is, in that sense, 5 times the amount of pain, which is 10.

Yes, there is 5 times the amount of pain in THAT sense, which is why I would agree that 5 minor headaches all had by one person involves just as much pain as 5 minor headaches spread among 5 people in THAT sense. BUT, notice that only the number 2 is experientially determined. The 5 is not. The 5 is the number of instances of the minor headaches. As a result, the number 10 is not experientially determined. So, the number 10 simply signifies a certain amount of pain (2) repeated 5 times. It does NOT signify an amount of pain that feels like a 10.

You might not disagree. You might ask, what is the problem here? The problem is that while you can compare a 10 and a 10 that are both determined in this non-purely experiential way, which in effect is what you do to get the result that 5 minor headaches had by one person involves just as much pain as 5 minor headaches spread among 5 people, you CANNOT compare a 10 and a 6 when the 10 is determined in this non-purely experiential way and the 6 is determined in a purely experiential way. For when the numbers are determined in different ways, they signify different things, and are thus incommensurate.

I can make the same point by talking in terms of pain, rather than in terms of numbers. When you say that 5 minor headaches all had by one person involves the same amount of pain as 5 minor headaches spread among 5 people, you are USING 'amount of pain' in a non-purely experiential sense. The amount of pain, so used, is determined by a certain amount of pain used in a purely experiential sense (i.e. an amount of pain determined by how shitty a minor headache feels) x how many minor headaches there are. While you can compare two amounts of pains, so used, with each other, you cannot compare an amount of pain, so used, with a certain amount of pain used in a purely experiential sense (i.e. an amount of pain determined by how shitty a major headache feels).

Of course, how many minor headaches there are will affect the amount of pain there is (used in a purely experiential sense) when the headaches all occur in one person. For 5 minor headaches all had by one person results in the what-it's-like-of-going-through-5-minor-headaches, which feels shittier (i.e. is experientially worse) than a major headache and thus constitutes more pain than a major headache. Thus, when I say 5 minor headaches all had by one person involves an amount of pain that is more than the amount of pain of a major headache, I am using both "amount of pain" in a purely experiential sense. I am comparing apples to apples. But when you say that 5 minor headaches spread among 5 people involves an amount of pain that is more than the amount of pain of a major headache, you are using the former "amount of pain" in a non-purely experiential sense (the one I described in the previous paragraph) and the latter "amount of pain" in a purely experiential sense. You are comparing apples to oranges.
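If it helps, the apples-to-oranges point can be made vivid by explicitly marking which sense of "amount of pain" a number carries. Here is a minimal sketch (the types and names are mine, purely for illustration):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ExperientialPain:
        # An amount of pain determined purely by how shitty something feels.
        score: float

    @dataclass(frozen=True)
    class AggregatePain:
        # An experiential score multiplied by a number of instances.
        score: float

    def spread(per_episode: ExperientialPain, instances: int) -> AggregatePain:
        # The non-purely experiential sense of "amount of pain".
        return AggregatePain(per_episode.score * instances)

    five_among_five = spread(ExperientialPain(2), 5)  # 10, non-experiential
    five_in_one = spread(ExperientialPain(2), 5)      # 10, non-experiential
    major = ExperientialPain(6)                       # 6, purely experiential

    # Comparing two AggregatePain values is fine, which is why I grant that
    # the two 5-headache cases involve the same amount of pain in THAT sense:
    assert five_among_five.score == five_in_one.score

    # But "five_among_five.score > major.score" would compare numbers drawn
    # from different senses of "amount of pain" - apples to oranges.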

In this response, I've tried very hard to make clear why it is that even though your account of 'involves more pain than' can work for 5 minor headaches all had by one person vs 5 minor headaches spread across 5 people (and get the result you want: i.e. that the amount of pain in each case is the same), your account cannot work for 5 minor headaches spread across 5 people vs 1 major headache. Thus, your account cannot achieve what you think it can achieve.

I worry that I haven't been as clear as I wish to be (despite my efforts), so if any part of it comes off unclear, I hope you can be as charitable as you can and make an effort to understand what I'm saying, even if you disagree with it.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-28T03:46:43.365Z · EA · GW

1) "The point is that the subject has the same experiences as that of having one headache five times, and therefore has the same experiences as five headaches among five people."

One subject-of-experience having one headache five times = the experience of what-it's-like-of-going-through-5-headaches. (Note that the symbol is an equal sign in case it's hard to see.)

Five headaches among five people = 5 experientially independent experiences of what-it's-like-of-going-through-1-headache. (Note the 5 experiences are experientially independent of each other because each is felt by a numerically different subject-of-experience, rather than all by one subject-of-experience.)

So it is not the case that the single subject-of-experience "therefore has the same experiences as five headaches among five people."

2) "You think it should be "involves more pain for one person than". But I think it should be "involves more pain total", or in other words I take your metric, evaluate each person separately with your metric, and add up the resulting numbers."

Ok, and after adding up the numbers, what does the final resulting number refer to in reality? And in what sense does the referent (i.e. the thing referred to) involve more pain than a major headache?

Consider the case in which the 5 minor headaches are spread across 5 people, and suppose each minor headache has an overall shittiness score of 2 and a major headache has an overall shittiness score of 6. If I asked you what '2' refers to, you'd easily answer the shitty feeling characteristic of what it's like to go through a minor-headache. And you would say something analogous for '6' if I asked you what it refers to.

You then add up the five '2's and get 10. Ok, now, what does the '10' refer to? You cannot answer the shitty feeling characteristic of what it's like to go through 5 minor headaches, for this what-it's-like is not present since no individual feels all 5 headaches. The only what-it's-likes that are present are 5 experientially independent what-it's-likes-of-going-through-1-minor-headache. Ok, so what does '10' refer to? 5 of these shitty feelings? Ok, and in what sense do 5 of these shitty feelings involve more pain than 1 major headache? Clearly not in an experiential sense, for only the what-it's-like-of-going-through-5-minor-headaches is plausibly experientially worse than a major headache. So in what sense does the referent involve more pain than a major headache?

THIS IS THE CRUX OF OUR DISAGREEMENT. I CANNOT SEE HOW 5 what-it's-like-of-going-through-1-minor-headache involves more pain than 1 major headache. YES, mathematically, you can show me '10 > 6' all day long, but I don't see any reality onto which it maps!

3) "It's just plain old cardinal utility: the sum of the amount of pain experienced by each person."

Yes, but I don't see how that "sum of pain" can involve more pain than 1 major headache, because what that "sum of pain" ultimately consists of are 5 what-it's-likes-of-going-through-1-minor-pain, and NOT 1 what-it's-like-of-going-through-5-minor-pains.

4) "Why?"

Because ultimately you'll need an account of 'involves more pain than' on which 5 minor headaches spread across 5 people can involve more pain than 1 major headache. And in that situation, it is clearly the case that the 5 minor headaches are not experientially worse than the 1 major headache (for only the what-it's-like-of-going-through-5-minor-headaches can plausibly be experientially worse than 1 major headache).

My point was just that you'll need an account of 'involves more pain than' that can make sense of how 5 experientially independent what-it's-likes-of-going-through-1-minor-headache can involve more pain than 1 major headache, for my account (i.e. "is experientially worse than") certainly cannot make sense of it.

5) "It is distributed - 20% of it is in each of the 5 people who are in pain."

But when it's distributed, you won't have an overall shittiness that is shittier than the experience of 1 major headache, at least not when we understand "is shittier than" as meaning "is experientially worse than". For 5 experientially independent what-it's-likes-of-going-through-1-minor-headache are not experientially worse than 1 major headache: only the what-it's-like-of-going-through-5-minor-headaches can plausibly be experientially worse than 1 major headache.

Your task, again, is to provide a different account of 'involves more pain than' or 'shittier than' on which 5 experientially independent what-it's-likes-of-going-through-1-minor-headache can somehow involve more pain than 1 major headache.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-27T22:54:58.595Z · EA · GW

1) "But you are trying to argue about what makes one state of affairs morally worse than another. That is what you are trying to do in the first place. So it's not, and cannot be, preliminary. And if you started from the ground up then it would have contained something that carried force to utilitarians for instance.

If you disagree, try to sketch out a view (that isn't blatantly logically inconsistent) where someone would have agreed with you on Amy/Susan/Bob but disagreed on the headaches."

Arguing for what factors are morally relevant in determining whether one case is morally worse than another is preliminary to arguing that some specific case (i.e. Amy and Susie suffering) is morally just as bad as another specific case (i.e. Bob suffering). My 3 cases were only meant to do the former. From the 3 cases, I concluded:

  1. That the amount of pain is a morally relevant factor in determining whether one case is morally worse than another.

  2. That the number of instances of pain is a morally relevant factor only to the extent that it affects the amount of pain at issue (i.e. the number of instances of pain is not morally relevant in itself).

I take that to be preliminary work. Where I really dropped the ball was in my lackluster argument for P1 (and, likewise, for my assumption in case 3). No utilitarian would have found it convincing, and thus I would not have succeeded in convincing them that the outcome in which Amy and Susie both suffer is morally just as bad as the outcome in which only Bob suffers, even if they agreed with 1. and 2., which they do.

Anyways, to the extent that you think my argument for P1 sucked to the point where it was like I was begging the question against the utilitarian, I'm happy to concede this. I have since reworked my response to Objection 1 as a result, thanks to you.

2) "How is it biting a bullet to prefer to save one person being tortured AND one person with a headache, compared to simply saving one person being tortured?

I struggle to see how anyone might find that position counterintuitive. Rather, accepting the converse choice seems like biting the bullet."

Because you effectively deny the one person ANY CHANCE of being saved from torture SIMPLY BECAUSE you can prevent an additional minor headache - a very very very minor one - by helping the two. Anyways, a lot of people think that is pretty extreme. If you don't think so, that's perhaps mainly because you don't believe WHO SUFFERS MATTERS. If that's the case, then I would encourage you to reread my response to Objection 2, where I make the case that who suffers is of moral significance.

3) "Making the other choice also gives someone no chance of being saved from torture, and it also gives someone no chance of being saved from a headache, so I don't see what could possibly lead one to prefer it."

You do give each party a 50% chance of being saved by choosing to flip a coin, instead of choosing to just help one party over the other. I prefer giving a 50% chance to each party because

A) I don't think the case in which the two would suffer involves more pain than the case in which the one would (given our discussion under Michael_S's post),

B) I believe who suffers matters (given my response to Objection 2).

Even if you disagree with me on A), I think if you agreed with me on B), you would at least give the one person a 49% chance of being helped, and the other two a 51% chance of being helped.

It is true that once the coin has been flipped, one party still ends up suffering at the end of the day. But that does not mean that they didn't at one point actually have a 50% chance of being helped.
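For concreteness, here is a minimal sketch of the kind of weighted lottery I have in mind (the 49/51 split is just the example above, and the names are mine):

    import random

    def pick_party_to_help(chances):
        # Help exactly one party, with probability proportional to its chance.
        parties = list(chances)
        weights = [chances[p] for p in parties]
        return random.choices(parties, weights=weights)[0]

    # The coin flip: each party gets a 50% chance of being saved.
    print(pick_party_to_help({"the one person": 50, "the two people": 50}))

    # The weighted variant, if you agreed with B) but not A):
    print(pick_party_to_help({"the one person": 49, "the two people": 51}))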

4) "And merely having a "chance" of being saved is morally irrelevant. Chances are not things that exist in physical or experiential terms the way that torture and suffering do. No one gives a shit about merely having a chance of being saved; someone who had a chance of being saved and yet is not saved is no better off than someone who had no chance of being saved from the beginning. The reason that we value a chance of being saved is that it may lead to us actually being saved. We don't sit on the mere fact of the chance and covet it as though it were something to value on its own."

I agree that the only reason we value a chance of being saved is that it may lead to us actually being saved, and in that sense, we don't value it in itself. But I don't get why that entails that giving each party a 50% chance of being saved is not what we should do.

Btw, sorry I haven't replied to your response below Brian's discussion yet. I haven't found the time to read that article you linked. I do plan to reply sometime soon.

Also, can you tell me how to quote someone's text in the way that you do in your responses to me? It is much cleaner than my number listing and quotations. Thanks.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-27T21:47:00.264Z · EA · GW

Hi Telofy, nice to hear from you again :)

You say that you have no intuition for what a subject-of-experience is. So let me say two things that might make it more obvious:

1. Here is how I defined a subject-of-experience in my exchange with Michael_S:

"A subject of experience is just something which "enjoys" or has experience(s), whether that be certain visual experiences, pain experiences, emotional experiences, etc... In other words, a subject of experience is just something for whom there is a "what-it's-like". A building, a rock or a plant is not a subject of experience because it has no experience(s). That is why we don't feel concerned when we step on grass: it doesn't feel pain or feel anything. On the other hand, a cow is a subject-of-experience: it presumably has visual experiences and pain experience and all sorts of other experiences. Or more technically, a subject-of-experience (or multiple) may be realized by a cow's physical system (i.e. brain). There would be a single subject-of-experience if all the experiences realized by the cow's physical system are felt by a single subject. Of course, it is possible that within the cow's physical system's life span, multiple subjects-of-experience are realized. This would be the case if not all of the experiences realized by the cow's physical system are felt by a single subject."

I later enriched the definition a bit as follows: "A subject-of-experience is a thing that has, OR IS CAPABLE OF HAVING, experience(s). I add the phrase 'or is capable of having' this time because it has just occurred to me that when I am in dreamless sleep, I have no experiences whatsoever, yet I'd like to think that I am still around - i.e. that the particular subject-of-experience that I am is still around. However, it's also possible that a subject-of-experience exists only when it is experiencing something. If that is true, then the subject-of-experience that I am is going out of and coming into existence several times a night. That's spooky, but perhaps true."

2. Having offered a definition to Michael, I then told him WHAT MAKES a particular subject-of-experience the numerically same subject-of-experience over time:

"Within any given physical system that can realize subjects of experience (e.g. a cow's brain), a subject-of-experience at time t-1 (call this subject "S1") is numerically identical to a subjective-of-experience at some later time t-2 (call this subject "S2") if and only if an experience at t-1 (call this experience "E1") and an experience at t-2 (call this experience "E2") are both felt by S1. That is S1 = S2 iff S1 feels E1 and E2."

Let me just add: A particular subject-of-experience can obviously be qualitatively different over time, which would happen when his personality changes or memory changes (or is erased) etc... But that doesn't imply there is any numerical difference. I assume the distinction between numerical identity and qualitative identity is a familiar one to you. In any case, here is an example to illustrate the distinction: Two perfectly matching coins are qualitatively the same, yet they are numerically distinct insofar as they are not one and the same coin.

I hope what I have said here helps!

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-27T20:10:28.034Z · EA · GW

Hi kbog, glad to hear back from you.

1) "But I don't have an accurate appreciation of what it's like to be 5 people going through 5 headaches either. So I'm missing out on just as much as the amnesiac. In both cases people's perceptions are inaccurate."

I don't quite understand how this is a response to what I said, so let me retrace some things:

You first claimed that if I believed that 5 minor headaches all had by one person is experientially worse than 5 minor headaches spread across 5 people, then I would be committed to "believing that it doesn't matter how many times you are tortured if your memory is wiped each time. Because you will never have the experience of being tortured a second time" and this is a problem.

I replied that it does matter how many times I get tortured because even if my memory is wiped each time, it is still ME (as opposed to a numerically different subject-of-experience, e.g. you) who would experience torture again and again. If my memory is wiped, I will incorrectly VIEW each additional episode of torture as the first one I've ever experienced, but it would not BE the first one I've ever experienced. I would still experience what-it's-like-of-going-through-x-number-of-torture-episodes even if after each episode, my memory was wiped. Since it's the what-it's-like-of-going-through-x-number-of-torture-episodes (and not my memory of it) that is experientially worse than something else, and since X is morally worse than Y when X is experientially worse (i.e. involves more pain) than Y, therefore, it does matter how many times I'm tortured irrespective of my memory.

Now, the fact that you said that I "will never have the experience of being tortured a second time" suggests that you think that memory-continuity is necessary to being the numerically same subject-of-experience (i.e. person). If this were true, then every time a person's memory is wiped, a numerically different person comes into existence and so no person would experience what-it's-like-of-going-through-2-torture-episodes if a memory wipe happens after each torture episode. But I don't think memory-continuity is necessary to being the numerically same subject-of-experience. I think a subject-of-experience at time t1 (call this subject "S1") and a subject-of-experience at some later time t2 (call this subject "S2") are numerically identical (though perhaps qualitatively different) just in case an experience at t1 (call this experience E1) and an experience at t2 (call this experience E2) are both felt by S1. In other words, I think S1 = S2 iff E1 and E2 are both felt by S1. S1 may have forgotten about E1 by t2 (due to a memory wipe), but that doesn't mean it wasn't S1 who also felt E2.

In a nutshell, memory (and thus how accurate we appreciate our past pains) is not morally relevant since it does not prevent a person from actually experiencing what-it's-like-of-going-through-multiple-pains, and it is this latter thing that is morally relevant. So I don't quite see the point of your latest reply.

2) "Of course you can define a relation to have that property, but merely defining it that way gives us no reason to think that it should be the focus of our moral concern.

If I were to define a relation to have the property of being the target of our moral concern, it wouldn't be impacted by how it were spread across multiple people."

I am not simply defining a relation here. We both agree that experience is morally relevant and that therefore pain is morally bad, and that therefore an outcome that involves more pain than another outcome is morally worse than the latter outcome. That is, we agree X is morally worse than Y iff X involves more pain than Y. But how are we to understand the phrase 'involves more pain than'? I understand it as meaning "is experientially worse than", which is why I ultimately think that 5 minor headaches all had by one person is morally worse than 5 minor headaches spread across 5 people. You seem to agree with me that the former is experientially worse than the latter, yet you deny that the former is morally worse than the latter. Thus, you have to offer another plausible account of the phrase 'involves more pain than' on which 5 minor headaches all had by one person involves just as much pain as 5 minor headaches spread across 5 people. IMPORTANTLY, this account has to be one according to which 5 minor headaches all had by one person can involve more pain than 1 major headache and not merely in an experientially worse sense. Can you offer such an account?

I mean, how can 5 minor headaches all had by one person involve more pain than 1 major headache if not in an experientially worse sense? You might try to use math to help illustrate your point of view. You might say, well, suppose each minor headache represents a pain of a magnitude of 2, and the major headache represents a pain of a magnitude of 6. You might further clarify that the 2 doesn't just signify the INTENSITY of the minor pain, since how shitty a pain episode is doesn't just depend on its intensity but also on its duration. Thus, you might clarify that the 2 represents the overall shittiness of the pain - the disutility of it, so to speak. Next, you might say that insofar as there are 5 such minor headaches, they represent 10 disutility, and 10 is bigger than 6. Therefore 5 minor headaches all had by one person involves more pain than a major headache.

But then I would ask you: what is the reality underpinning the number 10? Is it not some overall shittiness that is experientially worse than the overall shittiness from experiencing one major headache? Is it not the overall shittiness of what-it's-like-of-going-through-5-minor-headaches? If it is, then we haven't departed from my "is experientially worse than" interpretation of 'involves more pain than'. If it isn't, then what is it?

To see the problem even more clearly, consider when the 5 minor headaches are spread across 5 people. Here again, you will say that the 5 minor headaches represent 10 disutility and 10 is greater than 6, therefore 5 minor headaches spread across 5 people involve more pain than one major headache. This conclusion is easy to arrive at when one just focuses on the math: 2 x 5 = 10 and 10 > 6. But we must not forget to ask ourselves what the "10" might signify in reality. Is it meant to signify an overall shittiness that is shittier than the experience of 1 major headache? Ok, but where in reality is this overall shittiness? I certainly don't see it. I don't see the presence of this overall shittiness because there is no experience of it.

(Thus, I find using math to show that 5 minor headaches spread across 5 people involve more pain than 1 major headache very misleading: yes, mathematically, you can easily portray it. But, at bottom, the '10' maps onto nothing in reality.)

So in conclusion, I don't see any other plausible interpretation of 'involves more pain than' than "is experientially worse than". If that is the case, then not only is it the case that I haven't arbitrarily defined a relation, but it's also the case that this relation is the only plausible morally relevant relation.

3) "Well, so do I. The point is that the mere fact that 5 headaches in one person is worse for one person doesn't necessarily imply that it is worse overall for 5 headaches among 5 people."

We need to distinguish between experientially worse and morally worse. You agree that 5 headaches in one person is experientially worse than 5 headaches spread across 5 people, yet you insist that that doesn't mean the former is morally worse than the latter. Well, again, this requires you to show that there is another plausible interpretation of 'involves more pain than' on which the former involves just as much pain as the latter.

Also, I should note that I was too hasty when I said that I think experience is the ONLY morally relevant factor. Actually, I also think who suffers is a morally relevant factor, but that doesn't affect our discussion here.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-23T16:35:44.843Z · EA · GW

Hi Brian,

I think the reason why you have such a strong intuition of just saving Amy and Susie in a choice situation like the one I described in my previous reply is that you believe Amy's burning to death plus Susie's sore throat involves more or greater pain than Bob's burning to death. Since you think minimizing aggregate pain (i.e. maximizing aggregate utility) is what we should do, your reason for just saving Amy and Susie is clear.

But importantly, I don't share your belief that Amy's burning to death and Susie's sore throat involve more or greater pain than Bob's burning to death. On this note, I completely reworked my response to Objection 1 a few days ago to make clear why I don't share this belief, so please read that if you want to know why. On the contrary, I think Amy's burning to death and Susie's sore throat involve just as much pain as Bob's burning to death.

So part of the positive case for giving everyone an equal chance is that the suffering on either side would involve the same LEVEL/AMOUNT of pain (even though the suffering on Amy's and Susie's side would clearly involve more INSTANCES of pain: i.e. 2 vs 1).

But even if the suffering on Amy's and Susie's side would involve slightly greater pain (as you believe), there is a positive case for giving Bob some chance of being saved, rather than 0. And that is that who suffers matters, for the reason I offered in my response to Objection 2. I think that response provides a very powerful reason for giving Bob at least some chance, and not no chance at all, even if his pain would be less great than Amy's and Susie's together. (My response to Objection 3 makes clear that giving Bob some chance is not in conflict with being impartial, so that response is relevant too if you think doing so is being partial.)

At the end of the day, I think one's intuitions are based on one's implicit beliefs and what one implicitly takes into consideration. Thus, if we shared the same implicit beliefs and implicitly took the same things into consideration, then we would share the same intuitions. So one way to view my essay is that it tries to achieve its goal by doing two things:

1) Challenging a belief (e.g. that Amy's burning to death plus Susie's sore throat involves more pain than Bob's burning to death) that in part underlies the differences in intuition between me and people like yourself.

2) Reminding people of another important moral fact that should figure in their implicit thought processes (and thus be reflected in their intuitions): that who suffers matters. This moral fact is often forgotten about, which skews people's intuitions. Once this moral fact is seriously taken into account, I bet people's intuitions would not be the same. Importantly, I bet the vast majority of people (including yourself) would feel that giving Bob some chance of being saved is more appropriate than none, EVEN IF you still thought that Amy's and Susie's pains together involve slightly more pain than Bob's.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-22T03:05:38.698Z · EA · GW

No worries!

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-22T01:32:28.461Z · EA · GW

Hey Brian,

I just wanted to note that another reason why you might not want to use the veil-of-ignorance approach to justify why we should save the greater number is that it would force you to conclude that, in a trade-off situation where you can either save one person from an imminent excruciating pain (i.e. being burned alive) or another person from the same severe pain PLUS a third person from a very minor pain (e.g. a sore throat), we should save the second and third person and give 0 chance to the first person.

I think it was F. M. Kamm who first raised this objection to the veil-of-ignorance approach in her book Morality, Mortality, Vol. 1. (I haven't actually read the book.) Interestingly, kbog - another person I've been talking with on this forum - accepts this result. But I wonder if others like yourself would. Imagine Bob, Amy and Susie were in a trade-off situation of the kind I just described, and imagine that Bob never actually had a chance to be in Amy's or Susie's position. In such a situation, do you think you should just save Amy and Susie?

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-21T03:27:08.441Z · EA · GW

Hi Michael,

I removed the comment about worrying that we might not reach a consensus because I worried that it might send you the wrong idea (i.e. that I don't want to talk anymore). It's been tiring I have to admit, but also enjoyable and helpful. Anyways, you clearly saw my comment before I removed it. But yeah, I'm good with talking on.

I agree that experiences are the result of chemical reactions; however, the relations "X being experientially worse than Y" and "X being greater in number than Y" are relevantly different in nature. Someone by the name of "kbog" recently read my very first reply to you (the updated edition) and raised basically the same concern as you have here, and I think I have responded to him pretty aptly. So if you don't mind, can you read my discussion with him:

http://effective-altruism.com/ea/1lt/is_effective_altruism_fundamentally_flawed/dmu

I would have answered you here, but I'm honestly pretty drained from replying to kbog, so I hope you can understand. Let me know what you think.

Regarding defining S1, I don't think I can do better than to say that S1 is a thing that has, or is capable of having, experience(s). I add the phrase 'or is capable of having' this time because it has just occurred to me that when I am in dreamless sleep, I have no experiences whatsoever, yet I'd like to think that I am still around - i.e. that the particular subject-of-experience that I am is still around. However, it's also possible that a subject-of-experience exists only when it is experiencing something. If that is true, then the subject-of-experience that I am is going out of and coming into existence several times a night. That's spooky, but perhaps true.

Anyways, I can't seem to figure out why you need any better a definition of a subject-of-experience than that. I feel like my definition sufficiently distinguishes it from other kinds of things. Moreover, I have provided you with a criterion for identity over time. Shouldn't this be enough?

You write, "I think moral personhood doesn't make sense as a binary concept (the mind from a brain is different at different times, sometimes vastly different such as in the case of a major brain injury) The matter in the brain is also different over time (ship of Theseus)."

I agree with all of this, but I would insist those NEED NOT BE numerical differences, just qualitative differences. A mind can be very qualitatively different (e.g. big personality change) from one moment to the next, but that does not necessarily mean that it is a numerically different mind. Likewise, a brain can be very qualitatively different (e.g. big change in shape) from one moment to the next, but that does not necessarily mean that it is a numerically different brain.

You then write, "I don't see a good reason to call these the same person in a moral sense in a way that two minds of two coexisting brains wouldn't be."

Well, if a particular mind is the numerically same mind before and after a big qualitative change (e.g., due to a brain injury), then clearly there is reason to call it the same mind/person in a way that two minds of two coexisting brains wouldn't be. After all, it's the numerically same mind, whereas two minds of two coexisting brains are clearly two numerically different minds.

You might agree that there is a literal reason to call it the same mind, but deny that there is a moral reason that wouldn't be true of two minds of two coexisting brains. But I think the literal reason constitutes or provides the moral reason: if a mind is numerically the same mind before and after a big qualitative change (e.g. big personality change), then that means whatever experiences are had by that mind before and after the change are HAD BY THAT NUMERICALLY SAME MIND. So if that particular mind suffered a headache before the radical change and then suffered a headache after the change, it is THAT PARTICULAR MIND THAT SUFFERS BOTH. That is enough reason to also call that mind the same mind in a moral sense that wouldn't also be true of two numerically different minds of two coexisting brains.

I didn't quite understand the sentences after that.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-21T01:14:02.294Z · EA · GW

1) "But if anyone did accept that premise then they would already believe that the number of people suffering doesn't matter, just the intensity. In other words, the only people to whom this argument applies are people who would agree with you in the first place that Amy and Susie's suffering is not a greater problem than Bob's suffering. So I can't tell if it's actually doing any work. If not, then it's just adding unnecessary length. That's what I mean when I say that it's too long. Instead of adding the story with the headaches in a separate counterargument, you could have just said all the same things about Amy and Susie and Bob's diseases in the first place, making your claim that Amy and Susie's diseases are not experientially worse than Bob's disease and so on."

The reason why I discussed those three cases was to answer the basic question: what makes one state of affairs morally worse than another? Indeed, given my broad audience, some of whom have no philosophy background, I wanted to start from the ground up.

From that discussion, I gathered two principles that I used to support premise 2 of my argument against Objection 1. I say "gathered" and not "deduced" because you actually don't disagree with those two principles, even though you disagree with an assumption I made in one of the cases (i.e. case 3). What your disagreement with that assumption indicates is a disagreement with premise 1 of my argument against Objection 1.

P1. read: "The degree of suffering in the case of Amy and Susie would be the same as in the case of Bob, even though the number of instances of suffering would differ (e.g., 2:1)."

You disagree because you think Amy's and Susie's pains would together be experientially worse than Bob's pain.

All this is to say that I don't think the discussion of the 3 cases was unnecessary, because it served the important preliminary goal of establishing what makes one state of affairs morally worse than another.

However, it seems like I really should have defended P1. of my argument (and similarly my assumption in case 3) more thoroughly. So I do admit that my post is lacking in this respect, which I appreciate you're pointing out. I'm also sure there are ways to make it more clear and concise. I will consider your suggested approach during future editing sessions.

Update (Mar 21): After thinking through what you said some more, I've decided I'm going to re-do my response to Objection 1 along the lines of what you're suggesting. Thanks for motivating this improvement.

2) "PU says that we should assign moral value on the basis of people's preferences for them. So if someone thinks that being tortured is really really really bad, then we say that it is morally really really really bad. We give the same weight to things that people do. If you say that someone is being risk-averse, that means (iff you're using the term correctly) that they're putting so much effort into avoiding a risk that they are reducing their expected utility. That means that they are breaking at least one of the axioms of the Von Neumann-Morgenstern Utility Theorem, which (one would argue, or assert) means that they are being irrational."

Thanks for that explanation. I see where I went wrong in my previous reply now, so I concede this point.

3) "Yes to both."

Ok, interesting. And, just out of curiosity, you don't consider this biting a bullet? I mean, there are people who have given up on the veil-of-ignorance approach specifically because they think it is morally unacceptable to not give the single person ANY chance of being saved from torture just because saving the others comes with the additional, and relatively trivial, benefit of relieving a minor headache.

P.S. I will reply to your other comment after I've read the paper you linked me to. But, I do want to note that you were being very uncharitable in your reply that "Stipulations can't be true or false - they're stipulations. It's a thought experiment for epistemic purposes." Clearly stipulations/suppositions cannot be false relative to the thought experiment. But surely they can be false relative to reality - to what is actually the case.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-20T22:39:04.658Z · EA · GW

1) "Well I can see how it is possible for someone to believe that. I just don't think it is a justified position, and if you did embrace it you would have a lot of problems. For instance, it commits you to believing that it doesn't matter how many times you are tortured if your memory is wiped each time. Because you will never have the experience of being tortured a second time."

I disagree. I was precisely trying to guard against such thoughts by enriching my first reply to Michael_S with a case of forgetfulness. I wrote, "Now, by the end of our 5th minor headache, we might have long forgotten about the first minor headache because, say, it happened so long ago. So, by the end of our 5th minor headache, we might not have an accurate appreciation of what it’s like to go through 5 minor headaches EVEN THOUGH we in fact have experienced what it’s like to go through 5 minor headaches." (I added the caps here for emphasis)

The point I was trying to make in that passage is that if one person (i.e. one subject-of-experience) experienced all 5 minor headaches, then whether he remembers them or not, the fact of the matter is that HE felt all of them, and insofar as he has, he is experientially worse off than someone who only felt a major headache. Of course, if you asked him at the end of his 5th minor headache whether HE thinks he's had it worse than someone with a major headache, he may say "no" because, say, he has forgotten about some of the minor headaches he's had. But that does NOT MEAN that, IN FACT, he did not have it worse. After all, the what-it's-like-of-going-through-5-minor-headaches is experientially worse than one major headache, and HE has experienced the former, whether he remembers it or not.

So, if my memory is wiped each time after getting tortured, of course it still matters how many times I'm tortured. Because I WILL have the experience of being tortured a second time, whether or not I VIEW that experience as such.

2) "There are two rooms, painted bright orange inside. One person goes into the first room for five minutes, five people go into the second for one minute. If we define orange-perception as the phenomenon of one conscious mind's perception of the color orange, the amount of orange-perception for the group is the same as the amount of orange-perception for the one person.

Something being experiential doesn't imply that it is not quantitative. We can clearly quantify experiences in many ways, e.g. I had two dreams, I was awake for thirty seconds, etc. Or me and my friends each saw one bird, and so on."

My point wasn't that we can't quantify experience in various ways, but that relations of an experiential nature, like the relation of X being experientially worse than Y, behave in relevantly different ways from relations of a quantitative - maybe 'non-experiential' might have been a better word - nature, like the relation of X being heavier than Y. As I tried to explain, the "experientially-worse-than" relation is impacted by whether X (e.g. 5 minor headaches) is spread across 5 people or all had by one person, whereas the "heavier-than" relation is not impacted by whether X (e.g. 100 tons) is spread across 5 objects or true of 1 object.

3) "Yes, but the question here is whether 5 what-it's-lies-of-going-through-1-minor-headache is 5x worse than 1 minor headache. We can believe this moral claim without believing that the phenomenon of 5 separate headaches is phenomenally equivalent to 1 experience of 5 headaches. There are lots of cases where A is morally equivalent to B even though A and B are physically or phenomenally different."

The moral question here is whether a case in which 5 minor headaches are all had by one person is morally equivalent to (i.e. morally just as bad as) a case in which 5 minor headaches are spread across 5 people. You think it is, and I think it isn't. Instead, I think the former case is morally worse than the latter case.

And the ONLY reason why I think this is because I think 5 headaches all had by one person is experientially worse than 5 headaches spread across 5 people. As I said before, I think experience is the only morally relevant factor.

Since I don't think anything other than experience matters, I would deny the existence of cases in which A and B are morally just as bad/good where A and B differ phenomenally.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-20T19:00:50.361Z · EA · GW

1) "Because I don't have any reason to feel different."

Ok, well, that comes as a surprise to me. In any case, I hope after reading my first reply to Michael_S, you at least sort of see how it could be possible that someone like me would feel surprised by that, even if you don't agree with my reasoning. In other words, I hope you at least sort of see how it could be possible that someone who would clearly agree with you that, say, 5 minor headaches all had by 1 tall person is experientially just as bad as 5 minor headaches all had by 1 short person, might still disagree with you that 5 minor headaches all had by 1 person is experientially just as bad as 5 minor headaches spread across 5 people.

2) "If you want to show that the cases are different in a relevant way, then you need to spell it out. In the absence of reasons to say that there is a difference, we assume by default that they're similar."

That's what my first reply to Michael_S, in effect, aimed to do.

3) "The third sentence does not follow from the second. This is like saying "there is nothing present in a Toyota Corolla that could make it weigh more than a Ford F-150, therefore five Toyota Corollas cannot weigh more than a Ford F-150." Just because there is no one element in a set of events that is worse than a bad thing doesn't mean that the set of events is not worse than the bad thing. There are lots of events where badness increases with composition, even without using aggregative utilitarian logic. E.g.: it is okay to have sex with Michelle, and it is okay to marry Tiffany, but it is not okay to do both."

Your reductio-by-analogy (I made that phrase up) doesn't work, because your analogy is relevantly different. In your analogy, we are dealing with the relation of _ being heavier than _, whereas I'm dealing with the relation of _ being experientially worse than _. These relations are very different in nature: one is quantitative in nature, the other is experiential in nature. You might insist that this is not a relevant difference, but I think it is when one really slows down to think about exactly what it is that makes 5 minor headaches experientially worse than a major headache.

As I mentioned, the answer is the what-it's-like-of-going-through-5-minor-headaches. That is, the what-it's-like of going through one minor headache, then another (sometime later), then another, then another, then another. It's THAT SPECIFIC WHAT-IT'S-LIKE that can plausibly be experientially worse than a major headache. It's THAT SPECIFIC WHAT-IT'S-LIKE that can plausibly be "shittier" or "suckier" than a major headache.

However, when the 5 minor headaches are spread across 5 people, there are just 5 what-it's-likes-of-going-through-1-minor-headache, and no single what-it's-like-of-going-through-5-minor-headaches. Why? Because each of the minor headaches in this situation would be felt by a numerically different subject-of-experience (5 people in all), and numerically different subjects-of-experience cannot have their experiences "linked". Otherwise, they would not be numerically different.

Therefore, only when the 5 minor headaches are all had by one subject-of-experience (i.e. one person) can they be experientially worse than one major headache. And therefore, 5 minor headaches, when all had by one person, is experientially worse than 5 minor headaches, spread across 5 people.

I think what I just said above shows clearly how the relation of _ being experientially worse than _ is impacted by whether the 5 minor headaches are all had by one person or spread across 5 different people, whereas the relation of _ being heavier than _ is not similarly affected. So that is the relevant difference.

I hope you can really consider what I'm saying here. Thanks.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-20T17:59:02.321Z · EA · GW

1) "You simply assert that we would rather save Emma's major headache rather than five minor ones in case 3. But if you've stipulated that people would rather endure one big headache than five minor ones, then the big headache has more disutility. Just because the minor ones are split among different people doesn't change the story. I just don't follow the argument here."

I DO NOT simply assert this. In case 3, I wrote, "Here, I assume you would say that we should save Emma from the major headache or at least give her a higher chance of being saved because a major headache is morally worse than 5 minor headaches spread across 5 persons and it's morally worse BECAUSE a major headache hurts more (in some non-arbitrary sense) than the 5 minor headaches spread across 5 people. Here, the non-arbitrary sense is straightforward: Emma would be hurting more than any one of the 5 others who would each experience only 1 minor headache." (I capped 'because' for emphasis here)

You would not buy the reason I gave (because you believe 5 minor headaches, spread across 5 people, is experientially worse than a major headache), but that is a different story. If you were more charitable and patient while reading my post, thinking about who my audience is (many of whom aren't utilitarians and don't buy into interpersonal aggregation of pains), etc., I doubt you would be leveling all the accusations you have against me. It wastes both your time and mine to have to deal with them.

2) "My whole point here is that your response to Objection 1 doesn't do any work to convince us of your premises regarding the headaches. Yeah there's an argument, but its premise is both contentious and undefended."

I was just using your words. You said "But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people." As I said, I assumed a premise that I thought the vast majority of my audience would agree with (i.e., at bottom, that 5 minor headaches all had by one person is experientially worse than 5 minor headaches spread across 5 people). If YOU find that premise contentious, great, we can have a discussion about it, but please don't make it sound like my argument doesn't do any work for anyone.

3) "I'm not just speaking for utilitarians, I'm speaking for anyone who doesn't buy the premise for choice 3. I expect that lots of non-utilitarians would reject it as well."

Well, I don't, which is why I assumed the premise in the first place. I mean, I wouldn't assume a premise that I thought the majority of my audience would disagree with. It's certainly not obvious to me that 5 minor headaches all had by one person is experientially just as bad as 5 minor headaches spread across 5 people.

4) "The original position argument is not an empirical prediction of what humans would choose in such-and-such circumstances, it's an analysis of what we would expect of them as the rational thing to do, so the hedonist utilitarian points out that risk aversion violates the axioms of expected utility theory and it would be rational of people to not make that choice, whereas the preference utilitarian just calibrates the utility scale to people's preferences anyway so that there isn't any dissonance between what people would select and what utilitarianism says."

Sorry, I'm not familiar with the axioms of expected utility theory or with preference utilitarianism. But perhaps I can understand your position by asking 2 questions:

1) According to you, would it be rational behind the veil of ignorance to agree to a policy that said: in a trade-off situation between saving a person from torture or saving another person from torture AND saving a third person from a minor headache, the latter two are to be saved?

2) In an actual trade-off situation of this kind, do you think we ought to save the latter two?

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-20T03:35:47.580Z · EA · GW

1) "But that involves arbitrarily saving fewer people. I mean, you could call that non-arbitrary, since you have some kind of reason for it, but it's fewer people all the same, and it's not clear how reason or empathy would generally lead one to do this. So there is no prima facie case for the position that you're defending."

To arbitrarily save fewer people is to save them on a whim. I am not suggesting that we should save them on a whim. I am suggesting that we should give each person an equal chance of being saved. They are completely different ideas.

2) "But you have not argued it, you assumed it, by way of supposing that 5 headaches are worse when they happen to one person than when they happen to multiple people, which presupposes that more total suffering does not necessarily imply worseness in such gedanken."

Please show me where I supposed that 5 minor headaches are MORALLY worse when they happen to one person than when they happen to multiple people. In both choice situations 2 and 3, I provided REASONS for saying

A) why 5 minor headaches all had by one person is morally worse than 1 major headache had by one person, and

B) why 1 major headache had by one person is morally worse than 5 minor headaches spread across 5 people.

From A) and B), you can infer that I believe 5 minor headaches all had by one person is morally worse than 5 minor headaches spread across 5 persons, but don't say that I supposed this. I provided reasons. You can reject those reasons, but that is a different story.

If you meant that I supposed that 5 minor headaches are EXPERIENTIALLY worse when they happen to one person than when they happen to multiple people, sure, it can be inferred from what I wrote that I was supposing this. But importantly, making this assumption is not a stretch, as it seems (at least to me) like an assumption plausibly shared by many. It turned out that Michael_S disagreed, at which point I was glad to defend the assumption. More importantly, even if I made this supposition (we have to start from somewhere), it does not mean that, by doing so, I was simply assuming and not arguing for what you quoted.

3) "But you need to defend such an implication if you wish to claim that it is not morally worse for more people to suffer an equal amount."

If you don't see an argument in my response to Objection 1, I'll live with that since I put a lot of time into writing that essay and no one else has said the same.

4) "Because anyone who buys the basic arguments for helping more people rather than fewer will often prefer to alleviate five minor headaches rather than one major one, regardless of whether they happen to different people or not."

By basic arguments, I presume you mean utilitarian arguments. First off, I was not writing this for a utilitarian audience. I was writing this for an audience that finds it intuitive to save Amy and Susie instead of Bob, and I was trying to show how other (perhaps more basic) intuitions that I assumed were commonly held (i.e. that we should save one person from a major headache instead of 5 people each from a minor headache) could provide the ingredients for showing that we should provide each of them with an equal chance of being helped.

If I had been writing strictly for a utilitarian audience, I would have taken a different approach, which would have included explaining why 5 pains all had by one person is experientially worse than 5 pains spread across 5 people.

Many people who are effective altruists have reasons for helping people, such as the pond argument, but not reasons for helping the many over the few. So it is uncharitable of you to simply assume that my audience is all utilitarians.

5) "brianwang712's response based on the Original Position implies that the decision to not prevent 5 minor headaches is wrong, even though he didn't take the time to spell it out."

Not true. It is not clear what the conclusion from the original position would be when the levels of pain for the people involved differ. Some people are extremely risk-averse about extreme pains, and may not agree to a policy of helping the greater number when what is at stake for the few is really bad pain.

6) "Look, your comments towards him are very long and convoluted. I'm not about to wade through it just to find the specific 1-2 sentences where you go astray. Especially when you stuff posts with "updates" alongside copies of your original comments, I find it almost painful to look through."

I'm sorry you find them convoluted. I updated the very first replies to Brian and Michael_S in order to try to make my position clearer for first-time readers like you. I spent a lot of time trying to make my replies clearer because I don't want to waste readers' time. If I failed to do that, I can only say I tried.

7) "I don't see why identifying with helping the less fortunate (something which almost everybody does, in some fashion or other) implies that we should hold philosophical arguments to gentle standards."

I never asked for gentle standards. I asked for a non-dismissive and friendly attitude.

8) "The time and knowledge of people who help the less fortunate is particularly valuable, so one should be willing and able to credibly signal the occasional times when one is confident that the people who help the less fortunate ought to be focusing elsewhere."

I didn't quite understand the latter half, but yes, their time is valuable, which is why I've tried to be as clear as I can. In any case, it is a good thing to critically examine one's own views from time to time, no matter how vital one's time seems under the supposition of that view. So - if I understood the latter part correctly - you needn't worry so much about saving other people's time from my post.

9) "Conversations mustn't be friendly to be informative, and I'm really not being dismissive about anything you write which I do have the time to read."

A person (at least speaking for myself) is much more receptive to the content of another's comment when it is put in a friendly (though demanding) manner. Thus, friendliness helps make conversation more informative.

Whereas dismissive and unfriendly comments like "I'm not about to wade through it just to find the specific 1-2 sentences where you go astray." or "I find it almost painful to look through." do not.

P.S. I will not be replying to any more of your comments that I feel are uncharitable, dismissive or show a lack of effort spent on understanding my position.

Oops, I just noticed I missed a comment you made:

10) "Because there is no reason for the distribution of certain wrongs across different people to affect the badness of those wrongs, as our account of the badness of those wrongs does not depend on any facts about the particular people to whom they occur."

As I see it, a case or state of affairs in which 5 minor headaches are all felt by one person is MORALLY WORSE than a case in which 5 minor headaches are spread across 5 persons because 5 minor headaches all felt by one person is EXPERIENTIALLY WORSE than 5 minor headaches spread across 5 persons.

I take experience to be the only morally relevant factor, and in this way, I am a moral singularist (as opposed to pluralist). For why I think the former is experientially worse than the latter, please at least read my first reply to Michael_S. Thanks.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-20T01:46:42.865Z · EA · GW

1) "The reason that the conclusions made in such a scenario have a bearing on reality is that the conclusions are necessarily both fair and rational."

The conclusions are rational under the stipulation that each person has an equal chance of being in anybody's position. But they are not actually rational, given that the stipulation is false. So you can't just say that the conclusions have a bearing on reality because they are necessarily rational. They are rational under the stipulation, but not when you take into account what is actually the case.

And I don't see how the conclusion is fair to Bob when the conclusion is based on a false stipulation. Bob is a real person. He shouldn't be treated like he had an equal chance of being in Amy's or Susie's position, when he in fact didn't.

2) "My reply to Bob would be to essentially restate brianwang's original comment..."

Sorry, can you quote the part you're referring to?

3) "...and explain how the morally correct course of action is supported by a utilitarian principle of indifference argument."

Can you explain what this "utilitarian principle of indifference argument" is?

4) "and that none of the things he says (like the fact that he is not Amy or Susie, or the fact that he is scared) are sound counterarguments."

Please don't distort what I said. I had him say, "The fact of the matter is that I had no chance of being in Amy's or Susie's position," which is very different from saying that he was not Amy or Susie. If he wasn't Amy or Susie but actually had an equal chance of being either of them, then I would take the veil of ignorance approach more seriously.

I added the part about him being scared because I wanted it to sound realistic. It is uncharitable to assume that that forms part of my argument.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-19T23:17:29.487Z · EA · GW

1) "Reason and empathy don't tell you to arbitrarily save fewer people."

I never said they tell me to arbitrarily save fewer people. I said that they tell us to give each person an equal chance of being saved.

2) "This doesn't answer the objection."

That premise (as indicated by "P1."), plus my support for that premise, was not meant to answer an objection. It was just the first premise of an argument that was meant to answer Objection 1.

3) "There is more suffering when it happens to two people, and more suffering is morally worse."

Yes, there are more instances of suffering. But as I have tried to argue, x instances of suffering spread across x people is just as morally bad as 1 instance of the same kind of suffering had by one other person. If by 'more suffering' you meant worse suffering in an experiential sense, then please see my first response to Michael.

4) "The fact that the level of suffering in each person is the same doesn't imply that they are morally equivalent outcomes."

I didn't say it was implied. If I thought it was implied, then my response to Objection 1 would have been much shorter.

5) "This is a textbook case of begging the question."

I don't see how my assumption is anywhere near what I want to conclude. It seems to me like an assumption that is plausibly shared by all. That's why I assumed it in the first place: to show that my conclusion can be arrived at from shared assumptions.

6) "No one you're arguing with will grant that we should act differently for cases 2 and 3."

I would hesitate to use "no one". If this were true, then I would have expected more comments along those lines. More importantly, I wonder why one wouldn't grant that we should act differently in choice situations 2 and 3. If the reason boils down to the thought that 5 minor pains is experientially worse than 1 major pain, regardless of whether the 5 minor pains are all felt by one person or spread across 5 different people, then I would point you to my conversation with Michael_S.

Finally, I just want to say that all the people I've conversed with on this forum so far have been very friendly and not dismissive, despite perhaps some differences in view. I wasn't surprised by that because (presumably) most people on here are effective altruists, and it would seem rather odd for an effective altruist - someone who identifies with helping the less fortunate - to be unfriendly or dismissive. Anyways, I do hope to remain unsurprised by that. I think only in a friendly and non-dismissive atmosphere can the interlocutors benefit from their conversation.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-19T22:24:06.807Z · EA · GW

Hey kbog,

Thanks for your comment. I never said it was up for debate. Rather, I question whether agreements reached under such a stipulation have any force or validity in reality, given that the stipulation is, in fact, false.

Please read my second response to brianwang712 where I imagine that Bob has a conversation with him. I would be curious how you would respond to Bob in that conversation.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-19T18:15:45.618Z · EA · GW

Hey gworley3,

Here's the comment I made about the difference between effective-altruism and utilitarianism (if you're interested): http://effective-altruism.com/ea/1ll/cognitive_and_emotional_barriers_to_eas_growth/dij

Comment by Jeffhe on [deleted post] 2018-03-19T17:45:15.502Z

Hey gworley3,

I decided to delete the post seeing that it wasn't getting many responses. Thanks for replying anyways!

Comment by Jeffhe on [deleted post] 2018-03-19T00:33:10.116Z

Hey Khorton,

Thanks for sharing! For some reason, I totally did not expect faith/religion to come up. Clearly I have not thought broadly enough ><. If I included a new option like

10) I donate/plan to donate because I am of a particular faith/religion that calls on me or requires me to do charitable deeds

do you think that would be more true of you than 1)? How important is it to you that doing charitable deeds is morally good or right? In other words, what if God did not create morality and simply requested that you help others without it being morally good or bad? Do you think you would still do it?

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-19T00:06:45.028Z · EA · GW

REVISED TO BE MORE CLEAR ON MAR 19:

You also write, "There is more pain (more of these chemical reactions based experiences) in the 5 headaches than there is in the 1 whether or not they occur in a single subject. I don't see any reason to treat this differently then the underlying chemical reactions."

Well, to me the reason is obvious: when we say that "5 minor pains in one person is greater than (i.e. worse than) a major pain in one person", we are using "greater than" in an EXPERIENTIAL sense. On the other hand, when we say that 10 neural impulses in one person is greater than 5 neural impulses in one person, we are using "greater than" in a QUANTITATIVE/NUMERICAL sense. These two comparisons are very different in their nature. The former is about the relative STRENGTH of the pains, the latter is about the relative QUANTITIES of neural impulses.

So just because 10 neural impulses is greater than 5 neural impulses in the numerical sense, whether the 10 impulses take place in 1 brain or 5 brains, that does NOT mean that 5 minor pains is greater than 1 major headache in the experiential sense, whether the 5 minor pains are realized in 1 brain or 5 brains.

This relates back to why I said it can be very misleading to represent pain comparisons in numerals like 5*2>5. Such representations do not distinguish between the two senses described above, and thus can easily lead one to conflate them.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-18T23:34:48.234Z · EA · GW

Just to make sure we're on the same page here, let me summarize where we're at:

In choice situation 2 of my paper, I said that, supposing any person would rather endure 1 major headache of a certain sort than 5 minor headaches of a certain sort when put to the choice, a case in which Al suffers 5 such minor headaches is morally worse than a case in which Emma suffers 1 such major headache. And the reason I gave for this is that Al's 5 minor headaches is more painful (i.e. worse) than Emma's major headache.

In choice situation 3, however, the 5 minor headaches are spread across 5 different people: Al and four others. Here I claim that the case in which Emma suffers a major headache is morally worse than a case in which the 5 people each suffer 1 minor headache. And the reason I gave for this is that Emma's major headache is more painful (i.e. worse) than each of the 5 people's minor headache.

Against this, you claim that if the supposition from choice situation 2 carries over to choice situation 3 - the supposition that any person would rather endure 1 major headache than 5 minor headaches if put to the choice - then the case in which the 5 people each suffer 1 minor headache is morally worse than Emma suffering a major headache. And your reason for saying this is that you think the 5 minor headaches spread across the 5 people is more painful (i.e. worse) than Emma's major headache.

THAT is what I took you to mean when you wrote: "Conditional on agreeing 5 minor headaches in one person is worse than 1 major headache in one person, I would feel exactly the same if it were spread out over 5 people."

As a result, this whole time, I have been trying to explain why it is that 5 minor headaches spread across five people CANNOT be more painful (i.e. worse) than a major headache, even while the same 5 minor headaches all had by one person can be (and would be, under the supposition).

Importantly, I never took myself to be disagreeing with you on whether 5 instances of a minor headache is more than 1 instance of a major headache. Clearly, 5 instances of a minor headache is more than 1 instance of a major headache, regardless of whether the 5 instances were all experienced by a single subject-of-experience or spread across 5.

I took our disagreement to be about whether 5 instances of a minor headache, when spread across 5 people, is more painful (i.e. worse) than an instance of a major headache.

My view is that only when the 5 headaches are all had by one subject-of-experience could they be more painful (i.e. worse) than a major headache. Moreover, my view is that, under the supposition, it literally makes no sense to say (or that it is at least false to say, even if it does make sense) that the 5 headaches, when spread across 5 people, are more painful (i.e. worse) than a major headache.

If I am right, then in choice situation 3, the morally worse case should be the case in which Emma suffers one major headache, not the case in which 5 people each suffer one minor headache.

In response to your question, "what makes a single subject 'a single subject'", here is another stab: within any given physical system that can realize subjects of experience (e.g. a cow's brain), the subject-of-experience at t-1 (S1) is numerically identical to the subject-of-experience at t-2 (S2) if and only if an experience at t-1 (E1) and an experience at t-2 (E2) are both felt by S1. That is, S1 = S2 iff S1 feels both E1 and E2.

That in conjunction with the definition I provided earlier is probably the best I can do to communicate what I take a subject-of-experience to be, and what makes a particular subject-of-experience the numerically same subject-of-experience over time.
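To put that criterion in symbols (a rough rendering of my own, where Feels(S, E, t) is just shorthand for "S feels E at t"):

$$S_1 = S_2 \;\iff\; \exists E_1 \, \exists E_2 \; \big[\, \mathrm{Feels}(S_1, E_1, t_1) \wedge \mathrm{Feels}(S_1, E_2, t_2) \,\big]$$

The point of the biconditional is that the cross-time identity of the subject is fixed by one and the same subject, S1, feeling both experiences.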

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-18T21:53:22.129Z · EA · GW

Hey Brian,

No worries! I've enjoyed our exchange as well - your latest response is both creative and funny. In particular, when I read "They have read your blog post on the EA forum and decide to flip a coin", I literally laughed out loud (haha). It's been a pleasure : ) If you change your mind and decide to reply, definitely feel welcome to.

Btw, for the benefit of first-time readers, I've updated a portion of my very first response in order to provide more color on something that I originally wrote. In good faith, I've also kept in the response what I originally wrote. Just wanted to let you know. Now onto my response.

You write, "In the donor case, Bob had a condition where he was in the minority; more often in his life, however, he will find himself in cases where he is in the majority (e.g., hospital case, loan case). And so over a whole lifetime of decisions to be made, Bob is much more likely to benefit from the veil-of-ignorance-type approach."

This would be true if Bob had an equal chance of being in any of the positions of a given future trade-off situation: only then would he be more likely to land among the majority in any given trade-off. Importantly, just because there are more positions on the majority side of a trade-off situation, that does not automatically mean that Bob has a higher chance of being among the majority. His probability of being in each of the positions is crucial. I think you were implicitly assuming that Bob has an equal chance of being in any of the positions of a future trade-off situation because he doesn't know his future. But, as I mentioned in my previous post, it would be a mistake to conclude, from a lack of knowledge about one's position, that one has an equal chance of being in anyone's position. So, just because Bob doesn't know anything about his future, it does not mean that he has an equal chance of being in any of the positions in the future trade-off situations that he is involved in.
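A toy calculation makes this concrete (the numbers are mine, purely for illustration). Suppose a future trade-off situation has one minority position A and two majority positions B1 and B2, and suppose Bob's chances of occupying them are:

$$P(A) = 0.8, \qquad P(B_1) = P(B_2) = 0.1$$

Then

$$P(\text{majority}) = P(B_1) + P(B_2) = 0.2 \;<\; 0.8 = P(A) = P(\text{minority})$$

Despite the majority side having twice as many positions, Bob is four times as likely to end up in the minority. Only under the equal-chance assumption P(A) = P(B1) = P(B2) = 1/3 does the majority side become the more probable one.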

In my original first response to you, I very briefly explained why I think people in general do not have an equal chance of being in anybody's position. I have since expanded that explanation. If what I say there is right, then it is not true that "over a whole lifetime of decisions to be made, Bob [or anyone else] is much more likely to benefit from the veil-of-ignorance-type approach [than the equal-chance approach]."

All the best!

Comment by Jeffhe on [deleted post] 2018-03-17T21:59:57.319Z

Hey RandomEA (nice to chat again in a different setting lol),

Thanks for linking me to that. I understand moral duty and obligation to mean the same thing. Do you know what difference they had in mind? And 'opportunity' sounds very vague. It doesn't tell us much about the psychology of the surveyees.

Comment by Jeffhe on [deleted post] 2018-03-17T21:57:59.198Z

Hey adamaero,

I agree that reasons change! But I would be curious what your current reason is :P (don't worry if you don't want to say)

Also, can you tell me which count as justifications and which count as reasons for you, and the difference between a reason and a justification for you?

I understand myself to be using the word 'reason' to mean cause here, but 'reason' can also be used to mean justification since in everyday parlance, it is a pretty loose term. Something similar can be said for the words 'why' and 'because'.

As I see it, the real distinction is between a cause and a justification. We all more-or-less know what someone means when they say X is the cause of Y. However, justification is less clear, so I want to share my understanding of justification (so you know where my mind is at).

As I see it, Y (e.g. an action or belief or piece of legislation) requires justification ONLY IF it is held to some standard (perhaps an implicit one). That which does the justifying (i.e. X) does it by showing how Y in fact meets that standard. Take a CEO's actions. They are held (by shareholders and others) to the standard of being conducive to the success of the business. If it is unclear to them how one of the CEO's recent actions (say, laying off a rather effective employee) is good for the business, they might ask the CEO to justify his action. The CEO might then say that he was made aware that that employee was planning to leak company secrets. In saying this, he is offering a fact that shows how his action meets the standard it is held to.

Note that it follows from this understanding of justification that justification is subjective, in the sense that justification is always justification TO SOMEONE. If you and I hold Y to different standards, then when presented with X, Y may be justified TO YOU, though it remains unjustified to me. And someone who doesn't hold Y to any standard won't even ask for a justification of it in the first place.

Note also that for many things (like actions and beliefs), it makes sense to ask both for a cause and a justification. But since almost everything has a cause, while relatively few things are held to a standard (implicit or explicit), questions of cause occur more often.

Finally note that cause and justification can interact in various ways. For example, a person might believe that a certain act is justified, and that belief in conjunction with a desire to act in a justified way may cause him to act in that way.

I've never shared these views about justification with anyone but a close friend. So it would be interesting to know if your view is the same.

Having said all that, I admit I could have made certain of the reasons I listed sound more "cause-y" (maybe 1 and 2). Are those the ones you're concerned about?

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-17T19:14:47.889Z · EA · GW

1) A subject of experience is just something which "enjoys" or has experience(s), whether that be certain visual experiences, pain experiences, emotional experiences, etc... In other words, a subject of experience is just something for whom there is a "what-it's-like". A building, a rock or a plant is not a subject of experience because it has no experience(s). That is, for example, why we don't feel concerned when we step on grass: it doesn't feel pain or feel anything. On the other hand, a cow is a subject-of-experience - it presumably has visual experiences and pain experience and all sorts of other experiences. Or more technically, a subject-of-experience (or multiple) may be realized by a cow's physical system (i.e. brain). There would be a single subject-of-experience if all the experiences realized by the cow's physical system are felt by a single subject. Of course, it is possible that within the cow's physical system's life span, multiple subjects-of-experience are realized. This would be the case if not all of the experiences realized by the cow's physical system are felt by a single subject.

2) But when we say that 5 minor headaches is "worse" or "more painful" than a major pain, we are not simply making a "greater than, less than, or equal to" number comparison, like saying 5 minor headaches is more headaches than 1 major headache.

Clearly 5 minor headaches, whether they are spread across 5 persons or not, is more headaches than 1 major headache. But that is irrelevant. Because the claim you're making is that 5 minor headaches, whether they are spread across 5 persons or not, is WORSE or MORE PAINFUL than 1 major headache. And this is where I disagree.

I am saying that for 5 minor headaches to be plausibly worse than a major headache, it must be the case that there is a what-it's-like-of-going-through-5-minor-headaches, because only THAT KIND of experience can be plausibly worse or more painful than a major headache. But, for there to be THAT KIND of experience, it must be the case that all 5 minor headaches are felt by a single subject of experience and not spread among 5 experientially independent subjects of experience. For when the 5 minor headaches are spread, there are only 5 experientially independent what-it's-likes-of-going-through-a-minor-headache, and no what-it's-like-of-going-through-5-minor-headaches.

Sorry for the caps btw, I have no other way of placing emphasis.

Comment by jeffhe on Enlightened Concerns of Tomorrow · 2018-03-17T17:57:40.501Z · EA · GW

Hey Cassidy,

Very well written post! I didn't read his book, but just going off your summary of his view where you characterize him as "asserting that knowledge and technology will alleviate most of our persisting worries in time" and where you quote him saying, “… there is no limit to the betterments we can attain if we continue to apply knowledge to enhance human flourishing.”, I am curious how much weight Pinker as well as you give to

1) empathy (i.e. the ability to imagine oneself in the shoes of another - to imagine what it might be like for another) and/or

2) caring for strangers and/or

3) fair-mindedness (e.g., intellectual humility, critical thinking skills, listening skills, etc.)

in any lasting solution to making the world a better place.

My own opinion is that knowledge and technology alone cannot solve many of the problems that make our world a less than ideal place such as wars or long standing conflicts like the Israeli-Palestinian conflict or the drug cartel problem or religiously motivated terrorism. Knowledge and technology might solve poverty and disease, but I don't see them solving many great sources of suffering for innocent people.

From this point of view, I find that one of the biggest gaps in our education systems these days is a lack of emphasis on teaching/instilling the things I've mentioned above. Having said that, I am tempted by the idea that one of the best ways to make the world a better place in the future is to donate to organizations that try to promote those things in school. I wonder what your opinion on that is.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-17T17:24:13.645Z · EA · GW

Hey RandomEA,

Sorry for the late reply. Well, say I'm choosing between the World Food Programme (WFP) and some other charity, and I have $30 to donate. According to WFP, $30 can feed a person for a month (if I remember correctly). If I donate to the other charity, then WFP in its next operation will have $30 less to spend on food, meaning someone who otherwise would have been helped won't be receiving help. Who that person is, we don't know. All we know is that he is the person who was next in line, the first to be turned away.

Now, you disagree with this. Specifically, you disagree that it could be said of any SPECIFIC person that, if I don't donate to WFP, it would be true of THAT person that he won't end up receiving help that he otherwise would have. And this is because:

1) HE - that specific person - still had a chance of being helped by WFP even if I didn't donate the $30. For example, he might have gotten in line sooner than I'm supposing he has. And you will say that this holds true for ANY specific person. Therefore, the phrase "he won't end up receiving help" is not guaranteed.

2) Moreover, even if I do donate the $30 to WFP, there isn't any guarantee that he would be helped. For example, HE might have gotten in line way too late for an additional $30 to make a difference for him. And you will say that this holds true for ANY specific person. Therefore, the phrase "that he otherwise would have" is also not guaranteed.

In the end, you will say, all that can be true of any SPECIFIC person is that my donation of $30 would raise THAT person's chance of being helped.

Therefore, in the real world, you will say, there's rarely a trade-off choice situation between specific people.

I am tempted to agree with that, but two points:

1) There still seems to be a trade off choice situation between specific groups of people: i.e. the group helped by WFP and the group helped by the other charity.
2) I think, at least in refugee camps, there is already a list of all the refugees and a document specifying who specifically is next in line to receive a given service/aid. In these cases, we will be faced with a trade-off choice situation between a specific individual (who we would be helping if we donated to the refugee camp) and whatever group of people would be helped by donating to another charity. I wonder what percentage of real life situations are like this. Moreover, if you're looking for real life trade-off situations between some specific person(s) and some other specific person or specific group, they are clearly not hard to find. For example, you can either help a specific homeless man vs whoever. Or you can help a specific person avoid torture by helping pay off a ransom vs whoever else by helping a charity. Or you can fund a specific person's cancer treatment vs whoever. Etc...

My overall point is that trade off situations of the kind I describe in my paper are very real and everywhere EVEN IF it is true that there are trade off situations of the nature you describe.

Thanks.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-17T03:31:30.737Z · EA · GW

1) I agree that the me today is different from the me yesterday, but I would say this is a qualitative difference, not a numerical difference. I am still the numerically same subject-of-experience as yesterday's me, even though I may be qualitatively different in various physical and psychological ways from yesterday's me. I also agree that the me today is different from the you today, but here I would say that the difference is not merely qualitative, but numerical too. You and I are numerically different subjects-of-experience, not just qualitatively different.

Moreover, I would agree that our qualitative differences are a matter of degree and not of kind. It is not that I am a chair and you a subject-of-experience. We are both embodied subjects-of-experience (i.e. of that kind), but we differ to various degrees: you might be taller or lighter-skinned, etc.

I thus agreed with all your premises and have shown that they can be compatible with the existence of a subject-of-experience that extends through time. So I don't yet see a convincing argument against the existence of a subject-of-experience that extends through time.

2) So here you're granting me the existence of a subject-of-experience that extends through time, but you're saying that it makes no moral difference whether one subject-of-experience suffers 5 minor headaches or 5 numerically different subjects-of-experience each experience 1 minor headache, and that therefore, we should just focus on the number of headaches.

Well, as I tried to explain in previous replies, when there is one subject-of-experience who extends through time, it is possible for him to experience what it's like of going through 5 minor headaches, since after all, he experiences all 5 minor headaches (whether he remembers experiencing them or not). Moreover, it is ONLY the what-it's-like-of-going-through-5-minor-headaches that can plausibly be worse or more painful than the what-it's-like-of-going-through-a-major-headache.

In contrast, when the 5 minor headaches are spread across 5 people, each of the 5 people experiences only what it's like to go through 1 minor headache. Moreover, the what-it's-like-of-going-through-1-headache CANNOT plausibly be worse or more painful than the what-it's-like-of-going-through-a-major-headache.

Thus it matters whether the 5 headaches are experienced all by a single subject-of-experience (i.e. experienced together) or spread across five experientially independent subject-of-experiences (i.e. experienced independently). It matters because, again, ONLY when the 5 headaches are experienced together can there be the what-it's-like-of-going-through-5-minor-headaches and ONLY that can plausibly be said to be worse or more painful than the what-it's-like-of-going-through-a-major-headache.

P.S. I have extensively edited my very first reply to you, so that it is more clear and detailed for first-time readers. I would recommend giving it a read if you have the time. Thanks.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-17T02:12:42.613Z · EA · GW

Hi Telofy,

Thanks for this lucid reply. It has made me realize that it was a mistake to use the phrase "clear experiential sense" because that misleads people into thinking that I am referring to some singular experience (e.g. some feeling of exhaustion that sets in after the final headache). In light of this issue, I have written a "new" first reply to Michael_S to try to make my position clearer. I think you will find it helpful. Moreover, if you find any part of it unclear, please do let me know.

What I'm about to say overlaps with some of the content in my "new" reply to Michael_S:

You write that you don't see anything morally relevant linking the person moments of a single person. Are you concluding from this that there is not actually a single subject-of-experience who feels, say, 5 pains over time (even though we talk as if there is)? Or, are you concluding from this that even if there is actually just a single subject-of-experience who feels all 5 pains over time, it is morally no different from 5 subjects-of-experience who each feels 1 pain of the same sort?

What matters to me at the end of the day is whether there is a single subject-of-experience who extends through time and thus is the particular subject who feels all 5 pains. If there is, then this subject experiences what it's like of going through 5 pains (since, in fact, this subject has gone through 5 pains, whether he remembers going through them or not). Importantly, the what-it's-like-of-going-through-5-pains is just the collection of the past 5 singular pain episodes, not some singular/continuous experience like a feeling of exhaustion or some super intense pain from the synthesis of the intensity of the 5 past pains. It is this what-it's-like that can plausibly be worse than the what-it's-like of going through a major pain. Since there could only be this what-it's-like when there is a single subject who experiences all 5 pains, 5 pains spread across 5 people cannot be worse than a major pain (since, at best, there would only be 5 experientially independent what-it's-likes-of-going-through-1-minor-headache).

My latest reply to Michael_S focuses on the question whether there could be a single subject-of-experience who extends through time, and thus capable of feeling multiple pains.

Comment by jeffhe on Is Effective Altruism fundamentally flawed? · 2018-03-16T22:17:28.080Z · EA · GW

Hi Jonathan,

Thanks for directing me to Scanlon's work. I am adequately familiar with his view on this topic, at least the one that he puts forward in What We Owe to Each Other. There, he tried to put forward an argument, one that respects the separateness of persons, to explain why we should save the greater number in a choice situation like the one involving Bob, Amy and Susie; but his argument has been well refuted by people like Michael Otsuka (2000, 2006).

Regarding your second point, what reason can you give for giving each person less than the maximum equal chance possible (e.g. 50%), aside from wanting to sidestep a conclusion that is worrying to you? Suppose I choose to give Bob, Amy and Susie each a 1% chance of being saved, instead of each a 50% chance of being saved, and I say to them, "Hey, none of you have anything to complain about because I'm technically giving each of you an equal chance, even though, most likely, none of you will be saved." Each of them can reasonably protest that doing so does not treat them with the appropriate level of concern. Say then that I give each of them a 1/3 chance of being saved (as you propose we do) and again I say to them, "Hey, none of you have anything to complain about because I'm technically giving each of you an equal chance." Don't you think they can reasonably protest in the same way until I give them each the maximum equal chance (i.e. 50%)?
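To make the procedure concrete, here is a minimal simulation sketch (the code and names are mine, and it assumes a fair coin flip between the two non-overlapping groups {Bob} and {Amy, Susie} is how the maximum equal chance gets implemented):

```python
import random

def coin_flip_lottery():
    # Flip a fair coin between the two non-overlapping groups:
    # heads -> save Bob; tails -> save Amy and Susie.
    return ["Bob"] if random.random() < 0.5 else ["Amy", "Susie"]

def estimate_chances(lottery, trials=100_000):
    # Estimate each person's chance of being saved by simulation.
    counts = {"Bob": 0, "Amy": 0, "Susie": 0}
    for _ in range(trials):
        for person in lottery():
            counts[person] += 1
    return {person: round(n / trials, 2) for person, n in counts.items()}

print(estimate_chances(coin_flip_lottery))
# -> roughly {'Bob': 0.5, 'Amy': 0.5, 'Susie': 0.5}
```

50% is the maximum equal chance here because Bob is saved exactly when Amy and Susie are not, so his chance and theirs must sum to 1. An equal chance above 50% for everyone is therefore impossible, and any lower equal chance (like 1% or 1/3) needlessly throws chances away.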

Regarding your third point, I don't see how I can divide up the groups differently. They come to me as given. For example, I can't somehow switch Bob and Amy's place such that the choice situation is one of either helping Amy or helping Bob and Susie. How would I do that?