Asymmetric altruism

post by Stijn · 2020-06-27T17:49:06.082Z · score: 5 (6 votes) · EA · GW · 3 comments

Contents

  Is veganism altruistic for animals?
  Is saving the future altruistic?
  Positive versus negative altruism

In effective altruism, we have to prioritize the most effective ways to do good. But different notions of altruism influence our prioritization. Altruism has to do with helping others. The tricky questions are: helping whom, exactly? And what counts as helping? I will argue that we have to distinguish between positive and negative altruism, and that this distinction becomes important in effective altruist prioritization.

To start, consider a person who is about to undergo a surgical operation. At time 1, before the operation, the person is fully conscious and has mental state P1. We can choose between two options, A and B. At time 2, during the operation, the person either has anesthesia (option A) or not (option B). This can be described with two possible worlds. In world A, where we choose the anesthesia, the anesthetized person is unconscious, having an empty mental state P2A=0 (i.e. no subjective experiences and preferences). In world B, the patient does not get the anesthesia and will be in extreme agony, with mental state P2B. Altruistically speaking, it is better to choose option A, because this helps the patient. Giving the anesthesia is something the patient wants.

In the case of the surgical operation, it is clear who is being helped. We can consider the mental states P1, P2A and P2B as belonging to the same person, because those mental states are related to each other. In particular, the person at time 1 with mental state P1 is concerned about his/her own future and hence identifies him/herself with the future mental states P2A and P2B. Similarly, the person with mental state P2B can acknowledge that he/she is the same person as P1 as well as P2A. P2A is basically P2B’s alter ego in the other possible world. A slightly trickier issue arises when we consider P2A, who is unconscious and hence unable to feel a personal identity with either P1 or P2B. P2A has no beliefs, and hence no belief that he/she is the same person as P1. Still, given the beliefs of P1 and P2B, we can consider P1, P2A and P2B as the same person, and the anesthesia helps that person.

Is veganism altruistic for animals?

Giving anesthesia to the patient is a clear example of altruism: it helps the other. But what about veganism? Animal farming causes animal suffering: almost all farm animals have very negative experiences. We can avoid this suffering by eating vegan. But that means those farm animals would not be born and hence not exist.

Consider at time 1 a bunch of atoms and molecules floating around. This group of molecules has an empty mental state P1=0. Then we have a choice to eat vegan (option A) or eat meat (option B). Option A means those atoms will keep floating around, having again an empty mental state P2A=0. Only in option B will those atoms rearrange themselves to create a mental state P2B in an animal brain. P2B has unwanted negative experiences.

If we choose option A, are we helping the animal? Which animal? The animal does not exist in option A: the mental state P2A was empty. P1 is also an empty mental state, which means no identification with either P2A or P2B. And it is very unlikely that animal P2B can identify him/herself with the non-existing animals (i.e. the bunch of molecules) P1 and P2A. Hence, P1, P2A and P2B cannot be considered the same person. So, are we really helping an animal when we choose a situation where the animal does not exist?

Is saving the future altruistic?

Next, we can consider existential risks: situations that lead to the extinction of intelligent or sentient life. At time 1, future generations are not born yet, and hence they can be represented by a bunch of atoms having empty mental states P1=0. Then we can choose between two options. Either we do not avoid the existential catastrophe (option A), which means those atoms will have a future empty state P2A=0. Or we prevent the extinction (option B), which means those atoms will rearrange themselves and future people will be born, having mental states P2B.

If we choose option B, are we helping those future people? Yes, because those people will exist in world B. But if we choose option A, are we harming those people? No, because those people will never exist in world A.

Positive versus negative altruism

It is time to consider two kinds of altruism. Positive altruism means choosing what someone else wants. Negative altruism, on the other hand, means not choosing what someone else does not want. This parallels the two versions of the golden rule: “Treat others in ways that you want to be treated,” versus “Do not treat others in ways that you do not want to be treated.”

By choosing the anesthesia, we are altruistic in both positive and negative senses. We choose what person P1 wants (the anesthesia), and we do not choose what person P2B does not want (the suffering). By choosing veganism, we are only being negatively altruistic: we do not choose what person P2B does not want. And by choosing to avoid the existential risk, we are only being positively altruistic: we choose what people with mental states P2B want.

When we have to prioritize between different ways to do good, the question is whether double altruism (i.e. both positive and negative altruism) is more valuable than single altruism, and whether single positive altruism is more valuable than single negative altruism. How can we tell which is most important?

It can be argued that double altruism is twice as good as single altruism, in the sense that double altruism takes into account the preferences of two mental states P1 and P2B, whereas single altruism only considers P2B. Hence, when choosing between double and single altruism, double altruism can be prioritized (all else equal, hence assuming the preferences or wants are equally strong in the different situations).
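The counting argument above can be sketched as a toy scoring model. All names, weights, and the scoring rule are illustrative assumptions for this post's three scenarios, not part of the original argument; the point is only that with equal weights, double altruism scores twice as much as either single form:

```python
# Toy model: each choice either satisfies a want (positive altruism)
# and/or avoids a frustrated want (negative altruism).

def altruism_score(positive, negative, w_pos=1.0, w_neg=1.0):
    """Count the forms of altruism a choice realizes, with optional weights.

    With equal weights, 'double' altruism (both forms) scores twice
    as much as a single form, matching the argument in the text.
    """
    return w_pos * positive + w_neg * negative

# The three scenarios from the post: (positive altruism?, negative altruism?)
scenarios = {
    "anesthesia":        (True,  True),   # P1 wants it; P2B's suffering avoided
    "veganism":          (False, True),   # no prior want satisfied; suffering avoided
    "x-risk prevention": (True,  False),  # future P2B's want satisfied; no suffering avoided
}

for name, (pos, neg) in scenarios.items():
    print(name, altruism_score(pos, neg))
```

Setting `w_neg` above `w_pos` would encode the author's tentative view that negative altruism weighs more, in which case veganism outscores x-risk prevention in this toy model.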

But suppose we have to choose between single positive and single negative altruism. For example: should we prioritize veganism or safeguarding the future (assuming that an equal number of animals and potential future beings are involved, with equally strong preferences for option B)? We see many asymmetries in ethics (e.g. killing someone is worse than not saving someone, and causing the existence of someone who constantly suffers is always bad, whereas causing the existence of someone who is always happy is not always good). Some asymmetries can be defended (see e.g. here), and I tend to believe that negative altruism is more valuable than positive altruism. If negative altruism is considered very important, then veganism becomes more important.

In theory, we can solve this issue by being altruistic: let the others decide. In particular, ask the farm animals and the future generations whether they prioritize negative altruism above positive altruism. But that is of course infeasible. How to weigh positive versus negative altruism is a question I will leave for further investigation.

3 comments


comment by Thomas Kwa (tkwa) · 2020-06-27T18:18:03.872Z · score: 1 (1 votes) · EA(p) · GW(p)

Have you heard the 80000 Hours podcast episode with Will MacAskill? The first hour has a decent exploration of asymmetries and similar deontological concerns, and MacAskill's paralysis argument is a fairly good argument against them.

comment by Thomas Kwa (tkwa) · 2020-06-27T19:32:27.807Z · score: 1 (1 votes) · EA(p) · GW(p)

I notice that I meant to link to this different episode on the non-identity problem but found it didn't really fit and rationalized that away, so my comment may not be relevant.

comment by antimonyanthony · 2020-06-27T18:52:04.151Z · score: 1 (1 votes) · EA(p) · GW(p)

Asymmetries need not be deontological; they could be axiological. A pure consequentialist could maintain that negative experiences are lexically worse than absence of good experiences, all else equal (in particular, controlling for the effects of good experiences on the prevalence of negative experiences). This is controversial, to be sure, but not inconsistent with consequentialism and hence not vulnerable to Will's argument.