some concerns with classical utilitarianism
post by nil (eFish) · 2020-11-14T09:29:22.544Z · EA · GW · 17 comments
As the majority of (aspiring) effective altruists endorse some kind of utilitarianism, I’m especially keen on understanding if and how the concerns I’ve accumulated from elsewhere about classical utilitarianism (CU) can be resolved. I would be glad if someone points me to resources and discussions where the concerns below have been addressed.
A Brief Disclaimer
I myself endorse (a form of negative) consequentialism and am sympathetic to (algo-hedonistic) utilitarianism in its locating intrinsic moral (dis)value in sentience, and in its emphasis on impartiality and effectiveness. With some modifications, based mostly on some of the concerns I have with CU, I may identify with negative utilitarianism (NU).
I also reviewed drafts of Magnus Vinding’s Suffering-Focused Ethics: Defense and Implications (Vinding, 2020), the recent [EA · GW] book that gave me a lot of material for this post. (I highly recommend the book if you are thinking of checking it out.)
Suggests Calculations Where an Actual Experience Can Be Lost in Abstractions
My general concern with CU is that the language it uses is suggestive of abstractions which, in my view, can give meaningless or at least misleading results if we are not careful. For despite defining intrinsic value in terms of well-being, CU cultivates an intuition where an actual, intense experience happening in a being can be “canceled out” by, or deemed less preferable than, an aggregate of many subtle / trivial feelings.
I’m not arguing against abstractions (definitely not as a programmer ;)), as they are indispensable for making decisions in a changing, complex world. The problem is that some of these abstractions can confuse us into (in)action with highly suboptimal results in the real world (such as extreme suffering).
Also note that, due to the nature of our memory and closed-individualist intuitions, some calculations that (on reflection at least) many would not apply interpersonally (i.e. across individuals) seem much more plausible intrapersonally (if only until we actually suffer and regret). (Cf. Derek Parfit’s “compensation principle” - “One person's burdens cannot be compensated by benefits provided for someone else.” - and Karl Popper’s “from the moral point of view, pain cannot be outweighed by pleasure, and especially not one man’s pain by another man’s pleasure” (emphasis mine).)
The subsections below expand on the general concern of this section.
Aggregates and Tries to Compare Aggregates With an Actual Experience
In determining how good or bad a prescription is, CU aggregates “experiences, lives, or societies” to obtain the total value (and, according to utilitarianism.net, this is a defining feature of utilitarianism in general). Yet such aggregating can lead to notions one may find meaningless, such as comparing one experience to an aggregate of many disconnected ones (“meaningless” because no one experiences the aggregate).
For example, CU seems to deem one euphoric experience worse than N persons eating snacks when N is “sufficiently high”. Can this be justified if we care about an actual experience?
Here’s how psychologist and philosopher Richard Ryder expresses the objection:
One of the important tenets of painism (the name I give to my moral approach) is that we should concentrate upon the individual because it is the individual - not the race, the nation or the species - who does the actual suffering. For this reason, the pains and pleasures of several individuals cannot meaningfully be aggregated, as occurs in utilitarianism and most moral theories. One of the problems with the utilitarian view is that, for example, the sufferings of a gang-rape victim can be justified if the rape gives a greater sum total of pleasure to the rapists. But consciousness, surely, is bounded by the boundaries of the individual. My pain and the pain of others are thus in separate categories; you cannot add or subtract them from each other. They are worlds apart.
Without directly experiencing pains and pleasures they are not really there - we are counting merely their husks. Thus, for example, inflicting 100 units of pain on one individual is, I would argue, far worse than inflicting a single unit of pain on a thousand or a million individuals, even though the total of pain in the latter case is far greater. [bolding mine]
One can even question further whether preventing pain in several persons is always strictly better (morally speaking) than preventing the same felt pain in fewer individuals, other things being equal. For example, philosopher John Taurek argued (Taurek, 1977),
… The numbers, in themselves, simply do not count for me. I think they should not count for any of us.
As he explained, “[f]ive individuals each losing his life does not add up to anyone's experiencing a loss five times greater than the loss suffered by any one of the five,” unlike losing five objects “[b]ecause the five objects are together five times more valuable in my eyes than the one.” But for individuals “what happens to them is of great importance”.
Taurek wrote that
… there is simply no good reason why you should be asked to suffer so that the group may be spared. Suffering is not additive in this way. The discomfort of each of a large number of individuals experiencing a minor headache does not add up to anyone's experiencing a migraine. In such a trade-off situation ... we are to compare your pain or your loss, not to our collective or total pain, whatever exactly that is supposed to be, but to what will be suffered or lost by any given single one of us. [bolding mine]
Or, from his other example, why “focusing on the numbers should move you to sacrifice for [a group] collectively when you have no reason to sacrifice for them individually”?
Taurek’s view may seem extreme, as many may find aggregating too “useful” for making decisions in complex situations. But again, one needs to be careful when e.g. two equal numbers on paper stand for an aggregate of disconnected experiences on one side and fewer but stronger experiences on the other.
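Since the post invokes programming abstractions, the concern can be sketched as a toy calculation (the “pain units” and numbers here are hypothetical, purely for illustration): a naive sum over individuals cannot distinguish many mild experiences from one extreme experience, while a maximum-sufferer view (cf. Ryder and Mendola above) can.

```python
# Toy illustration with hypothetical "pain units": a naive utilitarian sum
# treats 100 individuals each suffering 1 unit as exactly equal to
# 1 individual suffering 100 units - the distribution is erased.
many_mild = [1] * 100   # 100 disconnected, mild experiences
one_extreme = [100]     # a single intense experience

# The aggregate cannot tell the two cases apart:
assert sum(many_mild) == sum(one_extreme)

# A priority view focused on the maximum sufferer distinguishes them:
assert max(one_extreme) > max(many_mild)
```

Nothing in this sketch settles the ethical question, of course; it only makes explicit which information the aggregating abstraction discards.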
“Outweighing” / “Canceling Out” Actual and Expected Suffering
Would a classical utilitarian accept someone’s being tortured in return for a "sufficiently high" number of persons enjoying snacks? Would they still accept the offer while knowing precisely what it is like to be tortured? Or, to set the aggregation concern aside, would a classical utilitarian accept someone’s being tortured while someone else is experiencing bliss of a “greater” absolute value?
Classical utilitarianism holds that we should act so that the world contains the greatest sum total of positive experience over negative experience.
A related (to aggregating) implication of CU is that separate experiences can “outweigh” or “cancel out” each other. This can lead to prescriptions where, for example, extreme suffering (including s-risks) are allowed to happen if the pleasure elsewhere is “sufficiently” high.
My concern is that the notion of “canceling out” obscures the fact that “outweighed” experiences still occur (or are expected to). They cannot be somehow erased from reality. Experiences are not like summands, which can be turned into a meaningful sum: we do not obtain a “net positive” experience (where summand experiences are canceled from history) by causing any amount of happiness “for” suffering.
In the world, all summand experiences would happen; and calculating the “net experience” would remain a potentially misleading abstraction. (We may still make sense of it, most plausibly in some intrapersonal tradeoffs, but I don’t think it can work for beings who cannot reason in such abstractions and make deliberate long-term tradeoffs with the self.) For example, one doesn’t obtain a “net positive” experience by, say, entertaining a sufficiently large audience while torturing a pig (behind the scenes).
Even if one is inclined to bite the bullet here, it is worth at least considering that representing suffering and pleasure on a one-dimensional, linear [EA · GW] axis - and thus making them commensurable on our map - is an abstraction that can break in some real cases. And in ethical frameworks, mistakes are especially unacceptable, as they can have catastrophic consequences, such as (arguably) allowing the creation of happiness (for the untroubled) at the cost of extreme suffering.
Dan Geinster (who argues for a simple dichotomy of hurt and the absence of hurt) objects to the outweighing notion thus:
... any amount of bliss (or enjoyment) cannot “justify”, “outweigh”, or “cancel out” any amount of hurt, as when people say that the joys of this world outweigh its sorrows (which is essentially saying that lesser hurt justifies greater hurt). Sure that bliss can reduce that hurt (such as fun reducing boredom), but [it] is relevant only insofar as it does. For what hurt it doesn’t reduce, still exists – and that’s the sticking point. Indeed, to say that bliss justifies hurt is like saying that the vast emptiness of space somehow outweighs all the suffering on earth … .
Similarly, the Center for Reducing Suffering (CRS) writes,
[The] notion of outweighing is more problematic than is commonly recognized, since it is not obvious in what sense such outweighing is supposed to obtain, nor what justifies it. [emphasis mine]
As CRS notes elsewhere,
The view that the disvalue of many states of mild discomfort can be added up to have greater disvalue than a full day of extreme suffering ... rests on highly non-obvious premises — for example, that the disvalue of different levels of discomfort and suffering can, in principle, be measured along a cardinal scale that has interpersonal validity, and furthermore that these value entities occupy the same dimension (so to speak) on this notional scale. In fact, these premises are “highly controversial and widely rejected” (Knutsson, 2016), and hence they too require elaborate justification.
Philosopher Clark Wolf, in a 2004 paper, touches on the concern from a population-ethical perspective and proposes what he calls the “Misery Principle”:
If people are badly off, suffering, or otherwise remediably miserable, it is not appropriate to address their ill-being by bringing more happy people into the world to counterbalance their disadvantage. We should instead improve the situation of those who are badly off.
The intuition that happiness and suffering elsewhere can outweigh each other may partly come from a common experience of being in a state that has pleasant and aversive components. One side usually dominates the other, making the whole experience (dis)agreeable. It then may be tempting to extrapolate this across different consciousness-moments, despite there being no net experience across these consciousness-moments. As Vinding writes (bringing up also the issues of aggregating and extreme suffering that I introduce in the corresponding sections above and below) (Vinding, 2020, 8.5):
… unlike the case of pleasant components dominating aversive components, there is no straightforward sense in which the happiness of many can outweigh the extreme suffering of a single individual, although we may be tempted to (mis)extrapolate like this from the case of aversive components, their vast dissimilarity from suffering notwithstanding.
While suffering cannot be “outweighed” (in the sense of being undone by happiness elsewhere), it can be prevented and otherwise reduced. I suggest that adding this caveat to CU could make its common interpretations much more plausible.
Philosopher Simon Knutsson gives the following “one-paragraph case” against focusing on bringing many beings into existence at the cost of not preventing extreme suffering (cf. Nick Bostrom’s notion of “astronomical waste”):
There’s ongoing sickening cruelty: violent child pornography, chickens are boiled alive, and so on. We should help these victims and prevent such suffering, rather than focus on ensuring that many individuals come into existence in the future. When spending resources on increasing the number of beings instead of preventing extreme suffering, one is essentially saying to the victims: “I could have helped you, but I didn’t, because I think it’s more important that individuals are brought into existence. Sorry.”
Implicit Commensurability of (Extreme) Suffering
Perhaps the main reason I find CU (as an ethical theory) implausible is that if we assume we can always “aggregate” well- and ill-being, it is possible to allow even extreme suffering to take place (in exchange for preventing N instances of mild pain or for “greater” bliss, for example).
By “extreme/intense suffering” I mean suffering at least so bad that, in the moment one is experiencing it, one deems the suffering irredeemable and impossible to consent to (or, for beings who cannot make such judgment, suffering of a similar felt intensity).
Allowing such suffering is simply that - allowing extreme suffering to occur. Nothing, I submit, “cancels it out”. The tragedy - the suffering - cannot be undone. Even if it is done to prevent greater extreme suffering, it still occurs, i.e. it is not “made up for” or “canceled out” in any way. (Alas, even extreme suffering, while always being intrinsically bad, can be instrumentally good/necessary in this way.) It is an event in the world; it cannot be totalled out like a number.
This is not deontology (at least in my case, though deontology is compatible with the view I present), nor (necessarily) moral realism. It is “suffering realism” (or consciousness realism in general): acknowledging that suffering (like any other phenomenal state) is real - an objective part of the world, even if it is fully present only to a suffering consciousness-moment - and that the badness of suffering, and the forceful badness of extreme suffering, is simply part of what suffering is. As Vinding writes (Vinding, 2020, 5.4),
On my account, this is simply a fact about consciousness: the experience of suffering is inherently bad, and this badness carries normative force — it carries a property of this ought not occur that is all too clear, undeniable even, in the moment one experiences it. We do not derive this property. We experience it directly. [bolding mine]
As a safeguard against extreme suffering in particular, Vinding proposes a “principle of sympathy for intense suffering” (the quote is from Vinding, 2020, 4.1):
[W]e should sympathize with the evaluations of those subjects who experience suffering so intense that they 1) consider it unbearable — i.e. they cannot consent to it even if they try their hardest — and 2) consider it unoutweighable by any positive good, even if only for a brief experience-moment. More precisely, we should minimize the amount of such experience-moments of extreme suffering.
(Ryder may agree with this prioritization, as he continues his criticism of (aggregative) utilitarianism above with “In any situation we should thus concern ourselves primarily with the pain of the individual who is the maximum sufferer.” Philosopher Joseph Mendola may agree too: an "ordinal modification" to CU he proposes implies that our top ethical priority is to “ameliorate the condition of the worst-off moment of phenomenal experience in the world”.)
In identifying ““greater” happiness at the cost of extreme suffering” as a critical liability of CU, I seem to concur with Vinding, who used to be a classical utilitarian and writes (Vinding, 2020, 0.5):
For instance, classical utilitarianism would, in theory, say that we should torture a person if it resulted in “correspondingly greater” happiness for others … . I used to simply shrug this off with the reply that such an act would never be optimal in practice … . Yet this reply is a cop-out, as it does not address the issue that imposing torture for joy would be right in theory. Beyond that, with a small modification to the thought experiment, my cop-out reply is not even true at the practical level, since classical utilitarianism, at least as many people construe it, indeed often would demand that we prioritize increasing future happiness rather than reducing future torment, in effect producing happiness at the price of torturous suffering that could have been prevented. [bolding mine]
One may ask why preventing extreme suffering should be granted the top priority (rather than, say, creating intense bliss or preserving knowledge). The main reason, again, comes from the qualitative nature of extreme suffering: suffering is inherently urgent and problematic, it “cries out for its own abolition” (Mayerfeld); it cannot be ignored (when one is confronted with the suffering directly, i.e. by experiencing it).
As Vinding puts it (Vinding, 2020, 5.5),
… it is a fact about the intrinsic nature of extreme suffering that it carries disvalue and normative force unmatched by anything else. It is not merely a fact about our beliefs about extreme suffering. After all, our higher-order beliefs and preferences can easily fail to ascribe much significance to extreme suffering. And to the extent they do, they are simply wrong: they fail to track the truth of the disvalue intrinsic to extreme suffering. A truth all too evident to those who experience such suffering directly.
Any unproblematic state and “above”, on the other hand, carries “no urgent call for betterment whatsoever, and hence increasing the happiness of those who are not suffering has no urgency in the least” (Vinding, 2020, 1.4). Or consider the following intuition (ibid.):
Being forced to endure torture rather than dreamless sleep, or an otherwise neutral state, would be a tragedy of a fundamentally different kind than being forced to “endure” a neutral state instead of a state of maximal bliss.
Karl Popper contrasted suffering (without specifying intensity) and happiness thus (Popper, 1945, 9):
In my opinion ... human suffering makes a direct moral appeal, namely, the appeal for help, while there is no similar call to increase the happiness of a man who is doing well anyway.
Consequently, he criticized the classical utilitarian exchange of suffering for pleasure (especially in the interpersonal case) (ibid.):
A further criticism of the Utilitarian formula “Maximize pleasure” is that it assumes a continuous pleasure-pain scale which allows us to treat degrees of pain as negative degrees of pleasure. But, from the moral point of view, pain cannot be outweighed by pleasure, and especially not one man’s pain by another man’s pleasure.
(Popper further held that “unavoidable suffering—such as hunger in times of an unavoidable shortage of food—should be distributed as equally as possible.” (ibid.))
Clark Wolf, in an earlier paper (Wolf, 1997) where he develops an alternative to CU called the “Impure Consequentialist Theory of Obligation” (mentioned below in section “Obligations vs the Optional / Supererogatory”), likewise questions the commensurability assumed by CU:
Classical utilitarians assume [...] that pains and pleasures are commensurable so that they can balance one another out in a grand utilitarian aggregate. But it is far from obvious that pains and pleasures are commensurable in this way, and there is good reason to doubt that the twin utilitarian aims [of maximizing happiness and minimizing misery] are even compatible-- at least not without further explanation.
Even if one doesn’t accept the superiority and incommensurability of extreme suffering, one may still agree that reducing it makes the most sense in practice. For extreme suffering is still of huge disvalue, and it is relatively much easier to prevent than to bring about and sustain an “equivalent” happiness. Not least, “reducing unnecessary suffering” would probably gather larger support (and cause less controversy) than promoting happiness. This is partly because many ethical traditions already emphasize compassion as a core value, and because there are more known causes of suffering (and greater agreement on which ones are worth reducing) than known causes of happiness (especially sources that are uncontroversially worth increasing for the happiness itself, rather than for its instrumental value) (Vinding, 2020, 1.5).
On another common objection that we would need to specify a seemingly arbitrary point at which suffering becomes “infinitely worse”, see CRS’s "Clarifying lexical thresholds" and "Lexical views without abrupt breaks", “5.6 Extreme Versus Non-Extreme Suffering” and objection 10 in (Vinding, 2020), and Knutsson’s "Value lexicality".
Not (Only) About Solving Problems
The utilitarian doctrine is, that happiness is desirable, and the only thing desirable, as an end; all other things being only desirable as means to that end.
— John Stuart Mill, Utilitarianism, chap. 4
I think many EAs can relate to (re)defining “ethics as being about solving the world’s problems”. We can thus obtain a “problem vs non-problem” dichotomy - a less assuming (and hence perhaps less controversial) scale than positive-negative ones for measuring how much different states of the world are worth our altruistic investment.
One may insist that an untroubled state that could be an intense pleasure is a problem. One response may be to agree but then to ask how important it is relative to preventing suffering. Given the qualitative and quantitative differences between happiness and suffering, preventing the worst suffering should arguably be given the top priority.
An analogy of “being below and above water respectively” from (Vinding, 2020, 8.5) may be illustrative in this discussion:
… one can say that, in one sense, being 50 meters below water is the opposite of being 50 meters above water. But this does not mean, quite obviously, that a symmetry exists between these respective states in terms of their value and moral significance. Indeed, there is a sense in which it matters much more to have one’s head just above the water surface than it does to get it higher up still.
[Turning rocks into happiness] strikes me as okay, but still utterly useless and therefore immoral if it comes at the opportunity cost of not preventing suffering. The non-creation of happiness is not problematic, for it never results in a problem for anyone (i.e. any consciousness-moment), and so there’s never a problem you can point to in the world; the non-prevention of suffering, on the other hand, results in a problem. [bolding mine]
Philosopher Henry Hiz (who endorsed NU) may have agreed, too:
For ethics, there is only suffering and the effective ways of alleviating it. [emphasis mine]
Relatedly, cognitive scientist Stevan Harnad argues that pain and pleasure, like hot and cold sensations, are incommensurable, and thus:
… it is the partial coupling of pleasure with pain (because pleasure reduction or deprivation can also feel painful) that makes pleasure matter morally at all. For in a unipolar hedonic world with only pleasure and no pain (hence no regret or disappointment or discomfort if deprived of pleasure) there would be no welfare problems … . [bolding mine]
Defining ethics in terms of problems is also supported by the antifrustrationist axiology (i.e. value theory), which finds value in keeping the number of frustrated preferences as low as possible rather than in creating satisfied extra preferences. Peter Singer used to argue for a similar view:
The creation of preferences which we then satisfy gains us nothing. We can think of the creation of the unsatisfied preferences as putting a debit in the moral ledger which satisfying them merely cancels out.
Philosopher Johann Frick likewise proposes a view “where any reasons to confer well-being on a person are conditional on the fact of her existence.” He makes an analogy between making a promise and creating a new sentient being:
… Most of us believe that we have a moral reason not to make a promise that we won’t be able to keep. (Compare: we have a moral reason not to create a life that will unavoidably be not worth living). By contrast, we do not think that we have a reason to make a promise just because we will be able to keep it. (Compare: we do not have a moral reason to create a new life just because that life will be worth living).
(Frick calls this view that well-being has conditional value “the conditional wide-person-affecting view”, as a reference to Parfit’s “wide-person-affecting view”. He contrasts it with “a teleological conception of well-being, as something to be ‘promoted’”. This latter view, he argues, “has prevented philosophers from developing a theory that gives a satisfactory account of both” the procreation asymmetry mentioned next and Parfit’s “non-identity problem”.)
“The procreation asymmetry” is a notion in population ethics, expressed by philosopher Jan Narveson as, “We are in favor of making people happy, but neutral about making happy people.” CU’s positive principle to maximize happiness can view both of the strategies - “making people happy” and “making happy people” - as equally valid. Wolf again (Wolf, 1997):
If happiness is good, and more of it is better, then the positive principle seems to tell us that it would be better to have more well-off people around so that their happiness could contribute to a larger utilitarian aggregate.
This is concerning because increasing the number of beings risks bringing miserable beings into existence, and because this would divert resources from helping existing beings. (And however many happy beings this would create, the suffering of others would not be canceled out from the world in any meaningful sense or prevented: see the section on “outweighing” above.)
One major reason why we often assume positive counterparts to “negatives” in ethics may simply come from our language. As Knutsson notes,
… the phrases ‘negative well-being’ and ‘negative experiences’ are unfortunate because if something is negative, it sounds as if there is a positive counterpart. Better names may be ‘problematic moments in life’ and ‘problematic experiences’, because unproblematic, which seems to be the opposite of problematic, does not imply positive.
(Note that one needn’t hold that experiences with a positive hedonic tone have no intrinsic value (or that there are no such experiences to begin with) to agree with the “problem vs non-problem” dichotomy.)
In general, if we understand ethics in the proposed way, CU’s pursuit of maximizing (“aggregated”) happiness “minus” suffering is, again, concerning: while many, I think, would agree that suffering is in itself a problem, a neutral / tranquil state that could have been intensely pleasurable is not. For an untroubled state can only (if at all) be deemed (intrinsically) problematic by an outside observer (who, perhaps ironically, may themselves be driven by a dissatisfied feeling created by that seeming absence of (“greater”) pleasure in the other). Suffering, in contrast, is intrinsically problematic.
Obligations vs the Optional / Supererogatory
Prescribing “maximizing happiness” (with or without an explicit “minus suffering”), CU does not distinguish between the obligatory and the supererogatory (i.e. nice-to-haves) - one must “maximize happiness”, full stop.
CU thus implies that one is required to create happiness at the cost of suffering when the happiness is great enough, and when it cannot be created without the suffering. This, for example, conflicts with Parfit’s compensation principle, as (Wolf, 1997):
The requirement that one person's burdens cannot be compensated by benefits to another person implies that the obligation to minimize misery is lexically prior to the correlative obligation to maximize well-being.
A modification to utilitarianism could be proposed, however, that assigns top priority to reducing suffering, of the worst kind first. (The creation of happiness would either remain an obligation, just of lower priority, or become a supererogation, for example.) Perhaps an ethic of this kind could serve as a compromise between classical utilitarians and those who think that creating happiness (for the already well-off) at the cost of suffering is unacceptable.
Wolf’s “Impure Consequentialist Theory of Obligation” would be one example of a utilitarian ethic that distinguishes obligations and supererogatory acts. It splits CU into the two principles explicitly, and only one of them defines an obligation:
1. Negative Principle of Obligation [NPO]: We have a prima facie obligation to minimize misery.
2. Positive Principle of Beneficence [PPB]: Actions are good if they increase well-being. Actions are better or less-good depending on the "amount" of well-being in which they result.
(The theory is impure consequentialist because “it allows that people do not always have an obligation to do what will result in the best outcome, and since it leaves room for actions that are supererogatory.”)
"Maximizing Happiness" Is Insatiable
CU’s maxim to "maximize happiness" is insatiable (at least in theory, for we don’t know boundaries on bliss and of the universe), especially if we think we should maximize happiness beyond existing beings. Some may consider this a positive feature. Some may find it overdemanding. Some may just accept it as how the world is.
Some may find it questionable in the first place that we are required to maximize happiness beyond existing beings. For, again, what is the source of urgency of this pursuit, or who feels a deprivation when a happy being is not brought into existence?
Some, like Wolf, distinguish the two utilitarian maxims - to “maximize happiness” and to “minimize misery” - and contrast them (Wolf, 1997):
There are two important, and seldom noticed differences between these twin utilitarian commands. First, the positive utilitarian imperative to "maximize happiness" is insatiable, while the negative utilitarian command to "minimize misery" is satiable: no matter how much happiness we have, the positive principle tells us that more would always be better. But the negative principle ceases to generate any obligations once a determinate but demanding goal has been reached: if misery could be eliminated, no further obligation would be implied by the negative principle, even if it were possible to provide people (or non-human 'persons') with additional bliss.
Although the insatiability objection may not sound compelling on its own, I think it can be useful to contrast the unconditional maximization with the less assuming and arguably more plausible goal of ensuring that no sentient being suffers. In light of the arguments from this post and elsewhere (Vinding, 2020), I find it highly implausible that the maximization dictum has the same priority and urgency as addressing the suffering of existing and future beings. Is CU so attractive that we are willing to accept problems on top of problems whose problematicity is not even a choice (and, again, to give them the same priority as helping the worst-off)?
Karl Popper, for example, thought that “[i]t adds to clarity ... of ethics if we formulate our demands negatively, i.e. if we demand the elimination of suffering rather than the promotion of happiness”. He even cautioned that “the greatest happiness principle can easily be made an excuse for a benevolent dictatorship” (and suggested that “[w]e should replace it by a more modest and more realistic principle — the principle that the fight against avoidable misery should be a recognized aim of public policy, while the increase of happiness should be left, in the main, to private initiative”).
One may also counter the common objection that strictly-suffering-focused views are depressing or bleak with the following: on negative views, most of the world - the inanimate - is already in an optimal state, while CU views it as a lost opportunity (and, further, views going from 0 to “+10” bliss as intrinsically valuable as going from “-10” suffering to a neutral state).
The view that the prescription to minimize suffering is completable, however, may only apply in theory. For how could we be sure that suffering, once abolished, will never re-emerge, in the however distant future? Given that we may never know the future with absolute certainty, I don’t see how the risk of suffering re-emerging could ever be fully eliminated. Vinding expresses such a view, for example (Vinding, 2020, 13.3):
… if suffering warrants special moral concern, the truth is that we should never forget about its existence. For even if we had abolished suffering throughout the living world, there would still be a risk that it might reemerge, and this risk would always be worth reducing.
We can be sure that this risk would always be there for solid theoretical reasons: the world is just far too complex for us to predict future outcomes with great precision, especially at the level of complex social systems, and hence we can never be absolutely certain that suffering will not reemerge. Thus, we should never lose sight of this risk.
Granted this risk of suffering re-emerging, strict suffering minimizers may deem instrumentally valuable some initiatives that they otherwise would find wasteful (if not unethical, given the lost opportunity to reduce suffering): the value would come from converting parts of the world to less suffering-prone states.
Speculatively, in a hypothetical future where suffering is abolished, strict suffering minimizers (then risk minimizers) may agree with other value systems on a common goal where happiness (and perhaps some other purported intrinsic goods) are maximized, with the constraint that creating any purported good at the cost of suffering is not allowed. (Increasing happiness in such scenarios may be seen by the risk minimizers as keeping matter and energy in a state free from suffering.)
With its simple principles and axiology and its impartiality, CU understandably appeals to many EAs.
Alas, with CU it is easy to end up optimizing for something other than avoiding intense suffering and creating a sustainable bliss for all (e.g. for abstract aggregates that correspond to no actual experience). This is despite CU’s explicit ultimate concern with happiness and suffering.
One may also question CU’s implication that untroubled states, which could instead contain (greater) happiness, are inherently problematic, despite their not being a problem for anyone. (And one may find less defensible still the claim that such victimless problems have the same level of urgency as does reducing ongoing and future suffering.)
And, what I find most concerning about this ethic, some of its common interpretations allow extreme suffering when it is believed to be “outweighed” by happiness elsewhere.
At least so it appears to me from the objections I presented in the post.
If some of the concerns I tried to present are misguided, I hope it can be worth someone’s time to comment with relevant pointers and points.
If at least some of the concerns are viable, the post, I hope, will spark productive conversations, which, ideally, will clarify our thinking on ethical decision making.
Many thanks to Max Maxwell Brian Carpendale [EA · GW], Sasha Cooper [EA · GW], Michael St. Jules [EA · GW], Magnus Vinding, and anonymous contributors for useful comments and suggestions for a draft of the post.
Because I refer generally to modern readings of CU (as opposed to exact views of classical utilitarians like Jeremy Bentham, John Stuart Mill, and Henry Sidgwick), I would rather call the view I critique “common utilitarianism” or “a common version of CU”. Nevertheless, in this post I use the conventional name. ↩︎
Perhaps unfortunately, the disvalue of ill-being is often only implicitly assumed. ↩︎
Extreme suffering that doesn’t prevent worse suffering, that is. ↩︎
As Magnus Vinding writes (Vinding, 2020, 3.2),
For although it is tempting to conclude that the distinction between intrapersonal and interpersonal tradeoffs must collapse under reductionist views of personal identity, such views can, in fact, still make some sense of our common-sense notions of personal identity — e.g. as relating to particular streams of consciousness-moments — and thus still allow us to deem certain tradeoffs permissible across one set of consciousness-moments, yet not across others.
Taurek illustrated his point about a “group’s pain” with a person from the group asking an outsider who would suffer to instead “consider carefully,”
... "not, of course, what I personally will have to suffer. None of us is thinking of himself here! But contemplate, if you will, what we the group, will suffer. Think of the awful sum of pain that is in the balance here! There are so very many more of us."
“At best,” Taurek concluded, “such thinking seems confused.” ↩︎
Does the view presented by Taurek negate the notion of effective altruism? I think it doesn’t, but it does make EA harder, as it laser-focuses us on finding sustainable systematic solutions: solutions to root causes that prevent the problems stemming from them, for all. For example, while in this view it is equally bad when one being is suffering and when there are many such beings, there are solutions sparing all that an EA could contribute to.
In a similar vein, Taurek’s view implies that s-risks are perhaps more likely, as they would then be defined only by the intensity of suffering and by lock-in scenarios, irrespective of the number of sufferers. ↩︎
At least we don’t have evidence for this. ↩︎
Alas, to know exactly what it is like to be tortured, one would need to experience being tortured. ↩︎
Arthur Schopenhauer likewise wrote (Schopenhauer, 1844, vol II, p. 576) that “a thousand had lived in happiness and pleasure would never do away with the anguish and death-agony of a single one”, and that his “present well-being” could not “undo his previous sufferings [emphasis mine]”. In Schopenhauer’s view,
… it is quite superfluous to dispute whether there is more good or evil in the world; for the mere existence of evil decides the matter, since evil can never be wiped off, and consequently can never be balanced, by the good that exists along with or after it. [emphasis mine]
Similarly, there’s no net temperature of separate volumes of water. (Mixing water in this analogy would be analogous to combining experiences in one consciousness-moment.) ↩︎
There are many arguments for the implausibility of such a model besides the argument that “outweighing” suffering does not map to reality as the language may suggest. They are elaborated on, for example, in the first part of (Vinding, 2020) and by philosopher Jamie Mayerfeld in The Moral Asymmetry of Happiness and Suffering and Suffering and Moral Responsibility. To outline, some of these arguments are “that suffering carries a moral urgency that renders its reduction qualitatively more important than increasing happiness (for those already well-off); that the presence of suffering is bad, or problematic, in a way the absence of happiness is not; that experiences are primarily valuable to the extent they are absent of suffering; ... and that we should sympathize with and prioritize those who experience the worst forms of suffering.” (Vinding, 2020, 5.5) ↩︎
Vinding likewise advises great caution when our ethical priorities rest on this simple model (Vinding, 2020, 8.5):
It may, of course, seem intuitive to assume that some kind of symmetry must obtain, and to superimpose a certain interval of the real numbers onto the range of happiness and suffering we can experience — from minus ten to plus ten, say. Yet we have to be extremely cautious about such naively intuitive moves of conceptualization. … [I]t is especially true when our ethical priorities hinge on these conceptual models; when they can determine, for instance, whether we find it acceptable to allow astronomical amounts of suffering to occur in order to create “even greater” amounts of happiness.
This definition of extreme suffering is based on Vinding’s formulations in (Vinding, 2020), especially in chapter 4. ↩︎
CRS notes that while “suffering-focused views do tend to hold that ... suffering can be deemed worse, and hence more deserving of priority, than ... suffering [elsewhere]”, these views - “and strong negative consequentialist views more generally” - do not invoke “any outweighing in the sense of thinking that suffering, including extreme suffering in particular, can be “cancelled out” or “made up for” by different states elsewhere.” ↩︎
At least in theory. ↩︎
Cf. philosopher Seana Shiffrin’s writing:
There is a substantial asymmetry between the moral significance of harm delivered to avoid substantial, greater harms and harms delivered to bestow pure benefits [(i.e. a benefit which would not cause harm if omitted)].
It can be said that in terms of traditional terminology, “suffering realism” is only an ontological position, not an evaluative or moral one. I don’t see a problem here, as our priorities directly follow from what suffering is (at least when we are confronted with it directly). Saying suffering is “bad” is redundant, for its badness is in the nature of the experience itself. No “evaluation” is needed, for the “badness”, the “moral” force of suffering is inescapable. It is inescapably and inherently problematic, unlike any purported intrinsic problem.
For better or worse, suffering “just” is (suffering). (The same applies, mutatis mutandis, for everything else in existence, of course.)
One may or may not call this moral realism, but I think this would be mostly a matter of terminological preference. ↩︎
Suffering is inherently problematic also in a sense that, as noted in the next section and as Vinding writes (Vinding, 2020, 8.5):
… the wrongness and problematic nature of suffering is manifest from the inside, inherent to the experience of suffering itself rather than superimposed, whereas the notion that there is something (similarly) wrong and problematic about states absent of suffering must be imposed from the outside. Suffering and happiness are qualitatively different in these regards, whether intense or not.
Manu Herrán gives the following empty/open-individualist intuition on prioritizing reducing the worst suffering:
If all sentient beings were a single being and I were that being, other things being equal, I’d improve my situation starting from fixing my most intense suffering.
As for knowledge, I don’t see how one would defend pursuing non-instrumental knowledge at the cost of not preventing extreme suffering (assuming one argues that knowledge is intrinsically valuable in the first place). ↩︎
See also Knutsson’s Many-valued Logic and Sequence Arguments in Value Theory on how one may address the sequence/continuum/spectrum argument using many-valued logic. ↩︎
Some of which are mentioned earlier in the post, but more are discussed in (Vinding, 2020, part I). ↩︎
A preference utilitarian in the past, Singer appears to have since shifted to hedonistic utilitarianism. Already in the same 1980 article, he goes on to write:
Given that people exist and wish to go on existing, Preference Utilitarians have grounds for seeking to satisfy their wishes, but they cannot say that the universe would have been a worse place if we had never come into existence at all. On the other hand Classical Utilitarians can say this, if they believe our existence has on the whole been happy rather than miserable. That, perhaps, is a reason for seeking to combine the two views. [bolding mine]
According to Frick’s “conditional wide-person-affecting view”,
if you are going to pick either [a Good life of person B] or [a Great life of person C], you ought to pick Great, because this benefits people more. You thereby achieve more of what you have reason to want for person C’s sake, conditional on her existence, than what you would have reason to want for person B’s sake, conditional on his existence. But, at the same time, there is no moral reason to create a new “sake” for which we have reason to do things. In a three-way choice between Great, Good, and Nobody, there is nothing wrong with choosing Nobody.
Frick argues that this view allows one to uphold both “the Non-Identity Intuition” that one has “a strong moral reason not to choose Good over Great” (in a two-way choice) and the procreation asymmetry. ↩︎
This population-ethical view is reflected in antifrustrationism, the “problem vs non-problem” dichotomy, some forms of antinatalism, and, in general, traditions and views that find non-existence unproblematic (at least intrinsically, for one can exist to prevent worse suffering than one causes). ↩︎ ↩︎
This point was made by David Pearce in personal communication with Vinding (Vinding, 2020, 1.4). ↩︎
For comparison, the “problem vs non-problem” dichotomy from the previous section doesn’t define supererogatory acts either. Weak NU, say, would still require minimizing suffering (including risks of future suffering), while treating the increase of happiness as less important, but still important, or as supererogatory.
One may also argue that there is a requirement to maximize well-being, but it is conditional on the existence of a person to whom it accrues. Johann Frick, for example, in the paper cited in the text argues that “there is no unconditional moral reason to confer benefits on people by creating them” (Frick, 2020, 9). ↩︎
Similarly but from an “existential risk” perspective, Pearce responds elsewhere:
… a thoroughgoing [classical] utilitarian is obliged to convert your matter and energy into pure utilitronium, erasing you, your memories and indeed human civilisation. By contrast, the negative utilitarian believes that all our ethical duties will have been discharged when we have phased out suffering. Thus a negative utilitarian can support creating a posthuman civilisation animated by gradients of intelligent bliss … . By contrast, the classical utilitarian is obliged to erase such a rich posthuman civilisation with a utilitronium shockwave.
“Similarly,” Popper continued, “it is helpful to formulate the task of scientific method as the elimination of false theories (from the various theories tentatively proffered) rather than the attainment of established truths.” ↩︎
Or more generally, Popper wrote (Popper, 1945):
Instead of the greatest happiness for the greatest number, one should demand, more modestly, the least amount of avoidable suffering for all ....
For an extended exploration of the objection, see “Are Anti-Hurt Views Bleak?” section in (Vinding, 2020, 2). ↩︎