Minimalist axiologies and positive lives

post by Teo Ajantaival · 2021-11-13T10:57:04.391Z · EA · GW · 12 comments

Contents

  Summary
  1. What is axiology?
  2. What are minimalist axiologies?
    The less this, the better
    Contents versus roles
  3. How do these views help us make sense of population ethics?
    The Mere-Addition Paradox
    The Repugnant Conclusion
    The Very Repugnant Conclusion
    Solving problems: A way to make sense of population ethics?
  4. What are we comparing when we make the assumption of “all else being equal”?
    Isolated value-containers
    Counterintuitive boundaries
  5. What do these views imply in practice?
    Naive versus sophisticated minimalism
    Compatibility with everyday intuitions
    Preventing instead of outweighing hell
    Self-contained versus relational flourishing
  6. Without the concept of intrinsic positive value, how can life be worth living?
    A more complete view
  What additional questions do you have about these views?
  Acknowledgments
  References

This is part two of a series on minimalist axiologies. Every part of this series builds on the previous parts, but can also be read independently.

Summary

In order to maximize our positive impact, we first need to clarify what would constitute a change in the right direction. To guide us, we need an axiology, i.e. a theory of independent value, also known as intrinsic value.

For example, many axiologies hold it independently valuable both to promote bliss and to avoid agony. Yet these do not always represent a coherent twin-principle similar to “Head North, Avoid South”. When multiple guiding principles point in different directions, we need to define acceptable tradeoff ratios (or “priority weights”) between them. This is hard to do in an intuitively agreeable way, which is arguably one of the main reasons why people often feel conflicted about accepting certain implications in the field of population axiology.

Minimalist axiologies refer to a class of axiologies whose central conception of independent value is of the kind that says “The less this, the better.” In other words, their fundamental standard of value is only about the avoidance of something, and not about the maximization of something else. This essay looks at minimalist axiologies that are impartial and welfarist (i.e. concerned with the welfare of all sentient beings), with a focus on their theoretical and practical implications. For example, these views reject the Very Repugnant Conclusion, which is implied by many other axiologies in population theory.

Minimalist axiologies are arguably neglected in population theory due to their (apparent) implication that no life could be axiologically positive. Yet we should remember that the standard theoretical assumption of “all else being equal” (i.e. of causal isolation) is practically always false, enabling lives to make a positive difference [EA · GW] for other beings. By assuming that lives are at best subjectively perfect but never helpful, the field of population theory is basically excluding the possibility of positive lives on these views.

In other words, minimalist axiologies can support a relational notion of positive lives, which is ignored by standard population theory where lives are treated as “isolated value-containers” with no interaction. This “isolated view” of the value of lives is plausibly a major cause of axiological disagreement and obfuscation, provided that our moral intuitions are adapted to track not only the “contents” of individual lives, but also their overall positive and negative roles. A more complete view would recognize the fact that the roles of a life can, and probably often do, end up being far more significant than its “contents”. And so minimalist axiologies are compatible with saying that a life can be very positive in terms of its overall value.

1. What is axiology?

Axiology is the philosophical study of value. The field of axiology is specifically concerned with the question of what things, if any, have independent value, also known as intrinsic value. ‘Axiologies’ in the plural refer to specific views on this axiological question. Once we assume a specific axiology that ascribes independent value to certain entities or states, we may see the value of all other things as extrinsic, instrumental, or relational in terms of their effects on these entities or states.

This distinction applies at the level of our assumed axiology, and not necessarily at the level of our everyday perception: we may both formally deny that something has independent value, and also be right to practically feel that it does have value — without explicitly “unpacking” what this value depends on — such as when we treat some widely-held values as valid heuristics [? · GW] until they run into conflicts with each other. (More on this in the section on practical implications [EA · GW], and in a future post on multi-level minimalism.)

Commonly, we seek clarity about the nature of independent value by listening to what our supposedly value-tracking intuitions say about certain thought experiments. For example, we construct thought experiments where only a single thing is intended to be changing, “all else being equal”, and ask whether it feels true that this change is accompanied by a change in value. (More on this in the section on the “all else being equal” assumption [EA · GW].)

Based on such thought experiments of isolated value, we might feel that more blissful mind-moments mean more value, and that more agonized mind-moments mean more disvalue, and thereby come to follow two independent standards of value: “The more bliss, the better, all else being equal” (BliMax), and “The less agony, the better, all else being equal” (AgoMin).

Dilemmas famously arise when we want to follow both BliMax and AgoMin, as they are not always perfectly anticorrelated. That is, we often cannot both “maximize bliss” and “minimize agony”, because even as these two guiding principles may seem to be polar opposites of each other, they do not always constitute a coherent twin-principle similar to “Head North, Avoid South”. The field of population axiology has highlighted many ways in which BliMax and AgoMin pull us in mutually incompatible directions, as well as the lack of consensus on how to compare the supposed independent value of bliss with the supposed independent disvalue of agony. (More on this in the section on population ethics [EA · GW].)

2. What are minimalist axiologies?

The less this, the better

Minimalist axiologies may be a suitable name for the class of axiologies whose central conception of independent value is of the kind that says “The less this, the better.” In other words, their fundamental standard of value is about the avoidance of something, and not about the maximization of something else. To list a few examples, minimalist axiologies may be formulated in terms of avoiding cravings (tranquilism seen as a welfarist monism; certain Buddhist axiologies); disturbances (Epicureanism); pain or suffering (Schopenhauer; Richard Ryder); frustrated preferences (antifrustrationism); or unmet needs (some interpretations of care ethics).

This essay looks at minimalist axiologies that are interpreted as monist, impartial, and welfarist — i.e. concerned with the welfare of all sentient beings — and is not meant to apply to axiologies that are pluralist, partial, or concerned with non-welfarist avoidance goals, such as minimizing human intervention in nature, or avoiding the loss of unique information. (Pluralism can still be introduced at the level of practical decision procedures. More on this in a future post on multi-level minimalism.)

In “sacred value tradeoffs” between multiple (seemingly) independent standards of value, minimalist axiologies avoid the problem of having to determine independent “priority weights” for different intrinsic values in order to resolve their mutual conflict — such as the conflict between creating bliss for many at the cost of agony for others. In other words, instead of using multiple standards of value, such as both BliMax and AgoMin, minimalist axiologies construe “positive value” in a purely relational way, i.e. with regard to their overall avoidance goal for all beings. This enables value comparisons to be made under a shared standard of value.

When we look at only one kind of change, all else being equal [EA · GW], it may seem intuitive that bliss is independently good and agony is independently bad. Yet many people may also feel internally conflicted about tradeoffs where value and disvalue would need to be compared with each other (so that we could say whether some tradeoff between them is “net positive” or not). To solve these dilemmas, minimalist axiologies would respect promotion intuitions to the degree that they are conducive to the overall avoidance goal, but draw a line before agreeing to create more (isolated) value for some at the cost of disvalue for others.

For example, suffering-focused minimalism would promote happiness in the place of suffering, but not at the cost of suffering, all else being equal (cf. Vinding, 2020b, Chapter 3). To illustrate, let us compare two different ways to weigh the pros and cons of the factory farming of sentient beings:

  1. To the extent that this process entails a lot of suffering, some might argue that this process is still “worthwhile” due to the happiness that it also entails or promotes in others — i.e. happiness at the cost of suffering.
  2. By contrast, suffering-focused minimalism would reject the implicit pluralism of justifying suffering with happiness. Instead, it would “compare suffering with suffering” by looking at relations like, “What is this happiness needed for?” or “Does this happiness help prevent worse suffering than what it costs?” — i.e. happiness in the place of suffering, or happiness as a way [EA · GW] to prevent worse suffering.

In other words, minimalist axiologies sidestep the problem of having to find acceptable “tradeoff ratios” between different independent values. These views reject the “board game-like” logic of placing different amounts of positive value on different kinds of things, which can be replaced with a focus on the relational question of how the objects of our promotion intuitions could help with the overall avoidance goal.

A common misunderstanding of this “denial of positive value” relates to the mismatch between abstract population theory, where we are dealing with causally isolated lives subject to an assumption of “all else being equal”, and the real world, where lives can be relationally positive, even on minimalist axiologies, precisely because they can make a positive difference for the lives of others. (More on this in the section on the “all else being equal” assumption [EA · GW].)

Contents versus roles

Our practical intuitions about the value and worthwhileness of lives are arguably correct in saying that something crucial is lacking if we naively translate our abstract population theory directly into practice and think that “no life could be positive”. After all, as soon as we break the artificial boundaries of individual lives as “isolated value-containers” — which they are usually treated as in population ethics — we return to the practical world where lives virtually always have significant effects on other lives. Western culture may condition us to think of individuals [EA · GW] as “independent, self-contained, autonomous entities”, and to pay insufficient attention to a more relational perspective. Yet for an analysis of the overall value of lives to reach any kind of practical relevance, we need to recognize that the concept of an “independent individual” is a blind spot to be filled by an account of the interactions between lives. (More on this in the section on practical implications [EA · GW].)

Specifically, our intuitions about value arguably evolved in a fundamentally interpersonal world where we intuitively account not only for the “contents” of individual lives, but also for their roles in other lives. Therefore, when we object to the idea that “no life could be positive, all else being equal”, this need not stem from the sentiment that “Surely lives have at least some isolated positive value”. We might just as plausibly be objecting to the highly unrealistic assumption of “all else being equal [EA · GW]”, which effectively strips these lives of all their positive effects on anyone, and thereby leaves us only with what these lives subjectively “contain” in the absence of all their (often highly significant) positive roles [EA · GW].

As social animals, some of us may think that positive value is fundamentally not something that we “have”, “contain”, or “accumulate” in isolation. Instead, our intuitions may be animated by a relational view that sees positive value as something that we “do” for each other, and something that we cannot simply produce in causally isolated experience machines to make the world a better place. (More on this in the section on lives worth living [EA · GW].)

To the extent that our value intuitions track not only the “contents” but also the social roles of individual lives, it may be difficult to determine the degree to which we think of positive value as an independent or relational phenomenon. Yet many counterintuitive conclusions in population ethics may be attributed to the assumption of positive value as an independent, and independently aggregable, phenomenon. That is, we may have good reasons to question the conception of positive value as a “plus-point” that could be summed up or stacked in isolation from the social roles of the lives that contain it, while still endorsing positive value in a strong, relational sense. (More on this in the section on population ethics [EA · GW].)

(The clarification of how minimalist axiologies can support a notion of “a life worth living” will be a recurring theme throughout the remaining sections.)

3. How do these views help us make sense of population ethics?

The field of population ethics [? · GW] is “the philosophical study of the ethical problems arising when our actions affect who is born and how many people are born in the future”. A subfield of population ethics is population axiology, which is about figuring out what makes one state of affairs better than another. This is a famously tricky question to answer without running into counterintuitive conclusions, provided that we make the assumption of independently positive lives (cf. Arrhenius, 2000a). Minimalist axiologies do not make this assumption, and they neatly avoid the conclusions that are pictured in the three diagrams in the next three subsections.

(Note: The conclusions are named “paradoxical” or “repugnant” after the intuitions of people who find them troubling. Generally, people differ a lot in which intuitions they are willing to “give up” in population ethics. Arguably, many would accept the Mere-Addition Paradox [EA · GW], some would accept the Repugnant Conclusion [EA · GW], and few would accept the Very Repugnant Conclusion [EA · GW].)

Before looking at the diagrams, let us first note two ways in which people might implicitly disagree about how to interpret them.

First, the diagrams contain populations that should arguably be imagined to consist only of lives that never interact with each other. (More on this in the section on the “all else being equal” assumption [EA · GW].)

Second, some of the diagrams contain a horizontal line that indicates a “zero level” of “neutral welfare”, which may be interpreted in different ways. For example, when diagrams illustrating the (Very) Repugnant Conclusion contain lives that are “barely worth living”, some may think that these lives involve “slightly more” happiness than suffering, while others may think that they “never suffer”. (More on this in footnote 16 [EA · GW] in DiGiovanni, 2021.)

A different interpretation of the horizontal line is used in antifrustrationism by Christoph Fehige (1998), which equates welfare with the avoidance of preference dissatisfaction (or “frustration”). When Fehige’s own diagrams contain the horizontal line, it just means the point above which the person “has a weak preference for leading her life rather than no life” (Fehige, 1998, p. 534). On Fehige’s view, the lives with “very high welfare” are still much better off than the lives “barely worth living”. Yet if we assume that the lives above the horizontal line have all their preferences satisfied and “never suffer”, then minimalist axiologies would find no (subjective) problems in the Mere-Addition Paradox or the Repugnant Conclusion. Even so, they would still not strictly prefer larger populations, finding all populations of such problem-free lives rather equally perfect (in causal isolation [EA · GW]).

However, it is arguably highly unrealistic to assume that the lives people usually refer to as “barely worth living” would be subjectively perfect or “never suffer”. Therefore, we will use Fehige’s view as an extended example of how minimalist axiologies would actively reject the following conclusions. (For a similar axiology centered on experiences rather than preferences, see tranquilism by Gloor, 2017.)

The Mere-Addition Paradox

Derek Parfit’s Mere-Addition Paradox is based on a comparison of four populations. Each bar is a distinct group of beings. The bar’s width indicates their numbers and the bar’s height indicates their level of welfare. We assume that every being in this diagram has “a life worth living”. (The populations in A+ and B− consist of two distinct groups that are “divided by water”; the population in B is simply the two groups of B− combined into one.)

(A diagram of the Mere-Addition Paradox; source.)

The paradox results from the following comparisons that contradict some people’s intuitive preference for the high-average population of A over the lower-average population of B:

  1. Intuitively, “A+ is no worse than A,” since A+ simply contains more lives, all worth living.
  2. Next, “B− is better than A+,” since B− has both greater total welfare and greater average welfare.
  3. Finally, “B− is equal to B,” since B is simply the same groups, only combined.
  4. Now, “B is better than A,” based on steps 1–3.
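To make the four steps concrete, here is a toy sketch with made-up welfare numbers (the specific values and group sizes are illustrative assumptions, not from the text):

```python
# Toy welfare numbers for the four populations; in the diagram,
# bar width = group size and bar height = welfare level.
A       = [100] * 10                  # small population, high welfare
A_plus  = [100] * 10 + [40] * 10      # A plus extra lives "worth living"
B_minus = [85] * 10 + [60] * 10       # two groups, "divided by water"
B       = B_minus                     # the same two groups, combined

def mean(population):
    return sum(population) / len(population)

# Step 1: A+ merely adds lives worth living, so it seems "no worse" than A.
# Step 2: B- has both greater total and greater average welfare than A+.
assert sum(B_minus) > sum(A_plus) and mean(B_minus) > mean(A_plus)
# Step 3: B is simply B- with its groups combined, so B equals B-.
assert sum(B) == sum(B_minus)
# Step 4: chaining steps 1-3 suggests that B is better than A, even
# though B has a lower average welfare than A -- the paradox.
assert mean(B) < mean(A)
```

Any similar assignment of numbers reproduces the paradox, as long as each comparison step holds.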

This paradox is a problem for people who strongly feel that “A is better than B”, but who are also sympathetic to total utilitarianism — perhaps because they want to avoid average utilitarianism, which implies the sadistic conclusion. Yet if we assume that subjective problems are experienced more by the lives in B than by the lives in A, then minimalist axiologies would prefer A over B without implying the sadistic conclusion.

Essentially, the solution of Fehige (1998) is to assume that the welfare of a life depends entirely on its level of preference dissatisfaction (or “frustration”). On this view, a population of perfectly (or almost perfectly) satisfied beings cannot, other things being equal, be improved by the “mere addition” of new, less satisfied beings. This is because the frustration of those new beings is an additional subjective problem in comparison to the non-problematic non-existence of their imaginary counterparts in the smaller population.

(We should remember that Fehige’s use of the term “preference frustration” is much broader than the everyday feeling that we call frustration; after all, basically all lives in the real world have at least some of their preferences frustrated, even if some may be free of the feelings of frustration.)
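As a rough sketch of how this blocks the paradox, we can model welfare purely in terms of preference frustration; the numbers here are again hypothetical:

```python
# A minimal sketch of Fehige-style antifrustrationism: welfare is
# measured only by preference frustration, and less total frustration
# is better. Nonexistent beings contribute no frustration.
def total_frustration(population):
    """Sum each life's preference frustration (0 = fully satisfied)."""
    return sum(population)

satisfied     = [0] * 10                # perfectly satisfied lives
mere_addition = [0] * 10 + [30] * 10    # plus new, less satisfied lives

# The added lives bring new frustration, with no compensating
# "plus-points", so mere addition makes things worse on this view,
# blocking step 1 of the paradox.
assert total_frustration(mere_addition) > total_frustration(satisfied)
```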

While this may be a theoretically tidy solution to the Mere-Addition Paradox, many critics have objected that it depends on a theory of welfare that they find to be counterintuitive, incomplete, or unconvincing. (These objections will be responded to in a future post.)

Overall, we would be wise not to hastily dismiss minimalist axiologies as absurd, because there are plenty of less absurd ways to interpret them without losing their theoretical benefits. A lot of their intuitive absurdity might already result from the unrealistic thought experiments themselves, which imply that we are not actually comparing the value of lives of the kind that are familiar to us, but only of lives of a very peculiar and asocial kind, namely lives in complete causal isolation from each other. (More on this in the section on the “all else being equal” assumption [EA · GW].)

The Repugnant Conclusion

Next, by continuing the logic of “mere addition”, we arrive at the ‘Repugnant Conclusion’. Quote:

In Derek Parfit's original formulation[,] the Repugnant Conclusion is characterized as follows: “For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living” (Parfit 1984). ... The Repugnant Conclusion is a problem for all moral theories which hold that welfare at least matters when all other things are equal. (Arrhenius, Ryberg, & Tännsjö, 2014.)

(A diagram of the Repugnant Conclusion; source.)

Minimalist axiologies avoid the Repugnant Conclusion, as they deny that lives “barely worth living” would constitute a vast heap of independent “plus-points” in the first place. For example, Fehige (1998) would assume that the lives with a “very high quality of life” would be quite free from subjective problems, which is better, all else being equal [EA · GW], than a much larger set of lives that still have a lot of their preferences dissatisfied.

As noted by another commenter on Fehige (1998):

Among its virtues, [antifrustrationism] rescues total utilitarianism from the repugnant conclusion. If utility is measured by the principle of harm avoidance instead of aggregated preference satisfaction, utilitarianism does not, as the accusation often goes, entail that it is better the more (acceptably) happy lives there are[, other things being equal]. (Karlsen, 2013, p. 160.)

Fehige’s theory of welfare is seemingly dismissed by Arrhenius, Ryberg, and Tännsjö (2014) on the grounds that it would, counterintuitively, deny the possibility of lives worth living. Quote:

However, a theory about welfare that denies the possibility of lives worth living is quite counter-intuitive [Ryberg, 1996]. It implies, for example, that a life of one year with complete preference satisfaction has the same welfare as a completely fulfilled life of a hundred years, and has higher welfare than a life of a hundred years with all preferences but one satisfied. Moreover, the last life is not worth living (Arrhenius 2000b).

Yet this objection seems to imply that a life could be worth living only for its own sake, i.e. for some kind of satisfaction that it independently “contains”, and to deny that a life could also be worth living for its positive roles [EA · GW]. Again, we need to properly account for the fact that Fehige’s model is comparing lives only in causal isolation. (More on this in the section on the “all else being equal” assumption [EA · GW].)

As soon as we step outside of the thought experiment (of “all else being equal”) and start comparing these lives in our actual, interpersonal world, we may well think — even on Fehige’s terms — that a subjectively perfect life of one year would be much less valuable (overall, for all beings) than would be the subjectively almost perfect century. After all, many of our preferences and preferred actions have significant implications for the welfare of others. (More on this in the section on practical implications [EA · GW].)

However, it is not necessarily counterintuitive to prefer the perfect year — or even nonexistence — over the less-than-perfect century in complete causal isolation, where we can be no-one’s friend or partner, do no good work for anyone, and generally make no positive difference in any way. Regardless of how we felt during the year, or during the century, others would live as if we never had. In other words, we may question the overall worth of spending our life in the experience machine, provided that it does not help solve any problem. (More on this in the section on lives worth living [EA · GW].)

The Very Repugnant Conclusion

The Repugnant Conclusion was termed repugnant due to the intuition that a legion of lives “barely worth living” cannot be better than a smaller population of lives with very high welfare. Some say that in this case, the intuition is wrong and that we should simply bite the bullet and follow the utilitarian math (of additive aggregationism). However, presumably fewer people would accept the ‘Very Repugnant Conclusion’ (VRC), in which the “better” world contains a lot of subjectively hellish lives, supposedly “compensated for” by a vast number of lives “barely worth living”. Quote:

There seems to be more trouble ahead for total [symmetric [EA · GW]] utilitarians. Once they assign some positive value, however small, to the creation of each person who has a weak preference for leading her life rather than no life, then how can they stop short of saying that some large number of such lives can compensate for the creation of lots of dreadful lives, lives in pain and torture that nobody would want to live? (Fehige, 1998, pp. 534–535.)

(A diagram of the VRC; source.)

In more formal terms, quote:

Let W1 be a world filled with very happy people leading meaningful lives [A]. Then, according to total [symmetric] utilitarianism, there is a world W2 which is better than W1, where there is a population of suffering people [N] much larger than the total population of W1, and everyone else has lives barely worth living [Z] - but the population is very huge. (Source [LW · GW].)

One way to avoid the VRC is to follow Fehige’s suggestion and interpret utility as “a measure of avoided preference frustration”. On this view, utilitarianism “asks us to minimize the amount of preference frustration”, which leads us to prefer W1 over W2 (Fehige, 1998, pp. 535–536). As noted by Fehige (1998, p. 518), “Maximizers of preference satisfaction should instead call themselves minimizers of preference frustration.”

Being structurally similar to Fehige’s view, every minimalist axiology would prefer W1 over W2 — that is, none of them would say that the supposed “plus-points” of W2 could somehow independently [EA · GW] “counterbalance” the agony of the others, regardless of the number of lives “barely worth living”.
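The contrast between the two verdicts can be sketched with made-up numbers (the scales and population sizes are illustrative assumptions, not from the text):

```python
# Each life is modelled as (plus_points, frustration); both scales
# are hypothetical.
W1 = [(100, 0)] * 1_000                 # very happy, meaningful lives
W2 = ([(-500, 500)] * 10_000            # dreadful lives in torment
      + [(1, 1)] * 100_000_000)         # lives "barely worth living"

def symmetric_total(world):
    # Classical total view: sum the plus-points, so enough tiny
    # positives can outweigh any fixed amount of suffering.
    return sum(points for points, _ in world)

def total_frustration(world):
    # Fehige-style minimalist view: only frustration counts, and less
    # of it is better; plus-points cannot counterbalance it.
    return sum(frustration for _, frustration in world)

assert symmetric_total(W2) > symmetric_total(W1)      # implies the VRC
assert total_frustration(W1) < total_frustration(W2)  # prefers W1
```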

In contrast, the VRC is a problem for a lot of “symmetric” axiologies besides classical hedonism. Quote:

Consider an axiology that maintains that any magnitude of suffering can be morally outweighed by a sufficiently great magnitude of preference satisfaction, virtue, novelty, beauty, knowledge, honor, justice, purity, etc., or some combination thereof. It is not apparent that substituting any of these values for happiness in the VRC makes it any more palatable[.] (DiGiovanni, 2021 [EA · GW].)

At the moment, the VRC is mentioned neither on Utilitarianism.net, which only states that “[the] most prominent objection to the total view is the repugnant conclusion”, nor in The Precipice, which similarly only claims that “[the] main critique of the Total [symmetric [EA · GW]] View is that it leads to something called the repugnant conclusion” (Ord, 2020, Appendix B: Population Ethics and Existential Risk). Yet for anyone who is bothered by the repugnant conclusion, a much stronger reason to reject symmetric total views would be that they support the VRC.

Solving problems: A way to make sense of population ethics?

In general, one can avoid the VRC (and the two other conclusions above) by maintaining that ethics is about solving problems. On this view, any choice between two populations (all else being equal) will come down to preventing the overall greater amount of subjectively problematic states, such as extreme suffering. Regardless of the precise definition of what constitutes a subjectively problematic state, all minimalist views (as explored here) are also “problem-focused views”. In other words, they reject the metaphor that ethical problems could be “counterbalanced” instead of prevented. Quote:

[Only] the existence of such problematic states imply genuine victims, while failures to create supposed positive goods (whose absence leaves nobody troubled) do not imply any real victims — such “failures” are mere victimless “crimes”. ... According to this view, we cannot meaningfully “cancel out” or “undo” a problematic state found somewhere by creating some other state elsewhere. (Vinding, 2020a.)

Generally, the metaphor of “ethical counterbalancing” may rest on a terminological confusion. As argued in Vinding (2020b, pp. 155–156) — based on Knutsson (2021, section 3) — when we speak of a “negative” experience, we may automatically assume that it could be counterbalanced by its symmetrically “positive” counterpart. Yet we would not say that a problematic experience could be counterbalanced by its “corresponding” unproblematic counterpart. Quote:

Consider, by analogy, the states of being below and above water respectively. One can certainly say that being below water is the opposite of being above water. In particular, one can say that, in one sense, being 50 meters below water is the opposite of being 50 meters above water. But this does not mean, quite obviously, that a symmetry exists between these respective states in terms of their value and moral significance. Indeed, there is a sense in which it matters much more to have one’s head just above the water surface than it does to get it higher up still. (Vinding, 2020b, p. 156.)

Similarly, intuitions [EA · GW] that reject the VRC may be framed in terms of subjectively unproblematic versus subjectively problematic experiences. For example, we could reject the VRC based on the following principle:

All else being equal [EA · GW], any world that contains only unproblematic experiences is no worse than a second world that contains problematic experiences, regardless of what else the second world contains.

A sufficient criterion for “problematic experiences” (in the VRC) could be to consider whether the worlds contain unconsentable suffering. Presumably, all beings in the happy world of W1 would consent to living their lives (qualifying their lives as subjectively unproblematic on this criterion), whereas many of the beings in W2 would not consent to living their lives (qualifying their lives as subjectively problematic). Thus, the previous principle — coupled with this criterion — would reject the VRC on the grounds of consent.
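A minimal sketch of this criterion, where the `consents` field is a hypothetical modelling choice rather than anything from the text:

```python
# A world is modelled as a list of lives; each life records whether
# the being would consent to living it.
def contains_unconsentable_suffering(world):
    return any(not life["consents"] for life in world)

W1 = [{"consents": True}] * 1_000                          # happy world
W2 = [{"consents": False}] * 10 + [{"consents": True}] * 100_000

# The principle: a world with only consented (unproblematic) lives is
# no worse than any world containing unconsentable suffering,
# regardless of what else that second world contains.
assert not contains_unconsentable_suffering(W1)
assert contains_unconsentable_suffering(W2)
```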

On the water analogy, such a “problem-focus” would imply that we prioritize helping sentient beings avoid the depths of extreme suffering, but not that we attempt to “outweigh” the depths of some with the heights of others, unless [EA · GW] this actually seems to be helpful for counteracting the overall amount of expected problems in the world. (More on this in the section on practical implications [EA · GW], and in a future post on multi-level minimalism.)

4. What are we comparing when we make the assumption of “all else being equal”?

Isolated value-containers

The ceteris paribus assumption is often translated into English as something like “all else being equal”, “all else unchanged”, or “other things held constant”. Generally, this assumption means that we exclude any changes other than those explicitly mentioned. And so when we make this assumption in population theory, the idea is to compare any two hypothetical populations only with respect to their explicit differences — such as the level and distribution of welfare among these populations — and to rule out the influence of any other factors.

Yet we should be careful to appreciate the full implications of comparing populations in this way. After all, we may accept this as standard practice all too automatically, without a second thought to what we are thereby implicitly agreeing to give up. Quite often, the assumption appears only as a parenthetical remark (ceteris paribus), and sometimes it is simply the unvoiced and unquestioned background whose rules we are silently expected to play by. But in the high-stakes game of population theory, this seemingly innocuous assumption may decisively influence our view on what kinds of lives, if any, are worth living, and what they are worth living for. (More on this in the section on lives worth living [EA · GW].)

To illustrate, let us consider a scenario where the ceteris paribus assumption would actually be true: namely, we are comparing only “isolated value-containers” or “isolated Matrix-lives” that never interact with each other (not even by acausal “influence”). If this sounds radical, then we may not always realize how radical the ceteris paribus assumption in fact is. After all, it represents only an “isolated view” of lives worth living, as it focuses only on their own “contents” (in terms of subjective welfare), and completely excludes their overall effects on the welfare of others.

Counterintuitive boundaries

Now, our practical intuitions about the overall value of lives — such as of all the lives “barely worth living” in the (Very [EA · GW]) Repugnant Conclusion [EA · GW] — may implicitly be tracking not only the “contents” of these lives (i.e. their own level of welfare), but also their overall effects on the welfare of others. And in practice, it may indeed seem like a repugnantly bad idea to trade away a high-welfare population for a legion of lives “barely worth living”, as the latter would seem to not have enough well-being as a resource [EA · GW] to adequately take care of each other in the long term. (More on this in the section on practical implications [EA · GW].) A practical intuition in the opposite direction is also possible, namely that a larger population could create more goods, insights, and resources that everyone could benefit from, and thus have a brighter future in the long run.

Yet to give any weight to such instrumental effects, even implicitly, would already violate the ceteris paribus assumption, which was meant to rule out all interactions from affecting the comparison. And so we should be very careful to properly respect the boundaries of this assumption, such as by explicitly imagining that we are comparing only “isolated Matrix-lives”. After all, our intuitions are arguably adapted for an interpersonal world with a time dimension: two features of life that are difficult for us to put aside when entering thought experiments about the overall value of individual lives. To the extent that our practical intuition may, by default, be evaluating the hypothetical lives roughly like it would in the real world, we need to make an extra effort to really constrain it from introducing any other factors whose influence was supposed to be ruled out.

Perhaps a lot of axiological disagreement could be resolved by simply being more clear about the mismatch between our practical intuitions and abstract population theory. In any case, we need to recognize how the (properly respected) ceteris paribus assumption — i.e. the isolated view — is radically exclusive of many of the things that our practical intuitions are implicitly tracking.

5. What do these views imply in practice?

Naive versus sophisticated minimalism

The section on population ethics [EA · GW] showed how minimalist axiologies avoid what (for other views) are often called tricky problems in population theory. Yet one may still worry that these views would have counterintuitive implications in practice. However, we should be aware that many of these supposedly counterintuitive or radical implications could result not only from an isolated view [EA · GW] — which excludes the positive roles [EA · GW] of individual lives — but also from a naive consequentialism [? · GW], which ignores the positive roles of various norms [? · GW] of everyday morality, such as those of autonomy, cooperation, and non-violence. (More on this in a future post on multi-level minimalism.)

Moreover, a naive consequentialism is often not based on a nuanced understanding of expected value thinking, and instead falls victim to a kind of “narrative misconception” of consequentialism, in which a view would support any means necessary to bring about its axiologically ideal “end state”. One could argue that the idea of a utilitronium shockwave amounts to such a misconception relative to the practical implications of classical utilitarianism. In the case of minimalist axiologies, this misconception looks like the claim that we must, at any cost, “seek a future where problems are eventually reduced to zero”, which is very different from minimizing problems over all time in expectation. After all, only the latter kind of thinking (i.e. expected value thinking) is properly sensitive to risks of making things worse, whereas the first, misconceived view is closer to a cost-insensitive “all in” gamble for manifesting an ideal outcome at some particular point in time.

For example, naive minimalism might disregard the norms of everyday morality as soon as there would (apparently) be even the slightest chance of bringing about its hypothetically ideal “end state”, even if this would violate the preferences of others. By contrast, sophisticated [? · GW] minimalism would be concerned with the “total outcome” — which spans all of time — and be highly sensitive to the risk of making things worse overall. For instance, any violent strategy for “preventing problems” would very likely backfire in various ways, such as by undermining one’s credibility as a potential ally for large-scale cooperation, ruining the reputation of one’s (supposedly altruistic) cause, and eroding the (positive [EA · GW]) norm of respecting individual autonomy. Because the backfire risks depend on complex interactions that happen over considerable spans of time, they are likely to be underemphasized by simple thought experiments that collapse hypothetical populations into two-dimensional images. (More on the “narrative misconception” of consequentialism in the next post.)

Compatibility with everyday intuitions

What, then, do these views imply in practice, assuming a sophisticated minimalism over all time? The second half of Suffering-Focused Ethics (Vinding, 2020b, pp. 141–277) is an accessible and extensive treatment of basically the same question, particularly for views that prioritize minimizing extreme suffering. In large part, the practical implications for minimalist views are probably the same as those for suffering-focused views. Yet minimalist views differ from at least some suffering-focused views in one respect, which is that minimalist axiologies explicitly deny the concept of “independently aggregable positive value”, i.e. anything that could be “stacked” in causal isolation to make the world a better place in the axiological sense.

For that reason, minimalist views may seem to be somehow opposed to the things that we cherish as intrinsically valuable. Yet minimalist views need not imply anything radical about the “quantity [? · GW]” of positive value that we intuitively attribute to many things at the level of our everyday psychology. After all, the kinds of things that we may deem “intrinsically valuable” at an intuitive level are often precisely the kinds of things that rarely need any extrinsic justification in everyday life, such as sound physical and mental health, close relationships, and intellectual curiosity. If required, we could often “unpack” the value of these things in terms of their long-term and indirect effects, namely their usefulness for preventing more problems than they cause. But when our (intuitively) positive pursuits have many beneficial effects across a variety of contexts, we are often practically wise to avoid spending the unnecessary effort to separately “unpack” their value in relational terms.

By focusing only on our positive feelings in the immediate moment, we may even underestimate the overall usefulness of things like maintaining a rich social life, learning new skills, and coming up with new insights. After all, if the overall goal is to minimize problems, then we are faced with the dauntingly complex meta-problem of identifying interventions that can reasonably be expected to prevent more problems than they cause — and by any measure, this meta-problem will require us to combine a vast amount of knowledge and supportive values. Minimalist views do not imply that we hyper-specialize in this meta-problem in a way that would dismiss all apparent “intrinsic values” as superfluous, but rather that we adhere to a diverse range [? · GW] of these values so as to advance a mature and comprehensive approach to alleviating problems.

Preventing instead of outweighing hell

Of course, it would be a suspicious convergence [EA · GW] if everything that people may think of as being intrinsically valuable would also be relationally aligned with the minimization of overall problems. Yet the everyday implications of minimalist views are not necessarily very different from those of other consequentialist views, because all of them share the personal ideal of living an effective and strategic life aligned with some overall optimization goal, which implies some common constraints and recommendations for everyday conduct. Where the views differ (the most) may be in their long-term [? · GW] implications. Rather than primarily ensuring that we spread out into space, minimalist views would arguably imply that we prioritize avoiding worst-case scenarios (cf. Gloor, 2018 [EA · GW]; DiGiovanni, 2021 [EA · GW]), because prioritizing large-scale space colonization may well increase the amount of subjective problems over the long term (in expectation).

On non-minimalist utilitarian grounds, Bostrom (2003) argues that the main priority of human civilization should be to enable astronomical amounts of independently positive lives to exist in the far future. Yet minimalist axiologies imply that the non-existence of those lives is not a problem for their own sake; after all, in terms of subjective problems, non-existent beings do not need to be saved from non-existence. In particular, non-existent lives do not subjectively need to exist in the way that some other beings need to avoid subjective hell. (A different question is whether the existence of future generations may help to overall prevent subjectively problematic experiences across all beings. More on this in a future post.)

Tradeoffs like the Very Repugnant Conclusion [EA · GW] (VRC) are not only theoretical, because arguments like that of Bostrom (2003) imply that the stakes may be astronomically high in practice. When non-minimalist axiologies find the VRC a worthwhile tradeoff, they would presumably also have similar implications on an arbitrarily large scale. Therefore, we need to have an inclusive discussion about the extent to which the subjective problems (e.g. extreme suffering) of some can be “counterbalanced” by the “greater (intrinsic) good” for others, because this has direct implications for what kind of large-scale space colonization could be called “net positive”.
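To make the structure of this kind of tradeoff concrete, here is a toy numerical sketch. The welfare numbers, population sizes, and function names below are purely my own illustrative assumptions (nothing from the population-ethics literature); it simply contrasts how an additive symmetric axiology and a simple minimalist axiology rank a “wonderful” population against a VRC-style population of hellish lives plus a vast multitude of lives “barely worth living”:

```python
# Toy illustration of a VRC-style tradeoff under two axiologies.
# All numbers are made up; "welfare" is a per-life score where
# negative values represent (severe) suffering.

def total_value(population):
    """Additive symmetric (classical-utilitarian-style) value: sum of all welfare."""
    return sum(population)

def minimalist_value(population):
    """A simple minimalist value: only suffering counts, so the best
    achievable score for any population is zero."""
    return sum(w for w in population if w < 0)

# W1: a "wonderful" population with no suffering.
w1 = [100] * 1_000

# W2: the same number of hellish lives, plus a vast multitude of
# lives "barely worth living".
w2 = [-100] * 1_000 + [1] * 1_000_000

print(total_value(w1), total_value(w2))            # the additive view ranks W2 above W1
print(minimalist_value(w1), minimalist_value(w2))  # the minimalist view ranks W1 above W2
```

On these toy numbers, the additive view prefers the VRC-style world as soon as the multitude of barely positive lives is large enough, whereas the minimalist view can never be compensated into preferring it, since its best achievable score is zero.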

Self-contained versus relational flourishing

When psychologists speak of flourishing, it can have many different meanings. As a value-laden concept, it is often bundled together with things like optimal growth and functioning, social contribution, and having a purpose in life. Yet when we load the concept of flourishing with axiological value, as is done by the authors of Utilitarianism.net and The Precipice (Ord, 2020), we should ask whether this value is intrinsic or relational in the axiological sense.

On minimalist views, flourishing would not be a “self-contained” state of isolated value. Yet minimalist views do support a notion of flourishing as personal alignment with something beyond ourselves. Minimalist flourishing would mean that we are skillfully serving the unmet needs of all sentient beings, aligning our well-being [EA · GW] with the overall prevention of ill-being.

In practice, a minimalist sense of “optimal growth and functioning” would probably look more like strategic self-investment and healthful living than like self-sacrifice (similar to other impartial consequentialist views). After all, we first need to patiently grow our strengths, skills, and relationships before we can sustainably and effectively apply ourselves to help others. And because life is long, it makes sense to keep growing these capacities, to meet our needs in harmony with the needs of others, and to actively seek the best ways in which we can play positive roles for all sentient beings.

6. Without the concept of intrinsic positive value, how can life be worth living?

A more complete view

What we see in standard population theory are only the isolated “welfare bars” of what each individual life independently “contains”. In practice, we also have hidden “relational roles bars”.

On any impartial and welfarist view, our own “aggregate welfare” is often a much smaller part of our life’s overall value than our effects on the welfare of others. And if we think of our own ideal life, or perhaps the life of our favorite historical or public figure, we are often practically right to focus on the roles of this life for others, and not only, or even mostly, on how it feels from the inside. After all, the value of the roles is (ultimately) measured in the same [? · GW] unit of value as the welfare, and in that sense the roles can be much bigger than what any single life independently “contains”.

In other words, we may, in the bigger picture, “compare effort with effort”, and find that a sufficient reason to spend effort is to save effort, reduce inner conflict, and lighten the load for all sentient beings in the long term.

Indeed, if we assume that basically all of our daily struggles are much easier to bear compared to instances of the most intense pains (Gómez-Emilsson, 2019 [EA · GW]), then we may already find some lightness and relief in being relatively problem-free at the personal level. And we may further realize that we can play very worthwhile roles by focusing our spare efforts on helping to relieve such extreme burdens on the whole. Conversely, if we assume that our burdens are worthwhile for some “positive essence”, then we again face interpersonal tradeoffs like the VRC [EA · GW], as well as the question of whether we would allow arbitrarily large harms for the “greater good” of creating astronomical amounts of this essence.

Finally, we might question the practical relevance of thinking that a life could be worth living only for some kind of “self-contained” satisfaction. After all, our practical intuitions and dilemmas are always related to tradeoffs in the interpersonal world. Without the concept of intrinsic positive value, a life can be worth living for its positive roles.

What additional questions do you have about these views?

The next posts will address more specific questions, such as whether minimalist views would imply that we should seek to “minimize populations” (meanwhile: see Vinding, 2020b, pp. 141–148), and whether a vast [EA(p) · GW(p)] amount of small pains could imply “a preference for hell over heaven” (meanwhile: see Vinding, 2021).

A future post will also address many of the published objections to antifrustrationism (Fehige, 1998), and take a closer look at tranquilism (Gloor, 2017).

What additional questions or feedback arise in relation to minimalist axiologies? Please let me know in the comments or via this anonymous form.

Acknowledgments

This essay was funded by the Center for Reducing Suffering.

Special thanks to Magnus Vinding for help with editing, and to Aaron Gertler [EA · GW] for commenting on an early draft. Valuable comments were also provided by Simon Knutsson, Tobias Baumann, and Winston Oswald-Drummond. Commenting does not imply endorsement of any of my claims.

References

Ajantaival, T. (2021). Positive roles of life and experience in suffering-focused ethics. Ungated; EA Forum [EA · GW].

Arrhenius, G. (2000a). An impossibility theorem for welfarist axiologies. Economics & Philosophy, 16(2), 247–266. Ungated.

Arrhenius, G. (2000b). Future Generations: A Challenge for Moral Theory (Doctoral dissertation, Acta Universitatis Upsaliensis). Ungated.

Arrhenius, G., Ryberg, J., & Tännsjö, T. (2014). The Repugnant Conclusion. In Zalta E. N. (ed.), The Stanford Encyclopedia of Philosophy (Spring 2014 Edition). Ungated.

Bostrom, N. (2003). Astronomical waste: The opportunity cost of delayed technological development. Utilitas, 15(3), 308–314. Ungated.

DiGiovanni, A. (2021). A longtermist critique of “The expected value of extinction risk reduction is positive”. EA Forum [EA · GW].

Fehige, C. (1998). A Pareto Principle for Possible People. In Fehige, C. & Wessels, U. (eds.) Preferences (pp. 508–543). Berlin: Walter de Gruyter. Ungated.

Gloor, L. (2017). Tranquilism. Ungated.

Gloor, L. (2018). Cause prioritization for downside-focused value systems. Ungated; EA Forum [EA · GW].

Gómez-Emilsson, A. (2019). Logarithmic Scales of Pleasure and Pain. Ungated; EA Forum [EA · GW].

Karlsen, D. S. (2013). Is God Our Benefactor? An Argument from Suffering. Journal of Philosophy of Life, 3, 145–167. Ungated.

Knutsson, S. (2021). The world destruction argument. Inquiry, 64(10), 1004–1023. Ungated; EPUB.

MacAskill, W., Chappell, R. Y., & Meissner, D. (2021). Population Ethics: The Total View. Ungated.

Ord, T. (2020). The Precipice: Existential Risk and the Future of Humanity. Hachette Books.

Ryberg, J. (1996). Is the repugnant conclusion repugnant? Philosophical Papers, 25(3), 161–177.

Vinding, M. (2020a). On purported positive goods “outweighing” suffering. Ungated.

Vinding, M. (2020b). Suffering-Focused Ethics: Defense and Implications. Ratio Ethica. Ungated.

Vinding, M. (2021). Comparing repugnant conclusions: Response to the “near-perfect paradise vs. small hell” objection. Ungated.

12 comments

comment by Gregory_Lewis · 2021-11-14T10:40:38.504Z · EA(p) · GW(p)

Tradeoffs like the Very Repugnant Conclusion [EA · GW] (VRC) are not only theoretical, because arguments like that of Bostrom (2003) imply that the stakes may be astronomically high in practice. When non-minimalist axiologies find the VRC a worthwhile tradeoff, they would presumably also have similar implications on an arbitrarily large scale. Therefore, we need to have an inclusive discussion about the extent to which the subjective problems (e.g. extreme suffering) of some can be “counterbalanced” by the “greater (intrinsic) good” for others, because this has direct implications for what kind of large-scale space colonization could be called “net positive”.

This seems wrong to me, and confusing 'finding the VRC counter-intuitive' with 'counterbalancing (/extreme) bad with good in any circumstance is counterintuitive' (e.g. the linked article to Omelas) is unfortunate - especially as this error has been repeated a few times in and around SFE-land.

First, what is turning the screws in the VRC is primarily the aggregation, not the (severe/) suffering. If the block of 'positive lives/stuff' in the VRC was high magnitude - say about as much (or even more) above neutral as the block of 'negative lives/stuff' lies below it - there is little about this more Omelas-type scenario a classical utilitarian would find troubling. "N terrible lives and k*N wonderful lives is better than N wonderful lives alone" seems plausible for sufficiently high values of k. (Notably, 'minimalist' views seem to fare worse, as they hold that no value of k - googolplexes, TREE(TREE(3)), 1/P(randomly picking the single 'wrong' photon from our light cone a million times consecutively), etc. - would be high enough.)

The challenge of the V/RC is the counter-intuitive 'nickel and diming' where a great good or bad is outweighed by a vast multitude of small/trivial things. "N terrible lives and c*k*N barely-better-than-nothing lives is better than N wonderful lives alone" remains counter-intuitive to many who accept the first scenario (for some value of k) basically regardless of how large you make c. The natural impulse (at least for me) is to wish to discount trivially positive wellbeing rather than saying it can outweigh severe suffering if provided in sufficiently vast quantity. 

If it were just 'The VRC says you can counterbalance severe suffering with happiness' simpliciter which was generally counterintuitive, we could skip the rigmarole of A, A+, B etc. and just offer Omelas-type scenarios (as Tomasik does in the linked piece) without stipulating that the supposedly outweighing good stuff comprises a lot of trivial well-being.

Second, although scenarios where one may consider counterbalancing (/severe) suffering with happiness in general may not be purely theoretical (either now or in the future), the likelihood of something closely analogous to the VRC in particular looks very remote. In terms of 'process' the engine of the counter-intuitiveness relies on being able to parcel out good stuff in arbitrarily many arbitrarily small increments rather than in smaller more substantial portions; in terms of 'outcome' one needs a much smaller set of terrible lives outweighed by a truly vast multitude of just-about-better-than-nothing ones. I don't see how either arises on credible stories of the future.

Third, there are other lines classical utilitarians or similar can take in response to the VRC besides biting the bullet (or attempting to undercut our intuitive responses): critical level views, playing with continuity, and other anti-aggregation devices to try and preserve trading-off in general but avoid the nickel and diming issues of the VRC in particular. Obviously, these themselves introduce other challenges (so much so I'm more inclined to accept the costly counter-examples than the costs of (e.g.) non-continuity) and surveying all this terrain would be a gargantuan task far beyond the remit of work introducing a related but distinct issue.

But I bring this up because I anticipate the likely moves you will make to avoid the counter-example Shulman and I have brought up will be along the lines of anti-aggregationist moves around lexicality, thresholds, and whatnot. If so, what is good for the goose is good for the gander: it seems better to use similarly adapted versions of total utilitarianism as a 'like for like' comparison. 'Lexical threshold total utilitarianism', which lexically de-prioritises dis/value below some magnitude, can accept mere addition, accept trading off suffering for sufficient (non-trivial) happiness, but avoid both the RC and VRC. This seems a better point of departure for weighing up minimalism or not, rather than discussing counter-examples to one or the other view which only apply given an (ex hypothesi) mistaken account of how to aggregate harms and benefits.

Replies from: Teo Ajantaival, MichaelStJules, antimonyanthony
comment by Teo Ajantaival · 2021-11-16T18:38:53.648Z · EA(p) · GW(p)

(Edit: Added a note(*) on minimalist views and the extended VRC of Budolfson & Spears.)

Thanks for highlighting an important section for discussion. Let me try to respond to your points. (I added the underline in them just to unburden the reader’s working memory.)


This seems wrong to me,

The quoted passage contained many claims; which one(s) seemed wrong to you?


and confusing 'finding the VRC counter-intuitive' with 'counterbalancing (/extreme) bad with with good in any circumstance is counterintuitive' (e.g. the linked article to Omelas) is unfortunate - especially as this error has been repeated a few times in and around SFE-land.

My argument was rather the other way around. Namely, if we accept any kind of counterbalancing of harms with isolated [EA · GW] goods, then CU-like views would imply that it is net positive to create space colonies that are at least as good as the hellish + barely positive lives of the VRC. And given arguments like astronomical waste [? · GW] (AW) (Bostrom, 2003), the justified harm could be arbitrarily vast as long as the isolated positive lives are sufficiently numerous. (Tomasik’s Omelas article does not depend on the VRC, but speaks of the risk of astronomical harms given the views of Bostrom, which was also my intended focus.)

(To avoid needless polarization and promote fruitful dialogue, I think it might be best to generally avoid using “disjointing” territorial metaphors such as “SFE-land” or “CU-land”, not least considering the significant common ground [EA · GW] among people in the EA(-adjacent) community.)


First, what is turning the screws in the VRC is primarily the aggregation, not the (severe/) suffering.

For minimalist views, there is a very relevant difference between the RC and VRC, which is that the RC can be non-problematic (provided that we assume that the lives “never suffer [EA · GW]”, cf. footnote 16 here [EA · GW]), but minimalist views would always reject the VRC. For minimalist views, the (severe) suffering is, of course, the main concern. My point about the VRC was to highlight how CU can justify astronomical harms even for (supposedly) barely positive isolated lives, and an even bigger commonsensical worry is how much harm it can justify for (supposedly) greatly positive isolated lives.


If the block of 'positive lives/stuff' in the VRC was high magnitude - say about as much (or even more) above neutral as the block of 'negative lives/stuff' lie below it - there is little about this more Omelas-type scenario a classical utilitarian would find troubling. "N terrible lives and k*N wonderful lives is better than N wonderful lives alone" seems plausible for sufficiently high values of k. (Notably, 'Minimalist' views seem to fare worse as it urges no value of k … would be high enough.)

It seems true that more people would find that more plausible. Even so, this is precisely what minimalists may find worrying about the CU approach to astronomical tradeoffs, namely that astronomical harms can be justified by the creation of sufficiently many instances of isolated goods.

Additionally, I feel like the point above applies more to classical utilitarianism (the view) than to the views of actual classical utilitarians, not to mention people who are mildly sympathetic to CU, which seems a particularly relevant group in this context given that they may represent an even larger number of people in the EA(-adjacent) community.

After all, CU-like views contain a minimalist (sub)component, and probably many self-identified CUs and CU-sympathetic people would thereby be at least more than a “little” troubled by the implication that astronomical amounts of hellish lives — e.g. vastly more suffering than what has occurred on Earth to date — would be a worthwhile tradeoff for (greater) astronomical amounts of wonderful lives (what minimalist views would frame as unproblematic lives), especially given that the alternative was a wonderful (unproblematic) population with no hellish lives.

(For what it’s worth, I used to feel drawn to a CU axiology until I became too troubled by the logic of counterbalancing harm for some with isolated good for others. For many people on the fence, the core problem is probably this kind of counterbalancing itself, which is independent of the VRC but of course also clearly illustrated by it.)


If it were just 'The VRC says you can counterbalance severe suffering with happiness' simpliciter which was generally counterintuitive, we could skip the rigmarole of A, A+, B etc. and just offer Omelas-type scenarios (as Tomasik does in the linked piece) without stipulating the supposedly outweighing good stuff comprises a lot of trivial well-being.

Of course, minimalist views (as explored here) would deny all counterbalancing [EA · GW] of severe problems with isolated [EA · GW] goods, independent of the VRC.

The Mere-Addition Paradox, RC, and VRC are often-discussed problems to which minimalist views may provide satisfying answers. The first two were included in the post for many reasons, and not only as a build-up to the VRC. The build-up was also not meant to end with the VRC, but instead to further motivate the question of how much harm can be justified to reduce astronomical waste (AW).

If CU-like views can justify the creation of a lot of hellish lives even for vast amounts of isolated value-containers that have only “barely positive” contents (the VRC), then how many more hellish lives can they supposedly counterbalance once those containers are filled (cf. AW)?


Second, although scenarios where one may consider counterbalancing (/severe) suffering with happiness in general may not be purely theoretical (either now or in the future) the likelihood of something closely analogous to the VRC in particular looks very remote. In terms of 'process' the engine of the counter-intuitiveness relies on being able to parcel out good stuff in arbitrarily many arbitrarily small increments rather than in smaller more substantial portions; in terms of 'outcome' one needs a much smaller set of terrible lives outweighed by a truly vast multitude of just-about-better-than-nothing ones. I don't see how either arise on credible stories of the future.

MichaelStJules already responded to this in the sibling comment [EA(p) · GW(p)]. Additionally, I would again emphasize that the main worry is not so much the practical manifestation of the VRC in particular, but more the extent to which much worse problems might be justified by CU-like views given the creation of supposedly even greater amounts of isolated goods (i.e. reducing AW).


Third, there are other lines classical utilitarians or similar can take in response to the VRC besides biting the bullet (or attempting to undercut our intuitive responses): critical level views, playing with continuity, and other anti-aggregation devices to try and preserve trading-off in general but avoid the nickel and diming issues of the VRC in particular.

MichaelStJules already mentioned an arbitrariness objection to those lines. Additionally, my impressions (based on Budolfson & Spears, 2018) are that “the VRC cannot be avoided by any leading welfarist axiology despite prior consensus in the literature to the contrary” and that “[the extended] VRC cannot be avoided by any other welfarist axiology in the literature.”

Their literature did not include minimalist views(*). Did they also omit some CU-like views, or are the VRC-rejecting CU-like views not defended by anyone in the literature?


Obviously, these themselves introduce other challenges (so much so I'm more inclined to accept the costly counter-examples than the costs of (e.g.) non-continuity) and surveying all this terrain would be a gargantuan task far beyond the remit of work introducing a related but distinct issue.

This again leaves me wondering: Are all of the VRC-rejecting CU-like views so arbitrary or counterintuitive that people will just rather accept the VRC? And will even the most attractive of those views still justify astronomical harms for a sufficiently high amount of isolated lives that are “taller” than those in the VRC?

This does not ease the worry that CU-like views can justify astronomically large harms in order to create isolated positive lives that never needed to exist in the first place.


But I bring this up because I anticipate the likely moves you will make to avoid the counter-example Shulman and I have brought up will be along the lines of anti-aggregationist moves around lexicality, thresholds, and whatnot.

First, in terms of practical relevance, one could argue that the choice to “prefer hell to prevent an imperfect heaven” is much more speculative and unlikely than is the VRC for CU-like views, not to mention the likelihood of CU justifying astronomical harms for supposedly greater goods regardless of the VRC (i.e. for reducing AW). In other words, the former can much more plausibly be disregarded as practically irrelevant than can the latter.

Second, lexical views do indeed avoid the conclusion in question, but these need not entail abrupt thresholds (per the arguments here and here), and even if they do, the threshold need not be an arbitrary or ad hoc move. For example, one could hold that there is a difference between psychologically consentable and unconsentable suffering, which is normally ignored by the logic of additive aggregationism. Moreover, the OP entails no commitment to additive aggregationism, as it only specifies that the minimalist views in question are monist, impartial, and welfarist.


If so, what is good for the goose is good for the gander: it seems better to use similarly adapted versions of total utilitarianism as a 'like for like' comparison. 'Lexical threshold total utilitarianism', which lexically de-prioritises dis/value below some magnitude can accept mere addition, accept trading off suffering for sufficient (non-trivial) happiness, but avoid both the RC and VRC. This seems a better point of departure for weighing up minimalism or not, rather than discussing counter-examples to one or the other view which only apply given an (ex hypothesi) mistaken account of how to aggregate harms and benefits.

First, I am happy to compare like views in this way in my forthcoming post. I would greatly appreciate it if people were to present or refer me to specific such views to be compared.

Second, the point above may seem to imply that there is a symmetry between these lexical adaptations, i.e. that we can “similarly” construct lexical minimalism and lexical symmetric totalism (if you allow the short expression). Yet the fact that we can make formally symmetric constructions for these different views does not imply that the respective plausibility of these constructions is symmetric at the substantive level. In this sense, what is good for the goose may do nothing for the gander. (But again, I’m happy to explore the possibility that it might.)

Specifically, how would one set the threshold(s) on the lexical symmetric view in a non-arbitrary way, and has anyone presented and defended plausible versions of such views?

Furthermore, most people would probably find it much more plausible that some harms cannot be counterbalanced by any amount of isolated goods (“a lexical minimalist component”), than that some goods can counterbalance any amount of isolated harms (a similarly lexical positive component). At least I’ve never heard anyone defend or outline the latter kind of view. (By contrast, beyond examples in academic philosophy, there are numerous examples in literature hinting at “minimalist lexicality”.)

Overall, I remain worried about the vast harms that CU-like views could justify for the supposed greater good, also considering that even you feel inclined to rather accept the VRC than deal with the apparently arbitrary or counterintuitive features of the versions of CU-like views that avoid it. (And if one proposes a positive lexical threshold, it seems that above the lexical threshold there is always a higher isolated good that can justify vast harms.)

Lastly, why do we need to “accept trading off suffering for sufficient (non-trivial) [isolated] happiness” in the first place? Would not a relational account of the value of happiness suffice? What seems to be the problem with relational goods, without isolated goods?


(*) A note on minimalist views and the extended VRC of Budolfson & Spears (2018).

Strictly speaking, the extended VRC in the formulation of Budolfson & Spears does not pertain to minimalist views, because they say "u^h>0" (i.e. strictly greater than zero). So minimalist views fall outside of the domain that they draw conclusions for.

But if we allow the "high-utility lives" to be exactly zero, or even less than zero, then their conclusion would also hold for (continuous, aggregationist) minimalist views. (But the conclusion arguably also becomes much less implausible in the minimalist case compared to the symmetric case, cf. the final point below.) 

So it (also) holds for continuous aggregationist minimalist views that there exists a base population "such that it is better to both add to the base population the negative-utility lives and cause [a sufficiently large number of] ε-changes".

But beyond questioning the continuous aggregationist component of these views (indeed a possibility that lies open to many kinds of views with such a component), and beyond questioning the practical relevance of this conclusion for minimalist views versus for symmetric views (as I do above), one may further argue that the conclusion is significantly more plausible in the minimalist case than in the case where we allow torture for the sake of isolated, purported goods that arguably do not need to exist. For in the minimalist case, the overall burden of subjective problems is still lessened (assuming continuous aggregationist minimalism). We are not creating extreme suffering for the mere sake of isolated, "unrelieving" goods.
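The continuous aggregationist version of this claim can be sketched formally. (This is my own notation, a hedged illustration rather than Budolfson & Spears's exact statement.)

```latex
% Sketch: an extended-VRC-style conclusion for a continuous, additive minimalist view.
% Assume total value $V(P) = \sum_{i \in P} u_i$ with every $u_i \le 0$ (minimalist),
% and suppose each $\varepsilon$-change improves one life by $\varepsilon > 0$.
% Adding $m$ negative-utility lives of utility $-s$ (with $s > 0$) together with
% $N$ such $\varepsilon$-changes to a sufficiently large base population $B$
% (yielding $B'$) gives
\[
  V(B') - V(B) \;=\; N\varepsilon \;-\; m s ,
\]
% which is positive whenever $N > ms/\varepsilon$. So for any fixed added harm $ms$,
% some number of tiny improvements makes the combined addition count as better.
```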

Replies from: Gregory_Lewis
comment by Gregory_Lewis · 2021-11-18T15:09:10.141Z · EA(p) · GW(p)

Thanks for the reply, and with apologies for brevity.

Re. 1 (i.e. "The primary issue with the VRC is aggregation rather than trade-off"). I take it we should care about plausibility of axiological views with respect to something like 'commonsense' intuitions, rather than those a given axiology urges us to adopt. It's at least opaque to me whether commonsense intuitions are more offended by 'trade-offy/CU' or 'no-trade-offy/NU' intuitions. On the one hand:

  • "Any arbitrarily awful thing can be better than nothing providing it is counterbalanced by k good things (for some value of k)"
  • (a fortiori) "N awful things can be better than nothing providing they are counterbalanced by k*N good things (and N can be arbitrarily large, say a trillion awful lives)."

But on the other:

  • "No amount of good things (no matter how great their magnitude) can compensate for a single awful thing, no matter how astronomical the ratio (e.g. trillions to 1, TREE(3) to 1, whatever)."
  • (a fortiori) "No amount of great things can compensate for a single bad thing, no matter how small it is (e.g. pinpricks, a minute risk of an awful thing)"
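The contrast between these two pairs of bullets can be put schematically. (The notation below is my own gloss on the commenter's wording, not a formalization either side has endorsed.)

```latex
% Let $G_{k,b}$ denote a world containing $k$ isolated good things and one harm
% of severity $b > 0$, and let $\varnothing$ denote the empty world.
% 'Trade-offy' (CU-like): every harm is finitely compensable,
\[
  \forall b \;\exists k : \; G_{k,b} \succ \varnothing .
\]
% 'No-trade-offy' (lexical, NU-like): some severities are never compensable,
\[
  \exists b^{*} \;\forall k : \; G_{k,b^{*}} \prec \varnothing ,
\]
% where $\succ$ / $\prec$ read "better than" / "worse than". The a fortiori
% versions replace the single harm with $N$ harms and the $k$ goods with $kN$.
```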

However, I am confident the aggregation views - basically orthogonal to this question - are indeed the main driver for folks finding the V/RC particularly repugnant. Compare:

  1. 1 million great lives vs. 1 million terrible lives and a Quadrillion great lives.
  2. 1 thousand great lives vs. 1 thousand terrible lives and TREE(3) marginally good lives.

A minimalist view may well be concerned with increasing the amount of aggregate harm in 1 vs. 2, and so worry that (re. 2) if CU was willing to accept this, it would accept a lot more aggregate harm if we increase the upside to more than compensate (e.g. TREE(3) great lives). Yet I aver commonsense intuitions favour 1 over 2, and would find variants of 2 where the downside is increased but the upside is reduced but concentrated (e.g. a trillion great lives) more palatable. 

So appeals along the lines of "CU accepts the VRC, and - even worse - would accept even larger downsides if the compensating upside was composed of very- rather than marginally- happy lives" seem misguided, as this adaptation of the VRC aligns it better, not worse, with commonsense (if not minimalist) intuitions.

 

Re. 3 I've read Budolfson & Spears, and as you note (*) it seems we can construct xVRCs which minimalist views (inc. those which introduce lexical thresholds) are susceptible to. (I also note they agree with me re. 1 - e.g. s8: "Whenever aggregation is done over an unbounded space, repugnant outcomes inevitably occur"; their identification with the underlying mechanism for repugnance being able to aggregate e-changes.) 

The replies minimalists can make here seem very 'as good for the goose as the gander' to me:

  1. One could deny minimalism is susceptible to even xVRCs as one should drop aggregation/continuity/etc. Yet symmetric views should do the same, so one should explore whether on the margin of this atypical account of aggregation minimalist axiologies are a net plus or minus to overall plausibility.
  2. One could urge that we shouldn't dock points to a theory for counter-examples which are impractical/unrealistic, since the x/VRCs for minimalism fare much better than the x/VRCs for totalism. This would be quite a departure from my understanding of how the discussion proceeds in the literature, where the main concern is the 'in principle' determination for scenarios (I don't ever recall - e.g. - replies for averagism along the lines of "But there'd never be a realistic scenario where we'd actually find ourselves minded to add net-negative lives to improve average utility"). In any case, a lot of the xVRCs applicable to CU-variants require precisely stipulated 'base populations', so they're presumably also 'in the clear' by this criterion.
  3.  One could accept minimalism entails an xVRC, but this bullet is easier to bite than x/VRCs against symmetric views. Perhaps, but in which case we should probably pick the closest symmetric comparator (e.g. if they can't play with thresholds, you should deal with Shulman-esque pinprick scenarios). I also note the appeals to plausibility made (here and in the comments you link) seem to be mostly re-statements of minimalism itself (e.g. that epsilon changes in misery count but epsilon changes in happiness don't, 'subjective perfection' equated to neutrality, etc.) "Conditional on minimalist intuitions, minimalism has no truly counter-intuitive results" is surely true, but also question-begging to folks who don't share them (compare a totalist asserting the VRC is much less counter-intuitive than minimalist-xVRCs as - 'obviously' - wellbeing can be greater than zero, and axiology shouldn't completely discount unbounded amounts of it in evaluation).

[Finally, I'm afraid I can't really see much substantive merit in the 'relational goods' approach. Minimalism (like SFE and NU) straightforwardly offends the naive intuition that happiness is indeed 'better than nothing', and I don't find relational attempts to undercut this by offering an account of these being roundabout ways/policies of reducing problems either emotionally satisfying (e.g. All the rich relationships between members of a community may make everyone have 'lives worth living' in the sense that 'without me these other people would be worse off', but minimalism appears still committed to the dispiriting claim that this rich tapestry of relationships is still worse than nothing) or intellectually credible (cf. virtually everyone's expressed and implied preferences suggest non-assent to 'no-trade-off' views). 

Similarly, I think assessing 'isolated' goods as typical population cases do is a good way to dissect out the de/merits of different theories, and noting our evaluation changes as we add in a lot of 'practical' considerations seems apt to muddy the issue again (for example, I'd guess various 'practical elaborations' of the V/RC would make it appear more palatable, but I don't think this is a persuasive reply). 

I focus on the 'pure' population ethics as "I don't buy it" is barren ground for discussion.]

Replies from: Teo Ajantaival
comment by Teo Ajantaival · 2021-11-24T19:27:55.797Z · EA(p) · GW(p)

Thanks for the reply!


Re. 1 (i.e. "The primary issue with the VRC is aggregation rather than trade-off"). I take it we should care about plausibility of axiological views with respect to something like 'commonsense' intuitions, rather than those a given axiology urges us to adopt.

Agreed, and this is also why I focus on the psychological and practical implications of axiological views, and not only on their theoretical implications. Especially in the EA(-adjacent) community, it seems common to me that the plausibility of theoretical views is assessed partly based on the plausibility of their practical implications, which tap into further important intuitions beyond those involved at the purely abstract level.

E.g., people may bite bullets in theory to retain a consistent view, but still never bite those bullets in practice due to some still unarticulated reasons, which may indicate an inconsistency between their explicit and implicit axiology.


It's at least opaque to me whether commonsense intuitions are more offended by 'trade-offy/CU' or 'no-trade-offy/NU' intuitions.

By ‘trade-offy’ and ‘no-trade-offy’, I’d like to emphasize that we mean trade-offs between isolated things. In other words, the diagrams of population ethics could just as well consist of causally isolated experience machines (“isolated Matrix-lives”), which is plausibly a confounding factor for our practical (“commonsense”) intuitions, as our practical intuitions are arguably adapted for trade-offs in an interpersonal (“relational”) world.


On the one hand:
"Any arbitrarily awful thing can be better than nothing providing it is counterbalanced by k good things (for some value of k)"
(a fortiori) "N awful things can be better than nothing providing they are counterbalanced by k*N good things (and N can be arbitrarily large, say a trillion awful lives)."

It’s very unclear to me how many people actually believe that any arbitrarily awful thing can be counterbalanced by sufficiently many (and/or awesome) isolated Matrix-lives, or other isolated goods. By default, I would assume that most people do not (want to) think about torture, and also do not properly respect the “all else being equal” assumption, and thereby would not count as votes of “informed consent” for those claims. Additionally, in at least one small Mechanical Turk survey about a tradeoff for people themselves, more than 40 percent of people said that they would not accept one minute of extreme suffering for any number of happy years added to their lives.


But on the other:
"No amount of good things (no matter how great their magnitude) can compensate for a single awful thing, no matter how astronomical the ratio (e.g. trillions to 1, TREE(3) to 1, whatever)."
(a fortiori) "No amount of great things can compensate for a single bad thing, no matter how small it is (e.g. pinpricks, a minute risk of an awful thing)"

The first claim (i.e. “a lexical minimalist component”) is precisely what has been defended in the philosophical (and fictional) literature. And again, this claim might be something that most people have not thought about, because only a minority of people have had first- or even second-person experience of an awful thing that might be defended as being categorically “impossible to compensate for with isolated goods”, such as torture.

(The second claim does not strictly follow from the first, which was about “awful” things; e.g. some SFE views hold that sufficiently awful things are lexical bads, but not that all kinds of tiny bads are. This is also relevant for the practical implications of lexical minimalist views with relational goods, on which pinpricks may be practically ignored unless they increase the risk of lexically bad things, whereas anything worthy of the name “great thing” would probably play positive roles to help reduce that risk.)


However, I am confident the aggregation views - basically orthogonal to this question - are indeed the main driver for folks finding the V/RC particularly repugnant. Compare: [...]
So appeals along the lines of "CU accepts the VRC, and - even worse - would accept even larger downsides if the compensating upside was composed of very- rather than marginally- happy lives" seem misguided, as this adaptation of the VRC aligns it better, not worse, with commonsense (if not minimalist) intuitions.

Here I would again note that our commonsense intuitions are arguably not adapted to track the isolated value of lives, and so we should be careful to make it clear that we are comparing e.g. isolated Matrix-lives. By default, I suspect that people may think of the happy populations as consisting of lives like their own or of people they know, which may implicitly involve a lot of effects on other lives.

Of course, the framings of “isolated Matrix-lives” or “experience machines” may themselves bring in connotations that can feel pejorative or dismissive with regard to the actual subjective experience of those lives, but my point is just to drive home the fact that these lives are, by hypothesis, radically devoid of any positive roles for others, or even for their future selves. And if people implicitly have a relational notion of positive value (e.g. if they think of positive value as implying an inverse causal relation to some subjective problems), then they may feel very differently about harms counterbalanced by isolated goods vs. harms counterbalanced by relational goods (of which minimalist views can endorse the latter).

To be clear, the inverse relations include not only subjective problems prevented by social relationships, but also e.g. any desirable effects on wild animals and future s-risks. Admittedly, probably neither of the latter two is a very commonsensical contributor to positive tradeoffs, but I’d guess that neither would many people find it intuitive to counterbalance astronomical harms with (“even greater amounts of”) isolated experience machines, or with a single “utility monster”. Arguably, all of these cases are also tricky to measure against people’s commonsense intuitions, given that not many people have thought about them in the first place.


Re. 3 I've read Budolfson & Spears, and as you note (*) it seems we can construct xVRCs which minimalist views (inc. those which introduce lexical thresholds) are susceptible to. (I also note they agree with me re. 1 - e.g. s8: "Whenever aggregation is done over an unbounded space, repugnant outcomes inevitably occur"; their identification with the underlying mechanism for repugnance being able to aggregate e-changes.)

Yeah, we can formally construct xVRCs for minimalist views, including for lexical minimalist views, but my claim is that these are consistently less repugnant in like-for-like comparisons with symmetric views (relative to commonsense or widely shared intuitions). Specifically in the lexical minimalist xVRC — i.e. the comments which you refer to in your point #3 below — the tradeoff results in ever less (and less intense) suffering if followed repeatedly. By comparison, every symmetric xVRC would keep on increasing suffering if scaled up in an analogous way, which is arguably the most repugnant aspect of the VRC.

Additionally, this comment (upstream of the linked ones) points out a source of intra-personal repugnance in the symmetric cases, namely that CU-like views would be fine with the “marginally good” ε-lives being “roller coaster” lives that also contain a lot of extreme suffering:

One way to see that a ε increase could be very repugnant is to recall Portmore’s (1999) suggestion that ε lives in the restricted RC could be “roller coaster” lives, in which there is much that is wonderful, but also much terrible suffering, such that the good ever-so-slightly outweighs the bad [according to some symmetric view]. Here, one admitted possibility is that an ε-change could substantially increase the terrible suffering in a life, and also increase good components; such a ε-change is not the only possible ε-change, but it would have the consequence of increasing the total amount of suffering. ... Moreover, if ε-changes are of the “roller coaster” form, they could increase deep suffering considerably beyond even the arbitrarily many [u < 0] lives, and in fact could require everyone in the chosen population to experience terrible suffering. [From Budolfson & Spears]

Of course, in some minimalist examples it is arguably repugnant to create extreme suffering to avoid a vast number of mildly problematic states. But I would claim that commonsense (and not only minimalist) intuitions would find even more repugnant the analogous symmetric case, namely to create extreme suffering for a vast number of mildly positive states which are not needed to relieve anyone’s burden. (The latter case may appear especially repugnant if the symmetric view in question would allow the mildly positive states to be “roller coaster” lives that are not even themselves free of, but would in fact contain a lot of, extreme suffering.) Consider, for instance, that:

  • A 2017 survey by FLI (n > 14,000) found that the goal people favored most as the ideal aim of a future civilization was “minimizing suffering”. This was the most popular aim by a large margin, ahead of “maximizing positive experiences”, and most of the people who favored this goal were probably not suffering while they responded to the survey.
  • The authors of Moral Uncertainty write (p. 185):
According to some plausible moral views, the alleviation of suffering is more important, morally, than the promotion of happiness. According to other plausible moral views (such as classical utilitarianism), the alleviation of suffering is equally as important, morally, as the promotion of happiness. But there is no reasonable moral view on which the alleviation of suffering is less important than the promotion of happiness. So, under moral uncertainty, it’s appropriate to prefer to alleviate suffering rather than to promote happiness more often than the utilitarian would.
  • The above points do not tip the scales all the way in favor of minimalism over CU-variants, but they do suggest that common intuitions would not necessarily favor ‘additively aggregationist CU’ (even before looking at the respective x/VRCs for these views, let alone after considering the overall direction when we iterate such tradeoffs multiple times).

The replies minimalists can make here seem very 'as good for the goose as the gander' to me:
1. One could deny minimalism is susceptible to even xVRCs as one should drop aggregation/continuity/etc. Yet symmetric views should do the same, so one should explore whether on the margin of this atypical account of aggregation minimalist axiologies are a net plus or minus to overall plausibility.

Agreed, although it is unclear whether continuous aggregation is in fact more typical. But since I’m interested in defending lexical minimalism (which many people already hold with a priority for extreme suffering), I’d be curious to hear if anyone has defended an analogous symmetric view, or how that view would be constructed in the first place. E.g., should I compare “priority for the worst-off” with a view that (also) entails “priority for the best-off”, even if no one (to my knowledge) defends the latter priority?


2. One could urge we shouldn't dock points to a theory for counter-examples which are impractical/unrealistic, the x/VRCs for minimalism fare much better than the x/VRCs for totalism. This would be quite a departure from my understanding of how the discussion proceeds in the literature, where the main concern is the 'in principle' determination for scenarios

The literature is mostly not written by people trying to figure out whether to prioritize the reduction of astronomical waste versus the reduction of s-risks. And once we accept some tradeoff in theory, it becomes relevant to ask if we would plausibly accept similar tradeoffs that could practically occur on an astronomical scale, for which the ε-changes could of course first be “enlarged” so as to make more practical sense. (At least I feel like none of my intended points depend on the ε-changes being tiny, nor on the base populations consisting of lives with mutually equal welfare, so I’m fine with discussing x/VRCs that are in those ways more realistic — especially if we account for the “roller coaster” aspects of more realistic lives.)

In other words, whether we affirm or reject the claim that purported positive goods can outweigh extreme suffering has great relevance for our priorities, whereas the question of whether lexical minimalist views are more plausible than non-lexical minimalist views has limited practical relevance, since the real-life implications (e.g. for ideal population sizes) are roughly convergent for minimalist views.


3. One could accept minimalism entails an xVRC, but this bullet is easier to bite than x/VRCs against symmetric views. Perhaps, but in which case we should probably pick the closest symmetric comparator (e.g. if they can't play with thresholds, you should deal with Shulman-esque pinprick scenarios). I also note the appeals to plausibility made (here and in the comments you link) seem to be mostly re-statements of minimalism itself (e.g. that epsilon changes in misery count but epsilon changes in happiness don't, 'subjective perfection' equated to neutrality, etc.)

Again, I’m happy to pick the closest symmetric view to compare with the minimalist priority for extreme suffering, but I’m still unsure what that view might be (and eager to hear if there is anything to be read about such views).

I don’t agree that the points about the minimalist xVRCs’ comparatively greater plausibility are mostly re-statements of minimalism itself. Rather, I claim that commonsense intuitions would favor the lexical minimalist xVRC — in which suffering is “spread more equally between those who already exist and those who do not” (and eventually minimized if iterated) — over any symmetric xVRC of “expanding hell to help the best-off”. (In other words, even if one finds it somewhat plausible that happiness has independent value, or value in isolation, it still seems that the symmetric xVRCs are worse than the minimalist xVRC.)

(For subjective perfection equated with the absence of something, I was thinking of tranquilism as a need-based account of the isolated value of different experiential states, which is centered on cravings to change one’s subjective experience.)


Finally, I'm afraid I can't really see much substantive merit in the 'relational goods' approach. Minimalism (like SFE and NU) straightforwardly offends the naive intuition that happiness is indeed 'better than nothing', and I don't find relational attempts to undercut this by offering an account of these being roundabout ways/policies of reducing problems either emotionally satisfying (e.g. All the rich relationships between members of a community may make everyone have 'lives worth living' in the sense that 'without me these other people would be worse off', but minimalism appears still committed to the dispiriting claim that this rich tapestry of relationships is still worse than nothing) or intellectually credible

(Strictly speaking, minimalism is a category that contains NU but only overlaps with SFE; some SFE views may recognize isolated positive value even as they prioritize reducing suffering, and e.g. Fehige’s view represents a preference-based instead of suffering-focused minimalism.)

About the naive intuition that happiness is indeed ‘better than nothing’, I’m curious if that really applies also for isolated Matrix-lives (for most people). As I’ve noted in this section, by focusing on isolated value we may often underestimate the relational value of some goods, which may be greater than the amount of intrinsic value we perceive them to have.

About the relational account having dispiriting or emotionally unsatisfying implications, those can also be compared between views (to the extent that they matter for the plausibility of axiological views). E.g., on minimalist views, unlike CU-like views, it’s not a tragedy or atrocity if we fail to reduce astronomical waste. In this sense, minimalist views may be less dispiriting than CU-like views. Moreover, I’d practically emphasize that our positive roles need not be limited to the confines of our social communities, but extend all the way to those communities’ effects on things like factory farming, wild-animal suffering, and the risks of future suffering (and thus potentially match or even exceed our commonsense feelings about the positive value of many lives, even if this would formally consist of “only” relational instead of independently positive value).

However, we should also be careful to account for our personal emotional responses to the implications of a given axiology. By analogy with empirical claims, we would probably want our views on (e.g.) global catastrophic risks to be unaffected by whether we find them dispiriting or not. Similarly, we should arguably account for such feelings in our axiological considerations of what, if anything, would constitute an axiologically positive life in causal isolation (and, specifically, what would constitute a life capable of counterbalancing the suffering of others without the consent of the latter).

comment by MichaelStJules · 2021-11-15T03:30:23.576Z · EA(p) · GW(p)

EDIT:

But I bring this up because I anticipate the likely moves you will make to avoid the counter-example Shulman and I have brought up will be along the lines of anti-aggregationist moves around lexicality, thresholds, and whatnot.

Do you mean trivial pains adding up to severe suffering? I can see how if you would accept lexicality or thresholds to prevent this, you could do the same to prevent trivial pleasures outweighing severe suffering or greater joys.

 

My original comment follows.


I think your first and third points are mostly right, but I would add that minimalist axiologies can avoid the (V)RC without (arbitrary) critical levels, (arbitrary) thresholds, giving up continuity, or giving up additivity/separability, any of which someone might find as counterintuitive as the VRC. Views like these tend to look more arbitrary, or (assuming transitivity, the independence of irrelevant alternatives, and a far larger unaffected population) often reduce to solipsism or recommend totally ignoring value that's (weakly or strongly) lexically dominated in practice. So, if you find the (V)RC and these aggregation tricks or their implications very counterintuitive, then minimalist and person-affecting views will look better than otherwise (not necessarily best), and classical utilitarianism will look worse than otherwise (but potentially still best overall, or better than minimalist axiologies, if the other points in favour are strong enough).

Furthermore, the VRC is distinguished from the RC by the addition of severe suffering. Someone might find the VRC far worse than the RC (e.g. the person who named it, adding the "Very" :P), and if they do, that may indeed say something about their views on suffering and bad lives, and not just about the aggregation of the trivial vs values larger in magnitude. I do suspect like you that considering Omelas (or tradeoffs between a more even number of good and bad lives) would usually already get at this, though, but maybe not always.

 

That being said, personally, I am also separately sympathetic to lexicality (and previously non-additivity, but less so now because of the arguments in the papers I cited above), but not because of the RC or VRC, but because of direct intuitions about torture vs milder suffering (dust specks or even fairly morally significant suffering). EDIT: I guess this is the kind of "counter-example" you and Shulman have brought up?

 

On your second point, I don't think something like the VRC is remote, although I wouldn't consider it my best guess for the future. If it turns out that it's more efficient to maximize pleasure (or value generally) in a huge number of tiny systems that produce very little value each, classical utilitarians may be motivated to do so at substantial cost, including sacrificing a much higher average welfare and ignoring s-risks. So, you end up with astronomically many more marginally good lives and a huge number of additional horrible lives (possibly astronomically many, although far fewer than the marginally good lives) and missing out on many very high welfare lives. This is basically the VRC. This seems unlikely unless classical utilitarians have majority control over large contiguous chunks of space in the future.

Replies from: Gregory_Lewis
comment by Gregory_Lewis · 2021-11-15T16:43:14.091Z · EA(p) · GW(p)

Do you mean trivial pains adding up to severe suffering? I can see how if you would accept lexicality or thresholds to prevent this, you could do the same to prevent trivial pleasures outweighing severe suffering or greater joys.

Yeah, that's it. As you note these sorts of moves seem to have costs elsewhere, but if one thinks on balance they nonetheless should be accepted, then the V/RC isn't really a strike against 'symmetric axiology' simpliciter, but merely 'symmetric axiologies with a mistaken account of aggregation'. If instead 'straightforward/unadorned' aggregation is the right way to go, then the V/RC is a strike against symmetric views and a strike in favour of minimalist ones; but 'straightforward' aggregation can also produce highly counter-intuitive results for minimalist views which symmetric axiologies avoid (e.g. "better N awful lives than TREE(N+3) lives of perfect bliss and a pin-prick"). 

Hence (per 3) I feel the OP would be trying to have it both ways if they don't discuss argumentative resources which could defend a rival theory from the objections they mount against it, yet subsequently rely upon those same resources to respond to objections to their preferred theory.

(Re 2, perhaps it depends on the value of "tiny": my intuition is that the dynamic range of (e.g.) human happiness is much smaller than that of future beings, so 'very small' on this scale would still typically be well above the 'marginally good' range by the lights of classical utilitarianism. If (e.g.) commonsensically happy human lives/experiences score 10, joyful future beings could go up to 1000, and 'marginally good' is anything below 1, we'd be surprised to find that the optimal average for the maximal aggregate lies in the marginally good range. Adding the 'V' bit to this RC imposes a further penalty.)
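[Aside: the comparison above can be made concrete with a toy calculation. All numbers below are purely hypothetical, chosen only to mirror the 10/1000/below-1 scale Gregory mentions; this is an illustrative sketch of classical-utilitarian totalist aggregation, not anyone's considered model.]

```python
# Toy totalist aggregation with hypothetical welfare scores:
# joyful future beings at 1000 each vs. "marginally good" lives below 1.

def total_welfare(num_lives, welfare_per_life):
    """Classical-utilitarian total: number of lives times welfare per life."""
    return num_lives * welfare_per_life

joyful = total_welfare(1_000, 1000)        # 1,000 joyful lives at +1000 each
marginal = total_welfare(3_000_000, 0.5)   # 3,000,000 marginally good lives

# The marginal population comes out ahead only because it is ~3000x larger;
# the question in the thread is whether such numbers are ever the *efficient*
# way to spend resources, which Gregory doubts and Michael thinks possible.
print(joyful, marginal)
```
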

Replies from: MichaelStJules
comment by MichaelStJules · 2021-11-15T17:31:54.129Z · EA(p) · GW(p)

That all seems fair to me.

With respect to 2, I'm thinking of something on the order of insect brains. There are reasons to expect pleasure to scale sublinearly with brain size even in artificial brains optimized for pleasure, e.g. many unnecessary connections that don't produce additional value, the greater complexity of building larger brains without getting things wrong, or even some weight given to the belief that integrating minds actually reduces value, say because of bottlenecks in some of the relevant circuits/functions. Smaller brains are also easier/faster to run in parallel.

This assumes the probability of consciousness doesn't dominate. There may also be economies of scale cutting the other way, since the brains need containers and need to be connected to things (even digitally?), or there may be some other overhead.

So, I don't think it would be too surprising to find the optimal average in the marginally good range.

comment by antimonyanthony · 2021-11-20T16:31:27.569Z · EA(p) · GW(p)

I think it's useful to have a thought experiment to refer to other than Omelas to capture the intuition of "a perfect, arbitrarily large utopia is better than a world with arbitrarily many miserable lives supposedly counterbalanced by sufficiently many good lives." Because:

  • The "arbitrarily many" quantifiers show just how extreme this can get, and indeed the sort of axiology that endorses the VRC is committed to judging the VRC as better the more you multiply the scale, which seems backwards to my intuitions.
  • The first option is a utopia, whereas the Omelas story doesn't say that there's some other civilization that is smaller yet still awesome and has no suffering.
  • Omelas as such is confounded by deontological intuitions, and the alternative postulated in the story is "walking away," not preventing the existence of such a world in the first place. I've frequently found that people get hung up on the counterproductiveness of walking away, which is true but irrelevant to the axiological point I want to make. The VRC is purely axiological, so it conveys this point more effectively.
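[To make the first bullet concrete: here is a small sketch with purely hypothetical welfare numbers, showing that any totalist axiology which judges a VRC-style world better than a utopia at one scale must prefer it by an ever-growing margin as both populations are multiplied. The specific figures are invented for illustration only.]

```python
# Hypothetical welfare numbers, purely illustrative: a totalist axiology
# that accepts the VRC at scale k=1 prefers it by a margin that grows
# linearly as the whole scenario is multiplied by k.

def utopia_total(k):
    # k * 1,000 perfect lives at welfare +100 each
    return k * 1000 * 100

def vrc_total(k):
    # k * 1,000 miserable lives at -1000 each,
    # plus k * 200 million barely-good lives at +1 each
    return k * (1000 * -1000 + 200_000_000 * 1)

for k in (1, 10, 100):
    print(k, vrc_total(k) - utopia_total(k))  # margin grows with k
```
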

So while I agree that aggregation is an important part of the VRC, I also disagree that the "nickel and diming" is at the heart of this. To my intuitions, the VRC is still horrible and borderline unacceptable if we replace the just-barely-worth-living lives with lives that have sufficiently intense happiness, intense enough to cross any positive lexical threshold you want to stipulate. In fact, muzak and potatoes lives as Parfit originally formulated them (i.e., with no suffering) seem much better than lots of lives with both lexically negative and lexically "positive" experiences. I'll eagerly accept Parfit's version of the RC [EA(p) · GW(p)]. (If you want to say this is contrary to common sense intuitions, that's fine, since I don't put much stock in common sense when it comes to ethics; there seem to be myriad forces pushing our default intuitions in directions that make evolutionary sense but are disturbing to me upon reflection.)

[edited for some clarifications]

comment by MichaelStJules · 2021-11-15T03:37:22.146Z · EA(p) · GW(p)

I thought the Mere Addition Paradox and the Repugnant Conclusion were the same thing?

Either way, I do think it's useful to distinguish two versions as you have, since the main reason I find the RC counterintuitive is already explained by my intuitions about the Mere Addition Paradox, and the RC brings in additional considerations about number vs average value, which are irrelevant to me.

Replies from: Teo Ajantaival
comment by Teo Ajantaival · 2021-11-16T18:27:06.268Z · EA(p) · GW(p)

Yeah, I guess some people use the names interchangeably. I agree that it can be useful to look at them separately, as was done in Fehige (1998). Their difference is also described in the following way (on Wikipedia):

[Parfit] claims that on the face of it, it may not be absurd to think that B is better than A. Suppose, then, that B is in fact better than A ... . It follows that this revised intuition must hold in subsequent iterations of the original steps. For example, the next iteration would add even more people to B+, and then take the average of the total happiness, resulting in C-. If these steps are repeated over and over, the eventual result will be Z, a massive population with the minimum level of average happiness; this would be a population in which every member is leading a life barely worth living. Parfit claims that it is Z that is the repugnant conclusion.
comment by Charlie Steiner · 2021-11-21T21:07:41.368Z · EA(p) · GW(p)

I'm curious about your takes on the value-inverted versions of the repugnant and very-repugnant conclusions. It's easy to "make sense" of a preference (e.g. for positive experiences) by deciding not to care about it after all, but doing that doesn't actually resolve the weirdness in our feelings about aggregation.

Once you let go of trying to reduce people to a 1-dimensional value first and then aggregate them second, as you seem to be advocating here in ss. 3/4, I don't see why we should try to hold onto simple rules like "minimize this one simple thing." If the possibilities we're allowed to have preferences about are not 1-dimensional aggregations, but are instead the entire self-interacting florescence of life's future, then our preferences can get correspondingly more interesting. It's like replacing preferences over the center of mass of a sculpture with preferences about its pose or theme or ornamentation.

Replies from: Teo Ajantaival
comment by Teo Ajantaival · 2021-11-24T19:16:22.250Z · EA(p) · GW(p)

Thanks!


I'm curious about your takes on the value-inverted versions of the repugnant and very-repugnant conclusions.

I’m not sure what exactly they are. If either of them means to “replace a few extremely miserable lives with many, almost perfectly untroubled ones”, then it does not sound repugnant to me. But maybe you meant something else.

(Perhaps see also these [EA(p) · GW(p)] comments [EA(p) · GW(p)] about adding slightly less miserable people to hell to reduce the most extreme suffering therein, which seems, to me at least, to result in an overall more preferable population when repeated multiple times.)


It's easy to "make sense" of a preference (e.g. for positive experiences) by deciding not to care about it after all, but doing that doesn't actually resolve the weirdness in our feelings about aggregation.

Did you mean

  1. the subjective preference, of the “lives worth living” themselves, to have positive experiences, or
  2. the preference of an outside observer, who is looking at the population comparison diagrams, to count those lives as having isolated positive value?

If 1, then I would note that e.g. the antifrustrationist and tranquilist accounts would care about that subjective preference, as they would see it as a kind of dissatisfaction with the preferrer’s current situation. Yet when we are looking at only causally isolated lives, these views, like all minimalist views, would say that there is no need to create dissatisfied (or even perfectly fulfilled) beings for their own sake in the first place. (In other words, creating and fulfilling a need is only a roundabout way to not having the need in the first place, unless we also consider the positive roles of this process for other needs, which we arguably should do in the practical world.)

If 2, then I'd be eager to understand what seems to be missing from the previous "need-based" account.

(I agree that the above points are unrelated to how to aggregate e.g. small needs vs. extreme needs. But in a world with extreme pains, I might e.g. deprioritize any amount of isolated small pains, i.e. small pains that do not increase the risk of extreme pains nor constitute a distraction or opportunity cost for alleviating extreme pains. Perhaps one could intuitively think of this as making “the expected amount of extreme pains” the common currency. Of course, that kind of aggregation may seem repugnant between a few extreme pains vs. a vast amount of slightly less extreme pains, but in practice we would also account for their wide ”error bars” and non-isolated nature.)


Once you let go of trying to reduce people to a 1-dimensional value first and then aggregate them second, as you seem to be advocating here in ss. 3/4, I don't see why we should try to hold onto simple rules like "minimize this one simple thing." If the possibilities we're allowed to have preferences about are not 1-dimensional aggregations, but are instead the entire self-interacting florescence of life's future, then our preferences can get correspondingly more interesting. It's like replacing preferences over the center of mass of a sculpture with preferences about its pose or theme or ornamentation.

The claim of axiological monism is only that our different values ultimately reduce to one value. Without a common measure, it would seem that multiple independent values are incommensurable and cannot be measured against each other even in principle.

So no one claims that people would descriptively follow only a single guiding principle, nor that it would be simple to e.g. decide how to prioritize between our intertwined and often contradictory preferences. Our needs and preferences can be “about” anything, but if e.g. someone prefers the existence of additional beings, we should plausibly weigh the magnitude of this (unfulfilled) preference against the potential unfulfilled needs that those beings might suffer from. And it seems questionable that e.g. someone’s aesthetic preferences could in themselves override another’s need to avoid extreme suffering (cf. gladiator games).