# Comparisons of Capacity for Welfare and Moral Status Across Species

post by Jason Schukraft · 2020-05-18T00:42:04.134Z · score: 93 (33 votes) · 59 comments

## Contents

  Executive Summary
  Moral Weight Series
  Introduction and Context
  The Comparison Problem
  Comparative Moral Value
  Capacity for Welfare
    Variabilism vs. Invariabilism
    Theories of Welfare and Their Capacity Implications
  Moral Status
    Degrees of Moral Status
    What Determines Moral Status
  Objections
    Won’t Intensity of Suffering Swamp Concerns about Moral Status and Capacity for Welfare?
    Aren’t Capacity for Welfare and Moral Status Multidimensional or Action-Relative or Context-Sensitive?
    Might Welfare Constituents or Moral Interests Be Non-Additive?
    Isn’t Probability of Sentience Already a Good Enough Proxy for Moral Status and Capacity for Welfare?
    Doesn’t Status-Adjusted Welfare Require a Commitment to a Problematic Form of Moral Realism?
  Conclusion
  Credits
  Works Cited
  Notes


# Executive Summary

Effective altruism aims to allocate resources so as to promote the most good in the world. To achieve the most efficient allocation of resources, we need to be able to compare interventions that target different species, including humans, cows, chickens, fish, lobsters, and many others.

Comparing cause areas and interventions that target different species requires comparing the moral value of different animals (including humans). Animals differ in their cognitive, emotional, social, behavioral, and neurological features, and these differences are potentially morally significant. According to many plausible philosophical theories, such differences affect (1) an animal’s capacity for welfare, which is the range of how good or bad an animal’s life can be, and/or (2) an animal’s moral status, which is the degree to which an animal’s welfare matters morally.

Theories of welfare are traditionally divided into three categories: (1) hedonistic theories, according to which welfare is the balance of experienced pleasure and pain, (2) desire-fulfillment theories, according to which welfare is the degree to which one’s desires are satisfied, and (3) objective list theories, according to which welfare is the extent to which one attains non-instrumental goods like happiness, virtue, wisdom, friendship, knowledge, and love. Most plausible theories of welfare suggest differences in capacity for welfare among animals, though the exact differences and their magnitudes depend on the details of the theories and on various empirical facts.

A central question in the literature on moral status is whether moral status admits of degrees. The unitarian view, endorsed by the likes of Peter Singer, says ‘no.’ The hierarchical view, endorsed by the likes of Shelly Kagan, says ‘yes.’ If moral status admits of degrees, then the higher the status of a given animal, the more value there is in a given unit of welfare obtaining for that animal. Status-adjusted welfare, which is welfare weighted by the moral status of the animal for whom the welfare obtains, is a useful common currency both unitarians and hierarchists can use to frame debates.

Different theories entail different determinants of capacity for welfare and moral status, though there is some overlap among positions. According to most plausible views, differences in capacity for welfare and moral status are determined by some subset of differences in things like: intensity of valenced experiences, self-awareness, general intelligence, autonomy, long-term planning, communicative ability, affective complexity, self-governance, abstract thought, creativity, sociability, and normative evaluation.

Understanding differences in capacity for welfare and moral status could significantly affect the way we wish to allocate resources among interventions and cause areas. For instance, some groups of animals that exhibit tremendous diversity, such as fish or insects, are often treated as if all members of the group have the same moral status and capacity for welfare. Further investigation could compel us to prioritize some of the species in these groups over others. More generally, if further investigation suggested we have been overestimating the moral value of mammals or vertebrates compared to the rest of the animal kingdom, we might be compelled to redirect many resources to invertebrates or non-mammal vertebrates. To understand the importance of these considerations, we must first develop a broad conceptual framework for thinking about this issue.

# Introduction and Context

This post is the first in Rethink Priorities’ series about comparing capacity for welfare and moral status across different groups of animals. The primary goal of this series is to improve the way resources are allocated within the effective animal advocacy movement in the medium-to-long-term. A secondary goal is to improve the allocation of resources between human-focused cause areas and nonhuman-animal-focused cause areas. In this first post I lay the conceptual framework for the rest of the series, outlining different theories of welfare and moral status and the relationship between the two. In the second entry in the series, I compare two methodologies for measuring capacity for welfare and moral status. In the third entry in the series, I explain what the subjective experience of time is, why it matters, and why it’s plausible that there are morally significant differences in the subjective experience of time across species. In the fourth entry in the series, I explore critical flicker-fusion frequency as a potential proxy for the subjective experience of time. In the fifth, sixth, and seventh entries in the series, I investigate variation in the characteristic range of intensity of valenced experience across species.

# The Comparison Problem

The effective altruism (EA) movement aims to allocate resources efficiently among interventions. Comparing interventions across cause areas requires comparing the relative value of human lives (or interests or experiences) against the lives (or interests or experiences) of nonhuman animals. Within the animal welfare cause area, efficiently allocating resources requires comparing the relative value of the lives (or interests or experiences) of many different types of animals. Humans directly exploit a huge variety of animals: pigs, cows, goats, sheep, rabbits, hares, mice, rats, chickens, turkeys, quail, ducks, geese, frogs, turtles, herring, anchovies, carp, tilapia, milkfish, catfish, eels, octopuses, squid, crabs, shrimp, bees, silkworms, lac bugs, cochineal, black soldier flies, mealworms, crickets, snails, earthworms, nematodes, and many others.[1] Counting somewhat conservatively, there are at least 33 orders of animals, across 13 classes and 6 phyla, that humans directly exploit in large numbers.[2] The effective animal advocacy (EAA) movement has limited resources, and it must choose how to allocate these scarce resources among these different animals, most of whom are treated miserably by humans.[3] Since we can’t (yet) help all these animals, we must decide which animals to prioritize. Sometimes these prioritization questions will be guided by practical concerns, like the degree to which an intervention is tractable or the degree to which a certain strategy will affect the long-run prospects of the movement. Ultimately, though, practical concerns ought to be guided by the answer to a much more fundamental question: What is the ideal[4] allocation of resources among different groups of animals?

Even if practical concerns continue to dominate our strategic decisions in the near-term, understanding the ideal allocation of resources could change our estimates of the expected value of different meta-interventions. Suppose, for example, that we come to believe both that farmed insects deserve about 1/3 of EAA resources and that practical limitations mean that we can currently only dedicate about 1/300th of EAA resources to farmed insects. If that were the case, then the expected value of overcoming these limitations—either by working on moral circle expansion or funding new charities or researching new interventions or whatever—would be quite high. If, however, we come to believe that farmed insects deserve 1/299th of EAA resources but practical limitations mean that we can currently only dedicate 1/300th of EAA resources to farmed insects, then the expected value of overcoming these limitations would be much lower. Even if we are far from an ideal world, it’s still important to know what an ideal world looks like so we can plot the best path to get there.
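The arithmetic behind this example can be made explicit. The sketch below uses the purely illustrative numbers from the scenarios above; `allocation_gap` is a hypothetical helper introduced only for this illustration, not part of any existing framework.

```python
# Toy illustration of the scenarios above: the expected value of overcoming
# practical limitations tracks the gap between the ideal allocation and the
# allocation we can currently achieve. All numbers are illustrative.

def allocation_gap(ideal_share, achievable_share):
    """Fraction of EAA resources misallocated because of practical limits."""
    return ideal_share - achievable_share

# Scenario 1: insects ideally deserve 1/3 of resources, but only 1/300 is feasible.
gap_large = allocation_gap(1 / 3, 1 / 300)

# Scenario 2: insects ideally deserve 1/299, but again only 1/300 is feasible.
gap_small = allocation_gap(1 / 299, 1 / 300)

print(gap_large)  # ≈ 0.33: overcoming the limitations is very valuable
print(gap_small)  # ≈ 1.1e-05: overcoming the limitations matters far less
```

On these toy numbers the first gap is roughly a third of all EAA resources, while the second is about one part in ninety thousand, which is why the expected value of overcoming the limitations differs so sharply between the scenarios.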

# Comparative Moral Value

To answer the fundamental question, we need to be able to compare the moral value of different types of animals. There are two non-exclusive ways animals could characteristically differ in intrinsic moral value: (1) certain animals could have a greater capacity for welfare than others and (2) certain animals could have a higher moral status than others. Below, I sketch a conceptual framework for thinking about capacity for welfare and moral status. In the second entry in the series, I analyze how best to actually measure capacity for welfare and moral status, given the current state of our scientific knowledge and scientific toolset.

Although capacity for welfare and moral status are related, it’s important to keep the two concepts distinct—otherwise we will be apt to over- or underestimate the moral value of a given experience, interest, or life. In my experience, many conversations that purport to be about moral status are actually about capacity for welfare. For that reason, I initially discuss the two concepts separately. However, on some theories of moral status, capacity for welfare is a contributor to moral status. So ultimately it might make more sense to think about comparative moral value in terms of status-adjusted welfare, which is welfare weighted by the moral status of the creature for whom the welfare obtains. I discuss status-adjusted welfare after the capacity for welfare and moral status sections.
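The notion of status-adjusted welfare lends itself to a simple formalization. The sketch below is a minimal illustration, not a proposed metric: the function name and the weight values are placeholders assumed for the example.

```python
# Minimal sketch of status-adjusted welfare as defined above: welfare weighted
# by the moral status of the creature for whom it obtains. The weights below
# are placeholders, not empirical estimates.

def status_adjusted_welfare(welfare, status_weight):
    """Welfare multiplied by a moral-status weight (1.0 = full status)."""
    return welfare * status_weight

# A unitarian assigns every welfare subject the same weight; a hierarchist
# may discount the same unit of welfare for a lower-status animal.
unitarian_value = status_adjusted_welfare(10.0, 1.0)
hierarchist_value = status_adjusted_welfare(10.0, 0.4)
print(unitarian_value, hierarchist_value)  # 10.0 4.0
```

Framing debates in this currency lets unitarians and hierarchists disagree only about the weights, not about the bookkeeping.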

In what follows, I intend to adopt as theory-neutral an approach as possible. I explore the implications of a number of different plausible viewpoints in order to highlight the collection of features that might be relevant to comparing capacity for welfare and moral status across animals. There are very few knockdown arguments in this area of philosophy and thus we should all be keenly aware of our uncertainty. When making cross-species comparisons of welfare and moral status, the best we can do is take note of where the recommendations of different theories overlap and where they diverge. Incorporating this knowledge will hopefully allow us to build interventions that are sufficiently robust in the face of our uncertainty.

# Capacity for Welfare

Capacity for welfare is how good or bad a subject’s life can go. One is a welfare subject if and only if things can be non-instrumentally good or bad for it. Positive welfare is that which is non-instrumentally good for some subject; negative welfare is that which is non-instrumentally bad for some subject.[5] A subject’s capacity for welfare is the total range between a subject’s maximum positive welfare and minimum negative welfare.[6] Capacity for welfare should be distinguished from realized welfare. If capacity for welfare is how good or bad a creature’s life can go, then realized welfare is how good or bad a creature’s life actually goes. Creatures with a greater capacity for welfare have the potential to make a greater per capita difference to the world’s overall realized welfare stock.

Synchronic welfare is welfare at a particular time. Diachronic welfare is welfare over time. The fact that one creature has a greater capacity for synchronic welfare than some other creature does not entail that the creature also has a greater capacity for diachronic welfare. If one were analyzing differences in total welfare over the course of a lifetime (diachronic welfare), differential lifespans would need to be taken into account. Creatures with longer lifespans have longer to amass welfare. So even if a given creature’s capacity for welfare at any one time is lower than some other creature, if the former creature lives longer than the latter, it may be able to accrue more welfare. (So holding lifespans fixed, a greater capacity for synchronic welfare does entail a greater capacity for diachronic welfare.[7]) The analysis below concerns synchronic welfare. Synchronic welfare is the more fundamental concept, and it is easier to investigate, so nothing is lost by this simplification. In practice, though, when we want to compare lives saved across species, we will have to account for differential lifespans in order to estimate total welfare over the course of a lifetime and so we will appeal to diachronic welfare.
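The lifespan point can be put in toy numbers, assuming (purely for illustration) that diachronic welfare is approximated as average synchronic welfare multiplied by lifespan:

```python
# Toy numbers for the lifespan point above: a creature with a lower synchronic
# welfare rate but a longer life can accrue more welfare in total.

def diachronic_welfare(avg_synchronic_welfare, lifespan_years):
    """Total welfare accrued over a lifetime under a simple rate-times-time model."""
    return avg_synchronic_welfare * lifespan_years

short_lived = diachronic_welfare(5.0, 2)   # higher welfare rate, short life
long_lived = diachronic_welfare(1.0, 30)   # lower welfare rate, long life
print(short_lived, long_lived)  # 10.0 30.0
```

Holding lifespans fixed, the ordering by synchronic capacity and the ordering by diachronic capacity coincide; the two can come apart only when lifespans differ.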

Capacity for welfare is how good or bad a subject’s life can go. But it’s important to note that there is no single concept capacity for welfare. One can generate multiple concepts depending on how one interprets the modal force of the ‘can’ in ‘how good or bad a subject’s life can go.’ Take some actual pig confined to a gestation crate on a factory farm. We can perhaps imagine a metaphysically possible (but physically impossible) world in which a god grants this pig her freedom and gives her the ability to reason like a superintelligent machine. If reasoning abilities generally raise capacity for welfare,[8] then, in a very broad sense of ‘can,’ this pig’s life can go very well indeed. On the other hand, if we simply ask how good or bad the actual pig’s life can go, given that she will spend her whole life in a gestation crate, then, in a narrow sense of ‘can,’ her life can only go very poorly. The first sense of ‘can’ is obviously too broad: the mere metaphysical possibility of vast pig welfare doesn’t tell us anything about how to treat actual pigs. The second sense of ‘can’ is obviously too narrow: we think it a tragedy that the pig is confined precisely because her life can go much better.

To remain a useful concept in practice, capacity for welfare must be relativized so that it encompasses all and only the normal variation of species-typical animals. In other words, the concept must be restricted so as to exclude possibilities in which a subject’s capacity for welfare is unnaturally raised or lowered. To see why, consider that with the right sort of advanced genetic engineering, it may be possible to breed a pig that is, in essence, a superpleasure machine. That is, with the right artificial brain alterations, perhaps we can create a pig that experiences pleasures that are orders of magnitude greater than the pleasures that any creature (pig or otherwise) has experienced before.[9] But even if such a scenario were physically possible, it would not tell us anything about the moral value of normal pigs in the circumstances in which we actually find them.[10] Peter Vallentyne makes much the same point by distinguishing capacity from potential. He writes, “Instead of focusing on the potential for well-being, we should, I believe, focus on the capacity for well-being. A capacity is something that can be realized now, whereas a potential is something that can be realized only at some later time after the capacity is developed. Thus, for example, most normal adults now have the potential to play a simple piece on the piano (i.e. after much practice to develop their capacities), but only a few adults now have the capacity to do so” (Vallentyne 2007: 228). In this parlance, even if a pig has the potential for extreme, god-like pleasure, that potential does not affect the pig’s capacity for pleasure (and thus does not affect the pig’s capacity for welfare).[11]

In somewhat formal terms, the capacity for welfare for some subject, S, is determined by the range of welfare values S[12] experiences in some proper subset of physically possible worlds. How wide or narrow we should circumscribe the set of relevant possible worlds will be contentious, but in general we should be guided by considerations of practicality. If we circumscribe the relevant possible worlds as tightly as possible, then only the actual world will remain in the set, and capacity for welfare will collapse to actual welfare. Obviously, that is too narrow. But if we draw the line too far in modal space, we will include some modally distant possible worlds in which S experiences abnormally large or small welfare values because S has been unnaturally altered or stimulated. These remote possibilities are generally irrelevant to resource allocation—at least in the medium-term—so those worlds should not affect a subject’s capacity for welfare. We want to circumscribe the set of possible worlds so that it includes all and only normal variation in the welfare values of species-typical animals.[13]
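One hedged way to operationalize this definition: treat capacity for welfare as the range of welfare values across a restricted set of possibilities, discarding those in which the subject is unnaturally altered. The sample values and the boolean flag below are stand-ins for the philosophical work of circumscribing the relevant possible worlds.

```python
# Sketch of capacity for welfare as the range (max minus min) of welfare
# values over the species-typical possibilities, per the definition above.

def capacity_for_welfare(possibilities):
    """Max minus min welfare over the possibilities flagged species-typical."""
    typical = [welfare for welfare, is_typical in possibilities if is_typical]
    return max(typical) - min(typical)

# (welfare value, species-typical?) pairs; the 500.0 outlier represents an
# unnaturally enhanced "superpleasure" possibility and is excluded.
possibilities = [(-8.0, True), (2.0, True), (6.0, True), (500.0, False)]
print(capacity_for_welfare(possibilities))  # 14.0
```

Tightening or loosening the filter corresponds to drawing the modal boundary closer to or further from the actual world: too tight and capacity collapses to actual welfare, too loose and remote possibilities dominate.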

There are two non-exclusive ways capacity for welfare might be a determinant of an animal’s characteristic moral value. The first is direct. Capacity for welfare might be one of the factors that determines an animal’s moral status. I’ll save discussion of this potential role for the section on moral status. Another way capacity for welfare might shape characteristic moral value is indirect. On this view, there’s nothing intrinsically valuable about capacity for welfare. All that matters is welfare itself. But because animals with a greater capacity for welfare are in a position to make a greater contribution to the world’s welfare—either positive or negative—they deserve more of our attention.[14] This position is usually supplemented by the claim that animals with a greater capacity for welfare tend, in fact, to attain more valuable goods and more disvaluable bads: their highs are higher, their lows, lower. Importantly, the claim that animals with a higher capacity for welfare have the potential to experience more valuable goods and more disvaluable bads is a conceptual truth. But the claim that animals with a higher capacity for welfare tend to experience more valuable goods and disvaluable bads is a contingent empirical assertion. It could be the case that some types of animals have a large capacity for welfare but in fact only oscillate within a narrow range.[15] When evaluating interventions, it is imperative that potential welfare gains and losses are compared, not merely the capacity for welfare of the animals targeted. Capacity for welfare tells us how high or low such gains or losses could be. And if capacity for welfare is correlated with disposition to welfare, it tells us even more. Thus, it is plausibly the case that the greater an animal’s capacity for welfare, the more good we can typically do by improving its life.

## Variabilism vs. Invariabilism

Before tracing the implications of different conceptions of welfare, we must first ask if the same conception of welfare is applicable to all animals. Welfare variabilism is the view that the basic constituents of welfare may differ across different subjects of welfare. (For example, for one type of animal, welfare may consist in the balance of pleasure over pain; for another type of animal, welfare may consist in the satisfaction of desires.) Welfare invariabilism is the view that the same basic theory of welfare is true for all subjects of welfare.[16]

On initial inspection, welfare variabilism appears to be the more intuitive view. Richard Kraut captures the common sense behind the variabilist position fairly well. He notes that “when we think about the good of animals, our thoughts vary according to the kind of animal we have in mind. We must ask what is good for a member of this species or that, and the answer to that question will not necessarily be uniform across all species. Unimpeded flying is good—that is, good for birds. Although pleasure is good for every animal capable of feeling it, the kinds of pleasure that are good for an animal will depend on the kind of animal it is. And the stimulation of the pleasure centers of an animal’s brain may, on balance, be very bad for it if it prevents the animal from getting what it needs and engaging in the kinds of behavior that constitute a healthy life for a member of its kind” (Kraut 2007: 89).

However, a little reflection reveals that variabilism is far from the intuitive view it purports to be. For a start, it’s unclear what could ground the applicability of a theory of welfare to some animals but not others. Suppose that the capacity for unimpeded flight is a constituent of a bird’s welfare but not a fish’s welfare.[17] How could we explain this alleged fact? A natural thought is that flying is good for a bird but not for a fish. But that answer doesn’t work in this context. Recall that the constituents of an animal’s welfare are those things that are non-instrumentally good for it. So we can’t explain the claim that flying is non-instrumentally good for a bird but not a fish by appealing to the very claim that flying is non-instrumentally good for a bird but not a fish.

Rather than appealing directly to the claim that flying is good for a bird but not for a fish, we might instead appeal to certain facts about the nature of birds and fish. Birds must reach high places to mate, they must survey the ground from great heights to find food, they must take to the air to avoid predators, and so on.[18] None of these claims are true of fish. Here, however, we must remember the definition of welfare: positive welfare is that which is non-instrumentally good for some subject. If unimpeded flight is only good for birds in virtue of what it allows birds to accomplish, then it is not non-instrumentally good. Indeed, even though fish and birds are very different types of creatures, it seems they both benefit from a similar good, namely unimpeded movement, and it is this fact that explains why birds benefit from unimpeded flight.[19] Of course, unimpeded movement is not itself a very plausible candidate for a non-instrumental good. Animals move in order to do other things, such as eat, mate, or play—generalizing a bit, we might say that they move in order to satisfy desires, seek pleasures, and avoid pains—and it is the ability to partake of these sorts of activities which more plausibly contributes to an animal’s welfare.[20]

Welfare invariabilism is not committed to the claim that the constituents of welfare are accessible to all welfare subjects. As I show below, some theories of welfare posit welfare constituents that certain nonhuman animals plausibly cannot obtain. Theoretical contemplation, for instance, may be a constituent of welfare, but it is not an activity in which fish are likely to engage.[21] If some elements of welfare are inaccessible to some animals but not others, then welfare invariabilism can recover some of the intuitive pull of welfare variabilism. When we think about the welfare of animals, it is important that we specify the type of animal under discussion. The reason isn’t that certain theories of welfare apply to some animals and not others; the reason is that some welfare constituents are available to some animals but not others. If we want to improve the welfare of some animal, we need to know which welfare goods an animal is capable of appreciating.

If welfare is a unified concept and if welfare is a morally significant category across species, it seems as if invariabilism is the better option. Invariabilism is the simpler view, and it avoids the explanatory pitfalls of variabilism at little intuitive cost. While we should certainly leave open the possibility that variabilism is the correct view, in what follows I will assume invariabilism.[22]

## Theories of Welfare and Their Capacity Implications

Determining the ideal allocation of resources among different types of animals will require making comparisons of welfare across disparate groups of animals. Making comparisons of welfare across disparate groups of animals will require, among other things, understanding the constituents of welfare for different animals. In this section I discuss in broad strokes the manner in which different theories of welfare postulate differences in capacity for welfare. (I here set aside the practical difficulty of actually developing empirically reliable metrics for measuring capacity for welfare. I take up this difficulty in the second entry in the series.)

Traditionally, theories of welfare are divided into three categories: hedonistic theories, desire-fulfillment theories, and objective list theories.[23] According to hedonistic theories of welfare, welfare is the balance of experienced pleasure and pain.[24] According to desire-fulfillment theories of welfare, welfare is the degree to which one’s desires are satisfied.[25] According to objective list theories of welfare, welfare consists of the achievement, creation, instantiation, or possession of certain objective goods, such as love, knowledge, freedom, virtue, beauty, friendship, justice, wisdom, or happiness.[26]

Evaluating the implications of these three families of theories for nonhuman animals is not easy, in no small part due to the large internal variation within the families of theories, the details of which would take us too far afield from the present topic.[27] Nonetheless, some general remarks can illuminate the manner in which a theory of welfare can bear on differences in capacity for welfare across species. There are two non-exclusive ways animals might differ in their capacity for welfare: they might differ with respect to the number of welfare constituents they can attain, or they might differ with respect to the degree to which they can attain those welfare constituents. An animal that can attain more kinds of welfare goods and more of those goods will have a higher capacity for welfare than an animal that lacks access to as many and as much.

On some theories of welfare, certain welfare constituents will be inaccessible to many nonhuman animal welfare subjects.[28] This fact is most obvious for objective list theories. The basic idea is that “the range of forms and levels of well-being that are in principle accessible to an individual is determined by that individual’s cognitive and emotional capacities and potentials. The more limited an individual’s capacities are, the more restricted his or her range of well-being will be. There are forms and peaks of well-being accessible to individuals with highly developed cognitive and emotional capacities that cannot be attained by individuals with lower capacities” (McMahan 1996: 7). Suppose that one believes that the constituents of welfare are varied and include love, friendship, knowledge, freedom, virtue, wisdom, and pleasure. A species-typical adult human being can experience any of these goods. For many nonhuman animals, however, differences in capacities will render some of these goods unattainable. Octopuses are solitary creatures and thus plausibly will never experience true friendship or love. If theoretical contemplation is a requirement for wisdom, then frogs plausibly will never experience true wisdom. If moral agency is a requirement for virtue, fish plausibly cannot be virtuous. Hence, if some form of objective list theory is correct, and the constituents of welfare are as philosophers have generally described them,[29] then many nonhuman animals will have a lower capacity for welfare than species-typical adult human beings.[30]

Hedonists of a certain stripe might also hold that some welfare constituents are inaccessible to nonhuman animals. According to traditional accounts of hedonism, the value of a given pleasurable experience is the product of the experience’s intensity and its duration. However, the hedonist John Stuart Mill added a third component to this calculation: the quality of the pleasure. Mill distinguished so-called higher pleasures from so-called lower pleasures. According to Mill, both humans and nonhuman animals can experience lower pleasures, but only humans have access to higher pleasures. Higher pleasures make a greater contribution to welfare than lower pleasures and for this reason Mill famously contended that “It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied. And if the fool, or the pig, are of a different opinion, it is because they only know their own side of the question. The other party to the comparison knows both sides” (Mill 1861: chapter 2).[31]
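The two hedonist calculi described above can be set side by side. In the sketch below, the traditional account values an episode as the product of intensity and duration, and Mill's view is represented, simplifying considerably, as an extra quality weight on 'higher' pleasures; all parameter values are illustrative.

```python
# Side-by-side sketch of the traditional hedonist calculus (intensity times
# duration) and a Millian variant with a quality multiplier. The weights are
# illustrative, not empirical estimates.

def hedonic_value(intensity, duration, quality=1.0):
    """Traditional account when quality=1.0; a Millian weight otherwise."""
    return intensity * duration * quality

lower_pleasure = hedonic_value(3.0, 2.0)               # traditional: 6.0
higher_pleasure = hedonic_value(3.0, 2.0, quality=2.0) # Millian 'higher': 12.0
print(lower_pleasure, higher_pleasure)  # 6.0 12.0
```

On this rendering, Mill's claim that only humans access higher pleasures amounts to the claim that for nonhuman animals the quality weight never exceeds 1.0, which is one way his view yields a lower capacity for welfare for them.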

Even if a theory of welfare holds that its welfare constituents are accessible to all welfare subjects, human and nonhuman alike, it might be the case that animals characteristically differ with respect to the degree to which they can attain those welfare constituents. Take hedonism, for example. Suppose one rejects Mill’s distinction between higher and lower pleasures so that the value of a pleasurable experience is just the product of its intensity and duration. It could be the case that differences in social, emotional, or psychological capabilities affect the characteristic intensity of pleasurable (and painful) experiences.[32] (Differences in neuroanatomy might even affect the characteristic duration of animal experiences.[33]) Many philosophers believe that differences in capacities affect the characteristic phenomenal range of experience. For example, Peter Singer writes, “There are many areas in which the superior mental powers of normal adult humans make a difference: anticipation, more detailed memory, greater knowledge of what is happening and so on. These differences explain why a human dying from cancer is likely to suffer more than a mouse” (Singer 2011: 52). Peter Vallentyne writes, “The typical human capacity for well-being is much greater than the typical mouse capacity for well-being. Part of well-being (what makes a life go well) is the presence of pleasure and the absence of pain. The typical human capacity for pain and pleasure is no less than that of mice, and presumably much greater, since we have, it seems plausible, more of the relevant sorts of neurons, neurotransmitters, receptors, etc. In addition, our greater cognitive capacities amplify the magnitude of pain and pleasure” (Vallentyne 2007: 213).[34]

There are, however, countervailing considerations. While it’s true that sophisticated cognitive abilities sometimes amplify the magnitude of pain and pleasure, those same abilities can also act to suppress the intensity of pain and pleasure.[35] When I go to the doctor for a painful procedure, I know why I’m there. I know that the procedure is worth the pain, and perhaps most importantly, I know that the pain is temporary. When my dog goes to the vet for a painful procedure, she doesn’t know why she’s there or whether the procedure is worth the pain, and she has no idea how long the pain will last.[36] It seems intuitively clear that in this case superior cognitive ability reduces rather than amplifies the painful experience.[37]

Another way to potentially get a handle on the phenomenal intensity of nonhuman experience is to consider the evolutionary role that pain plays. Pain teaches us which stimuli are noxious, how to avoid those stimuli, and what we ought to do to recover from injury. Because intense pain can be distracting, animals in intense pain seem to be at a selective disadvantage compared to conspecifics not in intense pain. Thus, we might expect evolution to select for creatures with pains just phenomenally intense enough (on average) to play the primary instructive role of pain. Humans are among the most cognitively sophisticated animals on the planet, plausibly the animals most likely to pick up on patterns in signals only weakly conveyed. In general, less cognitively sophisticated animals probably require stronger signals for pattern-learning. If pain is the signal, then we might reasonably expect the phenomenal intensity of pain to correlate inversely with cognitive sophistication.[38] If that’s the case, humans might experience (on average) the least intense pain in all the animal kingdom.

These considerations are important and often overlooked, but ultimately they are orthogonal to the current discussion. The question is not whether differences in characteristics contribute to the realization of more or less welfare but whether these differences contribute to the capacity for more or less welfare. I think the answer to the latter question is clearer than the answer to the former. Advanced social, emotional, and intellectual complexity opens up new dimensions of pleasure and suffering that widen the range of experience. Martha Nussbaum puts the point this way: “More complex forms of life have more and more complex capabilities to be blighted, so they can suffer more and different types of harm. Level of life is relevant not because it gives different species differential worth per se, but because the type and degree of harm a creature can suffer varies with its form of life” (Nussbaum 2004: 309). For example, the combination of physical and emotional torture plausibly generates the possibility of greater overall pain than physical torture alone. Conversely, the combination of physical and emotional intimacy plausibly generates the possibility (whether typically realized or not) of greater overall pleasure than physical intimacy alone.[39] Analogous considerations apply to objective list theories. Such theories postulate that differences in social, emotional, and cognitive capacities affect the degree to which many intrinsic goods can be obtained.

Desire-fulfillment theories also appear to predict differences in capacity for welfare. Some authors have argued that because “[h]uman desires are more numerous and more complex than those of nonhumans” (Crisp 2003: 760), species-typical adult humans have a greater capacity for welfare than nonhuman animals. This argument can be challenged on several fronts. First, it’s not obvious why cognitive, affective, or social sophistication should affect the number of desires an animal has. For every flower in the meadow, a honey bee might have a strong desire to visit that particular flower. These desires would all be of the same type, but they would be numerous. Second, it’s not clear what the relationship is between welfare and number of satisfied desires. Derek Parfit (1984: 497) offers an objection to the simple view according to which each satisfied desire adds to one’s welfare. An addict might experience a strong desire to take her drug of choice every few minutes and satisfy that desire. But even if the addict’s life contains many more satisfied desires than the non-addict’s, it seems the non-addict leads a better life. Third, even granting that humans have many complex desires and that having more desires raises one’s capacity for welfare, desire strength still needs to be accounted for. A praying mantis’s desire to mate might be stronger than any desire humans ever experience. Together, these considerations cast some doubt on the claim that desire-fulfillment theories of welfare are committed to the position that humans generally have a greater capacity for welfare than nonhuman animals. These considerations don’t, however, suggest that capacity for welfare is uniform across all animals. It’s uncertain which characteristics affect desire strength, number, and complexity, but whatever those characteristics are, it’s plausible that they vary across species.

The bottom line is that most (though not all) plausible theories of welfare suggest differences in capacity for welfare among animals.[40] The exact differences and their magnitudes depend on the details of the theories and on various empirical facts. For our purposes, what’s important is that many (though not all) of the features that plausibly influence capacity for welfare also recur in the literature on moral status, discussed below. The overlap between features that are relevant to capacity for welfare and features that are relevant to moral status sometimes begets conceptual confusion that hinders clear thinking on this complicated topic. But the overlap also makes the empirical investigation of properties relevant to the ideal allocation of resources among animals somewhat simpler.

# Moral Status

We turn now to moral status and begin with some basic definitions. An entity has moral standing[41] if and only if it has some intrinsic moral worth (no matter how small).[42] The interests of an entity with moral standing must be considered in (ideal) moral deliberation; the interests of an entity with moral standing cannot (morally) be ignored, though its interests can be overridden by the interests of other entities with moral standing. Put another way, an entity with moral standing can be wronged. You can damage a coffee mug, but you can’t wrong a coffee mug (though by damaging the coffee mug you might wrong its owner).

Philosophers have generally proposed two features which might, either independently or in conjunction, confer moral standing: sentience and agency. Sentience in this context is the capacity for valenced experience or, more simply, the ability to feel pleasures and pains.[43] Agency in this context is the capacity to possess desires, plans, and preferences.[44] Almost certainly, all sentient agents have moral standing.[45] It’s likely that sentience is sufficient on its own for moral standing, though that view is just slightly more controversial. The view that agency on its own is also sufficient for moral standing is more controversial still and hangs on substantive disagreements about the nature of agency.[46]

Defining moral status is trickier.[47] David DeGrazia writes, “Moral status is the degree (relative to other beings) of moral resistance to having one's interests—especially one's most important interests—thwarted,” adding “A and B have equal moral status, in the relevant sense, if and only if they deserve equal treatment” (DeGrazia 1991: 74). Thomas Douglas writes, “To say that a being has a certain moral status is, on this view, roughly to say that it has whatever intrinsic non-moral properties give rise to certain basic moral protections,” adding “[o]ther things being equal, a being with higher moral status will enjoy stronger and/or broader basic rights or claims than a being of lesser moral status” (Douglas 2013: 476). And Shelly Kagan writes, “The crucial idea remains this: other things being equal, the greater the status of a given individual, the more value there is in any given unit of welfare obtaining for that individual” (Kagan 2019: 109). For our purposes, I’ll let moral status be the degree to which the interests of an entity with moral standing must be weighed in (ideal) moral deliberation or the degree to which the experiences of an entity with moral standing matter morally.

Strictly speaking, moral status is a property of individuals. However, in both the philosophical literature on the subject and in informal discussions, it’s common for authors to ascribe moral status to species. One might speak of the moral status of cows or chickens. Moral status is ascribed to higher taxonomic ranks too. One might speak of the moral status of octopuses (an order) or the moral status of insects (a whole class). Moral status is even ascribed to groups that lack a taxonomic correlate, like fish. (‘Fish’ is a gerrymandered grouping of three evolutionarily distinct classes.[48])

In all these cases, ascription of moral status to a taxonomic group is non-literal. Taxonomic groups are abstract entities. They are neither sentient nor autonomous. They don’t have moral standing, let alone moral status.[49] An ascription of some level of moral status to ants, say, is shorthand for one of three things. It might mean that all (or perhaps the vast majority of) ants have the exact same moral status. (This is more plausible if there are relatively few levels of moral status.) It might refer to the average (either mean or median) moral status of ants. Or it might signify the moral status of a ‘species-typical’ ant, which may come apart from the average moral status of actual ants. In any of these cases, the ascription may be restricted to species-typical adult members of the group or it may apply to all individuals within the taxon.

## Degrees of Moral Status

A central question in the literature on moral status is whether moral status admits of degrees. There are two main positions with regard to this question: (1) the unitarian view, according to which there are no degrees of moral status and (2) the hierarchical view, according to which the equal interests/experiences of two creatures will count differently (morally) if the creatures have differing moral statuses.

Peter Singer is a representative proponent of the unitarian view.[50] Singer writes, “Pain and suffering are bad and should be prevented or minimized, irrespective of the race, sex or species of the being that suffers. How bad a pain is depends on how intense it is and how long it lasts, but pains of the same intensity and duration are equally bad, whether felt by humans or animals” (Singer 2011: 53). This view follows from what Singer calls the principle of equal consideration of interests, which entails that “the fact that other animals are less intelligent than we are does not mean that their interests may be discounted or disregarded” (Singer 2011: 49). However, as Singer and other unitarians are quick to stress, even though intelligence doesn’t confer any additional intrinsic value on a creature, it’s not as if cognitive sophistication is morally irrelevant. Recall the Singer quote discussed above: “There are many areas in which the superior mental powers of normal adult humans make a difference: anticipation, more detailed memory, greater knowledge of what is happening and so on. These differences explain why a human dying from cancer is likely to suffer more than a mouse” (Singer 2011: 52). So for Singer and other unitarians, even though mice and humans have the same moral status, it doesn’t follow that humans and mice have the same capacity for welfare. Hence, alleviating human suffering and alleviating mouse suffering may not have equal moral importance. Humans are cognitively, socially, and emotionally more complex than mice, so in many cases it will make sense to prioritize human welfare over mouse welfare.

Shelly Kagan is a representative proponent of the hierarchical view.[51] He writes, “A hierarchical approach to normative ethics emerges rather naturally from two plausible thoughts. First, the various features that underlie moral standing come in degrees, so that some individuals have these features to a greater extent than others do (or in more developed or more sophisticated forms). Second, absent some special explanation for why things should be otherwise, we would expect that those who do have those features to a greater extent would, accordingly, count more from the moral point of view. When we put these two thoughts together they constitute what is to my mind a rather compelling (if abstract) argument for hierarchy” (Kagan 2019: 279). The basic idea is that moral standing is grounded in the capacity for welfare and the capacity for rational choice. Plausibly, some animals have a greater capacity for welfare and rational choice than others. If possessing the capacity for welfare and rational choice confers moral status, then the possession of those capacities to a greater degree should confer more moral status.

The question of whether moral status admits of degrees also intersects with the question of distribution of realized welfare among animals. Tatjana Višak (2017: 15.5.1 and 15.5.2) argues that any welfare theory that predicts large differences in realized welfare between humans and nonhuman animals must be false because, given a commitment to prioritarianism[52] or egalitarianism,[53] such a theory of welfare would imply that we ought to direct resources to animals that are almost as well-off as they possibly could be. For example, suppose for the sake of argument that a mouse’s capacity for welfare maxes out at 10 on some arbitrary scale and a human’s capacity for welfare maxes out at 100 on the same scale. If there is a human being that currently scores 10 out of 100 and a mouse that currently scores 9 out of 10, prioritarianism and egalitarianism imply, all else equal, that we ought to increase the welfare of the mouse before increasing the welfare of the human. Even for those of us who care about mouse welfare, this seems intuitively like the wrong result. After all, the mouse is doing almost as well as it possibly could be, whereas the human is falling well short of her natural potential.

Kagan agrees that this result is intuitively unacceptable. He writes, “I find it impossible to take seriously the suggestion that this inequality is, in and of itself, morally objectionable—that the mere fact mice are worse off than us is morally problematic, and so we are under a pressing moral obligation to correct this inequality. Yet that does seem to be the conclusion that is forced upon us if we embrace both egalitarianism and unitarianism” (Kagan 2019: 65). Rather than fault theories of welfare that predict unequal distributions of welfare, Kagan invokes degrees of moral status to resolve the conflict of intuitions.[54] By adjusting level of welfare to account for moral status, Kagan’s position delivers the verdict that prioritarianism and egalitarianism need not necessarily prioritize a mouse’s welfare over a human’s welfare, even if the mouse’s welfare is lower in absolute terms than the human’s welfare.[55]
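The structure of this disagreement can be sketched numerically. In the toy model below, the square-root priority weighting and the mouse’s status weight of 0.1 are illustrative assumptions only; neither figure is proposed by Kagan or Višak:

```python
import math

def priority_value(welfare: float, status: float = 1.0) -> float:
    """Prioritarian moral value of a life: a concave function (here,
    square root) of status-adjusted welfare, so that gains to the
    worse off count for more."""
    return math.sqrt(status * welfare)

def marginal_gain(welfare: float, delta: float, status: float = 1.0) -> float:
    """Moral value of raising an individual's welfare by `delta`."""
    return priority_value(welfare + delta, status) - priority_value(welfare, status)

# Unitarianism (all statuses equal 1): the mouse at welfare 9 is worse
# off in absolute terms than the human at welfare 10, so prioritarianism
# favors raising the mouse's welfare first.
gain_mouse_unitarian = marginal_gain(9, 1)
gain_human_unitarian = marginal_gain(10, 1)

# Hierarchy: discounting the mouse's welfare by an (illustrative)
# moral status of 0.1 reverses the ranking.
gain_mouse_hierarchical = marginal_gain(9, 1, status=0.1)
gain_human_hierarchical = marginal_gain(10, 1, status=1.0)
```

On this sketch, it is the status weight, not the concavity of the priority function, that does the work of avoiding Višak’s counterintuitive result.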

Ultimately, from a practical standpoint, the difference between the unitarian approach and the hierarchical approach may not be very deep. It might be thought that although the hierarchical approach can countenance prioritizing animals according to their moral value, the unitarian approach cannot. As we’ve already seen, however, that’s not the case. A unitarian like Singer believes that similar pains count similarly, no matter if it’s a mouse or a human that experiences the pains. But it doesn’t follow from this claim that mice lives have the same moral value as human lives. Indeed, there is broad consensus among unitarians that mice lives don’t have the same value as human lives. Proponents of both camps agree that some animals are more valuable than others.

For instance, Martha Nussbaum, a unitarian, writes, “Almost all ethical views of animal entitlements hold that there are morally relevant distinctions among forms of life. Killing a mosquito is not the same sort of thing as killing a chimpanzee” (Nussbaum 2004: 308). Elizabeth Harman, another unitarian, makes a similar point: “Consider a healthy adult person’s sudden painless death in the prime of life and a cat’s sudden painless death in the prime of life. Both of these deaths deprive their subjects of future happiness. But the person’s death harms the person in many ways that the cat’s death does not harm the cat. The person’s future plans and desires about the future are thwarted. The shape of the person’s life is very different from the way he would want it to be. The person is deprived of the opportunity to come to terms with his own death and to say goodbye to his loved ones. None of these harms are suffered by the cat. Therefore, the person is more harmed by his death than the cat is harmed by its death” (Harman 2003: 180). Even Singer admits, “When we come to consider the value of life, we cannot say quite so confidently that a life is a life and equally valuable, whether it is a human life or an animal life. It would not be speciesist to hold that the life of a self-aware being, capable of abstract thought, of planning for the future, of complex acts of communication and so on, is more valuable than the life of a being without these capacities” (Singer 2011: 53).[56]

In this respect, the unitarian view is hardly distinguishable from the hierarchical view. Jean Kazez, a proponent of the hierarchical approach, writes, “If a life goes well or badly based (at least partly) on the way capacities are exercised, then what is built-in value, more precisely? It’s natural to think of it in terms of capacities themselves. The more valuable of two lives is the one that could amount to more, over a lifetime, if both individuals had a chance to ‘be all that you can be.’ If capacities are what give value to a life, then to compare animal and human lives, we must compare animal and human capacities” (Kazez 2010: 86). In broad outline, the traits, features, and psychological capabilities that, for the proponent of the hierarchical view, determine moral status are the same sorts of traits, features, and psychological capabilities that do the heavy-lifting for the unitarian in ensuring there is an ordering of capacity for welfare. Indeed, this connection is, for the hierarchy proponent, no accident. Kagan writes, “So lives that are more valuable by virtue of involving a greater array of goods, or more valuable forms of those goods, will require a greater array of psychological capacities, or at least more advanced versions of those capacities. [...] More advanced capacities make possible more valuable forms of life, and the more advanced the capacities, the higher the moral status grounded in the possession of those very capacities” (Kagan 2019: 121). So if asked how to allocate resources across dissimilar animal taxa, both views would appeal to the same general sorts of features, even if the underlying theoretical role those features play in the respective views is different.

## What Determines Moral Status

Suppose for the moment that moral status does admit of degrees. To understand where animals rank in terms of moral status, we must first understand why moral status differs across the animal kingdom. Kagan tells us that “if people have a higher moral status than animals do, then presumably this is by virtue of having certain features that animals lack or have in a lower degree. Similarly, if some animals have a higher status than others, then the former too must have some features that the latter lack, or that the latter have to an even lower degree” (Kagan 2019: 112). What are these features? Philosophers have proposed a long list of capacities that plausibly contribute to moral status. Kagan mentions abstract thought, creativity, long-term planning, self-awareness, normative evaluation, and self-governance (Kagan 2019: 125-126). Kazez invokes intelligence, autonomy, creativity, nurturing, skill, and resilience (Kazez 2010: 93). DeGrazia cites cognitive, affective, and social complexity, moral agency, autonomy, capacity for intentional action, rationality, self-awareness, sociability, and linguistic ability (DeGrazia 2008: 193). None of these authors claim that their lists are exhaustive.

Another idea is that capacity for welfare itself plays a large role in determining moral status. Both Peter Vallentyne (2007: 228-230) and Kagan (2019: 279-284) have argued that moral standing is grounded in the capacity for welfare and the capacity for rational choice. Because those capacities admit of degrees, they argue, moral status too must come in degrees.[57] There are two possible readings of these positions. One reading is that capacity for welfare directly determines (at least in part) moral status. The other reading is that moral status is grounded in various capacities that also just so happen to be relevant for determining capacity for welfare. The first interpretation runs the risk of double-counting. Even before considering moral status, we can say that lives containing a greater number and quantity of non-instrumental goods are more valuable than lives containing fewer. It’s not clear why those lives should gain additional moral value—in virtue of a higher moral status—merely because they were more valuable in the first place. For this reason, I think it makes more sense to think that capacity for welfare does not play a direct role in determining moral status, though many of the features relevant for welfare capacity are also relevant for moral status.

Most, if not all, of the capacities discussed above come in degrees. An animal can be more or less sociable, more or less intelligent, and more or less creative. So if two animals have all of these capacities, but the first animal has the capacities to a much greater extent, the first animal will have a higher moral status. In Kagan’s words: “Psychological capacities play a role in grounding one’s status. And statuses differ, precisely because these capacities seem to come in varieties that differ in terms of their complexity and sophistication. That is to say, some types of animals have a greater capacity for complex thought than others, or can experience deeper and more sophisticated emotional responses” (Kagan 2019: 113). Of course, even if we were confident that philosophers had identified the full list of features relevant to the determination of moral status—and philosophers themselves are not confident that they have—many problems would remain.[58]

One problem is whether and how to weight the features. Octopuses are incredibly intelligent, creative creatures—but they are also deeply asocial. Ants are plausibly much less intelligent and creative, but they tend to live in densely populated mounds, with so-called supercolonies containing millions of individual ants. Kazez frames the problem this way: “There are many capacities to which we assign positive value, but we don’t always have a definite idea of their relative values. If we’re trying to rank bower birds, crows, and wolves, it depends what’s more valuable, artistic ability (which favors the bower bird) or sheer intelligence (which favors the crow) or sociability (which favors the wolf). We’re not going to be able to put these three species on separate rungs of a ladder, in any particular order, and neither is the situation quite as crisp as a straightforward tie. We just don’t know how to assign them a place on the ladder, relative to each other” (Kazez 2010: 87-88).[59]

A further complication is what Harman calls combination effects: “A property might raise the moral status of one being but not another, because it might raise moral status only when combined with certain other properties” (Harman 2003: 177-178). For example, it might be the case that a certain degree of autonomy is required before some prosocial capacities contribute to moral status. Maybe nurturing behavior that is entirely pre-programmed and instinctive counts for less than love freely given. Honey bees and cows both care for their young, but if we think cows have a greater capacity for rational choice than honey bees, then the same level of juvenile guardianship might raise the moral status of cows more than honey bees.[60]

There is also the question of whether moral status is continuous or discrete. If moral status is continuous, then on some arbitrary scale (say 0 to 1), an individual’s moral status can in theory take on any value. If moral status is discrete, then there are tiers of moral status. Arguments can be marshalled for either position. On the one hand, it seems as if many of the features that ground moral status—such as general intelligence, creativity, and sociability—vary more-or-less continuously. Hence, even if for practical purposes we ascribe moral status in tiers, we should acknowledge moral status’s underlying continuity. On the other hand, continuity of moral status raises a number of intuitive conflicts. Many people have the intuition that human babies have the same moral status as human adults despite the fact that adults are much more cognitively and emotionally sophisticated than babies.[61] Many people also have the intuition that severely cognitively-impaired humans, whose intellectual potential has been permanently curtailed, have the same moral status as species-typical humans.[62] And many people have the intuition that normal variation in human intellectual capacities makes no difference to moral status, such that astrophysicists don’t have a higher moral status than social media influencers.[63] These intuitions are easier to accommodate if moral status is discrete.[64]

A further question is, if moral status is discrete, how many tiers of moral status are there? Kagan conjectures there are only about six levels of moral status (Kagan 2019: 293). He writes, “The idea here would be to have not only a relatively small number of groupings, but also a relatively easy way to assign a given animal to its relevant group. After all, it would hardly be feasible to expect us to undertake a detailed investigation of a given animal’s specific psychological capacities each time we were going to interact with one. This makes it almost inevitable that in normal circumstances we will assign a given animal on the basis of its species (or, more likely still, on the basis of even larger, more general biological categories)” (Kagan 2019: 294).[65] If there are only a handful of tiers, getting the exact number right is going to be important. A model on which there are five tiers of moral status could have drastically different implications for how we should allocate resources than a model with seven tiers of moral status.

Finally, if moral status is discrete, we need to know how much more valuable each tier is than the preceding tier. Is it a linear scale or logarithmic? Something else entirely? Is the top tier only marginally better than the next-highest tier? Is it twice as valuable? Ten times as valuable? Again, different answers to these questions could have drastically different implications for how we should allocate resources across animals.
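To see how much the answer matters, compare two toy weighting schemes for six tiers (six being Kagan’s conjectured number; the particular weights below are illustrative assumptions, not proposals from the literature):

```python
tiers = 6  # Kagan conjectures roughly this many levels of moral status

# Two illustrative ways of weighting the tiers, lowest to highest:
linear_weights = [(i + 1) / tiers for i in range(tiers)]             # 1/6, 2/6, ..., 1
geometric_weights = [10.0 ** (i - tiers + 1) for i in range(tiers)]  # 0.00001, ..., 1

# The same unit of welfare at the bottom tier counts for 1/6 of a
# top-tier unit on the linear scale, but only 1/100,000 of a top-tier
# unit on the geometric scale.
bottom_vs_top_linear = linear_weights[0] / linear_weights[-1]
bottom_vs_top_geometric = geometric_weights[0] / geometric_weights[-1]
```

Under the linear scale, very numerous low-tier animals can easily dominate an allocation; under the geometric scale, they almost never will.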

As I’ve emphasized, capacity for welfare and moral status are distinct concepts. Nonetheless, they are closely related, both in theoretical and practical terms. In theoretical terms, capacity for welfare is potentially relevant for determining moral status. In practical terms, anyone interested in comparing the moral value of different animals will have to grapple with both potential differences in capacity for welfare and potential differences in moral status. It would be convenient, then, if there were a single term that could capture both welfare and moral status. Fortunately, there is.

Status-adjusted welfare is welfare weighted by the moral status of the creature for whom the welfare obtains.[66] It’s calculated by multiplying quantity of welfare by some number between 0 and 1, with 1 being the highest moral status and 0 being no moral standing. Status-adjusted welfare is neutral on the question of degrees of moral status. Unitarians assign all creatures with moral standing the same moral status, so for the unitarian, status-adjusted welfare just collapses to welfare. Status-adjusted welfare is a useful common currency both unitarians and hierarchists can use to frame debates. Of two interventions, all other things being equal, both camps will prefer the intervention that produces the higher quantity of status-adjusted welfare.
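The definition lends itself to a minimal sketch. The welfare quantities and status weights below are placeholders chosen purely for illustration, not actual estimates:

```python
def status_adjusted_welfare(welfare: float, status: float) -> float:
    """Welfare weighted by moral status: 0 means no moral standing,
    1 the highest moral status."""
    if not 0.0 <= status <= 1.0:
        raise ValueError("moral status must lie in [0, 1]")
    return status * welfare

# Hypothetical welfare gains from two interventions, with placeholder
# status weights. A unitarian would set every status to 1.0, and the
# measure would collapse to plain welfare.
interventions = {
    "chicken welfare reform": status_adjusted_welfare(1000, 0.3),
    "human health program": status_adjusted_welfare(400, 1.0),
}
best = max(interventions, key=interventions.get)
```

On these placeholder numbers, the chicken intervention produces more raw welfare (1000 vs. 400) but less status-adjusted welfare (300 vs. 400), so the two camps’ shared decision rule favors the human program; with a status weight above 0.4 for chickens, the ranking would flip.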

I began this post by posing a fundamental question: what is the ideal allocation of resources among different groups of animals? One good answer[67] is: whatever allocation maximizes status-adjusted welfare. Reflection on status-adjusted welfare might change the way we hope to allocate resources. It seems to me that much of the animal welfare movement’s allocative decision-making implicitly assumes an ordering of animals by moral status or capacity for welfare. Fish are exploited in greater numbers than mammals and birds, but fish are generally perceived to be less cognitively and emotionally complex than mammals and birds and thus their interests and experiences are given less weight. Arthropods are exploited in even greater numbers than fish, but they are generally perceived to be even less cognitively and emotionally complex and thus are afforded even less weight. These judgments appear to be largely intuition-driven, informed by neither deep philosophical rumination nor robust empirical investigation. As such, most of these judgments are unjustified (though not exactly irrational). Maybe these judgments are true. Maybe they are not. More likely, they aren’t really precise enough to evaluate. It’s one thing to say that mammals have a greater capacity for welfare or higher moral status than fish. It’s another thing to say how much higher. Two times higher? Five times higher? A thousand times higher? If the goal is to maximize status-adjusted welfare, then the answer matters.

# Objections

The main contention of this post is that considerations of moral status and capacity for welfare could change the way we wish to allocate resources among animals and between human and non-human cause areas. In this section I consider five objections to that contention.

## Won’t Intensity of Suffering Swamp Concerns about Moral Status and Capacity for Welfare?

The conditions in which various animals are raised differ markedly. The life of a pasture-raised beef cow is very different from, and probably much better than, the life of a battery-caged layer hen. These differences need to be accounted for when evaluating the cost-effectiveness of an intervention. All other things equal, an intervention that reduces the stock of factory-farmed chickens is probably more impactful than a similar intervention that reduces the stock of pasture-raised cows.[68] Of course, measuring the comparative suffering of different types of animals is not always easy. Nonetheless, it does appear that we can get at least a rough handle on which practices generally inflict the most pain, and several experts have produced explicit welfare ratings for various groups of farmed animals that seem to at least loosely converge.[69] Our understanding of moral status and capacity for welfare is comparatively much weaker, and very few informative, authoritative estimates of comparative moral status have been produced. The estimates that do exist vary widely and the ranges are large.[70] Thus, according to this objection, data on intensity of suffering will generally swamp our tentative, uncertain concerns about moral status and capacity for welfare.

The first point to note about this objection is that it is merely a practical objection. If we did possess reliable data on moral status and capacity for welfare, nothing in this objection suggests that we should ignore it or that that information would inevitably be less important than intensity of suffering considerations. It’s certainly true that determining comparative moral value is a daunting task. But daunting is not the same as impossible. Determining which animals are sentient is also a daunting task, but it appears possible to at least make some progress on that question. Given a similar effort, it’s plausible that we could make progress on questions of moral status and capacity for welfare. Hence, even if it’s currently the case that intensity of suffering considerations swamp moral status and capacity for welfare considerations in our decision-making, there’s no reason this need always be the case.

Secondly, it’s not so clear that we do possess an adequate understanding of relative suffering among different groups of animals. There are a number of experts and animal welfare groups who have rated the welfare conditions of farmed mammals and birds. Even if these ratings were generally in agreement and generally accurate, they would only cover a small fraction of animals directly exploited by humans. Aquaculture has exploded over the last three decades,[71] and the animal welfare movement has only recently begun to grapple with the welfare implications of aquaculture’s rise. Still less attention is devoted to other species. More than 290 million farmed frogs are slaughtered every year for food. More than 2.9 billion farmed snails are slaughtered per year for food (plus more for their slime). And more than 22 billion cochineal bugs are slaughtered annually just to produce carmine dye.[72] Even if the numbers+suffering approach is the right one, we still have a lot of work to do to understand the conditions in which different groups of animals are raised.

Finally, understanding differences in capacity for welfare is directly relevant for determining relative suffering across different groups of animals. Consider two worrisome trends on the horizon. Entomophagy is steadily gaining wider acceptance, and as a result, new insect farms are opening every year and old ones are ramping up production. Meanwhile, the demand for octopus meat continues to outpace wild-caught supply, and as a result, groups in Spain and Japan are developing systems to intensively farm octopuses. It’s difficult to know in advance which trend will produce more suffering. However, if we had a better understanding of the differences in capacity for welfare between insects and cephalopods, we might be able to make better predictions.

## Aren’t capacity for welfare and moral status multidimensional or action-relative or context-sensitive?

One worry is that capacity for welfare and moral status might be significantly more complicated than I have thus far presented them. In the discussion above, I have assumed a unidimensional analysis of both capacity for welfare and moral status. That is, I have assumed that we can assign a single number for an animal’s capacity for welfare or moral status and then compare that number to the numbers of other animals. But if either capacity for welfare or moral status is multidimensional, measuring and comparing those items becomes much more difficult.

If the objective list theory of welfare is correct, then capacity for welfare is almost certainly multidimensional. Suppose one animal has a greater capacity for pleasure and friendship, and a different kind of animal has a greater capacity for wisdom and aesthetic appreciation. Which animal has a greater capacity for welfare? If certain goods are incommensurable, there may not be an all-things-considered answer. Moral status, too, is plausibly multidimensional. The characteristics that philosophers have proposed as contributors to moral status can plausibly come apart. If both intelligence and empathy contribute to moral status, how are we to compare creatures that score high on one but not the other?

It’s certainly true that the multidimensionality of either capacity for welfare or moral status would complicate measurement and comparison of status-adjusted welfare. But I don’t think the appropriate response to this potential difficulty is to give up on investigating capacity for welfare and moral status. If we were able to weight the various dimensions of welfare or status, we could combine them into a single metric by taking a weighted average. Of course, if the various dimensions are incommensurable, the situation is much trickier. However, there is a rich philosophical literature on incommensurable values, and several strategies for dealing with this problem are at least in principle open to us. So the multidimensionality of capacity for welfare or moral status does not by itself doom the usefulness of status-adjusted welfare.
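To make the weighting idea concrete, here is a minimal sketch, assuming (purely for illustration) that we could assign numeric scores and weights to each dimension. All dimension names, weights, and scores below are hypothetical placeholders, not empirical estimates.

```python
# Purely illustrative sketch: collapsing multidimensional capacity-for-welfare
# scores into a single metric via a weighted average. All dimensions, weights,
# and scores are hypothetical, not empirical estimates.

def combined_capacity(scores, weights):
    """Weighted average of per-dimension capacity scores."""
    assert scores.keys() == weights.keys()
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in scores) / total_weight

weights = {"pleasure": 0.5, "friendship": 0.3, "wisdom": 0.2}

# Two imaginary animals that excel on different dimensions:
animal_a = {"pleasure": 0.9, "friendship": 0.8, "wisdom": 0.2}
animal_b = {"pleasure": 0.4, "friendship": 0.3, "wisdom": 0.9}

print(round(combined_capacity(animal_a, weights), 2))  # 0.73
print(round(combined_capacity(animal_b, weights), 2))  # 0.47
```

Of course, the hard philosophical work lies in justifying the weights; if the dimensions are genuinely incommensurable, no choice of weights will be defensible, which is exactly the difficulty at issue.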

A related worry is that moral status might be context-sensitive or action-relative. James Rachels puts it this way: “There is no characteristic, or reasonably small set of characteristics, that sets some creatures apart from others as meriting respectful treatment. That is the wrong way to think about the relation between an individual’s characteristics and how he or she may be treated. Instead we have an array of characteristics and an array of treatments, with each characteristic relevant to justifying some types of treatment but not others. If an individual possesses a particular characteristic (such as the ability to feel pain), then we may have a direct duty to treat it in a certain way (not to torture it), even if that same individual does not possess other characteristics (such as autonomy) that would mandate other sorts of treatment (refraining from coercion)” (Rachels 2004: 169). He concludes, “There is no such thing as moral standing simpliciter. Rather, moral standing is always moral standing with respect to some particular mode of treatment. A sentient being has moral standing with respect to not being tortured. A self-conscious being has moral standing with respect to not being humiliated. An autonomous being has moral standing with respect to not being coerced. And so on” (Rachels 2004: 170).[73]

I’m not sure Rachels is right, but his position is reasonable and deserves consideration. Yet even if his basic idea is correct, I don’t believe the objection dooms the project. The idea that context helps shape which actions are morally permissible is hardly novel or controversial. For instance, adult humans and human infants both have moral standing. But because adults and infants possess different characteristics, the same demand for autonomy renders different actions morally appropriate. In most cases, it would be wrong to restrict an adult’s movement; in most cases, it would be wrong not to restrict an infant’s movement. So I think it’s possible to retain the notion that moral standing is binary, while acknowledging that different characteristics call for different treatments.

Because our understanding of moral status is so incomplete, Shelly Kagan urges us to adopt a pragmatic approach to the topic. He acknowledges that it might be the case that “certain capacities are relevant for a given set of moral claims, while other capacities are the basis of different claims. If so, then a creature with advanced capacities of the one kind, but less advanced capacities of the other, would have a relatively high moral status with regard to the first set of claims, but a low moral status with regard to the second set” (Kagan 2019: 114). However, he believes that “while we may someday conclude that it is an oversimplification to think of status as falling along a single dimension, for the time being, at least, I think we are justified in making use of the simpler model” (Kagan 2019: 115). Since comparative moral value is so neglected within the animal welfare movement, there may be significant returns on relatively shallow investigations of the subject long before we are stymied by complications like multidimensionality.

## Might Welfare Constituents or Moral Interests Be Non-Additive?

I have suggested that we should frame the value of interventions in terms of status-adjusted welfare. If we were to compare the value of an intervention that targeted pigs with an intervention that targeted silkworms, we should consider not only the amount of welfare to be gained but also the moral status of the creatures who would gain the welfare. One way this strategy could be mistaken—or at least significantly more complicated—is if welfare or moral interests are not straightforwardly additive.

Suppose that hedonism is true and suppose that a silkworm’s capacity for pleasure and pain is roughly one one-thousandth that of a pig’s. Does that mean that, all else equal, one thousand silkworms at maximum happiness are worth one pig at maximum happiness? Not necessarily. It might be the case that the tiny pleasures of the silkworms never add up to the big pleasure of the pig. The same might be the case for moral interests. If silkworms have a moral status one one-thousandth that of a pig’s, then, if moral interests are non-additive, it doesn’t follow that the interests of a thousand silkworms—not to be confined, say—are equal in value to the interest not to be confined of a single pig.
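The additive picture being questioned here can be stated as a simple formula: total value = per-individual welfare × moral status × number of individuals. A minimal sketch, using the hypothetical silkworm-to-pig ratio from the example above (all figures illustrative):

```python
# Sketch of the straightforwardly additive view of status-adjusted welfare
# that this objection challenges. All figures are hypothetical.

def status_adjusted_welfare(welfare_per_individual, moral_status, count):
    """Total status-adjusted welfare, assuming simple additivity."""
    return welfare_per_individual * moral_status * count

# Suppose a silkworm's capacity is one one-thousandth of a pig's
# (welfare in arbitrary units; equal moral status assumed for simplicity):
one_pig = status_adjusted_welfare(welfare_per_individual=1000, moral_status=1.0, count=1)
silkworms = status_adjusted_welfare(welfare_per_individual=1, moral_status=1.0, count=1000)

print(one_pig == silkworms)  # True: under additivity, the totals are equal
```

The non-additivist denies precisely this equality: on that view, no number of silkworm-sized pleasures ever sums to a pig-sized one.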

Jean Kazez puts the point this way: “The difficulty of the idea of an exchange rate arises on any view about the value of lives, but most obviously on the ‘capacity’ view. The valuable capacities you get in a chimpanzee life you never get in a squirrel life, however many squirrels you add together. And what you get in a human life you never get in an aurochs life, no matter how many. That’s at least some reason to look askance at the notion of equitable trading of lives for lives. Say that it’s just happiness that makes a life valuable. Pretend chimpanzees are extremely happy, and squirrels only slightly happy. It does not seem true that one chimpanzee life is worth some number of squirrel lives, if you just put enough together. If you had to save one chimpanzee or a boatload of squirrels, it might make sense to save the chimpanzee; you might coherently think that that will give one individual a chance at a good life, which is better than there being lots of fairly low-quality lives” (Kazez 2010: 112).[74] Hence, if welfare constituents or moral interests are non-additive, we may not be able to use status-adjusted welfare to compare interventions.[75]

Although I grant that this position has some initial intuitive appeal, I find it difficult to endorse—or, frankly, really understand—upon reflection. For this position to succeed, there would have to exist some sort of unbridgeable value gap between small interests and big interests. And while the mere existence of such a gap is perhaps not so strange, the placement of the gap at any particular point on a welfare or status scale seems unjustifiably arbitrary. It’s not clear what could explain the fact that the slight happiness of a sufficient number of squirrels never outweighs the large happiness of a single chimpanzee. If happiness is all that non-instrumentally matters, as Kazez assumes for the sake of argument, we can’t appeal to any qualitative differences in chimpanzee versus squirrel happiness.[76] (It’s not as if, for example, chimpanzee happiness is deserved while squirrel happiness is obtained unfairly.) And how much happier must chimpanzees be before their happiness can definitively outweigh the lesser happiness of other creatures? What about meerkats, who we might assume for the sake of argument are generally happier than squirrels but not so happy as chimpanzees? There seems to be little principled ground to stand on. Hence, while we should acknowledge the possibility of non-additivity here, we should probably assign it a fairly low credence.

## Isn’t Probability of Sentience already a Good Enough Proxy for Moral Status and Capacity for Welfare?

According to another objection, when we evaluate the impact of various interventions, we should discount the welfare that would be gained by different kinds of animals by the probability that those kinds of animals are sentient.[77] Cows are plausibly more likely to be sentient than fish; fish are plausibly more likely to be sentient than insects; and so on. Having adjusted for these differences, no discounts for moral status or capacity for welfare are necessary. An animal’s probability of sentience is already a good enough proxy for capacity for welfare and moral status.

Two points are worth mentioning in response. The first is that our uncertainty about moral status and capacity for welfare is much greater than our uncertainty about which creatures are sentient. In his 2017 Report on Consciousness and Moral Patienthood, Luke Muehlhauser puts the issue this way: “In a cost-benefit framework, one’s estimates concerning the moral weight of various taxa are likely more important than one’s estimated probabilities of the moral patienthood of those taxa. This is because, for the range of possible moral patients of most interest to us, it seems very hard to justify probabilities of moral patienthood much lower than 1% or much higher than 99%. In contrast, it seems quite plausible that the moral weights of different sorts of beings could differ by several orders of magnitude. Unfortunately, estimates of moral weight are trickier to make than, and in many senses depend upon, one’s estimates concerning moral patienthood” (Muehlhauser 2017: Appendix Z7).[78] Ignoring capacity for welfare and moral status means ignoring considerations that could drastically alter the way different interventions are valued.

Secondly, it’s not clear that the ranking of animals by probability of sentience will map neatly onto the ranking of animals by moral status or capacity for welfare. We might be uncertain that insects are sentient but come to think that if they were sentient, they would have extremely fast consciousness clock speeds, multiplying their subjective experiences per objective minute compared to large mammals. Consequently, in a ranking of expected sentience, insects might rank just below crustaceans; but in a ranking of expected moral value, insects might rank far above crustaceans. So not only would using sentience probabilities as a proxy for moral status underestimate our uncertainty, such usage might also misalign the way we would ideally like to prioritize species.
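The possible misalignment between the two rankings can be illustrated with invented numbers; both the probabilities and the conditional moral weights below are hypothetical, chosen only to show how the orderings can diverge.

```python
# Illustration (with invented numbers) of how ranking by probability of
# sentience can diverge from ranking by expected moral value.

def expected_moral_value(p_sentient, weight_if_sentient):
    """Probability of sentience times moral weight conditional on sentience."""
    return p_sentient * weight_if_sentient

animals = {
    # name: (probability of sentience, moral weight if sentient)
    "crustacean": (0.40, 1.0),
    "insect": (0.30, 5.0),  # e.g., if fast clock speeds multiply experience
}

by_sentience = sorted(animals, key=lambda a: animals[a][0], reverse=True)
by_expected_value = sorted(animals, key=lambda a: expected_moral_value(*animals[a]), reverse=True)

print(by_sentience)       # ['crustacean', 'insect']
print(by_expected_value)  # ['insect', 'crustacean']
```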

In short, I agree that when calculating the value of a particular intervention, we should discount the welfare gain at stake by the probability that the animals to be affected are sentient. But sentience is no substitute for capacity for welfare or moral status. Hence, we should discount for probability of sentience and adjust for moral status and capacity for welfare.

## Doesn’t Status-Adjusted Welfare Require a Commitment to a Problematic Form of Moral Realism?

Finally, one might be concerned that moral status is just not a real thing. It’s very hard (though not quite impossible) to be an anti-realist with respect to sentience. Even if we can never reliably access the fact, it seems like there is a fact of the matter about whether or not a particular animal feels pleasures or pains. But it’s much easier to question the nature of moral status and imagine that moral status is just a human construct—that there’s no there there.

Nevertheless, I think most of us are committed to taking status-adjusted welfare seriously. If one is uncomfortable with degrees of moral status, unitarianism is a live option. Denying that any creatures have moral status, however, implies that there is no moral difference between harming a person and harming a coffee mug.[79] But most of us feel there is a moral difference, and this difference is explained by the fact that the person has moral standing and the coffee mug does not. One might also be wary of differences in capacity for welfare. If so, there are theories of welfare that accommodate this intuition by assigning all welfare subjects the same capacity. But if one thinks intensity of valenced experience or cognitive sophistication or affective complexity contributes to welfare, then one ought to be open to the idea that different sorts of psychological and neurological capabilities give rise to differences in capacity for welfare.

Of course, even if there is a fact of the matter about moral status and capacity for welfare, learning these facts is going to require lots of empirical data about the relative capacities of different types of animals. Gathering the relevant data will probably require cooperating with a large swath of scientists. This cooperation might be hindered by the perception that moral status and capacity for welfare aren’t scientific properties. Convincing scientists to undertake experiments that will shed light on a property they might not think even exists could be tough. It’s hard enough to get the relevant scientists interested in investigating sentience. Won’t this talk of moral status and capacity for welfare, the objection asks, scare away the very allies we need to resolve our uncertainty about status-adjusted welfare?

Maybe. But biologists, neuroscientists, and comparative psychologists already investigate many of the features we care about. If necessary, we could fund further work in this vein without reference to comparative moral value. Even if the investigation of some features would require convincing scientists to take status-adjusted welfare seriously, that’s a practical difficulty, and little reason by itself to stop thinking about moral status and capacity for welfare.

# Conclusion

Animals differ in all sorts of ways: their neural architecture, their affective complexity, their cognitive sophistication, their sociability. This variation may give rise to differences in phenomenal experience, desire satisfaction, rational agency, and other potentially morally important traits and features. When we allocate resources between human and non-human causes and among different non-human animals, we are implicitly making value judgments about the comparative moral value of different species. These value judgments ought to be made explicit, and they ought to be grounded in both the details of our most plausible philosophical theories and the relevant empirical facts. Although we should not be confident in any particular philosophical theory, if a plurality of plausible theories suggests that psychological capacities affect characteristic moral value, we should be sensitive to those differences when we allocate resources across interventions and cause areas that target different animals. In this post I have attempted to develop a broad conceptual framework for analyzing the impact and importance of capacity for welfare and moral status. Much work remains to be done to specify, with reasonable precision, the magnitude of difference that such considerations could make to our allocative decision-making. Measuring and comparing capacity for welfare and moral status is not going to be easy. But making progress on this issue could greatly advance our ability to improve the world.

# Credits

This essay is a project of Rethink Priorities. It was written by Jason Schukraft. Thanks to Marcus A. Davis, Neil Dullaghan, Derek Foster, David Moss, Luke Muehlhauser, Jeff Sebo, and Saulius Šimčikas for helpful feedback. If you like our work, please consider subscribing to our newsletter. You can see all our work to date here.

# Works Cited

Akhtar, S. (2011). Animal pain and welfare: Can pain sometimes be worse for them than for us? In Beauchamp & Frey (eds.), The Oxford Handbook of Animal Ethics, 495-518.

Bar-On, Y. M., Phillips, R., & Milo, R. (2018). The biomass distribution on Earth. Proceedings of the National Academy of Sciences, 115(25), 6506-6511.

Broom, D. M. (2007). Cognitive ability and sentience: which aquatic animals should be protected? Diseases of Aquatic Organisms, 75(2), 99-108.

Carlson, E. (2000). Aggregating harms — should we kill to avoid headaches? Theoria, 66(3), 246-255.

Carruthers, P. (2007). Invertebrate minds: a challenge for ethical theory. The Journal of Ethics, 11(3), 275-297.

Crisp, R. (2003). Equality, priority, and compassion. Ethics, 113(4), 745-763.

DeGrazia, D. (1991). The distinction between equality in moral status and deserving equal consideration. Between the Species, 7(2), 73-77.

DeGrazia, D. (2008). Moral status as a matter of degree? The Southern Journal of Philosophy, 46(2), 181-198.

DeGrazia, D. (2016). Modal personhood and moral status: A reply to Kagan's proposal. Journal of Applied Philosophy, 33(1), 22-25.

Douglas, T. (2013). Human enhancement and supra-personal moral status. Philosophical Studies, 162(3), 473-497.

Finnis, J. (2011). Natural law and natural rights. Oxford University Press.

Fletcher, G. (2013). A fresh start for the objective-list theory of well-being. Utilitas, 25(2), 206-220.

Fletcher, G. (2016a). The philosophy of well-being: An introduction. Routledge.

Fletcher, G. (2016b). Objective list theory. In G. Fletcher (ed.), The Routledge Handbook of Philosophy of Well-Being. New York: Routledge, pp. 148-160.

Harman, E. (2003). The potentiality problem. Philosophical Studies, 114(1), 173-198.

Hausman, D. M., & Waldren, M. S. (2011). Egalitarianism reconsidered. Journal of Moral Philosophy, 8(4), 567-586.

Hooker, B. (2015). The elements of well-being. Journal of Practical Ethics, 3(1).

Hursthouse, R. (1999). On Virtue Ethics. Oxford University Press.

Kagan, S. (2019). How to count animals, more or less. Oxford, UK: Oxford University Press.

Kazez, J. (2010). Animalkind: What We Owe to Animals. Wiley-Blackwell.

Kraut, R. (2007). What is good and why: The ethics of well-being. Harvard University Press.

Lin, E. (2014). Pluralism about well-being. Philosophical Perspectives, 28, 127-154.

Lin, E. (2017). Against welfare subjectivism. Noûs, 51(2), 354-377.

Lin, E. (2018). Welfare invariabilism. Ethics, 128(2), 320-345.

Mayerfield, J. (1999). Suffering and moral responsibility. Oxford University Press.

McMahan, J. (1996). Cognitive disability, misfortune, and justice. Philosophy & Public Affairs, 25(1), 3-35.

Mill, J. S. (1861/2016). Utilitarianism. In S.M. Cahn (ed.), Seven Masterpieces of Philosophy. Routledge, pp. 337-383.

Muehlhauser, L. (2017). Report on consciousness and moral patienthood. Open Philanthropy Project.

Norwood, F. B., & Lusk, J. L. (2011). Compassion, by the pound: the economics of farm animal welfare. Oxford University Press.

Nussbaum, M. C. (2004). Beyond “Compassion and Humanity”: Justice for Nonhuman Animals. In C. R. Sunstein and M. Nussbaum (eds.), Animal Rights: Current Debates and New Directions. Oxford: Oxford University Press, pp. 299-320.

Parfit, D. (1984). Reasons and persons. Oxford University Press.

Parfit, D. (1997). Equality and priority. Ratio, 10(3), 202-221.

Rachels, J. (2004). Drawing Lines. In C. R. Sunstein and M. Nussbaum (eds.), Animal Rights: Current Debates and New Directions. Oxford University Press, pp. 162–74.

Sachs, B. (2011). The status of moral status. Pacific Philosophical Quarterly, 92(1), 87-104.

Sebo, J. (2018). The moral problem of other minds. The Harvard Review of Philosophy, 25, 51-70.

Singer, P. (2011). Practical ethics, 3rd Edition. Cambridge University Press.

Tiberius, V. (2015). Prudential value. In I. Hirose and J. Olson (eds.), The Oxford Handbook of Value Theory, pp. 158-174.

Vallentyne, P. (2007). Of mice and men: Equality and animals. In N. Holtug and K. Lippert-Rasmussen (eds.), Egalitarianism: New Essays on the Nature and Value of Equality. Oxford University Press, pp. 211-238.

Van Den Hoogen, J., Geisen, S., Routh, D., Ferris, H., Traunspurger, W., Wardle, D. A., ... & Bardgett, R. D. (2019). Soil nematode abundance and functional group composition at a global scale. Nature, 572(7768), 194-198.

Višak, T. (2017). Cross-Species Comparisons of Welfare. In Woodhall, A., & da Trindade, G. G. (eds.), Ethical and Political Approaches to Nonhuman Animal Issues. Palgrave Macmillan, pp. 347-363.

Woodard, C. (2013). Classifying theories of welfare. Philosophical Studies, 165(3), 787-803.

## Notes

1. My colleague Saulius Šimčikas has compiled a long list of estimates of global captive vertebrates. ↩︎

2. See this spreadsheet for details. By my count, every order in the spreadsheet is exploited in numbers greater than ~50 million individuals per year. ↩︎

3. Of course, some of these animals are treated much worse than others. See the ‘Objections’ section for more discussion of this point. ↩︎

4. Ideal in the sense that we are ignoring strategic considerations like how the allocation might affect public opinion. So maybe in an ideal world we would be committing more resources to arthropod welfare, but we can’t in the actual world because doing so would risk too great a reputational harm. ↩︎

5. Some authors prefer the term ‘well-being’ to ‘welfare.’ In many instances, the two terms are meant to be synonymous. However, some authors draw a distinction between well-being and welfare, reserving ‘welfare’ for non-instrumental goods constituted by experience. I use the term ‘welfare’ in the more expansive sense in which a subject’s welfare is constituted by whatever is non-instrumentally good for the subject, whether experiential or non-experiential. ↩︎

6. Note that this range need not be symmetric between positive and negative welfare. An animal might have only a small capacity for positive welfare but a large capacity for negative welfare or vice versa. ↩︎

7. I’m here assuming the additivity of welfare. More on that assumption in the ‘Objections’ section. ↩︎

8. It’s not obvious that they do, but we can substitute a different feature that does raise capacity for welfare without affecting the substance of the thought experiment. ↩︎

9. It’s uncertain that such a pig would remain a pig. But because it is uncertain, it is an epistemic possibility that it would. ↩︎

10. Of course, if there were some animals that were capable of transformation into superpleasure machines and some that were not, that information could be valuable to our technologically advanced descendants. Similarly, if there were a way to reduce the overall intensity of valenced experience, that technology could plausibly lead to reductions in animal suffering if the technique were applied to animals leading net-negative lives. ↩︎

11. Another possibility is that pigs already have the latent potential for extreme pleasure, if, say, we were able to stimulate all their neurons at once. Assuming that pigs cannot artificially achieve this stimulation on their own and that no natural circumstance activates such a stimulation, such a possibility only implies a large potential for pleasure, not a large capacity for pleasure. ↩︎

12. Or, in Lewisian terms, the counterparts of S. ↩︎

13. Admittedly, filling in the details of this relativization will be complex. It’s not at all clear how to define ‘normal variation’ or ‘species-typical animal.’ I set aside that difficulty for now. ↩︎

14. When I say that they are in a position to make a greater contribution, I of course mean on a per capita basis. At the group level, extremely numerous animals might deserve more attention even if their individual capacity for welfare is quite low because collectively the group can make a bigger welfare contribution than other groups. See the “Objections” section for more discussion of this issue. ↩︎

15. Certainly this is true of some individuals. ↩︎

16. See Lin 2018 for discussion and a defense of welfare invariabilism. ↩︎

17. Of course, not all species of birds fly, so unimpeded flight is not a welfare constituent for all birds. In this discussion, ‘birds’ is implicitly restricted to flying birds. ↩︎

18. Again, obviously, these claims aren’t true of all birds. ↩︎

19. A distinct explanation is that flying exemplifies the essence of being a (flying) bird and that swimming exemplifies the essence of being a fish and that exemplifying one’s species-relative essence contributes to one’s flourishing. In this case, one’s degree of flourishing is the non-instrumental good that determines one’s welfare. See Hursthouse 1999, especially chapter 9, for more on the concept of ‘flourishing.’ ↩︎

20. Depending on one’s preferred theory of welfare, these activities might be valuable for their own sake or they might be valuable for the positive mental states they engender. ↩︎

21. Welfare invariabilism implies that if theoretical contemplation were a welfare constituent, then if a fish engaged in theoretical contemplation, it would be non-instrumentally good for that fish. ↩︎

22. If variabilism is true, then determining capacity for welfare is likely to be much more difficult because we'll have to figure out the right theory of welfare for each of the animals that we care about. ↩︎

23. This tripartite division traces back to Parfit 1984, though it’s hardly exhaustive of the contemporary literature. See Woodard 2013 for a novel classificatory scheme that introduces 16 distinct categories. ↩︎

24. In some classificatory schemes of welfare theories, hedonistic theories is replaced with the broader category of mental state theories. A theory is a mental state theory if and only if the constituents of welfare are mental states. Hedonism is by far the most popular mental state theory, so for simplicity’s sake I will avoid discussion of the broader category. ↩︎

25. According to some versions of desire theory the relevant desires need not be one’s actual desires. For instance, full information theory defines welfare in terms of the desires that a suitably idealized version of oneself would hold if one were fully informed. See Tiberius 2015: 164-166 for more on full information theory. ↩︎

26. Both hedonistic theories and desire-fulfillment theories could be understood as objective list theories, but in the context of the traditional classificatory scheme, it’s understood that the goods of an objective list theory go beyond the mere experience of pleasure or satisfaction of desires. ↩︎

27. See Fletcher 2016a for an overview. ↩︎

28. The modal status of this claim is a bit unclear. Even if the welfare constituents discussed in this paragraph are inaccessible to nonhuman animals in the actual world and in nearby possible worlds, it doesn’t follow that these welfare constituents are necessarily inaccessible. ↩︎

29. See Finnis 2011, Fletcher 2013, Fletcher 2016b, Lin 2014, Lin 2017, Hooker 2015 for recent work in the objective list tradition. ↩︎

30. This quote from Kagan 2019 nicely summarizes ways in which objective list welfare constituents might be inaccessible, in whole or in part, to certain nonhuman animals: “First of all, then, people have deeper and more meaningful relationships than animals, with more significant and valuable instances of friendships and love and family relations, based not just on caring and shared affection but on insight and mutual understanding as well. Second, people are capable of possessing greater and more valuable knowledge, including not only self-knowledge and knowledge of one’s family and friends, but also systematic empirical knowledge as well for an incredibly wide range of phenomena, culminating in beautiful and sweeping scientific theories. Third, people are capable of a significantly greater range of achievements, displaying creativity and ingenuity as we pursue a vast range of goals, including hobbies, cultural pursuits, business endeavors, and political undertakings. Fourth, people have a highly developed aesthetic sense, with sophisticated experience and understanding of works of art, including music, dance, painting, literature and more, as well as having a deeper appreciation of natural beauty and the aesthetic dimensions of the natural world, including the laws of nature and of mathematics. Fifth, people have greater powers of normative reflection, with a heightened ability to evaluate what matters, a striking capacity to aim for lives that are meaningful and most worth living, and a remarkable drive to discover what morality demands of us” (48). ↩︎

31. See also this passage: “Now it is an unquestionable fact that those who are equally acquainted with, and equally capable of appreciating and enjoying, both, do give a most marked preference to the manner of existence which employs their higher faculties. Few human creatures would consent to be changed into any of the lower animals, for a promise of the fullest allowance of a beast's pleasures; no intelligent human being would consent to be a fool, no instructed person would be an ignoramus, no person of feeling and conscience would be selfish and base, even though they should be persuaded that the fool, the dunce, or the rascal is better satisfied with his lot than they are with theirs. They would not resign what they possess more than he for the most complete satisfaction of all the desires which they have in common with him. If they ever fancy they would, it is only in cases of unhappiness so extreme, that to escape from it they would exchange their lot for almost any other, however undesirable in their own eyes. A being of higher faculties requires more to make him happy, is capable probably of more acute suffering, and certainly accessible to it at more points, than one of an inferior type” (Mill 1861: chapter 2). ↩︎

32. I discuss the specific capabilities that might make a difference in the second entry in the series. ↩︎

33. For example, differences in neural processing speed might give rise to differences in the subjective experience of time. Thus, for a given minute of objective time, some animals might experience more or less than a minute of subjective time. I discuss this possibility in more detail in the third entry in the series. ↩︎

34. Vallentyne is not himself a hedonist. He adds, “Moreover, well-being does not depend solely on pain and pleasure. It’s controversial exactly what else is relevant — accomplishments, relationships, and so on — but all accounts agree that typical humans have greater capacities for whatever the additional relevant items are” (ibid.). ↩︎

35. See Akhtar 2011 for general discussion of this point. ↩︎

36. See Broom 2007: “For some sentient animals, pain can be especially disturbing on some occasions because the individual concerned uses its sophisticated brain to appreciate that such pain indicates a major risk. However, more sophisticated brain processing will also provide better opportunities for coping with some problems. For example, humans may have means of dealing with pain that fish do not, and may suffer less from pain because they are able to rationalise that it will not last for long. Therefore, in some circumstances, humans who experience a particular pain might suffer more than fish, whilst in other circumstances a certain degree of pain may cause worse welfare in fish than in humans” (103). ↩︎

37. A similar story can be told about pleasurable experiences. The knowledge that a given pleasurable experience is fleeting or undeserved or bad for one’s health can reduce enjoyment of the experience. My dog seems to enjoy her dog treats more than I enjoy my ice cream at least in part because I eat my ice cream with a guilty conscience. ↩︎

38. Alternatively, it might be the statistical regularity of the pattern rather than the phenomenal intensity of the pattern that would be assisted by cognitive sophistication. Thanks to Gavin Taylor for this point. ↩︎

39. Even ignoring the combinatory effects, it might be the case that intellectual, emotional, and social pleasures generally outstrip mere physical pleasures in intensity (and conversely for pains). ↩︎

40. See, inter alia, Višak 2017 for an argument in favor of the so-called self-fulfillment theory of welfare, according to which “a maximally well-off dog or squirrel is faring just as well as a maximally well-off human. An individual’s cognitive and emotional capacities do not necessarily determine how well off this individual can be” (348). ↩︎

41. Moral standing is also sometimes called ‘moral patienthood’ or ‘moral considerability.’ ↩︎

42. Moral standing should be distinguished from moral agency. Moral agency is the capacity to be morally responsible for one’s actions or the capacity to owe moral obligations to other beings. Moral standing does not entail moral agency. ↩︎

43. Note that this is the narrow understanding of sentience. The broader (and more common) understanding of sentience equates it with phenomenal consciousness (i.e., sentience is the capacity for any sort of experience, valenced or not). ↩︎

44. Note that agency is sometimes understood to require something like rational deliberation. This thicker sense of agency would obviously be more restrictive than the thin sense in which agency might be sufficient for moral standing. Still, there is considerable disagreement as to what constitutes a desire, plan, or preference, and one’s views on this issue will influence one’s views on which animals have moral standing and/or one’s view on the plausibility of agency as sufficient for moral standing. ↩︎

45. The theologically minded might prefer a view on which moral standing is grounded in the possession of a Cartesian soul. But on most such accounts, the possession of a Cartesian soul grants sentience or agency or both. So even most theologians will agree that all sentient agents have moral standing because they will think that the class of moral agents is coextensive with the class of beings with Cartesian souls. ↩︎

46. Agency is harder to define than sentience, and this definition complicates the debate over whether agency is sufficient for moral standing. If even crude desires, plans, and preferences are enough for agency, then it appears that creatures like spiders qualify as agents, which may by itself be a reason to suspect agency is insufficient for moral standing (Carruthers 2007). Moreover, if one sets the bar too low for agency, then it will be hard to exclude sophisticated computer programs, like OpenAI Five playing Dota 2. Although it is certainly possible that digital minds can acquire moral standing, there is widespread agreement that current programs do not have such standing. ↩︎

47. Note that some authors use the term ‘moral status’ the way I’m using the term ‘moral standing.’ This terminological difference should be distinguished from the case where an author uses the terms the way I am but who thinks that there are no degrees of moral status, in which case moral status collapses to moral standing. ↩︎

48. ‘Fish’ is a paraphyletic group: any monophyletic taxonomic group containing all fish would also contain tetrapods, which are not fish. ↩︎

49. I’m here bracketing any ecocentrist or relationist views that reject an individualist conception of moral status. ↩︎

50. Other unitarians include Elizabeth Harman, Martha Nussbaum, and Oscar Horta. ↩︎

51. Other proponents of the hierarchical view include Peter Vallentyne, Jean Kazez, and of course John Stuart Mill. ↩︎

52. Prioritarianism is the view according to which additions to welfare matter more the worse off the person is whose welfare is affected. See Parfit 1997 for more discussion. ↩︎

53. Egalitarianism is the view according to which a subject’s welfare is weighted by its standing relative to the welfare of other subjects, with more equal distributions of welfare better than less equal distributions of welfare. See Hausman & Waldren 2011 for more discussion. ↩︎

54. Another option is to reject views with distributive requirements like egalitarianism and prioritarianism. Neither Kagan nor Višak endorse this option. ↩︎

55. Note that Kagan’s position does not entail that prioritarianism and egalitarianism will never demand that we prioritize a mouse’s welfare over a human’s welfare. Depending on the exact difference in moral status, it might, for example, be the case that we ought to prioritize a mouse’s welfare over a human’s welfare when the mouse is a 4 out of 10 and the human is a 60 out of 100. ↩︎
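The arithmetic behind this footnote can be made explicit in a small sketch. Here the two scales are read as status-adjusted welfare levels, and a square-root transform stands in for a concave prioritarian weighting; both the transform and the `marginal_priority` helper are my own illustrative assumptions, not anything Kagan commits to:

```python
import math

def marginal_priority(status_adjusted_welfare: float) -> float:
    """Marginal moral value of one extra unit of welfare under a toy
    prioritarian weighting (the derivative of sqrt). The concave sqrt
    transform is an illustrative choice, not Kagan's own proposal."""
    return 1 / (2 * math.sqrt(status_adjusted_welfare))

mouse = marginal_priority(4)    # mouse at 4 on its 0-10 scale
human = marginal_priority(60)   # human at 60 on its 0-100 scale

# The mouse's lower absolute status-adjusted level gives its next unit
# of welfare greater priority, despite the mouse's lower moral status.
assert mouse > human
```

On any concave weighting of this kind, a unit of welfare at level 4 counts for more than a unit at level 60, which is how prioritarianism can favor the mouse even after status adjustment.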

56. Note that Singer is not necessarily endorsing this view; only saying that it cannot be rejected out of hand as speciesist. ↩︎

57. The view that welfare capacity or rational agency grounds moral standing does not automatically generate a commitment to degrees of moral status, even if welfare capacity and agency admit of degrees. For one thing, although capacity for welfare and capacity for rational choice admit of degrees, the possession of these capacities does not: one either possesses these capacities or one does not. Put another way, one is either a welfare subject or not; one is either a rational agent or one is not. An analogy: Age admits of degrees. In many jurisdictions one must be 18 years old to vote, and there are good arguments that there should be some age restrictions on voting. But those arguments don’t imply that the older one is, the more one’s vote should count. ↩︎

58. As a reminder, these are merely some theoretical difficulties. Actually measuring and comparing these features across animals in practice raises a slew of different but no less vexing problems. I discuss these problems in the second entry [EA · GW] in the series. ↩︎

59. In a recent talk at Notre Dame, Eric Schwitzgebel offers a more extreme version of the same problem concerning divergent AI: “Divergent AI would have human or superhuman levels of some features that we tend to regard as important to moral status but subhuman levels of other features that we tend to regard as important to moral status. For example, it might be possible to design AI with immense theoretical and practical intelligence but with no capacity for genuine joy or suffering. Such AI might have conscious experiences with little or no emotional valence. Just as we can consciously think to ourselves, without much emotional valence, there’s a mountain over there and a river over there, or the best way to grandma’s house at rush hour is down Maple Street, so this divergent AI could have conscious thoughts like that. But it would never feel wow, yippee! And it would never feel crushingly disappointed, or bored, or depressed. It isn’t clear what the moral status of such an entity would be: On some moral theories, it would deserve human-grade rights; on other theories it might not matter how we treat it.” ↩︎

60. In a recent talk at Notre Dame, Eric Schwitzgebel offers the example of “a superpleasure machine but one with little or no capacity for rational thought. It’s like one giant, irrational orgasm all day long. Would it be great to make such things and terrible to destroy them, or is such irrational pleasure not really something worth much in the moral calculus?” ↩︎

61. One might account for these intuitions by appeal to the potential capacities that babies possess. See Harman 2003 for discussion and criticism of this idea. ↩︎

62. One might attempt to skirt this difficulty by appeal to modal capacities. Although the cognitively impaired human does not have the potential to develop species-typical intellectual and emotional sophistication, in nearby possible worlds, the person does possess this potential. See DeGrazia 2016 for discussion and criticism of this idea. ↩︎

63. See Kagan 2019: 164-169 for more discussion of this issue. (Note that this example is for illustrative purposes only. I make no claim as to an actual difference in intelligence between astrophysicists and social media influencers. [And even if astrophysicists were smarter, social media influencers might score higher on other morally relevant traits, like empathy.]) ↩︎

64. If moral status is a continuous gradient and determined at least in part by social, affective, or intellectual capability, then some humans will likely have a higher status than others. If moral status is instead a discrete series of layers, then a single layer may encompass all humans. The likelihood of this possibility depends on how many layers there are. ↩︎

65. Importantly, Kagan is not merely suggesting that we divide moral status into six tiers for practical purposes. He believes there actually are six tiers (give or take a couple) of moral status. This position follows from his (tentative) commitment to practical realism, the view that “moral rules are to be evaluated with an eye toward our actual epistemic and motivational limitations” (Kagan 2019: 292). ↩︎

66. Another term that might be used to capture both moral status and capacity for welfare is ‘moral weight.’ Although ‘status-adjusted welfare’ isn’t a perfect term, I think ‘moral weight’ suffers from two problems. First, to my ear, it doesn’t sound agnostic between the hierarchical approach and the unitarian approach. One informal way of describing unitarianism is ‘the view that rejects moral weights.’ Second, the term is ambiguous. It might mean that different individuals can have the same interest but weight it differently (e.g., it matters morally that the person in extreme poverty puts a different weight on receiving \$100 than Mike Bloomberg does) or it might mean that different individuals with interests of the same weight might not count the same (e.g., the interests of the individual with higher moral status take priority, i.e., the hierarchical approach). ↩︎

67. A maximizing act consequentialist who believes welfare is the only thing of intrinsic value will endorse this answer. However, other normative theories will deliver different answers. For example, some theories will say that a world in which status-adjusted welfare is maximized but unevenly distributed might be worse than a world in which status-adjusted welfare is not maximized but is more evenly distributed. More obviously, axiologies that hold that welfare isn’t the only intrinsic value won’t imply that status-adjusted welfare is the only thing that should be maximized. ↩︎

68. If pasture-raised cows lead net-positive lives, then on some consequentialist views, reducing the stock of pasture-raised cows may actually be a net-negative intervention. ↩︎

69. See the section on intensity of suffering in Stephen Warren’s “Suffering by the Pound” for more detail. ↩︎

70. See Luke Muehlhauser’s “Preliminary Thoughts on Moral Weight [LW · GW]” for the best justified estimates of which I’m aware. Muehlhauser’s ranges are extremely large, appropriately reflecting our deep uncertainty about the subject. ↩︎

71. See, for example, Figure 1 in the FAO’s 2018 “The State of World Fisheries and Aquaculture” report. ↩︎

72. The farming of cochineal may cause an additional 4.6 to 21 trillion deaths, primarily nymphs that do not survive to adulthood. ↩︎

73. For defenses of a similar position, see Vallentyne 2007 and Sachs 2011. ↩︎

74. Kazez adds, “As I put it in the last chapter, species can be very roughly ranged along a ladder. Individual human lives do have more value than individual aurochs lives, because they involve more valuable capacities. If that ranking meant there was an exchange rate, with one human life worth 100 aurochs lives, or something of the sort, then we could get a grip on the ‘profligacy point.’ If you kill more animals to save a human being than a human life is worth, then that’s profligate … and disrespectful. But granting there’s a ranking doesn’t mean recognizing any exchange rate. If one human life has more value than one aurochs life, there’s nothing that says that there must be an equivalence between one human life and 10, or 100, or 1,000, or any number of aurochs lives. And that’s not a matter of speciesist prejudice. The same is true when two animal species are compared. Chimpanzee lives may have more value, typically, than squirrel lives. It doesn’t follow that one chimpanzee is ‘worth’ 10 squirrels, or 100, or 1,000” (Kazez 2010: 112). ↩︎

75. Jamie Mayerfeld makes a similar point about comparing human pains: “I said that my intuitions favor the claim that we should prevent one person from experiencing the pain of torture rather than prevent a million others from experiencing the pain of acute frustration. But in fact my intuitions favor an even stronger claim. It seems to me that when the difference in intensity is this large, no difference in the number of sufferers can justify the more intense suffering. The severe torture of one person seems worse than the painful frustration of any number of people” (Mayerfeld 1999: 183). See Carlson 2000 for more discussion. ↩︎

76. Alternatively, one might adopt John Stuart Mill’s conception of happiness and hold that chimpanzee happiness is the product of higher pleasures and squirrel happiness is the product of lower pleasures. If no amount of lower pleasure could equal any amount of higher pleasure, then one would have a reason to prefer chimpanzee happiness to any amount of squirrel happiness. However, that position is (a) implausible and (b) seems to abandon the principle that happiness is the only thing that matters. ↩︎

77. For general discussion of whether and how to discount for probability of sentience, see Sebo 2018. ↩︎

78. Uncertainty in a cost-effectiveness estimate is not necessarily proportional to uncertainty in a given parameter. And there may be specific instances in which we are more uncertain about sentience than about moral status. (For example, if one thought agency were sufficient for moral standing, one might be able to estimate the moral status of, say, an advanced AI program even if one were unsure whether the AI were sentient.) Nevertheless, the general point appears sound: given the typical difference in uncertainties, reducing uncertainty about moral status and capacity for welfare is normally going to be more impactful than reducing uncertainty about sentience. ↩︎

79. One might adopt a position on which moral properties (like moral status) exist, but they’re not grounded in mind-independent properties. Metaethical constructivism is one such view. If antirealism is the view that moral properties do not exist, then constructivism is not antirealist. (Mind-dependent properties are still properties, after all.) Whether such a view is worthy of the mantle of realism is, however, contentious. ↩︎

comment by Oscar_Horta · 2020-05-18T09:37:05.767Z · score: 18 (14 votes) · EA(p) · GW(p)

It is good to be open to reconsidering one's most basic assumptions every now and then. But I was nevertheless surprised when I read this post, as it questions one of the core ideas of EA, which is that of impartiality, that is, the idea that equal interests count the same. EA without impartiality wouldn't look very much like EA anymore. Partiality drives people to engage in causes that do significantly less good than others, because in doing so they do more good for those favored by their partial views.

In the literature, one of the main ways to oppose impartiality is by using the construct of moral status, according to which equal interests don't always count the same. This is very different from claiming that in different situations we can have different interests. It is not against impartiality to claim that, say, the interest in not dying of a human and a mouse, or of an old human and a young human, may be different. If these interests do differ, then, as they are different interests, they count differently. This is perfectly in accordance with the idea of the equal consideration of interests. But this is totally different from saying that the equal interests of a mouse and a human, or of an old and a young human, shouldn't count the same, because their status is different.

As for the capacity for (positive or negative) welfare, its usefulness is very limited. It can allow you to compare the weight of different individuals only when they're facing the same situation. But, in many other cases, considering that their capacity for welfare is relevant can drive us to make very wrong decisions. Suppose Nemo the fish has a lower capacity for welfare than Jason the human. Suppose I can choose between causing some pain to Jason or a slightly worse pain to Nemo. Other things being equal, choosing the latter would be wrong. This is because what matters for this decision is not Jason and Nemo's capacity for welfare, but the actual interests involved, which, again, other things being equal are determined only by the actual alternative pains at stake.

In practice, most of the decisions we have to make are of this kind, as those affected by the different causes we can compare aren't in the same situation (which is what would allow us to make our decisions on the basis of their capacity for welfare). Rather, they are in situations that differ significantly from each other.

comment by Jason Schukraft · 2020-05-18T15:15:48.259Z · score: 17 (8 votes) · EA(p) · GW(p)

Hi Oscar,

Thanks for your comment. For what it's worth, I am not myself very sympathetic to the hierarchical view that holds that there are differences in moral status among creatures with moral standing. However, I think there are enough thoughtful people who do endorse such a view that it would be epistemically inappropriate to completely dismiss the position. These questions are tough, and I've tried to reflect our deep collective uncertainty about these matters in the post.

(I should perhaps also flag that even if there are differences in moral status, there is no a priori guarantee that humans have the highest moral status. I'm currently working on a piece about the subjective experience of time, and if there are differences in characteristic temporal experience across species, humans certainly don't come out on top by that metric. But perhaps that's irrelevant to moral status.)

Regarding the usefulness of capacity for welfare, naturally I disagree. Take fish, for instance. Fish are a tremendously diverse group of animals, and this diversity is reflected in human exploitation of fish. (By my count, humans exploit five times as many taxonomic families of fish as they do birds.) There is prima facie good reason to think that capacity for welfare differs substantially among different families of fish. The harms we inflict on fish, through aquaculture and commercial fishing, are severe, plausibly among the worst conditions the fish could experience. If capacity for welfare differs among fish, and we are inflicting severe harm on all exploited fish, then those differences in capacity for welfare would give us reason to prioritize some types of fish over others. The fish with the greater capacity for welfare are suffering more, so easing their suffering is more urgent.

Happy to talk more if you'd like.

comment by Oscar_Horta · 2020-05-18T20:49:19.655Z · score: 9 (6 votes) · EA(p) · GW(p)

Hi Jason,

I agree that many thoughtful people reject impartiality (the majority of human beings probably reject impartiality). But this is not in itself a reason to think there is a sound epistemic case against completely rejecting partialism. Thoughtful people can be wrong, as shown by the fact that you can find thoughtful people defending almost all kinds of views, and they can be biased. And there are few basic moral ideas that seem harder to abandon than the idea that how much a given (dis)value should count should be completely independent of the identity of the one who suffers it.

I agree that if the ways different animals are affected coincide, and if some suffer more because of some capacity they have, then knowing more about this is certainly relevant. I just don't think the situations that very diverse animals are typically in are similar enough to make these comparisons. (But then I must confess I'm skeptical from the start about the whole idea that, given what we know now, we can really learn about the possible differences in the capacities for welfare between animals. In fact I also think it may well be the case that there aren't significant differences in that respect, at least among vertebrates and invertebrates with relatively complex nervous systems. But I think the above claim stands even if you disagree with me on this.)

Anyway, my opinions about the capacities of different animals and their usefulness in situations like the ones you mention may well be all wrong! But would you agree with the main point that what ultimately matters is not capacity for welfare but the actual interests at stake in each case?

If you accept this, then maybe you can share my concern that, when different animals are in different situations, considering capacity for welfare instead of the actual interests at stake (as it seems to happen often) can lead us to make wrong decisions.

Thanks!

comment by Jason Schukraft · 2020-05-19T13:26:24.315Z · score: 6 (4 votes) · EA(p) · GW(p)

Hi Oscar,

Thanks for another insightful comment. I think we agree that what ultimately matters morally is realized welfare. I think we disagree about the extent and size of differences in capacity for welfare, our ability to measure capacity for welfare, and the usefulness of thinking about capacity for welfare. (Please correct me if I have misconstrued our points of agreement and disagreement!) I'll take our points of disagreement in reverse order.

It's certainly true that differences in capacity for welfare won't always make a difference to the way we ought to allocate resources. If we are only alleviating mild suffering (or promoting mild pleasure) and we have reason to believe that more or less all welfare subjects have the capacity to experience welfare outcomes greater than mild suffering and mild pleasure, then capacity for welfare isn't really relevant. But it seems to me that most of the time humans exploit nonhuman animals, they inflict what prima facie looks to be intense suffering. If that's right, then knowing something about capacity for welfare might be important. Alleviating the suffering of the animals with a greater capacity for welfare would generally make a bigger welfare improvement.

On the second point: measuring capacity for welfare is going to be extremely difficult and doing so well is a big and long-term project. Nonetheless, I am cautiously optimistic that if we take this topic seriously, we can make real progress. Admittedly, there are a lot of ways such a project could go wrong, so maybe my optimism is misplaced. I lay out my thoughts in much more detail in the second post in this series (due to be released June 1), so maybe we should discuss the issue more then.

Finally, my reading of the literature suggests that most (though not all) plausible theories of welfare predict differences in capacity for welfare, though of course the size and extent of such differences depend on the details of the theory and various empirical facts. I would be curious to know which combination of theoretical and empirical claims you endorse that lead you to believe there aren't significant differences in capacity for welfare across species. (If you're right, thinking about capacity for welfare might still be useful if for no other reason than to dispel the old myth that such differences exist!)

Thanks again for reading and engaging with the post!

comment by Oscar_Horta · 2020-05-20T17:24:05.956Z · score: 6 (4 votes) · EA(p) · GW(p)

Hi Jason,

Thanks for reconstructing and summarizing the discussion. I think this is generally true:

> I think we agree that what ultimately matters morally is realized welfare. I think we disagree about the extent and size of differences in capacity for welfare, our ability to measure capacity for welfare, and the usefulness of thinking about capacity for welfare.

I guess our strongest disagreement might be about our ability to measure capacity for welfare. And I think maybe we can agree too on some of the dangers of giving too much importance to the capacity for welfare.

Here's why I think this. My concern is that accepting capacity for welfare as a rule of thumb for who counts for more often leads people to assume that the interests of certain animals count for more in general than the interests of other animals, even in situations in which the harms they are facing are less important ones. It also leads people to disregard numbers. This is one of the reasons why the interests of mammals typically get much more attention than those of invertebrates and fish(es), even when the situation of those mammals as individuals is not necessarily worse than that of a fish, and even if, due to their very different numbers, their aggregate interests should count for significantly less. These kinds of mistakes are made all the time, not just among the general public, but also among animal advocates.

I suspect you'll probably agree that this is problematic too. If that's so, then our disagreement concerning the usefulness of thinking about capacity for welfare will be smaller than it may seem at first!

As for your question regarding the claims I endorse, axiologically, I think only experiences can be positive or negative. Of course if one defends some forms of preference-satisfactionism and certain objective list theories of welfare one will reach a different conclusion. According to these views, being able to read novels, or being a social animal, may make your capacity for welfare higher. But I don’t find those views plausible.

Concerning my views about what types of minds there may be, to a great degree I'm just agnostic about the differences in intensity of experience. Maybe things are as you think; I just think that the evidence we have doesn't allow us to reach that conclusion. Being able to have experiences that are more complex doesn't necessarily entail being able to have experiences that are more intense. I find it quite plausible that an animal may have only very simple experiences that are nonetheless just as intense as the ones that animals with complex minds could have. The point of the intensity of experiences like pain is not to help you in the decision-making process, as being able to deal with complex information does, but just to give you some motivation to act. I don't think that beings with more complex minds necessarily need more motivation of this kind than those with simpler ones.

Empirically, much of the evidence about the minds of animals that are very different from us concerns the complexity of the information those animals can deal with. Significantly less evidence seems to tell us something relevant for drawing differences between the intensity of the experiences of different animals, and such evidence is often very uncertain. However, I can see that there are exceptions to this, such as the fact that some arthropods continue a certain behavior despite having suffered serious physical harm. This strikes me as evidence in favor of your view. But I think we would need much more evidence in order to conclude anything firmer here. And even if it were true in this case, we can't be certain that this applies in the case of other animals like vertebrates. Maybe there is some point from which all beings have the capacity to have roughly equally intense experiences (but arthropods are below that level). We just don't have enough evidence (or ways to get it at this point).

Thanks!

comment by Jason Schukraft · 2020-05-22T01:51:10.966Z · score: 10 (3 votes) · EA(p) · GW(p)

Hi Oscar,

Thanks again. Much (though not all) of my credence in the claim that there are significant differences in capacity for welfare across species derives from the credence I put in non-hedonistic theories of welfare. But I agree that differences in capacity for welfare don't entail that the interests of the animal with a greater capacity ought always be prioritized over the interests of the animal with the smaller capacity. And of course I agree that numbers matter. As you know, I'm quite concerned about our treatment of some invertebrates [EA · GW]. When I express that concern to people, many suggest that even if, say, bees are sentient, they don't count for as much as, say, cows. I hope that thinking about both the number of exploited invertebrates and their capacity for welfare will help us figure out whether our current neglect of invertebrate welfare is justified. I suspect that when we get clear on what plausibly can and can't influence capacity for welfare (and to what extent), we'll see that the differences between mammals and arthropods aren't great enough to justify our current allocation of resources. At the very least, thinking more about it might reveal that we are deeply ignorant about differences in capacity for welfare across species. We can then try to account for that uncertainty in our allocation of resources.

comment by MichaelA · 2020-05-21T08:09:53.093Z · score: 4 (3 votes) · EA(p) · GW(p)

I found this whole comment thread interesting.

> I agree that many thoughtful people reject impartiality (the majority of human beings probably reject impartiality). But this is not in itself a reason to think there is a sound epistemic case against completely rejecting partialism.

I think two broad (though not necessarily knock-down) arguments against (some version of) those claims are considerations of epistemic modesty/humility [LW(p) · GW(p)] and moral uncertainty [EA · GW]. More specifically, I see that as at least a reason why it's useful to engage with the idea of non-impartial views, and to try to leave one's conceptual framework open to such views.

(That said, I also think there's clear value in sometimes having discussions that are just about one's "independent impressions" - i.e., what one would believe without updating on the views of others. For example, that helps avoid information cascades. And I do personally share strong intuitions towards an impartial/unitarian approach.)

comment by MichaelA · 2020-05-21T08:16:24.459Z · score: 6 (4 votes) · EA(p) · GW(p)

I quite appreciate the way you've engaged with hierarchical approaches and ensured your conceptual framework was open to such approaches, even if you personally aren't very sympathetic towards them.

That said, I think I can see how a reader might get the impression that you're more sympathetic to such approaches than it sounds like you are. E.g., you write:

I have suggested that we should frame the value of interventions in terms of status-adjusted welfare. If we were to compare the value of an intervention that targeted pigs with an intervention that targeted silkworms, we should consider not only the amount of welfare to be gained but also the moral status of the creatures who would gain the welfare.

To me, this reads like you're saying not just that we should have this terminology at hand, nor just that we should be ready to ignore the welfare of entities with 0 moral status, but also that we should adjust things by moral status that varies by degrees.

And as I mentioned in another comment [EA(p) · GW(p)], to me, the term "status-adjusted welfare" also gives that impression. (I'm not saying that's actually the literal meaning of your claims or terms, just that I can see how one might come to that impression.)

comment by MichaelPlant · 2020-05-18T15:00:13.445Z · score: 12 (7 votes) · EA(p) · GW(p)

Thanks for writing this up - I thought this was a very philosophically high-quality forum post, both in terms of its clarity and familiarity with the literature, and have given it a strong upvote!

With that said, I think you've been too quick in responding to the first objection. An essential part of the project is to establish the capacities for welfare across species, but that's neither necessary nor sufficient for making comparisons - for those, we need to know the actual levels of well-being of different entities (or, at least, the differences in their well-being). But knowing about the levels seems very hard.

Let me quickly illustrate with some details. Suppose chicken welfare has a range of +2 to -2 well-being levels, but for cows it's -5 to +5. Suppose further the average actual well-being levels of chickens and cows in agriculture are -1 and -0.5, respectively. Should we prevent one time-period of cow-existence or of chicken-existence? The answer is chicken-existence, all else equal, even though cows have a greater capacity.
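The arithmetic behind this toy example can be sketched quickly (all numbers are the hypothetical ones above, not real estimates):

```python
# Toy model: capacity range vs. realized (average) welfare level.
# All figures are the hypothetical ones from the example above.
chicken = {"range": (-2, 2), "avg_level": -1.0}
cow = {"range": (-5, 5), "avg_level": -0.5}

# Preventing one time-period of existence removes the animal's average
# (negative) welfare, so the more negative level is the bigger gain.
gain = {"chicken": -chicken["avg_level"], "cow": -cow["avg_level"]}
best = max(gain, key=gain.get)
print(best)  # chicken - the level, not the capacity, settles the comparison
```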

Can you make decisions about what maximises well-being if you know the capacities but not the average levels? No. What you need to know are the levels. Okay, so can we determine what the levels in fact are? You say:

Of course, measuring the comparative suffering of different types of animals is not always easy. Nonetheless, it does appear that we can get at least a rough handle on which practices generally inflict the most pain, and several experts have produced explicit welfare ratings for various groups of farmed animals that seem to at least loosely converge

My worry is: what makes us think that we can even "get at least a rough handle"? You appeal to experts, but why should we suppose that the experts have any idea? They could all agree with each other and still be wrong. (Arguably) silly comparison: suppose I tell you a survey of theological experts reported that approximately 1 to 100 angels could dance on the head of a pin. What should you conclude about how many angels can dance on a pin? Maybe nothing. What you might want to know is what evidence those experts have to form their opinions.

I'm sceptical we can have evidence-based inter-species comparisons of (hedonic) welfare-levels at all.

Suppose hedonism is right and well-being consists in happiness. Happiness is a subjective state. Subjective states are, of necessity, not measurable by objective means. I might measure what I suppose are the objective correlates of subjective states, e.g. some brain functionings, but how do I know what the relationship is between the objective correlates and the subjective intensities? We might rely on self-reports to determine that relationship. That seems fine. However, how do we extend that relationship to beings that can't give us self-reports? I'm not sure. We can make assumptions (about general relationship between objective brain states and subjective intensities) but we can't check if we're right or not. Of course, we will still form opinions here, but it's unclear how one could acquire expertise at all. I hope I'm wrong about this, but I think this problem is pretty serious.

If well-being consists in objective goods, e.g. friendship or knowledge, it might be easier to measure those, although there will be much apparent arbitrariness involved in operationalising these concepts.

There will be issues with desire theories too either way, depending whether one opts for a mental-state or non-mental-state version, but that's a further issue I don't want to get into here.

comment by Jason Schukraft · 2020-05-18T16:22:21.887Z · score: 13 (6 votes) · EA(p) · GW(p)

Hi Michael,

Thanks for the comment. You're right that realized welfare is ultimately what matters. My hope is that thinking about capacity for welfare will sometimes help inform our estimates of realized welfare, though this certainly won't be true in every case. As an example of an instance where thinking about capacity for welfare does matter, consider honey bees. At any given time, there are more than a trillion managed honey bees under human control. Varroa destructor mites are a common problem in commercial hives. When a mite attaches to an adult bee, it slowly drains the bee's blood and fat. (It might be comparable to a tick the size of a baseball latching on to a human.) How does this affect the bee's welfare? If bees have a capacity for welfare roughly similar to that of vertebrates, it seems like in the long run we can do a lot more good by focusing on honey bee welfare.

I believe that interspecies comparisons of welfare are extraordinarily difficult, but I think you are still too pessimistic about the prospect of making such comparisons. It's true that on many views welfare will be constituted (in whole or in part) by subjective (i.e., private) states for which we don't have direct evidence. But we can still use inference to the best explanation to justifiably infer the existence of such states. We only have access to our own subjective experiences, but we infer the existence of such states in other humans all the time. (Humans can give self-reports, but of course we can't independently verify such reports.) I think we can do the same with varying degrees of confidence for nonhuman animals.

For a discussion of possible cross-species measures of animal welfare, see this paper by Heather Browning.

Happy to really get in the weeds of this issue if you want to talk more.

comment by MichaelPlant · 2020-05-18T21:18:13.641Z · score: 2 (1 votes) · EA(p) · GW(p)

To fill out the details of what you're getting at, I think you're saying: "the welfare level of animal A is X% of its capacity C. We're confident enough of both X and C in the given scenario that it's better to help animal A than animal B." That may be correct, but you're assuming that you can know the welfare levels because you know the percentage of the capacity. But then I can make the same claim again: why should we be confident we've got the percentage of the capacity right?

I agree we should, in general, use inference to the best explanation. I'm not sure we know how to do that when we don't have access to the relevant evidence (the private, subjective states) from which to draw inferences. If it helps, try putting on the serious sceptic's hat and asking: "okay, we might feel confident animal A is suffering more than animal B, and we do make these sorts of judgements all the time, but what justifies this confidence?". What I'd really like to understand (not necessarily from you - I've been thinking about this for a while!) is what chain of reasoning would go into that justification.

comment by MichaelStJules · 2020-05-22T21:00:49.960Z · score: 2 (1 votes) · EA(p) · GW(p)
But then I can make the same claim again: why should we be confident we've got the percentage of the capacity right?

I think even if we're not confident, bounds on welfare capacity can still be useful. For example, if I know that A produces X net units of good (in expectation), and B produces between Y and Z net units of good, then under risk-neutral expected value maximization, X < Y would tell me that B's better, and X > Z would tell me that A's better. The problem is where Y < X < Z. And we can build a distribution over the percentage of capacity or do a sensitivity analysis, something similar to this [EA · GW], say.
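That decision rule can be made concrete (the function and numbers are purely illustrative, not drawn from any actual welfare estimate):

```python
def compare_with_bounds(x, y, z):
    """Compare option A (known value x) with option B (value somewhere in [y, z])."""
    if x < y:
        return "B"  # even B's worst case beats A
    if x > z:
        return "A"  # even B's best case loses to A
    return "indeterminate"  # y <= x <= z: the bounds alone can't decide

print(compare_with_bounds(3, 5, 9))   # B
print(compare_with_bounds(10, 5, 9))  # A
print(compare_with_bounds(7, 5, 9))   # indeterminate
```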

comment by MichaelStJules · 2020-05-18T04:04:13.734Z · score: 11 (4 votes) · EA(p) · GW(p)
One reading is that capacity for welfare directly determines (at least in part) moral status. The other reading is that moral status is grounded in various capacities that also just so happen to be relevant for determining capacity for welfare. The first interpretation runs the risk of double-counting. Even before considering moral status, we can say that lives that contain more and more of non-instrumental goods are more valuable than lives that contain fewer and less of those non-instrumental goods. It’s not clear why those lives should gain additional moral value—in virtue of a higher moral status—merely because they were more valuable in the first place. For this reason, I think it makes more sense to think that capacity for welfare does not play a direct role in determining moral status, though many of the features relevant for welfare capacity are also relevant for moral status.

I actually find the opposite conclusion more plausible: capacity for welfare is what directly determines moral status (if unitarianism is false; I think unitarianism is true), and specific features/capacities only matter through capacity for welfare and effects on welfare. If we're saying that there are features that determine moral status not through their effects on welfare or capacity for welfare, then it sounds like we're rejecting welfarism. We're saying welfare matters, but it matters more in beings with feature X, but not because of how X matters for welfare. How can X matter regardless of its connection to welfare? That seems pretty counterintuitive to me, as a welfarist. Or am I misunderstanding?

Maybe it's something like this?

Through welfare capacity:

Premise 1. Capacity for welfare (partially) determines moral status.

Premise 2. Feature X (partially) determines capacity for welfare.

Conclusion. Feature X (partially) determines moral status through capacity for welfare.

The other approach:

Premise 1'. If a feature X (partially) determines (actual welfare or) capacity for welfare, then it (partially) determines moral status.

Premise 2'. Feature X (partially) determines (actual welfare or) capacity for welfare.

Conclusion'. Feature X (partially) determines moral status because of (but not necessarily through) capacity for welfare.

Premise 1' seems less intuitive to me than Premise 1. If a feature determines actual welfare, that's already in our moral calculus without need for moral status. As a welfarist, it seems therefore that a feature can only determine moral status because it determines welfare capacity, unless there's some other way the feature can be connected to welfare. If this is the case, how else could it plausibly do this except through welfare capacity?

comment by Jason Schukraft · 2020-05-18T15:57:03.704Z · score: 6 (4 votes) · EA(p) · GW(p)

Hi Michael,

As an example of how capacity for welfare might be distinct from moral status, one might be a hedonist about welfare (and thus think that capacity for welfare is wholly determined by possible range of valenced experience and maybe subjective experience of time) but think that moral status is determined by degree of autonomy or rationality. The precise definition of welfarism is contentious, so I'll leave it to you to decide if that's a violation of welfarism.

However, I think even a welfarist should be wary of letting capacity for welfare determine moral status. The moral status of an individual tells us how much that individual's welfare is worth. If capacity for welfare determines moral status, then it seems like individuals with small capacities for welfare are unjustifiably doubly-penalized: they can only ever obtain a small amount of welfare and, in virtue of that fact, that small amount of welfare counts for less than an equal amount of welfare for an individual with a greater capacity for welfare. That strikes me as the wrong result.
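A toy calculation (with made-up numbers, and status taken to be simply proportional to capacity) shows the double penalty:

```python
# Hypothetical numbers only. Suppose moral status is proportional to
# capacity for welfare, and each animal realizes its full capacity.
def status_adjusted(welfare, capacity, unitarian=False):
    status = 1 if unitarian else capacity  # toy status function
    return welfare * status

big = status_adjusted(10, 10)   # capacity 10, fully realized
small = status_adjusted(2, 2)   # capacity 2, fully realized
print(big / small)  # 25.0: a 5x capacity gap becomes a 25x moral gap
print(status_adjusted(10, 10, unitarian=True)
      / status_adjusted(2, 2, unitarian=True))  # 5.0 for the unitarian
```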

On some interpretations of welfarism, I think the truth of welfarism gives us pretty good reason to endorse unitarianism. I'm also sympathetic to welfarism, but of course there are plenty of people who reject it. Anyone who endorses a retributive principle of justice, for instance, must reject welfarism.

comment by MichaelStJules · 2020-05-18T18:08:32.613Z · score: 2 (1 votes) · EA(p) · GW(p)
As an example of how capacity for welfare might be distinct from moral status, one might be a hedonist about welfare (and thus think that capacity for welfare is wholly determined by possible range of valenced experience and maybe subjective experience of time) but think that moral status is determined by degree of autonomy or rationality. The precise definition of welfarism is contentious, so I'll leave it to you to decide if that's a violation of welfarism.

I don't see how you could motivate that if we accept welfarism (unless we accept objective list theories, but again, that seems to be through welfare capacity). Why are degree of autonomy and rationality non-instrumentally relevant? Why not the width of the visible electromagnetic spectrum, or whether or not an individual can see at all, or other senses?

That strikes me as the wrong result.

I really don't know. It's hard for me to have an intuition either way since both seem wrong to me, anyway. It seems better to me to double penalize an individual for things that are relevant to welfare than to non-instrumentally penalize individuals based on things which are at most instrumentally relevant to welfare.

comment by MichaelStJules · 2020-05-18T04:11:09.806Z · score: 3 (3 votes) · EA(p) · GW(p)

Using welfare capacity to determine moral status also solves the problem of how to weight different features, including combination effects, in a non-arbitrary way, if we can define welfare capacity non-arbitrarily (although I'm skeptical of this, see my other comment [EA(p) · GW(p)]). That being said, the ranking we get out of this for moral status is still only ordinal.

comment by MichaelStJules · 2020-05-18T01:59:54.511Z · score: 11 (6 votes) · EA(p) · GW(p)
Octopuses are solitary creatures and thus plausibly will never experience true friendship or love.

Maybe if you drug them. And if drug effects do not count towards someone's capacities, this might have important moral consequences in cases of mental illness in humans, like depression.

Possibly, they tend to be solitary due to zero-sum competition, and this happens to be circumstantial. See how this octopus and human interact.

Also, mother octopuses often die protecting their eggs, although I'm not aware of them raising their young.

Of course, maybe love and friendship aren't good ways to describe these. Maybe octopuses are more like bees than cows in their maternalism.

If moral agency is a requirement for virtue, fish plausibly cannot be virtuous.

I think this would depend on how narrowly you define agency. If it requires abstract reasoning, maybe not? I think a case could be made for cleaner wrasses, who seem to pass a version of the mirror test and have complex social behaviours. Maybe groupers and moray eels, too, because of their cooperation in hunting?

comment by Jason Schukraft · 2020-05-18T17:34:57.758Z · score: 6 (5 votes) · EA(p) · GW(p)

Hi Michael,

Thanks for the comment. The examples are purely illustrative, so it's probably best not to wrangle over the specifics of cleaner wrasse behavior and octopus drug responses. I think it's plausible there are some creatures that by virtue of their natural solitary behavior are incapable of developing intimate bonds with other animals. And although definitions of moral agency certainly vary, I find it plausible that many animals are moral patients but not moral agents. If those two claims are right, then it shows that objective list theories of welfare predict differences in capacity for welfare, which is the point I aim to make in the text.

comment by Nicolas Delon · 2020-05-21T06:21:44.194Z · score: 9 (4 votes) · EA(p) · GW(p)

Great post! It lays out rigorously a number of important moving parts and will definitely move the conversation forward.

I'm worried about relying too heavily on Kagan (2019). I found his book thought-provoking, clever and illuminating in many ways, but it shouldn't serve as a point of reference or an entry point for discussions of moral status. For one thing, it's extremely recent. More importantly, Kagan has engaged with very little of the literature, much less cited it (he's candid about this, but I still think it's an issue). As a result, he has a highly idiosyncratic conception of moral status. My main worry with his view is one you note: double counting. But I fail to see how the idea of status-adjusted welfare does not reproduce this problem. In fact, it seems built into your definition of moral status:

For our purposes, I’ll let moral status be the degree to which the interests of an entity with moral standing must be weighed in (ideal) moral deliberation or the degree to which the experiences of an entity with moral standing matter morally.

But that's already stacking the deck against the idea that the strength of the reasons provided by interests depends on the nature of those interests, which may determine moral status. Instead, you seem to be presupposing that moral status determines how much those interests matter even when they're similar. But one shouldn't have to argue that unitarianism is true to find that definition plausible. However, I only find it plausible on the condition that hierarchical views as you spell them out are false.

I also happen to have a minority view of moral status, which aligns with Rachels' (2004), albeit for different reasons (for instance, I reject his moral individualism, but that's another can of worms; see Delon 2014 and 2015). On this view (but also see e.g. Sachs 2011, and to some extent DeGrazia 2008), moral status consists in what treatment is owed by moral agents to the bearers of moral status. Likewise, Sebo (2017) argues that agency makes a difference to moral status with respect to how agents ought to be treated, but as far as I know Jeff is a unitarian (see his review of Kagan).

Our obligations to different animals may be stronger, based on a number of factors, including their capacity for welfare, and so you might think this means moral status varies according to capacity for welfare. In a sense, yes, but that's where it ends. Yes, moral status can be a matter of degree, but this doesn't commit one to Kagan's double-counting hierarchy. You can't also have capacity for welfare tell you how much the interests of the bearer of moral status count—that's double counting. It seems to me that that's exactly what status-adjusted welfare does. As you note, for unitarians, this collapses into welfare, but that's only on the surface—they still have to compute welfare as you suggest, which I think many would deny.

On the other hand, you could have a hierarchical view without double counting: it's just the view that tells you that creatures with greater capacity for welfare give us stronger reasons to protect them. The explanation may be that their interests are more numerous, complex, or strong, which is compatible with the principle of equal consideration. In fact, I think Singer has a hierarchical theory of moral status for all intents and purposes. The claim that similar interests count equally is ultimately what Kagan wants to reject, and I don't understand why, nor how he can do this non-arbitrarily or without double-counting.

comment by Jason Schukraft · 2020-05-22T01:33:00.898Z · score: 5 (3 votes) · EA(p) · GW(p)

Hi Nicolas,

Thanks for the comment! There’s a lot of good stuff to unpack here. First I should acknowledge that the subject matter in question is complex, and the post intentionally simplifies some issues just to keep it readable. (For instance, the post assumes intrinsicalism about moral status.) If you’d like, I’d love to schedule a call to discuss the topic in more detail.

I agree that Kagan faces both a double-counting worry and an arbitrariness worry. On the whole, I think these two concerns are decent reasons to reject Kagan’s view. However, if I were to put on my hierarchical hat, I would suggest that so long as the intrinsic characteristics that determine moral status are distinct from the characteristics that determine capacity for welfare, the double-counting worry can be avoided. (I think there are other, more complicated ways to try to sidestep the double-counting worry as well.) The arbitrariness worry is harder to handle, but if one is wedded to certain intuitions, then it might be a bullet worth biting. If appeal to differences in moral status is the only way to avoid obligations that one finds deeply counterintuitive, then the appeal isn’t necessarily arbitrary. (Taking off my hierarchical hat, I think Sebo’s review of Kagan’s book does a good job summarizing why we should be skeptical of the sort of intuitions Kagan consistently draws on.)

I also agree that one can endorse a hierarchy of characteristic moral value without endorsing Kagan’s view. (Kagan says as much in chapter two of his book.) In the post, I’ve tried to suggest that a hierarchy based on capacity for welfare is importantly distinct from a hierarchy based on Kagan-style moral status. I’m sympathetic to the view that ultimately moral status is context-sensitive or agent-relative or somehow multidimensional, but it’s not clear how much of practical value we lose by suppressing this complication. I’ll think more about it!

comment by Nicolas Delon · 2020-05-26T14:06:04.133Z · score: 5 (3 votes) · EA(p) · GW(p)

Thanks a lot for the response, Jason! It seems like we actually agree more than it seemed.

if I were to put on my hierarchical hat, I would suggest that so long as the intrinsic characteristics that determine moral status are distinct from the characteristics that determine capacity for welfare, the double-counting worry can be avoided.

Agreed. If we accept the possibility you suggest, then I can see how status-adjusted welfare doesn't run into double-counting. The question is: what makes these status-conferring characteristics morally relevant if not their contribution to welfare? Some views, I suppose, hold that the mere possession of some intrinsically valuable features—supposedly, rationality, autonomy, being created by God, being human, and whatnot—determine moral status even if they don't contribute to welfare. That's a coherent kind of view, and perhaps you're right that a view like this would not necessarily be arbitrary, but I have a hard time finding it plausible. I just don't understand why some property should determine how to treat x if it has nothing to do with what can harm or benefit x.

If appeal to differences in moral status is the only way to avoid obligations that one finds deeply counterintuitive, then the appeal isn’t necessarily arbitrary.

Yeah, I understand the motivation behind Kagan's move. His descriptions of the distributive implications of unitarianism do make it look like mice just can't have the same moral status as human beings. But it doesn't follow that the interests of mice should count less. Many other morally relevant facts might explain why we ought not to massively shift resources towards mice. But yes, I can see the appeal of the hierarchical views as a solution to these problems. However, we should be wary of which intuitions shape our response to those sorts of cases (as Sebo argues in his review), or we're just going to construct a view that rationalizes whatever allocation of resources we find acceptable. Sometimes, Kagan's reasoning sounds like: "Come on, we're not going to help rats! Therefore they must have a much lower status than persons."

I’m sympathetic to the view that ultimately moral status is context-sensitive or agent-relative or somehow multidimensional

Me too, very much so. As for practical value, I like Kagan's eventual move towards "practical realism" a lot. There's a similar move in Rachels (2004). A helpful way to think about this, for utilitarians, is in terms of R.M. Hare's two levels of moral thinking, nicely developed for animals in Varner (2012).

comment by MichaelStJules · 2020-05-18T04:47:29.050Z · score: 9 (3 votes) · EA(p) · GW(p)
Although I grant that this position has some initial intuitive appeal, I find it difficult to endorse—or, frankly, really understand—upon reflection. For this position to succeed, there would have to exist some sort of unbridgeable value gap between small interests and big interests. And while the mere existence of such a gap is perhaps not so strange, the placement of the gap at any particular point on a welfare or status scale seems unjustifiably arbitrary. It’s not clear what could explain the fact that the slight happiness of a sufficient number of squirrels never outweighs the large happiness of a single chimpanzee. If happiness is all that non-instrumentally matters, as Kazez assumes for the sake of argument, we can’t appeal to any qualitative differences in chimpanzee versus squirrel happiness.[76] [EA · GW] (It’s not as if, for example, that chimpanzee happiness is deserved while squirrel happiness is obtained unfairly.) And how much happier must chimpanzees be before their happiness can definitively outweigh the lesser happiness of other creatures? What about meerkats, who we might assume for the sake of argument are generally happier than squirrels but not so happy as chimpanzees? There seems to be little principled ground to stand on. Hence, while we should acknowledge the possibility of non-additivity here, we should probably assign it a fairly low credence.

"Consent-based" approaches might work. They've been framed in the case of suffering, but could possibly work for happiness, too. Actually, I suppose this is similar to Mill's higher and lower pleasures (EDIT: as you mention in footnote 76), but without being dogmatic about what counts as a higher or lower pleasure even to the point of rejecting the preferences of those who experience both. See:

https://reducing-suffering.org/happiness-suffering-symmetric/#Consent-based_negative_utilitarianism

http://centerforreducingsuffering.org/clarifying-lexical-thresholds/

And, indeed, if we want to ground degrees of suffering and pleasure in the tradeoffs people would make, you will get lexicality unless you reject some tradeoffs, because some people have lexical views (myself included: given a very long life, I'd prefer many pin pricks, one at a time spread out across days, to a full day of torture with no long-term effects). How else could we ground cardinal degrees of suffering and pleasure except through individual tradeoffs?
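One simple way to model a lexical preference is lexicographic comparison, under which no finite amount of the lesser bad ever outweighs any of the greater (a sketch, not a full theory of anyone's view):

```python
# Lexical ("lexicographic") badness: torture-days dominate pin pricks.
# Python tuples compare element by element, so any positive count in the
# first slot outranks any finite count in the second.
def badness(torture_days, pin_pricks):
    return (torture_days, pin_pricks)

many_pricks = badness(0, 10**9)  # a billion pin pricks, no torture
one_torture = badness(1, 0)      # one full day of torture
print(many_pricks < one_torture)  # True: no number of pricks catches up
```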

comment by MichaelStJules · 2020-05-18T05:17:32.204Z · score: 4 (2 votes) · EA(p) · GW(p)

And while it might be the case that nonhuman animals act lexically, since they aren't as future-oriented and reflective as we are, their behaviour on its own might not be a good indication of moral lexicality. If we establish that an animal is suffering to an extent similar to how we suffer when we suffer lexically, then that's a reason to believe its suffering matters lexically; and if we establish that an animal is suffering to an extent similar to how we suffer when we don't suffer lexically, then that's a reason to believe its suffering doesn't matter lexically. In this way, it could turn out that insects act lexically but their suffering doesn't matter lexically. Of course, it could also turn out that insects do suffer in ways that matter lexically.

comment by Jason Schukraft · 2020-05-18T18:15:13.631Z · score: 5 (3 votes) · EA(p) · GW(p)

Hi Michael,

Thanks for the comment. The question of value lexicality is a big issue, and I can't possibly do it justice in these comments alone, so if you want to schedule a call to discuss in more detail, I'm happy to do so.

That caveat aside, I'm pretty skeptical consent-based views can ground the relevant thresholds in a way that escapes the arbitrariness worry. The basic concern is that we can expect differences in ability to consent across circumstances and species that don't track morally relevant facts. A lot hangs on the exact nature of consent, which is surprisingly hard to pin down. See recent debates about the nature of consent in clinical trials, political legitimacy, human organ sales, sex, and general decision-making capacity.

comment by MichaelStJules · 2020-05-18T19:49:20.910Z · score: 3 (2 votes) · EA(p) · GW(p)

I think the word "consent" might have been a somewhat poor choice, since it has more connotations than we need. Rather, the concept is closer to "bearability" or just the fact that an individual's personal preferences seem to involve lexicality, which the two articles I linked to get into. For suffering, it's when someone wants to make it stop, at any cost (or any cost in certain kinds of experiences, say, e.g. any number of sufficiently mild pains, or any amount of pleasure).

There are objections to this, too, of course:

1. We have unreliable intuitions/preferences involving large numbers (e.g. a large number of pin pricks vs torture).

2. We may be trying to generalize from imagining ourselves in situations like sufficiently intense suffering in which we can't possibly be reflective or rational, so any intuitions coming out of this would be unreliable. Lexicality might happen only (perhaps by definition) when we can't possibly be reflective or rational. Furthermore, if this is the case, then this is a reason against the conjunction of trusting our own lexicality directly and not directly trusting the lexicality of nonhuman animals, including simpler ones like insects.

3. We mostly have unreliable intuitions about the kinds of intense suffering people have lexical preferences about, since few of us actually experience it.

That being said, I think each of these objections cuts both ways: they only tell us our intuitions are unreliable in these cases, they don't tell us whether lexicality should be accepted or rejected. I can think of arguments for each:

1. We should trust personal preferences (at least when informed by personal experience), even when they're unreliable, unless they are actually inconsistent with intuitions we think are more important and less unreliable, which isn't the case for me, but might be for others.

2. We should reject unreliable personal preferences that cost us uniformity in our theory. (The personal preferences are unreliable either way, but accommodating lexical ones make our theory less uniform, assuming we want to accept aggregating in certain ways in our theory in the first place, which itself might be contentious.)

I would be happy to discuss over a call, but it might actually be more productive to talk to Magnus Vinding if you can, since he's read and thought much more about this.

comment by MichaelStJules · 2020-05-18T02:08:01.215Z · score: 9 (3 votes) · EA(p) · GW(p)
If welfare is a unified concept and if welfare is a morally significant category across species, it seems as if invariabilism is the better option. Invariabilism is the simpler view, and it avoids the explanatory pitfalls of variabilism at little intuitive cost. While we should certainly leave open the possibility that variabilism is the correct view, in what follows I will assume invariabilism.

These also seem like reasons to reject objective list theories and higher and lower pleasures in favour of simpler hedonistic or desire-fulfilment theories of welfare.

“It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied. And if the fool, or the pig, are of a different opinion, it is because they only know their own side of the question. The other party to the comparison knows both sides”

But what if the human or Socrates disagrees that their pleasures are higher? It seems like we'd be overriding preferences to claim that certain kinds of pleasures are higher pleasures, and if some people who experience both don't recognize any pleasures as higher, we'd have to explain why they're wrong, and it would also seem to follow that most people are making pretty bad tradeoffs in their lives by not prioritizing higher pleasures enough.

comment by MichaelStJules · 2020-05-18T09:10:13.770Z · score: 8 (4 votes) · EA(p) · GW(p)
We want to circumscribe the set of possible worlds so that it includes all and only normal variation in the welfare values of species-typical animals.[13] [EA · GW]
(...)
Admittedly, filling in the details of this relativization will be complex. It’s not at all clear how to define ‘normal variation’ or ‘species-typical animal.’ I set aside that difficulty for now.

If meant statistically, it could be that "normal" still happens to be pretty circumstantial. Most nonhuman animals, for example, probably don't get much intellectual stimulation without humans, but some actually do through things like puzzles and games. I'm guessing you would want to count that as normal, because it's a practical possibility today? But then would that mean that before we started giving animals puzzles and games, they had less moral status? This feels very different from enhancement.

And if we define moral status this way, it could be that human moral status has been increasing over time, too, due to environmental/social factors, like art and entertainment.

It could be that human moral status is actually decreasing or will decrease: given the priority we give to suffering and its causes, humans suffer less in modern times and will continue to suffer less and less in the future, without much increase to our peaks of happiness.

comment by Jason Schukraft · 2020-05-18T15:29:44.047Z · score: 6 (4 votes) · EA(p) · GW(p)

Hi Michael,

The way I'm using the terms, moral status and capacity for welfare are independent of realized welfare. Increasing realized welfare (e.g., through art/entertainment) doesn't raise one's capacity for welfare or moral status.

However, on some views, it does seem at least in principle possible to raise capacity for welfare through things like education. (I view your example of the intellectual stimulation of nonhuman animals as a type of education.) Educating a child might increase her capacity for certain objective goods, thereby increasing her capacity for welfare. On the other hand, it might be that educating the child simply makes it more likely that she will obtain those goods, thus raising her expected realized welfare rather than capacity for welfare. (Or perhaps education does both.) The answer depends on where we draw the line between potential and capacity, which naturally is going to be contentious. I'm hopeful that not much in practice hangs on this question, but I'm open to examples where it does.

comment by MichaelStJules · 2020-05-18T18:31:51.769Z · score: 2 (1 votes) · EA(p) · GW(p)
The way I'm using the terms, moral status and capacity for welfare are independent of realized welfare. Increasing realized welfare (e.g., through art/entertainment) doesn't raise one's capacity for welfare or moral status.

Couldn't it change the "proper subset of physically possible worlds" (or the kinds of sets of these) we use to define the welfare capacity of individuals of a given species? Where before art/entertainment might not have been included, now it is. Either we should have always included it and we were mistaken before for not doing so, since we just didn't know that this was a possibility that should have been included, or the kinds of sets we could use did actually change.

The answer depends on where we draw the line between potential and capacity, which naturally is going to be contentious. I'm hopeful that not much in practice hangs on this question, but I'm open to examples where it does.

The normal development after conception seems like such an example. Obviously it matters for the abortion debate, but, for animals, I've heard the suggestion that juveniles of species with extremely high infant/juvenile mortality rates have little use for the capacity to suffer during this period of high mortality, so this would be a reason to not develop it until later, since it has energetic costs. This was based on Zach Freitas-Groff's paper on wild animal suffering.

comment by MichaelStJules · 2020-05-18T02:27:56.009Z · score: 7 (3 votes) · EA(p) · GW(p)
There are, however, countervailing considerations. While it’s true that sophisticated cognitive abilities sometimes amplify the magnitude of pain and pleasure, those same abilities can also act to suppress the intensity of pain and pleasure.[35] [EA · GW] When I go to the doctor for a painful procedure, I know why I’m there. I know that the procedure is worth the pain, and perhaps most importantly, I know that the pain is temporary. When my dog goes to the vet for a painful procedure, she doesn’t know why she’s there or whether the procedure is worth the pain, and she has no idea how long the pain will last.[36] [EA · GW] It seems intuitively clear that in this case superior cognitive ability reduces rather than amplifies the painful experience.[37] [EA · GW]

Anecdotally, I basically started on my journey towards EA because I read something like this in the case of children hospitalized for chronic illness, from the textbook Health Psychology by Shelley E. Taylor:

Although many children adjust to these radical changes in their lives, some do not. Children suffering from chronic illness exhibit a variety of behavioral problems, including rebellion and withdrawal (Alati et al., 2005). They may suffer low self-esteem, either because they believe that the chronic illness is a punishment for bad behavior or because they feel cheated because their peers are healthy.

Maybe this also highlights another side to moral status about why many humans care more about children: vulnerability and innocence (maybe more naivety than lack of guilt).

comment by MichaelA · 2020-05-21T08:00:54.637Z · score: 5 (3 votes) · EA(p) · GW(p)
Nevertheless, I think most of us are committed to taking status-adjusted welfare seriously. If one is uncomfortable with degrees of moral status, unitarianism is a live option. Denying that any creatures have moral status, however, implies that there is no moral difference between harming a person and harming a coffee mug.[79] But most of us feel there is a moral difference, and this difference is explained by the fact that the person has moral standing and the coffee mug does not.

I found I felt like I disagreed with this, and it was interesting to try to work out why, and how I'd look at things instead. Here's what I came up with (which is meant as more like a report on my intuitive way of looking at things than a sound philosophical theory):

In essence, I'd naturally say one is simply not harming the coffee mug, because the coffee mug can't be harmed. I wouldn't naturally say that one is harming the coffee mug, but that this doesn't matter because the coffee mug lacks some special property that would make its welfare matter.

To expand: That passage seems to assume that we have to look at things with the unit of analysis being an "individual" of some sort, or an object or a being or whatever. Taking that perspective, for all individuals/objects/beings in the world, we determine whether they have moral status (or how much moral status they have), how much welfare they're currently experiencing, how much we can change their welfare, etc., and we make moral judgements and decisions based on that. A coffee mug clearly doesn't have moral status. If we rejected the idea of moral status, then we'd be committed to saying people also don't have moral status, and thus that there's no moral difference between harming a person and harming a coffee mug.

The way I think I want to look at things is using welfare itself as the unit of analysis. Any and all welfare matters. And each unit of welfare matters equally. It's not that a coffee mug's welfare doesn't matter, but rather that it has no welfare, and one can't affect its welfare. So damaging it doesn't count as "harming" it in a morally relevant sense. Whereas humans can have welfare, so actions that affect their welfare matter morally.

Perhaps another way to put this is that I'd give each unit of welfare a moral status of 1. And wouldn't give moral status to any experiencers of welfare.

That said, I think that this post's way of describing things can essentially capture the outputs of this way of looking at things I have, while also capturing other moral theories and ways of looking at things. And that seems quite valuable, both for communication purposes and for reasons of moral uncertainty [EA · GW]. (Also, I'm far from an expert in the relevant areas of philosophy, so there may be reasons why this way of looking at things is conceptually confused.)

comment by Jason Schukraft · 2020-05-22T14:10:33.840Z · score: 3 (2 votes) · EA(p) · GW(p)

Hi Michael,

Thanks for your many comments. The section of the report you quote hints at the debate between moral realists and moral anti-realists, which is too vexed a topic to discuss fully here. However, it seems to me that you and I basically agree about coffee mugs. The way I would describe it is that coffee mugs lack moral standing (and hence lack moral status) because they are neither sentient nor agential. Entities that lack moral standing can be excluded from our moral reasoning (though of course they might matter instrumentally). According to you, coffee mugs should be excluded from our moral reasoning because they are not welfare subjects. Depending on your theory of welfare and moral status, the list of welfare subjects might be coextensive with the list of entities with moral standing.

comment by MichaelStJules · 2020-07-03T17:52:04.236Z · score: 4 (2 votes) · EA(p) · GW(p)

For anyone who might doubt that clock speed should have a multiplying effect (assuming linear/additive aggregation), if it didn't, then I think how good it would be to help another human being would depend on how fast they are moving relative to you, and whether they are in an area of greater or lower gravitational "force" than you, due to special and general relativity. That is, if they are in relative motion or under stronger gravitational effects, time passes more slowly for them from your point of view, i.e. their clock speed is lower, but they also live longer. Relative motion goes both ways: time passes more slowly for you from their point of view. If you don't adjust for clock speed by multiplying, there are two hypothetical identical humans in different frames of reference (relative motion or gravitational potential or acceleration; one frame of reference can be your own) with identical experiences and lives from their own points of view that should receive different moral weights from your point of view. That seems pretty absurd to me.
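The invariance argument above can be sketched numerically. (The specific speed, lifespan, and function names below are my illustrative choices, not from the comment; this is just a toy check of the "multiply by clock speed" claim under special relativity.)

```python
import math

def gamma(v, c=1.0):
    """Lorentz factor for relative speed v (in units where c = 1)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# A hypothetical observer moving at 0.8c relative to us.
v = 0.8
g = gamma(v)  # = 1/0.6

proper_lifespan = 80.0   # years of subjective (proper) time
subjective_rate = 1.0    # their "clock speed" in their own frame

# From our frame: their clock runs slow, but they live longer in our coordinates.
observed_lifespan = proper_lifespan * g   # time-dilated duration
observed_rate = subjective_rate / g       # reduced clock speed

# Multiplying welfare rate by clock speed makes total welfare (rate x duration)
# frame-invariant, so identical lives get identical moral weight from any frame.
total_from_their_frame = subjective_rate * proper_lifespan
total_from_our_frame = observed_rate * observed_lifespan

assert math.isclose(total_from_their_frame, total_from_our_frame)
```

Without the multiplication, the two totals would differ by a factor of g², which is the absurdity the comment points at: identical lives weighted differently purely because of relative motion.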

comment by MichaelStJules · 2020-07-03T17:40:19.132Z · score: 4 (2 votes) · EA(p) · GW(p)

If an objective list theory is true, couldn't it be the case that there are kinds of goods unavailable to us that are available to some other nonhuman animals? Or that they are available to us, but most of us don't appreciate them so they aren't recognized as goods? How could we find out? Are objective list theories therefore doomed to anthropocentrism and speciesism? How do objective list theories argue that something is or isn't one of these goods?

comment by Jason Schukraft · 2020-07-06T14:07:08.377Z · score: 6 (3 votes) · EA(p) · GW(p)

Hey Michael,

Yeah, these are good questions. I think objective list theories are definitely vulnerable to anthropocentric and speciesist reasoning. It's certainly open to an objective list theorist to hold that there are non-instrumental goods that are inaccessible to humans, though I'm not aware of any examples of this in the relevant literature. This sort of question is occasionally raised in the literature on "supra-personal moral status" (i.e., moral status greater than humans). (See Douglas 2013 for a representative example. Fun fact: this literature is actually hundreds of years old; theologians used to debate whether angels had a higher moral status than humans).

Arguing over non-instrumental goods is notoriously difficult. In practice, it usually involves a lot of appealing to intuitions, especially intuitions about thought experiments. Not a fantastic methodology, to be sure, but in most cases it's unclear what the alternative would be.

comment by MichaelA · 2020-05-21T07:33:30.834Z · score: 3 (2 votes) · EA(p) · GW(p)

Thanks for this post - I found it very interesting and very clearly written and reasoned! I learned a lot, and have added it to my list of sources relevant to the idea of "moral weight" [EA · GW].

Another term that might be used to capture both moral status and capacity for welfare is ‘moral weight.’ Although ‘status-adjusted welfare’ isn’t a perfect term, I think ‘moral weight’ suffers from two problems. First, to my ear, it doesn’t sound agnostic between the hierarchical approach and the unitarian approach. One informal way of describing unitarianism is ‘the view that rejects moral weights.’

1. I found that a little confusing. To me, "status-adjusted welfare" sounds notably less agnostic than does "moral weight" regarding the hierarchical and unitarian approaches.

As you note, "Unitarians assign all creatures with moral standing the same moral status, so for the unitarian, status-adjusted welfare just collapses to welfare." So if we're choosing to use the term "status-adjusted welfare", I think we sound like we're endorsing the hierarchical view - even if in reality we want to be open to saying "It turns out moral status is equal between animals, so there's no need for status-adjustment."

Whereas if we're choosing to use the term "moral weight", I think we sound like we're open to the hierarchical view, but we at least avoid making it sound like we're actually planning to adjust things by moral weight.

Perhaps the reason you see "status-adjusted welfare" as sounding more agnostic is because you're imagining the adjustment as potentially being a multiplication by 0, for beings that have no moral status, rather than by a number between 0 and 1? That didn't come to mind intuitively for me, because then I think I'd just want to say the being has no welfare. But maybe that's me deviating from how philosophers would usually think/talk about these matters.

2. The prior point may be related to the fact that, as best I can tell, moral weight and status-adjusted welfare aren't really different terms for the same thing (which seemed to me to be what the first sentence of that quote was implying). At least based on how I've seen the term used (mainly by Muehlhauser), "moral weight" seems to mean pretty much just the "moral status" component - just the term we multiply welfare by in order to get the number we really care about, rather than that final number.

So it seems like the synonym for "status-adjusted welfare" would be not "moral weight" but "moral-weight-adjusted welfare". And that, unlike just "moral weight", does sound to me like it's endorsing the hierarchical view.

3. Somewhat separate point, which I'm uncertain about: I'm not sure status-adjusted welfare really "captures" capacity for welfare. Given your description, it seems status-adjusted welfare is just about multiplying the welfare the being is actually at (or a given change in welfare or something like that) by the moral status of the being - without the being's capacity for welfare playing a role.

Did you mean that status-adjusted welfare "captures" capacity for welfare to the extent that a lower or higher capacity for welfare will tend to reduce or increase the amount of welfare that is being experienced or changed?

comment by Jason Schukraft · 2020-05-22T14:49:13.908Z · score: 2 (1 votes) · EA(p) · GW(p)

Hi Michael,

1. I'll admit that I'm not wedded to the term 'status-adjusted welfare.' I agree that it is less than ideal. I don't think 'moral weight' is better, but I also don't think it's much worse. If anyone has suggestions for a catch-all term for factors that might affect characteristic comparative moral value, I would be interested to hear them.

2. Interesting. My reading of Muehlhauser is that when he talks of moral weight he almost exclusively means 'capacity for welfare' and basically never means 'moral status.' From conversations with him, I get the impression he is a unitarian and so doesn't endorse differences in moral status.

3. Did you mean that status-adjusted welfare "captures" capacity for welfare to the extent that a lower or higher capacity for welfare will tend to reduce or increase the amount of welfare that is being experienced or changed?

This is close to what I meant, though I grant that maybe this isn't strong enough to qualify as 'capturing' capacity for welfare. The basic idea is that a unitarian and a hierarchist could in theory agree that, say, the status-adjusted welfare of a cow is generally higher than the status-adjusted welfare of a mealworm even if they disagree about the nature of moral status. The hierarchist might believe that the mealworm and the cow have the same welfare level, but the mealworm's welfare is adjusted downward. The unitarian might believe that the cow and the mealworm have the same moral status, but the cow has a greater capacity for welfare.

comment by MichaelA · 2020-05-23T01:03:40.195Z · score: 3 (2 votes) · EA(p) · GW(p)

1. To clarify, I don't necessarily see status-adjusted welfare as a bad term. I'd actually say it seems pretty good, as it seems to state what it's about fairly explicitly and intuitively.

I was just responding to the claim that it's better than "moral weight" in that it sounds more agnostic between unitarian and hierarchical approaches. I see it as perhaps scoring worse than "moral weight" on that particular criterion, or about the same.

(But I also still think it means a somewhat different thing to "moral weight" anyway, as best I can tell.)

2. I'm not confident about whether Muehlhauser meant moral status or capacity for welfare, and would guess your interpretation is more accurate than my half-remembered interpretation. Though looking again at his post on the matter [LW · GW], I see this sentence:

This depends (among other things) on how much “moral weight” we give to the well-being of different kinds of moral patients.

This sounds to me most intuitively like it's about adjusting a given unit of wellbeing/welfare by some factor that "we're giving" them, which therefore sounds like moral status. But that's just my reading of one sentence.

In any case, I think I poorly expressed what I actually meant, which was related to my third point: It seems like "status-adjusted welfare" is the product of moral status and welfare, whereas "moral weight" is either (a) some factor by which we adjust the welfare of a being, or (b) some factor that captures how intense the welfare levels of the being will tend to be (given particular experiences/events), or some mix of (a) and (b). So "moral weight" doesn't seem to include the being's actual welfare, and thus doesn't seem to be a synonym for "status-adjusted welfare".

(Incidentally, having to try to describe in the above paragraph what "moral weight" seems to mean has increased my inclination to mostly ditch that term and to stick with the "moral status vs capacity for welfare" distinction, as that does seem conceptually clearer.)

3. That makes sense to me.

comment by Jason Schukraft · 2020-05-23T01:52:47.028Z · score: 3 (2 votes) · EA(p) · GW(p)

Hey Michael,

Thanks again. Regarding (2), I may be conflating a conversation I had with Luke about the subject back in February with the actual contents of his old LessWrong post on the topic [LW · GW]. You're right that it's not clear that he's focusing on capacity for welfare in that post: he moves pretty quickly between moral status, capacity for welfare, and something like average realized welfare of the

"typical" conscious experience of "typical" members of different species when undergoing various "canonical" positive and negative experiences

Frankly, it's a bit confusing. (To be fair to Luke, he wrote that post before Kagan's book came out.) One hope of mine is that by collectively working on this topic more, we can establish a common conceptual framework within the community to better clarify points of agreement and disagreement.

comment by MichaelStJules · 2020-05-18T03:20:22.613Z · score: 3 (3 votes) · EA(p) · GW(p)
If there is a human being that currently scores 10 out of 100 and a mouse that currently scores 9 out of 10, prioritarianism and egalitarianism imply, all else equal, that we ought to increase the welfare of the mouse before increasing the welfare of the human.

To clarify, this is if we're increasing their welfare by the same amount, right? Prioritarianism and egalitarianism wouldn't imply that it's better for the mouse to be moved to 10 than for the human to be moved to 100.

Tatjana Višak (2017: 15.5.1 and 15.5.2) argues that any welfare theory that predicts large differences in realized welfare between humans and nonhuman animals must be false because, given a commitment to prioritarianism[52] [EA · GW] or egalitarianism,[53] [EA · GW] such a theory of welfare would imply that we ought to direct resources to animals that are almost as well-off as they possibly could be.

It seems like the opposite could be true in theory with an antifrustrationist or negative account of welfare where the max is 0: an individual human's welfare might be harder to maximize, given our more varied and/or numerous preferences or stronger interests (e.g. future-oriented preferences). In practice, though, the average life of a nonhuman animal of many species, wild or farmed, does seem to me to involve more suffering (per second).

comment by Jason Schukraft · 2020-05-18T18:19:09.362Z · score: 4 (4 votes) · EA(p) · GW(p)

To clarify, this is if we're increasing their welfare by the same amount, right? Prioritarianism and egalitarianism wouldn't imply that it's better for the mouse to be moved to 10 than for the human to be moved to 100.

Right. The claim is that the prioritarian and the egalitarian would prefer to move the mouse from 9/10 to 10/10 before moving the human from 10/100 to 11/100. Kagan argues this is the wrong result, but because he doesn't want to throw out distributive principles altogether, he thinks the best move is to appeal to differences in moral status between the mouse and the human.
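A minimal sketch of why the prioritarian prefers this move. (The square-root transform is just one illustrative concave weighting, my assumption rather than anything Kagan or the post specifies.)

```python
import math

def prioritarian_value(welfare):
    # A concave transform: a unit of welfare counts for more
    # the worse off the subject is in absolute terms.
    return math.sqrt(welfare)

def marginal_value(before, after):
    return prioritarian_value(after) - prioritarian_value(before)

# Mouse: welfare 9 out of a capacity of 10; human: 10 out of 100.
mouse_gain = marginal_value(9, 10)   # one unit, near the mouse's ceiling
human_gain = marginal_value(10, 11)  # one unit, far below the human's ceiling

# The mouse's absolute welfare (9) is lower than the human's (10), so the
# prioritarian values the mouse's extra unit more, even though the mouse is
# already almost as well off as it could possibly be.
assert mouse_gain > human_gain
```

The capacities (10 vs. 100) play no role in the prioritarian calculation, only absolute welfare does, which is exactly why Kagan finds the result counterintuitive and reaches for moral status instead.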

comment by MichaelStJules · 2020-05-18T01:19:55.490Z · score: 3 (2 votes) · EA(p) · GW(p)

I'm still getting through your post, so I apologize if this is addressed later in it.

In somewhat formal terms, the capacity for welfare for some subject, S, is determined by the range of welfare values S[12] [EA · GW] experiences in some proper subset of physically possible worlds.

EDIT: I don't think it necessarily means rejecting independence of irrelevant alternatives (IIA), but doing so might be part of some approaches.

I think this means rejecting the independence of irrelevant alternatives (IIA), which is something consequentialists typically take for granted, often without even knowing, by simply assuming we can rank all conceivable outcomes according to a single ranking. Rejecting it means whether choice A is better or worse than choice B can depend on what other alternatives are available. I'm not personally convinced either way about IIA (and I think rejecting it can resolve important paradoxes and impossibility results in population ethics, like the repugnant conclusion), but I wouldn't want to reject IIA to define and value something like capacity.

Another assumption this seems to make is that S is actually the same subject across different outcomes (in which they have different levels of welfare). I think there's a better argument against this in cases of genetic enhancement, which could be used to support valuing capacities if we think subjects who differ genetically or significantly in capacities are different subjects, but I also think attempts to identify subjects across outcomes or time are poorly justified, pretty arbitrary and face good objections. This is the problem of personal identity, and Parfit's Relation R seems like the best solution I'm aware of, but it also seems too arbitrary to me. I lean towards empty individualism.

comment by Jason Schukraft · 2020-05-18T17:25:19.560Z · score: 3 (2 votes) · EA(p) · GW(p)

Hi Michael,

Thanks for the comment. The definition is meant to be neutral with respect to IIA.

The definition does assume that either S is identical across the relevant worlds or (as I mention in footnote 12 [EA · GW]) the subjects in the world stand in the counterpart relation to one another. Transworld identity is a notoriously difficult topic. I'm here assuming that there is some reasonable solution to the problem.

I'm not sure how much genetic change an individual can undergo whilst remaining the same individual. (I suspect lots, but intuitions seem to differ on this question.) As I mention in footnote 9 [EA · GW], it's also unclear how much genetic change an individual can undergo whilst remaining the same species.

comment by MichaelStJules · 2020-05-18T17:53:28.398Z · score: 2 (1 votes) · EA(p) · GW(p)

Thanks! I wasn't aware of transworld identity being a separate problem.

I'm not sure how much genetic change an individual can undergo whilst remaining the same individual. (I suspect lots, but intuitions seem to differ on this question.)

I doubt that there will be a satisfying answer here (especially in light of transworld identity), and I think this undermines the case for different degrees of moral status. If we want to allow morally relevant features to sometimes vary continuously without changing identity, then, imo, the only non-arbitrary lines to draw would be where a feature is completely absent in one but present in another. But, I think there are few features that are non-instrumentally morally relevant; indeed only welfare and welfare capacity on their own seem like they could be morally relevant. So, it seems this could only work if there are different kinds of welfare, like in objective list theories, or with higher and lower pleasures.

As I mention in footnote 9 [EA · GW], it's also unclear how much genetic change an individual can undergo whilst remaining the same species.

I think species isn't fundamental anyway; its definition is fuzzy, and it's speciesist to refer to it non-instrumentally. It's not implausible to me that, if identity works at all (which I doubt), a pig in one world is identical to an individual who isn't a pig in another world.

comment by MichaelStJules · 2020-05-18T07:02:52.490Z · score: 3 (3 votes) · EA(p) · GW(p)

I wrote some thoughts related to moral status (not specifically welfare capacity) and personal identity here (EDIT: to clarify, the context was a discussion of the proposed importance of moral agency to moral status, but you could substitute many other psychological features for moral agency and the same argument should apply):

It seems to me that any specific individual is only a moral agent sometimes, at most. For example, if someone is so impaired by drugs or overcome with emotion that it prevents them from reasoning, are they a moral agent in those moments? Is someone a moral agent when they're asleep (and dreaming or not dreaming)? Are these cases so different from removing and then reinserting and reattaching the brain structures responsible for moral agency? In all these cases, the connections can't be used due to the circumstances, and while the last case is the clearest since the structure has been removed, you could say the structure has been functionally removed in the others. I don't think it's accurate to say "they can engage in rational choice" under these circumstances.
Perhaps people are moral agents most of the time, but wouldn't your account mean their suffering matters less in itself while they aren't moral agents, even as normally developed adults? In particular, I think intense suffering will often prevent moral agency, and while the loss of agency may be bad in itself (although I'm not sure I agree), the loss of agency from sleep would be similarly bad in itself, so this shouldn't be much worse than a human being forced to sleep and a nonhuman animal suffering as intensely, ignoring differences in long-term effects, and if the nonhuman animal's suffering doesn't matter much in itself relative to the (temporary) loss of moral agency, then neither would the human's. Torturing someone may often not be much worse than forcing someone to sleep (ignoring long-term effects), if the torture is intense enough to prevent moral agency. Or, deliberately, coercively and temporarily preventing a person's moral agency and torturing them isn't much worse than just deliberately, coercively and temporarily preventing their moral agency. This seems very counterintuitive to me, and I certainly wouldn't feel this way about it if I were the victim. Suffering in itself can be far worse than death.

Now, let's suppose identity and moral status are preserved to some degree in more commonsensical ways, and the human prefrontal cortex confers extra moral status. Then, there might be weird temporal effects. Committing to an act of destroying someone's prefrontal cortex and torturing them would be worse than destroying their prefrontal cortex and then later and independently torturing them, because in the first case, their extra moral status still applies to the torture beforehand, but in the second, once their prefrontal cortex is destroyed, they lose that extra moral status that would make the torture worse.

comment by MichaelA · 2020-05-21T05:23:42.033Z · score: 1 (1 votes) · EA(p) · GW(p)

I think what you're saying makes sense to me, but I'm confused by the fact you say "I wrote some thoughts related to moral status (not specifically welfare capacity) and personal identity here", but then the passage appears to be about moral agency, rather than about moral status/patienthood.

And then occasionally the passage appears to use moral agency as if it means moral status/patienthood. E.g., "Perhaps people are moral agents most of the time, but wouldn't your account mean their suffering matters less in itself while they aren't moral agents, even as normally developed adults". Although perhaps that reflects the particular arguments that that passage of yours was responding to.

Could you clarify which concept you were talking about in that passage?

(It looks to me like essentially the same argument you make could hold in relation to moral status anyway, so I'm not saying this undermines your points.)

comment by MichaelStJules · 2020-05-21T05:43:05.373Z · score: 3 (2 votes) · EA(p) · GW(p)

The original context for that comment was in a discussion where moral agency was proposed to be important, but I think you could substitute other psychological features (autonomy, intelligence, rationality, social nature, social attachments/love, etc.) for moral agency and the same argument would apply to them.

comment by MichaelStJules · 2020-07-31T20:05:41.916Z · score: 2 (1 votes) · EA(p) · GW(p)

Maybe an alternative to moral status to capture "speciesist" intuitions is that we should just give more weight to more intense experiences than the ratio scale would suggest and this could apply to both suffering and pleasure (whereas prioritarianism or negative-leaning utilitarianism might apply it only to suffering, or to overall quality of a life). Some people might not trade away their peak experiences for any number of mild pleasures. This could reduce the repugnance of the repugnant conclusion (and the very repugnant conclusion, too) or even avoid it altogether if taken far enough (with lexicality, weak or strong). This isn't the same as Mill's higher and lower pleasures; we're only distinguishing them by intensity, not quality, and there need not be any kind of discontinuity.
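A minimal sketch of this intensity weighting (hypothetical; the function and exponent are illustrative, not anything proposed in the thread beyond the idea above) applies a superlinear transform to each experience's intensity before summing, so intense experiences count more than the ratio scale would suggest:

```python
def weighted_value(intensities, exponent=3):
    """Aggregate experience intensities, weighting intense experiences
    superlinearly (exponent > 1) rather than on a plain ratio scale.

    Positive intensities are pleasures, negative are sufferings; the sign
    is preserved, so suffering and pleasure are weighted symmetrically,
    unlike prioritarian or negative-leaning weightings."""
    return sum(
        (1 if x >= 0 else -1) * abs(x) ** exponent
        for x in intensities
    )

# One peak experience of intensity 10 outweighs 200 mild pleasures of
# intensity 1, even though the unweighted totals are 10 vs. 200.
peak = weighted_value([10])       # 1000
mild = weighted_value([1] * 200)  # 200
```

Note that with any finite exponent, some sufficiently large number of mild pleasures still outweighs the peak experience; capturing the stronger intuition that no number suffices would require the lexicality (weak or strong) mentioned above.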

That being said, I've come to believe that there's no fact of the matter about the degree to which one experience is better than another (for the same individual or across individuals). I was already a moral antirealist, but I'm more confident that welfare in different individual experiences can in principle (though not in practice) be compared as better or worse, even between species, than I am in cardinal welfare comparisons. Simon Knutsson has written about this here and here.

comment by MichaelStJules · 2020-05-18T04:39:12.312Z · score: 2 (1 votes) · EA(p) · GW(p)
Hence, if welfare constituents or moral interests are non-additive, we may not be able to use status-adjusted welfare to compare interventions.

I don't see why you couldn't combine them. You could aggregate non-additively based on status-adjusted welfare instead of welfare, or moral status could be a different kind of input to your non-additive aggregation. Your social welfare function could be a function of the sequence of pairs of moral status and welfare in outcome O, i.e. W(O) = f((s_1, w_1), (s_2, w_2), …, (s_n, w_n)).
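As an illustrative sketch (hypothetical; the prioritarian square-root aggregation and the function names are mine, not from the thread), a non-additive social welfare function can take status-adjusted welfare as its input:

```python
import math

def social_welfare(individuals):
    """Non-additive aggregation over (moral_status, welfare) pairs.

    Each individual's welfare is first adjusted by moral status, then a
    sign-preserving concave transform is applied before summing, making
    the aggregation prioritarian (non-additive in welfare) while still
    operating on status-adjusted welfare."""
    total = 0.0
    for status, welfare in individuals:
        adjusted = status * welfare
        # Concave transform: diminishing marginal value of welfare gains.
        total += math.copysign(math.sqrt(abs(adjusted)), adjusted)
    return total
```

For example, an individual with full status and welfare 4 contributes the same as one with status 0.5 and welfare 8, since both have status-adjusted welfare 4 before the concave transform. Moral status could equally be fed into the aggregation as a separate argument rather than as a multiplier; this sketch just shows one combination.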

comment by MichaelStJules · 2020-05-18T04:18:05.806Z · score: 2 (1 votes) · EA(p) · GW(p)
Many people have the intuition that human babies have the same moral status as human adults despite the fact that adults are much more cognitively and emotionally sophisticated than babies.[61] [EA · GW] Many people also have the intuition that severely cognitively-impaired humans, whose intellectual potential has been permanently curtailed, have the same moral status as species-typical humans.[62] [EA · GW] And many people have the intuition that normal variation in human intellectual capacities makes no difference to moral status, such that astrophysicists don’t have a higher moral status than social media influencers.[63] [EA · GW] These intuitions are easier to accommodate if moral status is discrete.[64] [EA · GW]

I don't think we can accommodate "Many people also have the intuition that severely cognitively-impaired humans, whose intellectual potential has been permanently curtailed, have the same moral status as species-typical humans.[62] [EA · GW]" for every theoretically possible extent of impairment (as long as the individual remains sentient, say) without abandoning degrees of moral status entirely. Maybe actually existing sentient humans have never been impaired enough for the difference to matter, and that's all these intuitions refer to?

Also, if moral status is discrete but can differ between two individuals because of the degrees to which features present in both are expressed, then the cutoff is going to be arbitrary, and that seems like a good argument against discrete statuses. So it seems that different discrete moral statuses could only be justified by the presence or complete absence of features. But then we get weird (though perhaps not implausible) discontinuities: an individual A could have an extra feature to a vanishing degree, yet be a full finite degree of moral status above another individual, B, who is identical except for that feature, and have as much status as an individual, C, who has that feature to a very high degree but is otherwise identical to both. We can make the degree to which the feature is present in A arbitrarily small, and this would still hold.
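The discontinuity can be made concrete with a toy model (hypothetical; the numbers are arbitrary): status jumps by a full increment as soon as the feature is present to any nonzero degree, however small.

```python
def moral_status(feature_degree, base=1.0, bonus=1.0):
    """Toy model of discrete moral status conferred by the mere presence
    of a feature: any nonzero degree of the feature yields the full bonus,
    so status is discontinuous at feature_degree = 0."""
    return base + (bonus if feature_degree > 0 else 0.0)

# B lacks the feature; A has it to a vanishing degree; C has it strongly.
status_B = moral_status(0.0)    # 1.0
status_A = moral_status(1e-9)   # 2.0 -- a full increment above B
status_C = moral_status(100.0)  # 2.0 -- no higher than A
```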

comment by MichaelA · 2020-05-21T07:39:20.358Z · score: 1 (1 votes) · EA(p) · GW(p)

(Minor points)

Even before considering moral status, we can say that lives that contain more and more of non-instrumental goods are more valuable than lives that contain fewer and less of those non-instrumental goods.

Did this mean something like:

Even before considering moral status, we can say that lives that contain more types of non-instrumental goods, and/or more of each type, are more valuable than lives that contain fewer types of those non-instrumental goods, and/or less of each type.

Also, footnote 59 appears to be missing a quotation mark to open the quote.

The farming of cochineal may cause an additional 4.6 to 21 trillion deaths, primarily nymphs that do not survive to adulthood.

I assume this is the annual number of deaths?

comment by Jason Schukraft · 2020-05-22T14:20:55.735Z · score: 3 (2 votes) · EA(p) · GW(p)

Hi Michael,

Strictly speaking, the two sentences aren't equivalent. If you remove the two instances of "or" in the second sentence, then they are.

Footnote 59 has been fixed, thanks.

Yep, those are meant to be annual deaths.

comment by MichaelA · 2020-05-21T07:38:59.648Z · score: 1 (1 votes) · EA(p) · GW(p)

(Minor points)

Almost certainly, all sentient agents have moral standing.[45] It’s likely that sentience is sufficient on its own for moral standing, though that view is just slightly more controversial.

I found that first sentence slightly surprising. That'd be my preferred stance, but I'd guess that a great many people would disagree. Though I don't know how many people who've thought about it a lot disagree. I'd be interested to know whether that sentence reflects your own considered judgement, or the consensus view among philosophers.

Or was that sentence actually meant to indicate that "Almost certainly, all beings with moral standing are sentient" (i.e., that sentience is almost certainly necessary, rather than sufficient, for moral standing)?

The theological-minded might prefer a view on which moral standing is grounded in the possession of a Cartesian soul. But on most such accounts, the possession of a Cartesian soul grants sentience or agency or both. So even most theologians will agree that all sentient agents have moral standing because they will think that the class of moral agents is coextensive with the class of beings with Cartesian souls.

1. Was that last sentence meant to say they will think "that the class of moral patients is coextensive with the class of beings with Cartesian souls"?

2. It seems that one could believe that "the possession of a Cartesian soul grants sentience or agency or both", but that there are also other ways of gaining sentience or agency or both, and thus that there may be sentient beings who aren't moral patients (if possession of a Cartesian soul is required for moral patienthood). Was the second sentence meant to imply something like "the possession of a Cartesian soul is necessary for sentience or agency or both"?

comment by Jason Schukraft · 2020-05-22T14:30:17.800Z · score: 3 (2 votes) · EA(p) · GW(p)

Hi Michael,

The sentence you quote is meant to express a sufficiency claim, not a necessity claim. But note that the sentence is about both sentience and agency. I don't know of any serious contemporary philosopher who has denied that the conjunction of sentience and agency is sufficient for moral standing, though there are philosophers who deny that agency is sufficient and a small number who deny that sentience is sufficient.

It's true that one could hold a view that moral standing is wholly grounded in the possession of a Cartesian soul, that the possession of a Cartesian soul grants agency and sentience, and that there are other ways to be a sentient agent that don't require a Cartesian soul. If that were true, then agency and sentience would not be sufficient for moral standing. But I don't know anybody who holds that view. Do you?

comment by MichaelA · 2020-05-23T00:41:29.713Z · score: 0 (2 votes) · EA(p) · GW(p)
I don't know of any serious contemporary philosopher who has denied that the conjunction of sentience and agency is sufficient for moral standing, though there are philosophers who deny that agency is sufficient and a small number who deny that sentience is sufficient.

Interesting, thanks!

But I don't know anybody who holds that view. Do you?

I don't (but I know very little about the area as a whole, so I wouldn't update much on that in particular).

I can see why, if practically no one holds that view, "even most theologians will agree that all sentient agents have moral standing". I guess I asked my question because I interpreted the passage as saying that that followed logically from the prior statements alone, whereas it sounds like instead it follows given the prior statements plus a background empirical fact about theologians' view.

comment by MichaelA · 2020-05-21T07:36:16.242Z · score: 1 (1 votes) · EA(p) · GW(p)

Minor matter: Do you see a reason to prefer the term "welfare subject" and "moral standing" to "moral patient" and "moral patienthood"? For example, are the former terms more popular in the philosophical literature?

I see five potential perks of the latter pair of terms:

• Their relationship to each other is obvious from the terms themselves (whereas with "welfare subject" and "moral standing", you'd have to explain to someone new to the topic that there's a relationship between those terms)
• Their relationship with "moral agent"/"moral agency" seems more obvious from the terms themselves.
• Compared to "moral standing", "moral patient" seems less likely to end up getting confused with "moral status"
• "moral patient" doesn't have to take a stand on whether welfare is the only thing that's non-instrumentally morally good (or whether it's non-instrumentally morally good at all), whereas focusing on whether something is a "welfare subject" could arguably be seen as implying that.
  • Although in practice EAs probably will be focusing on welfare as the only non-instrumentally morally good thing, and I'm ok with that myself.
• I feel a vague sense that "welfare" (and thus "welfare subject") might sound to some people like it's focusing on a hedonistic view of wellbeing, rather than on a desire-fulfilment or objective list view. But I could very well be wrong about that.
comment by Jason Schukraft · 2020-05-22T14:36:53.950Z · score: 3 (2 votes) · EA(p) · GW(p)

Hi Michael,

First, to clarify, strictly speaking welfare subject is not meant to be synonymous with moral patient. Some people believe that things that lack moral standing can still be welfare subjects. You might think, for example, that plants aren't sentient and so don't have moral standing, but nevertheless there are things that are non-instrumentally good for plants, so plants can be welfare subjects. (I don't hold this view, but some do.)

Otherwise, I'm mostly sympathetic to your points. I don't object to talk of 'moral patienthood.' 'Moral standing' appears to be more popular in the literature, but maybe that's a terminological mistake.

comment by MichaelA · 2020-05-23T00:44:19.876Z · score: 0 (2 votes) · EA(p) · GW(p)

Thanks for that clarification and that answer!

comment by RomeoStevens · 2020-05-19T02:23:05.961Z · score: 1 (1 votes) · EA(p) · GW(p)

This is only half formed but I want to say something about a slightly different frame for evaluation, what might be termed 'reward architecture calibration.' I think that while a mapping from this frame to various preference and utility formulations is possible, I like it more than those frames because it suggests concrete areas to start looking.

The basic idea is that in principle it seems likely that it will be possible to draw a clear distinction between reward architectures that are well suited to the actual sensory input they receive and reward architectures that aren't (by dint of being in an artificial environment). In a predictive coding sense, a reward architecture that is sending constant error signals that an organism can do nothing about is poorly calibrated, since it is directing the organism's attention to the wrong things. Similarly there may be other markers that could be spotted in how a nervous system is sending signals, e.g. lots of error collisions vs few, in the sense of two competing error signals pulling behavior in different directions.

I'd be excited about a medium depth dive into the existing literature on distress in rats and what sorts of experiments we'd ideally want done to resolve confusions.