Probability estimate for wild animal welfare prioritization

post by Stijn · 2019-10-23T20:47:21.236Z · score: 7 (11 votes) · EA · GW · 20 comments

Contents

  Major cause areas
  Lower bound probability estimate
    Overall estimate of lower bound
  Upper bound probability estimate
  Interconnections and indirect cause areas
    Short-term animal welfare
    Short-term human welfare
    Long-term human welfare
  Summary: probability estimates of major cause areas

In this article I calculate my subjective probability estimate that the problem of wild animal suffering is the most important cause area in effective altruism. I will use a Fermi estimate to calculate lower and upper bounds of the probability that research about interventions to improve wild animal welfare should be given top priority. A Fermi estimate breaks the probability up into several factors such that the estimate of the total probability is the product of the estimates of the factors. This method is known in superforecasting [EA · GW] to increase accuracy or predictive power.

With the lower and upper bound estimates, and a discussion of interconnectedness of cause areas, I estimate probabilities for the four major effective altruism cause areas. These probabilities can serve as allocation percentages for a donation portfolio.

Major cause areas

Effective altruism has four major cause areas [LW · GW], a division also reflected in the four effective altruism funds. First, there is the meta-level cause area: community building, prioritization research and awareness raising about effective altruism. Next, there are three object-level cause areas: human welfare, animal welfare and the long-term future. Those three object-level cause areas can be better split into four, based on two dimensions: time (short-term versus long-term) and target subject (human versus non-human animal).

With these two dimensions, we can create four object-level cause areas. The short-term human cause area involves increasing current-generation human welfare, primarily by improving global health and human development and reducing extreme poverty. The long-term human cause area involves guaranteeing far-future human welfare, primarily by avoiding existential risks (X-risks) that could end human civilization. The short-term animal cause area involves increasing animal welfare, primarily by decreasing animal suffering in factory farming. Finally, the long-term animal cause area deals with wild animal welfare, primarily through research into safe and effective interventions in nature to improve far-future wild animal welfare. Due to a lack of effective interventions and knowledge, short-term wild animal welfare improvements are currently infeasible (intractable).

There are other cause areas, such as effective environmentalism, and other target subjects, such as ecosystems, plants, digital sentient entities or aliens, but these are less important: ecosystems and plants are most likely not sentient (they have no subjective interests), digital sentience does not yet exist, and aliens have not yet made contact with us.

Based on our beliefs and preferences, we can choose our preferred cause areas.

You should choose short-term cause areas in particular when:

· you prefer a person-affecting population ethic (making existing people happy instead of making extra happy people),

· you believe the population of target subjects (e.g. humans) will unavoidably go extinct in the not-so-far future, so attempts to improve far-future welfare would be pointless,

· you believe that increasing or maximizing happiness will happen unavoidably in the far future (for example, you believe we will unavoidably develop artificial superintelligent machines that automatically solve all problems of future sentient beings), so attempts to improve far-future welfare will be unnecessary, or

· you prefer an agent-relative ethic: a moral agent is allowed to be partial towards those individuals who are known to exist by the agent (e.g. towards those individuals who exist at the time when the agent makes a choice to help).

You should choose long-term cause areas in particular when:

· you prefer a population ethic that strongly values positive outcomes (e.g. total utilitarianism that maximizes total future happiness or preference satisfaction), and you believe that a future state with positive aggregate welfare is possible, such that you prioritize avoiding X-risks (avoiding future non-existence of many happy individuals), or

· you prefer a population ethic with a procreation asymmetry that strongly disvalues negative outcomes (e.g. suffering focused ethics, some kinds of negative utilitarianism or variable critical level utilitarianism), and you believe that without proper interventions we get a future state with many suffering entities (with net-negative welfare), such that you prioritize avoiding S-risks (avoiding future existence of many suffering individuals).

You should choose human cause areas in particular when:

· you prefer to be maximally sure about the level of sentience (by selecting target subjects who most strongly look like you at a neurobiological level or who can talk and clearly communicate their feelings to you),

· you prefer the most efficient welfare improving solutions that require some minimum level of intelligence (e.g. economic market solutions that require an understanding of money, prices, property rights, incentives,…),

· you prefer to help those who can most effectively help others (e.g. development of poor countries will increase the number of people who can do scientific research, humans have highly developed skills of cooperation, humans can design economic mechanisms that effectively create mutually beneficial situations), or

· you believe that most humans can reach higher levels of happiness (or suffering) than non-human animals.

You should choose animal cause areas in particular when:

· you believe sentience is more important than e.g. rationality or intelligence, you believe animals are likely to be sentient, and you believe their potential welfare levels are not vastly smaller than those of humans.

Lower bound probability estimate

In this section I perform a Fermi estimate of the lower bound of the probability that wild animal welfare (the far-future animal cause area) gets priority. The total lower bound probability is the product of the probabilities of 14 conditions. I present my personal lower bound estimates for the moral validity or factual truth of each moral and factual condition. The probability estimate of each condition is conditional on the truth or validity of all the previous conditions (e.g. given that condition 1 is valid, how likely is condition 2 valid?).

1. No unwanted arbitrariness

Ethical systems should not contain unwanted arbitrariness such as discrimination on the basis of time, place or species. When someone exists, where someone exists and to which biological category (race, species, genus, order,…) someone belongs, is morally irrelevant.

My probability estimate (normative certainty) of this condition is >99%, which means I’m highly confident about the moral validity of this principle to avoid unwanted arbitrariness. This estimate is based on my moral intuitions about fundamental moral reasons. For example: if I am allowed to include unwanted arbitrariness in my ethic, everyone else is allowed to do so as well, even if I do not want their kinds of arbitrariness, so I cannot rationally want this.

If this condition turns out to be invalid, we are allowed to prioritize current generations or humans (the short-term human cause area).

If ethical systems have to avoid unwanted arbitrariness, animals and the far future matter, but we do not yet know in what sense or how much they matter. To solve that question, we need to know our moral values.

2. Consequentialist ethic

Moral individualism and consequentialism are valid moral theories. This means that individual outcomes exist and are the only things that matter. An individual outcome does not necessarily include only the level of happiness, preference satisfaction or welfare of an individual; it can also include the strength of a violation of that individual's rights, or the individual's level of autonomy and freedom. Individual outcomes include everything that the individual cares about.

My probability estimate (normative certainty) of this condition is >99%. This estimate is based on a personal preference for autonomy and avoiding arbitrariness: if I may impose my values on others, then someone else may also impose his values on me, and I cannot want that. Hence, the only things that I should morally value are the things that are valued by others. For example, I may value the well-being of a sentient individual, because that individual also cares about her own well-being. But I may not intrinsically value e.g. the naturalness of an ecosystem, the beauty of a painting or the integrity of a culture, because the ecosystem, the painting and the culture themselves do not care about anything. Similarly, a homophobic person may not value the sexual purity of a homosexual person (when he believes that homosexuality is impure), because that value is not shared by the homosexual person.

If this condition turns out to be invalid, we are allowed to prioritize environmental issues and the protection of cultural traditions, and to impose our own values on others who cannot want that. For example, it allows for ecocentric values, where our (esthetic) values of naturalness and integrity of ecosystems are considered more important than the welfare of sentient wild animals. This ecocentrism results in a hands-off policy whereby we should not intervene in nature to increase everyone's welfare.

If we choose a consequentialist ethic, we still have to figure out how to compare the outcomes between different individuals. If the welfare of a wild animal is incomparable to the welfare of a human, we cannot yet decide whether to prioritize wild animal welfare.

3. Interpersonal comparability of outcomes

Outcomes (goodness or badness) of individuals can be measured and interpersonally compared to a sufficient degree that makes comparisons useful. This means that an aggregate (total) outcome exists (by aggregating individual outcomes).

My probability estimate (factual certainty) of this condition is >75%. This estimate is based on my moral judgment that considerations about fairness or equality are important and sensible, as well as on factual neurobiological (and evolutionary) similarities between sentient beings, the existence of just noticeable differences in experiences and other considerations explained here and here.

If this condition turns out to be wrong, we can choose a very narrow contractualist ethic and a welfare economics restricted to mere Pareto efficiency. Such contractualism and Pareto efficiency are usually restricted to (a subgroup of) humans, and avoid issues of equality, which means that the scope is very narrow. The contractualism can be extended to include equality of opportunity. And under slightly more general conditions, a welfare economics with a principle of fair division of resources is possible, including both Pareto efficiency and essentially envy-freeness. This means we could focus on efficient markets, equality of opportunity, fair property rights allocations, and basic rights and liberties. This is the area of short-term human welfare. However, if animals are included in the fair division of resources and basic liberties, wild animal welfare can become very important as well.

If outcomes are interpersonally comparable, we have to determine how they contribute to the aggregate outcome of all future individuals.

4. Positive and negative outcomes

Outcomes of individuals can be positive or negative. When a situation is chosen such that the overall lifetime outcome of an individual (over the course of its life) is net-positive (e.g. more positive than negative experiences), the individual has a life worth living.

My probability estimate (factual certainty) of this condition is >99%. This estimate is based on my personal experience: I can imagine a life with so much suffering, that I would prefer non-existence (i.e. not being born), which means such a life is not worth living.

If this condition turns out to be wrong, we can exclude many population ethical theories. We do not have to worry about creating lives not worth living, so antinatalist conclusions are automatically avoided. This means avoiding X-risks becomes much more important, although improving wild animal welfare might still be important due to the large number of animals in the future.

If future individual outcomes can be negative, we have to determine whether we can avoid the existence of individuals with a negative welfare.

5. Positivity of future total outcome

A future with a total (aggregate) negative outcome or a majority of lives not worth living, is avoidable. That means total future outcome can be made positive by our choices. When total future outcome is positive, positive experiences trump negative experiences (or lives with net-positive welfare trump lives with net-negative welfare), and most lives are worth living.

My probability estimate (factual certainty) of this condition is >95%. This estimate is based on my confidence in technological progress: if new technologies do not unavoidably result in extinction, they may well be used for good, to decrease negative outcomes.

If this condition turns out to be wrong, there is one conclusion: choose total extinction (e.g. antinatalism). If we cannot avoid a future dominated by suffering, then the more future generations are born, the more negative the total outcome of the future will be. Hence the avoidance of all future generations gets top priority.

If we can avoid aggregate negative outcomes, we have to determine how positive individual outcomes compare to negative outcomes.

6. Validity of asymmetric, suffering focused population ethics

Some asymmetric, suffering focused population ethic is more valid than total utilitarianism that maximizes the sum of everyone’s welfare. A suffering focused ethic is characterized by an asymmetry: when someone has a net-negative life (i.e. a negative lifetime outcome), this always implies a negative contribution to the total aggregate outcome, but when someone has a net-positive life, this does not always imply a positive contribution to the total outcome. Total utilitarianism does not have such an asymmetry. Examples of asymmetric, suffering focused ethics are some versions of negative utilitarianism, critical level utilitarianism, person-affecting views, or most generally variable critical level utilitarianism.

Total utilitarianism is susceptible to the repugnant sadistic conclusion (also called the very repugnant conclusion), which is probably the most counterintuitive implication of total utilitarianism. Consider the choice between two situations. In situation A, a number of extremely happy people exist. In situation B, the same people exist and have extreme suffering (maximal misery), and a huge number of extra people exist, all with lives barely worth living (slight positive welfare). If the extra population in B is large enough, the total welfare in B becomes larger than the total welfare in A. Hence, total utilitarianism would prefer situation B, which is sadistic (there are people with extreme suffering) and repugnant (a huge number of people have lives barely worth living and no-one is very happy).

The simplest suffering focused ethic is vulnerable to the extinction conclusion: if the only objective is to minimize suffering, the best future state is the one where no-one will be born (because it may be impossible to avoid the birth of a life not worth living or a life with suffering). More nuanced suffering focused ethics do not necessarily imply this conclusion, because of boundary constraints on the objective of minimizing suffering. So the condition states that there exist consistent suffering focused ethics that avoid both the repugnant sadistic conclusion of total utilitarianism and the extinction conclusion, as well as other very counterintuitive conclusions.

My probability estimate (normative certainty) of this condition is >90%. This estimate is based on the strength of my moral intuition about the repugnant sadistic conclusion, and on the flexibility of variable critical level utilitarianism to avoid very counterintuitive conclusions.

If this condition turns out to be wrong, total utilitarianism can be the preferred population ethic, which means we should strongly prioritize decreasing X-risks (if that guarantees a future with more positive than negative individual outcomes), although wild animal welfare might still be important due to the large number of future wild animals.

If a suffering focused ethic is valid, we have to determine whether human or animal suffering in the future will decrease or increase.

7. Increasing human flourishing

Human flourishing will increase and suffering will decrease in the future (if humanity does not go extinct). The number of future human lives with net-negative welfare will be small and decrease to become negligible.

My probability estimate (factual certainty) of this condition is >80%. This estimate is based on the past human trajectory: the evidence of human progress, economic growth, decrease (in absolute terms) of extreme poverty, mortality rates and violence, increase of human health, life expectancy and cooperation, welfare improving technologies,…

If this condition turns out to be wrong, we could focus on human development, anti-poverty, human health, especially if we prefer a person-affecting population ethic. However, even with increasing human suffering (decreasing flourishing), it could still be possible that the problem of wild animal suffering is bigger and hence more important.

If human flourishing will increase, we are left with animal suffering. Of all anthropogenic (human-caused) animal suffering, livestock farming is the biggest problem, due to the high number of livestock animals. So how does the welfare of livestock animals compare to that of wild animals?

8. Livestock elimination

Livestock farming and livestock animal suffering will be eliminated in the near future (e.g. this century).

My probability estimate (factual certainty) of this condition is >90%. This estimate is based on developments in animal free food technologies (plant-based and cultivated meat) as well as increases of farm animal welfare concerns and decreases of meat consumption in many highly developed countries.

If this condition turns out to be wrong, we probably should focus more on veganism and the development of alternative foods. However, it is possible that veganism indirectly increases wild animal suffering, for example when livestock farms (e.g. grasslands) are replaced by forests and natural habitats. This means that wild animal suffering could remain important.[i]

The investments in animal free food technologies (billions of dollars by large food companies), and the campaigning by vegan organizations, mean that the problem of livestock animal suffering is less neglected than the problem of wild animal suffering. If livestock farming gets eliminated, wild animal suffering becomes the biggest remaining problem of animal suffering, especially if many wild animals have net-negative welfare.

9. Net-negative lives of wild animals

Many wild animals have lives not worth living, i.e. with a net-negative lifetime welfare.

My probability estimate (factual certainty) of this condition is >80%. This estimate is based on the high reproduction rates (r-selection population dynamics), the short lifespans of most animals, and the abundance of causes of suffering (diseases, injuries, parasitism, starvation, predation,…)

If this condition turns out to be wrong, we could focus on the welfare of the current generation (of humans or animals) or X-risk reduction. However, even if they have net-positive lives, wild animals could still have the lowest welfare levels (compared to humans), such that wild animal welfare improvements remain important.

If animals have net-negative welfare, their welfare levels can still be very small compared to those of humans.

10. Non-negligible welfare of wild animals

Wild animals have sufficiently high sentience levels such that wild animal suffering is a big problem.

My probability estimate (factual certainty) of this condition is >90%. This estimate is based on e.g. brain sizes and the fact that there are orders of magnitude more wild animals than humans. So even if a smaller brain implies a smaller welfare potential, the huge number of animals means that their total suffering can be huge.

If this condition turns out to be wrong, we could prioritize human welfare (or the development of supersentient artificial intelligence with an extremely high welfare potential).

If future wild animal suffering is not negligible, there still may be other, bigger causes of suffering.

11. Dominance of wild animal suffering

Most far future lives with net-negative welfare will be wild animals, instead of e.g. plants, digital sentient entities or computer-simulated conscious beings.

My probability estimate (factual certainty) of this condition is >90%. This estimate is based on the lack of evidence that plants are conscious, my low confidence that we can and will create huge numbers of digital sentient entities with net-negative experiences and the high probability that we can easily improve the welfare of digital sentience once it exists.

If this condition turns out to be wrong, we should focus on digital sentience welfare, and especially avoid the related S-risks (e.g. the simulation of countless digital entities that suffer).

If there are no other bigger suffering problems next to wild animal suffering, it is still possible that all attempts to improve wild animal welfare will be futile, e.g. when we go extinct.

12. No human extinction or knowledge loss

Humans will not go extinct before we can drastically improve far future wild animal welfare. It means we do not go extinct during the upcoming technology revolutions. For example, we will survive the transition towards a world with artificial superintelligence (machines that are more generally intelligent than humans). Once this superintelligence is created, it can help us in avoiding all other kinds of X-risks, so the transition towards superintelligence can be the last important barrier for human survival.

This condition also includes the non-extinction of human knowledge. A big human catastrophe that does not result in the total extinction of humanity could still result in the loss of all gained knowledge about wild animal welfare interventions. This would mean all current investments in wild animal welfare research would become futile, and surviving future human generations would have to start the research from scratch.

My probability estimate (factual certainty) of this condition is >80%. This is based on expert surveys about existential risks. It means that the probability of extinction in the transition period could be as high as 20% if more resources are spent on wild animal suffering reduction instead of X-risk reduction.

If this condition turns out to be wrong, we should focus on current generation human welfare or X-risk reduction (short-term and long-term human welfare cause areas).

If humans do not go extinct, it is not guaranteed that we will develop and invent technologies that sufficiently improve wild animal welfare. The problem of wild animal suffering can simply be unsolvable.

13. Tractability of wild animal suffering

Crucial problems of wild animal suffering are solvable, and it is possible to make progress in the research for technologies that improve wild animal welfare. It implies that the problem of wild animal suffering is tractable, including the possibly hardest subproblems of procreation (r-selection population dynamics) and predation.

My probability estimate (factual certainty) of this condition is >95%. This estimate is based on progress in environmental sciences, human health (vaccines), genetic manipulation (gene drives), cultivated meat, artificial intelligence,… Given the past track record of inventions to improve human welfare (e.g. eradicate diseases), it is unlikely that we will never find technologies that significantly improve wild animal welfare.

If this condition turns out to be wrong, we should focus on current generation human welfare or tractable X-risk reduction.

If the problem of wild animal suffering is large, neglected and tractable, which would give it a top priority, it is still possible that other cause areas or interventions (e.g. about climate change, veganism,…) will automatically sufficiently improve wild animal welfare.

14. No indirect interventions

There will be no other (non-wild-animal-suffering related) interventions that automatically sufficiently solve the problem of wild animal suffering.

My probability estimate (factual certainty) of this condition is >95%. This estimate is based on the apparent complexity of the problem of wild animal suffering: because of that complexity, other interventions have many positive and negative spillover and flow-through side-effects, so it is unlikely that they will have large overall positive effects on wild animal welfare.

Overall estimate of lower bound

Multiplying the above probability estimates, the lower bound of the probability that wild animal welfare is a top priority is around 25%. This is the lowest bound, because even if some of the above conditions are not met, wild animal welfare might still be a top priority for other reasons. My lower bound estimate of the probability that wild animal suffering is still a top priority when one or more of the above conditions are not met, is between 0% and 25%. Hence, the total lower bound is somewhere between ¼ and ½.
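The multiplication can be checked with a short calculation. The values below are the 14 stated lower bounds from this section; since each factor is itself a floor (>X%), the product (roughly 0.21) is a floor too, consistent with rounding up to around 25%:

```python
import math

# Lower-bound probability estimates for the 14 conditions, in order,
# copied from the section above (each value is a strict lower bound).
conditions = [
    0.99,  # 1. no unwanted arbitrariness
    0.99,  # 2. consequentialist ethic
    0.75,  # 3. interpersonal comparability of outcomes
    0.99,  # 4. positive and negative outcomes
    0.95,  # 5. positivity of future total outcome
    0.90,  # 6. validity of suffering focused population ethics
    0.80,  # 7. increasing human flourishing
    0.90,  # 8. livestock elimination
    0.80,  # 9. net-negative lives of wild animals
    0.90,  # 10. non-negligible welfare of wild animals
    0.90,  # 11. dominance of wild animal suffering
    0.80,  # 12. no human extinction or knowledge loss
    0.95,  # 13. tractability of wild animal suffering
    0.95,  # 14. no indirect interventions
]

# The Fermi estimate: the total probability is the product of the factors.
lower_bound = math.prod(conditions)
print(f"{lower_bound:.2f}")  # prints 0.21
```

This also shows how sensitive a 14-factor Fermi product is to each input: raising a single 0.80 factor to 0.90 moves the floor by several percentage points.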

Upper bound probability estimate

I will also calculate an upper bound on the importance of wild animal welfare, by calculating lower bounds for the other major effective altruism cause areas.

Consider the reduction of X-risks (the long-term human welfare cause area). I will assume the same estimates for the first three conditions (no unwanted arbitrariness, consequentialism and interpersonal comparability of welfare) as above. Next, my estimate that total utilitarianism is valid will be 10% (the complement of the probability that it is invalid). The probability that the total future outcome will not unavoidably be negative is 95%, as above. Given a positive future outcome and the validity of total utilitarianism, the likelihood that an X-risk is the worst outcome is more than 99%, because by far the most net-positive lives will be in the far future, and the value of all those lives would be lost with an X-risk. The probability estimate of the tractability (solvability) of X-risk reduction is 95%. Finally, the probability estimate that humanity will not go extinct, even without any investments in X-risk reduction, is 20%; hence the likelihood that X-risk interventions will not be futile (i.e. will be necessary and have some impact) is 80%. Together, the lower bound on X-risk reduction priority is 5%. However, there are many other situations (with other conditions being met) where X-risk reduction is a top priority. My lower bound estimate of the probability that X-risk reduction is still a top priority when one or more of the above conditions are not met, is between 0% and 25%.
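The same kind of product, using the condition values listed in the paragraph above, reproduces the roughly 5% floor for X-risk reduction:

```python
import math

# Lower-bound values for the X-risk chain, from the paragraph above.
xrisk_conditions = [
    0.99,  # no unwanted arbitrariness
    0.99,  # consequentialist ethic
    0.75,  # interpersonal comparability of welfare
    0.10,  # total utilitarianism is valid
    0.95,  # total future outcome not unavoidably negative
    0.99,  # X-risk is the worst outcome, given the above
    0.95,  # tractability of X-risk reduction
    0.80,  # X-risk interventions are not futile
]

xrisk_lower_bound = math.prod(xrisk_conditions)
print(f"{xrisk_lower_bound:.3f}")  # prints 0.053, i.e. roughly 5%
```

The 10% estimate for total utilitarianism dominates this chain: the other seven factors together only halve it.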

The other major effective altruism cause areas (short-term human and animal welfare) have lower probability estimates, but are not negligible. There is also a probability that there are yet unknown important cause areas. Together, the sum of the probabilities of the non-wild-animal-welfare cause areas puts an upper bound on the likelihood of wild animal welfare (the long-term animal cause area) being the top priority. I estimate this upper bound to be between 50% and 90%.

Hence, the wild animal welfare priority has a wide-margins likelihood between 25% and 90%, and a narrow-margins likelihood around 50%.

Interconnections and indirect cause areas

Even if wild animal welfare is not the major cause area, there are several interconnections between the different cause areas. The other major cause areas have positive indirect effects for wild animal welfare. This means those other cause areas gain relative importance.

Short-term animal welfare

The most important problem for short-term animal welfare is livestock animal suffering. Decreasing livestock farming (including fish farms), by promoting and developing animal free alternatives (e.g. plant-based egg substitutes and cultivated meat), directly reduces livestock animal suffering. But this veganism has beneficial side-effects for wild animal welfare. First, if humans decrease their meat consumption, the cognitive dissonance between meat consumption (behavior) and animal welfare (attitude) decreases, which means that the value of animal welfare becomes less suppressed. This facilitates a moral circle expansion, whereby animals are included in the moral circle. Animal welfare values can spread more easily in a vegan society, which means people become more interested in wild animal welfare. Also, the development of cultivated meat can eventually benefit wild predators, saving prey animals from unnecessary suffering.

Short-term human welfare

Economic development and poverty reduction could also increase wild animal welfare, by increasing research. If people are richer, they are more willing to spend some money on interventions that improve the welfare of others, including the welfare of wild animals. Therefore, GDP growth is important. For example, if the poorest four fifths of the world population became as rich as the richest fifth, investments in wild animal welfare research could increase fivefold, because currently all such research is done in the richest part of the world.

Long-term human welfare

Avoiding existential risks that could wipe out humanity is important, because if humans go extinct, wild animals would have to wait another few million years before other intelligent lifeforms evolve that are able to develop technologies for effective wild animal welfare interventions, or they would have to wait for extraterrestrial beings who care about animal welfare to arrive on Earth.

Artificial superintelligence is probably the biggest X-risk, but it also offers the best solutions against other X-risks as well as against wild animal suffering. Therefore, research in AI safety becomes important: with it, we avoid unwanted artificial superintelligence (with value misalignment) and become able to develop superintelligent machines that follow our value of promoting both human and animal welfare. Safe and effective interventions in nature to improve wild animal welfare will improve drastically with safe artificial superintelligence.

Summary: probability estimates of major cause areas

With the above Fermi calculations and interconnectedness considerations of cause areas, I guesstimate the following probabilities for a major cause area to be top priority:

Long-term animal welfare (in particular reducing wild animal suffering): 1/3 or higher.

Long-term human welfare (in particular reducing existential risks): 1/4.

Short-term animal welfare (in particular reducing livestock farming and fishing/aquaculture): 1/4.

Short-term human welfare (in particular reducing extreme poverty): 1/6 or lower.
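As a sanity check on using these guesstimates as a donation portfolio, the four fractions sum to exactly one, so they can serve directly as allocation percentages:

```python
from fractions import Fraction

# The four probability guesstimates from the summary above.
allocation = {
    "long-term animal welfare": Fraction(1, 3),
    "long-term human welfare": Fraction(1, 4),
    "short-term animal welfare": Fraction(1, 4),
    "short-term human welfare": Fraction(1, 6),
}

total = sum(allocation.values())
print(total)  # prints 1

# Express each share as a donation percentage.
for cause, share in allocation.items():
    print(f"{cause}: {float(share):.1%}")
```

Using exact fractions rather than floats avoids rounding residue when checking that a portfolio is fully allocated.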

Reducing wild animal suffering is the most important cause area. Unfortunately, it is also by far the most neglected in the effective altruism community. I estimate that the current total worldwide workforce involved in wild animal welfare research is less than 10 full-time equivalents (with only a few organizations: Wild Animal Initiative, Animal Ethics and, to a lesser degree, Rethink Priorities). This is orders of magnitude smaller than the attention for X-risk reduction, veganism or human development.

The probability guesstimates can be used as allocation percentages for a donation portfolio (or for donation allocations at EA Funds).
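These four fractions sum to exactly one (1/3 + 1/4 + 1/4 + 1/6 = 12/12), so they can be used directly as portfolio weights. A minimal sketch of such an allocation (the short cause labels are just shorthand for the categories above):

```python
from fractions import Fraction

allocation = {
    "long-term animal welfare": Fraction(1, 3),
    "long-term human welfare": Fraction(1, 4),
    "short-term animal welfare": Fraction(1, 4),
    "short-term human welfare": Fraction(1, 6),
}

# The fractions sum to exactly 1, so they can be read directly as
# percentages of a donation budget.
assert sum(allocation.values()) == 1

for cause, share in allocation.items():
    print(f"{cause}: {float(share):.1%}")
```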


-----

[i] If insects are sentient, it is not yet clear whether grassland for livestock really involves less animal suffering than, for example, a forest. On grassland, too, there are birds of prey, wasps, insect parasites and other animals that cause suffering, as well as diseases, food shortages, and so on. Forests could produce more food and offer more protection for animals, but they can also increase animal abundance and hence the number of animals with lives not worth living. So with livestock farming we have a situation of directly visible harm plus much, much greater indirect, invisible harm. With vegan agriculture we have more nature, which means we have no direct animal harm but still very large indirect, invisible harm. We do not know which of the two situations has the least indirect harm. We could then use a provisional rule of thumb to limit known, direct, visible harm and therefore opt for veganism. This is reasonable: we have four numbers: x (direct harm to livestock animals), X (suffering of wild animals in nature in a world with livestock farming), y (direct harm under veganism) and Y (suffering of wild animals in nature in a vegan world). We know for sure that y is 0 and x is bigger than y, but we do not know whether X is bigger than Y. With this knowledge, our subjective probability estimate that y+Y is less than x+X is strictly greater than 50%. Even if it is 50.0001%, it is still reasonable to opt for the full 100% for y+Y (i.e. veganism). Suppose a coin has a 50.0001% chance of landing heads and you can guess a million times. Most people believe the best strategy is to mix guesses of heads and tails, with 500,001 heads, but guessing heads a million times is better. In any case, the value of information about the relative sizes of X and Y is very high, so if we promote veganism, we should do much more research to estimate the indirect harms suffered by wild animals.
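The coin argument in this footnote can be checked with expected values: with a 50.0001% chance of heads, guessing heads on every one of a million flips beats the mixed strategy of 500,001 heads-guesses and 499,999 tails-guesses. A quick sketch:

```python
p = 0.500001   # probability of heads
n = 1_000_000  # number of guesses

# Strategy 1: guess heads every time.
# Each guess is correct with probability p.
all_heads = n * p

# Strategy 2: guess heads 500,001 times and tails 499,999 times.
# A heads-guess is correct with probability p, a tails-guess with 1 - p.
mixed = 500_001 * p + 499_999 * (1 - p)

print(all_heads > mixed)  # True: always guessing heads wins in expectation
```

The mixed strategy expects about 500,000 correct guesses, while always guessing heads expects 500,001.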

20 comments


comment by Pablo_Stafforini · 2019-10-24T02:18:13.355Z · score: 11 (4 votes) · EA(p) · GW(p)
the repugnant sadistic conclusion of total utilitarianism

Note that total utilitarianism does not lead to what is known as the "sadistic conclusion". This conclusion was originally introduced by Arrhenius, and results when adding a number of people each with net negative welfare to a population is better than adding some (usually larger) number of people each with net positive welfare to that population.

Given what you say in the rest of the paragraph, I think by 'repugnant sadistic conclusion' you mean what Arrhenius calls the 'very repugnant conclusion', which is very different from the sadistic conclusion. (Personally, I think the sadistic conclusion is a much more serious problem than the repugnant conclusion or even the very repugnant conclusion, so it's important to be clear about which of these conditions is implied by total utilitarianism.)

comment by MichaelStJules · 2019-10-25T02:53:50.387Z · score: 1 (1 votes) · EA(p) · GW(p)

To someone who already rejects Mere Addition, the Sadistic Conclusion is only a small cost, since if it's bad to add some lives with (seemingly) positive welfare, then it's a small step to accept that it can sometimes be worse to add lives with negative welfare over lives with positive welfare. The Very Sadistic Conclusion can be avoided by being very prioritarian, but not necessarily lexically prioritarian (at the cost of separability/independence without lexicality).

comment by Pablo_Stafforini · 2019-10-25T13:55:26.796Z · score: 4 (2 votes) · EA(p) · GW(p)
To someone who already rejects Mere Addition, the Sadistic Conclusion is only a small cost, since if it's bad to add some lives with (seemingly) positive welfare, then it's a small step to accept that it can sometimes be worse to add lives with negative welfare over lives with positive welfare.

The question is whether one should accept some variety of CU or NU antecedently of any theoretical commitments to either. Naturally, if one is already committed to some aspects of NU, committing to further aspects of it will incur a relatively smaller cost, but that's only because the remaining costs have already been incurred.

comment by Pablo_Stafforini · 2019-10-24T02:46:49.882Z · score: 9 (4 votes) · EA(p) · GW(p)
Suffering focused ethics can also avoid the repugnant sadistic conclusion, which is the most counterintuitive implication of total utilitarianism that maximizes the sum of everyone’s welfare. Consider the choice between two situations. In situation A, a number of extremely happy people exist. In situation B, the same people exist and have extreme suffering (maximal misery), and a huge number of extra people exist, all with lives barely worth living (slight positive welfare). If the extra population in B is large enough, the total welfare in B becomes larger than the total welfare in A. Hence, total utilitarianism would prefer situation B, which is sadistic (there are people with extreme suffering) and repugnant (a huge number of people have lives barely worth living and no-one is very happy).

As pointed out recently, suffering focused views imply that a population where everyone experiences extreme suffering is better than a population where everyone experiences extreme happiness plus a brief, mild instance of suffering, provided the latter population is sufficiently more numerous. This seems even more problematic than the implication you describe, since at least in that case you have a very large population enjoying "muzak and potatoes", whereas here there's no redeeming feature: extreme suffering is all that exists.

comment by rfranks · 2019-10-24T16:41:46.497Z · score: 8 (5 votes) · EA(p) · GW(p)
As pointed out recently, suffering focused views imply that a population where everyone experiences extreme suffering is better than a population where everyone experiences extreme happiness plus a brief, mild instance of suffering, provided the latter population is sufficiently more numerous.

This is an overgeneralization of suffering-focused views. You can believe in Lexical Threshold Negative Utilitarianism (i.e. there is some point at which suffering becomes bad enough that it is infinitely worse than less bad experiences) where the threshold is applied at the person level rather than at the level of aggregate suffering over all beings. In this case, many people experiencing mild suffering is trivially better than a smaller number of people experiencing extreme suffering. I'm not sure I completely buy into this kind of philosophy, but I think it's plausible.

comment by Pablo_Stafforini · 2019-10-24T11:40:14.261Z · score: 4 (2 votes) · EA(p) · GW(p)

Yes, I agree that lexical NU doesn't have that implication. My comment was addressed to the particular suffering-focused view I took Stijn to be defending, which he contrasted to CU. If his defence is of "suffering-focused views" as a whole, however, then it seems unfair to compare them to CU specifically, rather than to "classical views" generally. Classical views also avoid the repugnant and very repugnant conclusions, since some specific views in this family, such as critical level utilitarianism, don't have this implication. [EDIT: Greg makes the same point in his comment; remarkably, we posted at exactly the same time.]

Concerning the merits of lexical NU, I just don't see how it's plausible to postulate a sharp value discontinuity along the suffering continuum. As discussed many times in the past, one can construct a series of pairwise comparisons involving painful experiences that differ only negligibly in their intensity. It is deeply counterintuitive that one of these experiences should be infinitely (!) worse than the other, but this is what the view implies. (I've only skimmed the essay, so please correct me if I'm misinterpreting it.)

comment by rfranks · 2019-10-25T12:48:24.145Z · score: 2 (2 votes) · EA(p) · GW(p)
Concerning the merits of lexical NU, I just don't see how it's plausible to postulate a sharp value discontinuity along the suffering continuum. As discussed many times in the past, one can construct a series of pairwise comparisons involving painful experiences that differ only negligibly in their intensity.

So, I agree that sharp value discontinuities are not a great aspect for a moral system to have, but consider:

  • We put suffering and happiness on the same scale to reflect how they look in our utility functions. But really, there are lots of kinds of suffering that are qualitatively different. While we can do it sometimes, I'm not sure if we are always capable of making direct, optimized comparisons of qualitatively different experiences
  • We don't actually have a great definition of what suffering is and, if we model it in terms of preferences, it bottoms out. AKA, there's a point in suffering when I could imagine myself saying something like "This is the worst thing ever; get me out of here no matter what." Our subjective experience of suffering and our actual ability to report it break down.
  • It's also super hard to really understand what it's like to be in edge-case extreme suffering situations without actually being in one, and most people haven't. Without that (and even potentially with it), trying to model ourselves in extreme suffering would require us to separate logical fallacies we would make in such a situation with our de-facto utility function. From an AI alignment perspective, this is hard.
  • If you're an agent and you can't reason about how bad something is while you're in a situation and you don't have a mental model of what that situation is like, getting into that kind of situation is a really bad idea. This isn't just instrumentally inconvenient; it's inconvenient in a "you know you're suffering really badly but you can only model your experience as arbitrarily bad" way.
  • Even if we agree that our utility functions shouldn't have strange discontinuities in suffering, there may still be a strange and discontinuous landscape of levels of suffering we can experience in the landscape of world-states. This is not directly incompatible with any kind of utilitarianism but it makes arguments along the lines of "imagine that we make this suffering just slightly, and virtually unnoticeably, worse" kind of weird. Especially in the context of extreme experiences that exist in a landscape we don't fully understand and especially in a landscape where the above points apply
  • I'm a moral anti-realist. There's no strict reason why we can't have weird discontinuities in our utility functions if that's what we actually have. The "you wouldn't want to trade a dramatic amount of resources to move from one state of suffering to an only infinitesimally worse one" makes sense but, per the above, we need to be careful about what that actually implies about how suffering works

This is all to say that suffering is really complicated and disentangling concerns about how utility functions and suffering work in reality from what logically makes sense is not an easy task. And I think part of the reason people are suffering-focused is because of these general problems. I'm still agnostic on whether something like negative lexical threshold utilitarianism is actually true but the point is that, in light of the above things, I don't think that weird discontinuities is enough to dismiss it from the zone of plausibility.


comment by Pablo_Stafforini · 2019-10-25T13:19:43.203Z · score: 5 (3 votes) · EA(p) · GW(p)
We don't actually have a great definition of what suffering is and, if we model it in terms of preferences, it bottoms out. AKA, there's a point in suffering when I could imagine myself saying something like "This is the worst thing ever; get me out of here no matter what."

Proponents or sympathizers of lexical NU (e.g. Tomasik) often make this claim, but I'm not at all persuaded. The hypothetical person you describe would beg for the suffering to stop even if continuing to experience it was necessary and sufficient to avoid an even more intense or longer episode of extreme suffering. So if this alleged datum of experience had the evidential force you attribute to it, it would actually undermine lexical NU.

It's also super hard to really understand what it's like to be in edge-case extreme suffering situations without actually being in one, and most people haven't.

It's even harder to understand what it's like to experience comparably extreme happiness, since evolutionary pressures selected for brains capable of experiencing wider intensity ranges of suffering than of happiness. The kind of consideration you invoke here actually provides the basis for a debunking argument of the core intuition behind NU, as has been noted by Shulman and others. (Though admittedly many NUs appear not to be persuaded by this argument.)

I'm a moral anti-realist. There's no strict reason why we can't have weird discontinuities in our utility functions if that's what we actually have.

Humans have all sorts of weird and inconsistent attitudes. Regardless of whether you are a realist or an anti-realist, you need to reconcile this particular belief of yours with all the other beliefs you have, including the belief that an experience that is almost imperceptibly more intense than another experience can't be infinitely (infinitely!) worse than it. Or, if you want a more vivid example, the belief that it would not be worth subjecting a quadrillion animals having perfectly happy lives to a lifetime of agony in factory farms solely to spare a single animal a mere second of slightly more intense agony just above the relevant critical threshold.

comment by rfranks · 2019-10-26T03:05:17.119Z · score: 1 (1 votes) · EA(p) · GW(p)
The hypothetical person you describe would beg for the suffering to stop even if continuing to experience it was necessary and sufficient to avoid an even more intense or longer episode of extreme suffering.

Yeah, I agree with this. More explicitly, I agree that it's bad for the person to refuse to continue the current suffering when stopping it would cause them worse suffering, and that this implies that lexical trade-offs in suffering are weird. However

  • I said that "in terms of preferences, [suffering] bottoms out." In this situation, you're changing my example by proposing that there is a hypothetical yet worse form of suffering when I'm not convinced there is one after that point
  • The above point only addresses more intense suffering, not longer suffering. However I think you're wrong about bringing up different lengths of suffering. When I talk about lexicality, I'm talking about valuing different experiences in different ways. A longer episode of extreme suffering and a shorter form of the same level of extreme suffering are in the same lexicality and can be traded off
It's even harder to understand what it's like to experience comparably extreme happiness, since evolutionary pressures selected for brains capable of experiencing wider intensity ranges of suffering than of happiness.

I agree with this and touched briefly on this in my writing. Even without the evolutionary argument, I'll grant that imagining lexically worse forms of suffering also implies lexically better forms of happiness just as much. After all, in the same way that suffering could bottom out at "this is the worst thing ever and I'd do anything to make it stop", happiness could ceiling at "this is the most amazing thing ever and I'd do anything to make it continue longer."

Then you have to deal with the confusing problem of reconciling trade-offs between those kinds of experiences. Frankly, I have no idea how to do that.

Humans have all sorts of weird and inconsistent attitudes. Regardless of whether you are a realist or an anti-realist, you need to reconcile this particular belief of yours with all the other beliefs you have

I actually don't need to do this for a couple reasons:

  • I said that I thought negative lexical utilitarianism was plausible. I think there's something to it but I don't have particularly strong opinions on it. This is true for total utilitarianism as well (though, frankly, I actually lean slightly more in favor of total utilitarianism at the moment)
  • The sorts of situations where lexical threshold utilitarianism differs from ordinary utilitarianism are extreme and I think my time is more pragmatically spent trying to help the world than it is on making my brain ethically self-consistent
    • As a side-note, negative lexical utilitarianism has infinitely bad forms of suffering so even giving it a small credence in your personal morality should imply that it dominates your personal morality. But, per the above bullet, this isn't something I'm that interested in figuring out
Or, if you want a more vivid example, the belief that it would not be worth subjecting a quadrillion animals having perfectly happy lives to a lifetime of agony in factory farms solely to spare a single animal a mere second of slightly more intense agony just above the relevant critical threshold.

I would not subject a quadrillion animals with perfectly happy lives to lifetimes of agony in factory farms just to avoid a second of slightly more intense agony here. However, this isn't the model of negative lexical utilitarianism I find plausible. The one I find plausible implies that there is no continuous space of subjective experiences spanning from bad to good; at some point things just hop from finitely bad suffering that can be reasoned about and traded to infinitely bad suffering that can't be reasoned about and traded.

I guess you could argue that moralities are about how we should prefer subjective experiences as opposed to the subjective experiences themselves (...and thus that the above is completely compatible with total utilitarianism). However, as I mentioned

We don't actually have a great definition of what suffering is and, if we model it in terms of preferences, it bottoms out. AKA, there's a point in suffering when I could imagine myself saying something like "This is the worst thing ever; get me out of here no matter what."

so I'm uncertain about the truth behind distinguishing subjective experience from preferences about them.

It is in the context of that uncertainty that I think negative lexical utilitarianism is plausible.

comment by Gregory_Lewis · 2019-10-24T11:39:27.434Z · score: 2 (1 votes) · EA(p) · GW(p)

Yet in these cases, it is the lexicality, not the suffering focus, which is doing the work to avoid the counter-example. A total utilitarian could adopt lexicality in a similar way to avoid the (very/) repugnant conclusion (e.g., lives in a 'repugnant region' between zero and some 'barely worth living' should be 'counted as zero' when weighing things up, save as a tie-breaker between equally-good worlds). [I'm not recommending this approach - lexicality also has formidable costs across the scales from its potential to escape population ethics counter-examples].

It seems to miss the mark to say it is an advantage for suffering-focused views to avoid the (v/) repugnant conclusion, if the 'suffering focus' factor, taken alone, merely exchanges the (v/) repugnant conclusion for something which looks even worse by the lights of common intuition; and where the resources that can be called upon to avoid either counter-example are shared between SF and ¬SF views.

comment by MichaelStJules · 2019-10-25T02:35:27.340Z · score: 1 (1 votes) · EA(p) · GW(p)

I think the tools to avoid all three of the Repugnant Conclusion, the Very Repugnant Conclusion and the Very Sadistic Conclusion (or the similar conclusion you described here [EA · GW]) left available to someone who accepts Mere Addition (or Dominance Addition) are worse than those available to someone who rejects it.

Using lexicality as you describe seems much worse than the way a suffering-focused view would use it, since it means rejecting Non-Elitism, so that you would prioritize the interests of a better off individual over a worse off one in a one-on-one comparison. Some degree of prioritarianism is widely viewed as plausible, and I'd imagine almost no one would find rejecting Non-Elitism acceptable. Rejecting Non-Elitism without using lexicality (like Geometrism) isn't much better, either. You can avoid this by giving up General Non-Extreme Priority (with or without lexicality) instead, and I wouldn't count this against such a view compared to a suffering-focused one.

However, under a total order over populations, to avoid the RC, someone who accepts Mere Addition must reject Non-Antiegalitarianism and Minimal Inequality Aversion (or Egalitarian Dominance, which is even harder to reject). Rejecting them isn't as bad as rejecting Non-Elitism, although I'm not yet aware of any theory which rejects them but accepts Non-Elitism. From this paper:

As mentioned above, Sider's theory violates this principle. Sider rejects his own theory, however, just because it favours unequal distributions of welfare. See Sider (1991, p. 270, fn 10). Ng states that 'Non-Antiegalitarianism is extremely compelling'. See Ng (1989, p. 239, fn 4). Blackorby, Bossert and Donaldson (1997, p. 210), hold that 'weak inequality aversion is satisfied by all ethically attractive . . . principles'. Fehige (1998, p. 12), asks rhetorically '. . . if one world has more utility than the other and distributes it equally, whereas the other doesn't, then how can it fail to be better?'. In personal communication, Parfit suggests that the Non-Anti-Egalitarianism Principle might not be convincing in cases where the quality of the good things in life are much worse in the perfectly equal population. We might assume, however, that the good things in life are of the same quality in the compared populations, but that in the perfectly equal population these things are equally distributed. Cf. the discussion of appeals to non-welfarist values in the last section.

And the general Non-Sadism condition is so close to Mere Addition itself that rejecting it (and accepting the Sadistic Conclusion) is not that great a cost to someone who already rejects Mere Addition, since they've already accepted that adding lives with what might be understood as positive welfare can be bad, and if it is bad, it's small step to accept that it can sometimes be worse than adding a smaller number of lives of negative welfare.

comment by Stijn · 2019-10-24T13:27:14.127Z · score: 1 (1 votes) · EA(p) · GW(p)

Perhaps I'm too sloppy with the terminology. I've rewritten the part about suffering focused ethics in the main text. What I meant is that these theories are characterized by a (procreation) asymmetry. That allows the avoidance of the repugnant sadistic conclusion (which is indeed called the very repugnant conclusion by Arrhenius).

So the suffering focused ethic that I am proposing does not imply the sadistic conclusion that you mentioned (where the state with everyone experiencing extreme suffering is considered better). My personal favorite suffering focused ethic is variable critical level utilitarianism: a flexible version of critical level utilitarianism where everyone can freely choose their own non-negative critical level, which can be different for different persons, different situations and even different choice sets. This flexibility allows one to steer away from the most counterintuitive conclusions.

comment by Pablo_Stafforini · 2019-10-24T14:37:01.150Z · score: 2 (1 votes) · EA(p) · GW(p)
So the suffering focused ethic that I am proposing, does not imply that sadistic conclusion that you mentioned... My personal favorite suffering focused ethic is variable critical level utilitarianism: a flexible version of critical level utilitarianism where everyone can freely choose their own non-negative critical level

As long as the critical level is positive, critical-level utilitarianism does imply the sadistic conclusion. A population where everyone experiences extreme suffering would be ranked above a population where everyone is between neutrality and the critical level, provided the latter population is sufficiently large. The flexibility of the positive critical level can't help avoid this implication.

comment by Stijn · 2019-10-26T07:42:01.996Z · score: 1 (1 votes) · EA(p) · GW(p)

Suppose we can choose between A: adding one person with negative utility -100, versus B: adding a thousand people, each with small positive utility +1. If the critical level were fixed at, say, +10, then situation A decreases social welfare by 110, whereas B decreases it by 9,000, so traditional critical level theory indeed implies the sadistic conclusion of choosing A. However, variable critical level utilitarianism can avoid this: the one person in A can choose a very high critical level for himself in A, while the thousand people in B can set their critical levels in B at, say, +1. Then B gets chosen. In general, people can choose their critical levels such that they can steer away from the most counterintuitive conclusions. The critical levels can depend on the situation and the choice set, which gives the flexibility. You can also model this with game theory, as in my draft article: https://stijnbruers.files.wordpress.com/2018/02/variable-critical-level-utilitarianism-1.pdf
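The contrast between a fixed and a variable critical level can be sketched numerically (the critical-level choices in the variable case are the illustrative ones from this comment; the value 1000 for the person in A is arbitrary):

```python
def contribution(utility, critical_level):
    # Critical-level utilitarianism: an added life contributes
    # (utility - critical level) to social welfare.
    return utility - critical_level

# Fixed critical level of +10 for everyone:
A_fixed = contribution(-100, 10)       # -110
B_fixed = 1000 * contribution(1, 10)   # -9000
assert A_fixed > B_fixed  # A reduces welfare less, so A is chosen (sadistic)

# Variable critical levels: the person in A chooses a very high level
# (1000 here, purely illustrative), the people in B choose +1:
A_var = contribution(-100, 1000)       # -1100
B_var = 1000 * contribution(1, 1)      # 0
assert B_var > A_var  # now B is chosen, avoiding the sadistic conclusion
```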

comment by Stijn · 2019-10-29T19:13:27.614Z · score: 1 (1 votes) · EA(p) · GW(p)

A small addendum: a simplified expected value estimate of reducing X-risks versus wild animal suffering: https://stijnbruers.wordpress.com/2019/10/29/reducing-existential-risks-or-wild-animal-suffering-part-ii-expected-value-estimate/

comment by MichaelStJules · 2019-11-02T19:27:30.778Z · score: 2 (2 votes) · EA(p) · GW(p)

When you say "we do not invest in _ research", do you mean EAs specifically, or all humans? It's worth noting some people not associated with EA will probably do research in each area regardless.

The probability that if we do not invest in X-risk reduction research (but we invest in wild animal suffering reduction research instead), humans will go extinct and animals will not go extinct, and if we do invest in that X-risk research, humans will not go extinct, is p.

I'm having trouble understanding this probability. I don't think it can be interpreted as a single event (even conditionally), unless you're thinking of probabilities over probabilities or probabilities over statements, not actual events that can happen at specific times and places (or over intervals of time, regions in space).

Letting

H = humans go extinct

A = non-human animals go extinct

R_X = we invest in X-risk reduction research (or work, in general)

R_W = we invest in WAS research (or work, in general)

Then the probability of "if we do not invest in X-risk reduction research (but we invest in wild animal suffering reduction research instead), humans will go extinct and animals will not go extinct" looks like

P(H ∧ ¬A | ¬R_X ∧ R_W)

while the probability of "if we do invest in that X-risk research, humans will not go extinct" looks like

P(¬H | R_X)

The events being conditioned on between these two probabilities are not compatible, since the first has ¬R_X, while the second has R_X. So, I'm not sure taking their product would be meaningful either. I think it would make more sense to multiply these two probabilities by the expected value of their corresponding events and just compare them. In general, you would calculate:

E[V | x, w]

where V is the value, x is now the level of investment in X-risk work, w is now the level of investment in WAS work, and E[V | x, w] is the aggregate value. Then you would compare this for different values of x and w, i.e. different levels of investment (or compare the partial derivatives with respect to each of x and w, at a given level of x and w; this would tell you the marginal expected value of extra resources going to each of X-risk work and WAS work).

With I_H being 1 if humans go extinct and 0 otherwise (the indicator function), I_A being 1 if non-human animals go extinct and 0 otherwise, and V depending on them, that expected value could further be broken down to get

E[V | x, w] = Σ_{h,a ∈ {0,1}} P(I_H = h, I_A = a | x, w) · E[V | I_H = h, I_A = a, x, w]
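With purely hypothetical probabilities and conditional values, that breakdown over the four extinction outcomes could be sketched as follows (every number here is made up for illustration):

```python
# Hypothetical sketch of the expected value E[V | x, w], decomposed
# over the four human/animal extinction outcomes.
def expected_value(p_outcomes, values):
    # p_outcomes[(h, a)]: probability that humans (h) and animals (a)
    # go extinct (1) or not (0), given the investment levels x and w.
    # values[(h, a)]: expected value conditional on that outcome.
    return sum(p_outcomes[o] * values[o] for o in p_outcomes)

p_outcomes = {(0, 0): 0.90, (1, 0): 0.05, (0, 1): 0.01, (1, 1): 0.04}
values = {(0, 0): 100, (1, 0): 20, (0, 1): 50, (1, 1): 0}

print(expected_value(p_outcomes, values))  # 91.5
```

Comparing this quantity across different investment levels x and w would give the marginal value of extra resources for each cause, as described above.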

You specify further that

This probability is the product of the probability that there will be a potential extinction event (e.g. 10%), the probability that, given such an event, the extra research in X-risk reduction (with the resources that would otherwise have gone to wild animal suffering research) to avoid that extinction event is both necessary and sufficient to avoid human extinction (e.g. 1%) and the probability that animals will survive the extinction event even if humans do not (e.g. 1%).

But you're conditioning on the probability of a potential extinction event as if X-risk reduction research has no effect on it, only the probability of actual human extinction from that event; X-risk research aims to address both.

The probability that R_X is "both necessary and sufficient" for ¬H is also a bit difficult to think about. One way might be the following, in terms of potential outcomes, but I think this would be difficult to work with, too:

P(H would occur given ¬R_X ∧ ¬H would occur given R_X)

comment by Open_Thinker · 2019-10-26T15:33:30.911Z · score: 1 (1 votes) · EA(p) · GW(p)

(There are a couple reposts of this to Reddit's EA subreddit.)

Maybe I simply missed it, but where do the personal probability estimates come from? If they are simply pulled out of the air, then any mathematical conclusions in the summary are likely invalid; a different result could be obtained just by playing with the numbers, even if the same arguments are maintained.

comment by Stijn · 2019-10-26T18:21:27.081Z · score: 2 (2 votes) · EA(p) · GW(p)

The personal probability estimates are pulled out of my 'air' of intuitive judgments. You are allowed to play with the numbers according to your own intuitive judgments. Breaking down the total estimate into factors allows you to make more accurate estimates, because you better reflect on all your beliefs that are relevant for the estimate.

comment by Open_Thinker · 2019-10-26T21:55:35.233Z · score: 1 (1 votes) · EA(p) · GW(p)

What are the actual calculations you used?

For the wild animal welfare lower bound: 0.99 * 0.99 * 0.75 * 0.99 * 0.95 * 0.9 * 0.8 * 0.9 * 0.8 * 0.9 * 0.9 * 0.8 * 0.95 * 0.95 = 21%?
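Multiplying those fourteen factors does give roughly 21%, so the quoted calculation checks out:

```python
import math

# The fourteen factors of the wild animal welfare lower bound,
# as quoted in the comment above.
factors = [0.99, 0.99, 0.75, 0.99, 0.95, 0.9, 0.8,
           0.9, 0.8, 0.9, 0.9, 0.8, 0.95, 0.95]

lower_bound = math.prod(factors)
print(f"{lower_bound:.1%}")  # 21.0%
```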

How do you determine whether something is 0.90, 0.95, 0.99, or some other number?

In your summary, you state that animal causes have a combined 7/12 chance of being the top priority, whereas human causes have a combined 5/12 chance. However, the error margins are huge, with the original wild animal priority having wide margins of 25-90%.

It does not seem to me that there can be any conclusive determinations made with this when the options are so close relatively and the margins so wide. The calculation is entirely subjective based on your own admission. I am afraid that giving it a veneer of objectivity in this way is in fact misleading, not clarifying.

comment by Stijn · 2019-10-27T18:57:17.268Z · score: 2 (2 votes) · EA(p) · GW(p)

As mentioned, those percentages were my own subjective estimates, and they were determined based on the considerations that I mentioned ("This estimate is based on"). Since I clearly state that these are my personal, subjective estimates, I don't think it is misleading: it does not give a veneer of objectivity.

The clarifying part is that you can now decide whether you agree or disagree with the probability estimates. Breaking the estimate into factors helps you to clarify the relevant considerations and improves your accuracy. It is better than simply guessing the overall estimate of the probability that wild animal suffering is the priority.

If you don't like the wide margins, perhaps you can improve the estimates? But knowing we often have an overconfidence bias (our error estimates are often too narrow), we should a priori not expect narrow error margins and we should correct this bias by taking wider margins.