[link] Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term'

post by Pablo_Stafforini · 2019-11-05T20:24:00.445Z · score: 38 (14 votes) · EA · GW · 4 comments

New GPI paper: Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term'. Abstract:

The Asymmetry is the view in population ethics that, while we ought to avoid creating additional bad lives, there is no requirement to create additional good ones. The question is how to embed this view in a complete normative theory, and in particular one that treats uncertainty in a plausible way. After reviewing the many difficulties that arise in this area, I present general ‘supervenience principles’ that reduce arbitrary choices to uncertainty-free ones. In that sense they provide a method for aggregating across states of nature. But they also reduce arbitrary choices to one-person cases, and in that sense provide a method for aggregating across people. The principles are general in that they are compatible with total utilitarianism and ex post prioritarianism in fixed-population cases, and with a wide range of ways of extending these views to variable-population cases. I then illustrate these principles by writing down a complete theory of the Asymmetry, or rather several such theories to reflect some of the main substantive choice-points. In doing so I suggest a new way to deal with the intransitivity of the relation ‘ought to choose A over B’. Finally, I consider what these views have to say about the importance of extinction risk and the long-run future.

4 comments

Comments sorted by top scores.

comment by MichaelStJules · 2019-11-06T01:34:55.444Z · score: 13 (4 votes) · EA · GW

There's also a video in which the author presents the work. Here's the direct link.

comment by MichaelStJules · 2019-11-06T03:40:16.170Z · score: 5 (2 votes) · EA · GW

The Supervenience Theorem is quite strong and interesting, but perhaps too strong for many with egalitarian or prioritarian intuitions; indeed, the paper discusses this in connection with the theorem's conditions. In its proof, it's shown that we should treat any problem like the original position behind the veil of ignorance (the one-person scenario: we treat ourselves as having some probability of being each of the individuals involved, and we consider only our own interests in that case), so that every interpersonal tradeoff is the same as a personal tradeoff. This is something I'm personally quite skeptical of. In fact, if each individual ought to maximize their own expected utility in a way that is transitive and independent of irrelevant alternatives when only their own interests are at stake, then fixed-population Expected Totalism follows (for a fixed population, we should maximize the unweighted total expected utility). The Supervenience Theorem is something like a generalization of Harsanyi's Utilitarian Theorem in this way. EDIT: Ah, it seems this link is made indirectly through this paper, which is cited.
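To spell out the fixed-population claim (notation mine, not the paper's): with a fixed population $1, \dots, n$ and vNM utilities $u_i$, Expected Totalism ranks a lottery $L$ by

$$V(L) = \mathbb{E}\!\left[\sum_{i=1}^{n} u_i(L)\right] = \sum_{i=1}^{n} \mathbb{E}[u_i(L)],$$

which is the form Harsanyi-style aggregation delivers (the equal weights here coming from an added impartiality assumption).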

That being said, the theorem could also be seen as an argument for Expected Totalism, to the extent that each of its conditions can be defended, or at least to whoever leans towards accepting them.

If we've already given up the independence of irrelevant alternatives (whether A or B is better should not depend on what other outcomes are available), it doesn't seem like much of an extra step to give up separability (whether A or B is better should depend only on what's not common to A and B), or Scale Invariance, which is implied by separability. There are different ways to care about the distribution of welfares, and prioritarians and egalitarians might be happy to reject Scale Invariance on these grounds.
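As a toy illustration of how a nonlinear priority weighting conflicts with Scale Invariance (the transform and numbers here are my own, not the paper's): take an ex post prioritarian value function $\sum_i f(w_i)$ with $f(x) = 1 - e^{-x}$, and compare the welfare distributions $A = (10, 0)$ and $B = (3, 3)$. Then

$$f(10) + f(0) \approx 1.000 < f(3) + f(3) \approx 1.900,$$

so $B$ is ranked above $A$; but after scaling all welfares by $0.1$,

$$f(1) + f(0) \approx 0.632 > f(0.3) + f(0.3) \approx 0.518,$$

so the ranking flips and $A$ comes out ahead.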

Prioritarians and egalitarians can also care about ex ante priority/equality, e.g. everyone deserves a fair chance ahead of time, and this would be at odds with Statewise Supervenience. For example, given H = heads and T = tails, each with probability 0.5, they might prefer the second of the two options below, since it looks fairer to Adam ahead of time, as he actually gets a chance at a better life. Statewise Supervenience says the two should be equivalent:
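For instance (a minimal pair of my own construction, with the numbers standing for welfare levels):

- Option 1: H → (Adam 0, Eve 1); T → (Adam 0, Eve 1)
- Option 2: H → (Adam 1, Eve 0); T → (Adam 0, Eve 1)

In each state the pattern of welfare levels is the same ({0, 1} under both H and T), so Statewise Supervenience treats the options as equivalent; but ahead of time, Adam has a 0.5 chance at the better life under Option 2 and no chance at all under Option 1.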


If someone cares about ex post equality, e.g. the final outcome should be fair to everyone in it, they might reject Personwise Supervenience, because personwise-equivalent scenarios can be unfair in their final outcomes. In the pair sketched below, the first option looks unfair to Adam if H happens (ex post), and unfair to Eve if T happens (ex post), but there's no such unfairness in the second option. Personwise Supervenience says we should be indifferent, because from Adam's point of view, ignoring Eve, there's no difference between the two choices, and similarly from Eve's point of view. Note that maximin, which is a limit of prioritarian views, is ruled out.
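(Again a minimal pair of my own construction, treating the numbers as vNM utilities, so that a 50–50 gamble between 0 and 1 is exactly as good for each person as 0.5 for sure:)

- Option 1: H → (Adam 0, Eve 1); T → (Adam 1, Eve 0)
- Option 2: H → (Adam 0.5, Eve 0.5); T → (Adam 0.5, Eve 0.5)

Each person's own prospect is equally good under either option, so Personwise Supervenience treats the options as equivalent; but Option 1 produces an unequal final outcome in every state, while Option 2 is perfectly equal ex post.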

There are, of course, objections to giving these up. Giving up Personwise Supervenience can seem paternalistic, or to override individual interests, if we think individuals ought to maximize their own expected utilities. Giving up Statewise Supervenience also has its problems, as discussed in the paper. See also "Decide As You Would With Full Information! An Argument Against Ex Ante Pareto" by Marc Fleurbaey and Alex Voorhoeve, as well as one of my posts [EA · GW], which fleshes out ex ante prioritarianism (ignoring the problem of personal identity), and the discussion there.

comment by MichaelStJules · 2019-11-06T01:31:48.693Z · score: 1 (1 votes) · EA · GW

Regarding the definition of the Asymmetry,

2. If the additional people would certainly have good lives, it is permissible but not required to create them

is this second part usually stated so strongly, even in a straight choice between two options? Normally I only see "not required", not also "permissible"; but then again, I don't normally see the Asymmetry framed as a comparison between just two options. Stated this way, it rules out average utilitarianism, critical-level utilitarianism, negative utilitarianism, maximin, and many other theories that may say it's sometimes bad to create people with overall good lives, all else equal. In fact, basically any value-monistic consequentialist theory that is complete and transitive, satisfies the independence of irrelevant alternatives and non-antiegalitarianism, and avoids the repugnant conclusion is ruled out.

comment by MichaelStJules · 2019-11-06T01:31:19.206Z · score: 1 (1 votes) · EA · GW

Interesting!

What if we redefine rationality to be relative to choice sets? We might not have to depart too far from vNM-rationality this way.

The axioms of vNM-rationality are justified by Dutch books/money pumps and by stochastic dominance, but the latter can be weakened, too: many outcomes are irrelevant to a given decision, so there's no need to compare against all of them. For example, there's no Dutch book or money pump that only involves changing the probabilities for the size of the universe, and there isn't one that only involves changing the probabilities for logical statements in standard mathematics (ZFC); it doesn't make sense to ask me to pay you to change the probability that the universe is finite. We don't need to consider such lotteries. So, if we can generalize stochastic dominance to be relative to a set of possible choices, then we just need to make sure we never choose an option which is stochastically dominated by another, relative to that choice set. That would be our new definition of rationality.

Here's a first attempt:

Let $D$ be a set of choices or probabilistic lotteries over outcomes (random variables), and let $O$ be the set of all possible outcomes which have nonzero probability in some choice from $D$ (or something more general to accommodate general probability measures). Then for $X, Y \in D$, we say $X$ stochastically dominates $Y$ with respect to $D$ if:

$$P(X \succsim o) \geq P(Y \succsim o) \text{ for all } o \in O,$$

and the inequality is strict for some $o \in O$. This lifts comparisons using $\succsim$, a relation between elements of $O$, to random variables over the elements of $O$. $\succsim$ need not even be complete over $O$ or transitive, but stochastic dominance thus defined will be transitive (perhaps at the cost of losing some comparisons). $\succsim$ could also actually be specific to $D$, not just to $O$.

We could play around with the definition of $O$ here.
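As a concrete sketch of this dominance check (illustrative code of my own, not anything from the paper): lotteries are finitely supported and represented as outcome-to-probability dicts, and `weakly_better` encodes the possibly partial, possibly intransitive relation $\succsim$.

```python
from itertools import chain

def dominates(X, Y, choice_set, weakly_better):
    """Return True iff lottery X stochastically dominates lottery Y
    relative to choice_set. Lotteries are dicts mapping outcomes to
    (nonzero) probabilities; weakly_better(a, b) means 'a is at least
    as good as b' and may be partial and intransitive."""
    # O: every outcome with nonzero probability in some available choice.
    outcomes = set(chain.from_iterable(choice_set))

    def p_at_least(lottery, o):
        # P(lottery yields an outcome at least as good as o).
        return sum(p for x, p in lottery.items() if weakly_better(x, o))

    strict_somewhere = False
    for o in outcomes:
        px, py = p_at_least(X, o), p_at_least(Y, o)
        if px < py:
            return False
        if px > py:
            strict_somewhere = True
    return strict_somewhere

# Example with welfare levels ordered by >=: neither lottery dominates
# the other relative to {A, B}, so both remain permissible choices.
A = {1.0: 0.5, 0.0: 0.5}   # 50-50 between welfare 1 and welfare 0
B = {0.5: 1.0}             # welfare 0.5 for sure
print(dominates(A, B, [A, B], lambda a, b: a >= b))  # False
print(dominates(B, A, [A, B], lambda a, b: a >= b))  # False
```

Note that the example pair comes out incomparable, which is the intended behavior: rationality in this sense only forbids choosing a dominated option, rather than demanding a complete ranking.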

When we consider choices to make now, we need to model the future and consider what new choices we will have to make, and this is how we would avoid Dutch books and money pumps. Perhaps this would be better done in terms of decision policies rather than a single decision at a time, though.

(This approach is based in part on "Exceeding Expectations: Stochastic Dominance as a General Decision Theory" by Christian Tarsney, which also helps to deal with Pascal's wager and Pascal's mugging.)