# The Impossibility of a Satisfactory Population Prospect Axiology

post by elliottthornley · 2021-05-12T15:35:33.662Z · EA · GW · 12 comments

Here's a recent paper of mine that some EAs might be interested in. The link is to the open-access version. Here’s the preprint for those who prefer LaTeX-style typesetting.

Overview: At least since Derek Parfit’s Reasons and Persons, philosophers have been searching for a satisfactory population axiology: a theory of the value of populations. Unfortunately, the project has proved difficult. Some claim that it’s impossible. Several philosophers offer impossibility theorems which seem to prove that no population axiology can satisfy each of a small number of adequacy conditions. Of these impossibility theorems, Gustaf Arrhenius’s six theorems are perhaps the most compelling.

However, it’s recently been pointed out that each of Arrhenius’s theorems depends on a dubious assumption: Finite Fine-Grainedness. This assumption states, roughly, that you can get from a very positive welfare level to a very negative welfare level via a finite number of slight decreases in welfare. Lexical population axiologies deny Finite Fine-Grainedness, and so can satisfy all of Arrhenius’s plausible adequacy conditions. These lexical views have other advantages as well. They cohere nicely with most people’s intuitions in cases like Haydn and the Oyster, and they offer a neat way of avoiding the Repugnant Conclusion.

In this paper, I rework Arrhenius’s impossibility theorems so that lexical views do not escape them. I point out that, since all of our population-affecting actions have a non-zero probability of bringing about more than one distinct population, it is population prospect axiologies that are of practical relevance. I then prove impossibility theorems which state that no population prospect axiology can satisfy each of a small number of adequacy conditions. These theorems do not depend on Finite Fine-Grainedness, so even lexical views violate at least one of their conditions.

How we should respond to these theorems is another question. Though I don't say it in the paper, I believe that the Total View is as satisfactory as population prospect axiologies get. We should accept the Repugnant Conclusion (and even the Very Repugnant Conclusion) because each of the alternatives is even worse.

comment by Ben_West · 2021-05-13T04:24:05.105Z · EA(p) · GW(p)

Thanks for posting this! If I understand your "risky" assumptions correctly, they seem to be targeted at people who believe (as a simple example):

1. Apples are better than oranges, and furthermore no amount of oranges can equate to one apple
2. Nonetheless, it's better to have a high probability of receiving an orange than a small probability of getting one apple

Is that correct?

If so, what is the argument for believing both of these? My assumption is that someone who thinks that apples are lexically better than oranges would disagree with (2) and believe that any probability of an apple is better than any probability of an orange.

Side question: the "risky" axioms seem quite similar to the Archimedean axiom in some variants of the VNM utility theorem. I think you also assume completeness and transitivity – are they enough to recover the entire VNM theorem? (I.e. do your axioms imply that there is a real-valued utility function whose expectation we must be trying to maximize?)

Replies from: MichaelStJules, elliottthornley
comment by MichaelStJules · 2021-05-13T08:46:14.051Z · EA(p) · GW(p)

Side question: the "risky" axioms seem quite similar to the Archimedean axiom in some variants of the VNM utility theorem. I think you also assume completeness and transitivity – are they enough to recover the entire VNM theorem? (I.e. do your axioms imply that there is a real-valued utility function whose expectation we must be trying to maximize?)

This is interesting. It looks like the risky versions would follow from the Archimedean axiom + their non-risky versions.

I don't think you could get the independence axiom from the other axioms, though. Well, technically anything satisfying all of the axioms would satisfy independence, since nothing satisfies all of the axioms, since it's an impossibility theorem, but if you consider only the risky axioms (or the Archimedean axiom), completeness and transitivity, I don't see how you could get the independence axiom. Maybe maximizing the median value of some standard population axiology like total utilitarianism is a counterexample?
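The suggested counterexample can be checked numerically. Below is a minimal sketch (my own illustration, with made-up lotteries and a particular median convention, not anything from the paper or thread): a rule that ranks lotteries over total welfare by their median can rank B strictly above A yet rank their half-and-half mixtures with a third lottery C as tied, which independence forbids.

```python
def median(lottery):
    """Smallest outcome whose cumulative probability reaches 1/2.

    `lottery` is a dict mapping outcomes (e.g. total welfare values)
    to probabilities summing to 1.
    """
    cum = 0.0
    for outcome in sorted(lottery):
        cum += lottery[outcome]
        if cum >= 0.5:
            return outcome

def mix(p, lot1, lot2):
    """Probability mixture: lot1 with probability p, lot2 with 1 - p."""
    out = {}
    for x, q in lot1.items():
        out[x] = out.get(x, 0.0) + p * q
    for x, q in lot2.items():
        out[x] = out.get(x, 0.0) + (1 - p) * q
    return out

A = {1: 1.0}             # total welfare 1 for sure
B = {0: 0.4, 10: 0.6}    # median 10, so B is ranked above A
C = {0: 1.0}

print(median(B), median(A))   # 10 1 -- B beats A outright

# ...but mixing each half-and-half with C produces a tie,
# which contradicts the independence axiom.
Am, Bm = mix(0.5, A, C), mix(0.5, B, C)
print(median(Am), median(Bm))  # 0 0
```

Whether this counts as a clean counterexample depends on how ties at the 1/2 mark are resolved, which is why it is only a sketch.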

Replies from: elliottthornley
comment by elliottthornley · 2021-05-13T14:05:29.093Z · EA(p) · GW(p)

Thanks for your comment! I think the following is a closer analogy to what I say in the paper:

Suppose apples are better than oranges, which are in turn better than bananas. And suppose your choices are:

1. An apple and k bananas for sure.
2. An apple with probability 1 − p and an orange with probability p, along with k oranges for sure.

Then even if you believe:

• One apple is better than any amount of oranges

It still seems as if, for some large k and small p, 2 is better than 1. 2 slightly increases the risk you miss out on an apple, but it compensates you for that increased risk by giving you many oranges rather than many bananas.
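The tension here can be made concrete with a toy model (my own sketch, with illustrative numbers; the lexicographic encoding and fruit values are assumptions, not from the paper). A strictly lexical view compares prospects by the chance of an apple first and only then by the expected value of the lesser fruit, so it must prefer option 1 no matter how large k or how small p:

```python
# Toy model (illustrative assumptions): oranges are worth 1, bananas 0,
# and an apple lexically dominates any amount of either, so prospects
# are compared as (chance of an apple, expected lesser-fruit value),
# apple-probability first.
def evaluate(p_apple, lesser_fruit_value):
    """Summarise a prospect as a lexicographically-compared pair."""
    return (p_apple, lesser_fruit_value)

k, p = 1_000_000, 1e-9

option1 = evaluate(1.0, 0)       # an apple and k bananas for sure
option2 = evaluate(1.0 - p, k)   # apple with prob 1 - p, plus k oranges

# The strictly lexical view says option 1 is better, however large k
# and however small p -- the verdict the "risky" conditions push against.
print(option1 > option2)  # True
```

Python's tuple comparison is itself lexicographic, which is why the pair encoding captures the view directly.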

On your side question, I don't assume completeness! But maybe if I did, then you could recover the VNM theorem. I'd have to give it more thought.

comment by deanspears · 2021-05-12T19:51:34.902Z · EA(p) · GW(p)

Yes!  Nice paper!  Lexical views don't get as much attention in economics as in philosophy, but it's well worth tracking down and sealing off that apparent leak. And, as you point out, being sensible about risk puts a lot of discipline on our proposals for population ethics.

... so let's stop writing in a way that assumes that avoiding the RC is necessary to be "satisfactory." :)  Then a satisfactory population ethics is possible!

Replies from: elliottthornley
comment by elliottthornley · 2021-05-13T13:37:50.993Z · EA(p) · GW(p)

Thanks!

And agreed! The title of the paper is intended as a riff on the title of the chapter where Arrhenius gives his sixth impossibility theorem: 'The Impossibility of a Satisfactory Population Ethics.' I think that an RC-implying theory can still be satisfactory.

comment by MichaelStJules · 2021-05-13T08:22:00.420Z · EA(p) · GW(p)

What goes wrong if we try to use lexical totalism again to avoid your new theorem? You can capture lexicality with a function taking values only in the real numbers, no vectors or anything.

Basically, you just need the maximum difference in the slighter values to never exceed a finite sure difference in the higher value. But you can squash the whole real line into a finite interval with a function like arctan. Consider capturing lexical totalism with the function f defined by

f(x, y) = y + arctan(x)/π,

where you sum the slighter values x and the higher values y across individuals/instances before applying f, and then take the expected value of f for ordering prospects.

By using the integers for the y values, they're spaced out enough that any sure difference in them will always dominate any difference in arctan(x)/π, since the range of arctan(x)/π has length 1. If I understood correctly, this function should also satisfy your risky versions of general non-extreme priority and non-elitism, since for a fixed difference in x, letting the probability of that difference go to 0 makes the difference to the expected value of f go to 0, and so can be outweighed by a finite difference in y. f should also satisfy all of the other exact conditions, since it's the same as lexical totalism in the exact cases.
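Both claimed properties of this construction are easy to check numerically. A minimal sketch, assuming the function takes the form f(x, y) = y + arctan(x)/π with x the slighter-value total and y an integer higher-value total:

```python
import math

def f(x, y):
    """y (integer higher-value total) plus arctan(x)/pi, which
    squashes the slighter-value total x into (-1/2, 1/2)."""
    return y + math.atan(x) / math.pi

# A sure difference of 1 in y dominates any difference in x,
# because arctan(x)/pi never leaves (-1/2, 1/2):
assert f(10**9, 0) < f(-10**9, 1)

# For a fixed difference in x, shrinking its probability shrinks
# the difference it makes to the expected value of f toward 0:
x1, x2 = 0, 100
for q in (0.1, 0.001, 1e-6):
    ev_diff = q * (f(x2, 0) - f(x1, 0))
    print(q, ev_diff)  # shrinks with q
```

The second loop is what lets a finite sure difference in y outweigh an arbitrarily large but sufficiently improbable difference in x.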

I discuss this kind of thing more here [EA(p) · GW(p)].

Replies from: elliottthornley
comment by elliottthornley · 2021-05-13T15:03:41.066Z · EA(p) · GW(p)

Thanks! This is a really cool idea and I'll have to think more about it. What I'll say now is that I think your version of lexical totalism violates RGNEP and RNE. That's because of the order in which I have the quantifiers. I say, 'there exists p such that for any k...'. I think your lexical totalism only satisfies weaker versions of RGNEP and RNE with the quantifiers the other way around: 'for any k, there exists p...'.

Replies from: MichaelStJules
comment by MichaelStJules · 2021-05-13T19:22:11.784Z · EA(p) · GW(p)

Hmm, and the quantifier over the unaffected background population also comes after, rather than letting the earlier objects depend on it. It does look like your conditions are more "uniform" than my proposal might satisfy, i.e. you get existential quantifiers before universal quantifiers, rather than existential quantifiers all last (compare continuity vs uniform continuity, and convergence of a sequence of functions vs uniform convergence). The original GNEP and NE axioms have some uniformity, too.

I think informal explanations of the axioms often don't get this uniformity across, but that suggests to me that the uniformity itself is not so intuitive and compelling in the first place, and it's doing a lot of the work in these theorems. Especially when the conditions are uniform in the unaffected background population, i.e. you require the existence of an object that works for all background populations, that seems to strongly favour separability/additivity/the independence of unconcerned agents, which of course favours totalism.
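For readers less familiar with the analogy, the quantifier shift at issue is exactly the one separating pointwise from uniform continuity:

```latex
% Pointwise continuity: \delta may depend on the point x.
\forall x\;\forall \varepsilon > 0\;\exists \delta > 0\;\forall x':\quad
  |x - x'| < \delta \implies |f(x) - f(x')| < \varepsilon

% Uniform continuity: one \delta must work for every x.
\forall \varepsilon > 0\;\exists \delta > 0\;\forall x\;\forall x':\quad
  |x - x'| < \delta \implies |f(x) - f(x')| < \varepsilon
```

In the axioms, the background population plays the role of x: a uniform condition demands one witness that works for all of them.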

Uniformity also came up here [EA(p) · GW(p)], with respect to Minimal Tradeoffs.

Replies from: elliottthornley
comment by elliottthornley · 2021-05-15T12:07:27.835Z · EA(p) · GW(p)

Yes, that all sounds right to me. Thanks for the tip about uniformity and fanaticism! Uniformity also comes up here, in the distinction between the Quantity Condition and the Trade-Off Condition.

comment by MichaelStJules · 2021-05-13T08:30:35.787Z · EA(p) · GW(p)

I think there's a typo in the definition of Risky General Non-Extreme Priority (exact formulation): you have a , but I think that should be .

Replies from: elliottthornley
comment by elliottthornley · 2021-05-13T14:12:13.499Z · EA(p) · GW(p)

Ah no, that's as it should be!  is saying that  is one of the very positive welfare levels mentioned on page 4.