# Person-affecting intuitions can often be money pumped

post by Rohin Shah (rohinmshah) · 2022-07-07T12:23:16.394Z · EA · GW · 72 comments

## Contents

  FAQ
Further resources


This is a short reference post for an argument I wish was better known. Note that it is primarily about person-affecting intuitions that normal people have, rather than a serious engagement with the population ethics literature, which contains many person-affecting views not subject to the argument in this post.

EDIT: Turns out there was a previous post [EA · GW] making the same argument.

A common intuition people have is that our goal is "Making People Happy, not Making Happy People". That is:

1. Making people happy: if some person Alice will definitely exist, then it is good to improve her welfare
2. Not making happy people: it is neutral to go from "Alice won't exist" to "Alice will exist"[1]. Intuitively, if Alice doesn't exist, she can't care that she doesn't live a happy life, and so no harm was done.

This position is vulnerable to a money pump[2], that is, there is a set of trades that it would make that would achieve nothing and lose money with certainty. Consider the following worlds:

• World 1: Alice won't exist in the future.
• World 2: Alice will exist in the future, and will be slightly happy.
• World 3: Alice will exist in the future, and will be very happy.

(The worlds are the same in every other aspect. It's a thought experiment.)

Then this view would be happy to make the following trades:

1. Receive $0.01[3] to move from World 1 to World 2 ("Not making happy people")
2. Pay $1.00 to move from World 2 to World 3 ("Making people happy")
3. Receive $0.01 to move from World 3 to World 1 ("Not making happy people")

The net result is to lose $0.98 to move from World 1 to World 1.
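The cycle is mechanical enough to simulate. Below is a minimal sketch in Python; the specific welfare numbers and the dollar value the agent places on a utilon are my own illustrative assumptions, not from the post:

```python
# Toy simulation of the money pump against the "making people happy,
# not making happy people" rule. Welfare numbers and the dollar value
# of a utilon are illustrative assumptions.

WELFARE = {1: None, 2: 1, 3: 2}   # None = Alice won't exist
VALUE_PER_UTILON = 1.50           # assumed dollar value of one unit of Alice's welfare

def accepts(cur, new, payment):
    """Does the view take the trade? payment > 0 means it receives money."""
    if WELFARE[cur] is None or WELFARE[new] is None:
        # Alice's existence changes: this is neutral, so the money decides.
        return payment > 0
    # Alice definitely exists either way: weigh her welfare change.
    gain = (WELFARE[new] - WELFARE[cur]) * VALUE_PER_UTILON
    return gain + payment > 0

world, money = 1, 0.0
for cur, new, payment in [(1, 2, 0.01), (2, 3, -1.00), (3, 1, 0.01)]:
    assert accepts(cur, new, payment)  # the view happily takes every trade
    world, money = new, money + payment

# Back in World 1, $0.98 poorer.
print(world, round(money, 2))
```

Any positive valuation of Alice's welfare above $1.00 per utilon produces the same cycle; the pump only needs the "existence is neutral" clause.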

## FAQ

Q. Why should I care if my preferences lead to money pumping?

This is a longstanding debate that I'm not going to get into here. I'd recommend Holden's series on this general topic, starting with Future-proof ethics.

Q. In the real world we'd never have such clean options to choose from. Does this matter at all in the real world?

Q. What if we instead have <slight variant on a person-affecting view>?

Often these variants are also vulnerable to the same issue. For example, if you have a "moderate view" where making happy people is not worthless but is discounted by a factor of (say) 10, the same example works with slightly different numbers:

Let's say that "Alice is very happy" has an undiscounted worth of 2 utilons. Then you would be happy to (1) move from World 1 to World 2 for free, (2) pay 1 utilon to move from World 2 to World 3, and (3) receive 0.5 utilons to move from World 3 to World 1.
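The same check can be run for the moderate view. In this sketch, the 10x discount and World 3's undiscounted worth of 2 utilons come from the example above; World 2's worth of 0.5 utilons is an assumption I've added to make the numbers concrete:

```python
# The money pump against a "moderate view": welfare changes for a person
# who exists either way count in full, while creating (or un-creating) a
# person is discounted by a factor of 10. World 3 = 2 utilons is from the
# post; World 2 = 0.5 utilons is an assumption added for concreteness.

WELFARE = {1: 0.0, 2: 0.5, 3: 2.0}  # 0.0 = Alice won't exist
DISCOUNT = 10

def value_of_trade(cur, new):
    """Net utilons the moderate view assigns to moving cur -> new."""
    if (cur == 1) != (new == 1):
        # Alice's existence changes: discount her whole welfare.
        return (WELFARE[new] - WELFARE[cur]) / DISCOUNT
    return WELFARE[new] - WELFARE[cur]

# (from-world, to-world, payment in utilons; positive = received)
trades = [(1, 2, 0.0), (2, 3, -1.0), (3, 1, 0.5)]
total = 0.0
for cur, new, pay in trades:
    assert value_of_trade(cur, new) + pay >= 0  # each trade looks acceptable
    total += pay

# Back in World 1, half a utilon poorer.
print(total)
```

The discount weakens each existence-affecting step but never flips its sign, which is why the cycle survives with adjusted payments.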

The philosophical literature does consider person-affecting views to which this money pump does not apply. I've found these views to be unappealing for other reasons but I have not considered all of them and am not an expert in the topic.

If you're interested in this topic, Arrhenius proves an impossibility result that applies to all possible population ethics (not just person-affecting views), so you need to bite at least one bullet.

Q. Why doesn't this view anticipate that trade 2 will be available, and so reject trade 1?

You can either have a local decision rule that doesn't take into account future actions (and so excludes this sort of reasoning), or you can have a global decision rule that selects an entire policy at once. I'm talking about the local kind.

You could have a global decision rule that compares worlds and ignores happy people who don't exist in all worlds. In that case you avoid this money pump, but have other problems -- see Chapter 4 of On the Overwhelming Importance of Shaping the Far Future.

You could also take the local decision rule and try to turn it into a global decision rule by giving it information about what decisions it would make in the future. I'm not sure how you'd make this work but I don't expect great results.

Q. This is a very consequentialist take on person-affecting views. Wouldn't a non-consequentialist version (e.g. this comment [EA(p) · GW(p)]) make more sense?

Personally I think of non-consequentialist theories as good heuristics that approximate the hard-to-compute consequentialist answer, and so I often find them irrelevant when thinking about theories applied in idealized thought experiments. If you are instead sympathetic to non-consequentialist theories as being the true answer, then the argument in this post probably shouldn't sway you too much. If you are in a real-world situation where you have person-affecting intuitions, those intuitions are there for a reason and you probably shouldn't completely ignore them until you know that reason.

Q. Doesn't total utilitarianism also have problems?

Yes! While I am more sympathetic to total utilitarianism than person-affecting views, this post is just a short reference post about one particular argument. I am not defending claims like "this argument demolishes person-affecting views" or "total utilitarianism is the correct theory" in this post.

## Further resources

1. ^

For this post I'll assume that Alice's life is net positive, since "asymmetric" views say that if Alice would have a net negative life, then it would be actively bad (rather than neutral) to move Alice from "won't exist" to "will exist".

2. ^

A previous version of this post incorrectly [EA(p) · GW(p)] called this a Dutch book.

3. ^

By giving it $0.01, I'm making it so that it strictly prefers to take the trade (rather than being indifferent to the trade, as it would be if there was no money involved).

## 72 comments

Comments sorted by top scores.

comment by elliottthornley · 2022-07-08T11:48:27.959Z · EA(p) · GW(p)

My impression is that each family of person-affecting views avoids the Dutch book here. Here are four families:

1. Presentism: only people who presently exist matter.
2. Actualism: only people who will exist (in the actual world) matter.
3. Necessitarianism: only people who will exist regardless of your choice matter.
4. Harm-minimisation views (HMVs): minimize harm, where harm is the amount by which a person's welfare falls short of what it could have been.

Presentists won't make trade 2, because Alice doesn't exist yet. Actualists can permissibly turn down trade 3, because if they turn down trade 3 then Alice will actually exist and her welfare matters. Necessitarians won't make trade 2, because it's not the case that Alice will exist regardless of their choice. HMVs won't make trade 1, because Alice is harmed in World 2 but not World 1.

Replies from: rohinmshah, MichaelStJules, Michael_Wiebe

comment by Rohin Shah (rohinmshah) · 2022-07-08T21:55:15.584Z · EA(p) · GW(p)

I agree that most philosophical literature on person-affecting views ends up focusing on transitive views that can't be Dutch booked in this particular way (I think precisely because not many people want to defend intransitivity). I think the typical person-affecting intuitions that people actually have are better captured by the view in my post than by any of these four families of views, and that's the audience to which I'm writing. This wasn't meant to be a serious engagement with the population ethics literature; I've now signposted that more clearly.
EDIT: I just ran these positions (except actualism, because I don't understand how you make decisions with actualism) by someone who isn't familiar with population ethics, and they found all of them intuitively ridiculous. They weren't thrilled with the view I laid out but they did find it more intuitive.

Replies from: elliottthornley

comment by elliottthornley · 2022-07-11T10:26:23.400Z · EA(p) · GW(p)

Okay, that seems fair. And I agree that the Dutch book is a good argument against the person-affecting intuitions you lay out. But the argument only shows that people initially attracted to those person-affecting intuitions should move to a non-Dutch-bookable person-affecting view. If we want to move people away from person-affecting views entirely, we need other arguments. The person-affecting views endorsed by philosophers these days are more complex than the families I listed. They're not so intuitively ridiculous (though I think they still have problems; I have a couple of draft papers on this).

Also a minor terminological note: you've called your argument a Dutch book and so have I. But I think it would be more standard to call it a money pump. Dutch books are a set of gambles all taken at once that are guaranteed to leave a person worse off. Money pumps are a set of trades taken one after the other that are guaranteed to leave a person worse off.

Replies from: rohinmshah

comment by Rohin Shah (rohinmshah) · 2022-07-11T13:03:02.500Z · EA(p) · GW(p)

> If we want to move people away from person-affecting views entirely, we need other arguments.

Fwiw, I wasn't particularly trying to do this. I'm not super happy with any particular view on population ethics and I wouldn't be that surprised if the actual view I settled on after a long reflection was pretty different from anything that exists today, and does incorporate something vaguely like person-affecting intuitions.
I mostly notice that people who have some but not much experience with longtermism are often very aware of the Repugnant Conclusion and other objections to total utilitarianism, and conclude that actually person-affecting intuitions are the right way to go. In at least two cases they seemed to significantly reconsider upon presenting this argument. It seems to me like, amongst the population of people who haven't engaged with the population ethics literature, critiques of total utilitarianism are much better known than critiques of person-affecting intuitions. I'm just trying to fix that discrepancy.

> Also a minor terminological note, you've called your argument a Dutch book and so have I. But I think it would be more standard to call it a money pump.

Thanks, I've changed this.

Replies from: elliottthornley

comment by elliottthornley · 2022-07-12T11:02:10.229Z · EA(p) · GW(p)

> I'm just trying to fix that discrepancy.

I see. That seems like a good thing to do. Here's another good argument against person-affecting views that can be explained pretty simply, due to Tomi Francis. Person-affecting views imply that it's not good to add happy people. But Q is better than P, because Q is better for the hundred already-existing people, and the ten billion extra people in Q all live happy lives. And R is better than Q, because moving to R makes one hundred people's lives slightly worse and ten billion people's lives much better. Since betterness is transitive, R is better than P. R and P are identical except for the extra ten billion people living happy lives in R. Therefore, it's good to add happy people, and person-affecting views are false.

Replies from: MichaelStJules, rohinmshah

comment by MichaelStJules · 2022-09-08T18:40:38.959Z · EA(p) · GW(p)

There are also Parfit's original Mere Addition argument and Huemer's Benign Addition argument for the Repugnant Conclusion.
They're the familiar A ≤ A+ < B arguments: adding a large marginally positive welfare population, and then redistributing the welfare evenly, except with Huemer's, A < A+ strictly, because those in A are made slightly better off in A+. Huemer's is here: https://philpapers.org/rec/HUEIDO

I think this kind of argument can be used to show that actualism endorses the RC and Very RC in some cases, because the original world without the extra people does not maximize "self-conditional value" (if the original people in A are better off in A+, via benign addition), whereas B does, using additive aggregation. I think the Tomi Francis example also only has R maximizing self-conditional value, among the three options, when all three are available. And we could even make the original 100 people worse off than 40 each in R, and this would still hold.

Voting methods extending from pairwise comparisons also don't seem to avoid the problem: https://forum.effectivealtruism.org/posts/fqynQ4bxsXsAhR79c/teruji-thomas-the-asymmetry-uncertainty-and-the-long-term?commentId=ockB2ZCyyD8SfTKtL [EA(p) · GW(p)]

I guess HMVs, presentist and necessitarian views may work to avoid the RC and VRC, but AFAICT, you only get the procreation asymmetry by assuming some kind of asymmetry with these views. And they all have some pretty unusual prescriptions I find unintuitive, even as someone very sympathetic to person-affecting views. Frick's conditional interests still seem promising and could maybe be used to justify the procreation asymmetry for some kind of HMV or negative axiology.

comment by Rohin Shah (rohinmshah) · 2022-07-13T07:44:04.326Z · EA(p) · GW(p)

Nice, I hadn't seen this argument before.

comment by MichaelStJules · 2022-07-08T17:55:19.874Z · EA(p) · GW(p)

This all seems right if all the trades are known to be available ahead of time and we're making all these decisions before Alice would be born. However, we can specify things slightly differently.
Presentists and necessitarians who have made trade 1 will make trade 2 if it's offered after Alice is born, but then they can turn down trade 3 at that point, as trade 3 would mean killing Alice or an impossible world where she was never born.

However, if they anticipate trade 2 being offered after Alice is born, then I think they shouldn't make trade 1, since they know they'll make trade 2 and end up in World 3 minus some money, which is worse than World 1 for presently existing people and necessary people before Alice is born.

HMVs would make trade 1 if they don't anticipate trade 2/World 3 minus some money being an option, but end up being wrong about that.

Replies from: elliottthornley

comment by elliottthornley · 2022-07-11T09:53:50.925Z · EA(p) · GW(p)

Agreed

comment by Michael_Wiebe · 2022-07-08T16:46:12.117Z · EA(p) · GW(p)

Is the difference between actualism and necessitarianism that actualism cares about both (1) people who exist as a result of our choices, and (2) people who exist regardless of our choices; whereas necessitarianism cares only about (2)?

Replies from: elliottthornley

comment by elliottthornley · 2022-07-11T09:54:33.143Z · EA(p) · GW(p)

Yup!

Replies from: Michael_Wiebe

comment by Michael_Wiebe · 2022-07-11T16:31:40.373Z · EA(p) · GW(p)

Hm, then I find necessitarianism quite strange. In practice, how do we identify people who exist regardless of our choices?

Replies from: elliottthornley

comment by elliottthornley · 2022-07-12T11:18:03.284Z · EA(p) · GW(p)

I think in ordinary cases, necessitarianism ends up looking a lot like presentism. If someone presently exists, then they exist regardless of my choices. If someone doesn't yet exist, their existence likely depends on my choices (there's probably something I could do to prevent their existence). Necessitarianism and presentism do differ in some contrived cases, though. For example, suppose I'm the last living creature on Earth, and I'm about to die.
I can either leave the Earth pristine or wreck the environment. Some alien will soon be born far away and then travel to Earth. This alien's life on Earth will be much better if I leave the Earth pristine. Presentism implies that it doesn't matter whether I wreck the Earth, because the alien doesn't exist yet. Necessitarianism implies that it would be bad to wreck the Earth, because the alien will exist regardless of what I do.

comment by MichaelStJules · 2022-07-07T18:52:07.418Z · EA(p) · GW(p)

> More generally, Arrhenius proves an impossibility result that applies to all possible population ethics (not just person-affecting views), so (if you want consistency) you need to bite at least one of those bullets.

That result (The Impossibility Theorem), as stated in the paper, has some important assumptions not explicitly mentioned in the result itself which are instead made early in the paper and assume away effectively all person-affecting views before the 6 conditions are introduced. The assumptions are completeness, transitivity and the independence of irrelevant alternatives. You could extend the result to include incompleteness, intransitivity, dependence on irrelevant alternatives or being in principle Dutch bookable/money pumpable as alternative "bullets" you could bite on top of the 6 conditions. (Intransitivity, dependence on irrelevant alternatives and maybe incompleteness imply Dutch books/money pumps, so you could just add Dutch books/money pumps and maybe incompleteness.)

Replies from: MichaelStJules, rohinmshah

comment by MichaelStJules · 2022-07-07T19:25:43.785Z · EA(p) · GW(p)

There are some other similar impossibility results that apply to I think basically all aggregative views, person-affecting or not (although there are non-aggregative views which avoid them [EA(p) · GW(p)]).
See Spears and Budolfson:

The results are basically that all aggregative views in the literature allow small changes in individual welfares in a background population to outweigh the replacement of an extremely high positive welfare subpopulation with a subpopulation with extremely negative welfare, an extended very repugnant conclusion. The size and welfare levels of the background population, the size of the small changes and the number of small changes will depend on the exact replacement and view.

The result is roughly: for any positive welfare population and negative welfare population, there exists some background population + small average welfare changes to it such that the negative welfare population + the changes to the background population are preferred to the positive welfare population (without the changes to the background population). This is usually through a much much larger number of small changes to the background population than the number of replaced individuals, or the small changes happening to individuals who are extremely prioritized (as in lexical views and some person-affecting views).

(I think the result actually also adds a huge marginally positive welfare population along with the negative welfare one, but I don't think this is necessary or very interesting.)

comment by Rohin Shah (rohinmshah) · 2022-07-07T20:05:15.784Z · EA(p) · GW(p)

> You could extend the result to include incompleteness, intransitivity, dependence on irrelevant alternatives or being in principle Dutch bookable/money pumpable as alternative "bullets" you could bite on top of the 6 conditions.

Yeah, this is what I had in mind.

comment by JP Addison (jpaddison) · 2022-07-07T20:12:54.215Z · EA(p) · GW(p)

Mod note: I've enabled agree-disagree voting [LW · GW] on this thread. This is still in the experimental phase, see the first time we did so here [EA(p) · GW(p)]. Still very interested in feedback.
comment by RedStateBlueState · 2022-07-07T12:46:23.075Z · EA(p) · GW(p)

Maybe I have the wrong idea about what "person-affecting view" refers to, but I thought a person-affecting view was a non-consequentialist ideology that would not take trade 3, i.e. it is neutral about moving from no person to happy person but actively dislikes moving from happy person to no person.

Replies from: Lukas_Gloor, rohinmshah

comment by Lukas_Gloor · 2022-07-07T13:56:47.306Z · EA(p) · GW(p)

Wouldn't the view dislike it if the happy person was certain to be born, but not in the situation where the happy person's existence is up to us? But I agree strongly with person-affecting views working best in a non-consequentialist framework!

I think I find step 1 the most dubious – Receive $0.01 to move from World 1 to World 2 ("Not making happy people").

If we know that world 3 is possible, we're accepting money for creating a person under conditions that are significantly worse than they could be. That seems quite bad even if Alice would rather exist than not exist.

My reply violates the independence of irrelevant(-seeming) alternatives condition. I think that's okay.

To give an example, imagine some millionaire (who uses 100% of their money selfishly) would accept $1,000 to bring a child into existence that will grow up reasonably happy but have a lot of struggles – let's say she'll only have the means of a bottom-10%-income American household. Seems bad if the millionaire could instead bring a child into existence that is better positioned to do well in life and achieve her goals!

Now imagine if a bottom-10%-income American family wants to bring a child into existence, and they will care for the child with all their resources (and are good parents, etc.). Then, it seems neutral rather than bad.

I think of person-affecting principles not as "parts of a consequentialist theory of value" but rather as part of a set of non-consequentialist principles – something like "population ethics as a set of appeals or principles by which newly created people/beings can hold their creators accountable."

Replies from: rohinmshah

comment by Rohin Shah (rohinmshah) · 2022-07-08T06:26:03.364Z · EA(p) · GW(p)

Added an FAQ:

> Q. This is a very consequentialist take on person-affecting views. Wouldn't a non-consequentialist version (e.g. this comment [EA(p) · GW(p)]) make more sense?
>
> Personally I think of non-consequentialist theories as good heuristics that approximate the hard-to-compute consequentialist answer, and so I often find them irrelevant when thinking about theories applied in idealized thought experiments. If you are instead sympathetic to non-consequentialist theories as being the true answer, then the argument in this post probably shouldn't sway you too much. If you are in a real-world situation where you have person-affecting intuitions, those intuitions are there for a reason and you probably shouldn't completely ignore them until you know that reason.
In your millionaire example, I think the consequentialist explanation is "if people generally treat it as bad when Bob takes action A with mildly good first-order consequences when Bob could instead have taken action B with much better first-order consequences, that creates an incentive through anticipated social pressure for Bob to take action B rather than A when otherwise Bob would have taken A rather than B". (Notably, this reason doesn't apply in the idealized thought experiment where no one ever observes your decisions and there is no difference between the three worlds other than what was described.)

Replies from: Lukas_Gloor

comment by Lukas_Gloor · 2022-07-08T08:28:38.895Z · EA(p) · GW(p)

> if people generally treat it as bad when Bob takes action A with mildly good first-order consequences when Bob could instead have taken action B with much better first-order consequences,

On my favored view, this isn't the case. I think of creating new people/beings as a special category. I also am mostly on board with consequentialism applied to limited domains of ethics, but I'm against treating all of ethics under consequentialism, especially if people try to do the latter in a moral realist way where they look for a consequentialist theory that defines everyone's standards of ideally moral conduct.

I am working on a post titled "Population Ethics Without an Objective Axiology." Here's a summary from that post:

• The search for an objective axiology assumes that there's a well-defined "impartial perspective" that determines what's intrinsically good/valuable. Within my framework, there's no such perspective.
• Another way of saying this goes as follows.
My framework conceptualizes ethics as being about goals/interests. [There are, I think, good reasons for this – see my post Dismantling Hedonism-inspired Moral Realism [EA · GW] for why I object to ethics being about experiences, and my post Against Irreducible Normativity [EA · GW] on why I don't think ethics is about things that we can't express in non-normative terminology.] Goals can differ between people [EA · GW] and there's no one correct goal for everyone to adopt.
• In fixed-population contexts, a focus on goals/interests can tell us exactly what to do: we best benefit others by doing what these others (people/beings) would want us to do.
• In population ethics, this approach no longer works so well – it introduces ambiguities. Creating new people/beings changes the number of interests/goals to look out for. Relatedly, creating people/beings of type A instead of type B changes the types of interests/goals to look out for. In light of these options, a "focus on interests/goals" leaves many things under-defined.
• To gain back some clarity, we can note that population ethics has two separate perspectives: that of existing people/beings and that of newly created people/beings. (Without an objective axiology, these perspectives cannot be unified.)
• Population ethics from the perspective of existing people is analogous to settlers standing in front of a giant garden: there's all this unused land and there's a long potential future ahead of us – what do we want to do with it? How do we address various tradeoffs?
• In practice, newly created beings are at the whims of their creators. However, "might makes right" is not an ideal that altruistically-inclined/morally-motivated creators would endorse. Population ethics from the perspective of newly created people/beings is like a court hearing: newly created people/beings speak up for their interests/goals.
(Newly created people/beings have the opportunity to appeal to their creators' moral motivations and altruism, or at least hold them accountable to some minimal standards of pro-social conduct.)
• The degree to which someone's life goals are self-oriented vs. inspired by altruism/morality produces a distinction between minimalist morality and maximally ambitious morality. Minimalist morality is where someone respects both population-ethical perspectives sufficiently to avoid harm on both of them, while following self-oriented interests/goals otherwise. By contrast, effective altruists want to spend (at least a portion of) their effort and resources to "do what's most moral/altruistic." They're interested in maximally ambitious morality.
• Without an objective axiology, the placeholder "do what's most moral/altruistic" is under-defined. In particular, there's a tradeoff where cashing out "doing what's most moral/altruistic" primarily according to the perspective of existing people leaves less room for altruism on the second perspective (that of newly created people), and vice versa.
• Besides, what counts as "doing what's most moral/altruistic" according to the second perspective is under-defined. Without an objective axiology, the interests of newly created people/beings depend on who we create. (E.g., some newly created people would rather not be created than be at a small risk of experiencing intense suffering; others would gladly take significant risks and care immensely about a chance of a happy existence. It is impossible to do right from both perspectives.)

--

Some more thoughts to help make the above intelligible:

I think there's an incongruence behind how people think of population ethics in the standard way. (The standard way being something like: look for an objective axiology, something that has "intrinsic value," then figure out how we are to relate to that value/axiology and whether to add extra principles around it.)
The following two beliefs seem incongruent:

1. There's an objective axiology
2. People's life goals are theirs to choose: they aren't making a mistake of rationality if they don't all share the same life goal

There's a tension between these beliefs – if there was an objective axiology, wouldn't the people who don't orient their goals around that axiology be making a mistake? I expect that many effective altruists would hesitate to say "One of you must be wrong!" when two people discuss their self-oriented life goals and one cares greatly about living forever and the other doesn't.

The claim "people are free to choose their life goals" may not be completely uncontroversial. Still, I expect many effective altruists to already agree with it. To the degree that they do, I suggest they lean in on this particular belief and explore what it implies for comparing the "axiology first" framework to my framework "population ethics without an objective axiology." I expect that leaning in on the belief "people are free to choose their life goals" makes my framework more intuitive and gives a better sense of what the framework is for, what it's trying to accomplish.

To help understand what I mean by minimalist morality vs. maximally ambitious morality, I'll now give some examples of how to think about procreation. These will closely track common sense morality. I'll indicate for each example whether it arises from minimalist morality or some person's take on maximally ambitious morality. Some examples also arise from there not being an objective axiology:

• Parents are obligated to provide a very high standard of care for their children (principle from minimalist morality).
• People are free to decide against becoming parents ("there's no objective axiology").
• Parents are free to want to have as many children as possible ("there's no objective axiology"), as long as the children are happy in expectation (principle from minimalist morality).
• People are free to try to influence other people's moral stances and parenting choices ("there's no objective axiology") – for instance, Joanne could promote anti-natalism and Marianne could promote totalism (their respective interpretations of "doing what's most moral/altruistic") – as long as they remain within the boundaries of what is acceptable in a civil society (principle from minimalist morality).

So, what's the role for (something like) person-affecting principles in population ethics? Basically, if you only want minimalist morality and otherwise want to pursue self-oriented goals, person-affecting principles seem like a pretty good answer to "what should be ethical constraints to your option space for creating new people/beings."

In addition, I think person-affecting principles have some appeal even for specific flavors of "doing what's most moral/altruistic," but only in people who lean toward interpretations of this that highlight benefitting people who already exist or will exist regardless of your actions. As I said in the bullet point summary, there's a tradeoff where cashing out "doing what's most moral/altruistic" primarily according to the perspective of existing people leaves less room for altruism on the second perspective (that of newly created people), and vice versa. (For the "vice versa," note that, e.g., a totalist classical utilitarian would be leaving less room for benefitting already existing people. They would privilege an arguably defensible but certainly not 'objectively correct' interpretation of what it means to benefit newly created people, and they would lean in on that perspective more so than they lean into the perspective "What are existing people's life goals and how do I benefit them.")

Replies from: rohinmshah

comment by Rohin Shah (rohinmshah) · 2022-07-08T09:38:57.733Z · EA(p) · GW(p)

I can't tell what you mean by an objective axiology. It seems to me like you're equivocating between a bunch of definitions:

1.
An axiology is objective if it is universally true / independent of the decision-maker / not reliant on goals / implied by math. (I'm pointing to a cluster of intuitions rather than giving a precise definition.)
2. An axiology is objective if it provides a decision for every possible situation you could be in. (I would prefer to call this a "complete" axiology, perhaps.)
3. An axiology is objective if its decisions can be computed by taking each world, summing some welfare function over all the people in that world, and choosing the decision that leads to the world with a higher number. (I would prefer to call this an "aggregative" axiology, perhaps.)

Examples of definition 1:

> The search for an objective axiology assumes that there's a well-defined "impartial perspective" that determines what's intrinsically good/valuable. [...] if there was an objective axiology, wouldn't the people who don't orient their goals around that axiology be making a mistake?

Examples of definition 2:

> Without an objective axiology, the placeholder "do what's most moral/altruistic" is under-defined. [...] I think there's an incongruence behind how people think of population ethics in the standard way. (The standard way being something like: look for an objective axiology, something that has "intrinsic value," then figure out how we are to relate to that value/axiology and whether to add extra principles around it.)

Examples of definition 3:

> we can note that population ethics has two separate perspectives: that of existing people/beings and that of newly created people/beings. (Without an objective axiology, these perspectives cannot be unified.)

I don't think I'm relying on an objective-axiology-by-definition-1. Any time I say "good" you can think of it as "good according to the decision-maker" rather than "objectively good". I think this doesn't affect any of my arguments.
It is true that I am imagining an objective-axiology-by-definition-2 (which I would perhaps call a "complete axiology"). I don't really see from your comment why this is a problem. I agree this is "maximally ambitious morality" rather than "minimal morality". Personally, if I were designing "minimal morality", I'd figure out what "maximally ambitious morality" would recommend we design as principles that everyone could agree on and follow, and then implement those. I'm skeptical that if I ran through such a procedure I'd end up choosing person-affecting intuitions (in the sense of "Making People Happy, Not Making Happy People"; I think I plausibly would choose something along the lines of "if you create new people, make sure they have lives well beyond barely worth living"). Other people might differ from me, since they have different goals, but I suspect not.

I agree that if your starting point is "I want to ensure that people's preferences are satisfied" you do not yet have a complete axiology, and in particular there's an ambiguity about how to make decisions about which people to create. If this is your starting point, then I think my post is saying "if you resolve this ambiguity in this particular way, you get Dutch booked". I agree that you could avoid the Dutch book by resolving the ambiguity as "I will only create individuals whose preferences I have satisfied as best as I can".

Replies from: Lukas_Gloor

comment by Lukas_Gloor · 2022-07-08T12:37:28.002Z · EA(p) · GW(p)

> Personally if I were designing "minimal morality" I'd figure out what "maximally ambitious morality" would recommend we design as principles that everyone could agree on and follow, and then implement those.

I think this is a crux between us (or at least an instance where I didn't describe very well how I think of "minimal morality").
(A lot of the other points I’ve been making, I see mostly as “here’s a defensible alternative to Rohin’s view” rather than “here’s why Rohin is wrong to not find (something like) person-affecting principles appealing.”)

In my framework, it wouldn’t be fair to derive minimal morality from a specific take on maximally ambitious morality. People who want to follow some maximally ambitious morality (this includes myself) won’t all pick the same interpretation of what that means. Not just for practical reasons, but fundamentally: for maximally ambitious morality, different interpretations are equally philosophically defensible.

Some people may have the objection "Wait, if maximally ambitious morality is under-defined, why adopt confident and specific views for how you want things to be? Why not keep your views on it under-defined, too?” (See Richard Ngo’s post on Moral indefinability [LW · GW].) I have answered this objection in this section [EA · GW] of my post The Moral Uncertainty Rabbit Hole, Fully Excavated [EA · GW].

In short, I give an analogy between "doing what's maximally moral" and "becoming ideally athletically fit." In the analogy, someone grows up with the childhood dream of becoming “ideally athletically fit” in a not-further-specified way. They then have the insight that "becoming ideally athletically fit" has different defensible interpretations – e.g., the difference between being a marathon runner or a 100m sprinter (or someone who is maximally fit in reducing heart-attack risks – which are actually elevated for professional athletes!). Now, it is an open question for them whether to care about a specific interpretation of the target concept or whether to embrace under-definedness.
My advice to them for resolving this question is “think about which aspects of fitness you feel most drawn to, if any.”

Minimal morality is the closest we can come to something “objective” in the sense that it’s possible for philosophically sophisticated reasoners to all agree on it (your first interpretation). (This is precisely because minimal morality is unambitious – it only tells us not to be jerks; it doesn’t give clear guidance for what else to do.) Minimal morality will feel unsatisfying to anyone who finds effective altruism appealing, so we want to go beyond it in places. However, within my framework, we can only go beyond it by forming morality/altruism-inspired life goals that, while we try to make them impartial/objective, have to inevitably lock in subjective judgment calls. (E.g., “Given that you can’t be both at once, do you want to be maximally impartially altruistic towards existing people or towards newly created people?” or “Assuming the latter, given that different types of newly created people will have different views on what’s good or bad for them, how will you define for yourself what it means to maximally benefit (which?) newly created people?”)

> It is true that I am imagining an objective-axiology-by-definition-2 (which I would perhaps call a "complete axiology"). I don't really see from your comment why this is a problem.

I agree it’s not a problem as long as you’re choosing that sort of success criterion (that you want a complete axiology) freely, rather than thinking it’s a forced move. (My sense is that you already don't think of it as a forced move, so I should have been more clear that I wasn't necessarily arguing against your views.)

> I agree that if your starting point is "I want to ensure that people's preferences are satisfied" you do not yet have a complete axiology, and in particular there's an ambiguity about how to make decisions about which people to create. If this is your starting point then I think my post is saying "if you resolve this ambiguity in this particular way, you get Dutch booked". I agree that you could avoid the Dutch book by resolving the ambiguity as "I will only create individuals whose preferences I have satisfied as best as I can".

Yes, that describes it very well! That said, I’m mostly arguing for a framework* for how to think about population ethics rather than a specific, object-level normative theory. So, I’m not saying the solution to population ethics is “existing people’s life goals get comparatively a lot of weight.” I’m only pointing out how that seems like a defensible position, given that the alternative would be to somewhat arbitrarily give them comparatively very little weight.

*By “framework,” I mean a set of assumptions for thinking about a domain, answering questions like “What am I trying to figure out?”, “What makes for a good solution?” and “What are the concepts I want to use to reason successfully about this domain?”

> I can't tell what you mean by an objective axiology. It seems to me like you're equivocating between a bunch of definitions:

I like the three interpretations of "objective" that you distilled! I use the word “objective” in the first sense, but you noted correctly that I’m arguing as though rejecting “there’s an objective axiology” in that sense implies other things, too. (I should make these hidden inferences explicit in future versions of the summary!) I’d say I’ve been arguing hard against the first interpretation of “objective axiology” and softly against your second and third descriptions of desirable features of an axiology/"answer to population ethics."
By “arguing hard” I mean “anyone who thinks this is wrong.” By “arguing softly” I mean “that may be defensible, but there are other defensible alternatives.”

So, on the questions of success criteria for answers to population ethics (whether we're looking for a complete axiology, per your 2nd description, and whether the axiology should "fall out" naturally from world states, rather than be specific to histories or to "who's the person with the choice?", per your 3rd description): I think it's perfectly defensible to end up with answers that satisfy each respective criterion, but I think it's important to keep the option space open while we're discussing population ethics within a community, so we aren't prematurely locking in that "solutions to population ethics" need to be of a specific form. (It shouldn't become uncool within EA to conceptualize things differently, if the alternatives are well formed / well argued.)

I think there's a practical effect where people who think “ethics is objective” (in the first sense) might prematurely restrict their option space. (This won't apply to you.) I think they're looking for the sort of (object-level) normative theories that can fulfill the steep demands of objectivity – theories that all philosophically sophisticated others could agree on despite the widespread differences in people’s moral intuitions. With this constraint, one is likely to view it as a positive feature that a theory is elegantly simple, even if it demands a lot of “bullet biting.” (Moral reasoners couldn’t agree on the same answer if they all relied too much on their moral intuitions, which differ from person to person.)
In other words, if we thought that morality was a coordination game where we try to guess what everyone else is trying to guess will be the answer everyone converges on (and we also have priors that the answer is about “altruism” and “impartiality”), then we’d come up with different solutions than if we started without the "coordination game" assumption. In any case, theories that fit your second and third description tend to be simpler, so they're more appealing to people who endorse "ethics is objective" (in the first sense). That's the link I see between the three descriptions. It’s no coincidence that the examples I gave in my previous comment (moral issues around procreation) track common sense ethics. The less we think “morality is objective” (in the first sense), the more alternatives we have to biting specific bullets.

comment by Rohin Shah (rohinmshah) · 2022-07-07T13:54:46.450Z · EA(p) · GW(p)

Yeah, you could modify the view I laid out to say that moving from "happy person" to "no person" has a disutility equal in magnitude to the welfare that the happy person would have had. This new view can't be Dutch booked because it never takes trades that decrease total welfare. My objection to it is that you can't use it for decision-making because it depends on what the "default" is. For example, if you view x-risk reduction as preventing a move from "lots of happy people to no people" this view is super excited about x-risk reduction, but if you view x-risk reduction as a move from "no people to lots of happy people" this view doesn't care. (You can make a similar objection to the view in the post though it isn't as stark. In my experience, people's intuitions are closer to the view in the post, and they find the Dutch book argument at least moderately convincing.)
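The default-dependence objection can be made concrete with a small sketch. The worlds, welfare numbers, and the scoring rule below are my own illustration (not anything formal from the thread): the modified view counts welfare changes only for people who exist in the *default* world, so the same pair of worlds gets a different verdict depending on which one is labeled the default.

```python
# A minimal sketch of the "modified" person-affecting view discussed above:
# moving away from the default counts the welfare of people who exist in the
# default, but gives no credit for people who exist only in the alternative.
# Worlds and welfare numbers are hypothetical, chosen only for illustration.

def value_of_change(default, alternative):
    """Welfare change as scored by the modified view. Each world is a
    dict mapping person -> welfare level."""
    total = 0
    for person, welfare in default.items():
        total += alternative.get(person, 0) - welfare
    return total

happy_future = {f"future_person_{i}": 10 for i in range(3)}  # 3 happy people
empty_future = {}  # extinction: no future people

# Framing A: default is "lots of happy people"; extinction looks very bad.
print(value_of_change(happy_future, empty_future))  # -30

# Framing B: default is "no people"; creating the happy people counts for nothing.
print(value_of_change(empty_future, happy_future))  # 0
```

The asymmetry between the two printed numbers is exactly the point of the objection: nothing about the worlds changed, only which one was treated as the status quo.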
Replies from: Erich_Grunewald, RedStateBlueState

comment by Erich_Grunewald · 2022-07-07T14:59:42.747Z · EA(p) · GW(p)

> My objection to it is that you can't use it for decision-making because it depends on what the "default" is. For example, if you view x-risk reduction as preventing a move from "lots of happy people to no people" this view is super excited about x-risk reduction, but if you view x-risk reduction as a move from "no people to lots of happy people" this view doesn't care.

That still seems somehow like a consequentialist critique though. Maybe that's what it is and was intended to be. Or maybe I just don't follow? From a non-consequentialist point of view, whether a "no people to lots of happy people" move (like any other move) is good or not depends on other considerations, like the nature of the action, our duties, or virtue. I guess what I want to say is that "going from state A to state B"-type thinking is evaluating world states in an outcome-oriented way, and that just seems like the wrong level of analysis for those other philosophies. From a consequentialist point of view, I agree.

Replies from: rohinmshah

comment by Rohin Shah (rohinmshah) · 2022-07-07T15:04:52.388Z · EA(p) · GW(p)

I totally agree this is a consequentialist critique. I don't think that negates the validity of the critique.

> From a non-consequentialist point of view, whether a "no people to lots of happy people" move (like any other move) is good or not depends on other considerations, like the nature of the action, our duties or virtue. I guess what I want to say is that "going from state A to state B"-type thinking is evaluating world states in an outcome-oriented way, and that just seems like the wrong level of analysis for those other philosophies.

Okay, but I still don't know what the view says about x-risk reduction (the example in my previous comment)?
Replies from: Erich_Grunewald

comment by Erich_Grunewald · 2022-07-07T15:23:51.031Z · EA(p) · GW(p)

> I don't think that negates the validity of the critique.

Agreed -- I didn't mean to imply it was.

> Okay, but I still don't know what the view says about x-risk reduction (the example in my previous comment)?

By "the view", do you mean the consequentialist person-affecting view you argued against, or one of the non-consequentialist person-affecting views I alluded to? If the former, I have no idea. If the latter, I guess it depends on the precise view. On the deontological view I find pretty plausible we have, roughly speaking, a duty to humanity, and that'd mean actions that reduce x-risk are good (and vice versa). (I think there are also other deontological reasons to reduce x-risk, but that's the main one.) I guess I don't see any way that changes depending on what the default is? I'll stop here since I'm not sure this is even what you were asking about ...

Replies from: rohinmshah

comment by Rohin Shah (rohinmshah) · 2022-07-07T20:00:24.683Z · EA(p) · GW(p)

Oh, to be clear, my response to RedStateBlueState's comment was considering a new still-consequentialist view, that wouldn't take trade 3. None of the arguments in this post are meant to apply to e.g. deontological views. I've clarified this in my original response.

comment by RedStateBlueState · 2022-07-07T14:05:36.756Z · EA(p) · GW(p)

Right, the “default” critique is why people (myself included) are consequentialists. But I think the view outlined in this post is patently absurd and nobody actually believes it. Trade 3 means that you would have no reservations about killing a (very) happy person for a couple utilons!

Replies from: rohinmshah, RYC

comment by Rohin Shah (rohinmshah) · 2022-07-07T14:31:37.447Z · EA(p) · GW(p)

Oh, the view here only says that it's fine to prevent a happy person from coming into existence, not that it's fine to kill an already existing person.
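For reference, the three trades under discussion (the ones from the post, of which trade 3 is the last) can be simulated directly. The preference rule below is my own paraphrase of the "Making People Happy, not Making Happy People" intuition, not the post's exact wording: a trade is accepted whenever it is at least welfare-neutral for the people who exist in both worlds, with creation and non-creation treated as neutral.

```python
# Sketch of the money pump from the post. Worlds and prices follow the post;
# the acceptance rule is a paraphrase of the person-affecting intuition.
# Welfare numbers (1 and 10) are hypothetical stand-ins for "slightly happy"
# and "very happy"; payments are treated as tiny welfare-equivalent amounts.

WORLDS = {
    1: {},             # World 1: Alice won't exist
    2: {"Alice": 1},   # World 2: Alice will exist, slightly happy
    3: {"Alice": 10},  # World 3: Alice will exist, very happy
}

def accepts(current, offered, payment):
    """Accept if no person existing in BOTH worlds is made worse off, net of
    payment. People who exist in only one world are ignored ('Not making
    happy people')."""
    shared = WORLDS[current].keys() & WORLDS[offered].keys()
    welfare_change = sum(WORLDS[offered][p] - WORLDS[current][p] for p in shared)
    return welfare_change + payment >= 0

money = 0.0
trades = [(1, 2, 0.01),   # receive $0.01: creating Alice is "neutral"
          (2, 3, -1.00),  # pay $1.00: making Alice happier is good
          (3, 1, 0.01)]   # receive $0.01: un-creating Alice is "neutral"
for current, offered, payment in trades:
    assert accepts(current, offered, payment)  # the view takes every trade
    money += payment

print(round(money, 2))  # -0.98: back in World 1, $0.98 poorer
```

Every trade looks acceptable in isolation, yet the sequence returns to World 1 with a guaranteed loss, which is the Dutch book.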
comment by Richard Y Chappell (RYC) · 2022-07-07T14:43:29.541Z · EA(p) · GW(p)

Trade 3 involves preventing someone from coming into existence. That's very different from killing someone who already exists.

comment by MichaelStJules · 2022-07-07T14:57:03.257Z · EA(p) · GW(p)

I don't actually think Dutch books and money pumps are very practically relevant in charitable/career decision-making. To the extent that they are, you should aim to anticipate others attempting to Dutch book or money pump you and model sequences of decisions, just like you should aim to anticipate any other manipulation or exploitation. EDIT: You don't need to commit to views or decision procedures which are in principle not Dutch bookable/money pumpable. Furthermore, "greedy" (as in "greedy algorithm") or short-sighted EV maximization is also suboptimal in general, since you should in general consider what options will be available in the future depending on your decisions.

Also, it's worth mentioning that, in principle, EV maximization with* unbounded utility/social welfare functions can be Dutch booked/money pumped and violate the sure-thing principle, so if such arguments undermine person-affecting views, they also undermine total utilitarianism. Or at least the typical EV-maximizing unbounded versions, but you can apply a bounded squashing function to the sum of utilities before taking expected values, which will then be incompatible with Harsanyi's theorem, one of the main arguments for utilitarianism. Or you can assume, I think unreasonably, fixed bounds to total value, or that only finitely many outcomes from any given choice are possible, (EDIT) or otherwise assign 0 probability to the problematic possibilities.

*Added in an edit.

Replies from: rohinmshah, Derek Shiller

comment by Rohin Shah (rohinmshah) · 2022-07-08T05:31:12.032Z · EA(p) · GW(p)

I don't think the case for caring about Dutch books is "maybe I'll get Dutch booked in the real world".
I like the Future-proof ethics series on why to care about these sorts of theoretical results. I definitely agree that there are issues with total utilitarianism as well.

Replies from: Samuel Shadrach

comment by acylhalide (Samuel Shadrach) · 2022-07-10T15:45:42.084Z · EA(p) · GW(p)

If I may ask, why do you believe there exist any future-proof ethics? I kinda suspect no ethics are future-proof in this sense, hence had to ask.

Replies from: rohinmshah

comment by Rohin Shah (rohinmshah) · 2022-07-11T08:04:42.307Z · EA(p) · GW(p)

> I kinda suspect no ethics are future-proof in this sense

Which sense do you mean? I like Holden's description:

> I expect some readers will be very motivated by something like "Making ethical decisions that I will later approve of, after I've done more thinking and learning," while others will be more motivated by something like "Making ethical decisions that future generations won't find abhorrent."

Personally I'm thinking more of the former reason than the latter reason. I think "things I'd approve of after more thinking and learning" is reasonably precise as a definition, and seems pretty clearly like a thing that can be approximated.

Replies from: Samuel Shadrach

comment by acylhalide (Samuel Shadrach) · 2022-07-13T16:37:40.338Z · EA(p) · GW(p)

I mean something like: if I'm exposed to persuasive sequence of words A, I'll become strongly convinced of one set of values, and if I'm exposed to a different persuasive sequence of words B, I'd become strongly convinced of a different set of values. Instead of words, it could also be observations or experiences, which I'm assuming are part of "learning and thinking" as intended here. (And with digital wireheading, for instance, we might be able to generate a lot of such experiences to subject each other to.) It isn't obvious to me that different sets of experiences in different futures will still cause us to converge to the same "future-proof" values.
And maybe that's because humans are faulty reasoners, and ideal reasoners can in fact do better. But if I had to self-modify into an ideal reasoner, I'm not sure what a "moral reasoning process" even looks like. Our best (or at least good) formal model of an ideal reasoner is one that just happens to know its unchangeable utility function from birth, not one that reasons about what values it should have, in any meaningful sense. And if you cannot formalise what these moral reasoning processes could look like, even in theory, I also find it easier to believe any such process can be attacked. (Probably just security mindset.) Keen on your thoughts!

Replies from: rohinmshah

comment by Rohin Shah (rohinmshah) · 2022-07-13T18:58:56.775Z · EA(p) · GW(p)

I definitely think these processes can be attacked. When I say "what I'd approve of after learning and thinking more" I'm imagining that there isn't any adversarial action during the learning and thinking. If I were forcibly exposed to a persuasive sequence of words, or manipulated / tricked into thinking that some sequence of words informed me of benign facts when it was in fact selected to hack my mind, that no longer holds.

Replies from: Samuel Shadrach

comment by acylhalide (Samuel Shadrach) · 2022-07-14T18:34:38.667Z · EA(p) · GW(p)

This is fair! So to restate, your claim is that in the absence of such adversaries, moral reasoning processes will in fact all converge to the same place. Even if we're exposed to wildly different experiences/observations/futures, the only thing that determines whether there's convergence or divergence is whether those experiences contain intelligent adversaries or not.

I have some intuitions against this claim too, but I'm not sure how to make my thoughts airtight or present them well. I'll still try! (If you think anything here is valuable or that I should spend more time trying to present anything here better, do tell! My comment might be a bit rambly right now, sorry.)
Question 1: What precisely about our moral reasoning processes makes them unlikely to be attacked by "natural" conditions but attackable by an intelligently designed one? If I had to formally write down every possible future, what would make this distinction between natural and not-natural sharp, and what would make it perfectly, 100% set-theoretically overlap with the distinction between futures where our reasoning processes converge and where they don't?

One way to answer this is to point at some deep structures we can see today in people's moral reasoning processes. If you have any, I'd be keen to see them. Another is to rely on intuitions we have today and trust that in the future we can formalise them. I find this plausible, but if you're claiming this I also don't know how to attach a lot of confidence to just intuitions. Maybe I need to see your intuitions!

Yet another is to claim that sure, in theory there might be "natural" conditions that can attack our reasoning processes, it's just that those natural conditions are super unlikely in practice. This shifts the question from theoretical to practical, from theoretically possible futures to futures actually likely in practice. As a practical matter, I don't know how we can say anything with high confidence about what our natural conditions will look like millions of years from now. As another practical matter, our world today is full of intelligent adversaries trying to hack each other's moral reasoning processes. Sure, the intelligence differential isn't as big as it could be with an AI or digital minds, but there are both intelligence and power differentials. Speaking of power differentials specifically: most people are socially punished for various thoughts, to the point that those thoughts can't be expressed in public; enough such punishment can cause people to end up lying even to themselves. For instance, see hegemonic culture.
Maybe the future will not contain such adversaries, or they won't have similar intelligence or power differentials, but it seems non-obvious at first glance.

Question 2: Could natural conditions ever play the equivalent role of intelligent adversaries? For instance, imagine we had Robin Hanson-style Malthusian competition playing out among digital minds for millions of years. (I'm not sure this is actually likely, but imagine it happened.) Assume all the competing agents have very similar but not perfectly identical values and reasoning processes. Feel free to interpret "similar but not identical" however you wish; I'm just trying to somehow capture the intuition that human values and moral reasoning processes, at least at the surface level, seem similar but not identical – there are some structures that are identical. Now if you did run Malthusian competition for millions of years (which seems a dumb algorithm and a natural algorithm), are you confident we wouldn't end up with agents whose values converged somewhere else? (Versus worlds not run by competition.)

Actually, on second thought, maybe in this example I am conflating "dumb natural algorithms/conditions selecting agents whose values converge differently" with "dumb natural algorithms/conditions selecting observations that can attack the moral reasoning process of the very same agent". Maybe I'll try thinking of an example specifically for the latter. But also, shouldn't your convergence claim also resist selection processes? For instance, if there's imperfect communication between Americans and Chinese, would they both still converge to the same values? If the Chinese died but the Americans lived, or the Americans died but the Chinese lived, would that affect where their moral reasoning processes converge long-term?

There is a trivial way in which dumb natural algorithms can find observations and experiences to attack you, which is by selecting for intelligent adversaries who can create those observations.
For instance, on the stuff about Malthusian competition and hegemonic culture: competition itself is a dumb natural state, not an intelligent adversary. But it can select for intelligent adversaries who will then be able to create experiences to attack your reasoning process. So if a capitalist is able to create a bunch of observations to convince you that working yourself harder is a moral good ... you could say that this is because the capitalist is an intelligent adversary to your moral reasoning process. But you could also say it's because a really dumb natural algorithm of capitalistic competition selected this specific capitalist to be in a position to attack your reasoning process in the first place.

But if you are looking specifically for ways in which natural conditions can convince people of things without creating intelligent adversaries at all ... my intuitions are weaker here. I can still try writing about this if you feel it'll be useful, because even here I feel different futures may get us to converge to different values.

Replies from: rohinmshah

comment by Rohin Shah (rohinmshah) · 2022-07-15T05:55:23.905Z · EA(p) · GW(p)

When I said that there isn't any adversarial action, I really should have said that you are safe and your learning process is under your control. By default I'm imagining a reflection process under which (a) all of your basic needs are met (e.g.
you don't have to worry about starving), (b) you get to veto any particular experience happening to you, (c) you can build tools (or have other people build tools) that help with your reflection, including by building situations where you can have particular experiences, or by creating simulations of yourself that have experiences and can report back, (d) nothing is trying to manipulate or otherwise attack you (unless you specifically asked for the manipulation / attack), whether it is intelligently designed or natural, and (e) you don't have any time pressure on finishing the reflection. To be clear, this is pretty stringent -- the current state of affairs, where you regularly go around talking to people who try to persuade you of stuff, doesn't meet the criteria.

> So to restate, your claim is that in the absence of such adversaries, moral reasoning processes will in fact all converge to the same place.

Given conditions of safety and control over the reflection. It's also not that I think every such process converges to exactly the same place. Rather, I'd say that (a) I feel pretty intuitively happy about anything that you get to via such a process, so it seems fine to get any one of them, and (b) there is enough convergence that it makes sense to view that as a target which we can approximate or move towards.

> Even if we're exposed to wildly different experiences/observations/futures, the only thing that determines whether there's convergence or divergence is whether those experiences contain intelligent adversaries or not.

Part of the reflection process would be to seek out different experiences / observations, so I'm not sure they would be "wildly different".

> What precisely about our moral reasoning process make them unlikely to be attacked by "natural" conditions but attackable by an intelligently designed one? [...] Could natural conditions ever play the equivalent of intelligent adversaries?

If they're attacked by natural conditions, that violates my requirements too.
(I don't think I ever said the adversarial action had to be "intelligently designed" rather than "natural"?) In this process, fundamentally everything that happens to you is meant to be your own choice. It's still possible that you make a mistake, e.g. you send a simulation of yourself to listen to a persuasive argument and then report back; the simulation is persuaded that <bad thing> is great, comes back, and persuades you of it as well. (Obviously you've already considered that possibility and taken precautions, but it happens anyway; your precautions weren't sufficient.) But it at least feels unlikely, e.g. you shouldn't expect to make a mistake (if you did, you should just not do the thing instead).

Replies from: Samuel Shadrach

comment by acylhalide (Samuel Shadrach) · 2022-07-15T14:43:35.061Z · EA(p) · GW(p)

Thanks for the reply! Sorry, I really tried writing you a reply, even deleted a few I wrote, but I think I should probably spend some time on it myself first so I can present it better. If I have to really compress it: in general I feel like we don't have that much "free choice" even in simulations; we're anchored to the observations we've actually had, and our creativity is very limited. And different futures can provide us with wildly different observations, all unimaginable to people in other futures and to people in 2022. But defending this and other things will require a lot more effort on my part. Sorry. Thanks for your time anyway!

Replies from: Lukas_Gloor

comment by Lukas_Gloor · 2022-07-15T15:31:16.841Z · EA(p) · GW(p)

My post The Moral Uncertainty Rabbit Hole, Fully Excavated [EA · GW] seems relevant to the discussion here. In that post, I describe examples of "reflection environments" that define ideal reasoning conditions (to specify one's "idealized values"). I talk about pitfalls of reflection environments and judgment calls we'd have to make within that environment.
(Pitfalls being things that are bad if they happen but could be avoided, at least in theory. Judgment calls are things that aren't bad per se but seem to introduce path dependencies that we can't avoid, which may reduce the chance of convergent outcomes.) I talk about "reflection strategies," which describe how someone goes about their moral reflection inside a reflection environment. I distinguish between conservative and open-minded reflection strategies. They differ primarily on whether someone has already formed convictions (it's a gradual difference). I describe how open-minded reflection strategies come at some risk of leading to under-defined outcomes. (I argue that this isn't necessarily a problem, but it's something people want to be aware of.)

Here's a section from somewhere in the middle of the post that summarizes some conclusions:

Conclusion: “One has to actively create oneself”

“Moral reflection” sounds straightforward – naively, one might think that the right path of reflection will somehow reveal itself. However, as we think of the complexities of setting up a suitable reflection environment and how we’d proceed inside it, what it would be like and how many judgment calls we’d have to make, we see that things can get tricky. Joe Carlsmith summarized it as follows in an excellent post [EA · GW] (what Carlsmith calls “idealizing subjectivism” corresponds to what I call “deferring to moral reflection”):

> My current overall take is that especially absent certain strong empirical assumptions, idealizing subjectivism is ill-suited to the role some hope it can play: namely, providing a privileged and authoritative (even if subjective) standard of value. Rather, the version of the view I favor mostly reduces to the following (mundane) observations:
>
> • If you already value X, it’s possible to make instrumental mistakes relative to X.
> • You can choose to treat the outputs of various processes, and the attitudes of various hypothetical beings, as authoritative to different degrees.
>
> This isn’t necessarily a problem. To me, though, it speaks against treating your “idealized values” the way a robust meta-ethical realist treats the “true values.” That is, you cannot forever aim to approximate the self you “would become”; you must actively create yourself, often in the here and now. Just as the world can’t tell you what to value, neither can your various hypothetical selves — unless you choose to let them. Ultimately, it’s on you.

In my words, the difficulty with deferring to moral reflection too much is that the benefits of reflection procedures (having more information and more time to think; having access to augmented selves, etc.) don’t change what it feels like, fundamentally, to contemplate what to value. For all we know, many people would continue to feel apprehensive about doing their moral reasoning “the wrong way” since they’d have to make judgment calls left and right. Plausibly, no “correct answers” would suddenly appear to us. To avoid leaving our views under-defined, we have to – at some point – form convictions by committing to certain principles or ways of reasoning. As Carlsmith describes it, one has to – at some point – “actively create oneself.” (The alternative is to accept the possibility that one’s reflection outcome may be under-defined.)

It is possible to delay the moment of “actively creating oneself” to a time within the reflection procedure. (This would correspond to an open-minded reflection strategy; there are strong arguments to keep one’s reflection strategy at least moderately open-minded.) However, note that, in doing so, one “actively creates oneself” as someone who trusts the reflection procedure more than one’s object-level moral intuitions or reasoning principles. This may be true for some people, but it isn’t true for everyone.
Alternatively, it could be true for someone in some domains but not others. Overall, I think Holden's notion of future-proof values is intelligible and holds up to deeper analysis, but I'd imagine that a lot of people underestimate the degree to which it's useful to already form convictions on some ways of reasoning or some components of one's values, to avoid that the reflection outcome becomes under-defined to a degree we might find unsatisfying. Replies from: Samuel Shadrach comment by acylhalide (Samuel Shadrach) · 2022-07-16T05:10:48.695Z · EA(p) · GW(p) Thanks for this comment! comment by Derek Shiller · 2022-07-07T15:28:47.720Z · EA(p) · GW(p) unbounded social welfare functions can be Dutch booked/money pumped and violate the sure-thing principle Do you have an example? Replies from: MichaelStJules comment by MichaelStJules · 2022-07-07T16:15:19.953Z · EA(p) · GW(p) See this comment by Paul Christiano on LW based on St. Petersburg lotteries [LW(p) · GW(p)] (and my reply). Replies from: Derek Shiller, RedStateBlueState comment by Derek Shiller · 2022-07-07T16:45:39.718Z · EA(p) · GW(p) Interesting. It reminds me of a challenge for denying countable additivity: God runs a lottery. First, he picks two integers at random (each integer has an equal and 0 probability of being picked, violating countable additivity.) Then he shows one of the two at random to you. You know in advance that there is a 50% chance you'll see the higher one (maybe he flips a coin), but no matter what it is, after you see it you'll be convinced it is the lower one. I'm inclined to think that this is a problem with infinities in general, not with unbounded utility functions per se. Replies from: MichaelStJules comment by MichaelStJules · 2022-07-07T22:41:20.983Z · EA(p) · GW(p) I'm inclined to think that this is a problem with infinities in general, not with unbounded utility functions per se. 
I think it's a problem for the conjunction of allowing some kinds of infinities and doing expected value maximization with unbounded utility functions. I think EV maximization with bounded utility functions isn't vulnerable to "isomorphic" Dutch books/money pumps or violations of the sure-thing principle. E.g., you could treat the possible outcomes of a lottery as all local parts of a larger single universe to aggregate, but then conditioning on the outcome of the first St. Petersburg lottery and comparing to the second lottery would correspond to comparing a local part of the first universe to the whole of the second universe. The move from the whole first universe to the local part of the first universe can't happen via conditioning, though, and the arguments depend on conditioning.

Bounded utility functions have problems that unbounded utility functions don't, but these are in normative ethics and about how to actually assign values (including in infinite universes), not about violating plausible axioms of (normative) rationality/decision theory.

comment by RedStateBlueState · 2022-07-07T16:31:44.036Z · EA(p) · GW(p)

After reading the linked comment, I think the view that total utilitarianism can be Dutch booked is fairly controversial (there is another unaddressed comment I quite agree with), and on a page like this one I think it's misleading to state as fact in a comment that total utilitarianism can be Dutch booked in a similar way that person-affecting views can be.

Replies from: MichaelStJules

comment by MichaelStJules · 2022-07-07T16:52:58.848Z · EA(p) · GW(p)

I should have specified EV maximization with an unbounded social welfare function, although the argument applies somewhat more generally; I've edited this into my top comment.

Looking at Slider's reply [LW(p) · GW(p)] to the comment I linked, assuming that's the one you meant (or did you have another in mind?):

1. Slider probably misunderstood Christiano about truncation, because Christiano meant that you'd truncate the second lottery at a point that depends on the outcome of the first lottery. For any actual outcome X of the original St. Petersburg lottery, half St. Petersburg can be truncated at some point and still have a finite expected value greater than X. (EDIT: However, I guess the sure-thing principle isn't relevant here with conditional truncation, since we aren't comparing only two fixed options anymore.)
2. I don't understand what Slider meant in the second paragraph, and I think it's probably missing the point.
3. The third paragraph misses the point: once the outcome is decided for the first St. Petersburg lottery, it has finite value, and half St. Petersburg still has infinite expected value, which is greater than any finite value.

Replies from: RedStateBlueState

comment by RedStateBlueState · 2022-07-07T17:15:04.296Z · EA(p) · GW(p)

Yes, I should have thought more about Slider's reply before posting; I take back my agreement.

Still, I don't find Dutch booking convincing in Christiano's case. The reason to reject a theory based on Dutch booking is that there is no logical choice to commit to, in this case to maximize EV. I don't think this applies to the Paul Christiano case, because the second lottery does not have higher EV than the first. Yes, once you play the first lottery and find out that it has a finite value, the second one will have higher EV, but until then the first one has higher EV (in an infinite way) and you should choose it.

But again, I think there can be reasonable disagreement about this; I just think equating Dutch booking for the person-affecting view and for the total utilitarian view is misleading. These are substantially different philosophical claims.
Replies from: MichaelStJules, MichaelStJules

comment by MichaelStJules · 2022-07-08T01:05:15.592Z · EA(p) · GW(p)

> Yes, once you play the first lottery and find out that it has a finite value the second one will have higher EV, but until then the first one has higher EV (in an infinite way) and you should choose it.

I think a similar argument can apply to person-affecting views and the OP's Dutch book argument: yes, starting with World 1, once you make trade 1 to get World 2 and find out that trade 2 to World 3 is available, trade 1 will have negative value, but until then trade 1 has positive value and you should choose it.

comment by MichaelStJules · 2022-07-07T18:21:12.312Z · EA(p) · GW(p)

I agree that you can give different weights to different Dutch book/money pump arguments. I do think that if you commit 100% to complete preferences over all probability distributions over outcomes and invulnerability to Dutch books/money pumps, then expected utility maximization over each individual decision with an unbounded utility function is ruled out.

As you mention, one way to avoid this St. Petersburg Dutch book/money pump is to just commit to sticking with A, if A>B ex ante, and regardless of the actual outcome of A (+ some other conditions, e.g. A and B both have finite value under all outcomes, and A has infinite expected value), but switching to C under certain other conditions.

You may have similar commitment moves for person-affecting views, although you might find them all less satisfying. You could commit to refusing one of the 3 types of trades in the OP, or doing so under specific conditions, or just never completing the last step in any Dutch book, even if you'd know you'd want to. I think those with person-affecting views should usually refuse moves like trade 1, if they think they're not too unlikely to make moves like trade 2 after, but this is messier, and depends on your distributions over what options will become available in the future depending on your decisions. The above commitments for St. Petersburg-like lotteries don't depend on what options will be available in the future or your distributions over them.

comment by Jacy · 2022-07-07T12:55:02.050Z · EA(p) · GW(p)

Trade 3 is removing a happy person, which is usually bad in a person-affecting view, possibly bad enough to not be worth less than $0.99 and thus not be Dutch booked.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2022-07-07T13:55:39.308Z · EA(p) · GW(p)

Responded [EA(p) · GW(p)] in the other comment thread.

Replies from: Harrison D
comment by Harrison Durland (Harrison D) · 2022-07-08T04:02:55.666Z · EA(p) · GW(p)

I'm honestly a bit unclear on how that responds to Jacy's point, especially if Jacy's point is similar to what I'm about to write a separate comment about (with the caveat that I might just be unclear/confused on the concept of "person-affecting" ethics).

comment by MichaelStJules · 2022-07-07T18:38:51.675Z · EA(p) · GW(p)

In practice, I think those with person-affecting views should refuse moves like trade 1 if they "expect" to subsequently make moves like trade 2, because World 1 ≥ World 3*. This would depend on the particulars of the numbers, credences and views involved, though.

EDIT: Lukas discussed and illustrated this earlier here [EA(p) · GW(p)].

*EDIT2: replaced > with ≥.
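The expectation-based refusal just described can be made concrete with a toy model. Here p and q are hypothetical subjective probabilities (my own illustration, not from the thread) that trades 2 and 3 respectively become available once the previous trade has been taken:

```python
# Toy model of the OP's money pump under uncertainty (illustrative).
# p: probability trade 2 (pay $1.00, World 2 -> World 3) becomes available
# q: probability trade 3 (receive $0.01, World 3 -> World 1) then becomes available
# The agent takes every trade that is actually offered.

def pump_outcomes(p: float, q: float):
    """Return [(probability, final_world, net_money), ...]."""
    return [
        (1 - p, "World 2", +0.01),        # only trade 1 happens
        (p * (1 - q), "World 3", -0.99),  # trades 1 and 2
        (p * q, "World 1", -0.98),        # full pump: back where we started, poorer
    ]

def guaranteed_pump(p: float, q: float) -> bool:
    """A *sure* loss (ending in World 1, poorer) requires p = q = 1."""
    return p == 1.0 and q == 1.0

outcomes = pump_outcomes(0.5, 0.5)
assert abs(sum(prob for prob, _, _ in outcomes) - 1.0) < 1e-12
```

Only in the p = q = 1 case is the agent certain to end up back in World 1 having lost money; otherwise completing the pump is a risk, arguably comparable to the mundane risks EV maximizers also take.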

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2022-07-08T05:26:00.894Z · EA(p) · GW(p)

> In practice, I think those with person-affecting views should refuse moves like trade 1 if they "expect" to subsequently make moves like trade 2, because World 1 > World 3.

You can either have a local decision rule that doesn't take into account future actions (and so excludes this sort of reasoning), or you can have a global decision rule that selects an entire policy at once. I was talking about the local kind.

You could have a global decision rule that compares worlds and ignores happy people who will only exist in some of the worlds. In that case I'd refer you to Chapter 4 of On the Overwhelming Importance of Shaping the Far Future.

(Nitpick: Under the view I laid out World 1 is not better than World 3? You're indifferent between the two.)

Replies from: MichaelStJules
comment by MichaelStJules · 2022-07-08T09:22:57.891Z · EA(p) · GW(p)

> You can either have a local decision rule that doesn't take into account future actions (and so excludes this sort of reasoning), or you can have a global decision rule that selects an entire policy at once. I was talking about the local kind.

Thanks, it's helpful to make this distinction explicit.

Aren't such local decision rules generally vulnerable to Dutch book arguments, though? I suppose PAVs with local decision rules are vulnerable to Dutch books even when the future options are fixed (or otherwise don't depend on past choices or outcomes), whereas EU maximization with a bounded utility function isn't.

I don't think anyone should aim towards a local decision rule as an ideal, though, so there's an important question of whether your Dutch book argument undermines person-affecting views much at all relative to alternatives. Local decision rules will underweight option value, value of information, investments for the future, and basic things we need to do to survive. We'd underinvest in research, and individuals would underinvest in their own education. Many people wouldn't work, since they only do it for their future purchases. Acquiring food and eating it are separate actions, too.

(Of course, this also cuts against the problems for unbounded utility functions I mentioned.)

> You could have a global decision rule that compares worlds and ignores happy people who will only exist in some of the worlds. In that case I'd refer you to Chapter 4 of On the Overwhelming Importance of Shaping the Far Future.

I'm guessing you mean this is a bad decision rule, and I'd agree. I discuss some alternatives (or directions) I find more promising here [EA(p) · GW(p)].

> (Nitpick: Under the view I laid out World 1 is not better than World 3? You're indifferent between the two.)

Woops, fixed.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2022-07-08T09:51:47.619Z · EA(p) · GW(p)

> I don't think anyone should aim towards a local decision rule as an ideal, though, so there's an important question of whether your Dutch book argument undermines person-affecting views much at all relative to alternatives. Local decision rules will underweight option value, value of information, investments for the future, and basic things we need to do to survive.

I think it's worth separating:

1. How to evaluate outcomes
2. How to make decisions under uncertainty
3. How to make decisions over time

The argument in this post is just about (1). Admittedly I've illustrated it with a sequence of trades (which seems more like (3)) but the underlying principle is just that of transitivity which is squarely within (1). When thinking about (1) I'm often bracketing out (2) and (3), and similarly when I think about (2) or (3) I often ignore (1) by assuming there's some utility function that evaluates outcomes for me. So I'm not saying "you should make decisions using a local rule that ignores things like information value"; I'm more saying "when thinking about (1) it is often a helpful simplifying assumption to consider local rules and see how they perform".

It's plausible that an effective theory will actually need to think about these areas simultaneously -- in particular, I feel somewhat compelled by arguments from (2) that you need to have a bounded mechanism for (1), which is mixing those two areas together. But I think we're still at the stage where it makes sense to think about these things separately, especially for basic arguments when getting up to speed (which is the sort of post I was trying to write).

Replies from: MichaelStJules
comment by MichaelStJules · 2022-07-08T16:14:00.116Z · EA(p) · GW(p)

Do you think the Dutch book still has similar normative force if the person-affecting view is transitive within option sets, but violates IIA? I think such views are more plausible than intransitive ones, and any intransitive view can be turned into a transitive one that violates IIA using voting methods like beatpath/Schulze. With an intransitive view, I'd say you haven't finished evaluating the options if you only make the pairwise comparisons.
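As an illustration of how beatpath/Schulze turns cyclic pairwise rankings into a transitive ranking within an option set, here is a minimal sketch. The options and pairwise strengths are made-up numbers for illustration, not derived from any particular person-affecting view:

```python
# Minimal Schulze/beatpath sketch: the pairwise "beat" strengths contain
# a cycle (A>B>C>D>A), but ranking by beatpath strengths is transitive.

options = ["A", "B", "C", "D"]
# d[x][y]: strength with which x beats y pairwise (illustrative numbers;
# the D>A edge is the weakest link in the cycle).
d = {x: {y: 0 for y in options} for x in options}
d["A"]["B"] = 3
d["B"]["C"] = 3
d["C"]["D"] = 3
d["D"]["A"] = 1

# Beatpath strength p[x][y]: strength of the strongest path from x to y,
# where a path's strength is its weakest edge (Floyd-Warshall variant).
p = {x: {y: d[x][y] if d[x][y] > d[y][x] else 0 for y in options} for x in options}
for k in options:
    for i in options:
        for j in options:
            if i != j != k != i:
                p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))

# Rank by number of beatpath wins; the cycle is broken at its weakest edge.
ranking = sorted(
    options,
    key=lambda x: sum(p[x][y] > p[y][x] for y in options if y != x),
    reverse=True,
)
assert ranking == ["A", "B", "C", "D"]
```

Within this fixed option set the resulting ranking is transitive; the IIA violation only shows up when the option set itself changes and the pairwise strengths are recomputed.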

The options involved might look the same, but now you have to really assume you're changing which options are actually available over time, which, under one interpretation of an IIA-violating view, fails to respect the view's assumptions about how to evaluate options: the options or outcomes available will just be what they end up being, and their value will depend on which are available. Maybe this doesn't make sense, because counterfactuals aren't actual?

Against an intransitive view, it's just not clear which option to choose, and we can imagine deliberating from World 1 to World 1 minus $0.98 following the Dutch book argument if we're unlucky about the order in which we consider the options.

comment by MichaelStJules · 2022-07-08T07:58:59.783Z · EA(p) · GW(p)

Suppose that if I take trade 1, I have a p≤100% subjective probability that trade 2 will be available, will definitely take it if it is, and conditional on taking trade 2, a q≤100% subjective probability that trade 3 will be available and will definitely take it if it is. There are two cases:

1. If p=q=100%, then I stick with World 1 and don't make any trade. No Dutch book. (I don't think p=q=100% is reasonable to assume in practice, though.)
2. Otherwise, p<100% or q<100% (or generally my overall probability of eventually taking trade 3 is less than 100%; I don't need to definitely take the trades if they're available). Based on my subjective probabilities, I'm not guaranteed to make both trades 2 and 3, so I'm not guaranteed to go from World 1 to World 1 but poorer. When I do end up in World 1 but poorer, this isn't necessarily so different from the kinds of mundane errors that EU maximizers can make, too, e.g. if they find out that an option they selected was worse than they originally thought and switch to an earlier one at a cost.

Two possible approaches:

1. A more specific person-affecting approach that handles uncertainty is in Teruji Thomas, 2019. The choices can be taken to be between policy functions for sequential decisions instead of choices between immediate decisions; the results are only sensitive to the distributions over final outcomes, anyway.
2. Alternatively (or maybe this is a special case of Thomas's work), as long as you guarantee transitivity within each set of possible definite outcomes from your corresponding policy functions (even at the cost of IIA), e.g.
by using voting methods like Schulze/beatpath, then you can always avoid (strictly) statewise dominated policies as part of your decision procedure*. This rules out the kinds of Dutch books that guarantee that you're no better off in any state but worse off in some state. I'm not sure whether or not this approach will be guaranteed to avoid (strictly) stochastically dominated options under the more plausible extensions of stochastic dominance when IIA is violated, but this will depend on the extension.

*Over a finite set of policies to choose from. Say outcome distribution X (strictly) statewise dominates outcome distribution Y, given the set of alternative outcome distributions D (including X and Y), if X > Y, where the inequality is evaluated statewise by fixing a state s for X, Y and all the alternatives in D, i.e. X(s) > Y(s) for each state s, with respect to a probability measure over the states.

comment by MichaelStJules · 2022-07-07T20:58:08.859Z · EA(p) · GW(p)

> Q. In step 2, Alice was definitely going to exist, which is why we paid $1. But then in step 3 Alice was no longer definitely going to exist. If we knew step 3 was going to happen, then we wouldn't think Alice was definitely going to exist, and so we wouldn't pay $1.
>
> If your person-affecting view requires people to definitely exist, taking into account all decision-making, then it is almost certainly going to include only currently existing people. This does avoid the Dutch book but has problems of its own, most notably time inconsistency. For example, perhaps right before a baby is born, it takes actions that as a side effect will harm the baby; right after the baby is born, it immediately undoes those actions to prevent the side effects.

Do you mean in the case where we don't know yet for sure if we'll have the option to undo the actions after the baby is born?
If we do know for sure that the option will be available, that we'll be required to undo the actions, and that the net welfare of those who will definitely exist anyway will be worse if we take the actions and then undo them than if we never take them at all, then we wouldn't take the actions that would harm the baby in the first place, because it's worse for those who will definitely exist anyway.

A solution to time inconsistency could be to make commitments ahead of time, which is also a solution for some other decision-theoretic problems, like St. Petersburg lotteries for EU maximization with unbounded utility functions [LW(p) · GW(p)]. Or, if we're accepting time inconsistency in some cases, then we should acknowledge that our reasons for avoiding it aren't generally decisive, and so not necessarily decisive against time-inconsistent person-affecting views in particular.

Replies from: rohinmshah

comment by Rohin Shah (rohinmshah) · 2022-07-08T05:39:35.420Z · EA(p) · GW(p)

I was imagining a local decision rule that was global in only one respect, i.e. choosing which people to consider based on who would definitely exist regardless of what decision-making happens. But in hindsight I think this is an overly complicated rule that no one is actually thinking about; I'll delete it from the post.
comment by Devin Kalish · 2022-07-07T15:41:39.243Z · EA(p) · GW(p)

Maybe this is a little off topic, but while Dutch book arguments are pretty compelling in these cases, I think the strongest and maybe one of the most underrated arguments against intransitive axiologies is Michael Huemer's in "In Defense of Repugnance": https://philpapers.org/archive/HUEIDO.pdf

Basically he shows that intransitivity is incompatible with the combination of:

1. If x1 is better than y1 and x2 is better than y2, then x1 and x2 combined is better than y1 and y2 combined, and
2. If a state of affairs is better than another state of affairs, then it is not also worse than that state of affairs,

by giving the example of:

A>B>C>D>A
A+C > B+D > C+A

My nitpicky analytic professor pointed out that technically this form of the argument only works on axiologies in which cycles of four states exist, not all intransitive axiologies, but it's simple to tweak it to work for any number of states by lining them up in order, and comparing the same state shifted over once and then twice, leading to the even more absurd conclusion that a state of affairs can be both better and worse than itself:

A>B>C>D>E>A
A+B+C+D+E > B+C+D+E+A > C+D+E+A+B

In a way, I think this result is even more troubling than the Dutch book, because it prevents you from ranking worlds even relative to a single other world in a way that isn't sensitive to just the way the same world is described.

Replies from: MichaelStJules, Devin Kalish

comment by MichaelStJules · 2022-07-07T16:29:00.837Z · EA(p) · GW(p)

Person-affecting views aren't necessarily intransitive; they might instead give up the independence of irrelevant alternatives, so that A≥B among one set of options, but A<B among another set of options. I think this is actually an intuitive way to explain the repugnant conclusion. If your available options are S, then the rankings among them are:

1. S={A, A+, B}: A>B, B>A+, A>A+
2. S={A, A+}: A+≥A
3. S={A, B}: A>B
4. S={A+, B}: B>A+

A person-affecting view would need to explain why A>A+ when all three options are available, but A+≥A when only A+ and A are available. However, violating IIA like this is also vulnerable to a Dutch book/money pump.

Replies from: David Johnston, Devin Kalish

comment by David Johnston · 2022-08-24T04:19:12.690Z · EA(p) · GW(p)

I think this makes more sense than initial appearances. If A+ is the current world and B is possible, then the well-off people in A+ have an obligation to move to B (because B>A+). If A is the current world, and A+ is possible but B impossible, then the people in A incur no new obligations by moving to A+, hence indifference. If A is the current world and both A+ and B are possible, then moving to A+ saddles the original people with an obligation to further move the world to B. But the people in A, by supposition, don't derive any benefit from the move to A+, and the obligation to move to B harms them. On the other hand, the new people in A+ don't matter because they don't exist in A. Thus A>A+ in this case.

Basically: options create obligations, and when we're assessing the goodness of a world we need to take into account welfare + obligations (somehow).

comment by Devin Kalish · 2022-07-07T17:07:18.842Z · EA(p) · GW(p)

I'm really showing my lack of technical savvy today, but I don't really know how to embed images, so I'll have to sort of awkwardly describe this. For the classic version of the mere addition paradox this seems like an open possibility for a person-affecting view, but I think you can force pretty much any person-affecting view into intransitivity if you use the version in which every step looks like some version of A+.
In other words, you start with something like A+, then in the next world, you have one bar that looks like B, and in addition another, lower but equally wide bar; then in the next step, you equalize to higher than the average of those in a B-like manner, and in addition another equally wide, lower bar appears, etc. This seems to demand that basically any person-affecting view prefer the next step to the one before it, but the step two back to that one.

Replies from: MichaelStJules, Devin Kalish

comment by MichaelStJules · 2022-07-07T19:08:51.419Z · EA(p) · GW(p)

Views can be transitive within each option set, but have previous pairwise rankings change as the option set changes, e.g. as new options become available. I think you're just calling this intransitivity, but it's not technically intransitivity by definition, and is instead a violation of the independence of irrelevant alternatives. Transitivity + violating IIA seems more plausible to me than intransitivity, since the former is more action-guiding.

Replies from: Devin Kalish

comment by Devin Kalish · 2022-07-07T19:27:05.488Z · EA(p) · GW(p)

I agree that there's a difference, but I don't see how that contradicts the counterexample I just gave. Imagine a person-affecting view that is presented with every possible combination of people/welfare levels as options. I am suggesting that, even if it is sensitive to irrelevant alternatives, it will have strong principled reasons to favor some of the options in this set cyclically, if not doing so means ranking a world that is better on average for the pool of people the two have in common lower. Or maybe I'm misunderstanding what you're saying?

Replies from: MichaelStJules

comment by MichaelStJules · 2022-07-07T20:04:09.274Z · EA(p) · GW(p)

There are person-affecting views that will rank X<Y or otherwise not choose X over Y even if the average welfare of the individuals common to both X and Y is higher in X.
A necessitarian view might just look at all the people common to all available options at once, maximize their average welfare, and then ignore contingent people (or use them to break ties, say). Many individuals common to two options X and Y could be ignored this way, because they aren't common to all available options, and so are still contingent.

Christopher J. G. Meacham, 2012 (EA Forum discussion here [EA · GW]) describes another transitive person-affecting view, where I think something like "the available alternatives are so relevant, that they can even overwhelm one world being better on average than another for every person the two have in common", which you mentioned in your reply, is basically true. For each option, and each individual in the option, we take the difference between their maximum welfare across options and their welfare in that option, add them up, and then minimize the sum. Crucially, it's assumed that when someone doesn't exist in an option, we don't add their welfare loss from their maximum for that option, and when someone has a negative welfare in an option but doesn't exist in another option, their maximum welfare across options will be at least 0. There are some technical details for matching individuals with different identities across worlds when there are people who aren't common to all options.

So, in the repugnant conclusion, introducing B makes A>A+, because it raises the maximum welfares of the extra people in A+.

Some views may start from pairwise comparisons that would give the kinds of cycles you described, but then apply a voting method like beatpath voting to rerank or select options and avoid cycles within option sets. This is done in Teruji Thomas, 2019. I personally find this sort of approach most promising.
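The harm-minimization rule from Meacham described above can be sketched in a few lines. The welfare numbers are made up for illustration (two original people, plus two extra people in A+ and B, with B equalizing everyone above A+'s average), and the special handling of negative welfare is omitted since none occurs here:

```python
# Sketch of Meacham's (2012) harm-minimization view as described above.
# Options map individuals to welfare levels; absent individuals are
# simply missing from the dict. Welfare numbers are illustrative only.

def total_harm(option, available):
    """Sum, over people existing in `option`, of their maximum welfare
    across the available options they exist in, minus their welfare here."""
    return sum(
        max(o[person] for o in available if person in o) - welfare
        for person, welfare in option.items()
    )

def best_options(named_options):
    """Choose the option name(s) minimizing total harm."""
    opts = list(named_options.values())
    harms = {name: total_harm(opt, opts) for name, opt in named_options.items()}
    low = min(harms.values())
    return {name for name, h in harms.items() if h == low}

A      = {"p1": 10, "p2": 10}
A_plus = {"p1": 10, "p2": 10, "p3": 1, "p4": 1}
B      = {"p1": 6, "p2": 6, "p3": 6, "p4": 6}

# With only A and A+ available, the extra people's best welfare is 1,
# so neither option harms anyone: indifference between A and A+.
assert best_options({"A": A, "A+": A_plus}) == {"A", "A+"}

# Adding B raises p3/p4's maximum to 6, so A+ now harms them: A wins.
assert best_options({"A": A, "A+": A_plus, "B": B}) == {"A"}
```

This reproduces the IIA violation discussed in this thread: the verdict on A vs. A+ flips once B enters the option set.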
Replies from: Devin Kalish

comment by Devin Kalish · 2022-07-07T20:53:04.728Z · EA(p) · GW(p)

This is interesting; I'm especially interested in the idea of applying voting methods to ranking dilemmas like this, which I'm noticing is getting more common. On the other hand, it sounds to me like person-affecting views mostly solve transitivity problems by functionally becoming less person-affecting in a strong, principled sense, except in toy cases.

Meacham's view sounds like it converges to averagism on steroids from your description as you test it against a larger and more open range of possibilities (worse-off people lose a world points, but so do more people, since it sums the differences up). If you modify it to look at the average of these differences, then the theory seems to become vulnerable to the repugnant conclusion again, as the quantity of added people who are better off in one step of the argument than the last can wash out the larger per-individual difference for those who have existed since earlier steps.

Meanwhile, the necessitarian view as you describe it seems to yield either no results in practice if taken as described, in a large set of worlds with no one common to every world, or, if reinterpreted to only include the people common to the most worlds, it sort of gives you a utility monster situation in which a single person, or some small range of possible people, determines almost all of the value across all different worlds. All of this does avoid intransitivity though, as you say.

comment by Devin Kalish · 2022-07-07T17:09:53.039Z · EA(p) · GW(p)

Or I guess maybe it could say that the available alternatives are so relevant, that they can even overwhelm one world being better on average than another for every person the two have in common?

comment by Devin Kalish · 2022-07-07T15:47:19.167Z · EA(p) · GW(p)

(also, does anyone know how to make a ">" sign on a new line without it doing some formatty thing?
I am bad with this interface, sorry)

Replies from: technicalities

comment by Gavin (technicalities) · 2022-07-07T15:51:38.773Z · EA(p) · GW(p)

You could turn off markdown formatting in settings.

Replies from: Devin Kalish

comment by Devin Kalish · 2022-07-07T16:12:59.403Z · EA(p) · GW(p)

Seems to have worked, thanks!

comment by Amber Dawn (Amber) · 2022-07-08T08:30:30.486Z · EA(p) · GW(p)

I don't understand this – why would someone with this view want to receive $0.01 to move from World 1 to World 2, and World 3 to World 1, rather than being neutral either way?