Choosing the Zero Point

post by orthonormal · 2020-05-22T03:06:46.651Z · score: 34 (15 votes) · EA · GW · 2 comments

This is a link post for https://www.lesswrong.com/posts/rMfpnorsMoRwyn4iP/choosing-the-zero-point

Summary: You can decide what state of affairs counts as neutral, and what counts as positive or negative. Bad things happen if humans do that in our natural way. It's more motivating and less stressful if, when we learn something new, we update the neutral point to [what we think the world really is like now].

A few years back, I read an essay by Rob Bensinger about vegetarianism/veganism, and it convinced me to at least eat much less meat. This post is not about that topic. It's about the way that essay differed, psychologically, from many others I've seen on the same topic, and the general importance of that difference.

Rob's essay referred to the same arguments I'd previously seen, but while other essays concluded with the implication "you're doing great evil by eating meat, and you need to realize what a monster you've been and immediately stop", Rob emphasized the following:

Frame animal welfare activism as an astonishingly promising, efficient, and uncrowded opportunity to do good. Scale back moral condemnation and guilt. LessWrong types can be powerful allies, but the way to get them on board is to give them opportunities to feel like munchkins with rare secret insights, not like latecomers to a not-particularly-fun party who have to play catch-up to avoid getting yelled at. It’s fine to frame helping animals as challenging, but the challenge should be to excel and do something astonishing, not to meet a bare standard for decency.

That shouldn't have had different effects on me than other essays, but damned if it didn't.


Consider a utilitarian Ursula with a utility function U. U is defined over all possible ways the world could be, and for each of those ways it gives you a number. Ursula's goal is to maximize the expected value of U.

Now consider the utility function V, where V always equals U + 1. If a utilitarian Vader with utility function V is facing the same choice (in another universe) as Ursula, then because that +1 applies to every option equally, the right choice for Vader is the same as the right choice for Ursula. The constant difference between U and V doesn't matter for any decision whatsoever!

We represent this by saying that a utility function is only defined up to positive affine transformations. (That means you can also multiply U by any positive number and it still won't change a utilitarian's choices.)
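The invariance claimed above can be checked directly. Here is a minimal sketch (illustrative, not from the post): a chooser that maximizes expected utility picks the same option whether it uses U or any positive affine transform a·U + b with a > 0. The gambles and utility values below are made up for the demonstration.

```python
# Expected-utility choices are invariant under positive affine
# transformations U -> a*U + b (with a > 0).

def best_choice(options, utility):
    """Pick the option with the highest expected utility.
    Each option is a list of (probability, outcome) pairs."""
    def expected(option):
        return sum(p * utility(outcome) for p, outcome in option)
    return max(options, key=expected)

# Two gambles over outcomes 0, 1, 2 (hypothetical numbers)
options = [
    [(0.5, 0), (0.5, 2)],   # coin flip between outcomes 0 and 2
    [(1.0, 1)],             # outcome 1 for sure
]

U = lambda outcome: [0.0, 3.0, 4.0][outcome]   # Ursula's utility
V = lambda outcome: 2 * U(outcome) + 1         # Vader's: a positive affine transform of U

# The constant shift (and positive scaling) never changes the choice.
assert best_choice(options, U) == best_choice(options, V)
```

The shift by +1 (or scaling by 2) adds the same amount to every option's expected utility, so the ranking of options, and hence the decision, is untouched.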

But humans aren't perfect utilitarians, in many interesting ways. One of these is that our brains have a natural notion of outcomes that are good and outcomes that are bad, and the neutral zero point is more or less "the world I interact with every day".

So if we're suddenly told about a nearby bottomless pit of suffering, what happens?

Our brains tend to hear, "Instead of the zero point where we thought we were, this claim means that we're really WAY DOWN IN THE NEGATIVE ZONE".

A few common reactions to this:

The thing about Rob's post is that it suggested an alternative. Instead of keeping the previous zero point and defining yourself as now being very far below it, you can reset yourself to take the new way-the-world-is as the zero point.

Again, this doesn't change any future choice a utilitarian you would make! But it does buy human you peace of mind. What is true is already so: the world was like this even when you didn't believe it.

The psychological benefits of this transformation:

A few last notes:

Now go forth, and make the world better than the new zero!

2 comments


comment by Aaron Gertler (aarongertler) · 2020-05-29T07:36:15.009Z · score: 2 (1 votes) · EA(p) · GW(p)

This essay aligns with my experience in trying to share effective altruism with other people. While I think that a "moral obligation" framing gets closer to my personal reasons for being altruistic, that's almost never how I frame EA in conversation nowadays.

If you like this essay, I also strongly recommend "Excited Altruism".

comment by UriKatz · 2020-05-22T13:01:14.443Z · score: 2 (2 votes) · EA(p) · GW(p)

In my own mind I would file this post under “psychological hacks”, a set of tools that can be extremely useful when used correctly. I am already considering how to apply this hack to some moral dilemmas I am grappling with. I share this because I think it highlights two important points.

First off, the post is endorsing the common marketing technique of framing. I am not an expert in the field, but am fairly confident this technique can influence people’s thoughts, feelings & behavior. Importantly, the framing exercise is not merely confined to the conclusion of the post: “choosing a new zero point”. A big part of the framing is the language the post employs. I am referring to the use of terms like “utility functions” and “positive affine transformations”, and, more broadly, explaining Rob Bensinger’s quote using a popular framework in economics & philosophy. I suspect this is just as significant to the behavioral effect the framing hack produces as the final recommendation the post makes.

Secondly, I wonder if you believe “choosing a new zero point” is something we should do as often as possible, or whether there is a more limited scope of problems it applies to. Might we be normalizing the current state of the world, suggesting a brighter future that we can, but do not have to, strive for? What if small incremental changes are not enough? One example of this would be climate change. Another would be problems like genocide or slavery. Is it enough to be slightly better than the average citizen in a society that permits slavery?