Altruistic Motivations

post by So8res · 2019-01-04T20:38:24.711Z · score: 29 (17 votes) · EA · GW · 6 comments

(Reposted under Nate Soares' account, with Nate's permission, by Forum admin aarongertler [EA · GW]. Nate gave blanket permission to cross-post his old essays, and this is one of my favorites.)

I count myself among the effective altruists. (In fact, I'm at an effective altruism conference at the time of posting.) The effective altruism movement is about figuring out how to do good better, and there are a number of different ways that members of the movement attempt to motivate the idea.

The first camp describes effective altruism as a moral obligation. If you see a drowning child in a pond near you, you are morally obligated to jump in and save them. If a child is dying halfway around the world and can be saved with a donation, then (they argue), you're morally obliged to do that too. This camp talks frequently of "oughts" and "shoulds".

There is another camp which presents a different view. They talk of effective altruism as an exciting opportunity to do lots of good with very little effort. We live in a world where $100 can make a difference, they say, and they suggest looking at underfunded effective charities as a unique opportunity to do lots of good.

I reject both these motivations.

I reject the "altruism is an obligation" motivation because I agree with members of the second camp that guilt and shame are poor motivators, and that self-imposed obligations are often harmful. Be it not upon me to twist your arm and shame you into helping your fellow beings.

I reject the "these are exciting opportunities" motivation because I find it disturbing, on some deep level.

Imagine a stranger comes up to you and says "Hey! I have great news for you! A mad scientist has rigged up a bomb that will destroy Tokyo, and they've linked it to your bank account, such that the only way to disarm it is to wire them $500. Isn't this a wonderful opportunity?"

Something is drastically wrong with that image.

Yes, lives are cheap: it costs on the order of a few thousand dollars to save a life, last time I checked. But I cannot bring myself to say "Lives are cheap! Sale! Everything must go! Buy buy buy!" — because lives are not lawn ornaments. I'm not a life-collector, and I'm not trying to make my "lives saved" score high for its own sake. I save lives for their sake, and if saving a life is extremely cheap, then something has gone horribly wrong. The vast gap between the cost of a life and the value of a life is a measure of how far we have to go; and I cannot pretend that that grim gap is a cause for celebration.

At most, I acknowledge that there is some thrill to being part of the era where people can still eliminate entire diseases in one fell swoop, where people can still affect our chance of expanding beyond our homeworld before it's too late. We have available to us feats of benevolence and altruism that will be completely unavailable to those who follow, who are born in a grown-up civilization where nobody has to die against their will. If you get your kicks from addressing civilization-level extinction threats (colloquially known as "fate-of-the-universe level shit"), then this century is your last chance. But even then, I hesitate to call this an "exciting opportunity." It is terrific, perhaps; but only insofar as the word "terrific" shares a root with "terror." It is exciting, but only in the sense that poker is exciting for the player who has put everything on the line. This is real life, and the stakes are as high as stakes can go. Lives hang in the balance. The entire future hangs in the balance. To call this an "exciting opportunity" rings false, to my ears.

The motivation for effective altruism that I prefer is this:

Low-cost lives are not something to celebrate. They are a reminder that we live on an injured planet, where people suffer for no reason save poor luck. And yet, we also live in a world without any external obligations, without any oughtthorities to ordain what is right and what is wrong.

So drop your obligations. Don't try to help the world because you "should." Don't force yourself because you ought to. Just do what you want to do.

And then, once you are freed of your obligations, if you ever realize that serving only yourself has a hollowness to it; or if you ever realize that part of what you care about is your fellow people; or if you ever learn to see the darkness in this world and discover that you really need the world to be different than it is; if you ever find something on this pale blue dot worth fighting for, worth defending, worth carrying with us to the stars:

then know that there are those of us who fight,

and that we'd be honored to have you at our side.


comment by bmg · 2019-01-06T21:46:55.644Z · score: 22 (10 votes) · EA · GW

I'm a bit concerned that this post is blurring the distinction between two different questions: “Do we have obligations to others?” and “What way of 'framing' effective altruism to yourself is most productive or sits best emotionally?”

For example, it may be the case that "guilt and shame are poor motivators," but this would have no bearing on the question of whether or not we have moral obligations. People who say that we "ought to" help others don't normally say it because they think that obligation is an instrumentally useful framing -- they say it because they believe that what they're saying is true.

Just do what you want to do.

Internalising this principle might make many people happier -- and might even lead many altruistically-inclined people to do more good in the long run.

But I also think the principle is probably false. It implies, for example, that sadists and abusers should just do what they want to do as well. If there are actually any "oughtthorities to ordain what is right and what is wrong," then it seems unlikely these oughtthorities would endorse harming others in such cases. On the other hand, if the post is right about there not being any oughtthorities (i.e. normative facts), then the principle is still at minimum no more correct than the principle that people should "just do what helps others the most."

comment by Wei_Dai · 2019-01-05T08:28:32.911Z · score: 2 (2 votes) · EA · GW

Now I feel bad for naming one of the sections of a recent post "AI design as opportunity and obligation to address human safety problems". I wonder what the Nate-approved way of saying that would be. :)

comment by rohinmshah · 2019-01-05T16:06:05.061Z · score: 2 (2 votes) · EA · GW

Actually, my summary of that post initially dropped the obligation frame because of these reasons :P (Not intentionally, since I try to have objective summaries, but I basically ignored the obligation point while reading and so forgot to put it in the summary.)

I do think the opportunity frame is much more reasonable in that setting, because "human safety problems" are something that you might have been resigned to in the past, and AI design is a surprising option that might let us fix them, so it really does sound like good news. On the other hand, the surprising part about effective altruism is "people are dying for such preventable reasons that we can stop it for thousands of dollars", which is bad news that it's really hard to be excited by.

comment by AmritSidhu-Brar · 2019-01-07T21:16:15.661Z · score: 1 (1 votes) · EA · GW

Thank you very much for sharing this; I think it's a really powerful idea and piece of writing!

I feel similarly to you; however, I also strongly identify with the opportunity framing myself. I think this is because I've always seen it a little differently to how you're expressing it:

For me, the "excitement" in the opportunity framing isn't in finding out that there are people in a very bad situation whom I have an opportunity to help; it comes in finding out that something can be done about problems that I, if only in a non-specific sense, already knew about. Before finding out about EA, I (and I'd imagine many others) already knew that the world has lots of terrible experiences and unhappy people in it, and cared about that, but since I thought there was nothing I could do about it, the only practical response was to ignore it and shut the feelings away. The excitement of EA for me is in finding out that, in fact, you can do something real and measurable to help, without unachievable resources. I'm not celebrating finding out about the bomb; I already knew about the bomb – I've just found out for the first time that there's a way out.

I therefore wouldn't see the opportunity framing as having the problem that you identify. (Although I certainly agree that it's highly distasteful to come anywhere near excitement at how terrible the world is; I have definitely encountered discourse within EA that made me uncomfortable for feeling like it was approaching that.) Is this different to how others who identify with the opportunity framing feel? Rereading Excited Altruism and Cheerfully, I see that that distinction isn't mentioned, but I suppose I'd always assumed that that was how others felt?

comment by anonymous_ea · 2019-01-07T00:06:52.157Z · score: 1 (1 votes) · EA · GW

Thanks for sharing this!

(Reposted under Nate Soares' account, with Nate's permission, by Forum admin aarongertler [EA · GW]. Nate gave blanket permission to cross-post his old essays, and this is one of my favorites.)

I don't understand this. Admins can post on behalf of users, making it appear as if the user themself posted?

comment by aarongertler · 2019-01-07T06:17:54.433Z · score: 2 (2 votes) · EA · GW

Yes. That's currently how our cross-posting program works. Nate's blog isn't active at the moment, but he let us know that we could cross-post old material.