
comment by weeatquince · 2018-03-23T18:36:21.728Z

In general I would be very wary of taking definitions written for an academic philosophical audience and relying on them in other situations. Often the use of technical language by philosophers does not carry over well to other contexts.

The definitions and explanations used here: and here: are, in my mind, better and more useful than the quote above for almost any situation I have been in to date.

ADDITIONAL EVIDENCE FOR THE ABOVE: For example, I have a very vague memory of talking to Will about this and concluding that he had a slightly odd and quite broad definition of "welfarist", where "welfare" in this context just meant 'good for others', without any implication of fulfilling happiness / utility / preferences / etc. This comes out in the linked paper, in the line: "if we want to claim that one course of action is, as far as we know, the most effective way of increasing the welfare of all, we simply cannot avoid making philosophical assumptions. How should we value improving quality of life compared to saving lives? How should we value alleviating non-human animal suffering compared to alleviating human suffering? How should we value mitigating risks ...." etc.

comment by MichaelPlant · 2018-03-20T21:50:23.502Z

The thing I find confusing about what Will says is

effective altruism is the project of using evidence and reason to figure out how to benefit others

I draw attention to 'benefit others'. Two of EA's main causes are farm animal welfare and reducing risks of human extinction. The former is about causing happy animals to exist rather than miserable ones, and the latter is about ensuring future humans exist (and trying to improve their welfare). But it doesn't really make sense to say that you can benefit someone by causing them to exist. It's certainly bizarre to say it's better for someone to exist than not to exist, because if the person doesn't exist there's no object to attach any predicates to. There's been a recent move by some philosophers, such as McMahan and Parfit, to say it can be good (without being better) for someone to exist, but that just seems like philosophical sleight of hand.

A great many EA philosophers, including, I think, Singer, MacAskill, Greaves, and Ord, either are totalists or are very sympathetic to totalism. Totalism is the view that the best outcome is the one with the largest sum of lifetime well-being of all people: past, present, and future. It's known as an impersonal view in population ethics. Outcomes are not deemed good, on impersonal views, because they are good for anyone, or because they benefit anyone; they are good because there is more of the thing which is valuable, namely welfare.

So there's something fishy about saying EA is trying to benefit others when many EA activities, as mentioned, don't benefit anyone, and many EAs think we shouldn't, strictly, be trying to benefit people so much as realising more impersonal value. It would make more sense to replace 'benefit others as much as possible' with 'do as much good as possible'.

comment by Halstead · 2018-03-21T10:27:39.772Z

Does it harm someone to bring them into existence with a life of intense suffering?

comment by MichaelPlant · 2018-03-21T23:38:44.510Z

No. It might be impersonally bad though.

comment by Halstead · 2018-03-22T10:09:38.176Z

On your view, is it good for someone to prevent them from dying? Doesn't the same argument apply - if the person doesn't exist (is dead) there's no object to attach any predicates to.

comment by MichaelPlant · 2018-03-22T11:34:23.804Z

No, I also don't think it makes sense to say death is good or bad for people. Hence it's not true to say you benefit someone by keeping them alive. Given most people do want to say there's something good about keeping people alive, it makes sense to adopt an impersonal locution.

I'm not making an argument about what the correct account of ethics is here, I'm just making a point about the correct use of language. Will's definition can't be capturing what he means and is thus misleading, so 'do the most good' is better than 'benefit others'.

comment by Halstead · 2018-03-22T15:33:52.325Z

In line with the above, one could stick with the EA definition and, when asked to gloss it, say that different people understand benefitting others in different ways: some in such a way that creating new people etc. counts as a benefit, others not. One downside of that is that it excludes the logically possible option of [your account of benefitting others; morality isn't all about benefitting others, sometimes it's about impersonal good].

comment by Halstead · 2018-03-22T14:20:04.742Z

On your account, as you say, bringing people into a life of suffering doesn't harm them and preventing someone from dying doesn't benefit them. So, you could also have said "lots of EA activities are devoted to preventing people from dying and preventing lives of suffering, but neither activity benefits anyone, so the definition is wrong". This is a harder sell, and it seems like you're just criticising the definition of EA on the basis of a weird account of the meaning of 'benefitting others'.

I would guess that the vast majority of people think that preventing a future life of suffering and saving lives both benefit somebody. If so, the vast majority of people would be committed to something which denies your criticism of the definition of EA.

comment by MichaelPlant · 2018-03-22T17:12:13.317Z

weird account of the meaning of 'benefitting others'.

The account might be uncommon in ordinary language, but most philosophers accept that creating lives doesn't benefit the created person. I'm at least being consistent, and I don't think that consistency is objectionable. Calling the view weird is unhelpful.

But suppose people typically think it's odd to claim you're benefiting someone by creating them. Then the stated definition of what EA is about will be at least somewhat misleading to them when you explain EA in greater detail. Consistent with other things I've written on this forum, I think EA should take avoiding being misleading very seriously.

I'm not claiming this is a massive point, it just stuck out to me.

comment by Halstead · 2018-03-22T17:58:27.485Z

Agreed, weirdness accusation retracted.

I suppose there are two ways of securing neutrality: letting people pick their own meaning of 'doing good', and letting people pick their own meaning of 'benefiting others'.

comment by Jamie_Harris · 2018-03-20T23:14:32.375Z

All points make sense. I find, however, that when introducing the idea, people seem slightly confused by the idea of "doing as much good as possible" (I tend to use nearly identical phrasing). I think the idea seems too abstract to them, and I feel compelled to give some kind of more concrete example to help explain. Although I haven't really tried it out as an alternative, the idea of EA aiming to "benefit others" seems like it might be slightly clearer / more imaginable?

If you agree, this then raises the question of whether we should distinguish definitions of EA for "academic" and "outreach" / explanatory purposes. I'd argue that we should probably avoid separate definitions for different contexts, so we might need to keep thinking about how to word a definition which is clear but also allows for nuance?

comment by arikagan · 2018-06-06T01:06:38.705Z

I'd agree with being hesitant to distinguish definitions of EA for "academic" and "outreach" purposes. It seems like that's asking for someone to use the wrong definition in the wrong context.

comment by Sanjay · 2018-03-21T12:49:11.922Z

Really? "doing as much good as possible" is confusing people? I tend to use that language, and I haven't noticed people getting confused (maybe I haven't been observant enough!)

comment by adamaero · 2018-03-22T00:36:24.870Z

Aren't you going further from the definition though?

Any short definition of EA I find to be abstract by itself. Most people I encounter assume it's about doing as many small good things as possible, or, worse, that it's a political philosophy (red/blue thinking). It's only when I give examples of myself, or ask what their cause interests might be, that they slowly break away from the abstract dictionary definitions.

comment by Jamie_Harris · 2018-04-02T17:45:20.828Z

Maybe "confusing" was the wrong word. But I tend to get the sense that people just have no idea what the concept means in practice when I say that, because it's so vague / abstract. I'm guessing that people are thinking along the lines of "what does he mean by 'doing good'? Surely he means something else / something more specific?" But I might just be misreading people slightly too.

comment by kbog · 2018-03-24T21:40:40.143Z

It's not confusing, but it's vague.

comment by MichaelPlant · 2018-03-21T23:39:26.392Z

maybe I haven't been observant enough

I've often observed your lack of observance :)

comment by kbog · 2018-03-24T21:34:53.810Z

Literally everything that doesn't benefit existing beings fails to "benefit others" under your view. E.g. banning Agent Orange is not something that "benefits others". But banning Agent Orange, and lots of other things that benefit future generations, are regarded as benefiting others. This doesn't depend on the totalist view; it's largely uncontroversial in philosophy, and it's commonly assumed in the colloquial sense of benefiting others.

Philosophical sleight of hand would be to deny that we are benefiting others, something that colloquial and common sense views would affirm, just because of a technical philosophical point.

comment by stijnbruers · 2018-04-15T19:01:19.124Z

I suggest leaving it up to the other persons to decide whether they are benefitted. For example: I have a happy, positive life, so I claim that my parents benefitted me when they caused my existence. So there does exist someone (me, now, in this situation) who claims to be benefitted by the choice of someone else (my parents, 38 years ago), even if in the counterfactual I do not exist. So my parents made a choice for a situation where a bit more benefit is added to the total benefit. If you disagree, in the sense that you don't think you were benefitted by your parents when they chose your existence (even if you are as happy as I am), then that means your parents did not create an extra bit of benefit and you were not benefitted. More on this here:

comment by Tuukka_Sarvi · 2018-03-21T13:06:35.119Z

Good point. The choice of moral stance (i.e. totalist, person-affecting, "moral uncertaintist", etc.) is the biggest factor behind any preference ordering over allocations of resources and courses of action. Thus it is possible that further rigorous study of ethics, if it achieved lesser uncertainty between the competing views or greater agreement among scholars, could bring very high returns in terms of impact.

comment by Jan_Kulveit · 2018-03-20T23:29:09.236Z

I agree it may seem to point toward some "person-affecting views" which many EAs consider to be wrong.

Possibly the aim was to convey that the motivation is altruistic?

The disadvantage of 'do as much good as possible' may be that it would associate EA with utilitarianism even more than it already is.

I think of EA as a movement trying to answer the question "how to change the world for the better most effectively with limited resources" in a rational way, and to act on the answer. This seems to me a tiny bit more open than 'do as much good as possible', as it requires just some sort of comparison over world-states, while 'as much good as possible' seems to depend on a more complex structure.