Why is effective altruism new and obvious?

post by Katja_Grace · 2014-09-30T22:10:16.444Z · EA · GW · Legacy · 34 comments

Contents

  Explanation 1: Turning something into a social movement is a higher bar than thinking of it
  Explanation 2: There are lots of obvious things, and it takes some insight to pick the important ones to emphasize
  Explanation 3: Effectiveness and altruism don't appear to be the contentious points
  Explanation 4: The disagreement isn't about effectiveness or altruism

Ben Kuhn, playing Devil's advocate:

Effective altruists often express surprise that the idea of effective altruism only came about so recently. For instance, my student group recently hosted Elie Hassenfeld for a talk in which he made remarks to that effect, and I’ve heard other people working for EA organizations express the same sentiment. But no one seems to be actually worried about this—just smug that they’ve figured out something that no one else had.
The “market” for ideas is at least somewhat efficient: most simple, obvious and correct things get thought of fairly quickly after it’s possible to think them. If a meme as simple as effective altruism hasn’t taken root yet, we should at least try to understand why before throwing our weight behind it. The absence of such attempts—in other words, the fact that non-obviousness doesn’t make effective altruists worried that they’re missing something—is a strong indicator against the “effective altruists are actually trying” hypothesis.

I think this is a good point. If you find yourself in a small group advocating for an obvious and timeless idea, and it's 2014, something a bit strange is probably going on. As a side note, if people actually come out and disagree, this is more worrying and you should really take some time out to be puzzled by it.

I can think of a few reasons that 'effective altruism' might seem so obvious and yet the EA movement might only just be starting.

I will assume that the term 'effective altruism' is intended to mean roughly what the words in it suggest: helping other people in efficient ways. If you took 'effective altruism' to be defined by principles regarding counterfactual reasons and supererogatory acts and so on, I don't think you should be surprised that it is a new movement. However I don't think that's what 'effective altruism' generally means to people; rather these recent principles are an upshot of people's efforts to be altruistic in an effective manner.

Explanation 1: Turning something into a social movement is a higher bar than thinking of it

Perhaps people have thought of effective altruism before, and they just didn't make much noise about it. Perhaps even because it was obviously correct. There is no temptation to start a 'safe childcare' movement because people generally come up with that on their own (whether or not they actually carry it out). On the other hand, if an idea is too obviously correct for anyone to advocate for, you might expect more people to actually be doing it, or trying to do it. I'm not sure how many people were trying to be effective altruists in the past on their own, but I don't think it was a large fraction.

However there might be some other reason that people would think of the idea, but fail to spread it.

Explanation 2: There are lots of obvious things, and it takes some insight to pick the important ones to emphasize

Consider an analogy to my life. Suppose that after having been a human for many years I decide to exercise regularly and go to sleep at the same time every night. In some sense, these things are obvious, perhaps because my housemates and parents and the media have pointed them out to me on a regular basis basically since I could walk and sleep for whole nights at a time. So perhaps I should not be surprised if these make my life hugely better. However my acquaintances have also pointed out so many other 'obvious' things - that I should avoid salt and learn the piano and be nice and eat vitamin tablets and wear make-up and so on - that it's hard to pick out the really high priority things from the rest, and each one takes scarce effort to implement and try out.

Perhaps, similarly, while 'effectiveness' and 'altruism' are obvious, so are 'tenacity' and 'wealth' and 'sustainability' and so on. This would explain the world taking a very long time to get to them. However it would also suggest that we haven't necessarily picked the right obvious things to emphasize.

If this were the explanation, I would expect everyone to basically be on board with the idea, just not to emphasize it as a central principle in their life. I'm not sure to what extent this is true.

Explanation 3: Effectiveness and altruism don't appear to be the contentious points

Empirically, I think this is why 'effective altruism' didn't stand out to me as an important concept prior to meeting the Effective Altruists.

As a teen, I was interested in giving all of my money to the most cost-effective charities (or rather, saving it to give later). It was also clear that virtually everyone else disagreed with me on this, which seemed a bit perplexing given their purported high valuation of human lives and the purported low cost of saving them. So I did actually think about our disagreement quite a bit. It did not occur to me to advocate for 'effectiveness' or 'altruism' or both of them in concert, I think because these did not stand out as the ideas that people were disagreeing over. My family was interested in altruism some of the time, and seemed reasonably effective in their efforts. As far as I could tell, where we differed in opinion was in something like whether people in foreign countries really existed in the same sense that people you can see do; whether it was 'okay' in some sense to buy a socially sanctioned amount of stuff, regardless of the opportunity costs; or whether one should have inconsistent beliefs.

Explanation 4: The disagreement isn't about effectiveness or altruism

A salient next hypothesis then is that the contentious claim made by Effective Altruism is in fact not about effectiveness or altruism, and is less obvious.

'Effective' and 'altruism' together sound almost tautologically good. Altruism is good for the world almost by definition, and if you are going to be altruistic, you would be a fool to be ineffective at it.

In practice, Effective Altruism advocates for measurement and comparison. If measurement and comparison were free, this would obviously be a good idea. However since they are not, effective altruism stands for putting more scarce resources into measurement and comparison, when measurement is hard, comparison is demoralizing and politically fraught, and there are many other plausible ways that in practice philanthropy could be improved. For instance, perhaps it's more effective to get more donors to talk to each other, or to improve the effectiveness of foundation staff at menial tasks. We don't know, because at this meta level we haven't actually measured whether measuring things is the most effective thing to do. It seems very plausible, but this is a much easier thing to imagine a person reasonably disagreeing with.

Effective altruists sometimes criticize people who look to overhead ratios and other simple metrics of performance, because of course these are not the important thing. We should care about results. If there is a charity that gets better results, but has a worse overhead ratio, we should still choose it! Who knew? As far as I can tell, this misses the point. Indeed, overhead ratio is not the same as quality. But surely nobody was suggesting that it was. If you were perfectly informed about outcomes, indeed you should ignore overhead ratios. If you are ignorant about everything, overhead ratios are a gazillion times cheaper to get hold of than data on results. According to this open letter, overhead ratios are 'inaccurate and imprecise' because 75-85% of organizations incorrectly report their spending on grants. However this means that 15-25% report it correctly, which in my experience is a lot more than even try to report their impact, let alone do it correctly. Again, the question of whether to use heuristics like this seems like an empirical one of relative costs and accuracies, where it is not at all obvious that we are correct.

Then there appear to be real disagreements about ethics and values. Other people think of themselves as having different values to Effective Altruists, not as merely liking their aggregative consequentialism to be ineffective. They care more about the people around them than those far away, or they care more about some kinds of problems than others, and they care about how things are done, not just the outcome. Given the large number of ethical disagreements in the world, and unpopularity of utilitarianism, it is hardly a new surprise that others don't find this aspect of Effective Altruism obviously good.

If Effective Altruism really stands for pursuing unusual values, and furthermore doing this via zealous emphasis on accurate metrics, I'm not especially surprised that it wasn't thought of years ago, nor that people disagree. If this is true though, I fear that we are using impolite debate tactics.

34 comments


comment by Stefan_Schubert · 2014-10-01T10:49:12.789Z · EA(p) · GW(p)

I think that many Marxists have thought of themselves as effective or scientific and altruistic. Marxism is a rather muddled intellectual tradition, but I think that because it used to have both altruistic and scientific connotations, lots of people who could have become effective altruists became Marxists instead. Influential ideologies have a tendency to "suck in" anyone in the vicinity in the logical space of ideas. Hence I think that the fall of Marxism did facilitate the birth of effective altruism.

Another reason why effective altruism is spreading now rather than, say, thirty years ago, is that the public debate and society at large are becoming ever more empirically minded. Watch, e.g., the development in journalism, where data journalists such as Nate Silver are gradually replacing armchair pundits. Effective altruism is thus part of a larger trend that is making all parts of society ever more evidence-based (of course, this trend started a very long time ago - it goes back at least to the Scientific Revolution - though it perhaps is steeper now than before).

comment by Peter_Hurford · 2014-10-01T01:59:10.974Z · EA(p) · GW(p)

Some other potential reasons:

  • The people with high amounts of disposable income who also are philosophically minded enough to notice and care about altruism and effectiveness tend to be computer programmers. Such careers are relatively recent.

  • The rise of high-quality evidence for what works in aid is relatively recent.

  • I don't know how strong the connection is, but the growth of the EA movement seems to be strongly connected to the rise in atheism. Any common cause for both is probably relatively recent.

  • The growth of the movement probably depends on the internet in some large part, and the internet is relatively recent.

Replies from: Katja_Grace
comment by Katja_Grace · 2014-10-02T05:34:48.123Z · EA(p) · GW(p)

Interesting suggestions.

I'd expect the internet to make many minority causes and interests more successful by letting their rare supporters get together, and I think it has had this effect. However that doesn't seem to explain why they are minority causes to begin with.

Do you mean that before computer programming the philosophically minded just didn't have lucrative professions?

Have we recently passed some threshold in high quality evidence for what works in aid? I'd expect in future we think of 2014 level of evidence as low, and still say we only recently got good evidence.

Replies from: tomstocker
comment by tomstocker · 2015-05-22T12:36:07.881Z · EA(p) · GW(p)

Before the internet, it probably didn't make sense to organise around such a high level of abstraction away from concrete goals. Before the modern economy it probably didn't make that much sense to invest so much time into thinking about alternatives in this way, though some utilitarians seem to have done so anyway.

comment by atucker · 2014-10-01T00:02:36.647Z · EA(p) · GW(p)

I agree with your points about there being disagreement about EA, but I don't think that they fully explain why people didn't come up with it earlier.

I think that there are two things going on here -- one is that the idea of thinking critically about how to improve other people's lives without much consideration of who they are or where they live and then doing the result of that thinking isn't actually new, and the other is that the particular style in which the EA community pursues that idea (looking for interventions with robust academic evidence of efficacy, and then supporting organizations implementing those interventions that accountably have a high amount of intervention per marginal dollar) is novel, but mostly because the cultural background for it seeming possible as an option at all is new.

To the first point, I'll just list Ethical Culture, the Methodists, John Stuart Mill's involvement with the East India Company, communists, Jesuits, and maybe some empires. I could go into more detail, but doing so would require more research than I want to do tonight.

To the second point, I don't think that anything resembling modern academic social science existed until relatively recently (around the 1890s?), and so prior to that there was nothing resembling peer-reviewed academic evidence about the efficacy of an intervention.

Given time for these fields to develop methods, interrupted by two world wars, we find that "evidence" in this sense was not actually developed until fairly recently, and that prior to that people had reasons for thinking that their ideas were likely to work (and maybe even be the most effective plans), but those reasons would not constitute well-supported evidence in the sense used by the current EA community.

Also the internet makes it much easier for people with relatively rare opinions to find each other, and enables much more transparency much more easily than was possible prior to it.

Replies from: Katja_Grace
comment by Katja_Grace · 2014-10-02T07:17:03.556Z · EA(p) · GW(p)

the other is that the particular style in which the EA community pursues that idea (looking for interventions with robust academic evidence of efficacy, and then supporting organizations implementing those interventions that accountably have a high amount of intervention per marginal dollar) is novel, but mostly because the cultural background for it seeming possible as an option at all is new.

The kinds of evidence available for some EA interventions, e.g. existential risk ones, don't seem different in kind from the evidence probably available earlier in history. Even in the best cases, EAs often have to lean on a combination of more rigorous evidence and some not very rigorous or evidenced guesses about how indirect effects work out etc. So if the more rigorous evidence available were substantially less rigorous than it is, I think I would expect things to look pretty much the same, with us just having lower standards - e.g. only being willing to trust certain people's reports of how interventions were going. So I'm not convinced that some recently attained level of good evidence has much to do with the overall phenomenon of EA.

Replies from: atucker, Evan_Gaensbauer
comment by atucker · 2014-10-02T16:28:51.293Z · EA(p) · GW(p)

My other point was that EA isn't new, but that we don't recognize earlier attempts because they're not really doing evidence in a way that we would recognize.

I also think that x-risk was basically not something that many people would worry about until after WWII. Prior to WWII there was not much talk of global warming, and AI, genetic engineering, and nuclear war weren't really on the table yet.

comment by Evan_Gaensbauer · 2014-10-17T12:26:30.915Z · EA(p) · GW(p)

So I'm not convinced that some recently attained level of good evidence has much to do with the overall phenomenon of EA.

Effective altruism as a social movement emerged as the confluence of clusters of non-profit organizations based out of San Francisco, New York, and Oxford.

comment by Michael_PJ · 2014-10-02T22:02:48.226Z · EA(p) · GW(p)

I think you're quite right about this, and I would identify one of the key points of disagreement as cosmopolitanism. I think that post resonated with a lot of people precisely because it highlighted something that everyone kind of knew was an unusual feature of EA arguments, but nobody had quite put their finger on.

(And you yourself say something similar in your comment on that post.)

Cosmopolitanism is quite unconventional, but it's also difficult to tackle head on, because it often involves a conflict between people's stated moral views and their behaviour. Lots of people have views that, on the face of it, would make them cosmopolitan, but they rarely act in such a way. That's partly because the implications can seem very demanding. So paying more attention to cosmopolitan arguments confronts people with a double threat - that of having an increased moral obligation, and that of being shown to be a hypocrite for not having accepted it before.

I could see this aversion just quietly diverting people away from really thinking about cosmopolitan ideas.

comment by Niel_Bowerman · 2014-10-02T13:49:16.966Z · EA(p) · GW(p)

I'm unsure whether these are the reasons why effective altruism started, or simply a compelling narrative, but I often think of EA as having come about as a result of advances in three different disciplines:

  1. The rise in evidence-based development aid, with the use of randomized controlled trials led by economists such as those at the Poverty Action Lab. These provide high-quality research about what works and what doesn’t in development aid.

  2. The development of the heuristics and biases literature by psychologists Daniel Kahneman and Amos Tversky. This literature shows the failures of human rationality, and thereby opens up the possibility of increasing one’s impact by deliberately countering these biases.

  3. The development of moral arguments, by Peter Singer and others, in favor of there being a duty to use a proportion of one’s resources to fight global poverty, and in favor of an ‘expanded moral circle‘ that gives moral weight to distant strangers, future people and non-human animals.

This gave rise to three communities: the rationalist (e.g. LessWrong), the philosophical (e.g. Giving What We Can), and the randomistas as they are often referred to (e.g. J-PAL and GiveWell). These three communities merged to form effective altruism.

I wrote this up based on William MacAskill's arguments at http://effectivealtruism.org/history/ but I would be interested to hear how much people think this explains.

Replies from: Toby_Ord
comment by Toby_Ord · 2014-10-03T13:29:49.986Z · EA(p) · GW(p)

I agree with (1) and (3), but I don't think (2) played a large role. Regarding (1), I think that the conceptual development of QALYs (which DALYs largely copied) was as important as the randomisation, since it began to allow like for like comparisons across much wider areas.

Replies from: Bernadette_Young, RyanCarey
comment by Bernadette_Young · 2014-11-11T11:50:09.702Z · EA(p) · GW(p)

I think this was crucial.

For an analogously apparently obvious but relatively new concept see 'Evidence Based Medicine'. Applying the results of rigorous research to guide clinical practice probably sounds blindingly obvious but only emerged in the latter part of the 20th century and was bitterly contested.

comment by RyanCarey · 2014-10-04T17:24:00.447Z · EA(p) · GW(p)

I agree that saying that Kahneman and Tversky were an influence on the founding of the effective altruism movement is arguably overselling it. But it's also arguably underselling the influence of the broader rationality literature. Kahneman and Tversky's work is a major inspiration to effective altruists. Dweick's growth mindset stuff and CFAR's work on improving one's thinking plays the crucial role of inspiring effective altruists to try to improve themselves, not just sacrifice a larger fraction of some fixed capability-set. Eliezer Yudkowsky wrote about effective altruism and used the exact phrase ages ago. MIRI, arguably the first effective altruist organisation in spirit, preceded the widespread use of the term. Now, the rationality literature that has developed around LessWrong forms a nexus of ideas that greatly overlaps with effective altruism, especially in discussion of existential risk and artificial intelligence. Many people conceptualise altruism as one application of rationality, thereby becoming effective altruists. So the influence of rationality literature on effective altruism in past, present and future is significant, I think.

Replies from: yboris
comment by yboris · 2014-12-10T02:50:53.705Z · EA(p) · GW(p)

Small spelling correction: "Carol Dweck" (not Dweick).

ps - her research on Growth Mindset is, I think, the most cost-effective educational intervention out there at the moment.

comment by Ben_Kuhn · 2014-10-01T02:09:17.774Z · EA(p) · GW(p)

In practice, Effective Altruism advocates for measurement and comparison.

I think I'm mostly convinced that the contentious claim is not about effectiveness and altruism, but I don't think this is the only contentious claim that we do make! At least for some people, I think another contentious claim is that there is any normative force to e.g. the observation that the average American adult could likely save several lives a year if they spent less money on completely inessential things.

Replies from: Katja_Grace
comment by Katja_Grace · 2014-10-02T05:35:58.372Z · EA(p) · GW(p)

I agree that those I mentioned are probably not the only contentious claims, and that the one you mention in particular is probably another.

comment by DavidRooke · 2014-10-01T08:50:24.421Z · EA(p) · GW(p)

Wow this is a great post - thanks Katja!

My answer to the narrow question is that the idea has only recently emerged because of the recent rise of social networks, which allow communities with values outside the societal norm to form. As you describe it, you thought deeply on your own about why people behaved the way they did; but on your own, in a society with very particular values effectively reinforced by marketing and group pressure, you would most likely have simply conformed to your community norms over time (thinking about the meaning of life is not widely encouraged past university). Given that it requires an unusual level of "why"-type thinking to focus on the issue, this kind of intellectual thinking was unlikely to achieve critical mass in any one particular physical location. That the movement is struggling to engage older mainstream groups demonstrates how deep conformance to social norms becomes - and I think it only deepens the longer you are exposed to it - making the movement's emergence online, in a young intellectual community (which would be the first place to see critical mass), only to be expected.

Why the societal norm is to give so little, and relatively so ineffectively, despite the minimal "happiness" utility of the incremental dollar as incomes rise above a modest level in Western society (and despite the prevalence of Christian values requiring a particular focus on caring for those in extreme poverty), is a deeper question. In my view it is probably due to slow, steady evolutionary change of society, with no particular shock to the system to cause a widespread re-evaluation.

Until perhaps the 1960s rising incomes in developed economies were generally well aligned to rising happiness in a very real sense - cars, washing machines, TVs, central heating all improved people's lives in very measurable ways. At this point it would have been very hard to argue in a coherent way that it was better to give to those you have never seen and knew nothing about (if you could meaningfully give to them at all) than to those you knew and loved close at hand.

The rising incomes that increased happiness created a strong, almost Pavlovian link between efficiency, profit and "a good thing". With the introduction of the television, powerful mass marketing became possible, increasingly playing at a sub-conscious level on our deep desires and motivations (fear, status, happiness) to create a need to consume more, in order to allow profits to continue to grow as any capitalist organisation requires if it is to continue to flourish (which might otherwise have stalled). That, according to the World Happiness Report (page 3 http://www.earth.columbia.edu/sitefiles/file/Sachs%20Writing/2012/World%20Happiness%20Report.pdf), US GNP per capita has tripled since 1960 whilst happiness has remained relatively unchanged is an indication of the tenuous link that now exists in developed economies between growth and life improvement, but it is this key aspect that has allowed a "life improvement arbitrage" to be created that is now accurately observed and acted on by the effective altruism movement.

Now that the concept has been created, and is to some extent obvious, it can be relatively easily understood by a very large group of people who, with persuasion, will wish to effectively invest in creating a better world. In selling the concepts to them, though, and given the level of norming to societal benchmarks that has occurred, I believe it will be necessary to use the same powerful marketing focussed on deep motivations to shift people's behaviour as was necessary to allow the life improvement arbitrage to be created in the first place. There is no reason that this cannot be carried out with the same rigour of comparison as effective altruism would bring to any other activity - a commercial enterprise is rigorous in its assessment of the return on the marketing dollar, and there is no reason for the EA movement to be any different. There are many more things that EAs can examine now that the concept has been created.

Replies from: casebash
comment by casebash · 2015-12-16T08:47:29.317Z · EA(p) · GW(p)

I agree that social networks are very important for allowing these kind of groups to grow. I'm sure there have been small groups around the world dedicated towards doing maximal good, but without the Internet it is very hard for a coherent movement to form.

comment by Ilya · 2014-10-14T06:02:26.375Z · EA(p) · GW(p)

There have always been some effective altruists "discarded" around the world. But only in the 21st century, thanks to the advancement of the internet and skyrocketing existential threats, have such people had ways to connect into a cohesive, viable group, thus making a kernel of the world to come.

comment by Pablo (Pablo_Stafforini) · 2014-10-05T16:02:25.002Z · EA(p) · GW(p)

Here's a slightly different angle from which to approach Ben's challenge. Instead of focusing on possible vindicating explanations of why the EA movement is so recent, we may consider other recent intellectual developments which we find plausible. Personally, I find myself endorsing many ideas that I would have antecedently expected to have been discovered much earlier, conditional on their being true. Each of these ideas, if considered individually, might be vulnerable to a critique of the sort Ben raises against EA. When these ideas are considered collectively, however, they seem to provide enough reason to question the "efficient market in ideas hypothesis" that Ben's argument assumes.

Replies from: Dale
comment by Dale · 2014-10-05T17:40:33.346Z · EA(p) · GW(p)

Or reason to think that maybe we just like exciting new ideas, regardless of their truth.

comment by Jeff_Kaufman · 2016-07-23T15:45:03.010Z · EA(p) · GW(p)

Another possible explanation: people tried something similar, the scientific charity movement.

comment by TopherHallquist · 2014-10-03T00:19:22.531Z · EA(p) · GW(p)

Somewhat echoing atucker: the moral ideas behind effective altruism have been around for a long time, but are also quite contrarian and have never been widely embraced. But the moral ideas—even in a form pretty damn close to their current one, like Peter Singer's writings in the 70s—aren't enough to give you EA as we know it. You also need a fair amount of expertise to come up with a strong game plan for putting them into practice. Singer couldn't have founded GiveWell, for example.

(One odd thing: as far as I know, Singer has never been involved in the nuclear disarmament movement. That would've seemed like the obvious existential risk to care about in the 70s or 80s.)

Replies from: RyanCarey
comment by RyanCarey · 2014-10-03T00:46:58.025Z · EA(p) · GW(p)

It probably wasn't immediately obvious how important the future is.

The ancestors of far future concern are Sagan and Parfit. E.g Sagan: "If we are required to calibrate extinction in numerical terms, I would be sure to include the number of people in future generations who would not be born.... (By one calculation), the stakes are one million times greater for extinction than for the more modest nuclear wars that kill "only" hundreds of millions of people. There are many other possible measures of the potential loss—including culture and science, the evolutionary history of the planet, and the significance of the lives of all of our ancestors who contributed to the future of their descendants. Extinction is the undoing of the human enterprise."

Replies from: lukeprog
comment by lukeprog · 2014-10-13T00:56:11.880Z · EA(p) · GW(p)

Singer probably read Reasons and Persons not long after it came out, but then the Berlin Wall fell a couple years later, and nuclear risk would have looked less pressing. Also, I'm not sure it ever looked to anyone like nuclear risk was at all likely to be an existential catastrophe cutting off billions of future generations.

Replies from: Denkenberger
comment by Denkenberger · 2015-04-17T23:25:39.202Z · EA(p) · GW(p)

Russell-Einstein Manifesto (1955):

"No one knows how widely such lethal radio-active particles might be diffused, but the best authorities are unanimous in saying that a war with H-bombs might possibly put an end to the human race."

Reasons and Persons paraphrased:

“I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

1 Peace.

2 A nuclear war that kills 99 per cent of the world’s existing population.

3 A nuclear war that kills 100 per cent.

99% kill would be worse than peace, and 100% kill would be worse than 99% kill. Which is the greater of these two differences? Most people believe that the greater difference is between peace and 99% kill. I believe that the difference between 99% kill and 100% kill is very much greater. The Earth will remain habitable for at least another billion years. Civilisation began only a few thousand years ago. If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilised human history. The difference between 99% kill and 100% kill may thus be the difference between this tiny fraction and all of the rest of this history. If we compare this possible history to a day, what has occurred so far is only a fraction of a second.”

So I think many people were worried about the extinction risk of nuclear war.

comment by Evan_Gaensbauer · 2014-10-17T12:21:37.001Z · EA(p) · GW(p)

They care more about the people around them than those far away, or they care more about some kinds of problems than others, and they care about how things are done, not just the outcome.

It seems to me that part of effective altruism has been not just increasing the effectiveness of altruism by recommending people change their actions, or where their philanthropic dollars go, to interventions with higher leverage, but also pointing out that people would be more effective if they changed their values. For example, Peter Singer's 'expanding circle', meat-free diet advocacy, etc.

People don't like to be told they need to change their values, or that they should change their values, or that the world would be a better place if they had some values that they didn't have already. Really, one's values tend to be near the core of one's social identity, so an attack on values can be perceived as an attack on the self. The obvious example of this is the friend you know who doesn't like vegetarians for pointing out how bad eating meat is, while that friend doesn't bring up any particular philosophical objections, but just doesn't like being called out for doing something they've always been raised to think of as normal.

Replies from: Katja_Grace, Larks
comment by Katja_Grace · 2014-11-08T10:14:09.423Z · EA(p) · GW(p)

Changing one's values does not more effectively promote the values one has initially, so it seems one should be averse to it. I think the expanding circle case is more complicated - the advocates of a wider circle are trying to convince the others that those others are mistaken about their own existing values, and that by consistency they must care about some entities they think they don't care about. This is why the phenomenon looks like an expanding circle - points just outside a circle look a lot like points just inside it, so consistency pushes the circle outwards (this doesn't explain why the circle expands rather than contracts).

Replies from: Tom_Ash, Evan_Gaensbauer
comment by Tom_Ash · 2014-11-11T07:13:33.755Z · EA(p) · GW(p)

Changing one's values does not more effectively promote the values one has initially, so it seems one should be averse to it.

Unless you're a moral realist, and want to have the correct values.

comment by Evan_Gaensbauer · 2014-11-10T13:30:54.677Z · EA(p) · GW(p)

That makes more sense. I haven't read much philosophy, or engaged with that sort of thinking very deeply, so I often get confused about what I or others (are supposed to) mean by the word 'value'. I meant that people would be more effective if they altered their actions to be more in line with their values after they were updated for consistency. If someone says "I don't value X" one day, and "I now value X" the next day, I myself semantically think of that as a 'change of values' rather than 'an update of values toward greater behavioral consistency'. The latter definition seems to be the more common one around these parts, and also more precise, so I'll just go with that one from now on.

comment by Larks · 2014-11-09T23:15:08.308Z · EA(p) · GW(p)

people would be more effective if they changed their values.

If you changed your value to "Evan Gaensbauer's house being painted blue" you could probably promote that very efficiently. It would also be worthless - the point is to promote the values we already have, and avoid value deathism.