The Outside Critics of Effective Altruism

post by RyanCarey · 2015-01-05T18:37:48.862Z · EA · GW · Legacy · 70 comments

Note: if you've come here because you would like to give your first impression of effective altruism, then introductions are here and here.

Note2: Robin Hanson has outlined some problems with exposing misalignment between others' actions and professed beliefs about charity.


Today, Robin Hanson wrote a blog post that explains the importance of outside criticism.

Friendly local criticism isn't usually directed at trying to show a wider audience flaws in your arguments. If your audience won't notice a flaw, your friendly local critics have little incentive to point it out. If your audience cared about flaws in your arguments, they'd prefer to hear you in a context where they can expect to hear motivated capable outside critics point out flaws.


If you are the one presenting arguments, and if you didn’t try to ensure available critics, then others can reasonably conclude that you don’t care much about persuading your audience that your argument lacks hidden flaws.

This raises the question: who are the best critics of effective altruism?

Ben Kuhn has given some criticism but he's an insider. (Since countered by Katja.) Geuss has written some helpful criticism but he's also involved with effective giving. Giles has passed on some thoughts from a friend. These critics have been heroic but they are few in number. It figures, as most of us aren't incentivized to say bad things about a movement with which we affiliate, and if we were forced to, we might still pull some punches.

So what about outsiders? Well, 80,000 Hours have received some criticism on earning to give. They also debated some socialists. But these discussions were brief and narrowly focussed clashes between entrenched political ideologies. Others have targeted us for criticism that was so vitriolic that it was hard to find the constructive parts, such as William Schambra, Ken Berger and Robert Penna and the always sarcastic RationalWiki. Edit: also some criticism by Scott Walter.

So several years into our movement, that's all we have to show for criticism. A few insiders and a few fanatics? That's not to say we can't harvest some insights from there - by god we should try. But one would hope we have more.

If we cast the net wider, Warren Buffett's son Peter Buffett has debated William MacAskill on the effectiveness of charity, which is kind-of cool. There are more general aid critics: William Easterly, who is a fairly thoughtful economist, and Dambisa Moyo, whom I know less about. But they don't really get to the heart of what we care about - if most aid is ineffective, then it would just be important to research it even harder.

Alternatively, we can look at more narrowly focused critics. LessWrong is often mentioned as a useful source for criticism, and it has usefully challenged philosophical positions held by some effective altruists. Its founder, Eliezer Yudkowsky, has challenged hedonistic utilitarianism and some forms of moral realism in the Fun Theory sequence, the enigmatic (or merely misunderstood) Metaethics sequence and the fictionalised dilemma Three Worlds Collide. But these mostly address utilitarians and spare other effective altruists. Of course, Eliezer is no outsider to effective altruism - he played some part in founding it. The most upvoted post on LessWrong of all time was in fact feedback from Holden Karnofsky about its sponsor-organisation MIRI. Again, the relevance of this to most EAs is a stretch.

In turn, Holden Karnofsky has received suggestions for how GiveWell might react to philosophical considerations from LessWrong veterans like Paul Christiano, Carl Shulman, Eliezer and Nick Beckstead. Again, all insiders.

So here's how I sum up our problem. Almost all of our critics are insiders. Barring a couple of heroic attempts at self-criticism, we've primarily attracted criticism about donating and earning to give. We've also offended a couple of fanatics, and I don't have a strong view on whether we've learned from those. This is unsurprising. Taking self-criticism is hard, and endorsing or writing it is harder. Eliezer would say it feels like shooting one of your own men. Scott Siskind says, "Criticizing the in-group is a really difficult project I've barely begun to build the mental skills necessary to even consider. I can think of criticisms of my own tribe. Important criticisms, true ones. But the thought of writing them makes my blood boil."

But criticism seems especially important now, as effective altruism is growing fast, as our culture is starting to consolidate on the Facebook group and here, and as we model it in the popular talks and introductory materials that we give to new community members.

To develop the effective altruist movement, it's essential that we ask people how we've failed, or how our ideas are inadequate.

So an important challenge for all of us is to find better critics.

Let me know if there's any big criticism that I've missed, or if you know someone who can engage with and poke holes in our ideas.

Related: The Perspectives on Effective Altruism We Don't Hear by Jess Whittlestone; The Evaporative Cooling of Group Beliefs.


Comments sorted by top scores.

comment by Lila · 2015-01-06T00:44:33.295Z · EA(p) · GW(p)

Scott Alexander writes about the motte and bailey doctrine:

Basically, people will retreat to obvious platitudes (the motte) when defending their position, when in fact they're actually trying to promote more controversial ideas (the bailey). The motte for EA is "doing the most good" and the bailey is, well, everything else we promote. Ideally the place to launch criticism is the bailey. Unfortunately, a lot of the criticism has been directed to the motte, which leads to bizarre statements like "well maybe suffering isn't bad, we don't want everyone to be happy all the time" or "it's impossible to know which things are better than others". This may be part of the reason much of the criticism has fallen flat so far.

comment by Robert_Wiblin · 2015-01-06T01:18:05.005Z · EA(p) · GW(p)

We have been criticised by people who have encountered us briefly for things like:

  • Being excessively critical of others, especially too quickly and without establishing a relationship first
  • Not being friendly, focussing too much on trying to prove how smart we are
  • Not being diverse enough (especially on gender, racial or religious lines, and to a lesser extent socioeconomic, political, cultural and academic discipline)
    • As a result, being unwelcoming to people who are either superficially or substantively different to existing participants
    • As a result, converging prematurely on ideas that people from other backgrounds or fields would have good counterarguments for.

These seem like pretty serious concerns that we must address. They certainly don't all apply to everyone, but looking at how we collectively come across, I am not surprised some people react this way.

comment by Evan_Gaensbauer · 2015-01-12T06:38:32.542Z · EA(p) · GW(p)

not being diverse enough (especially on gender, racial or religious lines, and to a lesser extent socioeconomic, political, cultural, and academic discipline)

One explanation for this I've encountered is just that effective altruism has been packaged by people of one sort that's intuitive to people of the same sort. For example, the emphasis on quantification of social goods appeals to students of economics, and the idea of using new methods of reasoning and calculation to uncover greater effectiveness appeals to students of computer science, mathematics, and philosophy. Those groups tend to be largely dominated by young, white men to begin with. Additionally, effective altruism originated with an academic and secular approach to ethics, a discipline that tends to be followed by less religious people. Further, it doesn't seem a coincidence that effective altruism, what with its focus on philanthropy, has primarily gained traction in wealthier countries in the Anglosphere.

In essence, the communities which first fed the numerical growth of effective altruism aren't, and haven't been, very diverse to begin with. So, effective altruism may not be so much at fault by this point for failing to be diverse. Going forward, it's the responsibility of the movement to reach out, and broaden its horizons, with nuance and respect, to other communities. When pointing out the lack of diversity, do critics point out:

  • A) effective altruism doesn't appear diverse, so this image problem leads to miscommunication wherein a more diverse crowd never joins the movement, because they fear feeling awkward or out of place?

  • B) because of its lack of diversity, whether explicitly or implicitly, effective altruism seems to be offensive or ignorant of the needs and perspectives of individuals from more diverse or differentiated backgrounds?

An early impression I had of effective altruism, in getting to know it, was that one might find it elitist, based on its community origins in elite universities, such as Oxbridge in Britain or the Ivy League in the United States. Historically, I'd figure racial minorities, some religious minorities, and poorer people, or those of less elite classes generally, don't have access to this level of education. Some of my own discomfort with effective altruism is because others have access to this elite education, and I feel like I never did, and I'm a white man from Canada who was raised by a middle-class family. If I feel a bit left out already having so much opportunity, it must feel more awkward for those with less privilege than myself.

The heavier emphasis on earning to give a couple of years ago might have turned away a greater diversity of people who otherwise might have found effective altruism convincing or appealing in its ideas. To acquire a middle-class income, let alone a very high one that would be ideal for earning to give, requires access to money for education, which is getting more expensive. To have that money, and access to a better education, is likelier the domain of people who were raised wealthier. Additionally, being raised in poverty is correlated with personal needs, such as poor health in the family, or lack of access to good resources, which might need to be persistently addressed. If I were in such a less privileged position, I might think earning to give is a lofty, 'pie-in-the-sky' goal beyond my means when I have enough trouble taking care of myself and my loved ones in a society that won't prevent me from falling through its cracks. So, I won't fault someone else in such a position for thinking the same.

This effect compounds with how marginalized groups such as women and minorities tend to have less access to opportunities to pursue a lifestyle that enables one to live relatively risk-free, and in comfort, while still giving of oneself as effective altruism asks. I'm not versed in the factors of all that, so I'll stop here.

comment by Dale · 2015-01-14T02:30:10.568Z · EA(p) · GW(p)

Those groups tend to be largely dominated by young, white men to begin with.

Most social movements are led by young white men - EA is nothing special here.

To acquire a middle-class income ... requires access to money for education, which is getting more expensive.

That's not true - at least in the US the government provides essentially unlimited credit to people who want to go to college, and furthermore it is possible to earn a very good living without having gone to college. Furthermore if you are the right kind of minority you will even benefit from affirmative action in admissions/scholarships.

comment by Evan_Gaensbauer · 2015-01-20T06:13:40.246Z · EA(p) · GW(p)

By "group" in my initial comment, I meant the populations composing students of the formal sciences, economics, and philosophy. However, I concur most social, i.e., activist, advocacy, etc., movements are led by young white men. My point was that since this is true of many groups, not just effective altruism, I don't believe effective altruism is somehow especially exclusionary relative to other social movements. I believe this point still stands.

I fully concede the point of your rebuttal on income level, access to education, and social privilege. I should have qualified that I meant acquiring a high income is easier with more access to finances and other social privileges, but lack of access doesn't preclude one. I admit I don't know as much about how the American government funds post-secondary education.

I'm from Canada. In Canada, public post-secondary institutions are funded by the federal government such that a university education is much cheaper for Canadian students than it is for our counterparts in the US. However, in my personal experience, and that of my friends, access to credit is more restricted. So, if one is poorer, one can't access loans and is less likely to have saved enough money to pay tuition out of pocket. I assumed things were the same in the United States, but apparently they're not.

comment by Bernadette_Young · 2015-01-07T20:13:54.428Z · EA(p) · GW(p)

One way we can invite constructive criticism is by our response to criticisms we receive. That's difficult for an amorphous 'group', but perhaps some invited posts on the forum trying to come to grips with these? Perhaps CEA could publish their current thinking on areas where they have encountered criticism (both from 'inside' and 'outside' the organisation)?

comment by SASS · 2015-01-06T02:13:51.686Z · EA(p) · GW(p)


I have not publicized my support of Effective Altruism at this point due to a fear of appearing arrogant.

One could argue that this applies as well to any altruistic or charitable movement but that isn't true: with EA there is also the tacit and easily verbalized assumption that my method of charitable giving is more effective than and thus superior to other people's, and that I'm therefore not only more generous but also more edified and generally intelligent than proponents of Ineffective Altruism, of whom there are legion.

An example: I was considering posting a comment on the Facebook thread dealing with this same issue. I didn't because I knew my friends would see it.

Another example: I originally liked the EA Facebook page in the dead of night, when the least number of friends would be likely to see it. A calculation on my part.

I've had conversations about EA with a few people to whom I'm very close, and responses have been mixed-to-positive so far, but I cannot see myself broadcasting my stance in regards this issue in any public forum.

From a grassroots/proselytization perspective, I seriously doubt that I am in a marginal minority when it comes to this qualm. I'm surprised to have not seen this criticism in the post above but would be happy to know that I am in a marginal minority on this issue. Since people telling other people about EA is ahem useful to the movement, I see this as an important and inhibiting issue that has, for better and worse, been hardboiled into the name and concept.

comment by Evan_Gaensbauer · 2015-01-12T06:12:18.254Z · EA(p) · GW(p)

Full disclosure: below, I endorse the actions of the foundation called Charity Science, which is run by personal friends of mine, which is an issue aside from what would be my otherwise detached admiration of their work.

I have not publicized my support of Effective Altruism at this point due to a fear of appearing arrogant.

My support of effective altruism isn't very publicized yet, either, for this reason. Actually, I'm not only afraid of appearing arrogant, but also I don't want to push ideas on others that I'm afraid really are arrogant. That is, if my friends pointed out the arrogance of effective altruism, I wouldn't be too surprised if they were right about that. On the other hand, this fear may be more due to shame than humility. I might be afraid of looking too weird and arrogant, but if I learned sooner rather than later that I'm pursuing wrong ideas, I'd waste less time on them. Temporary embarrassment may be a small price to pay for learning a hard and proper lesson not to waste my time trying to do good via the wrong sort of lifestyle or activism.

Peter Hurford has achieved relative success in direct research efforts, movement coordination, earning to give, and career decisions, among supporters of effective altruism. As far as I can tell, he is quite public with his support of effective altruism. However, he's engaged in other intellectual endeavors, sometimes criticizes effective altruism, and may court outside criticism from his social network as well. Additionally, in the past he's expressed an aversion to public-facing activism (at least off the Internet), e.g., leafleting for animal rights or veganism, and other causes. This is despite the fact that he supports animal welfare, and other causes embraced by effective altruism. I'm not aware if his aversion to activism directed toward strangers is for the same reasons that you and I don't publicize our support of effective altruism generally.

I publicize my support of effective altruism by sharing links on the subject I like on social media, by bringing up object-level causes within effective altruism, e.g., the Against Malaria Foundation, and spreading ideas by word of mouth among friends. Charity Science is an organization which runs fundraisers. Moreover, they act as an online donation portal which coordinates and allows individuals to run their own fundraisers, for their birthdays or Christmas. Peter Hurford is one such individual who's been quite successful. Of course, even if it seems too stressful or presumptuous to cast a net as wide as Mr. Hurford did, one can experiment by sending emails only to closer friends and family who you feel won't react harshly to such a request. This may be one way of expanding a comfort zone with effective altruism, while also raising money.

From a grassroots/proselytization perspective, I seriously doubt that I am in a marginal minority when it comes to this qualm. I'm surprised to have not seen this criticism in the post above but would be happy to know that I am in a marginal minority on this issue. Since people telling other people about EA is ahem useful to the movement, I see this as an important and inhibiting issue that has, for better and worse, been hardboiled into the name and concept.

Agreed. This strikes me as substantial enough of a concern that it could merit a couple of questions in the next effective altruism survey. I know Tom Ash, the fellow who ran the survey this year, so I'll ask him about it.

comment by Larks · 2015-01-07T01:09:58.784Z · EA(p) · GW(p)

Another example: I originally liked the EA Facebook page in the dead of night, when the least number of friends would be likely to see it. A calculation on my part.

If your friends would be that opposed, just don't like the page! Page likes just aren't valuable enough to cause you distress. Their main value is broadcasting to your friends anyway.

comment by Giles · 2015-01-06T01:22:10.953Z · EA(p) · GW(p)

Another criticism: the movement isn't as transparent as you might expect. (Remember, GiveWell was originally the Clear Fund - started up not necessarily because existing charitable foundations were doing the wrong thing, but because they were too secretive).

When compiling this table of orgs' budgets, I found that even simple financial information was difficult to obtain from organizations' websites. I realise I can just ask them - and I will - but I'm thinking about the underlying attitude. (As always, I may be being unfair).

Also, what Leverage Research are up to is anybody's guess.

comment by Benjamin_Todd · 2015-01-10T22:52:20.797Z · EA(p) · GW(p)

You can find all the data on 80k in our latest financial report and summary business plan:

Some even more current updates also here:!forum/80k_updates

comment by Evan_Gaensbauer · 2015-01-12T07:17:45.468Z · EA(p) · GW(p)

Having met Geoff Anders, the executive director of Leverage Research, and its other employees multiple times, and taking it upon myself specifically to ask pointed questions attempting to clarify their work, I can informally relay the following information[1]:

  • Leverage Research has raised, and continues to raise, funds for its own financial needs without doing broad-based outreach to the effective altruism community at large. Leverage Research seems confident in its funding needs for the future, to the point that they won't be sourcing funds from the effective altruism community at large anytime soon.

  • Given that Leverage Research considers itself an early-stage, non-profit research organization, whose research goals pivot rapidly as its researchers update their minds on what is the best work they can do in the face of new evidence and developments, Leverage Research perceives it as difficult to portray their research at any given time in granular detail. That is, Leverage Research is so dynamic an organization at this point that for it to maximally disclose the details of its current research would be an exhaustive and constant effort.

  • Because of the difficulty Leverage Research has in expressing its research agenda accurately and precisely at any point in time, and because they've sourced their funding needs from private donors who were provided information to their own satisfaction, Leverage Research doesn't perceive it as absolutely crucial that they make specific financial or organizational information easily accessible, e.g., on its website. Personally, I haven't ever privately contacted Leverage Research seeking a disclosure of, or access to, such information. I have no knowledge of how such interactions may or may not have gone between other third parties and Leverage Research.

  • The information available under the 'Our Team' heading on Leverage Research's website seems to overview only its employees who head its executive functioning, and individuals who oversee its largest projects. At any given time, Leverage Research works with several researchers, both from within and from outside of the effective altruism community, who pursue projects. Leverage Research allows researchers it takes on as part of its team, either temporarily or permanently, to pursue their own research agenda. Based on its own goals, Leverage Research lets its associated researchers pursue their research in a freeform manner, rather than assigning tasks and measured goals with a heavy hand. Leverage Research seems to do this on the basis of the belief that enabling researchers with this greater independence will, directly or indirectly, more effectively lead to the fulfilment of the organization's medium- and long-term goals.

  • Leverage Research often hires a number of interns, or researchers, on a trial basis, to assess whether the independent research goals and findings of its prospective associates will be consistent with the goals and mission of the organization. The changing nature of its team, and the relative independence of each (subgroup of) researcher(s), is in large part why Leverage Research finds it difficult to do justice in expressing its research goals.

  • Much of Leverage Research's time and human resources is taken up by helping build the effective altruism movement. Effectively, this breaks down to Leverage Research assisting or collaborating with other organizations on events and projects. Such organizations include the Machine Intelligence Research Institute, the Center for Applied Rationality, and the Centre for Effective Altruism. The details of such projects and events could be confirmed by representatives from either Leverage Research, or another of these organizations.

  • Additionally, Leverage Research has taken sole responsibility for organizing the Effective Altruism Summit for the years 2013 and 2014, an undertaking which delivered impressive results. Personally, I believe the 2014 Effective Altruism Summit was quite successful. These Summits have been unusual conference-style events for a unique social movement for which there is no prior infrastructure, or logistical support, for organizing events so ambitious in attendance, and diverse in content. In hindsight, it's my impression the effective altruism community at large underestimated how much effort, dedication, and person-hours worth of work were required of Leverage Research to wholly plan and throw such events.

[1] I take full responsibility for any of this information if it's incorrect, inaccurate or mistaken. This is from my memory alone, based on personal correspondence with individuals affiliated with, but not necessarily employed by, Leverage Research. I will defer to representatives of Leverage Research on this information, and redact or correct this comment accordingly. I remain reticent out of concern not to misrepresent Leverage Research, so all further questions should be addressed to a representative of that organization, and not my person. I have no past or present affiliation with Leverage Research.

comment by Giles · 2015-01-13T19:05:46.380Z · EA(p) · GW(p)

Thanks - I knew they were involved in the EA Summit but I didn't know they were the sole organizers. I also knew they weren't soliciting donations. I partially retract my earlier statement about them! (Also I hope I didn't cause anyone any offense - I've met them and they're super super nice and hardworking too)

comment by Peter_Hurford · 2015-01-08T01:52:09.038Z · EA(p) · GW(p)

If you ever find .impact or Charity Science insufficiently transparent, let me know. I think the reason why you might have trouble finding income and expenses for those two orgs is that their official income and expenses are both essentially $0. You can see some of the financial flow into both orgs here.

comment by AABoyles · 2015-07-08T18:42:53.871Z · EA(p) · GW(p)

The Boston Review held a Forum on Effective Altruism with some excellent criticism by academic, non-EAs.

comment by Bernadette_Young · 2015-01-12T09:29:27.722Z · EA(p) · GW(p)

I've recently been considering the analogy between Effective Altruism and the movement towards Evidence Based Medicine. The strongest similarity is that they both seek to use the conscientious application of high quality evidence to guide decision making. EBM has been criticised extensively, and many of its critics care deeply about the 'project' of medicine. It strikes me that these critiques could provide useful points to consider for EA. Maybe gathering and considering these would be a useful project for a CEA intern or similar.

comment by William_MacAskill · 2015-01-05T21:00:39.974Z · EA(p) · GW(p)

There's also the feedback we get in talks, and the comments on all the articles and media attention we've gotten, which is very extensive. I've also presented on these topics in an academic setting.

And I asked for feedback here:

From this, I feel I know the most common criticisms of EA (as practiced, rather than in theory) pretty well.

  • doesn't appreciate the importance of systemic change
  • too focused on the measurable rather than unquantifiable benefits
  • smuggles utilitarian assumptions under the table (e.g. that you can aggregate small benefits and weigh them against large benefits; e.g. that you shouldn't be much more concerned with avoiding causing harm than with actively doing good) ...

However, I haven't seen a smart outside person spend a considerable amount of time evaluating and criticising effective altruism. These objections are just the ones that people think of off the top of their head. I'd really like to see what someone who spent e.g. a week investigating EA and criticising it would say.

comment by AlasdairGives · 2015-01-05T22:13:07.071Z · EA(p) · GW(p)

I agree with these but they also reinforce the fact that "Effective altruism" as a category is quite unwieldy. "Too focused on the measurable rather than unquantifiable benefits" - well, we have a huge chunk of people calling themselves EAs who mostly care about totally unquantifiable GCR research. Similarly for utilitarianism and comparing animal and human suffering or other such notions. The "4 causes" commonly identified with EA have quite distinct weaknesses and it would be good (in my view) if people started assessing them on their own merits and not lumping them under one banner.

comment by RyanCarey · 2015-01-05T22:46:54.590Z · EA(p) · GW(p)

Nitpick: You can't count the global catastrophes (yep, still zero for this decade) but you might be able to tell if it's working in other ways... Maybe. But yeah, I agree that that's the big weakness of GCR research.

comment by Denkenberger · 2015-04-10T02:43:07.061Z · EA(p) · GW(p)

Asteroid/comet impact, supervolcanic eruption, and even nuclear war risks are quantifiable within an order of magnitude or two: link. There are additional uncertainties in the cost and efficacy of interventions such as storing food or alternate foods. However, if you value future generations, one to three orders of magnitude of uncertainty is not a significant barrier to making a quantified case.

comment by RyanCarey · 2015-01-05T22:42:10.992Z · EA(p) · GW(p)

I agree that we get heaps of feedback from talks and media, so I imagine you personally encounter as much criticism of EA as any other single person, and since it's often brief spots with a big audience, a lot of it feels like it's not well thought through.

doesn't appreciate the importance of systemic change - too focused on the measurable rather than unquantifiable benefits - smuggles utilitarian assumptions under the table.

Mightn't there be value in these criticisms?

The systemic change criticism seems valid for EA five years ago. Now, GiveWell have started seriously analysing advocacy, GoodVentures have started funding it, and FHI/CEA have started engaging policymakers, so we've decided that these activities are crucial. Next time we could listen sooner, right?

Regarding smuggling in utilitarianism - well, there are related objections about moralising, demandingness and self-sacrifice, which we've started to address in the last year or two, and which seem important. When we write in research articles or books, it seems like we are starting to get more careful about stating ethical assumptions, which seems good.

So, as non-smart or poorly-considered as this criticism may be, we've reasons to expect gold there, and any discussion should help the movement's self-awareness, psychological health and resiliency to further criticism.

comment by Giles · 2015-01-06T15:16:40.809Z · EA(p) · GW(p)

However, I haven't seen a smart outside person spend a considerable amount of time to evaluating and criticising effective altruism.

Would they do it if we paid them?

comment by GabrielEhrnstGrundin · 2015-01-14T12:57:08.685Z · EA(p) · GW(p)

I originally posted this on the Facebook thread that linked to this discussion, but that thread was deleted, so I'm reposting it here.

The strongest counterargument against EA that I know of is an attack on its underlying methodological individualism. By "individualism" here I mean analysing our actions as those of individuals deciding and acting in isolation. That is, looking at what we ought to do regardless of how this correlates with the behaviour of others.

To see why this could be a problem, take Downs' paradox of voting, as illustrated here. In that video, Diana Thomas argues - persuasively in my view - that voting (and being an informed voter) is irrational if seen solely as an individual act. Some have attempted to counter this by saying that voters are acting altruistically, rather than egoistically. I think such explanations are insufficient because they ignore the irreducibility of voting. The impact of voting for a specific candidate is an emergent property of sufficiently many people doing it. Voting only makes sense when seen as a collective, rather than individual, act.

The fundamental question underlying EA is "how can I have the most impact?" Turning this question into a movement, thus changing the "I" into a "we", doesn't necessarily mean that the answer stays the same.

comment by astclaire · 2015-01-13T07:14:38.785Z · EA(p) · GW(p)

This past summer I was introduced to the Effective Altruism movement via The Center for Applied Rationality (CFAR). I love the CFAR crew and found a few kindred spirits who are also EAs.

I became interested in EA because I'm constantly running into charitable or grassroots organizations that are incredibly ineffective at fighting poverty and misinformation within minority communities, specifically in urban spaces like the Southside of Chicago. I believe that I've found some of the root causes and was hoping to glean some information or techniques for improving the effectiveness of these efforts, bring this information back to our communities and, in return, provide value to the EA movement.

Though I'm deeply interested in the core tenets of Effective Altruism, I find the vocabulary, culture, and causes very distant from my own. I'm an African-American grassroots female hacktivist from the Southside of Chicago. I have a Master's and a dual degree in Economics and Urban Planning and 99% of the time have no idea how to decipher what the hell is coming out of these forums.

From reading a few of the comments below, I can tell that you've already run into the diversity issue, so I won't harp on that. Otherwise, here are some other more urgent questions I have and imagine others will have as well.

Other than donating to effective charities,

  • What can I DO to contribute digitally?
  • What can I DO to contribute physically?
  • What can I DO to contribute locally?
  • Give me 5 foundational articles/videos/books, etc. that I should consume to be a beginning EA.
  • Are there any upcoming events that I can attend?

I feel like these are common denominators that may even catalyze a solution for the diversity problem.

comment by Austen_Forrester · 2015-01-13T20:40:15.375Z · EA(p) · GW(p)

Great to hear from you, St. Claire!

I sympathize with you in not understanding or being able to relate to the culture of the EA community. I feel the same way (i.e. I'm religious, do industrial work, etc.) and at first I was turned off by the community for that reason, until I realized that the community will not grow and become more mainstream – and therefore its ideas won't receive widespread acceptance – unless more diverse people join it. I also had a hard time understanding what people were talking about on this forum, but after a while you learn the terminology/unorthodox views and it becomes comprehensible. Actually, I've noticed the writing on the forum gradually becoming better and easier to read.

Sounds like the NGO's you deal with aren't adequately measuring and evaluating their impact, and need technical assistance in that department. Unfortunately, I don't know where to get this information but hopefully someone on the forum can point you in the right direction.

comment by RyanCarey · 2015-01-13T09:59:38.677Z · EA(p) · GW(p)

Hey AstClaire. Thanks for your thoughts.

If you're in San Francisco, with CFAR, then there are definitely events there, which will be announced on Facebook or here. If you're in Chicago, there are people there, and I'm not sure whether they meet.

For what to contribute, here is one collection of activities. For foundational articles, if you click More on Effective Altruism in the sidebar, you will see a bunch. To pick 5: Efficiency Measures Miss the Point; Efficient Charity: Do Unto Others; To Save the World... Go Work on Wall Street; Your Dollar Goes Further Overseas; Preventing Human Extinction.

It'd be good to know which vocabulary, culture and causes feel distant, to figure out whether there's some divide that's fundamental, or whether it's just the way we talk about things. EAs have usually thought about the causes a lot, so views there are fairly stable, but people often aren't very careful about culture and vocabulary, so those could have a lot of room to change.

comment by Larks · 2015-01-07T01:18:27.291Z · EA(p) · GW(p)

There is also this post by Scott Walter which I thought had some pretty good points.

comment by Evan_Gaensbauer · 2015-01-12T08:52:28.006Z · EA(p) · GW(p)

My whole response to this essay was to be here in a single comment. However, it was too long for a single comment, so I've decided I may share it on this forum as a post in its own right at a later date. I'm not sure it's really worth the effort, as it's of mild concern. Also, frankly, I'm afraid Mr. Walter might return to this forum, take what is written here out of context, and yet again paint effective altruism as a slippery slope to Nazi-level eugenics because of Peter Singer's association with effective altruism[1].

I agree Mr. Walter raises some serious points of concern, and legitimate criticism, such as effective altruism perhaps being too demanding of what personal lifestyle choices we its adherents may feel some peer pressure to make.

[1] The full argument doesn't seem much less bizarre.

comment by MatthewDahlhausen · 2015-01-06T00:32:19.500Z · EA(p) · GW(p)


Manageable, with further work:

  • Sorting out the ethics of animal suffering and catastrophic risk.
  • Weighs marginal benefit heavily over systematic change. This may be inappropriate for very wealthy philanthropists, or a group of pooled funders that may achieve the same effect.
  • Doesn't give appropriate weight to stopping problems before they become a crisis, especially for inter-generational effects. E.g., there hasn't been a lot of rigor in how EAs assess family planning.

Difficult to overcome:

  • devaluing of systematic change, and ignoring the biggest money flows. E.g., the amount of money that goes to aid is a fraction of a percent of the capital flight, foreign debt repayments, and natural resource capital/wealth that leaves poor countries. EA asks people to give, but doesn't approach the problem of stopping the biggest leaks of wealth from where it is needed most. This has a lot to do with challenging wealth directly and supporting systematic change.

Crippling, if true:

comment by Robert_Wiblin · 2015-01-06T01:22:33.197Z · EA(p) · GW(p)

I always find it quite strange to find people asserting that there are strong limits to growth when:

i) most technologies that are possible haven't been invented yet; and
ii) humans only occupy a tiny speck of the universe.

It's more accurate to say there are limits to the rate of growth we can achieve - limits set by our ingenuity at any point in time.

comment by MatthewDahlhausen · 2015-01-06T03:15:16.259Z · EA(p) · GW(p)

I don't think there is enough information to rule out the strong sustainability hypothesis. (This is not to say it is true, just that there isn't enough information to go either way).

It's not just about what technologies we have to discover; it's about how fast they can be discovered, developed, and implemented to overcome problems. Technology is value-neutral; sometimes it solves problems, sometimes it makes new ones, sometimes it does both. There are good reasons to think that we are much more robust to the pressures that collapsed a lot of earlier civilizations, but the scale of the problems we face is also unprecedented. Biocapacity and energy throughput concerns have proved impressively resistant to technical solutions in the last several decades. And we don't have an infinite amount of time to figure them out before they become serious collapse pressures.

comment by Denkenberger · 2015-04-10T23:04:26.354Z · EA(p) · GW(p)

I have a background in energy and I have studied these issues extensively, so I could write many pages, but I will try to be brief. We actually already have the technology to support 10 billion people at the US standard of living sustainably. It is good to think about the dynamics and embodied energy. But because typical renewable energy pays back the energy investment in about three years, if we just took the energy output of renewable energy and reinvested it, the amount of renewable energy production would grow at about 30% per year. Therefore, if we just reinvested our current renewable energy production, we would be at 100% renewable in a couple of decades. The energy payback time of nuclear power plants (not mining) is more like one third of a year, so this is even more favorable.

The HANDY paper does not consider technological improvement, which is probably appropriate for the timescale of past collapses (but note that in the longer term, our carrying capacity has gone from millions as hunter-gatherers to billions now, even with higher consumption per capita, so technological change is key). However, now that we have markets and R&D, we don't need the government to intervene to get to a sustainable solution quickly. The book "Limits to Growth" does consider technological improvement. But for some reason it estimates the carrying capacity of the earth is much below current consumption, perhaps because it does not recognize we can make nitrogen fertilizer with renewable hydrogen. I think the carrying capacity issue is why "Limits to Growth" nearly always predicts collapse. It is conceivable that we will overreact to these slow problems much more than we did in 2008, and this could turn into a catastrophe. But more likely these resource constraints could slightly reduce our resilience to actual catastrophes.

From a food perspective, there is around a 10% chance of nuclear winter this century, and when you include lesser catastrophes like regional nuclear war, volcanic eruptions, abrupt climate change, pandemics disrupting food trade, etc., it is greater than even chance. So I am worried much more about these catastrophes than resource constraints.
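The reinvestment arithmetic above can be sketched numerically. A minimal illustration, assuming the comment's 3-year energy payback and a hypothetical ~2% current renewable share (that starting figure is an assumption, not from the comment):

```python
# Sketch of the reinvestment arithmetic: a 3-year energy payback means each
# watt of capacity can "fund" roughly 1/3 W of new capacity per year if all
# output is reinvested, i.e. ~30% annual growth.
payback_years = 3.0
growth_rate = 1.0 / payback_years  # ~0.33/yr, close to the ~30%/yr quoted

share = 0.02  # assumed current renewable share of total energy (illustrative)
years = 0
while share < 1.0:
    share *= 1.0 + growth_rate
    years += 1

print(years)  # ~14 years of pure reinvestment, i.e. "a couple decades"
```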

comment by DisposableUsername · 2015-08-22T18:43:35.633Z · EA(p) · GW(p)

But because typical renewable energy pays back the energy investment in about three years, if we just took the energy output of renewable energy and reinvested it, the amount of renewable energy production would grow at about 30% per year.

How many scarce materials would be needed? How much land area? How much toxic waste would be produced, e.g. from solar electronic components? Energy investment is not the only input needed for renewables.

(If you have a link that answers these and similar questions, that would be good.)

comment by Denkenberger · 2016-03-03T14:08:19.768Z · EA(p) · GW(p)

Thanks for the good questions. Wind power can use scarce materials, like rare-earth permanent magnet generators, but it is possible to use just copper. Some photovoltaic technologies use scarce materials, but silicon is abundant. US per-person primary energy use is ~10 kW (Energy Information Administration, "Annual Energy Review 2007"). If we start with renewable electricity, we need less primary energy, 4-8 kW, so say 6 kW. So 10 billion people require 60 trillion watts (TW). Current wind technology could provide 72 TW (Archer, C. and M. Jacobson, "Evaluation of global wind power," Journal of Geophysical Research, Vol. 110, D12110, doi:10.1029/2004JD005462, 2005). The solar maximum on land is ~6,000 TW, but practically ~600 TW (Lewis, N.S., "Powering the Planet," California Institute of Technology presentation). If solar is 10% efficient and average solar radiation is 200 W/square meter, this requires ~0.1 acre/person: 5% of the ecological footprint quota, but it could be in desert or on rooftops. Of course we need to be careful with toxic waste, but landfills take up a negligible amount of land.
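As a check on this comment's arithmetic (all figures are the comment's own assumptions):

```python
# Verifying the comment's arithmetic; every figure here is taken from it.
people = 10e9            # 10 billion people
power_per_person = 6e3   # 6 kW of primary-equivalent power each
total_tw = people * power_per_person / 1e12
print(total_tw)          # 60.0 TW, matching the 60 TW quoted

insolation = 200.0       # W/m^2 average solar radiation
efficiency = 0.10        # 10% efficient solar
delivered = insolation * efficiency      # 20 W/m^2 of usable power
area_m2 = power_per_person / delivered   # 300 m^2 per person
acres = area_m2 / 4046.86                # square meters per acre
print(round(acres, 2))   # 0.07 acre, i.e. the ~0.1 acre/person quoted
```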

comment by Larks · 2015-01-06T00:19:18.901Z · EA(p) · GW(p)

It seems worthwhile to differentiate between

  • criticisms of EA as an idea
  • criticisms of EAs, as individuals and as a movement

For example, EA individuals tend to have a left-wing bias (and when the survey's results are released hopefully we'll have data on this) but this isn't inherent to EA ideals - many EA ideals are quite right wing.

comment by RyanCarey · 2015-01-06T00:44:36.789Z · EA(p) · GW(p)

You make criticism of EAs as individuals out to be uninteresting. But if you called it "EA in practice", then it would seem like something that can also be usefully criticised.

comment by Larks · 2015-01-07T00:20:10.213Z · EA(p) · GW(p)

Sorry, that wasn't my intention. I think both can be valuable.

comment by Giles · 2015-01-05T21:54:28.651Z · EA(p) · GW(p)

Here's the link to the Facebook group post in case people add criticisms there.

Glad you linked to Holden Karnofsky's MIRI post. Other possibly relevant posts from the GiveWell blog:

There are more on a similar philosophical slant (search for "explicit expected value") but the above seem the most criticismy.

comment by Giles · 2015-01-05T19:41:08.865Z · EA(p) · GW(p)

Great topic!

I think you missed this one from Rhys Southan which is lukewarm about EA: Art is a waste of time says EA

I don't see the Schambra piece as particularly vitriolic.

I don't know where to find good outside critics, but I think there's still value in internal criticism, as well as doing a good job processing the criticism we have. (I was thinking of creating a wiki page for it, but haven't got around to it yet).

Some self-centered internal criticism; I don't know how much this resonates with other people:

  • I posted some things on LW back in 2011 which were badly received (and which I'm too embarrassed to link to). This was either a problem with me, or the LW community, or more likely both
  • I spend a lot of time on EA social media when I could be doing more productive stuff
  • I feel like a standard-issue generic EA - like I've internalized all the memes but don't have huge amounts of unique ideas or abilities to bring to the table
  • Similarly my mental model of people in the EA movement is that they're fairly interchangeable, rather than each having their own strengths, weaknesses and personalities
  • In particular, I haven't really managed to make friends with anyone I met through EA
  • I spend a lot of time talking about EA but haven't actually donated much to charity yet
  • In the past I've felt strong affiliation to an EA subtribe (xrisk), viewing the poverty and animal people as outgroups


  • We mostly speak English and are not as ethnically diverse as we could be
  • One of the central premises of EA, that some charities are so very many times more effective than others, seems pretty bold. I'd like to be able to point to a mountain of evidence to back it up but I'm not sure where this is to be found.
comment by carneades · 2016-12-24T09:25:23.454Z · EA(p) · GW(p)

I think that you dismiss the critiques of Moyo and Easterly too quickly. They are critiques of top-down aid, of which EA is a champion. Easterly in particular is critical of organizations that plan without understanding or asking about the needs of the communities. Yes, this means that more research is needed, but of a drastically different kind. The problem with allowing organizations to assess themselves is that they will not look for faults that they know exist. AMF, for example, stops testing communities for malaria after the lifespan of the donated nets runs out, since this would demonstrate how ineffective their programs are. The problem is that currently EA is convinced it is doing sufficient research to identify top charities when the charities that it promotes (like AMF) are not merely ineffective; they are actively doing more harm than good. You want a critique of EA's methods for assessing effective charities? Here it is.

comment by John_Maxwell (John_Maxwell_IV) · 2015-06-24T08:47:42.240Z · EA(p) · GW(p)

This post seems pretty good to me, although I don't agree with all of it.

comment by Dale · 2015-04-24T02:33:09.976Z · EA(p) · GW(p)

Patri recently linked to a post that was basically written directly to EAs:

On Saving the World and Other Delusions

comment by mhpage · 2015-04-22T14:20:31.802Z · EA(p) · GW(p)

I'm new to EA (and so effectively an outsider), and here are a few critiques that immediately come to mind, and which I have not seen mentioned elsewhere. The first two are simply aspects of EA that might render it unpalatable or too counter-intuitive for the masses:

  1. It would seem to follow that robbing from the rich and giving to the poor is ethically required. Imagine a man eating a feast with two dozen turkeys, and right next to him is a family full of starving children. If you could steal a turkey and give it to the family without anyone noticing, shouldn't you? If so, then EA folks should really be entering into an alliance with Anonymous.

  2. Not only are rich people ethically compelled to give away money, but people are ethically compelled to take reasonable steps to become rich. That means that all of those Ivy league graduates who went to work in the charity industry were behaving unethically.

  3. Assuming a set amount of money available for charities, it's utility-maximizing for that money to come from the richest people (given the decreasing marginal utility of money). EA's focus on students giving away a few percent of their stipend is therefore a wildly inefficient way to maximize utility. (The response, of course, is that it's not a zero-sum game, but it is to the extent the EA movement is focusing resources on anyone but the wealthiest people.)
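The decreasing-marginal-utility point in (3) can be made concrete with a toy model. A sketch assuming logarithmic utility of wealth (a standard stand-in for decreasing marginal utility, not anything stated in the comment) and hypothetical wealth figures:

```python
import math

# Toy model: under log utility, the same $1,000 donation costs a wealthy
# donor far less utility than a student on a small stipend. The wealth
# figures below are hypothetical illustrations.
def utility_cost(wealth: float, donation: float) -> float:
    """Utility lost by giving `donation` away out of `wealth`, with log utility."""
    return math.log(wealth) - math.log(wealth - donation)

rich_cost = utility_cost(1_000_000, 1_000)   # millionaire donor
student_cost = utility_cost(20_000, 1_000)   # student on a stipend
print(student_cost / rich_cost)  # the student's sacrifice is ~50x larger
```

This is the sense in which sourcing a fixed pool of donations from the wealthiest people minimizes total utility cost.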

comment by Larks · 2015-04-23T00:26:29.388Z · EA(p) · GW(p)

Imagine a man eating a feast with two dozen turkeys, and right next to him is a family full of starving children. If you could steal a turkey and give it to the family without anyone noticing, shouldn't you?

No, because stealing is morally wrong in itself. Being an EA does not mean you have to endorse utilitarianism! (though some people do neglect the distinction). There are other aspects of morality, and respecting people's rights is one of them.

comment by mhpage · 2015-04-23T18:06:38.763Z · EA(p) · GW(p)

EA is about (in part) extrapolating from what you would do in the near to what you should do in the far. The classic introduction hypothetical is the person drowning right in front of you. Most people's moral instincts are that they should suffer some costs to save the person's life. Ergo, they should suffer some costs to save starving people across the world.

If you were to poll the world about whether people think it's right or wrong to steal one of the two dozen turkeys from the rich man and give it to the starving family, I suspect a sizable percentage would say it's right--or at least not wrong. You might not, but I hardly think that would be a rare response. My point is that extrapolating from that moral premise leads you to very counter-intuitive places.

comment by Vincent_deB · 2015-04-22T19:18:53.259Z · EA(p) · GW(p)

all of those Ivy league graduates who went to work in the charity industry were behaving unethically.

It's better to say they were behaving suboptimally.

comment by mhpage · 2015-04-22T22:35:07.215Z · EA(p) · GW(p)

Vincent, your comment goes to the point I was trying to make. If a rich person has two options: (a) give money to charity; or (b) buy a yacht, and chooses (b), we (or at least I) don't say he is behaving sub-optimally but that he is behaving unethically. Putting aside whether the Ivy league grad would enjoy working for a charity more than working in finance, how is her choice any different from the rich person's choice? If she takes a job at a charity (assuming one for which she is entirely replaceable), rather than taking a job in finance and giving away half of her salary, she is effectively throwing away the money she could have made in finance rather than donating it to charity. How is that different from taking the job and buying a yacht? It seems intuitively different because her motives are different, but that's irrelevant if you're a consequentialist (which seems like part of EA's fabric).

From a marketing perspective, I see why we don't want to encourage stealing (which I still think could be done in a utility-maximizing manner) or claims that charity-minded Ivy league grads are as bad as yacht-buying millionaires, but if the only reason we don't go there is for marketing reasons, that seems like a problem.

As to why this is a critique: I worry that the marketing strategy for EA whitewashes how radical its underlying premise truly is: that we owe the same duty to someone across the world as we do to someone right in front of us. Fully embracing that premise can lead us to extraordinarily counterintuitive (and unpalatable for many) places.

comment by Vincent_deB · 2015-04-23T02:20:55.385Z · EA(p) · GW(p)

As to why this is a critique: I worry that the marketing strategy for EA whitewashes how radical its underlying premise truly is: that we owe the same duty to someone across the world as we do to someone right in front of us. Fully embracing that premise can lead us to extraordinarily counterintuitive (and unpalatable for many) places.

That I agree with. Obscuring/whitewashing it may be tactically wise however, and I think there's been some posts here about whether EA really is consequentialist.

comment by RyanCarey · 2015-04-22T15:10:23.043Z · EA(p) · GW(p)

Hey mhpage. I think these are reasonable sorts of questions that lots of people are likely to suggest, so it's good to tackle them straight away. My responses would be:

  1. Do you think that stealing from the rich is likely to be effective? It seems to me that it would probably lead you to get arrested and muck up your chances of helping for decades to come. At any rate, the idea that it would be compulsory would arise if you believed in 'utilitarianism' or had a related view that there are no 'supererogatory acts'. So that issue is central to those philosophies, rather than to effective altruism.

  2. Effective altruists would be committed to the idea that it's a good way of helping people, and they promote it. Whether there's any 'ethical compulsion' is something that people will vary on depending on their philosophies.

  3. There are still reasons to focus on students, even for the trivial reason that some of them will be wealthy later. There's also other ways of helping than donating funds. And effective altruists are pretty interested in meeting high-net-worth individuals anyhow.

comment by mhpage · 2015-04-22T15:42:22.091Z · EA(p) · GW(p)

Thanks, Ryan. The distinction between EA and utilitarianism is not one I've sufficiently focused on, and it's a useful one to bear in mind. (With that said, I do think there are effective ways certain people could steal from the rich and give to the poor -- e.g., hackers.)

comment by Vincent_deB · 2015-04-22T19:17:39.466Z · EA(p) · GW(p)

If there were I'd expect them to be well-researched and discussed by non-altruists. I haven't heard of any, and would expect to have.

comment by Giles · 2015-01-16T05:37:55.926Z · EA(p) · GW(p)

I was googling "effective altruism arrogant" and it turned up a few links which I'm posting here so I don't lose them:

comment by John_Maxwell (John_Maxwell_IV) · 2015-01-09T06:09:29.795Z · EA(p) · GW(p)

A thought: It seems like the EA community has a pretty strong focus on criticism, whether it's internal or external. Is it possible that this can itself be counterproductive? If the EA community is a fun place to be, that's good for both recruiting and retention, right?

Or to steelman Robin Hanson's recent post, if the EA community is ever to expand beyond high-scrupulosity, taking-abstract-moral-arguments-seriously, relentlessly-self-criticizing folks, it may need to find a way to help people achieve conventional self-interested goals like making friends, finding mates, and signaling desirable qualities.

(I don't necessarily agree with this position but it seems like an interesting one. There may be some kind of quality vs quantity tradeoff where we can either have a smaller movement full of dedicated, careful, effective nerds or a larger movement that could spin out of the control of its founders.)

comment by Giles · 2015-01-09T17:57:29.050Z · EA(p) · GW(p)

I don't know if this is relevant to the criticism theme, but I found it was necessary for me to take some of Hanson's ideas seriously before becoming involved in EA, but his insistence on calling everything hypocrisy was a turn-off for me. Are there any resources on how we evolved to be such-and-such a way (interested in self+immediate family, signalling etc.) but that that's actually a good thing because once we know that we can do better?

comment by Bitton · 2015-01-09T22:35:58.405Z · EA(p) · GW(p)

Off the top of my head:

  • The Selfish Gene by Richard Dawkins
  • The Origins of Virtue by Matt Ridley
  • Moral Tribes by Joshua Greene
  • Darwin's Dangerous Idea by Dennett
  • Freedom Evolves by Dennett
  • The Expanding Circle by Peter Singer

They might mean that our evolved morality is "good" in a different sense than you're looking for.

I haven't read them yet but The Ant and the Peacock, Moral Minds, Evolution of the Social Contract, Nonzero, Unto Others, and The Moral Animal are probably good picks on the subject.

comment by Giles · 2015-01-13T18:55:12.377Z · EA(p) · GW(p)

Thanks - most of those names ring a bell but the Selfish Gene is the only one I've read. I guess some of the value of reading them is gone for me now that my mind is already changed? But I'll keep them in mind :-)

comment by Evan_Gaensbauer · 2015-01-06T02:33:20.022Z · EA(p) · GW(p)

To develop the effective altruist movement, it's essential that we ask people how we've failed, or how our ideas are inadequate. [...] So an important challenge for all of us is to find better critics. [...] Let me know if there's any big criticism that I've missed, or if you know someone who can engage with and poke holes in our ideas.

On the important challenge of finding better critics, my personal strategy is going to be to seek a greater quantity of critics. My rationale for this is that we won't know which criticism(s) is or are the best until they're received, so it's worth courting many critics to widen the net of criticism effective altruism receives. I don't intend to seek as many critics as possible, broadcasting the request as publicly as possible across my social networks, because that seems to be casting a net so wide as to attract poor criticism. However, if there is someone I know who expresses, or has previously expressed, a perspective on effective altruism, even a negative one, I will invite them to generate a criticism of effective altruism.

This still seems like a strategy that will bring in poor criticisms. However, I believe an aversion to courting more criticism rather than less is the same attitude that's led effective altruism to think it's received too little criticism in the first place. If effective altruism receives more poor criticisms, it's not anything the movement can't bear. Honestly, I find it difficult to believe that anyone who sincerely made a great effort to criticize effective altruism could produce something of much lower quality than the worst drivel effective altruism has already received. Also, it seems biased, disingenuous, and hypocritical to seek more critics, but pick and choose the specific critics we might like more.

comment by Evan_Gaensbauer · 2015-01-06T00:40:22.129Z · EA(p) · GW(p)

What sort of criticism is the effective altruism community seeking? I notice much of the prior criticism cited is medium- or high-profile media criticism of effective altruism, in the form of a response to, e.g., William MacAskill's articles published on Medium, or Peter Singer's TED talk. However, from the perspective of effective altruism itself, there isn't an incentive for criticism to be popular, or widely read. The important thing for effective altruism is that criticism of its ideas is noted, and that its critics are engaged.

I ask because I have friends, or others in my network, who might have criticisms of effective altruism. They wouldn't be as high-profile as some of the published ones above, nor from individuals with as much professional experience. However, given the low quality of some of the above criticisms, this doesn't seem to be a concern in its own right. I mean, current university students and other young adults are largely responsible for founding effective altruism, so they should be just as capable of making valuable criticism of it.

comment by Giles · 2015-01-06T01:11:48.395Z · EA(p) · GW(p)

"Giles has passed on some thoughts from a friend" is one of the things cited, so if a particular criticism isn't listed we can assume it's because Ryan doesn't know about it, not that it's inherently too low status or something. I definitely want to hear what your friends have to say!

comment by William_S · 2015-01-05T23:33:00.352Z · EA(p) · GW(p)

I wonder what you would get if you offered a cash prize to whoever wrote the "best" criticism of EA, according to some criteria such as the opinion of a panel of specific EAs, or online voting on a forum. Obviously, this has a large potential for selection effects, but it might produce something interesting (either in the winner, or in other submissions that don't get selected because they are too good).

comment by RyanCarey · 2015-01-05T23:50:09.442Z · EA(p) · GW(p)

Might be better to put up a cash prize for a suggested improvement rather than a critique then but maybe that's me being weak-spirited.

comment by PeterMcIntyre · 2015-01-10T12:27:05.667Z · EA(p) · GW(p)

I think one of my concerns with this would be the consistency-and-commitment effect created by incentivising a criticism, leading to someone seeing herself as an EA critic, or as opposed to these ideas. This is similar to companies offering rewards to customers for writing about why it's their favourite company or product in the world. See also the American prisoners of war held by China in the Korean War (I think), who were given small incentives to write criticisms of America or capitalism. If this were being seriously considered, it'd be good to see more done to work out whether this would be a real consequence.

Source: Influence, Cialdini.

comment by redslider · 2015-01-17T20:17:03.291Z · EA(p) · GW(p)

I suppose I could be counted among those "outside critics" the topic mentioned. What surprised me, however, was that I expected to find an article eschewing the role of criticism and suggesting ways of removing critics, inside and outside the ranks of its members. This is what one often encounters in organizations that feel threatened by anything but the most complimentary remarks on what they are doing. In addition, I stopped by this site for one, and only one, purpose: to briefly describe my thoughts about a world where "altruism" would be a superfluous term, not to criticize anything about what people are, perhaps must be, doing in the world we have now, as it is given. In that, I always regard any tasks which even temporarily mitigate the amount of suffering or damage caused on this planet as essential if we are even to tread water for a while.

Actually, however, I don't regard myself as a critic at all. I don't really know enough about your group to even begin to presume I could judge it in any general or particular way. I also did not regard my one, and only, post (a comment on the topic "What is effective altruism") as a "critique" in any real sense of that term. But I did think it might be useful to offer a view of a world beyond charity and altruism. I've always regarded such portraits as useful, if for no other reason than the opportunity permitting one to check what they are doing to be sure they are not closing important doors to the future even as they open doors in the present. So, that was my reason for coming and making that post.

My own job however, as I define it, is not related to the here and now, except as I and the newspaper I steward suggest things that might be done in the here and now that would substantially alter the underlying reality we have all come to accept as the normative script for the future. That I think is a mistake that is made all too often. We refer to the reality of the present as if it were the only possible reality, and often refer to it in invariant terms such as "human nature". I don't agree with that position at all, and regard it as one of the most persistent and pernicious obstacles to changing our reality and the obvious future it offers us. I happen to think reality is an alterable feature of the human project, and subject to rewrite if we wish to do that. The old scripts, the ones that have been written for us and handed to us, are only 'real' as long as we accept them as reality and tacitly consent to them. In any case, that's how I view the matter.

On a personal level, I might actually offer a critique of the concept of "altruism", but I won't. I'll only briefly mention that for me, altruism (our habits and practices of it) implies a dependency on unaccountable individuals and institutions to set the priorities and provide for the essentials of survival to the people of the world. My view is that, when it comes to essentials, that is a job societies and civilizations as a whole should be doing. Whether they do it or not is beside the point. The fact remains, they should be. I regard it as a core function of having a society in the first place. And so, you will understand that I view charity and altruism as an obstacle to understanding that and to rewriting the scripts of reality that would make it so. All this without for one minute disregarding how essential charity and altruism are at the moment, and that they need at least to be supported even in the midst of demanding that we transfer large swaths of what they do to the public responsibility for managing and delivering such basic services.

But that's just my personal view on these matters, and I've really no intention of posting anything to that effect or arguing the case further. But I did think the piece on omoiyari and a world without need for welfare or altruism or anything like them might be useful in some way. I hope it is, and I wish you all the best. omoiyari, Red Slider

comment by Vincent_deB · 2015-01-08T09:56:11.099Z · EA(p) · GW(p)

Here is someone's initial exploration of a potential criticism:

(A poll about whether a nonprofit with a charismatic and intelligent leader and an unfalsifiable premise about how its charity does good would succeed in getting funding from the EA community.)

comment by William_MacAskill · 2015-01-06T17:00:27.419Z · EA(p) · GW(p)

This piece by George Monbiot represents one strand of potential deep criticism, which is that many goods are incommensurable in value:

This is a pretty common view in philosophy, and it would make the EA project much more limited in what it could achieve.

comment by Robert_Wiblin · 2015-01-06T18:56:50.718Z · EA(p) · GW(p)

If many things are incommensurable, at least we wouldn't be doing harm - our actions would often be merely neutral.

comment by Evan_Gaensbauer · 2015-01-06T02:25:20.346Z · EA(p) · GW(p)

Is there any particular topic, or set of ideas, from effective altruism for which criticism is being sought? Alternatively, is there a particular format in which the criticism is preferred? In particular, if I know some folks who might criticize effective altruism, I could ask them to publish their perspective on this forum. On the other hand, the threat of receiving downvotes, and being on an intellectual opponent's home turf, seems to me a (fair) reason one might not want to publish criticisms of effective altruism on this forum.

I believe that if it's explained to potential critics that effective altruism is founded on tenets of self-reflection and rationality, and that it earnestly seeks criticism of its ideas so it can change and improve, those critics might be more willing to specify their focus. After Holden Karnofsky published his critique of MIRI, MIRI temporarily hired former GiveWell analyst Jonah Sinick to assess the validity of its organizational strategy relative to its goals. Additionally, Luke Muehlhauser and Holden Karnofsky in particular have maintained a dialogue about what Holden, or GiveWell, might think of MIRI's ongoing work, and how it has improved over time. If MIRI as an organization can solicit criticism tailored to specific subject matter in pursuit of self-improvement, perhaps effective altruism as a whole can do the same.

comment by RyanCarey · 2015-01-06T10:53:56.238Z · EA(p) · GW(p)

I think that it's useful to hear what people think of the whole idea. For reference, there's Singer's TED talk, Will's 'What is Effective Altruism?', and lots more introductory essays. Apart from that, my suggestions for new critics would be:

  • It's probably better not to read a lot of existing critiques at this stage, because it might make you less imaginative.
  • To keep it constructive, it often helps if you can suggest what would count as an improvement.
  • You can email criticism to me or post it here (as a comment or as a new thread).
  • If you're writing something more substantial, you can get people to give feedback by sending it around as a Google Doc.

comment by Philip_W · 2015-01-06T16:02:09.430Z · EA(p) · GW(p)

Eliezer Yudkowsky has challenged utilitarianism and some forms of moral realism in the Fun Theory sequence, the enigmatic (or merely misunderstood) Metaethics sequence and the fictionalised dilemma Three Worlds Collide.

I'm confused. AFAIK Yudkowsky's position is utilitarian, and none of the linked posts and sequences challenge utilitarianism. 3WC is an obvious example, where only one specific branch - average preference utilitarianism - is argued to be wrong. The sequences are attempts to specify parts of the utility function and its behavior - even going so far as to argue for deontological laws as part of utilitarianism for corrupt humans - not refutations.

the enigmatic (or merely misunderstood) Metaethics sequence

This looks like mind projection fallacy. If so, the obvious explanation is that you don't understand Yudkowsky's position properly.

comment by RyanCarey · 2015-01-06T17:32:55.412Z · EA(p) · GW(p)

AFAIK Yudkowsky's position is utilitarian, and none of the linked posts and sequences challenge utilitarianism.

I've added the word 'hedonistic' and fixed a duplicate link. Maybe he's an atypical utilitarian, depending on our definitions. He's consequentialist and I think he endorses following a utility function but he certainly opposes simple hedonistic utilitarianism, or the maximisation of any simple good.

the enigmatic (or merely misunderstood) Metaethics sequence

This looks like mind projection fallacy. If so, the obvious explanation is that you don't understand Yudkowsky's position properly.

Yes, I found Eliezer's Metaethics sequence difficult, but so did lots of people. Eliezer agrees:

I've been pondering the unexpectedly large inferential distances at work here—I thought I'd gotten all the prerequisites out of the way for explaining metaethics, but no. I'm no longer sure I'm even close. I tried to say that morality was a "computation", and that failed; I tried to explain that "computation" meant "abstracted idealized dynamic", but that didn't work either. No matter how many different ways I tried to explain it, I couldn't get across the distinction my metaethics drew between "do the right thing", "do the human thing", and "do my own thing".