How to criticise Effective Altruism

post by tobytrem · 2021-06-03T11:24:55.561Z · 2 comments


    Goal-level critiques (we aren't doing the right thing)
    Procedural critiques (we are doing the right thing in the wrong way)
    Object-level critiques (a specific application of the correct goals and procedures needs adjusting)

I've become increasingly frustrated with attempts to critique Effective Altruism. Critiques from within the movement often seem to have large blind spots, and critiques from outside generally miss the mark, focusing their attack on the version of the movement that is publicised, not on the actual claims its members take to be true. 

I think it is really important that we regularly hear well-targeted critiques of EA, both the movement and the philosophy. Partly this is because good critiques can help us do good better, adjusting our methods to better fit reality. But just as importantly, hearing the substantial disagreements that other intelligent people have about all aspects of EA is necessary for genuine intellectual and epistemic humility. Without this humility we are likely to go wrong. 

Recently I ran a discussion session for my local group in which we went over some critiques of EA. To focus our conversation, I drew up a taxonomy of possible critiques. This helped us to formulate new critical questions, but it also helped us to clarify and understand critiques from outside the movement, whose intention can often be lost in translation. In this post I will explain the taxonomy that we worked with. 

I split possible criticisms of EA into Goal-level, Procedural and Object-level critiques. 


Goal-level critiques (or we aren't doing the right thing)

I am characterising the goal (or project) of EA as "doing the most good". I think this framing is best because the sentence implies the need for action, effectiveness, maximisation and the quantification of good, while (besides maximisation) implying no specific designation of what "the good" is. If you disagree, then this section should only run a little differently. 

Critiques at the Goal-level are those that disagree with holding the ideal of "doing the most good". It is my impression that many EAs join the movement because they already assume this to be the correct goal, so attempts to steelman out-group critiques aimed at this level risk misdirecting the objection towards some other claim. It is important not to do this, because there are very real objections even to this fundamental claim.

These arguments can be intrinsic (the goal is internally incoherent or false), or extrinsic (a movement with this goal should not exist). 

Some examples: 

There are probably many other ways to critique the goal of EA, and the existence of a movement with that goal. I'd love to see more examples in the comments. 

Also it is worth noting here that disagreements over what "good" is are not critiques of EA based on this taxonomy. You have to have an idea of what "good" is to engage with EA at all, but critiques of your idea of "good" target something prior to EA. I think this marries well with the wide base of axiologies that EA allows, from negative utilitarians to hedonists. 


Procedural critiques (or we are doing the right thing in the wrong way)

By procedural I mean, broadly, the ways in which the movement goes about achieving its goals. This includes institutional critiques, but also those that criticise general social norms or emphases within the presently existing community. We could also refer to this layer as discussing 'strategy', in a broad sense that incorporates both the explicit strategic decisions made by influential organisations and the (often implicit and undeliberated) decisions to allow norms to develop and perpetuate. 

All critiques of this kind are aimed at the movement as it actually exists, and they argue that EA as it currently is falls short of what its goal implies it should be. 

Some examples of both institutional and social or attitudinal critiques:

Again, there will be many more examples of potential procedural issues with EA; I'd love to hear about some more in the comments. 


Object-level critiques (or a specific application of the correct goals and procedures needs adjusting)

This level of critique is much of what is relevant to EAs on a daily basis. As new empirical information is uncovered we might realise that we should shift resources around, GiveWell should change its recommendations, or graduates should be discouraged from applying to roles in AI safety. These critiques are very important, but they should be separated from the philosophy of EA (Goal-level) and the contingently existing movement itself (Procedural). 


Thanks for reading! In the comments, it would be great to hear any problems with this taxonomy. The most important errors to point out would be types of critique which cannot fit into any category; these are more consequential than counter-examples that seem to fit multiple categories (though examples of those are welcome as well). If there are several good objections, I will publish an updated version of the taxonomy in a few weeks. It would also be great to see some more (taxonomised) critiques of EA in the comments, your own or your favourites from elsewhere. 


Comments sorted by top scores.

comment by Aaron Gertler (aarongertler) · 2021-06-14T06:22:01.520Z

I don't have time to comment on the taxonomy right now, but my favorite critique of EA is eight years old and I don't see it cited often enough in conversations like these, so I'm sharing it here. (It probably covers all of your categories to some extent.)

Replies from: tobytrem
comment by tobytrem · 2021-06-17T09:29:32.705Z

Thanks! I read this a while back and I remember it was great, but I haven't yet looked with an eye to taxonomising its arguments. Could be a useful exercise.