The totalitarian implications of Effective Altruism

post by Ed_Talks · 2022-06-14T16:52:02.516Z · EA · GW · 16 comments

This is a link post for my entry to the Criticism and Red Teaming Contest.

My argument is that EA’s underlying principles default towards a form of totalitarianism. Ultimately, I conclude that we need a reformulated concept of EA to safeguard against this risk. 

Questions, comments and critiques are welcomed. 


EDIT 16 JUNE 2022: Just a quick note to thank everyone for their comments. This is my first full post on the forum and it's really rewarding to see people engaging with the post and offering their critiques.


Comments sorted by top scores.

comment by peterhartree (Peter_Hartree) · 2022-06-15T14:56:50.523Z · EA(p) · GW(p)

I wasn't convinced by your argument that basic EA principles have totalitarian implications.

The argument given seems too quick, and relies on premises that seem pretty implausible to me, namely:

(a) that EA "seeks to displace all prior traditions and institutions"

(b) that it is motivated by "the goal of bringing all aspects of society under the control of [its] ideology"

Given that this is the weakest part of the piece, I think the title is unfortunate.

Replies from: Ed_Talks
comment by Ed_Talks · 2022-06-16T12:20:53.382Z · EA(p) · GW(p)

Thanks for your three comments, all of which make excellent points. To briefly comment on each one:


The distinction you draw between (a) do the most good (with your entire life) and (b) do the most good (with whatever fraction of resources you've decided to allocate to altruistic ends) is a really good one. I firmly agree with your recommendation that the EA materials make it clearer that EA is recommending (b). If EA could reformulate its objectives in terms of (b) this would be exactly the type of strengthened weak-EA I am arguing for in my piece.


Thanks for the links here. All of these are good examples of discussions of a form of weak EA as discussed by Michael Nielsen in his notes and built upon in my piece. I note that in each of the linked cases, there is a form of subjective 'ad-hocness' to the use of weak EA to moderate EA's strong tendencies. I therefore have the same concerns as outlined in my piece. 


You've touched upon what was actually (and still is) my second largest concern with the piece (see my response to ThomasWoodside above for the first). 

I'm conscious that totalitarianism is a loaded term. I'm also conscious that my piece does not spend much time kicking the tyres of the concept. I deliberated for a while as to whether the piece would be stronger if I found another term, or limited my analysis to totalisation.  I expect that the critique you've made is a common one amongst those who did not enjoy the piece. 

My rationale for sticking with the term totalitarianism was twofold:

(A) My piece argues that we need to take the logical outcomes of strong EA seriously, even if those consequences are clearly not the case today. As set out in the piece, my view is that the logical outcomes of an unmitigated form of strong EA would be (i) a totalising framework (i.e. one with the ability to touch all human life), and (ii) a small number of centralised organisations able to determine the moral value of actions. Put together, these two outcomes create at least the potential for an ideology that fits quite neatly into Dreher's definition of totalitarianism as used in my piece and applied in your comment above. I therefore concluded that ducking the term would be unfaithful to my own argument: it would mean turning a blind eye to a potential strong EA of tomorrow on the basis of the state of EA today.

(B) I thought totalitarianism was the best way of capturing and synthesising the two separate strands of my argument (externalisation and totalisation); totalisation is only one element of this.

Thanks again for your really engaging comments. 

comment by ThomasW (ThomasWoodside) · 2022-06-14T17:59:55.819Z · EA(p) · GW(p)

This reminds me of Adorno and Horkheimer's Dialectic of Enlightenment, which argues, for some of the same reasons you do, that "Enlightenment is totalitarian." A passage that feels particularly relevant:

For the Enlightenment, whatever does not conform to the rule of computation and utility is suspect.

They would probably say "alienation" rather than "externalization," but have some of the same criticisms.

(I don't endorse the Frankfurt School or critical theory. I just wanted to note the similarities.)

One thing to consider is moral and epistemic uncertainty. The EA community already does this to some extent, for instance MacAskill's Moral Uncertainty, Ord's Moral Parliament, the unilateralist's curse [? · GW], etc. but there is an argument that it could be taken more seriously.

Replies from: Ed_Talks
comment by Ed_Talks · 2022-06-15T12:46:36.070Z · EA(p) · GW(p)

This is a really interesting parallel - thank you!  

It ties neatly into one of my major concerns with my piece: whether it can be interpreted as anti-rationality / a critique of empiricism (which is not the intention).

My reflexive reaction to the claim that "enlightenment is totalitarian" is fairly heavy scepticism (whereas, obviously, I lean in the opposite direction as regards EA), so I'm curious what distinctions there are between the arguments made in Dialectic and those made in my piece. I will read Dialectic and think this through further.

comment by Locke · 2022-06-14T18:53:55.025Z · EA(p) · GW(p)

Strong EA's "doing the most good", which risks slipping into "at any cost" and thus totalitarianism as you say, should perhaps be called "optimized altruism."

comment by acylhalide (Samuel Shadrach) · 2022-06-14T18:26:54.307Z · EA(p) · GW(p)

Nice article.

One mental move you can make to avoid this totalisation is to frame "doing the most good" not as the terminal goal of your life, but as an instrumental goal to your terminal goal which is "be happy/satisfied/content". Doing good is obviously one of the things that brings you satisfaction (if it didn't you obviously would not do it), but it isn't the only thing.

Accepting this frame risks ending up not doing good at all, because there are plenty of people who are happy without doing much good for others (at least as measured by an EA lens). That may be something you're okay with, or something you wish to defend against. But for some people, doing good is important enough to their own life satisfaction that losing the goal of doing good is not a concern.

(Personally I think the whole terminal goal / instrumental goal abstraction breaks down a bit when discussing humans, but a lot of EAs are clearly utilitarians who think in those terms so I wished to provide some thoughts in a frame they could parse.)

Replies from: Ed_Talks
comment by Ed_Talks · 2022-06-15T12:56:45.371Z · EA(p) · GW(p)

Thanks for engaging with my piece and for these interesting thoughts - really appreciate it. 

I agree that, on a personal level, turning 'doing the most good' into an instrumental goal towards the terminal goal of 'being happy' sounds like an intuitive and healthy way to approach decision-making. My concern, however, is that this is not EA, or at least not EA as embodied by the fundamental principles explored in my piece.

The question that comes to my mind as I read your comment is: 'is instrumental EA (A) a personal ad hoc exemption to EA (i.e. a form of weak EA), or (B) a proposed reformulation of EA's principles?'

If the former, then I think this is subject to the same pressures as outlined in my piece. If the latter, then my concern would be that the fundamental objective of this reformulation is so divorced from EA's original intention that the concept of EA becomes meaningless. 

Replies from: Samuel Shadrach
comment by acylhalide (Samuel Shadrach) · 2022-06-15T18:13:57.258Z · EA(p) · GW(p)

My concern however is that this is not EA, or at least not EA as embodied by its fundamental principles as explored in my piece.


And I think your last para is mostly valid too.

comment by Richard Y Chappell (RYC) · 2022-06-15T17:25:53.695Z · EA(p) · GW(p)

I think J.S. Mill's On Liberty offers a compelling argument for why utilitarians (and, by extension, Strong EAs) ought to favour pluralism, "experiments in living", and significant spheres of personal liberty.

So, as a possible suggestion for the "What should EA do?" section: read On Liberty, and encourage other EAs to do likewise. (In the coming year I'll be adding a 'study guide' on this, which should be more accessible to a modern audience than the 19th-century original.)

fwiw, my sense is that most EAs already share a Millian ethos rather than a totalitarian one! But it's certainly important to maintain this.

Replies from: Ed_Talks
comment by Ed_Talks · 2022-06-16T11:47:02.413Z · EA(p) · GW(p)

Thanks for the recommendation. This dovetails nicely with my 4th recommendation (identify a firm philosophical foundation for the weakened form of EA I am proposing). The 'spheres of personal liberty' concept sounds like a decent starting point for a reformulation of the principle. 

comment by Johannes_Schwenke · 2022-06-15T14:14:19.628Z · EA(p) · GW(p)

Hi, I enjoyed your article. Parts of this remind me of Popper's "Utopia and Violence" in Conjectures and Refutations. Given that (strong) longtermist philosophy leads one to consider the value of an action in light of how much it could help bring about a particular utopia (often a techno-utopia), you might find inspiration to expand your critique in Popper's essay. (I don't want to endorse any specific view here, I just thought this might help you build a better argument).

Some quotes:

That the Utopian method, which chooses an ideal state of society as the aim which all our political actions should serve, is likely to produce violence can be shown thus. Since we cannot determine the ultimate ends of political actions scientifically, or by purely rational methods, differences of opinion concerning what the ideal state should be like cannot always be smoothed out by the method of argument. They will at least partly have the character of religious differences. And there can be no tolerance between these different Utopian religions. Utopian aims are designed to serve as a basis for rational political action and discussion, and such action appears to be possible only if the aim is definitely decided upon. Thus the Utopianist must win over, or else crush, his Utopianist competitors who do not share his own Utopian aims, and who do not profess his own Utopianist religion.

But he has to do more. He has to be very thorough in eliminating and stamping out all heretical competing views. For the way to the Utopian goal is long. Thus the rationality of his political action demands constancy of aim for a long time ahead; and this can only be achieved if he not merely crushes competing Utopian religions, but as far as possible stamps out all memory of them. 


Work for the elimination of concrete evils rather than for the realization of abstract goods. Do not aim at establishing happiness by political means. Rather aim at the elimination of concrete miseries. Or, in more practical terms: fight for the elimination of poverty by direct means--for example, by making sure that everybody has a minimum income. Or fight against epidemics and disease by erecting hospitals and schools of medicine. Fight illiteracy as you fight criminality. But do all this by direct means. Choose what you consider the most urgent evil of the society in which you live, and try patiently to convince people that we can get rid of it. 

But do not try to realize these aims indirectly by designing and working for a distant ideal of a society which is wholly good. However deeply you may feel indebted to its inspiring vision, do not think that you are obliged to work for its realization, or that it is your mission to open the eyes of others to its beauty. Do not allow your dreams of a beautiful world to lure you away from the claims of men who suffer here and now. Our fellow men have a claim to our help; no generation must be sacrificed for the sake of future generations, for the sake of an ideal of happiness that may never be realized. In brief, it is my thesis that human misery is the most urgent problem of a rational public policy and that happiness is not such a problem. The attainment of happiness should be left to our private endeavours.

Replies from: Ed_Talks
comment by Ed_Talks · 2022-06-16T11:54:05.446Z · EA(p) · GW(p)

Thanks for this, and I can definitely see the parallels here. 

Interestingly, from an initial read of the extracts you helpfully posted above, I can see Popper's argument working for or against mine. 

On one hand, it is not hard to identify a utopian strain in EA thought (particularly in longtermism, as you have pointed out). On the other, I think there is a strong case that EA is doing exactly what Popper suggests when he says: "Work for the elimination of concrete evils rather than for the realization of abstract goods. Do not aim at establishing happiness by political means. Rather aim at the elimination of concrete miseries." I see the EA community's efforts in areas like malaria and direct cash transfers as falling firmly within the 'elimination of concrete evils' camp.

Replies from: Johannes_Schwenke
comment by Johannes_Schwenke · 2022-06-24T18:29:14.205Z · EA(p) · GW(p)

I agree 100% that the EA community's efforts in areas like malaria and direct cash transfers fall quite firmly within the 'elimination of concrete evils' camp. IIRC, you differentiate between the philosophical foundations and the actual practice of effective altruism in your essay. So even if most current EA work falls within that camp, the philosophical foundations might not actually imply it.

comment by Karthik Tadepalli (therealslimkt) · 2022-06-15T19:29:39.364Z · EA(p) · GW(p)

I'm skeptical of the section of your argument that goes "weak EA doesn't suffer from totalization, but strong EA does, and therefore EA does."

The presence of a weak EA does not undermine the logic of a strong EA. If EA’s fundamental goal is to achieve “as much [good] as possible”, its default position will always point towards totalisation.

Why do you take strong EA as the "default" and weak EA as something that's just "present"? I could equally say

The presence of a strong EA does not undermine the logic of a weak EA. If EA's fundamental goal of achieving as much good as possible is subject to various self-imposed exemptions, its default position does not point towards totalization.

Adjudicating between these boils down to whether strong EA or weak EA is the better "true representation" of EA. And in answering that, I want to emphasize - EA is not a person with goals or positions. EA is what EAs do. This is normally a semantic quibble because we use "EA has the position X" as a useful shorthand for "most EAs believe X, motivated by their EA values and beliefs". But making this distinction is important here, because it distinguishes between weak EA (what EAs do) and strong EA (what EAs mostly do not do). If most EAs believe in and practice weak EA, then I feel like it's the only reasonable "true representation" of EA.

You address this later on by saying that weak EA may be dominant today, but we can't speak to how it might be tomorrow. This doesn't feel very substantial. Suppose someone objects to utilitarianism on the grounds "the utilitarian mindset could lead people to do horrible things in the name of the greater good, like harvesting people's organs." They then clarify, "of course no utilitarian today would do that, but we can't speak to the behavior of utilitarians tomorrow, so this is a reason to be skeptical of utilitarianism today." Does this feel like a useful criticism of utilitarianism? Reasonable people could disagree, but to me it feels like appealing to the future is a way to attribute beliefs to a large group even when almost nobody holds them, because they could hold those views.

Moreover, I think future beliefs and practices are reasonably predictable, because movements experience a lot of path-dependency. The next generation of EAs is unlikely to derive their beliefs just by introspecting towards the most extreme possible conclusions of EA principles. Rather, they are much more likely to derive their beliefs from a) their pre-existing values, b) the beliefs and practices of their EA peers and other EAs who they respect. Both of these are likely to be significantly more moderate than the most extreme possible EA positions.

Internalizing this point moderates your argument to a different form, "EA principles support a totalitarian morality". I believe this claim to be true, but the significance of that as "EA criticism" is fairly limited when it is so removed from practice.

comment by peterhartree (Peter_Hartree) · 2022-06-15T15:02:00.406Z · EA(p) · GW(p)

I agree with the following statement, which is well put:

EA needs to find a better way to articulate its relationship with the individual and with personal agency.

I think there are some good examples of this, but they're not sufficiently prominent in the introductory materials.

One I saw recently, from Luke Muehlhauser:

  1. I was born into incredible privilege. I can satisfy all of my needs, and many of my wants, and still have plenty of money, time, and energy left over. So what will I do with those extra resources?
  2. I might as well use them to help others, because I wish everyone was as well-off as I am. Plus, figuring out how to help others effectively sounds intellectually interesting.
  3. With whatever portion of my resources I’m devoting to helping others, I want my help to be truly other-focused [EA · GW]. In other words, I want to benefit others by their own lights, as much as possible (with whatever portion of resources I’ve devoted to helping others).

In a not-very-prominent article in the Key Ideas series, Ben Todd writes:

One technique that can be helpful is setting a target for how much energy you want to invest in personal vs. altruistic goals. For instance, our co-founder Ben sees making a difference as the top goal for his career and forgoes 10% of his income. However, with the remaining 90% of his income, and most of his remaining non-work time, he does whatever makes him most personally happy. It’s not obvious this is the best tradeoff, but having an explicit decision means he doesn’t have to waste attention and emotional energy reassessing this choice every day, and can focus on the big picture.

There's also You have more than one goal, and that's fine [EA · GW] by Julia Wise.

comment by peterhartree (Peter_Hartree) · 2022-06-15T14:53:34.714Z · EA(p) · GW(p)

Thanks for the post. I'll post some quick responses, split into separate comments...

I agree that "do the most good" can be understood in a totalising way. One can naturally understand it as either:

(a) do the most good (with your entire life).

(b) do the most good (with whatever fraction of resources you've decided to allocate to altruistic ends).

I read it as (b).

In my experience, people who think there are strong moral arguments for (a) tend to nonetheless think that (b) is a better idea to promote (on pragmatic grounds).

I've long thought [EA · GW] it'd be good if introductions to effective altruism would make it clearer that:

(i) EA is compatible with both (a) and (b)

(ii) EA is generally recommending (b)