Unflattering reasons why I'm attracted to EA

post by dotsam · 2022-06-03T14:27:22.913Z · EA · GW · 17 comments

I have noticed the following distasteful motivations for my interest in EA surface within me from time to time. I'm disclosing them as they may also be reasons why people are suspicious of EA.


Comments sorted by top scores.

comment by Linch · 2022-06-03T23:37:13.117Z · EA(p) · GW(p)

For me, it's some subset of the above, plus some related points: 

  • Most of my personal and professional successes are due to EA
  • I feel more successful and happy doing EA stuff than non-EA stuff, at least if I take a gradient descent approach. And there's a bunch of momentum in not changing.
    • This isn't exactly true if I zoom way out, for example FIRE would be within my reach if I wasn't so altruistically committed, and I suspect not working would make me happier.
      • On the other hand, I don't think I would necessarily have even recognized this without EA.
  • I feel smarter thinking about EA stuff than thinking about e.g. math or programming
  • Having a "noble" central purpose in my life makes the individual failures in (the rest of) my life feel more bearable
  • EA lets me justify putting off doing a bunch of things that in other cultures would be called "growing up," like learning to drive or having children
  • Non-EA liberal Western society feels increasingly identity-driven, and I like to feel appreciated for my intellectual and community contributions, regardless of how I look
comment by Julia_Wise · 2022-06-03T17:41:16.658Z · EA(p) · GW(p)

Upvoting for honesty on under-the-surface things!

comment by Geoffrey Miller (geoffreymiller) · 2022-06-03T15:25:28.597Z · EA(p) · GW(p)

Good post with a fairly comprehensive list of the conscious, semi-conscious, covert, or adaptively self-deceived reasons why we may be attracted to EA.

I think these apply to any kind of virtue signaling, do-gooding, or public concern over moral, political, or religious issues, so they're not unique to EA. (Although the 'intellectual puzzle' piece may be somewhat distinctive to EA.)

We shouldn't beat ourselves up about these motivations, IMHO.  There's no shame in them. We're hyper-social primates, evolved to gain social, sexual, reproductive, and tribal success through all kinds of moralistic beliefs, values, signals, and behaviors. If we can harness those instincts a little more effectively in the direction of helping other current and future sentient beings, that's a huge win. 

We don't need pristine motivations. Don't buy into the Kantian nonsense that only disinterested or purely 'altruistic' reasons for altruism are legitimate. There is no naturally evolved species that would be capable of pure Kantian altruism. It's not an evolutionarily stable strategy, in game theory terms. 

We just have to do the best we can with the motivations that evolution gave us. I think Effective Altruism is doing the best we can.

The only trouble comes if we try to pretend that none of these motivations should have any legitimacy in EA. If we shame each other for using our EA activities to make friends, find mates, raise status, make a living, or feel good about ourselves, we undermine EA. And if we undermine the payoffs for any of these incentives through some misguided puritanism about what motives we can expect EAs to have, we might undermine EA. 

Replies from: ofer, Daniel Kirmani
comment by ofer · 2022-06-03T18:40:24.441Z · EA(p) · GW(p)

If we shame each other for using our EA activities to make friends, find mates, raise status, make a living, or feel good about ourselves, we undermine EA.

This seems plausible. On the other hand, it may be important to be nuanced here. In the realms of anthropogenic x-risks and meta-EA, it is often very hard to judge whether a given intervention is net-positive or net-negative. Conflicts of interest can cause people to be less likely to make good decisions from an EA perspective.

comment by Daniel Kirmani · 2022-06-03T17:37:21.767Z · EA(p) · GW(p)

If we shame each other for using our EA activities to make friends, find mates, raise status, make a living, or feel good about ourselves, we undermine EA.

What're the costs/benefits of reversing this shame? By "reversing shame" I mean explicitly pitching EA to people as an opportunity for them to pursue their non-utilitarian desires.

comment by IanDavidMoss · 2022-06-03T19:10:42.991Z · EA(p) · GW(p)

Really appreciate you writing this! Echoing others, I think many of these more self-serving motivations are pretty common in the community. With that said, I think some of these are much more potentially problematic than others, and the list is worth disaggregating on that dimension. For example, your comment about EA helping you not feel so fragile strikes me as prosocial, if anything, and I don't think anyone would have a problem with someone gaining hope that their own suffering could be reduced from engaging in EA.

The ones that I think are most worrying and worth pushing back on (not just for you, but for all of us in the community) are:

  • Affiliation with EA aligns me with high-status people and elite institutions, which makes me feel part of something special, important and exclusive (even if it's not meant to be)
  • EA is partly an intellectual puzzle, and gives me opportunities to show off and feel like I'm right and other people are wrong / I don't have to get my hands dirty helping people, yet I can still feel as or more legitimate than someone who is actually on the front line
  • It is a way to feel morally superior to other people, to craft a moral dominance hierarchy where I am higher than other people

The first one is tricky, as affiliation with high-status people and organizations can be instrumentally quite useful for achieving impact--indeed, in some contexts it's essential--and for that reason we shouldn't reject it on principle. And just like I think it's okay to enjoy money, I think it's okay to enjoy the feeling of doing something special and important! The danger is in having the status become its own reward, replacing the drive for impact. I feel that this is something we need to be constantly vigilant about, as it's easy to mistake social signals of importance for actual importance (aka LARPing at impact).

I grouped the "intellectual puzzle" and "get my hands dirty" items because I see them as two sides of the same coin. In recent years it feels to me that EA has lost touch a bit with its emotional core, which is arguably easier to bring forward in the contexts of animal welfare and global poverty than x-risk (and to the extent there is an emotional core to x-risk, it is mostly one of fear rather than compassion). I personally love solving intellectual puzzles and it's a big reason why I keep coming back to this community, but it mustn't come at the expense of the A in EA. I group this with "get my hands dirty" because I think for many of us, hard intellectual puzzles are our bread and butter and actually take less effort/provoke less discomfort than putting ourselves in a position to help people suffering right in front of us. I similarly see this one as a balance to strike.

The last one is the only one that I think is just unambiguously bad. Not only is it incorrect on its face, or at least at odds with what I see as EA's core values, but it is a surefire way to turn off people who might otherwise be motivated to help. And indeed there has been a history of people in EA publicly communicating in a way that came across to others as morally arrogant, especially in the early years of the movement, which created rifts with mainstream nonprofit/social sector practice that are still there today (e.g.).

Replies from: Samuel Shadrach
comment by acylhalide (Samuel Shadrach) · 2022-06-06T18:53:39.911Z · EA(p) · GW(p)

Small point, but the linked tweet in your last para doesn't come across as someone who feels EAs are morally arrogant, at least if I read the thread without any other context. He's both appreciative and critical of EA, and his criticisms seem mostly about the actual work rather than the attitudes or traits of the people involved.

comment by Daniel Kirmani · 2022-06-03T14:48:27.788Z · EA(p) · GW(p)

I made my account to upvote this. EA would do well to think more clearly about the practical nature of altruism and self-deception.

comment by rodeo_flagellum · 2022-06-03T14:57:42.049Z · EA(p) · GW(p)

I admit, some of these apply to me as well. I would be interested in reading further on the phenomenon, which I can't seem to find a term for, of "ugly intentions (such as philanthropy purely for status) that produce a variety of good outcomes for self and others, where the actor knows that this variety of good outcomes for others is being produced but is in it for other reasons".

Your post reminds me of some passages from the chapter on charity in the book The Elephant in the Brain (rereading it now to illustrate some points), and could probably be grouped under some of the categories in the final list. I would recommend reading this book, generally speaking. 


What Singer has highlighted with this argument is nothing more than simple, everyday human hypocrisy—the gap between our stated ideals (wanting to help those who need it most) and our actual behavior (spending money on ourselves). By doing this, he’s hoping to change his readers’ minds about what’s considered “ethical” behavior. In other words, he’s trying to moralize.

Our goal, in contrast, is simply to investigate what makes human beings tick. But we will still find it useful to document this kind of hypocrisy, if only to call attention to the elephant. In particular, what we’ll see in this chapter is that even when we’re trying to be charitable, we betray some of our uglier, less altruistic motives.

Warm Glow

Instead of acting strictly to improve the well-being of others, Andreoni theorized, we do charity in part because of a selfish psychological motive: it makes us happy. Part of the reason we give to homeless people on the street, for example, is because the act of donating makes us feel good, regardless of the results.

Andreoni calls this the “warm glow” theory. It helps explain why so few of us behave like effective altruists. Consider these two strategies for giving to charity: (1) setting up an automatic monthly payment to the Against Malaria Foundation, or (2) giving a small amount to every panhandler, collection plate, and Girl Scout. Making automatic payments to a single charity may be more efficient at improving the lives of others, but the other strategy—giving more widely, opportunistically, and in smaller amounts—is more efficient at generating those warm fuzzy feelings. When we “diversify” our donations, we get more opportunities to feel good.


  • Visibility. We give more when we’re being watched.
  • Peer pressure. Our giving responds strongly to social influences.
  • Proximity. We prefer to help people locally rather than globally.
  • Relatability. We give more when the people we help are identifiable (via faces and/or stories) and give less in response to numbers and facts.
  • Mating motive. We’re more generous when primed with a mating motive.

This list is far from comprehensive, but taken together, these factors help explain why we donate so inefficiently, and also why we feel that warm glow when we donate. Let’s briefly look at each factor in turn.

Simler and Hanson then cover each of the listed factors in greater depth.

comment by gabriel_wagner · 2022-06-03T14:39:54.191Z · EA(p) · GW(p)

Thanks a lot for writing this down with so much clarity and honesty!

I think I share many of those feelings,  but would not have been able to write this.

comment by timunderwood · 2022-06-06T09:06:27.219Z · EA(p) · GW(p)

It's all good -- what matters is whether we make a (the biggest possible) positive difference in the world, not how the motivational system decided to pick this as a goal.

I do think that it is important for the EA community/system/whatever it is to successfully point the stuff that is done for making friends and feeling high status towards stuff that actually makes that biggest possible difference.

Replies from: tamgent
comment by tamgent · 2022-06-06T19:42:18.427Z · EA(p) · GW(p)

I think the issue is that some of these motivations might cause us to just not actually make as much positive difference as we might think we're making. Goodharting ourselves.

Replies from: timunderwood
comment by timunderwood · 2022-06-09T12:41:47.216Z · EA(p) · GW(p)

Ummmm, so we say we want to do good, but we actually want to make friends and get laid, so we figure out ways to 'do good' that lead to lots of hanging out with interesting people, and chances to demonstrate how cool we are to them. Often these ways of 'doing good' don't actually benefit anyone who isn't part of the community.

This is at least the worry, which I think is a separate problem from Goodharting. I.e., when the CEA provides money to fly someone from the US to go to an EAGx conference in Europe, I don't think there is any metric that is trying to be maximized, but rather just a vague sense that this might something something person becomes effective and then lots of impact.

Now it could interact with Goodharting in a case where, for example, community organizers get funds and status primarily based on numbers of people attending events, when what actually matters is finding the right people, and having the right sorts of events.

comment by quinn · 2022-06-06T21:35:47.720Z · EA(p) · GW(p)

Thanks for posting. I endorse a subset of these, another subset is quite alien to me. 

I want to zero in on 

I feel guilty about my privilege in the world and I can use EA as a tool to relieve my guilt (and maintain my privilege)

Because I find it odd that you conflated relieving guilt and maintaining privilege into a single point. The idea that installing oneself as an altruist in a cruel system (economic, ecological, or otherwise) is a hedge against losing relative status or power within that system is a claim that needs to be justified.

As an example, surely many of us will have at least glanced at leftist comments to the effect that donating to AMF is a convenient smokescreen, keeping us blissfully ignorant of postcolonial mechanisms which are the true root cause of disvalue for the people AMF is (ostensibly) helping, and that if we were real altruists we would be anti-imperialism activists. These comments, at whatever level of quality we find them, often point at this very claim. 

Those of us who have taken substantial pay cuts for (ostensibly) altruistic purposes may simply be trading cash for intra-community status. This observation can justify arguments that we're not genuine altruists (whatever that is), but it does not on its own point to a bid at maintaining privilege. 

Obviously Joe Ineffective Philanthropy Schmoe, who donates to the opera for tax breaks and PR, can be accused of using the polite fiction of philanthropy to shore up his privilege. If Joe is laundering money for the paperclip mafia by starting an alignment foundation (via some inscrutable mechanism), this accusation only increases. 

But such a line of attack seems orthogonal to actually existing effective altruism. 

Moreover, I may be right about the orthogonality but wrong about the emotional substructure. The emotional substructure may not make 100% sense; it may be a voice that assimilates guilt about privilege into some monologue about how you're falling short of Franciscan altruism, or some self-sacrifice-emphasizing notion of altruism. This, however, is I think a mistake, because having an emotional substructure of guilt may not relate at all to the merits of Franciscan altruism, or to the mechanisms by which philanthropy fails to think systemically, etc. 

My two cents: guilt is a reasonable mechanism to draw one's attention to the stakes and the opportunities of their privilege, but is not "emotionally competitive" with responsibility. You, a member of the species that beat smallpox, are plausibly alive at a hinge of history. Who knows what levers are lying around under your nose. You, in a veil-of-ignorance sense, would prefer people of your privilege to at minimum try. There's a line in an old Jewish book about not being free to abandon it, nor obligated to complete it (where "it" is presumably the brokenness of the world, etc.), which is emotionally very effective for me.

Guilt seems like it wants to emphasize my feelings about the unjust, from a cosmopolitan point of view, situation we find ourselves in. My subjective state, my inner monologue. It seems indifferent to arguments that making myself suffer as much as the people I want to help may not help those people as much as possible. In other words, it is negative. Responsibility is positive, it asks "what actions can you take?" This is at least a reasonable place to start. 

Replies from: Jay Bailey
comment by Jay Bailey · 2022-06-07T02:49:19.755Z · EA(p) · GW(p)

I think the correct steelmanning of dotsam's point is:

1. As a member of <group>, I have a great deal of privilege.
2. In order to remove this privilege, we need sweeping societal changes that upend the current power structures.
3. EA does not focus on upending current power structures in a radical way.
4. EA makes me feel less guilty about my privilege despite this.
5. Therefore, EA allows me to maintain my privilege by relieving my guilt through actions that don't actually require overthrowing current power structures, i.e., the actions that would affect me personally the most.

Under this set of assumptions, most people find ways to maintain their privilege not by actively reinforcing power structures, but by avoiding the moral imperative to overthrow them. EAs are at least slightly more principled, because their price for this is something like "Donate 10% of your income" instead of "Attend a protest", "Sign a petition", or "Decide that you're inherently worthy of what you have and privilege doesn't exist."

Personally, I don't agree with this chain of logic because I disagree with Point 2 above, but I think the chain of logic holds if you agree with points 1 and 2. (And I suppose you also need to add the assumptions that one can tractably work on upending these power structures, and that doing so won't cause more harm than good.)

comment by Locke · 2022-06-06T17:00:42.710Z · EA(p) · GW(p)

What's the problem with enlightened self interest? :) 

comment by JoyOptimizer · 2022-06-04T08:02:36.098Z · EA(p) · GW(p)

This is a list of EA biases to be aware of and account for.