Unsurprising things about the EA movement that surprised me

post by Ada-Maaria Hyvärinen · 2022-03-30T17:08:56.082Z · EA · GW · 22 comments


I learned a lot about EA from my local group before reading that much by myself. So when I finally started to actively read EA material, I was quite confused about EA as a global social movement and who the people behind all those texts were. Here is a list that would have helped me put EA texts in the correct context; maybe it will be helpful for someone else.

There is a social EA movement

Size of EA

EA material

EA cause areas

EA “brand” and EA language

EA and other movements

Also, there are still a lot of things I don’t know about EA as a social movement – for example, I’m quite out of the loop of what leading EA figure founded what organization, or what is the relation between EA and animal advocacy movements. (I know they are related, but I don’t know the details.)

Is there anything that surprised you about EA or the EA movement?


Comments sorted by top scores.

comment by Vaidehi Agarwalla (vaidehi_agarwalla) · 2022-03-30T21:57:18.785Z · EA(p) · GW(p)

Really enjoyed this post, and will likely be sharing this with newer EAs in the future! A lot of this is what I'd call "landscape knowledge" which only becomes clear once you start interacting with the community itself. Thanks for writing it!

comment by AllAmericanBreakfast · 2022-03-31T01:26:17.861Z · EA(p) · GW(p)

When you first get to EA, it feels like there is an EA text about everything.

EA also stands for Endless Articles!

Replies from: tamgent
comment by tamgent · 2022-04-02T13:35:45.994Z · EA(p) · GW(p)

Really? I thought it stood for Easy Answers

Replies from: evelynciara
comment by BrownHairedEevee (evelynciara) · 2022-04-06T15:33:53.445Z · EA(p) · GW(p)

I thought it was Endless Arguments

comment by Luke Freeman (lukefreeman) · 2022-03-30T23:52:03.829Z · EA(p) · GW(p)

Thanks for writing this post! This is a great resource, especially for newcomers 😀 

comment by Adam Binks · 2022-03-30T23:48:29.808Z · EA(p) · GW(p)

Great post, thank you! This is useful as a guide to what to try to add into intro fellowships, in particular:

There are a lot of real professional people in EA, and those people are influencing things in the real world – EA is by no means just a philosophy discussion club, even if your local EA club is one (and it does not have to be one forever!)

I think this is a really important realisation to have as someone doing an intro fellowship/getting into EA. My guess is that realising this makes it a lot easier to think seriously about making career choices based on ideas/methods from EA. 

So, how can we help new people realise this sooner?

A quick brainstorm:

  • Include some readings/podcasts in intro fellowships where people talk in the first person about their EA-aligned work
  • Encourage new members to attend EAG(x)
  • Have talks/Q&As with people currently doing EA-aligned work
  • Include a few bios of individuals and their stories of getting into this kind of work
  • Chat to new members about what previous members of your group have gone on to do (if your group is mature enough)

I think it'd be ideal if people understood, from when they first learn about EA, that it is not just a philosophy discussion group but something they could shape their career around.

Replies from: Ada-Maaria Hyvärinen, Tom Gardiner
comment by Ada-Maaria Hyvärinen · 2022-04-01T05:46:50.552Z · EA(p) · GW(p)

With EA career stories I think it is important to keep in mind that new members might not read them the same way as more engaged EAs, who already know which organizations are considered cool and effective within EA. When I started attending local EA meetups I met a person who worked at OpenPhil (maybe as a contractor? I can't remember the details), but I did not find it particularly impressive because I did not know what Open Philanthropy was and assumed the "phil" stood for "philosophy". 

comment by Tom Gardiner · 2022-03-31T19:25:11.920Z · EA(p) · GW(p)

I was going to suggest the last point, but you're way ahead of me! In the next couple of years, the first batch of St Andrews EAs will have fully entered the world of work/advanced study, and keeping some record of what the alumni are doing would be meaningful. 
[As highlighted in the thread post, we are two EAs who know each other outside the forum.]

comment by Devin Kalish · 2022-03-30T18:04:13.739Z · EA(p) · GW(p)

Re: "In particular, there is no secret EA database of estimates of effectiveness of every possible action (sadly). When you tell people effective altruism is about finding effective, research-based ways of doing good, it is a natural reaction to ask: “so, what are some good ways of reducing pollution in the Baltic Sea / getting more girls into competitive programming / helping people affected by [current crisis that is on the news]” or “so, what does EA think of the effectiveness of [my favorite charity]”. Here, the honest answer is often “nobody in EA knows”"

Yeees, this is such a common first reaction I have found in people first being introduced to Effective Altruism. I always really want to give some beginning of an answer but feel self-conscious that I can't even give an honest best guess from what I know without sort of disgracing the usual standards of rigor of the movement, and misrepresenting its usual scope.

Replies from: Paige_Henchen
comment by Sunny1 (Paige_Henchen) · 2022-04-06T14:41:57.150Z · EA(p) · GW(p)

Well, no one has the "real" answers to any of these questions, even the most EA of all EAs. The important thing is to be asking good questions in the first place. I think it's both most truthful and most interpersonally effective to say something like "gee, I've never thought about that before. But here's a question I would ask to get started. What do you think?"

comment by OldSchoolEA · 2022-03-30T23:56:03.170Z · EA(p) · GW(p)

As EA grew from humble, small, and highly specific beginnings (like, but not limited to, high-impact philanthropy), it became increasingly big tent.

In becoming big tent, it has become tolerant of ideas or notions that previously would have been heavily censured or criticized in EA meetings.

This is in large part because early EA was more data-driven, with less focus on hypotheticals, speculation, and non-quantifiable metrics. That's not to say current EA lacks these things – they are just relatively less stressed compared to 5-10 years ago.

In practice, this means today's EA is more willing to consider altruistic impact that can't easily be measured or quantified accurately, especially with (some) longtermist interests. I find this to be a rather damning weakness, although one could make the case that it is also a strength.

This also extends to outreach.

For example, I wouldn't be surprised if an EA gave a dollar to, or volunteered for, a seeing-eye dog organization [or any other ineffective organization] under the justification that this is "community building" and that, like the Borg, we will someday be able to assimilate them and make them more effective, or recruit new people into EA.

To me and other old-guard EAs, it's wishful thinking, because it makes EA less pure to its epistemic roots, especially over time as non-EA ideas enter the fold and influence the group. One example is how DEI initiatives are wholeheartedly welcomed by EA organizations, when in fact there is little evidence that the DEI/progressive way of hiring personnel and staff results in better performance outcomes than hiring that doesn't give a candidate an edge based on their ethnicity, gender, race, or sexual orientation.

But even more so with cause prioritization. In the past, I felt it was very difficult to have your championed or preferred cause considered even remotely effective. In fact, the null hypothesis was that your cause wasn't effective... and that most causes weren't.

Now it's more like any and all causes are assumed effective, or potentially effective, from the get-go, and are then supported by some marginal amount of evidence. A less elitist and stringent approach, but an inevitable one once you become big tent. Some people feel this made EA a friendlier place. Let's just say that today you'd be less likely to be kicked out of an EA meeting for being naively optimistic without a care for figures/numbers, and more likely to be kicked out for being overtly critical (or even mean) – even though that criticalness was the strict attitude of early EA meetings, an attitude that turned a lot of people off from EA (including myself, when I first heard about it; I later came around to appreciate that sort of attitude and its relative rarity in the world – strict and robust epistemics are underappreciated).

For example: if you think studying jellyfish is the most effective way to spend your life and career, draft an argument or forum post explaining the potentially beneficial future consequences of your research and bam, you are now an EA. In the past the reception would have been: why study jellyfish when you could use your talents to accomplish X or Y, something greater, and follow a proven career path that is less risky and more profitable (intellectually and fiscally) than jellyfish study?

Unlike Disney with its iron-claw grip over its brands and properties, it's much easier nowadays to call oneself an EA, or identify as part of the EA sphere… because, well, anything and everything can be EA. The EA brand, once tightly controlled and small, has now grown, and it can be difficult to tell the fake Gucci bags from the real deal when both are sold at the same market.

My greatest fear is that EA will over time become just A, without the E, and lose its initial ruthlessly data- and results-driven form of moral concern.

Replies from: Linch, jackmalde, Stephen Clare, NunoSempere
comment by Linch · 2022-03-31T14:17:43.188Z · EA(p) · GW(p)

I don't think core EA is more "big tent" now than it used to be. Relatively more intellectual effort is devoted to longtermism now than to global health and development, which represents more a shift in focus than a widening of focus.

What you might be seeing is an influx of money across the board, which results at least partially in decreasing the bar of funding for more speculative interventions.

Also, many people now believe that the ROI of movement building is incredibly high, which I think was less true even a few years ago. So net positive but not very exciting movement building interventions -- both things that look more like traditional "community building" and things that look like "support specific promising young EAs" -- are much more likely to be funded than before.  In the "support specific promising young EAs" case, this might be true even if they say dumb things or are currently pursuing lower-impact cause areas, as long as the CB case for it is sufficiently strong (above some multiplier for funding, and reasonably probable to be net positive).

Replies from: Linch
comment by Linch · 2022-04-24T14:57:43.387Z · EA(p) · GW(p)

I think I no longer endorse this comment. Not sure but it does seem like there's a much broader set of things that people research, fund, and work on (e.g. I don't think there was that much active work on biosecurity 5 years ago).

comment by Jack Malde (jackmalde) · 2022-03-31T18:09:43.169Z · EA(p) · GW(p)

I actually don't relate to much of what you're saying here. 

For example: if you think studying jellyfish is the most effective way to spend your life and career, draft an argument or forum post explaining the potentially beneficial future consequences of your research and bam, you are now an EA. In the past the reception would have been: why study jellyfish when you could use your talents to accomplish X or Y, something greater, and follow a proven career path that is less risky and more profitable (intellectually and fiscally) than jellyfish study?

I know the jellyfish is a fictional example. Can you give a real example of this happening? I'm not sure what you mean by "bam, you are now an EA". What is the metric for this?

I wrote a post about two years ago arguing that the promotion of philosophy education in schools could be a credible longtermist intervention. I think reception was fairly lukewarm, and it is clear that my suggestion has not been adopted as a longtermist priority by the community. Just because there were one or two positive comments and OKish karma doesn't mean anything – no one has acted on it. It seems to me that it's a similar story for most new cause suggestions.

comment by Stephen Clare · 2022-03-31T13:02:22.803Z · EA(p) · GW(p)

Now it’s more like any and all causes are assumed effective or potentially effective off the get-go and then are supported by some marginal amount of evidence.

This doesn't seem true to me, but I'm not an "old guard EA".  I'd be curious to know what examples of this you have in mind.

comment by NunoSempere · 2022-03-31T12:08:34.850Z · EA(p) · GW(p)

Strongly upvoted, but this should be its own top-level post.

comment by Venkatesh · 2022-03-31T06:54:38.544Z · EA(p) · GW(p)

For me, the big revelation was that EA was not just about causes that are supported by RCTs/empirical evidence. It has this whole element of hits-based giving. In fact, the first time I realized this, I ended up creating a question on the forum about the misleading definition.

comment by PabloAMC · 2022-03-30T18:37:38.065Z · EA(p) · GW(p)

EA aims to be cause neutral, but there is actually quite a lot of consensus in the EA movement about what causes are particularly effective right now.

Actually, notice that the consensus might be based more on internal culture, because founder effects are still quite strong. That being said, I think the community puts effort into remaining cause neutral, and that's good.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2022-03-30T21:28:44.538Z · EA(p) · GW(p)

I don't think a consensus on which cause is most effective is incompatible with cause-neutrality as it's usually conceived (which I called cause-impartiality).

Replies from: PabloAMC
comment by PabloAMC · 2022-03-30T22:23:24.633Z · EA(p) · GW(p)

Makes sense, and I agree

comment by Jeremy (captainjc) · 2022-04-07T01:37:23.240Z · EA(p) · GW(p)

This is great. I remember a similar post from maybe a year or two ago, but I am unable to find it. Something along the lines of "things that it took me way too long to figure out about EA". Anyone else remember this?

comment by Guy Raveh · 2022-03-31T03:23:37.849Z · EA(p) · GW(p)

Regarding the first 4 points about EA as a movement - when I first read about EA on Wikipedia, I basically thought "oh, cool!", and then went on with my life because I couldn't imagine there was an actual movement rather than just a philosophical school. Only a few months later a local group showed up and I got to know about it.

Regarding the confusing jargon - I disagree, you won't get used to it, and people should stop writing like that.

I was really surprised by the focus on AI safety when I first looked at 80000hours. Then over time I became convinced it actually was important.