What is the top concept that all EAs should understand?

post by Nathan Young (nathan) · 2022-07-05T11:56:22.627Z · 15 comments

This is a question post.

If you had the power to ensure all EAs understood one concept back to front, what would it be?

Only one concept per answer.

Please do not read this and feel obliged to understand all of these. I don't want to create obligation.

I am asking because I might create a test to see how well EAs actually know these concepts, then see if I can get content made for the concepts people don't know. This is meant to make things easier, not harder.

Answers

answer by EdMathieu · 2022-07-05T12:02:13.425Z

Expected value seems very important. It underlies a lot of other important concepts, is relevant to both neartermism and longtermism, and comes up extremely frequently in EA discussions and arguments.
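
To make this concrete, here is a minimal sketch with entirely made-up numbers; the interventions and payoffs are hypothetical. The point is just that an option which usually fails can still have the higher expected value.

```python
# Expected value: sum of probability * value over mutually exclusive outcomes.

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs whose probabilities sum to 1."""
    return sum(p * v for p, v in outcomes)

# Hypothetical: a grant that helps 100 people for certain...
safe = [(1.0, 100)]
# ...versus one with a 10% chance of helping 5,000 people and a 90% chance of nothing.
risky = [(0.1, 5000), (0.9, 0)]

print(expected_value(safe))   # 100.0
print(expected_value(risky))  # 500.0 -- higher EV despite usually failing
```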

comment by quinn · 2022-07-05T15:01:33.333Z

I've come, through the joking-to-serious pipeline, to telling people that EAs are just people who are really excited about multiplication, and who think multiplication is epistemically and morally sound.

comment by levin · 2022-07-05T15:16:46.273Z

I think this is right, and its prevalence is maybe the single most important difference between EA and the rest of the world.

answer by DonyChristie · 2022-07-05T12:28:24.499Z

Cause-Neutrality

I've been worried that the basic mental motions of being able to evenhandedly consider switching between different causes, in a single session of thought or conversation, will be marginalized as people settle more into established hierarchies around certain causes.

(I will probably fill out my answer more in the future; others are welcome to comment and add to it.)

answer by Konstantin Pilz · 2022-07-05T14:17:52.994Z

Scope insensitivity

answer by JoshYou · 2022-07-06T01:22:23.247Z

The margin/marginal value.  

Anyone trying to think about how to do the most good will very quickly become deeply confused if they aren't thinking at the margin, e.g. "if everyone buys bednets, what happens to the economy?" The marginal question is not what happens if everyone acts at once, but what one additional contribution changes given what everyone else is already doing.
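
As a rough illustration of marginal versus average thinking, here is a toy Python model; the logarithmic returns curve and the dollar figures are assumptions chosen purely for illustration, not estimates about any real intervention.

```python
import math

# A toy model of diminishing returns: total impact grows with spending,
# but each additional dollar buys less. The log curve and the numbers
# below are assumptions for illustration only.

def total_impact(dollars):
    return 1000 * math.log(1 + dollars)

def marginal_impact(dollars, step=1.0):
    """Impact of the *next* dollar at the current funding level."""
    return total_impact(dollars + step) - total_impact(dollars)

# Average cost-effectiveness over the first $10k looks great...
print(total_impact(10_000) / 10_000)  # ~0.92 impact units per dollar
# ...but "where should my next dollar go?" is a question about the margin:
print(marginal_impact(10_000))        # ~0.10 -- far below the average
print(marginal_impact(100))           # ~9.85 -- a less-funded option does more
```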

answer by Tessa · 2022-07-06T20:43:54.725Z

Distribution of cost-effectiveness feels like one of the most important concepts from the EA community. The idea that, for a given goal, some ways of achieving it will be massively more cost-effective than others underlies a lot of cause comparisons, and the value of doing such comparisons at all.
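
A small simulated sketch of why this matters, assuming (as is common in such discussions, though it is only a modelling assumption) that cost-effectiveness is roughly lognormally distributed; the numbers are simulated, not real cost-effectiveness data.

```python
import random

random.seed(0)

# Hypothetical cost-effectiveness of 1,000 interventions, drawn from a
# heavy-tailed (lognormal) distribution -- an assumption for illustration.
samples = sorted(random.lognormvariate(0, 2) for _ in range(1000))

median = samples[len(samples) // 2]
best = samples[-1]
print(f"median: {median:.2f}, best: {best:.1f}, ratio: {best / median:.0f}x")
# With a heavy tail, the best option is typically orders of magnitude more
# cost-effective than the median one -- which is why comparing pays off.
```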

answer by david_reinstein · 2022-07-05T17:35:23.318Z

Not following the rules here ... but HERE is the 'official' list

answer by Geoffrey Miller · 2022-07-05T15:20:02.364Z

Virtue signaling -- as a game-theoretic concept, a moral-psychological instinct, and a ritualized cultural manifestation of our desire to do good (and to appear good).

Effective Altruism is basically the systematic, rational, evidence-based attempt to overcome our human instincts for virtue-signaling, and to harness our desire to do good in more effective directions. So it's important to know what we're fighting against.

answer by Nathan Young · 2022-07-05T12:15:18.647Z

AI timelines

I think if every EA knew the current best set of AI forecasts and P(AI doom), that would be really useful.

comment by Ada-Maaria Hyvärinen · 2022-07-05T13:27:55.608Z

Can you recommend a place where I could find this information, or would that spoil your test? I have looked in various places, but I still have no idea what the current best set of AI forecasts or P(AI doom) would be.

answer by levin · 2022-07-05T13:58:27.747Z

This is more for people who are already EAs than for, say, intro syllabus content, but: the HUGE differences in effectiveness between different actions. Not just abstractly knowing this but deeply internalizing it, and seeing how it applies not just to different charities and orgs but to your own potential actions.

answer by Tiago Santos · 2022-07-05T12:28:48.791Z

Economizing on virtue. There is a good explanation here: https://link.springer.com/article/10.1007/BF01298375

15 comments

comment by Yonatan Cale (hibukki) · 2022-07-05T12:08:28.938Z

I'd like to push back against [what I'm guessing is] the intended purpose of this question:

Do you want to make a list of things that all EAs should read?

If so, note that [I believe] there is currently a failure mode in the community where people "get stuck" reading more and more and more before they do something "concrete" (which is not "reading more").

I'm not saying nobody should read lots of things. Some should.

I AM saying that this is probably feeding into an already existing problem, and that adding another "list of things to know [with a subtext that it's 'so basic that every EA should know it']" has the potential to have negative value for the community.

comment by Nathan Young (nathan) · 2022-07-05T12:14:07.351Z

I agree. What I want is for people to understand certain concepts.

That's not about telling people what to read; it's about them having certain models in their heads.