EA is becoming increasingly inaccessible, at the worst possible time

post by Ann Garth · 2022-07-22T15:40:14.554Z · EA · GW · 13 comments

Contents

  Summary
  A few notes
  An influx of interest in EA makes accessibility really important right now
    Lots of people are getting introduced to EA who weren’t before, and more people are going to be introduced to EA soon
    Today’s prospective EAs differ systematically from yesterday’s prospective EAs, and are more likely to be “casual EAs”
    I think we should try to recruit casual EAs
  Problem 1: EA is (practically) inaccessible, especially for casual EAs
    For many people, working on EA is either impossible or so difficult they won’t do it
    Today and tomorrow’s prospective EAs are especially likely to find EA inaccessible
  Problem 2: EA is becoming (perceptually) inaccessible, and therefore less diverse, as a focus on longtermism and existential risk takes over
    Right now, EA is (perceptually) trending very hard toward longtermism and existential risk
    Longtermism is a bad “on-ramp” to EA
  To help solve both of these problems, EA should help casual EAs increase their impact in a way that’s an “easier lift” than current EA consensus advice
    Don’t make people learn new skills
    Don’t make people move into new fields
    Emphasize donations more prominently
    Support EA organizations working in this area

Many thanks to Jonah Goldberg for conversations which helped me think through the arguments in this essay. Thanks also to Bruce Tsai [EA · GW], Miranda Zhang [EA · GW], David Manheim [EA · GW], and Joseph Lemien [EA · GW] for their feedback on an earlier draft.

Summary

A few notes

On language: In this post I will use longtermism and existential risk pretty much interchangeably. Logically, of course, they are distinct: longtermism is a philosophical position that leads many people to focus on the cause area(s) of existential risk. However, in practice most longtermists seem to be highly (often exclusively) focused on existential risks. As a result, I believe that for many people — especially people new to EA or not very involved in EA, which is the group I’m focusing on here — these terms are essentially viewed as synonymous.
I will also consider AI risk to be a subset of existential risk. I believe this to be the majority view among EAs, though not everyone thinks it is correct [EA · GW].

On the structure of this post: The two problems I outline below are separate. You may think only one of them is a problem, or that one is much more of a problem than the other. I’m writing about them together because I think they’re related, and because I think there are solutions (outlined at the end of this post) that would help address both of them.

On other work: This post was influenced by many other EA thinkers, and I have tried to link to their work throughout. I should also note that Luke Freeman wrote a post [EA · GW] earlier this year which covers similar ground as this post, though my idea for this post developed independently from his work.

An influx of interest in EA makes accessibility really important right now

Lots of people are getting introduced to EA who weren’t before, and more people are going to be introduced to EA soon

EA is becoming more prominent, as a Google Trends search for “effective altruism” shows pretty clearly.

EA is also making strides into the intellectual mainstream. The New York Times wrote about EA in a 2021 holiday giving guide. Vox’s Future Perfect (an EA-focused vertical in a major news outlet) started in 2018 and is bringing EA to the mainstream. Heck, even Andrew Yang is into EA!

I (and others [EA · GW]) also think there will be a lot more people learning about EA soon, for numerous reasons.

As a result of these factors, I expect that pretty soon EA will be well-known, at least within elite intellectual circles. This will create a step change in EA’s visibility, with an order of magnitude more people knowing about EA than before. This increased visibility creates a large group of “prospective EAs” who might join our movement.

I believe that these prospective EAs’ first impressions of EA will have a big impact on whether or not they actually join/meaningfully engage with EA. This is because highly educated people generally have a number of other interests and potential cause areas in which they could work, including many cause areas which are more in line with most people’s intuitions and the work of others in their social circles (e.g., local political advocacy). Without a strong pull toward EA, and with many pulls in other directions, one or two bad experiences with EA, or even hearing bad things about EA from others, might be enough to turn many of them off from engaging.

Today’s prospective EAs differ systematically from yesterday’s prospective EAs, and are more likely to be “casual EAs”

Back in the day, I think people learned about EA in two big ways: 1) they were highly EA-aligned, looking for people like them, and stumbled across EA, or 2) they were introduced to EA through social networks. Either way, these people were predisposed to EA alignment. After all, if you’re friends with a committed EA, you’re more likely than the average person to be weird/radical/open to new ideas/values-driven/morally expansive/willing to sacrifice a lot.

But now people are learning about EA from the New York Times, which (in my opinion) is a much poorer signal of EA alignment than being friends with an EA! Also, as more people are learning about EA, basic assumptions about population distributions would suggest that today’s paradigmatic prospective EA is closer to the mean on most traits than yesterday’s paradigmatic prospective EA.

To be clear, I think most people have some sense of moral commitment and want to do good in the world. But I also think that most people are less radical and less willing to sacrifice in service of their values than most EAs. I will refer to these people as “casual EAs”: people who are interested in EA and generally aligned with its values, but won’t follow those ideas as far as the median current EA does.

I also think that current and future prospective EAs are likely to be older than previous groups of prospective EAs. Right now, most EA outreach happens through college/university campus groups. As EA awareness-raising shifts towards major news outlets, a wider range of ages will be introduced to EA.

I think we should try to recruit casual EAs

There are reasons [EA · GW] not to try to attract people who don’t seem values-aligned with EA, or who don't seem committed enough to their values to make the sacrifices that EA can often require.[1] But I also think there are a lot of reasons we should recruit these people.

I think there are two problems that make EA increasingly inaccessible, especially for casual EAs. I will explore them in turn.

 

Problem 1: EA is (practically) inaccessible, especially for casual EAs

For many people, working on EA is either impossible or so difficult they won’t do it

There are a lot of reasons for this.

All of these constraints also apply to earning to give, at least as it’s commonly understood.[2] 80,000 Hours lists “tech startup founder” and “quantitative trading” as the best earning-to-give paths, with “software engineering,” “startup early employee,” “data science,” and “management consulting” on its second-tier list. These career paths are all quite difficult! 80,000 Hours does list other careers which may be easier to enter, but the paradigmatic stereotype of an earning-to-give career is inaccessible to many people.

If direct work and earning to give don’t feel possible, but the community portrays those things as the only ways to make an impact/consider oneself a true EA, then a lot of smart, talented, values-aligned people will feel locked out of EA. Indeed, there’s [EA · GW] evidence [EA · GW] that this is already happening to some extent.

Today and tomorrow’s prospective EAs are especially likely to find EA inaccessible

The upcoming cohort of prospective EAs, described above, is more likely to be older than previous cohorts and thus further along in their careers. This means they'll have more preexisting family/financial commitments and more career capital in their existing fields, both of which make it costlier for them to transition to direct EA work or high-earning careers.

Additionally, casual EAs’ commitment to EA likely isn’t enough to outweigh the risk of upending their lives to follow an EA path, again reducing the likelihood that they transition to EA jobs.
 

Problem 2: EA is becoming (perceptually) inaccessible, and therefore less diverse, as a focus on longtermism and existential risk takes over

I think this is close to common knowledge, but in the interest of citing my sources, below is a list of some facts which, taken together, illustrate the increasing prominence of longtermism and x-risks within EA.

Actual EA funding is not primarily distributed toward longtermism [EA · GW]. But someone new to EA or not deeply involved in EA won’t know that. For those people, a glance at the forum or recent EA books, or a vibe check of the momentum within EA, will give the strong signal that the movement is focused on existential risks and longtermism (especially AI).

Longtermism is a bad “on-ramp” to EA

I support longtermism, and I think it’s good that more people are paying attention to existential risks. But I think it’s bad for longtermism to become the (perceptually) dominant face of EA, especially for those who are new to the movement, because I think it’s poorly suited to bring people in. This is because – to put it bluntly – longtermism and x-risks and AI are weird!

Of course, some EAs may wonder why this matters. If longtermism is right, it might be that we only want longtermists to join us. I think there are a few reasons why EA as a whole, including longtermists, should want our movement to be accessible.

First, not all EAs are longtermists. Neartermists obviously believe they are doing great work, and want people to continue to engage with their work rather than being scared off by a perception of EA as only longtermist.[5]

But I think that even longtermists should want EA to be accessible. My experience, and the experience of [EA · GW] many [EA · GW] others [EA · GW], is that we joined EA because it seemed like the best way to do good in the neartermist cause areas we were already interested in,[6] learned about longtermism through our involvement with EA, and eventually shifted our focus to include longtermism. Essentially, neartermist causes served as an on-ramp to EA (and to longtermism). Getting rid of that on-ramp seems like a bad idea, and especially so right now, when we anticipate a large surge of prospective and casual EAs who we’d like to recruit.

Additionally, if the dominance of longtermism and existential risk continues, eventually those areas may totally take over EA — not just perceptually, but also in terms of work being done, ideas being generated, etc. – or the movement may formally or informally split into neartermists and longtermists. I think either of these possibilities would be very bad.

 

To help solve both of these problems, EA should help casual EAs increase their impact in a way that’s an “easier lift” than current EA consensus advice

A note: I have much less confidence in my arguments in this section (especially “Don’t make people move into new fields”) than I do about my arguments in the earlier sections of this post.

Don’t make people learn new skills

One way to make EA more accessible is to emphasize the ways to do EA work (and even switch into EA fields) without having to learn new skills. EA organizations are growing and maturing, and as this happens, the need for people with all sorts of “traditional” skills (operations, IT, recruiting, UI/UX, legal, etc.) will only grow. EA can do a better job of showing mid-career professionals that it’s possible to contribute to EA using the skills they already have (though this can be tricky to communicate, as a recent post [EA · GW] noted).

Don’t make people move into new fields

Obviously, we don’t want to communicate that someone can do EA work from any field – some fields are extremely low-impact and others are actively harmful. At the same time, there’s a spectrum of how broadly EA focuses, and I suggest we consider widening the aperture a bit by helping casual EAs increase their positive impact in the fields they’re already in (or in their spare time [EA · GW]).

To help them do this, I’m imagining a set of workshops and/or guides on topics like the ITN framework, how to use that framework to estimate the impact of a job or organization, how to compare probabilities, modes of thought that are useful when looking for new opportunities to be high-impact, ideas on how to convince others in an organization to shift to a higher-impact project or approach, and examples or case studies of how this could look in different fields.
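
As a rough illustration of the kind of estimation such a guide might walk through, below is a minimal sketch of an ITN-style calculation in Python. The factorization loosely follows the 80,000 Hours framing (importance × tractability × neglectedness), and every function name, number, and scenario in it is a hypothetical placeholder for illustration, not a real estimate.

```python
# Illustrative sketch only: a back-of-the-envelope ITN calculation with made-up
# numbers, loosely following the 80,000 Hours-style factorization.

def itn_score(importance, tractability, neglectedness):
    """Rough marginal impact per extra unit of resources.

    importance:    good done per % of the problem solved
    tractability:  % of the problem solved per % increase in resources
    neglectedness: % increase in resources per extra person (or dollar)

    The units cancel, so the product is good done per extra person (or dollar).
    """
    return importance * tractability * neglectedness

# Hypothetical comparison of two options a "casual EA" might weigh
# (all numbers are placeholders, chosen only to show the mechanics):
improve_current_role = itn_score(importance=100, tractability=0.05, neglectedness=0.2)
switch_fields = itn_score(importance=1000, tractability=0.01, neglectedness=0.5)

print(f"Improving current role: {improve_current_role:.1f} units of good per extra person")
print(f"Switching fields:       {switch_fields:.1f} units of good per extra person")
```

The point of a sketch like this is not the specific numbers but the habit it builds: making the comparison explicit, so a casual EA can see which inputs actually drive their decision.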

I see a few concerns with this idea:

As a result, I’m not at all confident that this idea would be net-positive or worth the cost, and I’m suggesting it primarily to spark discussion. If I’m right that there’s a very large group of people who are interested in increasing their impact but unwilling to make huge lifestyle changes, helping them shift towards even-slightly-more impactful work would have a large overall impact, so I think this merits a conversation.

Emphasize donations more prominently

As I mentioned above, I think the popular conception of earning to give focuses on high-earning careers. To make EA more accessible, the movement can also be clearer that giving 10% (or more) of one’s income, regardless of what that income is, does “count” as a great contribution to EA and is enough to make someone a member of the community.

Support EA organizations working in this area

Two new EA groups (High Impact Professionals [EA · GW] and EA Pathfinder) are working to help mid-career professionals increase their impact, in the context of the accessibility barriers that I laid out in this post. Presumably their ideas are much better than mine! Therefore, instead of or in addition to taking up any of the solutions I suggest above, another possibility is to simply put your time, money, and/or effort into helping these groups.

  1. ^

    This post [EA · GW] about the importance of signaling frugality in a time of immense resources is tangentially related, and very thoughtful

  2. ^

    Some people see earning to give as simply giving a large portion of what you earn, regardless of career path, and this is how it is technically defined by 80,000 Hours. However, my anecdata suggests that most people see earning to give as only applicable in high-income careers. The 80,000 Hours page about earning to give focuses almost entirely on high-earning careers in a way that reinforces this impression.

  3. ^

    Everything listed as “1y” or sooner on the forum as of July 15, 2022

  4. ^

    I benchmarked to the 215th post in each tag because that’s what the forum gives you when you click “Load More” at the bottom of the list of posts

  5. ^

    I recognize that this won’t convince an extremely dedicated longtermist, but convincing extremely dedicated longtermists to care about neartermism is outside the scope of this post (and not something I necessarily want to do)

  6. ^

    Primarily global health and development, though I think climate change could fall into this category if EA chose to approach it that way [EA · GW]

13 comments


comment by Benjamin_Todd · 2022-07-23T06:13:10.364Z · EA(p) · GW(p)

This is not directly responding to your central point about reducing accessibility, but one comment is I think it could be unhelpful to set up the tension as longtermism vs. neartermism.

[Longtermist causes are] more confusing and harder to understand than neartermist causes. AI seems like ridiculous science fiction to most people. 

I think this is true of AI (though even it has become way more widely accepted among our target audience), but it's untrue of pandemic prevention, climate change, nuclear war, great power conflict and improving decision-making (all the other ones).

Climate change is the most popular cause among young people, so I'd say it's actually a more intuitive starting point than global health. 

Likewise, some people find neartermist causes like factory farming (and especially wild animal suffering) very unintuitive. (And it's not obvious that neartermists shouldn't also work on AI safety...)

I think it would be clearer to talk about highlighting intuitive vs. unintuitive causes in intro materials rather than neartermism vs. longtermism.

I agree there's probably been a decline in accessibility due to a greater focus on AI (rather than longtermism itself, which could be presented in terms of intuitive causes).

A related issue is existential risk vs. longtermism. The idea that we want to prevent massive disasters is pretty intuitive to people, and The Precipice had a very positive reception in the press. Whereas I agree a more philosophical longtermist approach is more of a leap.

 

My second comment is I'd be keen to see more grappling with some of the reasons in favour of highlighting weirder causes more and earlier.

For instance, I agree it's really important for EA to attract people who are very open minded and curious, to keep EA alive as a question. And one way to do that is to broadcast ideas that aren't widely accepted. 

I also think it's really important for EA to be intellectually honest, and so if many (most?) of the leaders think AI alignment is the top issue, we should be upfront about that.

Similarly, if we think that some causes have ~100x the impact of others, there seem like big costs to not making that very obvious (to instead focus on how you can do more good within your existing cause).

I agree the 'slow onramp' strategy could easily turn out better, but it seems like there are strong arguments on both sides, and it would be useful to see more of an attempt to weigh them, ideally with some rough numbers.

Replies from: bruce
comment by bruce · 2022-07-23T23:59:57.065Z · EA(p) · GW(p)

I agree it's really important for EA to attract people who are very open minded and curious, to keep EA alive as a question. And one way to do that is to broadcast ideas that aren't widely accepted. 


To an outsider who might be suspicious that EA or EA-adjacent spaces seem cult-y [LW(p) · GW(p)], or to an insider who might think EAs [EA · GW]  are [EA(p) · GW(p)] deferring [EA · GW] too [EA · GW] much, how would EA as a movement do the above and successfully navigate between:

1) an outcome where the goal of maintaining/improving epistemic quality for the EA movement, and keeping EA-as-question alive is attained, and
2) an outcome where EA ends up self-selecting for those who are most likely to defer and embrace "ideas that aren't widely accepted", and doesn't achieve the above goal?

The assumption here is that being perceived as a cult or being a high-deferral community would be a [? · GW] bad [EA · GW] outcome [EA(p) · GW(p)], though I guess not everyone would necessarily agree with this.

(Caveat: very recently went down the Leverage rabbit hole, so this is on the front of my mind and might be more sensitive to this than usual.)

 

if many (most?) of the leaders think AI alignment is the top issue, we should be upfront about that.

Agreed, though RE: "AI alignment is the top issue" I think it's important to distinguish between whether they think:

  1. AI misalignment is the most likely cause of human extinction/global suffering (+/- within [X timeframe]).
  2. Donating to AI alignment is the most cost-effective place to donate for all worldviews.
  3. Donating to AI alignment is the most cost-effective place to donate for [narrower range of worldviews].
  4. Contributing to direct AI alignment work is the best career decision for all people.
  5. Contributing to direct AI alignment work is the best career decision for [narrower range of people].
  6. Prioritising AI alignment is the best way to maximise impact for EA as a movement (on the margin? at scale?).

Do you have a sense of where the consensus falls for those you consider EA leaders?
 


(Commenting in personal capacity etc)

Replies from: Sophia
comment by Sophia · 2022-07-24T01:00:35.167Z · EA(p) · GW(p)

This was such a great articulation of a core tension in effective altruism community building.

A key part of this tension comes from the fact that most ideas, even good ideas, will sound like bad ideas the first time they are aired. Ideas from extremely intelligent people and ideas that have potential to be iterated into something much stronger do not come into existence fully-formed. 

Leaving more room for curious and open-minded people to put forward their butterfly ideas without being shamed or made to feel unintelligent means having room for bad ideas with poor justification. Not leaving room for unintelligent-sounding ideas with poor justification selects for people who are most willing to defer. Having room to delve into tangents off the beaten track of what has already been fleshed out means accepting that some of that side-tracking will be a dead end (and most tangents will be), and no one wants to stick their neck out and explore an idea that almost certainly goes nowhere (but should be explored anyway, just in case).

Leaving room for ideas that don't yet sound intelligent is hard to do while still keeping the conversation nuanced (but I think not doing it is even worse). 

Replies from: Sophia
comment by Sophia · 2022-07-24T01:00:51.380Z · EA(p) · GW(p)


Also, I think conversations by the original authors of a lot of the more fleshed-out ideas are much more nuanced than the messages that get spread.  

E.g. on 4: 80k has a long list of potential highest-priority cause areas worth exploring for longtermists, and Holden, in his 80k podcast episode and the forum post he wrote, says that most people probably shouldn't go directly into AI (and instead should build aptitudes).

Nuanced ideas are harder to spread, but also, when people feel they don't have permission in community spaces (in local groups or on the forum) to say under-developed things, it becomes much less likely that the off-the-beaten-track ideas which have been mentioned but not fleshed out will come up in conversation (or get developed further).

comment by RedStateBlueState · 2022-07-22T23:36:08.867Z · EA(p) · GW(p)

If you donate 10% of your income to an EA organization, you are an EA. No matter how much you make. No exceptions.

This should be (and I think is?) our current message.

Replies from: Ryan Beck, Ann Garth
comment by Ryan Beck · 2022-07-23T13:27:32.258Z · EA(p) · GW(p)

I always interpreted the 10% as a goal, not a requirement for EA. That's a pretty high portion for a lot of people. I worry that making that sound like a cutoff makes EA seem even more inaccessible.

The way I had interpreted the community message was more like “an EA is someone who thinks about where their giving would be most effective or spends time working on the world's most pressing problems.”

comment by Ann Garth · 2022-07-23T01:32:00.140Z · EA(p) · GW(p)

I agree that it should be! Just not sure it is, at least not for everyone

comment by Tyner · 2022-07-22T20:32:38.760Z · EA(p) · GW(p)

Hi Ann,

Some quibbles with your book list.  Animal Liberation came out in 1975, not 2001.

https://www.goodreads.com/book/show/29380.Animal_Liberation

You overlooked Scout Mindset, which came out in 2021.

https://www.goodreads.com/book/show/42041926-the-scout-mindset

Also,

>Essentially, neartermist causes served as an on-ramp to EA (and to longtermism). Getting rid of that on-ramp seems like a bad idea.

Do you worry at all about a bait-and-switch experience that new people might have?

Replies from: evelynciara, Ann Garth
comment by BrownHairedEevee (evelynciara) · 2022-07-22T21:51:11.693Z · EA(p) · GW(p)

Do you worry at all about a bait-and-switch experience that new people might have?

I think we could mitigate this by promoting global health & wellbeing and longtermism as equal pillars of EA, depending on the audience.

comment by Ann Garth · 2022-07-23T01:34:22.206Z · EA(p) · GW(p)

Do you worry at all about a bait-and-switch experience that new people might have?

I would hope that people wouldn't feel this way. I think neartermism is a great on-ramp to EA, but I don't think it has to be an on-ramp to longtermism. That is, if someone joins EA out of an interest in neartermism, learns about longtermism but isn't persuaded, and continues to work on EA-aligned neartermist stuff, I think that would be a great outcome.

 

And thank you for the fact-checking on the books!

comment by BrownHairedEevee (evelynciara) · 2022-07-22T21:50:19.400Z · EA(p) · GW(p)

Yeah, I recently experienced the problem with longtermism and similarly weird beliefs in EA being bad on-ramps to EA. I moderate a Discord server where we just had some drama involving a heated debate between users sympathetic to and users critical of EA. Someone pointed out to me that many users on the server have probably gotten turned off from EA as a whole because of exposure to relatively weird beliefs within EA like longtermism and wild animal welfare, both of which I'm sympathetic to and have expressed on the server. Although I want to be open with others about my beliefs, it seems to me like they've been "plunged in on the deep end," rather than being allowed to get their feet wet with the likes of GiveWell.

Also, when talking to coworkers about EA, I focus on the global health and wellbeing side because it's more data-driven and less weird than longtermism, and I try to refer to EA concepts like cost-effectiveness rather than EA itself.

comment by skluug · 2022-07-22T18:31:10.318Z · EA(p) · GW(p)

I think this is a great post.

One reason I think it would be cool to see EA become more politically active is that political organizing is a great example of a low-commitment way for lots of people to enact change together. It kind of feels ridiculous that if there is an unsolved problem with the world, the only way I can personally contribute is to completely change careers to work on solving it full time, while most people are still barely aware it exists. 

I think the mechanism of "try to build broad consensus that a problem needs to get solved, then delegate collective resources towards solving it" is underrated in EA at current margins. It probably wasn't underrated before EA had billionaire-level funding, but as EA comes to have about as much money as you can get from small numbers of private actors, and it starts to enter the mainstream, I think it's worth taking the prospect of mass mobilization more seriously. 

This doesn't even necessarily have to look like getting a policy agenda enacted. I think of climate change as a problem that is being addressed by mass mobilization, but in the US, this mass mobilization has mostly not come in the form of government policy (at least not national policy). It's come from widespread understanding that it's a problem that needs to get solved, and is worth devoting resources to, leading to lots of investment in green technology.

comment by TerracottaKleinBottle · 2022-07-23T00:59:22.580Z · EA(p) · GW(p)

Possible thing to consider re: the public face of the movement -- it seems like movements often benefit from having one "normal", respectable group and one "extreme" group to make the normal group look more reasonable; cf. civil rights movements, veganism (PETA vs. inoffensive vegan celebrities).