Does 80,000 Hours focus too much on AI risk?

post by EarlyVelcro · 2019-11-02T20:14:16.598Z · score: 57 (50 votes) · EA · GW · 17 comments

(Cross-post from /r/EffectiveAltruism, with minor revisions.)

On the home page of 80,000 Hours, they present a key advice article outlining their primary recommendations for EA careers. According to them, this article represents the culmination of years of research and debate, and is one of the most detailed, advanced intros to EA yet.

However, while the article does go into some background ideas about the foundations of EA, one recommendation now stands above all else: a narrow focus on recruiting people into AI safety.

To be sure, the article mentions other careers. For example, it brings up mitigating climate change and nuclear war as potential alternatives, before quickly dismissing them because they aren’t neglected. It also briefly alludes to the other two "classic" EA cause areas, global poverty and animal welfare, only to reject them one sentence later for not focusing on the long term. This ignores the fact that value spreading and ripple effects can affect the distant future. Quote (emphasis mine):

Some other issues we’ve focused on in the past include ending factory farming and improving health in poor countries. These areas seem especially promising if you don’t think people can or should focus on the long-term effects of their actions.

In the end, the article recommends only AI risk and biorisk as plausible EA cause areas. But even for biorisk it says,

We rate biorisk as a less pressing issue than AI safety, mainly because we think biorisks are less likely to be existential, and AI seems more likely to play a key role in shaping the long-term future in other ways.

This is a stark contrast to the effective altruism of the past, and to the community as a whole, which focuses on a diverse range of cause areas. Now, according to 80,000 Hours, EA should focus on AI alone.

This confuses me. EA is supposed to be about evidence and practicality. Personally, I’m pretty skeptical of some of the claims that AI safety researchers have made for the priority of their work. To be clear, I do think it’s a respectable career, but is it really what we should recommend to everyone? Consider that:

  • It’s not clear that advanced artificial intelligence is going to arrive any time within the next several decades.
  • Most researchers seem to be moving away from a fast takeoff view of AI safety, and are now opting for a softer takeoff view.
  • Top AI safety researchers are now saying that they expect AI to be safe by default, without further intervention from EA. See here and here.
  • One of the top AI safety organizations, MIRI, has now gone private so now we can’t even inspect whether they are doing useful work.
  • Productive AI safety research work is inaccessible to over 99.9% of the population, making this advice almost useless to nearly everyone reading the article.

AI safety as a field should still exist, and we should still give it funding. But is it responsible for top EA organizations to make it the single cause area that trumps all others?

17 comments


comment by Benjamin_Todd · 2019-11-03T13:44:26.557Z · score: 60 (26 votes) · EA · GW

Hi EarlyVelcro,

I’m happy to see more debate of how much we should prioritise AI safety. We intend to debate some of these issues on the podcast, and have already started recording with Ben Garfinkel [EA · GW].

However, I think you’re misrepresenting how much the key ideas series recommends working on AI safety. We feature a range of other problem areas prominently, and I don’t think many readers will come away thinking that our position is that “EA should focus on AI alone”.

We list 9 priority career paths, of which only 2 are directly related to AI safety, recommend a variety of other options, and say that there are many good options we don’t list.

Elsewhere on the page, we also discuss the importance of personal fit and coordination, which can make it better for an individual to enter different problem areas from those we most highlight.

The most relevant section is short, so I’d encourage readers of this thread to read the section and make up their own mind.

comment by rohinmshah · 2019-11-03T17:14:45.567Z · score: 43 (15 votes) · EA · GW
Top AI safety researchers are now saying that they expect AI to be safe by default, without further intervention from EA. See here and here.

Two points:

  • "Probably safe by default" doesn't mean "we shouldn't work on it". My estimate of 90% that you quote still leaves a 10% chance of catastrophe, which is worth reducing. (Though the 10% is very non-robust.) It also is my opinion before updating on other people's views.
  • Those posts were published because AI Impacts was looking to have conversations with people who had safe-by-default views, so there's a strong selection bias. If you looked for people with doom-by-default views, you could find them.
comment by HowieL · 2019-11-03T23:52:38.609Z · score: 42 (14 votes) · EA · GW

Hi EarlyVelcro,

Howie from 80k here.

As Ben said in his comment, the key ideas page, which is the most current summary of 80k’s views, doesn't recommend that “EA should focus on AI alone”. We don't think the EA community's focus should be anything close to that narrow.

That said, I do see how the page might give the impression that AI dominates 80k’s recommendations since most of the other paths/problems talked about are ‘meta’ or ‘capacity building’ paths. The page mentions that “we’d be excited for people to explore [our list of problems we haven’t yet investigated] as well as other areas that could foreseeably have a positive effect on the long-term future” but it doesn’t say anything about what those problems are (other than a link to our problem profiles page, which has a list).

I think it makes sense that people end up focusing on the areas we mention directly and the page could do a better job of communicating that our priorities are more diverse.

The good news is that we’re currently putting together a more thorough list of areas that we think might be very promising but aren't among our priority paths/problems.[1] Unfortunately, it didn’t quite get done in time to add it to this version of key ideas.

More generally, I think 80k’s content was particularly heavy on AI over the last year and, while it will likely remain our top priority, I expect it will make up a smaller portion of our content over the next few years.

[1] Many of these will be areas we haven't yet investigated or areas that are too niche to highlight among our priority paths.

comment by EarlyVelcro · 2019-11-04T06:57:27.144Z · score: 12 (6 votes) · EA · GW

Thank you for the thoughtful response, Howie. :)

That said, I do see how the page might give the impression that AI dominates 80k’s recommendations since most of the other paths/problems talked about are ‘meta’ or ‘capacity building’ paths.

Indeed. When Todd replied earlier that only 2 of the 9 paths were directly related to AI safety, I have to say it felt slightly disingenuous to me, even though I'm sure he did not mean it that way. Many of the other paths could be interpreted as "indirectly help AI safety." (Other than that, I appreciated Todd's comment.)

The good news is that we’re currently putting together a more thorough list of areas that we think might be very promising but aren't among our priority paths/problems.[1] Unfortunately, it didn’t quite get done in time to add it to this version of key ideas.

I'm looking forward to this list of other potentially promising areas. Should be quite interesting.

comment by Khorton · 2019-11-03T12:47:55.418Z · score: 34 (15 votes) · EA · GW

OP's suggestion that 80k diversify the causes and careers they recommend is reasonable; I'm sure 80k can comment.

Another suggestion: Individual EAs should not defer their career decisions to 80k. People should learn from 80k's excellent advice, but ultimately they need to use their own values and understanding of their own life to make good decisions.

comment by Raemon · 2019-11-03T23:14:43.150Z · score: 10 (5 votes) · EA · GW

Tying in a bit with Healthy Competition [EA · GW]:

I think it makes sense (given my understanding of the folk at 80k's views) for them to focus the way they are. I expect research to go best when it follows the interests and assumptions of the researchers.

But it seems quite reasonable, if people want advice for different background assumptions, to... just start doing that research and publishing. I think career advice is a domain that can definitely benefit from having multiple people or orgs involved; it just needs someone to actually step up and do it.

comment by Khorton · 2019-11-03T23:05:34.558Z · score: 8 (4 votes) · EA · GW

A friend pointed out that it would probably be good for EA community health if 80k catered to people with a wider variety of values.

comment by ofer · 2019-11-03T05:46:10.280Z · score: 22 (7 votes) · EA · GW

There seems to be a large variance in researchers' estimates about timelines and takeoff speed. Pointing to specific writeups that lean one way or another can't give much insight about the distribution of estimates. Also, I think that at least some researchers are less likely to discuss their estimates publicly if they're leaning towards shorter timelines and a discontinuous takeoff, which subjects the public discourse on the topic to a selection bias.

So I'm skeptical about the claim that "Most researchers seem to be moving away from a fast takeoff view of AI safety, and are now opting for a softer takeoff view".

Top AI safety researchers are now saying that they expect AI to be safe by default, without further intervention from EA. See here and here.

Again, there seems to be a large variance in researchers' views about this. Pointing to specific writeups can't give much insight about the distribution of views.

comment by Matthew_Barnett · 2019-11-03T05:59:57.728Z · score: 10 (7 votes) · EA · GW
Also, I think that at least some researchers are less likely to discuss their estimates publicly if they're leaning towards shorter timelines and a discontinuous takeoff

Could you explain more about why you think people who hold those views are more likely to be silent?

comment by ofer · 2019-11-03T10:15:09.319Z · score: 15 (7 votes) · EA · GW

Thanks for asking.

One factor that seems important is that even a small probability of "very short timelines and a sharp discontinuity" is probably a terrifying prospect for most people. Presumably, people tend to avoid saying terrifying things. Saying terrifying things can be costly, both socially and reputationally (and there's also the possible side effect of, well, making people terrified).

I hope to write a more thorough answer to this soon (I'll update this comment accordingly by 2019-11-20).

comment by Matthew_Barnett · 2019-11-03T19:33:54.146Z · score: 7 (5 votes) · EA · GW
Presumably, people tend to avoid saying terrifying things.

I'm a bit skeptical of this statement, although I admit it could be true for some people. If anything I tend to think that people have a bias for exaggerating risk rather than the opposite, although I don't have anything concrete to say either way.

comment by MichaelStJules · 2019-11-03T16:46:23.134Z · score: 3 (5 votes) · EA · GW
Saying terrifying things can be costly, both socially and reputationally (and there's also the possible side effect of, well, making people terrified).

Is this the case in the AI safety community? If the reasoning for their views isn't obviously bad, I would guess that it's "cool" to say unpopular or scary but not unacceptable things, because the rationality community has been built in part on this.

comment by ofer · 2019-11-03T19:58:39.927Z · score: 2 (2 votes) · EA · GW

Is this the case in the AI safety community?

I have no idea to what extent the above factor is influential amongst the AI safety community (i.e. the set of all AI safety (aspiring) researchers?).

If the reasoning for their views isn't obviously bad, I would guess that it's "cool" to say unpopular or scary but not unacceptable things, because the rationality community has been built in part on this.

(As an aside, I'm not sure what the definition/boundary of the "rationality community" is, but obviously not all AI safety researchers are part of it.)

comment by MichaelStJules · 2019-11-03T06:28:51.008Z · score: 1 (1 votes) · EA · GW

Good points.

Also, I think that at least some researchers are less likely to discuss their estimates publicly if they're leaning towards shorter timelines and a discontinuous takeoff, which subjects the public discourse on the topic to a selection bias.

Why do you think this?


EDIT: Ah, Matthew got to it first.

comment by MichaelStJules · 2019-11-03T06:23:59.157Z · score: 6 (3 votes) · EA · GW

I think another large part of the focus comes from their views on population ethics. For example, in the article, you can "save" people by ensuring they're born in the first place:

Let’s explore some hypothetical numbers to illustrate the general concept. If there’s a 5% chance that civilisation lasts for ten million years, then in expectation, there are 5000 future generations. If thousands of people making a concerted effort could, with a 55% probability, reduce the risk of premature extinction by 1 percentage point, then these efforts would in expectation save 28 future generations. If each generation contains ten billion people, that would be 280 billion lives saved. If there’s a chance civilisation lasts longer than ten million years, or that there are more than ten billion people in each future generation, then the argument is strengthened even further.

(bold mine)
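The arithmetic behind those figures is straightforward; here is a minimal sketch for anyone who wants to check it. The 100-year generation length is an assumption inferred from the quoted numbers, not something stated in the excerpt, and the variable names are mine:

```python
# Minimal sketch of the expected-value arithmetic quoted above.
# Assumption: a "generation" is 100 years, which is what makes
# 5% of ten million years come out to 5,000 expected generations.

p_survival = 0.05              # chance civilisation lasts ten million years
years = 10_000_000
years_per_generation = 100     # assumed generation length

expected_generations = p_survival * years / years_per_generation
# = 5,000 future generations in expectation

p_success = 0.55               # probability the concerted effort succeeds
risk_reduction = 0.01          # extinction risk reduced by 1 percentage point

generations_saved = expected_generations * p_success * risk_reduction
# = 27.5, which the article rounds to 28

people_per_generation = 10_000_000_000
lives_saved = round(generations_saved) * people_per_generation
# = 280,000,000,000, i.e. 280 billion lives saved in expectation

print(expected_generations, generations_saved, lives_saved)
```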

I discuss this further in my section "Implications for EA priorities" in this post of mine [EA · GW]. I recommend trying this tool of theirs.

comment by Michael_Wiebe · 2019-11-03T05:15:24.907Z · score: 5 (3 votes) · EA · GW

Note that 80k sometimes takes a softer tone, eg here:

An individual can only focus on one or two areas at a time, but a large group of people working together should most likely spread out over several.
When this happens, there are additional factors to consider when choosing a problem area. Instead of aiming to identify the single most pressing issue at the margin, the aim is to work out:
1. The ideal allocation of people over issues, and which direction that allocation should move in.
2. Where your comparative advantage lies compared to others in the group.
We call this the ‘portfolio approach’.
comment by casebash · 2019-11-03T08:34:07.988Z · score: 2 (1 votes) · EA · GW

"It’s not clear that advanced artificial intelligence is going to arrive any time within the next several decades" - On the other hand, it's seems, at least to me, most likely that it will. Even if several more breakthroughs would be required to reach general intelligence, those may still come relatively fast as deep learning has now finally become useful enough in a wide enough array of applications that there is far more money and talent in the field than there ever was before by orders of magnitude. Now this by itself wouldn't necessarily guarantee fast advancement in a field, but AI research is still the kind of area where a single individual can push the research forward significantly just by themselves. And governments are beginning to realise the strategic importance of AI, so even more resources are flooding the field.

"One of the top AI safety organizations, MIRI, has now gone private so now we can’t even inspect whether they are doing useful work." - this is not an unreasonable choice and we have their past record to go on. Nonetheless, there are more open options if this is important to you.

"Productive AI safety research work is inaccessible to over 99.9% of the population, making this advice almost useless to nearly everyone reading the article." - Not necessarily. Even if becoming good enough to be a researcher is very hard, it probably isn't nearly as hard to become good enough at a particular area to help mentor other people.