AI safety scholarships look worth-funding (if other funding is sane)

post by anon-a · 2019-11-19T00:59:36.999Z · score: 21 (12 votes) · EA · GW · 6 comments

Funders tend to think that specific subsets of early-career AI safety researchers are worth funding:

The pool of AI safety-oriented PhD students across the world is a stronger cohort in total than any of these particular groups (because it includes them), and not much weaker on average. So on the face of it, if those targets are worth funding, then so too should more-general AI safety research scholarships.

People also tend to think broad swathes of early-career x-risk researchers are worth funding:

If AI safety is about as important as these other areas, and a comparable amount of talent and supervision is available, then AI safety PhD scholarships should be similarly worth supporting.

Indeed, there are as many or more AI safety students entering good programs, and there are supervisors with some interest in safety, such as Marcus Hutter, Roger Grosse, David Duvenaud, and others.

On the face of it, students able to bring funding would be best-equipped to negotiate the best possible supervision from the best possible school with the greatest possible research freedom.

The strongest apparent arguments against are:

This seems like a strong case. Is something being missed?

6 comments

Comments sorted by top scores.

comment by Jan_Kulveit · 2019-11-26T12:24:57.312Z · score: 5 (4 votes) · EA(p) · GW(p)
  • I don't think it's reasonable to think about FHI DPhil scholarships, and even less so RSP, as mainly a funding program (maybe ~15% of the impact comes from the funding).
  • If I understand the funding landscape correctly, both EA funds and LTFF are potentially able to fund a single-digit number of PhDs. Actually, has someone approached these funders with a request like "I want to work on safety with Marcus Hutter, and the only thing preventing me is funding"? Maybe I'm too optimistic, but I would expect such requests to have a decent chance of success.
comment by RyanCarey · 2020-09-06T10:31:07.394Z · score: 4 (3 votes) · EA(p) · GW(p)

Rejoice! OpenPhil is now funding AI safety and other graduate studies here: https://www.openphilanthropy.org/focus/other-areas/early-career-funding-individuals-interested-improving-long-term-future

comment by richard_ngo · 2019-11-20T20:05:27.475Z · score: 3 (5 votes) · EA(p) · GW(p)
Students able to bring funding would be best-equipped to negotiate the best possible supervision from the best possible school with the greatest possible research freedom.

This seems like the key premise, but I'm pretty uncertain about how much freedom this sort of scholarship would actually buy, especially in the US (people who've done PhDs in ML, please comment!). My understanding is that it's rare for good candidates to not get funding; and also that, even with funding, it's usually important to work on something your supervisor is excited about, in order to get more support.

In most of the examples you give (with the possible exceptions of the FHI and GPI scholarships) buying research freedom for PhD students doesn't seem to be the main benefit. In particular:

OpenPhil has its fellowship for AI researchers who happen to be highly prestigious

This might be mostly trying to buy prestige for safety.

and has funded a couple of masters students on a one-off basis.
FHI has its... RSP, which funds early-career EAs with slight supervision.
Paul even made grants to independent researchers for a while.

All of these groups are less likely to have other sources of funding compared with PhD students.

Having said all that, it does seem plausible that giving money to safety PhDs is very valuable, in particular via the mechanism of freeing up more of their time (e.g. if they can then afford shorter commutes, outsourcing of time-consuming tasks, etc).

comment by anon-a · 2019-11-20T23:54:28.694Z · score: 3 (2 votes) · EA(p) · GW(p)
it's usually important to work on something your supervisor is excited about, in order to get more support.

You would fund students who are picking supervisors interested in safety, like Hutter, Steinhardt, whatever.

All of these groups are less likely to have other sources of funding compared with PhD students.

The proposal would be merely to open up 0-3 scholarships per year. So the question here is not which group is less likely to have other sources of funding, but how effective it is to fund the marginal unfunded person. There are many points in favour of funding EA PhD students over masters students, early-career EAs, and independent researchers. They require less supervision. They output material that is more academically respectable (and publishable). They are more likely to stick with AI safety as a career, ...

comment by catherio · 2019-11-26T00:38:25.921Z · score: 1 (6 votes) · EA(p) · GW(p)

Catherine here, I work for Open Phil on the technical AI program area. I’m not going to comment fully on our entire case for the Open Phil AI Fellows program, but I want to just address some things that seem wrong to me here:

“early-career AI safety researchers”

The OpenPhil AI PhD Fellows are mostly not early-career “AI safety” researchers. (see the fellowship description here)

The pool of AI safety-oriented PhD students across the world is a stronger cohort in total than any of these particular groups (because it includes them), and not much weaker on average.

I don’t think this would be true, even if the “it includes them” claim were true. I think you need much more evidence to justify a claim that “a larger set containing X is not much weaker on average than the set X itself”.

there are more students from top schools moving into AI safety than econ, philosophy, and GCBRs

? I think you’re claiming there are more grad-school-bound undergrads-from-top-schools, total, aspiring to be “AI safety researchers” than to be economists? This seems definitely false to me. Am I misunderstanding?

comment by anon-a · 2019-11-26T10:50:04.834Z · score: 4 (3 votes) · EA(p) · GW(p)
I think you need much more evidence to justify a claim that “a larger set containing X is not much weaker on average than the set X itself”.
  • If OpenPhil's fellows are not expected to do research on AI safety then apparently the justification for funding is quite different, so let's put them to one side.
  • The CS DPhil scholars at Oxford seem similar to EA CS PhDs at Toronto, ANU, and other rank 10-30 schools.
  • The RSP students also seem similar, with broader interests but fewer credentials.
  • Paul's grantees seem more aligned, though less qualified and less supervised; there are only three.

Overall, rank 10-30 AI safety PhD students seem comparable to these three latter groups, and clearly not much weaker.

? I think you’re claiming there are more grad-school-bound undergrads-from-top-schools, total, aspiring to be “AI safety researchers” than to be economists? This seems definitely false to me. Am I misunderstanding?

Edited to clarify that this means researchers on longtermist econ issues.

But I am interested to know if this argument is wrong in any other respect!