Changes in funding in the AI safety field

post by Sebastian_Farquhar · 2017-02-03T13:09:58.217Z · EA · GW · Legacy · 11 comments

Contents

  Narrative of growth in AI Safety funding
  Estimated spending in AI Safety broken down by field of work
  Distribution of spending
  Possible implications and tentative suggestions
  Caveats and assumptions
  Footnotes
11 comments

This article is cross-posted from the CEA blog.

The field of AI Safety has been growing quickly in the three years since the publication of “Superintelligence”. What the community chooses to invest in is shaped partly by its impression of the field's current composition and how that composition has changed. Here, I give an overview of the composition of the field as measured by its funding.

Measures other than funding also matter, and may matter more, like types of outputs, distribution of employed/active people, or impact-adjusted distributions of either. Funding, however, is a little more objective and easier to assess. It gives us some sense of how the AI Safety community is prioritising, and where it might have blind spots. For a fuller discussion of the shortcomings of this type of analysis, and of this data, see section four.

Throughout, I include the budgets of organisations that are explicitly working to reduce existential risk from machine superintelligence. I do not include work outside the AI Safety community, on areas like verification and control, that might prove relevant. This kind of work, which happens in mainstream computer science research, is much harder to assess for relevance and to get budget data for. I try as much as possible to count money spent at the time of the work, rather than at the time a grant is announced or money is set aside.

Thanks to Niel Bowerman, Ryan Carey, Owen Cotton-Barratt, Andrew Critch, Daniel Dewey, Viktoriya Krakovna, Peter McIntyre, and Michael Page for their comments or help on content or in gathering data for this document (though nothing here should be taken as a statement of their views, and any errors are mine). Further thanks to the Future of Life Institute for funding the research project that enabled this work.

The post is organised as follows:

  1. Narrative of growth in AI Safety funding
  2. Estimated spending broken down by field of work, and distribution of spending
  3. Possible implications and tentative suggestions
  4. Caveats and assumptions

Narrative of growth in AI Safety funding

The AI Safety community has grown significantly over the last three years. In 2014, AI Safety work was almost entirely done at the Future of Humanity Institute (FHI) and the Machine Intelligence Research Institute (MIRI), which between them spent about $1.75m. In 2016, more than 50 organisations had explicit AI Safety-related programs, spending perhaps $6.6m in total. Note the caveats to all numbers in this document, described in section 4.

In 2015, AI Safety spending roughly doubled to $3.3m. Most of this came from growth at MIRI and the beginnings of involvement by industry researchers.

In 2016, grants from the Future of Life Institute (FLI) triggered growth in smaller-scale technical AI safety work.[1] Industry invested more over 2016, especially at Google DeepMind and potentially at OpenAI.[2] Because of their high salary costs, the monetary growth in spending at these firms may overstate the actual growth of the field. For example, several key researchers moved from non-profit or academic organisations (MIRI, FLI, FHI) to Google DeepMind and OpenAI. This increased spending significantly, but may have had a smaller effect on output.[3] AI Strategy budgets grew more slowly, at about 20%.

In 2017, multiple center grants are emerging (such as the Center for Human-Compatible AI (CHCAI) and the Center for the Future of Intelligence (CFI)), but if their hiring is slow it will restrain overall spending. FLI grantee projects will be coming to a close over the year, which may mean that technical hires trained through those projects become available to join larger centers. The next round of FLI grants may be out in time to bridge existing grant holders onto new projects. Industry teams may keep growing, but there are no public commitments to do so. If technical research consolidates into a handful of major teams, it might become easier to maintain an open dialogue between research groups, but individual researchers' incentives to do so might fall because they already have enough collaboration opportunities locally.

Although little can be said about 2018 at this point, the current round of academic grants, which supports FLI grantees as well as FHI, ends in 2018, potentially creating a funding cliff. (Though FLI has just announced a second funding round, and the MIT Media Lab has just announced a $27m center whose exact plans remain unspecified.)[4]

Estimated spending in AI Safety broken down by field of work

 

All figures are estimated spending in $m; "2017F" is a forecast for 2017.

Category                            2014    2015    2016    2017F
Technical AI Safety - Academic      0.00    0.00    1.22    1.48
Technical AI Safety - Industry      0.00    0.70    1.60    2.10
Technical AI Safety - Non-profit    0.95    1.65    1.84    2.08
Technical AI Safety - Total         0.95    2.35    4.66    5.66
AI Strategy                         0.80    0.93    1.50    2.14
AI Ethics                           0.00    0.00    0.19    0.16
AI Outreach                         0.00    0.00    0.17    1.10
Rationality                         0.00    0.00    0.04    tbd
Grand Total                         1.75    3.28    6.56    9.09
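As a quick sanity check on the growth figures quoted above, here is a minimal sketch in Python using values transcribed from the table; the Rationality row is omitted because its 2017 value is still to be determined, so the computed totals differ slightly from the Grand Total row.

```python
# Minimal sketch: year-on-year growth in total estimated AI Safety spending,
# using figures transcribed from the table above (all values in $m).
# The Rationality line is omitted (its 2017 value is "tbd"), so totals differ
# slightly from the Grand Total row.
spending = {
    "Technical AI Safety": {2014: 0.95, 2015: 2.35, 2016: 4.66, 2017: 5.66},
    "AI Strategy":         {2014: 0.80, 2015: 0.93, 2016: 1.50, 2017: 2.14},
    "AI Ethics":           {2014: 0.00, 2015: 0.00, 2016: 0.19, 2017: 0.16},
    "AI Outreach":         {2014: 0.00, 2015: 0.00, 2016: 0.17, 2017: 1.10},
}

years = [2014, 2015, 2016, 2017]
totals = {y: sum(by_year[y] for by_year in spending.values()) for y in years}

for prev, curr in zip(years, years[1:]):
    growth = (totals[curr] - totals[prev]) / totals[prev]
    print(f"{prev} -> {curr}: ${totals[curr]:.2f}m total ({growth:+.0%})")
```

Running this shows totals roughly doubling in both 2015 and 2016, consistent with the narrative in section 1.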

Distribution of spending

In 2014, the field of research was not very diverse. It was roughly evenly split between work at FHI on macrostrategy, with limited technical work, and work at MIRI, which followed a relatively focused technical research agenda that placed little emphasis on deep learning.

Since then, the field has diversified significantly.

The academic technical research field is very diverse, though most of the funding comes via FLI. MIRI remains the only non-profit doing technical research and continues to be the largest research group, with 7 research fellows at the end of 2016 and a budget of $1.75m. Google DeepMind probably has the second largest technical safety research group, with between 3 and 4 full-time-equivalent (FTE) researchers at the end of 2016 (most of whom joined at the end of the year), while OpenAI and Google Brain probably have 0.5-1.5 FTEs.[5]

FHI and SAIRC together remain the only large-scale AI strategy center. The Global Catastrophic Risk Institute is the main long-standing strategy center working on AI, but is much smaller. Some much smaller groups (FLI grantees and the Global Politics of AI team at Yale) are starting to form, but are mostly low- or no-salary for the time being.

A range of functions that did not previously exist in the AI Safety community are now being filled. These include outreach, ethics research, and rationality training (although this last has been available to the AI Safety community through CFAR for some time). Although explicitly outreach-focused projects remain small, organisations like FHI and MIRI do significant outreach work (arguably, Nick Bostrom's Superintelligence falls into this category, for example).

[Charts: distribution of spending by category for each year. Totals: 2014 = $1.75m; 2015 = $3.28m; 2016 = $6.56m; 2017 (forecast) = $10.5m.]

Possible implications and tentative suggestions

These are my tentative views having gotten an overview of spending and some short conversations on the topic. I could imagine updating significantly with relatively small amounts of new information.

Technical safety research

Strategy, outreach, and policy

Caveats and assumptions

[Updated 07/02/2017 after comments from Owen Cotton-Barratt]



Footnotes

  1. Although grants were awarded in 2015, there is a lag between grants being awarded and work taking place. This is a significant assumption discussed in the caveats.

  2. Although note that most of the new hires at DeepMind arrived right at the end of the year.

  3. Although it is also conceivable that a researcher at DeepMind may be ten times more valuable than that same researcher elsewhere.

  4. This will depend on personal circumstance as well as giving opportunities. It would probably be a mistake to forgo time-bounded giving opportunities to cover this cliff, since other sources of funding might be found between now and then.

  5. This is based on anecdotal hiring information, and not a confirmed number from Google DeepMind.

11 comments

Comments sorted by top scores.

comment by vipulnaik · 2017-02-04T01:45:48.279Z · EA(p) · GW(p)

I appreciate posts like this -- they are very helpful (and would be more so if I were thinking of donating money or contributing in kind to the topic).

comment by Richard_Batty · 2017-02-03T16:17:41.089Z · EA(p) · GW(p)

Is there an equivalent to 'concrete problems in AI' for strategic research? If I was a researcher interested in strategy I'd have three questions: 'What even is AI strategy research?', 'What sort of skills are relevant?', 'What are some specific problems that I could work on?' A 'concrete problems'-like paper would help with all three.

comment by Sebastian_Farquhar · 2017-02-04T10:59:06.739Z · EA(p) · GW(p)

This is a really good point, and I'm not sure that something exists which was written with that in mind. Daniel Dewey wrote something which was maybe a first step on a short form of this in 2015. A 'concrete-problems' in strategy might be a really useful output from SAIRC.

http://globalprioritiesproject.org/2015/10/three-areas-of-research-on-the-superintelligence-control-problem/

comment by Ben Pace · 2017-02-03T23:08:49.438Z · EA(p) · GW(p)

I feel like "Superintelligence" is the closest thing to this, which was largely on strategy rather than maths. While it didn't end each chapter with explicit questions for further research, it'd be my first recommendation for a strategy researcher to read and gain a sense of what work could be done.

I'd also recommend Eliezer Yudkowsky's paper on Intelligence Explosion Microeconomics, which is more niche but way less read.

comment by gsastry · 2017-02-07T02:54:20.194Z · EA(p) · GW(p)

Luke Muehlhauser posted a list of strategic questions here: http://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/ (originally posted in 2014).

comment by Daniel_Eth · 2017-02-04T09:19:13.121Z · EA(p) · GW(p)

Thanks for this, I found it useful. In addition to funding, I think things like the Partnership on AI (https://www.partnershiponai.org), which includes Facebook, Google, and Apple, show that industry is taking this more seriously.

comment by Larks · 2017-02-04T17:38:45.490Z · EA(p) · GW(p)

Thanks for taking the time to produce this!

comment by kbog · 2017-02-03T16:19:57.164Z · EA(p) · GW(p)

Which organizations exactly were included? Can you give a list of the 50?

