2022 AI expert survey results

post by Zach Stein-Perlman (zsp) · 2022-08-04T15:54:09.651Z · 7 comments

This is a link post for https://aiimpacts.org/what-do-ml-researchers-think-about-ai-in-2022/

Katja Grace, Aug 4 2022

AI Impacts just finished collecting data from a new survey of ML researchers, designed to be as similar to the 2016 survey as practical, apart from a couple of new questions that seemed too interesting not to add.

That page reports on the results preliminarily, and we'll be adding more details there. But so far, some things that might interest you:

[Figure: Individual inferred gamma distributions]

The survey had a lot of questions (randomized between participants to make it a reasonable length for any given person), so this blog post doesn’t cover much of it. A bit more is on the page and more will be added.

Thanks to many people for help and support with this project! (Many but probably not all listed on the survey page.)

Cover image: Probably a bootstrap confidence interval around an aggregate of the above forest of inferred gamma distributions, but honestly everyone who can be sure about that sort of thing went to bed a while ago. So, one for a future update. I have more confidently held views on whether one should let uncertainty be the enemy of putting things up.
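The aggregation the cover-image caption gestures at can be sketched roughly as follows. This is an illustrative toy, not AI Impacts' actual pipeline: the gamma parameters are made up, and the choice to aggregate via the pooled median and a percentile bootstrap over respondents is an assumption of mine.

```python
import random
import statistics

random.seed(0)

# Hypothetical per-respondent gamma fits (shape, scale) over years-to-HLMI.
# In the real survey these would be inferred from each respondent's
# probability-by-year answers; these parameters are invented.
fits = [(random.uniform(1.5, 4.0), random.uniform(8.0, 20.0)) for _ in range(100)]

def aggregate_median(sample_fits, draws=50):
    """Pool draws from every respondent's gamma and take the pooled median."""
    pooled = [random.gammavariate(shape, scale)
              for shape, scale in sample_fits
              for _ in range(draws)]
    return statistics.median(pooled)

# Bootstrap: resample respondents with replacement, recompute the aggregate,
# and read a ~95% interval off the percentiles of the bootstrap replicates.
boot = sorted(aggregate_median(random.choices(fits, k=len(fits)))
              for _ in range(100))
lo, hi = boot[2], boot[97]
print(f"aggregate median: {aggregate_median(fits):.1f} years")
print(f"~95% bootstrap CI: [{lo:.1f}, {hi:.1f}] years")
```

The point of bootstrapping over respondents (rather than over pooled draws) is that the dominant uncertainty is who happened to answer, not sampling noise within any one fitted distribution.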


Comments sorted by top scores.

comment by gwern · 2022-08-05T21:34:40.206Z

Note: most of the discussion of this post is currently on LessWrong.

comment by MarkusAnderljung · 2022-08-04T22:05:01.085Z

Really excited to see this! 

I noticed the survey featured the MIRI logo fairly prominently. Is there a way to tell whether that caused some self-selection bias? 

In the post, you say "Zhang et al ran a followup survey in 2019 (published in 2022); however, they reworded or altered many questions, including the definitions of HLMI, so much of their data is not directly comparable to that of the 2016 or 2022 surveys, especially in light of large potential for framing effects observed." Just to make sure you haven't missed this: we had the 2016 respondents who also responded to the 2019 survey receive the exact same questions they were asked in 2016, including re HLMI and milestones. (I was part of the Zhang et al team.)

comment by Zach Stein-Perlman (zsp) · 2022-08-04T22:16:46.245Z
  1. I don't think we have data on selection bias (and I can't think of a good way to measure this).
  2. Yes, the 2019 survey's matched-panel data is certainly comparable, but some other responses may not be comparable (in contrast to our 2022 survey, where we asked the old questions to a mostly-new set of humans).
comment by MarkusAnderljung · 2022-08-05T00:11:28.832Z

One thing you can do is collect some demographic variables on non-respondents and see whether there is self-selection bias on those. You could then check whether the variables that show self-selection correlate with certain answers. Baobao Zhang and Noemi Dreksler did some of this work for the 2019 survey (found in D1/page 32 here: https://arxiv.org/pdf/2206.04132.pdf ).
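A minimal sketch of that first step, with made-up numbers: the counts below, the choice of "industry affiliation" as the demographic variable, and the use of a two-proportion z-test are all illustrative assumptions, not figures or methods from the 2019 survey.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test: do respondents and non-respondents differ
    on a binary demographic variable (e.g. industry affiliation)?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 120 of 300 respondents vs. 180 of 600 non-respondents
# are industry-affiliated. |z| > 1.96 would suggest self-selection on this
# variable at the 5% level.
z = two_proportion_z(120, 300, 180, 600)
print(f"z = {z:.2f}")  # prints "z = 3.00"
```

Repeating this per demographic variable flags which ones show self-selection; those flagged variables are the ones worth correlating with substantive answers in the second step.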

comment by Zach Stein-Perlman (zsp) · 2022-08-05T00:14:59.691Z

Ah, yes, sorry I was unclear; I claim there's no good way to determine bias from the MIRI logo in particular (or the Oxford logo, or various word choices in the survey email, etc.).

comment by MarkusAnderljung · 2022-08-05T19:10:02.163Z

Sounds right! 

comment by Munn (Saul) · 2022-08-04T21:34:29.247Z

Support for AI safety research is up: 69% of respondents believe society should prioritize AI safety research “more” or “much more” than it is currently prioritized, up from 49% in 2016.

Heightened support for AI safety research among AI researchers themselves seems like a requisite step for providing more resources to AI safety researchers. I'm encouraged that AI researchers are so much more favorable toward AI safety research now than in 2016: (a) because it means AI safety research is more likely to be as important as the EA community claims it is, and (b) because more pressure from academia is necessary (perhaps not sufficient, but necessary) to increase public support for AI safety research.

TL;DR: if AI researchers believe AI safety research is important, then it probably is. Also, for AI safety research to be better supported by the public, it's probably necessary for AI researchers to want it to have more support.

- Munn