Brief summary of key disagreements in AI Risk

post by iarwain · 2019-12-26T19:40:28.354Z · score: 24 (13 votes) · EA · GW

This is a question post.

Contents

  Answers
    6 steve2152
    5 Davidmanheim

Does the following seem like a reasonable brief summary of the key disagreements regarding AI risk?

Among those experts (AI researchers, economists, careful knowledgeable thinkers in general) who appear to be familiar with the arguments:

Answers

answer by steve2152 · 2019-12-27T20:40:28.765Z · score: 6 (5 votes) · EA(p) · GW(p)

To add on to what you already have, there's also a flavor of "urgency / pessimism despite slow takeoff" that comes from pessimistic answers to the following 2 questions:

  • How early do the development paths between "safe AGI" and "default AGI" diverge?

On one extreme, they might not diverge at all: we build "default AGI", and fix problems as we find them, and we wind up with "safe AGI". On the opposite extreme, they may diverge very early (or already!), with entirely different R&D paths requiring dozens of non-overlapping insights and programming tools and practices.

I personally put a lot of weight on "already", on the theory that there are right now dozens of quite different lines of ongoing ML / AI research that seem to lead towards quite different AGI destinations, and it seems implausible to me that they will all wind up at the same destination (or fail), or that the destinations will all be more-or-less equally good / safe / beneficial.

  • If we know how to build an AGI in a way that is knowably and unfixably dangerous, can we coordinate on not doing so?

One extreme would be "yes we can coordinate, even if there's already code for such an AGI published on GitHub that runs on commodity hardware". The other extreme would be "No, we can't coordinate; the best we can hope for is delaying the inevitable, hopefully long enough to develop a safe AGI along a different path."

Again, I personally put a lot of weight on the pessimistic view; see my discussion here [LW · GW]. But others seem to be more optimistic that this kind of coordination problem might be solvable, e.g. Rohin Shah here [LW(p) · GW(p)].

answer by Davidmanheim · 2019-12-26T20:10:20.659Z · score: 5 (4 votes) · EA(p) · GW(p)

"* Will something less than superhuman AI pose similar extreme risks? If yes: How much less, how far in advance will we see it coming, when will it come, how easy is it to solve?"

I don't think there is any disagreement that such things exist. I think the key disagreement is whether there will be sufficient warning, and how easy it will be to solve / prevent.

Not to speak on their behalf, but my understanding of MIRI's view on this issue is that such problems are likely to arise, but that they aren't as fundamentally hard as ASI alignment; and while there should be people working on the pre-ASI risks, we need to invest all the time we can in solving the really hard parts of the eventual risk from ASI.

comment by Ramiro · 2019-12-27T18:25:38.158Z · score: 3 (3 votes) · EA(p) · GW(p)

Maybe we should add: does working on pre-ASI risks improve our prospects of solving ASI alignment, or does it worsen them? (I think that's the core of the reconciliation between near-term and long-term concerns about AI... but up to what point?)
