New article from Oren Etzioni

post by iarwain · 2020-02-25T15:38:38.073Z · score: 23 (14 votes) · EA · GW · 3 comments

This just appeared in this week’s MIT Technology Review: Oren Etzioni, “How to know if AI is about to destroy civilization.” Etzioni is a noted skeptic of AI risk. Here are some things I jotted down:

Etzioni’s key points / arguments:

But he seems to agree with the following:

See also Eliezer Yudkowsky, "There's No Fire Alarm for Artificial General Intelligence."

3 comments


comment by WilliamKiely · 2020-02-26T05:59:35.372Z · score: 6 (4 votes) · EA · GW

It feels like Etzioni is misunderstanding Bostrom in this article, but I'm not sure. His point about Pascal's Wager confuses me:

Some theorists, like Bostrom, argue that we must nonetheless plan for very low-probability but high-consequence events as though they were inevitable.

Is Etzioni saying that Bostrom argues we must prepare for short AI timelines even though developing HLMI on a short timeline is (in Etzioni's view) a very low-probability event?

I don't know whether Bostrom thinks this or not, but isn't Bostrom's main point something different? Namely: even if AI systems powerful enough to cause an existential catastrophe are at least a few decades away (or even a century or longer), we should still think now about what we can do today to prepare for their eventual development, given that there are good reasons to think such systems may cause an existential catastrophe when they are eventually developed and deployed.

Etzioni doesn't seem to address this point, except to imply that he disagrees with it: he says it's unreasonable to worry about AI risk now, and that we will (definitely?) have time to adequately address any existential risk future AI systems may pose even if we wait to start addressing those risks until after the canaries begin collapsing.

comment by WilliamKiely · 2020-02-26T05:18:25.481Z · score: 2 (2 votes) · EA · GW

Etzioni's implicit argument against AI posing a nontrivial existential risk seems to be the following:

(a) The probability of human-level AI being developed on a short timeline (less than a couple decades) is trivial.

(b) Before human-level AI is developed, there will be 'canaries collapsing' warning us that human-level AI is potentially coming soon or at least is no longer a "very low probability" on the timescale of a couple decades.

(c) "If and when a canary “collapses,” we will have ample time before the emergence of human-level AI to design robust “off-switches” and to identify red lines we don’t want AI to cross"

(d) Therefore, AI does not pose a nontrivial existential risk.

It seems to me that if there is a nontrivial probability that he is wrong about (c), then AI does in fact pose a nontrivial existential risk, and it is meaningful to say we should start preparing for it before the canaries he mentions begin collapsing.

comment by WilliamKiely · 2020-02-26T04:58:09.045Z · score: 2 (2 votes) · EA · GW

Etzioni also appears to agree that once canaries start collapsing, it is reasonable to worry about AI threatening the existence of all of humanity:

As Andrew Ng, one of the world’s most prominent AI experts, has said, “Worrying about AI turning evil is a little bit like worrying about overpopulation on Mars.” Until the canaries start dying, he is entirely correct.