Critique of Superintelligence Part 2

post by Fods12 · 2018-12-13T05:12:50.159Z · score: 7 (9 votes) · EA · GW · 12 comments


  Premise 1: Superintelligence is coming soon
  Premise 2: Arguments against a fast takeoff

This is part 2 of a 5-part sequence:

Part 1: summary of Bostrom's argument

Part 2: arguments against a fast takeoff

Part 3: cosmic expansion and AI motivation

Part 4: tractability of AI alignment

Part 5: expected value arguments

Premise 1: Superintelligence is coming soon

I have very little to say about this premise, since I am in broad agreement with Bostrom that even if it takes decades or a century, super-human artificial intelligence is quite likely to be developed. I find Bostrom's appeals to surveys of AI researchers regarding how long it is likely to be until human level AI is developed fairly unpersuasive, given both the poor track record of such predictions and also the fact that experts on AI research are not necessarily experts on extrapolating the rate of technological and scientific progress (even in their own field). Bostrom, however, does note some of these limitations, and I do not think his argument is particularly dependent upon these sorts of appeals. I therefore will pass over premise 1 and move on to what I consider to be the more important issues.

Premise 2: Arguments against a fast takeoff

Bostrom’s major argument in favour of the contention that a superintelligence would be able to gain a decisive strategic advantage is that the ‘takeoff’ for such an intelligence would likely be very rapid. By a ‘fast takeoff’, Bostrom means that the time between when the superintelligence first approaches human-level cognition and when it achieves dramatically superhuman intelligence would be small, on the order of days or even hours. This is critical because if takeoff is as rapid as this, there will be effectively no time for any existing technologies or institutions to impede the growth of the superintelligence or check it in any meaningful way. Its rate of development would be so rapid that it would readily be able to out-think and out-maneuver all possible obstacles, and rapidly obtain a decisive strategic advantage. Once in this position, the superintelligence would possess an overwhelming advantage in technology and resources, and would therefore be effectively impossible to displace.

The main problem with all of Bostrom’s arguments for the plausibility of a fast takeoff is that they are fundamentally circular, in that the scenario or consideration they propose is only plausible or relevant under the assumption that the takeoff (or some key aspect of it) is fast. The arguments he presents are as follows:

Additional positive arguments against the plausibility of a fast takeoff include the following:


Comments sorted by top scores.

comment by MagnusVinding · 2019-06-20T13:20:06.848Z · score: 7 (4 votes)

Thanks for writing this. :-)

Just a friendly note: even as someone who largely agrees with you, I must say that I think a term like "absurd" is generally worth avoiding in relation to positions one disagrees with (I also say this as someone who is guilty of having used this term in similar contexts before).

I think it is better to use less emotionally-laden terms, such as "highly unlikely" or "against everything we have observed so far", not least since "absurd" hardly adds anything of substance beyond what these alternatives can capture.

To people who disagree strongly with one's position, "absurd" will probably not be received well, or at any rate not optimally. It may also lead others to label one as overconfident and incapable of thinking clearly about low-probability events. And those of us who try to express skepticism of the kind you do here already face enough of a headwind from people who shake their heads while thinking to themselves "they clearly just don't get it".

Other than that, I'm keen to ask: are you familiar with my book Reflections on Intelligence? It makes many of the same points that you make here. The same is true of many of the (other) resources found here:

comment by Denkenberger · 2018-12-15T08:23:25.214Z · score: 7 (3 votes)

Regarding intelligence quickly turning into world domination, Yudkowsky paints this scenario, and points out that superhuman intelligence should be able to think of much better and faster ways:

"So let’s say you have an Artificial Intelligence that thinks enormously faster than a human. How does that affect our world? Well, hypothetically, the AI solves the protein folding problem. And then emails a DNA string to an online service that sequences the DNA, synthesizes the protein, and fedexes the protein back. The proteins self-assemble into a biological machine that builds a machine that builds a machine and then a few days later the AI has full-blown molecular nanotechnology."

comment by Fods12 · 2018-12-18T07:26:53.724Z · score: 2 (2 votes)

Hi Denkenberger, thanks for engaging!

Bostrom mentions this scenario in his book, and although I didn't discuss it directly, I believe I address the key issues in my piece above. In particular, the amount of protein one can receive in the mail in a few days is small, and to achieve its goal of world domination an AI would need large quantities of such materials to produce the weapons, technology, or other infrastructure needed to compete with world governments and militaries. If the AI chose to produce the protein itself, which it would likely wish to do, it would need extensive laboratory space, which takes time to build and equip. The more expansive its operations become, the more time-consuming they are to build. It would likely need to hire lawyers to acquire the legal permits for the facilities needed to make the nanotech, and so on. I outline these sorts of practical issues in my article. None of them are insuperable, but I argue they aren't things that can be solved 'in a matter of days'.

comment by Denkenberger · 2018-12-19T08:04:35.558Z · score: 2 (1 votes)

Let's say they only mail you as much protein as one full human genome. Then the self-replicating nanotech it builds could consume biomass around it and concentrate uranium (there is a lot in the ocean, for example). If the ideal doubling time is around 100 seconds, it would take about two hours to get to the mass of 1 million intercontinental ballistic missiles. That is probably optimistic, but I think days is a reasonable estimate; no lawyers required.
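As a rough sanity check on that timescale, here is a sketch of the exponential-growth arithmetic. The seed mass, per-missile mass, and missile count are illustrative assumptions (only the 100-second doubling time comes from the comment):

```python
import math

doubling_time_s = 100      # ideal doubling time assumed in the comment
seed_mass_g = 1e-12        # ~1 picogram of mailed starting material (assumption)
icbm_mass_g = 3.5e7        # ~35 tonnes per missile (rough assumption)
n_missiles = 1_000_000

target_mass_g = icbm_mass_g * n_missiles            # ~3.5e13 g of product
doublings = math.log2(target_mass_g / seed_mass_g)  # mass doubles each generation
hours = doublings * doubling_time_s / 3600

print(f"~{doublings:.0f} doublings, ~{hours:.1f} hours")  # → ~85 doublings, ~2.4 hours
```

Under these assumptions, unchecked replication closes the gap from picograms to tens of millions of tonnes in a couple of hours, which is the commenter's point; the binding constraints would be physical (feedstock, heat, detection), not time.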

comment by WillPearson · 2018-12-19T11:22:32.122Z · score: 2 (2 votes)
"Let's say they only mail you as much protein as one full human genome."

This doesn't make sense. Do you mean proteome? There is no 1-1 mapping between genome and proteome. There are at least 20,000 different proteins in the human proteome, so 20,000 orders in a day might be quite noticeable (and tie up the expensive protein-producing machines). I don't know the size of the market, so I may be off about that.

I will be impressed if the AI manages to make a biological nanotech that is not immediately eaten up or accidentally sabotaged by the soup of hostile nanotech that we swim in all the time.

There is a lot of uranium in the sea only because there is a lot of sea. From the pages I have found, there are only about 3 micrograms of uranium per liter, of which 0.72 percent is U-235. To get the U-235 required for a single bomb (50 kg of 80%-enriched uranium, i.e. 40 kg of U-235), you would need to process roughly 2 km3 of sea water, or about 1.9 * 10^12 liters, and that assumes perfect extraction.

This would be pretty noticeable if done on a short timescale (you might also have trouble with depleting the sea locally if you couldn't wait for diffusion to even out the concentrations globally).

To build 1 million nukes you would need to process around 2 million km3 of sea water, roughly half the volume of the Mediterranean (3.75 million km3), before even accounting for extraction losses.
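The seawater arithmetic can be checked in a few lines. This sketch assumes perfect extraction of uranium from the processed water and a 50 kg, 80%-enriched weapon; any realistic recovery efficiency multiplies the volumes accordingly:

```python
u_per_liter_g = 3e-6        # ~3 micrograms of uranium per liter of seawater
u235_fraction = 0.0072      # natural abundance of U-235
bomb_heu_g = 50_000         # 50 kg of 80%-enriched uranium per weapon (assumption)
u235_per_bomb_g = 0.80 * bomb_heu_g            # 40 kg of U-235 per weapon

liters_per_bomb = u235_per_bomb_g / (u_per_liter_g * u235_fraction)
km3_per_bomb = liters_per_bomb / 1e12          # 1 km^3 = 10^12 liters
km3_for_million = km3_per_bomb * 1_000_000

print(f"{km3_per_bomb:.1f} km^3 per bomb, {km3_for_million:.2e} km^3 for a million")
# → 1.9 km^3 per bomb, 1.85e+06 km^3 for a million
```

At perfect extraction, a million weapons would require processing roughly half the Mediterranean's 3.75 million km3 of water, and imperfect recovery only pushes the figure higher.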

comment by Denkenberger · 2018-12-20T17:13:03.125Z · score: 1 (2 votes)

I'm not a biologist, but the point is that you can start with a tiny amount of material and still scale up to large quantities extremely quickly given short doubling times. As for competition, there are many ways in which human-designed technology can exceed (and has exceeded) the capabilities of natural biological organisms: better materials, not being constrained by evolution, not being constrained by having to function while being built, etc. On the large end, good point about the availability of uranium. But the superintelligence could design many highly transmissible and lethal viruses and hold the world hostage that way, or think of far more effective methods than we can. The point is that we cannot dismiss the possibility that a superintelligence could take over the world very quickly.

comment by rohinmshah · 2018-12-15T09:15:14.936Z · score: 2 (2 votes)
"So let’s say you have an Artificial Intelligence that thinks enormously faster than a human."

But why didn't you have an AI that thinks only somewhat faster than a human before that?

comment by Denkenberger · 2018-12-17T07:26:11.728Z · score: 2 (1 votes)

Some possibilities for rapid gain in thinking speed/intelligence are here.

comment by Flodorner · 2018-12-14T10:55:54.757Z · score: 3 (3 votes)

Another point against the content-overhang argument: while more data is definitely useful, it is not clear whether raw data about a world without a particular agent in it will be as useful to that agent as data obtained from its own interaction with the world (or that of sufficiently similar agents). Depending on the actual implementation of a possible superintelligence, this raw data might be marginally helpful but far from the most relevant bottleneck.

"Bostrom is simply making an assumption that such rapid rates of progress could occur. His intelligence spectrum argument can only ever show that the relative distance in intelligence space is small; it is silent with respect to likely development timespans. "

It is not completely silent. I would expect any meaningful measure of distance in intelligence space to at least somewhat correlate with the timespan necessary to bridge that distance. So while the argument is not decisive regarding timespans, it is also far from saying nothing.

"As such it seems patently absurd to argue that developments of this magnitude could be made on the timespan of days or weeks. We simply see no examples of anything like this from history, and Bostrom cannot argue that the existence of superintelligence would make historical parallels irrelevant, since we are precisely talking about the development of superintelligence in the context of it not already being in existence. "

Note that the argument from historical parallels is extremely sensitive to the choice of reference class. There seems to have been nothing "like this" in science or engineering (although progress has at times been quite discontinuous, if not self-reinforcing, by some metrics) or in general intelligence. (Here it would be interesting to explore whether the evolution of human intelligence happened much faster than an outside observer would have expected from the evolution of other animals, since hours and weeks are a somewhat anthropocentric frame of reference.) On the other hand, narrow AI has recently gone from sub- to superhuman level in quite short timespans on many occasions. This too is very sensitive to framing, so take it as a point about the complexity of arguments from historical parallels rather than as a direct argument that fast takeoffs are likely.

"not consistent either with the slow but steady rate of progress in artificial intelligence research over the past 60 years"

Could you elaborate? I'm not extremely familiar with the history of artificial intelligence, but my impression was that progress was quite jumpy at times, instead of slow and steady.

comment by rohinmshah · 2018-12-15T09:16:43.513Z · score: 6 (3 votes)
"my impression was that progress was quite jumpy at times, instead of slow and steady."

comment by Flodorner · 2018-12-15T20:28:24.371Z · score: 4 (4 votes)

Directly relevant quotes from the articles for easier reference:

Paul Christiano:

"This story seems consistent with the historical record. Things are usually preceded by worse versions, even in cases where there are weak reasons to expect a discontinuous jump. The best counterexample is probably nuclear weapons. But in that case there were several very strong reasons for discontinuity: physics has an inherent gap between chemical and nuclear energy density, nuclear chain reactions require a large minimum scale, and the dynamics of war are very sensitive to energy density."

"I’m not aware of many historical examples of this phenomenon (and no really good examples)—to the extent that there have been “key insights” needed to make something important work, the first version of the insight has almost always either been discovered long before it was needed, or discovered in a preliminary and weak version which is then iteratively improved over a long time period. "

"Over the course of training, ML systems typically go quite quickly from “really lame” to “really awesome”—over the timescale of days, not months or years.

But the training curve seems almost irrelevant to takeoff speeds. The question is: how much better is your AGI than the AGI that you were able to train 6 months ago?"


"Discontinuities larger than around ten years of past progress in one advance seem to be rare in technological progress on natural and desirable metrics. We have verified around five examples, and know of several other likely cases, though have not completed this investigation. "

"Supposing that AlphaZero did represent discontinuity on playing multiple games using the same system, there remains a question of whether that is a metric of sufficient interest to anyone that effort has been put into it. We have not investigated this.

Whether or not this case represents a large discontinuity, if it is the only one among recent progress on a large number of fronts, it is not clear that this raises the expectation of discontinuities in AI very much, and in particular does not seem to suggest discontinuity should be expected in any other specific place."

"We have not investigated the claims this argument is premised on, or examined other AI progress especially closely for discontinuities."

comment by Fods12 · 2018-12-18T07:28:27.228Z · score: 1 (1 votes)

Thanks for these links, this is very useful material!