My attempt to think about AI timelines

post by Ben_Snodin · 2021-05-18T17:05:47.102Z · EA · GW · 20 comments


  Key points
  My timelines
    How to interpret these numbers
  How I generated these numbers
  Caveats and what the results are sensitive to
  My thoughts on this / interesting things

I recently spent some time trying to work out some kind of personal view on AI timelines. I definitely don’t have any particular insight here; I just thought it was a useful exercise for me to go through for various reasons. I’m sharing this in case other non-experts like me find it useful to see how I went about this, as well as e.g. for exploration value. (Note that this post is almost identical to the Google doc I shared in a shortform post last week.)

Key points

My timelines

Here is a plot showing my timelines for Artificial General Intelligence (AGI) and Transformative AI (TAI):

When I showed people plots like the one above, a couple of them commented that they were surprised that my AGI probabilities are higher than my TAI ones, and I now think I didn’t think enough about non-AGI routes to TAI when I did this. I’d now probably increase the TAI probabilities a bit and lower the AGI ones a bit compared to what I’m showing here (by “a bit” I mean maybe a few percentage points).

How to interpret these numbers

The rough definitions I’m using here are:

The timelines are also conditioned on “relative normalness” by which I mean no global catastrophe, we’re not in a simulation, etc. The only “weird” stuff that’s allowed to happen is stuff to do with AI.

Alternative presentations

Here are my timelines alongside timelines from other notable sources:

Here are my timelines in number form:

Year | % chance of AGI by year | % chance of TAI by year

How I generated these numbers

At a high level, the process I used was:

  1. Create an inside view forecast
  2. Create an outside view forecast
  3. Apply adjustments according to certain heuristics with the aim of correcting for bias
  4. Combine it all together as a weighted average
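The combination step above can be sketched as a simple weighted average of cumulative probability curves. This is purely illustrative: the year/probability numbers are made up, and the 80/20 weighting is only an assumption (based on the 80% outside-view weight mentioned in the comments), not necessarily the exact weights used in the post.

```python
# Hypothetical sketch of step 4: combining an (adjusted) inside view
# forecast with an outside view forecast as a weighted average.
# All numbers below are illustrative, not the post's actual values.

def combine_forecasts(inside, outside, outside_weight=0.8):
    """Weighted average of cumulative P(TAI by year) curves."""
    return {
        year: outside_weight * outside[year] + (1 - outside_weight) * inside[year]
        for year in inside
    }

inside = {2030: 0.10, 2050: 0.30, 2100: 0.60}   # illustrative inside view
outside = {2030: 0.15, 2050: 0.40, 2100: 0.70}  # illustrative outside view

overall = combine_forecasts(inside, outside)
print({year: round(p, 2) for year, p in overall.items()})
# → {2030: 0.14, 2050: 0.38, 2100: 0.68}
```

Because the weight sits on cumulative probabilities, the combined curve always lies between the two input curves at every year.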


For the inside view forecast I did the following:

For the outside view forecast I used the following weighted average:

For the adjustments based on heuristics I did the following:

Finally, to combine everything together for my overall (“all-things-considered”) view I did the following:

Where “adjusted” refers to the adjustments based on heuristics described above.


I did the process for TAI first and then decided I’d like an AGI curve too. The process for the AGI curve was more or less the same. The main differences were:

Caveats and what the results are sensitive to

Some caveats:

I could easily have done the procedure differently and got different results. Some things the result seems especially sensitive to:

My thoughts on this / interesting things

Thanks to Max Daniel for encouraging me to make this a full post.


Comments sorted by top scores.

comment by Lukas_Gloor · 2021-05-19T13:33:33.599Z · EA(p) · GW(p)

Gave a 57% probability that AGI (or similar) would not imply TAI, i.e. would not imply an effect on the world’s trajectory at least as large as the Industrial Revolution.

My impression (I could be wrong) is that this claim is interestingly contrarian among EA-minded AI researchers. I see a potential tension between how much weight you give this claim within your framework, versus how much you defer to outside views (and potentially even modest epistemology – gasp!)  in the overall forecast. 

Replies from: Carl_Shulman, Lukas_Gloor, Ben_Snodin
comment by Carl_Shulman · 2021-05-19T18:25:02.238Z · EA(p) · GW(p)

I find that 57% very difficult to believe. 10% would be a stretch. 

Having intelligent labor that can be quickly produced in factories (by companies that have been able to increase output by millions of times over decades), and that can do tasks including improving the efficiency of robots (already cheap relative to humans where we have the AI to direct them, and that before reaping economies of scale by producing billions) and solar panels (which already have energy payback times on the order of 1 year in sunny areas), along with still-abundant untapped energy resources orders of magnitude greater than what our current civilization taps on Earth (and a billionfold for the Solar System), makes it very difficult to make the AGI-but-no-TAI world coherent.

Cyanobacteria can double in 6-12 hours under good conditions, mice can grow their population more than 10,000x in a year. So machinery can be made to replicate quickly, and trillions of von Neumann equivalent researcher-years (but with AI advantages) can move us further towards that from existing technology.
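The replication-rate comparison above is just doubling-time arithmetic, which can be checked directly (the 6-hour doubling time and the 10,000x/year figure are the comment's own numbers):

```python
import math

# 10,000x population growth in a year corresponds to
# log2(10000) ≈ 13.3 doublings per year (the mouse figure above).
mouse_doublings_per_year = math.log2(10_000)

# A 6-hour doubling time (the fast end of the cyanobacteria range)
# gives 4 doublings per day, i.e. ~1460 per year.
cyano_doublings_per_year = (24 / 6) * 365

print(round(mouse_doublings_per_year, 1))  # → 13.3
print(round(cyano_doublings_per_year))     # → 1460
```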
I predict that cashing out the given reasons into detailed descriptions will result in inconsistencies or very implausible requirements.

Replies from: Ben_Snodin
comment by Ben_Snodin · 2021-05-22T07:30:09.093Z · EA(p) · GW(p)

Thanks for these comments and for the chat earlier!

  • It sounds like to you, AGI means ~"human minds but better"* (maybe that's the case for everyone who's thought deeply about this topic, I don't know). On the other hand, the definition I used here, "AI that can perform a significant fraction of cognitive tasks as well as any human and for no more money than it would cost for a human to do it", falls well short of that on at least some reasonable interpretations. I definitely didn't mean to use an unusually weak definition of AGI here (I was partly basing it on this seemingly very weak definition from LessWrong, i.e. "a machine capable of behaving intelligently over many domains"), but maybe I did.
  • Under at least some interpretations of "AI that can perform a significant fraction of cognitive tasks as well as any human and for no more money than it would cost for a human to do it", you don't (as I understand it) think that AGI  strongly implies TAI; but my impression is that you don't think AGI under this definition is the right thing to analyse.
  • Given your AGI definition, I probably want to give a significantly larger probability to "AGI implies TAI" than I did in this post (though on an inside view I'm probably not in "90% seems on the low end" territory, having not thought about this enough to have that much confidence).
  • I probably also want to push back my AGI timelines at least a bit (e.g. by checking what AGI definitions my outside view sources were using; though I didn't do this very thoroughly in the first place so the update might not be very large).

*I probably missed some nuance here, please feel free to clarify if so.

comment by Lukas_Gloor · 2021-05-19T13:44:11.746Z · EA(p) · GW(p)

On the object level (I made the other comment before reading on), you write: 

My impression from talking to Phil Trammell at various times is that it’s just really hard to get such high growth rates from a new technology (and I think he thinks the chance that AGI leads to >20% per year growth rates is lower than I do).

Maybe this is talking about definitions, but I'd say that "like the Industrial Revolution or bigger" doesn't have to mean literally >20% growth / year. Things could be transformative in other ways, and eventually at least, I feel like things would accelerate almost certainly in a future controlled with or by AGI. 

Edit: And I see now that you're addressing why you feel comfortable disagreeing: 

I sort of feel like other people don’t really realise / believe the above so I feel comfortable deviating from them.

I'm not sure about that. :) 

Replies from: Ben_Snodin
comment by Ben_Snodin · 2021-05-20T07:31:46.427Z · EA(p) · GW(p)

I think I might have got the >20% number from Ajeya's biological anchors report. Of course, I agree that, say, 18% growth for 20 years might also be at least as big a deal as the Industrial Revolution. It's just a bit easier to think about a particular growth level (for me anyway). Based on this, maybe I should give some more probability to "high enough growth for long enough to be at least as big a deal as the Industrial Revolution" than when I was thinking just about the 20% number. (Edit: just to be clear, I did also give some (though not much) probability to non-extreme-economic-growth versions of transformative AI.)

I guess this wouldn't be a big change though so it's probably(?) not where the disagreement comes from. E.g. if people are counting 10% growth for 10 years as at least as big a deal as the Industrial Revolution I might start thinking that the disagreement mostly comes from definitions.
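The cumulative scale-ups implied by the growth rates discussed above are easy to compute, which makes the comparison concrete (the rate/duration pairs are the ones mentioned in this thread):

```python
# Cumulative growth factor implied by sustained annual growth rates
# discussed in the thread above.
scenarios = [
    (0.20, 10),  # 20%/year for 10 years
    (0.18, 20),  # 18%/year for 20 years
    (0.10, 10),  # 10%/year for 10 years
]
for rate, years in scenarios:
    factor = (1 + rate) ** years
    print(f"{rate:.0%}/year for {years} years → {factor:.1f}x total growth")
# → 20%/year for 10 years → 6.2x total growth
# → 18%/year for 20 years → 27.4x total growth
# → 10%/year for 10 years → 2.6x total growth
```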

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2021-05-20T12:30:55.801Z · EA(p) · GW(p)

I phrased my point poorly. I didn't mean to put the emphasis on the 20% figure, but more on the notion that things will be transformative in a way that fits neatly in the economic growth framework. My concern is that any operationalization of TAI as "x% growth per year(s)" is quite narrow and doesn't allow for scenarios where AI systems are deployed to secure influence and control over the future first. Maybe there'll be a war and the "TAI" systems secure influence over the future by wiping out most of the economy except for a few heavily protected compute clusters and resource/production centers. Maybe AI systems are deployed as governance advisors primarily and stay out of the rest of the economy to help with beneficial regulation. And so on. 

I think things will almost certainly be transformative one way or another, but if you therefore expect to always see stock market increases of >20%, or increases to other economic growth metrics, then maybe that's thinking too narrowly. The stock market (or standard indicators of economic growth) are not what ultimately matters. Power-seeking AI systems would prioritize "influence over the long-term future" over "short-term indicators of growth". Therefore, I'm not sure we see economic growth right when "TAI" arrives. The way I conceptualize "TAI" (and maybe this is different from other operationalizations, though, going by memory, I think it's compatible with the way Ajeya framed it in her report, since she framed it as "capable of executing a 'transformative task'") is that "TAI" is certainly capable of bringing about a radical change in growth mode, eventually, but it may not necessarily be deployed to do that. I think "where's the point of no return?" is a more important question than "Will AGI systems already transform the economy 1,2,4 years after their invention?"

That said, I don't think the above differences in how I'd operationalize "TAI" are cruxes between us. From what you say in the writeup, it sounds like you'd be skeptical both that AGI systems could transform the economy(/world) directly, and that they could transform it eventually via influence-securing detours. 

Replies from: Ben_Snodin
comment by Ben_Snodin · 2021-05-21T07:29:49.528Z · EA(p) · GW(p)

Thanks, this was interesting. Reading this I think maybe I have a somewhat higher bar than you for what counts as transformative (i.e. at least as big a deal as the Industrial Revolution). And again, just to say I did give some probability to transformative AI that didn't act through economic growth. But the main thing that stands out to me is that I haven't really thought all that much about the different ways powerful AI might be transformative (as is also the case for almost everything else here too!).

comment by Ben_Snodin · 2021-05-20T07:22:51.718Z · EA(p) · GW(p)

I see a potential tension between how much weight you give this claim within your framework, versus how much you defer to outside views

I don't know, for what it's worth I feel like it's pretty okay to have an inside view that's in conflict with most other people's and to still give a pretty big weight (i.e. 80%) to the outside view. (maybe this isn't what you're saying)

(and potentially even modest epistemology – gasp!)

Not sure I understood this, but the related statement "epistemic modesty implies Ben should give more than 80% weight to the outside view" seems reasonable. Actually maybe you're saying "your inside view is so contrarian that it is very inside view-y, which suggests you should put more weight on the outside view than would otherwise be the case", maybe I can sort of see that.

Replies from: Max_Daniel
comment by Max_Daniel · 2021-05-20T08:48:35.559Z · EA(p) · GW(p)

My understanding is that Lukas's observation is more like:

  • At some points (e.g. P(AGI) timelines) you seem to give a lot of weight to (what you call) outside views and/or seem to be moved by 'modest epistemology'.
  • But for P(TAI|AGI) your bottom line is very different from what most people in the community seem to think. This suggests you're not updating much toward their view, and so don't use "outside views"/modest epistemology here.

These suggest you're using a different balance of sticking with your inside view vs. updating toward others for different questions/parameters. This does not need to be a problem, but it at least raises the question of why.

Replies from: Lukas_Gloor, Ben_Snodin
comment by Lukas_Gloor · 2021-05-20T12:08:21.351Z · EA(p) · GW(p)

Yes, that's what I meant. And FWIW, I wasn't sure whether Ben was using modest epistemology (in my terminology, outside-view reasoning isn't necessarily modest epistemology), but there were some passages in the original post that suggest low discrimination on how to construct the reference class. E.g., "10% on short timelines people" and "10% on long timelines people" suggests that one is simply including the sorts of timeline credences that happen to be around, without trying to evaluate people's reasoning competence. For contrast, imagine wording things like this: 

"10% credence each to persons A and B, who both appear to be well-informed on this topic and whose interestingly different reasoning styles both seem defensible to me, in the sense that I can't confidently point out why one of them is better than the other."


Replies from: Ben_Snodin
comment by Ben_Snodin · 2021-05-21T07:39:35.593Z · EA(p) · GW(p)

Thanks, this was helpful as an example of one way I might improve this process.

comment by Ben_Snodin · 2021-05-21T07:36:43.445Z · EA(p) · GW(p)

But for P(TAI|AGI) your bottom line is very different from what most people in the community seem to think

Ah right, I get the point now, thanks. I suppose my P(TAI|AGI) is supposed to be my inside view as opposed to my all-things-considered view, because I'm using it only for the inside view part of the process. The only things that are supposed to be all-things-considered views are things that come out of this long procedure I describe (i.e. the TAI and AGI timelines). But probably this wasn't very clear.

comment by EdoArad (edoarad) · 2021-05-19T06:01:14.583Z · EA(p) · GW(p)

Thanks for sharing the full process and your personal takeaways! 

comment by Harrison D · 2021-05-19T01:46:39.981Z · EA(p) · GW(p)

Very small note: I'd recommend explaining your abbreviations at least once in the post (i.e., do the typical "full form (abbrev)"). I was already familiar with AGI, but it took me a few minutes of searches to figure out that TAI referred to transformative AI (no thanks to Tay, the bot).

Replies from: Ben_Snodin
comment by Ben_Snodin · 2021-05-19T11:58:58.301Z · EA(p) · GW(p)

Thanks for this, I've made a slight edit that hopefully makes these clearer.

comment by Khorton · 2021-05-18T22:18:51.814Z · EA(p) · GW(p)

This was a fascinating read.

"The outside view forecasts I chose: I included almost exclusively “community” forecasts in my outside view."

Why did you choose to almost exclusively refer to EAs for your "outside view"? Is that a typical use of the term outside view?

Replies from: Ben_Snodin
comment by Ben_Snodin · 2021-05-19T11:48:35.069Z · EA(p) · GW(p)

I didn't consciously choose to mostly(?) focus on EAs for my outside view, but I suppose ultimately it's because these are the sources I know about. I wasn't exactly trying to do a thorough survey of relevant literature / thinking here (as I hope was clear!).

I guess ~how much of a biased view that gives depends on how good the possible "non-EA" sources are. I guess I'd be kind of surprised if there were really good "non-EA" sources that I missed. I'd be very interested to hear about examples.

As for the term "outside view", I feel pretty confused about the inside vs outside view distinction, and doing this exercise didn't really help with my confusion :).

comment by Jack R (JackRyan) · 2021-05-18T18:55:02.785Z · EA(p) · GW(p)

I think it’s hard to automate things

Can you elaborate on why you think this?

Replies from: Ben_Snodin
comment by Ben_Snodin · 2021-05-19T11:56:06.739Z · EA(p) · GW(p)

I really don't have strong arguments here. I guess partly from experience working on an automated trading system (i.e. actually trying to automate something), partly from seeing Robin Hanson arguing that automation has just been continuing at a steady pace for a long time (or something like that; possible I'm completely misremembering this). Partly from guessing that other people can be a bit naive here.

This very long LessWrong comment thread has some relevant discussion. Maybe I'm saying I kind of lean towards more of the side that user 'julianjm' is arguing for.

Replies from: Carl_Shulman
comment by Carl_Shulman · 2021-05-20T03:44:38.239Z · EA(p) · GW(p)

Robin Hanson argues in Age of Em that annualized growth rates will reach over 400,000% as a result of automation of human labor with full substitutes (e.g. through brain emulations)! He's a weird citation for thinking the same technology can't manage 20% growth.

"I really don't have strong arguments here. I guess partly from experience working on an automated trading system (i.e. actually trying to automate something)"

This and the usual economist arguments against fast AGI growth seem to be more about denying the premise of ever succeeding at AGI/automating human-substitute minds (by extrapolating from a world where we have not yet built human substitutes to conclude they won't be produced in the future), rather than addressing the growth that can then be enabled by the resulting AI.