Why AGI Timeline Research/Discourse Might Be Overrated

post by Miles_Brundage · 2022-07-03T08:04:09.665Z · EA · GW · 30 comments

Contents

  Introduction
  What this post isn’t about
  Reason 1: A lot of the potential value of timeline research and discourse has already been realized 
  Reason 2: Many people won’t update much on a stronger evidence base even if we had it (and that’s fine)
  Reason 3: Even when timeline information is persuasive to relevant stakeholders, it isn’t necessarily that actionable 
  Reason 4: Most actions that need to be taken are insensitive to timelines
  Reason 5 (most hand-wavy reason): It hasn’t helped me much in practice
  Reason 6 (weakest reason): There are reputational risks to overinvesting in timelines research and discourse
  Conclusion

TL;DR: Research and discourse on AGI timelines aren't as helpful as they may at first appear, and a lot of the low-hanging fruit (i.e. motivating AGI-this-century as a serious possibility) has already been plucked. 

Introduction

A very common subject of discussion among EAs is “AGI timelines.” Roughly, AGI timelines, as a research or discussion topic, refer to the time that it will take before very general AI systems meriting the moniker “AGI” are built, deployed, etc. (one could flesh this definition out and poke at it in various ways, but I don’t think the details matter much for my thesis here—see “What this post isn’t about” below). After giving some context and scoping, I argue below that while important in absolute terms, improving the quality of AGI timelines isn’t as useful as it may first appear.

 

Just in the past few months, a lot of digital ink has been spilled, and countless in-person conversations have occurred, about whether recent developments in AI (e.g. DALL-E 2, Imagen, PaLM, Minerva) suggest a need for updating one’s AGI timelines to be shorter. Interest in timelines has informed a lot of investment in surveys, research on variables that may be correlated with timelines (like compute), etc. At least dozens of smart-person-years have been spent on this question; possibly the number is more like hundreds or thousands. 

 

AGI timelines are, at least a priori, very important to reduce uncertainty about, to the extent that’s possible. Whether one’s timelines are “long” or “short” could be relevant to how one makes career investments—e.g. “exploiting” by trying to maximize influence over AI outcomes in the near-term, or “exploring” by building up skills that can be leveraged later. Timelines could also be relevant to what kinds of alignment research directions are useful, and which policy levers to consider (e.g. whether a plan that may take decades to pan out is worth seriously thinking about, or whether the “ship will have sailed” before then).

 

I buy those arguments to an extent, and indeed I have spent some time myself working on this topic. I’ve written or co-authored various papers and blog posts related to AI progress and its conceptualization/measurement, I’ve contributed to papers and reports that explicitly made forecasts about what capabilities were plausible on a given time horizon, and I have participated in numerous surveys/scenario exercises/workshops/conferences etc. where timelines loomed large. And being confused/intrigued by people’s widely varying timelines is part of how I first got involved in AI, so it has a special place in my heart. I’ll certainly keep doing some things related to timelines myself, and think some others with special knowledge and skills should also continue to do so.

 

But I think that, as with many research and discussion topics, there are diminishing returns on trying to understand AGI timelines better and talking widely about them. A lot of the low-hanging fruit from researching timelines has already been plucked, and even much higher levels of certainty on this question (if that were possible) wouldn’t have all the benefits that might naively be suspected.

 

I’m not sure exactly how much is currently being invested in timeline research, so I am deliberately vague here as to how big of a correction, if any, is actually needed compared to the current level of investment. As a result of feedback on this post, I may find out that there’s actually less work on this than I thought, that some of my arguments are weaker than I thought, etc. and update my views. But currently, while I think timelines should be valued very highly compared to a random research topic, I suspect that many reading this post may have overly optimistic views on how useful timelines work can be.

 

What this post isn’t about

 

Again, I’m not saying no one should work on timelines. Some valuable work has indeed been done and is happening right now. But you should have very good responses to the claims below if you think you should be betting your career on it, or spending big fractions of your time thinking and talking about it informally, given all the other things you could be working on. 

 

I’m also not going to go into detail about what I or others mean by AGI, even though one could make a lot of “timelines are overrated”-type arguments by picking at this issue. For example, perhaps (some) timeline discourse reinforces a discontinuous model of AI progress that could be problematic, perhaps a lot of AGI timeline discourse just involves people talking past each other, and perhaps our definitions and metrics for progress aren’t as useful as they could be. Those all feel like plausible claims to me but I don’t need to take a position on them in order to argue for the “maybe overrated” thesis. Even for very precise definitions amenable to AGI skeptics, including ones that allow for the possibility of gradual development, I still think there may not be as much value there as many think. Conversely, I think more extreme versions of such criticisms (e.g. that AGI is a crazy/incoherent thing to talk about) are also wrong, but won’t go into that here.

 

Lastly, while I work at OpenAI and my perspective has been influenced in part by my experience of doing a lot of practical AI policy work there, this blog post just represents my own views, not my org’s or anyone else’s.

 

Reason 1: A lot of the potential value of timeline research and discourse has already been realized 

 

In retrospect and at a high level, there are several plausible reasons why the initial big investment in timeline research/discourse made sense (I would have to double check exactly what specific people said about their motivations for working on it at the time). Two stand out to me: 

- Better timeline estimates could inform prioritization: career choices, research agendas, and policy decisions that hinge on how much time we have.
- Credible timeline work could establish AGI-this-century as a serious possibility and motivate more people to take preparation seriously.


I will say more later about why I think the first motivation is less compelling than it first sounds, but for now I will focus on the second bullet. 

 

It probably made a lot of sense to do an initial round of surveys of AI researchers about their views on AGI when no such surveys had been done in decades and the old ones had big methodological issues. And likewise, encouraging people to express their individual views re: AGI’s non-craziness (e.g. in interviews, books, etc.) was useful when there wasn’t a long list of credible expert quotes to draw on. 

 

But now we have credible surveys of AI/ML researchers showing clearly that AGI this century is considered plausible by “experts”; there are numerous recent examples of ~all experts under-predicting AI progress to point to, which can easily motivate claims like “we are often surprised/could be surprised again, so let’s get prepared”; there’s a whole book taking AGI seriously by someone with ~unimpeachable AI credentials (Stuart Russell, co-author of the leading AI textbook); there are tons of quotes/talks/interviews etc. from many leaders in ML in which they take AGI in the next few decades seriously; there are tons of compelling papers and reports carefully making the point that, even for extremely conservative assumptions around compute and other variables, AGI this century seems very plausible if not likely; and AGI has now been mentioned in a non-dismissive way in various official government reports. 

 

Given all of that, and again ignoring the first bullet for now, I think there’s much less to be accomplished on the timeline front than there used to be. The remaining value is primarily in increasing confidence, refining definitions, reconciling divergent predictions across different question framings, etc. which could be important—but perhaps not as much as one might think.

 

Reason 2: Many people won’t update much on a stronger evidence base even if we had it (and that’s fine)


 

Despite the litany of existing reasons to take AGI “soonish” seriously that I mentioned above, some people still aren’t persuaded. Those people are unlikely, in my view, to be persuaded by (slightly) more numerous and better versions of the same kinds of evidence. However, that’s not a huge deal—complete (expert or global) consensus is neither necessary nor sufficient for policy making in general. There is substantial disagreement even about how to explain and talk about current AI capabilities, let alone future ones, and people nevertheless do many things every day to reduce current and future risks.

 

Reason 3: Even when timeline information is persuasive to relevant stakeholders, it isn’t necessarily that actionable 


 

David Collingridge famously posed a dilemma for technology governance—in short, many interventions happen too early (when you lack sufficient information) or too late (when it’s harder to change things). Collingridge’s solution was essentially to take an iterative approach to governance, with reversible policy interventions. But, people in favor of more work on timelines might ask, why don’t we just frontload information gathering as much as possible, and/or take precautionary measures, so that we can have the best of both worlds?

 

Again, as noted above, I think there’s some merit to this perspective, but it can easily be overstated. In particular, in the context of AI development and deployment, there is only so much value to knowing in advance that capabilities are coming at a certain time in the future (at least, assuming that there are some reasonable upper bounds on how good our forecasts can be, on which more below).

 

 Even when my colleagues and I, for example, believed with a high degree of confidence that language understanding/generation and image generation capabilities would improve a lot between 2020 and 2022 as a result of efforts that we were aware of at our org and others, this didn’t help us prepare *that* much. There was still a need for various stakeholders to be “in the room” at various points along the way, to perform analysis of particular systems’ capabilities and risks (some of which were not, IMO, possible to anticipate), to coordinate across organizations, to raise awareness of these issues among people who didn’t pay attention to those earlier bullish forecasts/projections (e.g. from scaling laws), etc. Only some of this could or would have gone more smoothly if there had been more and better forecasting of various NLP and image generation benchmarks over the past few years. 

 

I don’t see any reason why AGI will be radically different in this respect. We should frontload some of the information gathering via foresight, for sure, but there will still be tons of contingent details that won’t be possible to anticipate, as well as many cases where knowing that things are coming won’t help that much because having an impact requires actually "being there" (both in space and time). 

 

Reason 4: Most actions that need to be taken are insensitive to timelines

 

One reason why timelines could be very important is if there were huge differences between what we’d do in a world where AGI is coming soon and a world where AGI is coming in the very distant future. On the extremes (e.g. 1 year vs. 100 years), I think there are in fact such differences, but for a more reasonable range of possibilities, I think the correct actions are mostly insensitive to timeline variations. 

 

Regardless of timelines, there are many things we need to be making progress on as quickly as possible. These include improving discourse and practice around publication norms in AI; improving the level of rigor for risk assessment and management for developed and deployed AI systems; improving dialogue and coordination among actors building powerful AI systems, to avoid reinvention of the wheel re: safety assessments and mitigations; getting competent, well-intentioned people into companies and governments to work on these things; getting serious AI regulation started in earnest; and doing basic safety and policy research. And many of the items on such a list of “reasonable things to do regardless of timelines” can be motivated on multiple levels—for example, doing a good job assessing and managing the risks of current AI systems can be important at an object level, and also important for building good norms in the AI community, or gaining experience in applying/debugging certain methods, which will then influence how the next generation of systems is handled. It’s very easy to imagine cases where different timelines lead to widely varying conclusions, but, as I’ll elaborate on in the next section, I don’t find this very common in practice.

 

To take one example of a type of intervention where timelines might be considered to loom large, efforts to raise awareness of risks from AI (e.g. among grad students or policymakers) are not very sensitive to AGI timeline details compared to how things might have seemed, say, 5 years ago. There are plenty of obviously-impactful-and-scary AI capabilities right now that, if made visible to someone you’re trying to persuade, are more than sufficient to motivate taking the robust steps above. Sometimes it may be appropriate and useful to say, e.g., “imagine if this were X times better/cheaper/faster etc.”, but in a world where AI capabilities are as strong as they already are, it generally suffices to raise the alarm about “AI,” full stop, without any special need to get into the details of AGI. Most people, at least those who haven’t already made up their mind that AGI-oriented folks and people bullish on technology generally are all misguided, can plainly see that AI is a huge deal that merits a lot of effort to steer in the right direction.

 

Reason 5 (most hand-wavy reason): It hasn’t helped me much in practice

 

This is perhaps the least compelling of the reasons and I can’t justify it super well since it’s an “absence of evidence” type claim. But for what it’s worth, after working in AI policy for around a decade or so, including ~4 years at OpenAI, I have not seen many cases where having a more confident sense of either AI or AGI timelines would have helped all that much, under realistic conditions,* above and beyond the “take it seriously” point discussed under Reason 1.

 

There are exceptions but generally speaking, I have moved more every year towards the “just do reasonable stuff” perspective conveyed in Reason 4 above. 

 

*By “realistic conditions,” I mean assuming that the basis of the increased confidence was something like expert surveys or trend projections, rather than e.g. a “message from the future” capable of persuading people who aren’t persuaded by current efforts, so that there was still reasonable doubt about how seriously to take the conclusions.

 

Reason 6 (weakest reason): There are reputational risks to overinvesting in timelines research and discourse

 

Back in the day (5 years ago), there was a lot of skepticism in EA world about talking publicly about (short) AGI timelines due to fear of accelerating progress and/or competition over AGI. At some point the mood seems to have shifted, which is an interesting topic in its own right but let’s assume for now that that shift is totally justified, at least re: acceleration risks. 

Even so, there are still reputational risks to the EA community if it is seen as investing disproportionately in "speculation" about obviously-pretty-uncertain/maybe-unknowable things like AGI timelines, compared to object-level work to increase the likelihood of good outcomes from existing or near-term systems or robust actions related to longer-term risks. And the further along we are in plucking the low-hanging fruit of timeline work, the more dubious the value of marginal new work will look to observers.

 

As suggested in the section header, I think this is probably the weakest argument: the EA community should be willing to do and say weird things, and there would have to be pretty compelling reputational risks to offset a strong case for doing more timeline work, if such a case existed. I also think there is good, non-wild-speculation-y timeline work, some of which could also plausibly boost EA's reputation (though for what it's worth, I haven't seen that happen much yet). However, since I think the usual motivations for timeline work aren’t as strong as they first appear anyway, and because marginal new work (of the sort that might be influenced by this post) may be on the more reputationally risky end of the spectrum, this consideration felt worth mentioning as a potential tie-breaker in ambiguous cases. 

Reputational considerations could be especially relevant for people who lack special knowledge/skills relevant to forecasting and are thus more vulnerable to the “wild speculation” charge than others who have those things, particularly when work on timelines is being chosen over alternatives that might be more obviously beneficial.

 

Conclusion

 

While there is some merit to the case for working on and talking about AGI timelines, I don’t think the case is as strong as it might first appear, and I would not be surprised if there were a more-than-optimal degree of investment in the topic currently. On the extremes (e.g. very near-term and very long-term timelines), there in fact may be differences in actions we should take, but almost all of the time we should just be taking reasonable, robust actions and scaling up the number of people taking such actions.

 

Things that would update me here include: multiple historical cases of people updating their plans in reasonable ways as a response to timeline work, in a way that couldn’t have been justified based on the evidence discussed in Reason 1, particularly if the timeline work in question was done by people without special skills/knowledge; compelling/realistic examples of substantially different policy conclusions stemming from timeline differences within a reasonable range (e.g. "AGI, under strict/strong definitions, will probably be built this century but probably not in the next few years, assuming no major disruptions"); or examples of timeline work being strongly synergistic with, or a good stepping stone towards, other kinds of work I mentioned as being valuable above. 

30 comments

Comments sorted by top scores.

comment by CarlShulman · 2022-07-03T15:46:22.588Z · EA(p) · GW(p)

There are very expensive interventions that are financially constrained and could use up ~all EA funds, and the cost-benefit calculation takes probability of powerful AGI in a given time period as an input, so that e.g. twice the probability of AGI in the next 10 years justifies spending twice as much for a given result by doubling the chance the result gets to be applied. That can make the difference between doing the intervention or not, or drastic differences in intervention size. 
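[Editor's note: Carl's scaling claim can be sketched as a toy expected-value calculation. The numbers and the `justified_spend` helper below are hypothetical illustrations, not anything from his comment.]

```python
# Toy expected-value sketch (hypothetical numbers): an intervention only
# pays off if AGI arrives within the window in which the intervention's
# result can still be applied.

def justified_spend(p_agi_in_window: float, value_if_applied: float) -> float:
    """Max spend that keeps expected benefit >= cost."""
    return p_agi_in_window * value_if_applied

base = justified_spend(0.10, 1e9)     # 10% chance the result gets applied
doubled = justified_spend(0.20, 1e9)  # twice the probability of AGI in window

# Doubling the probability doubles the justified spend:
assert doubled == 2 * base
```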

Replies from: Miles_Brundage
comment by Miles_Brundage · 2022-07-03T20:49:28.304Z · EA(p) · GW(p)

Could you give an example or two? I tend to think of "~all of EA funds"-level interventions as more like timeline-shifting interventions than things that would be premised on a given timeline (though there is a fine line between the two), and am skeptical of most that I can think of, but I agree that if such things exist it would count against what I'm saying.

Replies from: CarlShulman, HaydnBelfield
comment by CarlShulman · 2022-07-04T17:41:52.532Z · EA(p) · GW(p)

The funding scale of AI labs/research, AI chip production, and US political spending could absorb billions per year, tens of billions or more for the first two. Philanthropic funding of a preferred AI lab at the cutting edge as model sizes inflate could take all EA funds and more on its own.

There are also many expensive biosecurity interventions that are being compared against an AI intervention benchmark. Things like developing PPE, better sequencing/detection, countermeasures through philanthropic funding rather than hoping to leverage cheaper government funding.

Replies from: Miles_Brundage
comment by Miles_Brundage · 2022-07-04T23:28:15.626Z · EA(p) · GW(p)

Thanks for elaborating - I haven't thought much about the bio comparison and political spending things but on funding a preferred lab/compute stuff, I agree that could be more sensitive to timelines than the AI policy things I mentioned.  

FWIW I don't think it's as sensitive to timelines as it may first appear (doing something like that could still make sense even with longer timelines given the potential value in shaping norms, policies, public attitudes on AI, etc., particularly if one expects sub-AGI progress to help replenish EA coffers, and if such an idea were misguided I think it'd probably be for non-timeline-related reasons like accelerating competition or speeding things up too much even for a favored lab to handle).

But if I were rewriting I'd probably mention it as a prominent counterexample justifying some further work along with some of the alignment agenda stuff mentioned below. 

Replies from: CarlShulman
comment by CarlShulman · 2022-07-05T14:03:33.592Z · EA(p) · GW(p)

Oh, one more thing: AI timelines put a discount on other interventions. Developing a technology that will take 30 years to have its effect is less than half as important if your median AGI timeline is 20 years.
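[Editor's note: one way to make this discount concrete is a toy model, my construction rather than Carl's: if AGI arrival time were exponentially distributed with a median of 20 years, a technology that takes 30 years to have its effect only matters in the roughly 35% of worlds where AGI has not arrived yet, under half the weight of an immediate-effect intervention.]

```python
import math

# Toy model (an assumption for illustration): AGI arrival time T is
# exponentially distributed with median `median_years`. An intervention
# that takes `delay_years` to have its effect only matters in worlds
# where AGI has not arrived by then, i.e. with probability P(T > delay).

def survival(delay_years: float, median_years: float) -> float:
    """P(AGI arrives after delay_years) for an exponential with the given median."""
    rate = math.log(2) / median_years     # median = ln(2) / rate
    return math.exp(-rate * delay_years)  # equals 0.5 ** (delay / median)

# A technology maturing in 30 years, with a median AGI timeline of 20 years:
weight = survival(30, 20)
print(round(weight, 3))  # ~0.354: less than half as important as an
                         # intervention whose effect is immediate (weight 1.0)
```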

comment by HaydnBelfield · 2022-07-04T09:12:18.875Z · EA(p) · GW(p)

I assume Carl is thinking of something along the lines of "try and buy most new high-end chips". See e.g. Sam interviewed by Rob.

comment by AdamGleave · 2022-07-04T02:26:32.490Z · EA(p) · GW(p)

I agree with a lot of this post. In particular, getting more precision in timelines is probably not going to help much with persuading most people, or in influencing most of the high-level strategic questions that Miles mentions. I also expect that it's going to be hard to get much better predictions than we have now: much of the low-hanging fruit has been plucked. However, I'd personally find better timelines quite useful for prioritizing my technical research agenda problems to work on. I might be in a minority here, but I suspect not that small a one (say 25-50% of AI safety researchers).

There's two main ways timelines influence what I would want to work on. First, it directly changes the "deadline" I am working towards. If I thought the deadline was 5 years, I'd probably work on scaling up the most promising approaches we have now -- warts and all.  If I thought it was 10 years away, I'd try and make conceptual progress that could be scaled in the future. If it was 20 years away, I'd focus more on longer-term field building interventions: clarifying what the problems are, helping develop good community epistemics, mentoring people, etc. I do think what matters here is something like the log-deadline more than the deadline itself (5 vs 10 is very decision relevant, 20 vs 25 much less so) which we admittedly have a better sense of, although there's still some considerable disagreement.

The second way timelines are relevant is that my prediction of how AI is developed changes a lot conditioned on timelines. I think we should probably just try to forecast or analyze how-AI-is-developed directly -- but timelines are perhaps easier to formalize. If timelines are less than 10 years, I'd be confident we develop it within the current deep learning paradigm. More than that and possibilities open up a lot. So overall, longer timelines would push me towards more theoretical work (that's generally applicable across a range of paradigms) and taking bets on underdog areas of ML. There's not much research into, say, how to align an AI built on top of a probabilistic programming language. I'd say that's probably not a good use of resources right now -- but if we had a confident prediction human-level AI was 50 years away, I might change my mind.

Replies from: Guy Raveh
comment by Guy Raveh · 2022-07-04T09:41:15.101Z · EA(p) · GW(p)

Is there an argument for a <10 years timeline that doesn't go directly through the claim that it's going to be achieved in the current paradigm?

Replies from: AdamGleave
comment by AdamGleave · 2022-07-05T01:35:47.568Z · EA(p) · GW(p)

You could argue from a "flash of insight" and scientific paradigm shifts generally giving rise to sudden progress. We certainly know contemporary techniques are vastly less sample- and compute-efficient than the human brain -- so there does exist some learning algorithm much better than what we have today. Moreover, there probably exists some learning algorithm that would give rise to AGI on contemporary (albeit expensive) hardware. For example, ACX notes there is a supercomputer that can do 10^17 FLOPS vs. the estimated 10^16 FLOPS needed for a human brain. These kinds of comparisons are always a bit apples-to-oranges, but it does seem like compute is probably not the bottleneck (or won't be in 10 years) for a maximally-efficient algorithm.

The nub of course is whether such an algorithm is plausibly reachable by human flash of insight (and not via e.g. detailed empirical study and refinement of a less efficient but working AGI). It's hard to rule out. How simple/universal we think the algorithm the human brain implements is one piece of evidence here -- the more complex and laden with inductive bias (e.g. innate behavior), the less likely we are to come up with it. But even if the human brain is a Rube Goldberg machine, perhaps there does exist some more straightforward algorithm evolution did not happen upon.

Personally I'd put little weight on this. I have <10% probability on AGI in the next 10 years, and put no more than 15% on AGI ever being developed via something that looks like a sudden insight rather than more continuous progress. Notably, even if such an insight does happen soon, I'd expect it to take at least 3-5 years for it to gain recognition and be sufficiently scaled up to work. I do think it's probable enough that we should actively keep an eye out for promising new ideas that could lead to AGI, so we can be ahead of the game. I think it's good, for example, that a lot of people working on AI safety were working on language models "before it was cool" (I was not one of these people), although we've maybe now piled too much into that area.

comment by Jan_Kulveit · 2022-07-03T09:28:07.779Z · EA(p) · GW(p)

I do broadly agree with the direction and the sentiment: on the margin, I'd typically be much more interested in forecasts other than "year of AGI."

For example: at the time we get "AGI" (according to your definition) ... how large a fraction of GDP are AI companies? ... how big is AI as a political topic? ... what does the public think?

Replies from: zsp
comment by Zach Stein-Perlman (zsp) · 2022-07-03T16:31:12.354Z · EA(p) · GW(p)

I'm currently thinking about questions including "how big is AI as a political topic" and "what does the public think"; any recommended reading?

comment by RyanCarey · 2022-07-03T10:08:19.358Z · EA(p) · GW(p)

I agree there are diminishing returns; I think Ajeya's report has done a bunch of what needed to be done. I'm less sure about timelines being decision-irrelevant. Maybe not for Miles, but they seem quite relevant for cause prioritisation, career planning between causes, and prioritizing policies. I also think better timeline-related arguments could, on net, improve rather than worsen EA's reputation, because improved substance and polish will actually convince some people.

On the other hand, one argument I might add is that researching timelines could shorten them, by motivating people to make AI that will be realised in their lifetimes, so timelines research can do harm.

On net, I guess I weakly agree - we seem not to be under-investing in timelines research, on the current margin. That said, AI forecasting more broadly - that considers when particular AI capabilities might arise - can be more useful than examining timelines alone, and seems quite useful overall.

Replies from: MaxRa
comment by MaxRa · 2022-07-16T14:41:24.327Z · EA(p) · GW(p)

That said, AI forecasting more broadly - that considers when particular AI capabilities might arise - can be more useful than examining timelines alone, and seems quite useful overall.

+1. My intuition was that forecasts on more granular capabilities would happen automatically if you wanted to further improve overall timeline estimates. E.g. this is my impression of what a lot of AI-timeline-related forecasts on Metaculus look like.

comment by acylhalide (Samuel Shadrach) · 2022-07-03T16:12:45.972Z · EA(p) · GW(p)

Single data point but I think my life trajectory could look different if I believed >50% odds of AGI by 2040, versus >50% odds by 2100. And I can't be the only one who feels this way.

On the 2100 timeline I can imagine trusting other people to do a lot of the necessary groundwork to build the field globally till it reaches peak size. On the 2040 timeline every single month matters and it's not obvious the field will grow to peak size at all. So I'd feel much more compelled to do field building myself. And so would others even if they have poor personal fit for it. On the 2040 timeline it could make sense* for EA to essentially repurpose the whole movement solely to AI risk, as funding, thinking or even talking about anything else (be it other GCRs or global health etc) would be a distraction from what's most important.

*If someone disagrees with me please let me know. This comment was a bit speculative.

P.S. I'm only critiquing the "timelines don't matter to people's decisions" point, not the "we shouldn't invest more into improving timelines" one.

Replies from: erickb
comment by erickb · 2022-07-03T21:58:42.701Z · EA(p) · GW(p)

I tend to think diversification in EA is important even if we think there's a high chance of AGI by 2040. Working on other issues gives us better engagement with policy makers and the public, improves the credibility of the movement, and provides more opportunities to get feedback on what does or doesn't work for maximizing impact. Becoming insular or obsessive about AI would be alienating to many potential allies and make it harder to support good epistemic norms. And there are other causes where we can have a positive effect without directly competing for resources, because not all participants and funders are willing or able to work on AI.

Replies from: Samuel Shadrach
comment by acylhalide (Samuel Shadrach) · 2022-07-04T05:57:38.669Z · EA(p) · GW(p)

Thanks for replying!

The rest of my comment is conditional on EA folks having a consensus of >50% odds by 2040. I personally don't actually believe this yet, but let's imagine a world where you and I did. This discussion would still be useful if one day we are at year X and we know >50% odds by year X + 20.

And there are other causes where we can have a positive effect without directly competing for resources

I do feel it competes for resources though. Attention in general is a finite resource. 

 - People learning of EA have a finite attention span and can be exposed to only so many ideas. If AI risk is by far the most important idea to convey, then all (or most) content should focus on AI risk.

 - All EA orgs and senior members have finite time and attention to spare. An org can maybe get distracted if it has multiple things to look at, and if its senior members need to keep themselves updated on multiple topics if they are to lead the org effectively. 

 - It is easier to rally people around a narrower cause - be it inside EA orgs or the general public looking to support us. Cause neutrality doesn't come naturally - it is easier to get people to accept the importance of one single cause.

because not all participants and funders are willing or able to work on AI.

I feel like community building is something more people can do, even if it's not their ideal role in terms of personal fit. Especially if we expand the notion of community building beyond just AI researchers and make it a mass movement. Even without funding, a lot of people could continue in their current careers while they spread the movement. Or they could go on strike etc., like mass movements sometimes do.

But yes you're right some people can't do that, and there may be other forms of good they can do in those 20 years. Maybe I'm just biased because I'm fairly longtermist myself, and saving a few more lives in those 20 years feels not very important compared to something that could potentially be the most important thing mankind has ever done. But I'm aware not everyone shares my intuitions.

Working on other issues gives us better engagement with policy makers and the public, improves the credibility of the movement, and provides more opportunities to get feedback on what does or doesn't work for maximizing impact.

Not sure what works with policymakers!

Agree that testing multiple approaches to community building among the public is useful, I'm not sure why you need multiple cause areas for that.

Becoming insular or obsessive about AI would be alienating to many potential allies and make it harder to support good epistemic norms. 

I'm not sure about the good epistemic norms thing. Yes, people will want to be strategic rather than honest on some matters. But the basic fact that "yes, we care about AI a loooot more than other things because it's happening so soon" is something we'll be more honest about if we choose to focus on only that.

I agree it will alienate people, but I'm not sure how much that matters, where what "matters" again gets measured primarily in terms of impact on AI risk.

--

Keen on your thoughts, if you have time. 

Replies from: erickb
comment by erickb · 2022-07-07T19:39:06.700Z · EA(p) · GW(p)

I don't have time for a long reply, but I think the perspective in this post would be good to keep in mind: https://forum.effectivealtruism.org/posts/FpjQMYQmS3rWewZ83/effective-altruism-is-a-question-not-an-ideology [EA · GW]

By putting an answer (reduce AI risk) ahead of the question (how can we do the most good?) we would be selling ourselves short.

Some people, maybe a lot of people, should probably choose to focus fully on AI safety and stop worrying about cause prioritization. But nobody should feel like they're being pushed into that or like other causes are worthless. EA should be a big tent. I don't agree that it's easier to rally people around a narrow cause; on the contrary, a single-minded focus on AI would drive away all but a small fraction of potential supporters and have an evaporative cooling effect on the current community too.

18 years is a marathon, not a sprint.

Replies from: Samuel Shadrach
comment by acylhalide (Samuel Shadrach) · 2022-07-08T05:39:21.123Z · EA(p) · GW(p)

I don't agree that it's easier to rally people around a narrow cause; on the contrary, single minded focus on AI would drive away all but a small fraction of potential supporters, and have an evaporative cooling effect on the current community too.

I see. This is probably somewhere we might disagree then. But if you don't have time it is okay, this whole scenario anyway seems hypothetical so may not be super high value to discuss.

comment by levin · 2022-07-03T15:27:38.754Z · EA(p) · GW(p)

It's hard for me to agree or disagree with timeline research being overrated, since I don't have a great sense of how many total research hours are going into it, but I think Reason #4 is pretty important to this argument and seems wrong. The goodness of these broad strategic goals is pretty insensitive to timelines, but lots of specific actions wind up seeming worth doing or not worth doing based on timelines. I find myself seriously saying something like "Ugh, as usual, it all depends on AI timelines" in conversations about community-building strategy or career decisions like once a week.

For example, in this comment thread [EA(p) · GW(p)] about whether and when to do immediately impactful work versus career-capital building, both the shape and the median of the AI x-risk distribution wind up mattering. As a more object-level consideration, "back-loaded" careers like policy look worse relative to "front-loaded" careers like technical research insofar as timelines are earlier.

In community-building, earlier timelines generally supports outreach strategies more focused on finding very promising technical safety researchers; moderate timelines support relatively more focus on policy field-building; and long timelines support more MacAskill-style broad longtermism, moral circle expansion, etc.

Of course, all of this is moot if the questions are super intractable, but I do think additional clarity would turn out to be useful for a pretty broad set of decision-makers -- not just top funders or strategy-setters but implementers at the "foot soldier" level of community-building, all the way down to personal career choice.

comment by Quadratic Reciprocity · 2022-07-04T13:52:42.345Z · EA(p) · GW(p)

Figuring out AGI timelines might be overrated compared to other AI forecasting questions (for eg: things related to takeoff / AI takeover etc) because the latter are more neglected. However, it still seems likely to me that more people within EA should be thinking about AGI timelines because so many personal career decisions are downstream of your beliefs about timelines. 

Some things look much less appealing if you think AGI is < 10 years away, such as getting credentials and experience working on something that is not AI safety-related, or spending time on community building projects directed towards high schoolers. A lot of other debates also seem closely linked to the question of timelines. For example, lots of people disagree about the probability of AI causing an existential catastrophe, but my intuition is that the disagreement around this probability, conditional on specific timelines, would be a lot smaller (that is, people with higher p(doom) than average mostly think that way because they think timelines are shorter). 

More timelines discourse would be good for the reputation of the community because it will likely convince others of AI x-risk being a massive problem. Non-EA folks I know who were previously unconcerned about AI x-risk were much more convinced when they read Holden's posts on AI forecasting and learned about Ajeya's bioanchors model (more than when they simply read descriptions of the alignment problem). More discussion of timelines would also signal to outsiders that we take this seriously. 

It feels like people who have timelines similar to most other people in the community (~20 years away) would be more likely to agree with this than people with much shorter or longer timelines because, for the latter group, it makes sense to put more effort into convincing the community of their position or just because they can defer less to the community when deciding what to do with their career.

Lots of people in the community defer to others (esp Ajeya/bioanchors) when it comes to timelines but should probably spend more time developing their own thoughts and thinking about the implications of that. 

comment by Geoffrey Miller (geoffreymiller) · 2022-07-03T17:36:49.074Z · EA(p) · GW(p)

I agree with Miles that EA often over-emphasizes AGI time-lines, and that this has less utility than generally assumed. I'd just add two additional points, one about the historical context of machine learning and AI research, and one about the relative risks of domain-specific versus 'general' AI.

My historical perspective comes from having worked on machine learning since the late 1980s. My first academic publication in 1989 developed a method of using genetic algorithms to design neural network architectures, and has been cited about 1,100 times since then. There was a lot of excitement in the late 80s about the new back-propagation algorithm for supervised learning in multi-layer neural networks. We expected that it would yield huge breakthroughs in many domains of AI in the next decade, the 1990s. We also vaguely expected that AGI would be developed within a couple of decades after that -- probably by 2020. Back-propagation led to lots of cool work, but practical progress was slow, and we eventually lapsed into the 'AI winter' of the 1990s, until deep learning methods were developed in the 2005-2010 era. 

In the last decade, based on deep learning plus fast computers plus huge training datasets, we've seen awesome progress in many domain-specific applications of AI, from face recognition to chatbots to visual arts. But have we really made much progress in understanding how to get from domain-specific AI to true AGI of the sort that would impose sudden and unprecedented existential risks on humanity? Have we even learned enough to seriously update our AGI timelines compared to what we expected in the late 1980s? I don't think so. AGI still seems about 15-30 years away -- just as it always has since the 1950s.

Even worse, I don't think the cognitive sciences have really made much serious progress on understanding what an AGI cognitive architecture would even look like -- or how it would plausibly lead to existential risks. (I'll write more about this in due course.)

My bigger concern is that a fixation on AGI timelines in relation to X risk can distract attention from domain-specific progress in AI that could impose much more immediate, plausible, and concrete global catastrophic risks on humanity. 

I'd like to see AI timelines for developing cheap, reliable autonomous drone swarms capable of assassinating heads of state and provoking major military conflicts. Or AI timelines for developing automated financial technologies capable of hacking major asset markets or crypto protocols with severe enough consequences that they impose high risks of systemic liquidation cascades in the global financial system, resulting in mass economic suffering. Or AI timelines for developing good enough automated deepfake video technologies that citizens can't trust any video news sources, and military units can't trust any orders from their own commanders-in-chief. 

There are so many ways that near-term, domain-specific AI could seriously mess up our lives, and I think they deserve more attention. An over-emphasis on fine-tuning our AGI timelines seems to have distracted quite a few talented EAs from addressing those issues. 

(Of course, a cynical take would be that under-researching the near-term global catastrophic risks of domain-specific AI will increase the probability that those risks get realized in the next 10-20 years, and they will cause such social, economic, and technological disruption that AGI research is delayed by many decades. Which, I guess, could be construed as one clever but counter-intuitive way to reduce AGI X risk.)

comment by RobBensinger · 2022-07-03T09:21:21.282Z · EA(p) · GW(p)

Agreed! Indeed, I think AGI timelines research is even less useful than this post implies; I think just about all of the work to date didn't help and shouldn't have been a priority.

I disagree with Reason 6 as a thing that should influence our behavior; if we let our behavior be influenced by reputational risks as small as this, IMO we'll generally be way too trigger-happy about hiding our honest views in order to optimize reputation, which is not a good way to make intellectual progress or build trust.

Regardless of timelines, there are many things we need to be making progress on as quickly as possible. These include improving discourse and practice around publication norms in AI; improving the level of rigor for risk assessment and management for developed and deployed AI systems;

Agreed.

improving dialogue and coordination among actors building powerful AI systems, to avoid reinvention of the wheel re: safety assessments and mitigations;

I'm not sure exactly what you have in mind here, but at a glance, this doesn't sound like a high priority to me. I don't think we have wheels to reinvent; the priority is to figure out how to do alignment at all, not to improve communication channels so we can share our current absence-of-ideas.

I would agree, however, that it's very high-priority to get people on the same page about basic things like 'we should be trying to figure out alignment at all', insofar as people aren't on that page.

getting competent, well-intentioned people into companies and governments to work on these things;

Getting some people into gov seems fine to me, but probably not on the critical path. Getting good people into companies seems more on the critical path to me, but this framing seems wrong to me, because of my background model that (e.g.) we're hopelessly far from knowing how to do alignment today.

I think the priority should be to cause people to think about alignment who might give humanity a better idea of a realistic way we could actually align AGI systems, not to find nice smart people and reposition them to places that vaguely seem more important. I'd guess most 'placing well-intentioned people at important-seeming AI companies' efforts to date have been net-negative.

 getting serious AI regulation started in earnest;

Seems like plausibly a bad idea to me. I don't see a way this can realistically help outside of generically slowing the field down, and I'm not sure even this would be net-positive, given the likely effect on ML discourse?

I'd at least want to hear more detail, rather than just "let's regulate AI, because something must be done, and this is something".

 and doing basic safety and policy research.

I would specifically say 'figure out how to do technical alignment of AGI systems'. (Still speaking from my own models.)

Replies from: RobBensinger, Miles_Brundage, Justin Otto, Oliver Balfour
comment by RobBensinger · 2022-07-03T21:50:23.865Z · EA(p) · GW(p)

Clarifying the kind of timelines work I think is low-importance:

I think there's value in distinguishing worlds like "1% chance of AGI by 2100" versus "10+% chance", and distinguishing "1% chance of AGI by 2050" versus "10+% chance".

So timelines work enabling those updates was good.[1]

But I care a lot less about, e.g., "2/3 by 2050" versus "1/3 by 2050".

And I care even less about distinguishing, e.g., "30% chance of AGI by 2030, 80% chance of AGI by 2050" from "15% chance of AGI by 2030, 50% chance of AGI by 2050".

  1. ^

    Though I think it takes very little evidence or cognition to rationally reach 10+% probability of AGI by 2100.

    One heuristic way of seeing this is to note how confident you'd need to be in 'stuff like the deep learning revolution (as well as everything that follows it) won't get us to AGI in the next 85 years', in order to make a 90+% prediction to that effect.

    Notably, you don't need a robust or universally persuasive 10+% in order to justify placing the alignment problem at or near the top of your priority list.

    You just need that to be your subjective probability at all, coupled with a recognition that AGI is an absurdly big deal and that aligning the first AGI systems looks non-easy.

Replies from: kokotajlod
comment by kokotajlod · 2022-07-04T10:09:38.164Z · EA(p) · GW(p)

What about distinguishing 50% by 2050 vs. 50% by 2027?

comment by Miles_Brundage · 2022-07-03T21:00:18.831Z · EA(p) · GW(p)

In retrospect I should have made a clearer distinction between "things that the author thinks are good and which are mostly timeline-insensitive according to his model of how things work" and "things that all reasonable observers would agree are good ideas regardless of their timelines."

The stuff you mentioned mostly relates to currently existing AI systems and management of their risks. While not consensus-y, these points are mostly agreed on by people in the trenches of language model risks--for example, there is a lot of knowledge to share, and which is being shared already, about language model deployment best practices. And one needn't invoke or think one way or the other about AGI to justify government intervention in managing the risks of existing and near-term systems, given the potential stakes of failure (e.g. collapse of the epistemic commons via scaled misuse of increasingly powerful language/image generation; reckless deployment of such systems in critical applications).

Of course one might worry that intervening on those things will detract resources from other things, but my view--which I can't really justify concisely here but am happy to discuss in another venue--is that the synergies overwhelmingly outweigh the tradeoffs. For example, there are big culture/norm benefits at the organizational and industry level to being careful about current technologies compared to not being careful, and these will directly increase the likelihood of good AGI outcomes if the same orgs/people are involved, even if the techniques themselves are very different. 

Replies from: RobBensinger
comment by RobBensinger · 2022-07-03T21:55:22.453Z · EA(p) · GW(p)

Yeah, I'm specifically interested in AGI / ASI / "AI that could cause us to completely lose control of the future in the next decade or less", and I'm more broadly interested in existential risk / things that could secure or burn the cosmic endowment. If I could request one thing, it would be clarity about when you're discussing "acutely x-risky AI" (or something to that effect) versus other AI things; I care much more about that than about you flagging personal views vs. consensus views.

comment by Jotto (Justin Otto) · 2022-07-03T16:35:14.791Z · EA(p) · GW(p)

I agree on regulations.  Our general prior should look like public choice theory.  Regulations have a tendency to drift toward unintended kinds, usually with a more rent-seeking focus than planned. They also tend to have more unintended consequences than people predict.

There probably are some regulations that pass a cost-benefit test, but as a general prior, we should be very reluctant and have very high standards. Getting serious AI regulation started has a very high chance of misfiring, overshooting, or backfiring.

comment by Oliver Balfour · 2022-07-04T03:39:36.868Z · EA(p) · GW(p)

I'd guess most 'placing well-intentioned people at important-seeming AI companies' efforts to date have been net-negative.

Could you please elaborate on this? The reasoning here seems non-obvious.

comment by Jotto (Justin Otto) · 2022-07-03T15:50:36.529Z · EA(p) · GW(p)

I agree with the spirit of the claim. Timeline information is probably not used for much.

One thing I disagree with:

Regardless of timelines, there are many things we need to be making progress on as quickly as possible. These include...[snip]...getting competent, well-intentioned people into companies and governments to work on these things; getting serious AI regulation started in earnest..."

But convincing smart people to work on alignment also means convincing those smart people not to work on something else, and there are large opportunity costs. It doesn't seem true that this holds regardless of timelines, unless you assume the variability in plausible timelines is on the short side.

Also, which regulations are actually value-adding seems at least somewhat timeline-dependent.

Still, I think this essay makes an important point -- there's a lot of babble about timelines, which is extremely unlikely to have alpha on those predictions. And there's a large opportunity cost to spending time talking about it. Smart people's time is extremely valuable, but even ignoring that, life is short.

The best timeline estimates are far more likely to come from institutions that specialize in forecasting, who can take advantage of the most modern, best methods. Other people who aren't using those methods can still talk about the topic if they want to, but it's very unlikely they'll come up with timelines that are better.

comment by Yonatan Cale (hibukki) · 2022-07-06T09:02:35.129Z · EA(p) · GW(p)

TL;DR: Some people care about whether AGI risk is "longtermism" or a threat to their own life [1 [EA · GW]] [2 [EA · GW]]