"Tech company singularities", and steering them to reduce x-risk

post by Andrew Critch (critch) · 2022-05-13T17:26:32.644Z · EA · GW · 4 comments

Contents

  A tech company singularity as a point of coordination and leverage
    How to steer tech company singularities?
      How to help steer scientists away from AAGI: 
      How to convince the public that AAGI is bad: 
      How to convince regulators that AAGI is bad:
  Summary
4 comments

The purpose of this post (also available on LessWrong [LW · GW]) is to share an alternative notion of “singularity” that I’ve found useful in timelining/forecasting.

Notice here that I’m focusing on a company’s ability to do anything another company can do, rather than an AI system's ability to do anything a human can do.  Here, I’m also focusing on what the company can do if it so chooses (i.e., if its Board and CEO so choose) rather than what it actually ends up choosing to do.  If a company has these capabilities and chooses not to use them — for example, to avoid heavy regulatory scrutiny or risks to public health and safety — it still qualifies as a fully general tech company.

This notion can be contrasted with the following:

Now, consider the following two types of phase changes in tech progress:

  1. A tech company singularity is a transition of a technology company into a fully general tech company.  This could be enabled by safe AGI (almost certainly not AAGI, which is unsafe), or it could be prevented by unsafe AGI destroying the company or the world.
  2. An AI singularity is a transition from having merely narrow AI technology to having AGI technology.

I think the tech company singularity concept, or some variant of it, is important for societal planning, and I’ve written predictions about it before, here:

A tech company singularity as a point of coordination and leverage

The reason I like this concept is that it gives an important point of coordination and leverage that is not AGI, but which interacts in important ways with AGI.  Observe that a tech company singularity could arrive

  1. before AGI, and could play a role in
    1. preventing AAGI, e.g., through supporting and enabling regulation;
    2. enabling AGI but not AAGI, such as if tech companies remain focussed on providing useful/controllable products (e.g., PaLM, DALL-E);
    3. enabling AAGI, such as if tech companies allow experiments training agents to fight and outthink each other to survive.
  2. after AGI, such as if the tech company develops safe AGI, but not AAGI (which is hard to control, doesn't enable the tech company to do stuff, and might just destroy it).

Points (1.1) and (1.2) are, I think, humanity’s best chance for survival.  Moreover, I think there is some chance that the first tech company singularity could come before the first AI singularity, if tech companies remain sufficiently oriented on building systems that are intended to be useful/usable, rather than systems intended to be flashy/scary.

How to steer tech company singularities?

The above suggests an intervention point for reducing existential risk: convincing a mix of

… to shame tech companies for building useless/flashy systems (e.g., autonomous agents trained in evolution-like environments to exhibit survival-oriented intelligence), so that they remain focussed on building usable/useful systems (e.g., DALL-E, PaLM) before and during a tech company singularity.  In other words, we should try to steer tech company singularities toward developing comprehensive AI services [LW · GW] (CAIS) rather than AAGI.

How to help steer scientists away from AAGI: 

How to convince the public that AAGI is bad: 

How to convince regulators that AAGI is bad:

How to convince investors that AAGI is bad: point out

Speaking personally, I have found it fairly easy to make these points since around 2016.  Now, with the rapid advances in AI we’ll be seeing from 2022 onward, it should be easier.  And, as Adam Scherlis (sort of) points out in an EA Forum comment [EA(p) · GW(p)], we shouldn't assume that no one new will ever care about AI x-risk, especially as AI x-risk becomes more evidently real.  So, it makes sense to re-try making points like these from time to time as discourse evolves.

Summary

In this post, I introduced the notion of a "tech company singularity", discussed how the idea might be usable as an important coordination and leverage point for reducing x-risk, and gave some ideas for convincing others to help steer tech company singularities away from AAGI.

None of this is to say we'll be safe from AI risk; far from it (e.g., see What Multipolar Failure Looks Like [LW · GW]).  Efforts to maintain cooperation on safety across labs and jurisdictions remain paramount, IMHO.

In any case, try on the "tech company singularity" concept and see if it does anything for you :)

4 comments

Comments sorted by top scores.

comment by devansh (devanshpandey) · 2022-05-13T20:07:14.234Z · EA(p) · GW(p)

>>after a tech company singularity, such as if the tech company develops safe AGI

I think this should be "after AGI"?

Replies from: critch
comment by Andrew Critch (critch) · 2022-05-13T21:05:26.978Z · EA(p) · GW(p)

Yes, thanks!  Fixed.

comment by Harrison Durland (Harrison D) · 2022-05-14T03:11:28.899Z · EA(p) · GW(p)

I’m a bit confused and wanted to clarify what you mean by AGI vs AAGI: are you of the belief that AGI could be safely controlled (e.g., boxed) but that setting it to “autonomously” pursue the same objectives would be unsafe?

Could you describe what an AGI system might look like in comparison to an AAGI?

comment by Peter S. Park · 2022-05-14T22:01:33.626Z · EA(p) · GW(p)

Thank you so much for this extremely important and brilliant post, Andrew! I really appreciate it.

I completely agree that the degree to which autonomous general-capabilities research is outpacing alignment research needs to be reduced (most likely via recruitment and social opinion dynamics), and that this seems neglected relative to its importance.

I wrote a post on a related topic recently, and it would be really great to hear what you think! (https://forum.effectivealtruism.org/posts/juhMehg89FrLX9pTj/a-grand-strategy-to-recruit-ai-capabilities-researchers-into [EA · GW])