Posts

Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public 2023-03-09T10:40:07.206Z
[Crosspost] Why Uncontrollable AI Looks More Likely Than Ever 2023-03-08T15:33:49.651Z
Existential Risk Observatory: results and 2022 targets 2022-01-14T13:52:04.853Z
Introducing the Existential Risk Observatory 2021-08-12T15:51:12.184Z

Comments

Comment by Otto on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T14:47:22.230Z · EA · GW

I hope that this article sends the signal that pausing the development of the largest AI models is good, that informing society about AGI xrisk is good, and that we should find a coordination method (regulation) to make sure we can effectively stop training models that are too capable.

What I think we should do now is:

1) Write good hardware regulation policy proposals that could reliably pause development towards AGI.
2) Campaign publicly to get the best proposal implemented, first in the US and then internationally.

This could be a path to victory.

Comment by Otto on The Overton Window widens: Examples of AI risk in the media · 2023-03-24T00:04:06.087Z · EA · GW

Crossposting a comment: As co-author of one of the mentioned pieces, I'd say it's really great to see the AGI xrisk message mainstreaming. It isn't going nearly fast enough, though. Some (Hawking, Bostrom, Musk) have already spoken out about the topic for close to a decade. So far, that hasn't been enough to change common understanding. Those, such as myself, who hope that some form of coordination could save us should give all they have to make this go faster. Additionally, those who think regulation could work should work on robust regulation proposals, which are currently lacking. And those who can should work on international coordination, which is also currently lacking.

A lot of work to be done. But the good news is that the window of opportunity is opening, and a lot of people who currently aren't working on this could be. This could be a path to victory.

Comment by Otto on Announcing the European Network for AI Safety (ENAIS) · 2023-03-22T20:01:49.153Z · EA · GW

Great idea, congrats on the founding and looking forward to working with you!

Comment by Otto on Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public · 2023-03-10T22:45:35.513Z · EA · GW

Thanks, Peter, for the compliment! If there is something in particular you're interested in, please let us know and perhaps we can take it into account in future research projects!

Comment by Otto on Slowing down AI progress is an underexplored alignment strategy · 2022-07-20T18:04:18.475Z · EA · GW

I agree that this strategy is underexplored. I would prioritize work in this direction as follows:

  • What kind of regulation would be sufficiently robust to slow down, or even pause, all AGI capabilities actors? This should include research/software regulation, hardware regulation, and data regulation. I think a main reason why many people think this strategy is unlikely to work is that they don't believe any practical regulation would be sufficiently robust. But to my knowledge, that key assumption has never been properly investigated. It's time we do so.
  • How could we practically implement sufficiently robust regulation? What would be required to do so?
  • How can we inform sufficiently large portions of society about AI xrisk to get robust regulation implemented? We are planning to do more research on this topic at the Existential Risk Observatory this year (we already have some initial findings).

Comment by Otto on FYI: I’m working on a book about the threat of AGI/ASI for a general audience. I hope it will be of value to the cause and the community · 2022-07-09T10:43:02.743Z · EA · GW

Awesome initiative! At the Existential Risk Observatory, we are also focusing on outreach to the societal debate; I think that should be seen as one of the main opportunities to reduce existential risk. If you want to connect and exchange thoughts, that's always welcome.

Comment by Otto on AGI Safety Communications Initiative · 2022-06-14T22:13:29.836Z · EA · GW

Great idea to look into this!

It sounds a lot like what we have been doing at the Existential Risk Observatory (posts from us, website). We're more than willing to give you input insofar as that helps, and perhaps also to coordinate. In general, we think this is a really positive initiative and the space is wide open. So far, we have had good results. We also think there is ample space for other institutes to do this.

Let's coordinate further by email; you can reach us at info@existentialriskobservatory.org. Looking forward to learning from each other!

Comment by Otto on Existential Risk Observatory: results and 2022 targets · 2022-06-14T10:19:02.079Z · EA · GW

Enough has happened to warrant a small update about the Existential Risk Observatory.

First, we made progress in our core business: informing the public debate. We have published two more op-eds (in Dutch, one with a co-author from FLI) in a reputable, large newspaper. Our pieces warn against existential risk, especially from AGI, and propose low-hanging-fruit measures the Dutch government could take to reduce risk (e.g. extra AI safety research).

A change with respect to the previous update is that we now see serious, leading journalists becoming interested in the topic. One leading columnist has already written a column about AI existential risk in a leading newspaper. Another journalist is planning to write a major article about it. The same journalist proposed holding a debate about AI xrisk at the leading debate center, which would be well-positioned to influence yet others, and he offered to use his network for the purpose. This is definitely not a fully-fledged, informed societal debate yet, but it does update our expectations in relevant ways:

  • The idea of op-eds translating into broader media attention is realistic.
  • That attention is generally constructive, and not derogatory.
  • Most of the informing takes place in a social, personal context.

In our experience, the process is really about informing leaders of the societal debate, who then inform others. For example, we organized an existential risk drinks event where thought leaders, EAs, and journalists could talk to each other, which worked very well. Key figures should hear accurate existential risk information from different sides. Social proof is key. Being honest, sincere, and coherent, and trying to receive as well as send, goes a long way, too.

Another update is that we will receive funding from the SFF and are in serious discussions with two other funds. We are really happy about this, as it shows that our approach, reducing existential risk by informing the public debate, has backing in the existential risk community. We are still resource-constrained, but also massively manpower- and management-constrained. Our vision is a world where everyone is informed about existential risk. We cannot achieve this vision alone, and would like other institutes (new and existing) to join us in the communication effort. That we have received funding for informing the societal debate is evidence that others can, too. We are happy to share what we are doing and how others could do the same in talks, for example for local EA groups or at events.

Our targets for this year remain the same:

  1. Publish at least three articles about existential risk in leading media in the Netherlands.
  2. Publish at least three articles about existential risk in leading media in the US.
  3. Receive funding for stability and future upscaling.

We will start working on next year’s targets in Q4.

Comment by Otto on Should we buy coal mines? · 2022-05-06T11:28:56.473Z · EA · GW

Anyway, I posted this here because I think it somewhat resembles the policy of buying and closing coal mines: you're deliberately creating scarcity. Since there are losers when you do that, policymakers might respond. I think creating scarcity in carbon rights is more efficient and much easier to implement than creating scarcity in coal, but it does suffer from some of the same drawbacks.

Comment by Otto on Should we buy coal mines? · 2022-05-06T11:24:06.488Z · EA · GW

Possibly, in the medium term. To counter that, you might want to support groups who lobby for lower carbon scheme ceilings as well.

Comment by Otto on How I failed to form views on AI safety · 2022-05-05T08:39:20.629Z · EA · GW

Hey I wasn't saying it wasn't that great :)

I agree that the difficult part is getting to general intelligence, also regarding data. Compute, algorithms, and data availability are all needed to get to this point. It seems really hard to know beforehand what kind, and how much, of algorithms and data one would need. I agree that basically only one source of data, text, could well be insufficient. There was a post I read on a forum somewhere (could have been here) from someone who had GPT-3 answer questions with instructions like 'let all odd rows of your answer be empty'. GPT-3 failed at all these kinds of assignments, showing a lack of comprehension. Still, the 'we haven't found the asymptote' argument from OpenAI (intelligence does increase with model size and that increase doesn't seem to stop, implying that we'll hit AGI eventually) is not completely unconvincing either. It bothers me that no one can completely rule out that large language models might hit AGI just by scaling them up. It doesn't seem likely to me, but from a risk management perspective, that's not the point. An interesting perspective I'd never heard before from intelligent people is that AGI might actually need embodiment to gather the relevant data. (They also think it would need social skills first - also an interesting thought.)

While it's hard to know how much (and what kind of) algorithmic improvement and data is needed, it seems doable to estimate the amount of compute needed, namely what's in a brain plus or minus a few orders of magnitude. I find it hard to imagine that evolution can be beaten by more than a few orders of magnitude in algorithmic efficiency (the other way round is somewhat easier to imagine, but still unlikely in a hundred-year timeframe). I think people have focused on compute because it's the most forecastable factor, not because it's the only one that's important.

Still, there is a large gap between what I think are essentially thought experiments (relevant ones, though!) leading to concepts such as AGI and the singularity, and actual present-day AI. I'm definitely interested in ideas filling that gap. I think 'AGI safety from first principles' by Richard Ngo is a good attempt; I guess you've read that too, since it's part of the AGI Safety Fundamentals curriculum? What did you think about it? Do you know any similar or even better papers on the topic?

It could be that belief too, yes! I think I'm a bit exceptional in the sense that I have no problem imagining human beings achieving really complex things, but also no problem imagining human beings failing miserably at what appear to be really easy coordination issues. My first thought when I heard about AGI, recursive self-improvement, and human extinction was 'ah yeah, that sounds like exactly the kind of thing engineers/scientists would do!' I guess some people believe engineers/scientists could never make AGI (I disagree), while others think they could, but would not be stupid enough to screw up badly enough to actually cause human extinction (I disagree).

Comment by Otto on Should we buy coal mines? · 2022-05-05T07:51:38.355Z · EA · GW

If you want to spend money quickly on reducing carbon dioxide emissions, you can buy emission rights and destroy them. In schemes such as the EU ETS, destroying emission rights should lead directly to emission reductions. Technically, this has already been implemented. It is probably even cheaper to buy and destroy rights in similar schemes in other regions.

Comment by Otto on How I failed to form views on AI safety · 2022-05-01T19:56:58.746Z · EA · GW

Hi AM, thanks for your reply.

Regarding your example, I think it's quite specific, as you note too. That doesn't mean I think it's invalid, but it does get me thinking: how would a human learn this task? A human intelligence wasn't trained on many specific tasks in order to be able to do them all. Rather, it first acquired general intelligence (apparently, somewhere), and was later able to apply this to an almost infinite number of specific tasks, typically with only a few examples needed. I would guess that an AGI would solve problems in a similar way: first learn general intelligence (somehow), then learn specific tasks quickly with little data needed.

For your example, if the AGI really needed to do this task, I'd say it could find ways to gather the data itself, just like a human who wanted to learn this skill would after first acquiring some form of general intelligence. A human doctor might watch the healthy joint moving, gathering visual data; might listen to the joint moving, gathering audio data; or might put her hand on the joint, gathering tactile data. The AGI could similarly film and record the healthy joint moving with already available cameras and microphones, use data already available online, or, worst case, send in a drone with a camera and a sound recorder. It could even send in a robot to gather tactile data if needed.

Of course, current AI lacks certain skills that would be necessary to solve such a general problem in such a general way, such as really understanding the meaning behind a question, being able to plan a solution (including acquiring drones and robots in the process), and probably others. These issues would need to be solved first, so there is still a long way to go. But with the manpower, investment, and time (e.g. 100 years) available, I think we should assign a probability of at least tens of percent that this type of general intelligence, including planning and acting effectively in the real world, will eventually be developed. I'd say it is still unclear whether or not it will be based on a neural network (large language model or otherwise).

Perhaps the difference between longtermists and shorttermists is imagination, rather than intelligence? And I'm not saying which side is right: perhaps we have too much imagination; on the other hand, perhaps you have too little. We will only really know when the time comes.

Comment by Otto on How I failed to form views on AI safety · 2022-04-22T07:37:49.517Z · EA · GW

Thanks for the reply, and for trying to attach numbers to your thoughts!

So our main disagreement lies in (1). I think this is a common source of disagreement, so it's important to look into it further.

Would you say that the chance of ever building AGI is similarly tiny, or just the chance in the next hundred years? In other words, is this a discussion about possibility or about timelines?

Comment by Otto on How I failed to form views on AI safety · 2022-04-21T10:52:12.624Z · EA · GW

Hi Ada-Maaria, glad to have talked to you at EAG, and congrats on writing this post - I think it's very well written and interesting from start to finish! I also think you're better informed on the topic than most people in EA who are convinced of AI xrisk, certainly including myself.

As someone who is convinced of AI xrisk, I find it helpful to divide AI xrisk into these three steps. I think the probability of superintelligence xrisk is the product of these three probabilities:

1) P(AGI in next 100 years)
2) P(AGI leads to superintelligence)
3) P(superintelligence destroys humanity)

Would you like to share your estimates? I think it would make the discussion more targeted, and I think no estimate would be very foolish since basically no-one knows. :) or maybe :(

Personally, I guess my estimates are something like 1) 50%,  2) 70%, 3) 40% (not based on much).
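For concreteness, multiplying these illustrative guesses gives the rough overall probability implied by the decomposition above (this is just arithmetic on the numbers I stated, nothing more; P_1, P_2, P_3 are shorthand for the three probabilities listed above):

```latex
P(\text{xrisk}) \;\approx\; P_1 \cdot P_2 \cdot P_3
               \;=\; 0.5 \times 0.7 \times 0.4
               \;=\; 0.14 \;\approx\; 14\%
```

So under these (admittedly rough) numbers, the implied overall probability is on the order of 14%; the value of the decomposition is mainly that it locates any disagreement in one of the three factors.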

It would be really great to have more and better papers on this (peer reviewed), so that disagreement can be made as small as possible - though it will probably never disappear.

Comment by Otto on Existential Risk Observatory: results and 2022 targets · 2022-01-15T16:12:14.280Z · EA · GW

Thanks for that context and for your thoughts! We understand the worries you mention, and as you say, op-eds are a good way to avoid them. Most (>90%) of the other mainstream media articles we've seen about existential risk (there are a few dozen) fortunately did not suffer from these issues either.

Comment by Otto on Existential Risk Observatory: results and 2022 targets · 2022-01-15T15:53:51.789Z · EA · GW

Thank you for the heads up! We would love to have more information about general audience attitudes towards existential risk, especially related to AI and other novel tech. Particularly interesting for us would be research into which narratives work best. We've done some of this ourselves, but it would be interesting to see whether our results match others'. So yes, please let us know when you have this available!