comment by MichaelA · 2021-01-03T06:00:25.886Z
Thoughts on Toby Ord’s policy & research recommendations
In Appendix F of The Precipice, Ord provides a list of policy and research recommendations related to existential risk (reproduced here). This post contains lightly edited versions of some quick, tentative thoughts on those recommendations that I wrote in April 2020 (but didn’t post at the time).
Overall, I very much like Ord’s list, and none of his recommendations seem bad to me. So most of my commentary is on things I feel are arguably missing.
Regarding “other anthropogenic risks”
Ord’s list includes no recommendations specifically related to any of what he calls “other anthropogenic risks”, meaning:
- “dystopian scenarios”
- nanotechnology
- “back contamination” by microbes from planets we explore
- “our most radical scientific experiments”
(Some of his “General” recommendations would be useful for those risks, but there are no recommendations specifically targeted at those risks.)
This is despite the fact that Ord estimates a ~1 in 50 chance that “other anthropogenic risks” will cause existential catastrophe in the next 100 years. That’s ~20 times as high as his estimate for each of nuclear war and climate change (~1 in 1,000), and ~200 times as high as his estimate for all “natural risks” put together (~1 in 10,000). (Note that Ord’s “natural risks” includes supervolcanic eruption, asteroid or comet impact, and stellar explosion, but does not include “‘naturally’ arising pandemics”. See here for Ord’s estimates and some commentary on them.)
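(To spell out the arithmetic behind those comparisons, using the estimates as stated above:)

```latex
\frac{1/50}{1/1{,}000} = \frac{1{,}000}{50} = 20,
\qquad
\frac{1/50}{1/10{,}000} = \frac{10{,}000}{50} = 200.
```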
Meanwhile, Ord includes 10 recommendations specifically related to “natural risks”, 7 related to nuclear war, and 8 related to climate change. Those all look to me like good recommendations, and like things “someone” should do. But it seems odd to me that there are that many recommendations for those risks, yet none specifically targeted at a category Ord seems to think poses many times more existential risk.
Perhaps it’s just far less clear to Ord what, concretely, should be done about “other anthropogenic risks”. And perhaps he wanted his list to only include relatively concrete, currently actionable recommendations. But I expect that, if we tried, we could find or generate such recommendations related to dystopian scenarios and nanotechnology (the two risks from this category I’m most concerned about).
So one thing I’d recommend is that someone indeed have a go at finding or generating such recommendations! (I might have a go at that myself for dystopias, but probably not until at least 6 months from now.)
(See also posts tagged global dystopia, atomically precise manufacturing, or space.)
Regarding naturally arising pandemics
Similarly, Ord has no recommendations specifically related to what he called “‘naturally’ arising pandemics” (as opposed to “engineered pandemics”), which he estimates as posing as much existential risk over the next 100 years as all “natural risks” put together (~1 in 10,000). (Again, note that he doesn’t include “‘naturally’ arising pandemics” as a “natural risk”.)
This is despite the fact that, as noted above, he has 10 recommendations related to “natural risks”. This also seems somewhat strange to me.
That said, one of Ord's recommendations for “Emerging Pandemics” would also help with “‘naturally’ arising pandemics”. (This is the recommendation to “Strengthen the WHO’s ability to respond to emerging pandemics through rapid disease surveillance, diagnosis and control. This involves increasing its funding and powers, as well as R&D on the requisite technologies.”) But the other five recommendations for “Emerging Pandemics” do seem fairly specific to emerging rather than “naturally” arising pandemics.
Regarding engineered pandemics
Ord recommends “Increas[ing] transparency around accidents in BSL-3 and BSL-4 laboratories.” “BSL” refers to “biosafety level”, and 4 is the highest it gets.
In Chapter 5, Ord provides some jaw-dropping/hilarious/horrifying tales of accidents even among labs following BSL-4 standards (including two accidents in a row at one lab). So I’m very much on board with the recommendation to increase transparency around those accidents.
But I was a little surprised to see that Ord didn’t also call for things like:
- introducing more stringent standards (to prevent accidents, rather than just being transparent about them),
- introducing more monitoring and enforcement of compliance with those standards, and/or
- restricting some kinds of research as too dangerous even for labs following the highest standards
Some possible reasons why he may not have called for such things:
- He may have worried there’d be too much pushback, e.g. from the bioengineering community
- He may have thought those things just actually would be net-negative, even if not for pushback
- He may have felt that his other recommendations would effectively accomplish similar results
But I’d guess (with low confidence) that at least something along the lines of the three “missing recommendations” mentioned above - and beyond what Ord already recommends - would probably help reduce biorisk, if done as collaboratively with the relevant communities as is practical.
Regarding existential risk communication
One of Ord’s recommendations is to:
Develop better theoretical and practical tools for assessing risks with extremely high stakes that are either unprecedented or thought to have extremely low probability.
I think this is a great recommendation. (See also Database of existential risk estimates.) That recommendation also made me think that another strong recommendation might be something like:
Develop better approaches, incentives, and norms for communicating about risks with extremely high stakes that are either unprecedented or thought to have extremely low probability.
That sounds a bit vague, and I’m not sure exactly what form such approaches, incentives, or norms should take, or how one would implement them. (Though I think the same is true of Ord’s recommendation that inspired this one.)
That proposed recommendation of mine was partly inspired by the COVID-19 situation, and more specifically by the following part of an 80,000 Hours Podcast episode (which also gestures in the direction of concrete implications of my proposed recommendation):
Rob Wiblin: The alarm [about COVID-19] could have been sounded a lot sooner and we could have had five extra weeks to prepare. Five extra weeks to stockpile food. Five extra weeks to manufacture more hand sanitizer. Five extra weeks to make more ventilators. Five extra weeks to train people to use the ventilators. Five extra weeks to figure out what the policy should be if things got to where they are now.
Work was done in that time, but I think a lot less than could have been done if we had had just the forecasting ability to think a month or two ahead, and to think about probabilities and expected value. And this is another area where I think we could improve a great deal.
I suppose we probably won’t fall for this exact mistake again. Probably the next time this happens, the world will completely freak out everywhere simultaneously. But we need better ability to sound the alarm, potentially greater willingness actually on the part of experts to say, ‘I’m very concerned about this and people should start taking action, not panic, but measured action now to prepare,’ because otherwise it’ll be a different disaster next time and we’ll have sat on our hands for weeks wasting time that could have saved lives. Do you have anything to add to that?
Howie Lempel: I think one thing that we need as a society, although I don’t know how to get there, is an ability to see an expert say that they are really concerned about some risk. They think it likely won’t materialize, but it is absolutely worth putting a whole bunch of resources into preparing, and seeing that happen and then seeing the risk not materialize and not just cracking down on and shaming that expert, because that’s just going to be what happens most of the time if you want to prepare for things that don’t occur that often.
Regarding AI risk
Here are Ord’s four policy and research recommendations under the heading “Unaligned Artificial Intelligence”:
- Foster international collaboration on safety and risk management.
- Explore options for the governance of advanced AI.
- Perform technical research on aligning advanced artificial intelligence with human values.
- Perform technical research on other aspects of AGI safety, such as secure containment or tripwires.
These all seem to me like excellent suggestions, and I’m glad Ord has lent additional credibility and force to such recommendations by including them in a compelling and not-wacky-seeming book. (I think Human Compatible and The Alignment Problem were also useful in a similar way.)
But I was also slightly surprised to not see explicit mention of, for example:
- Work to understand what human values actually are, how they’re structured, which aspects of them we do/should care about, etc.
  - E.g., much of Stuart Armstrong’s research, or work that’s more towards the philosophical than the technical end
- “Agent foundations”/“deconfusion”/MIRI-style research
- Further formalisation and critique of the various arguments and models about AI risk
But this isn’t really a criticism, because:
- Perhaps the first two of the “missing recommendations” I mentioned were actually meant to be implicit in Ord’s third and fourth recommendations
- Perhaps Ord has good reasons to not see these recommendations as especially worth mentioning
- Perhaps Ord thought he’d be unable to concisely state such recommendations (or just the MIRI-style research one) in a way that would sound concrete and clearly actionable to policymakers
- Any shortlist of a person’s top recommendations will inevitably fail to fully please all readers
You can see a list of all the things I’ve written that summarise, comment on, or take inspiration from parts of The Precipice here.