comment by Evan R. Murphy · 2021-10-22T00:32:33.744Z
People in bunkers, "sardines" and why biorisks may be overrated as a global priority
I'm going to make the case here that certain problem areas currently prioritized highly in the longtermist EA community are overweighted in their importance/scale. In particular I'll focus on biorisks, but this could also apply to other risks such as non-nuclear global war and perhaps other areas as well.
I'll focus on biorisks because that area is currently highly prioritized by both Open Philanthropy and 80,000 Hours, and probably by other EA groups as well. If I'm right that biotechnology risks should be deprioritized, the relative priority of other issues - AI, growing effective altruism, global priorities research, nanotechnology risks and others - would increase significantly. That could help allocate more resources to areas that still pose existential threats to humanity.
I won't be taking issue with the longtermist worldview here. In fact, I'll assume the longtermist worldview is correct. Rather, I'm questioning whether biorisks really pose a significant existential/extinction risk to humanity. I don't doubt that they could lead to major global catastrophes which it would be really good to avert. I just think that it's extremely unlikely for them to lead to total human extinction or permanent civilization collapse.
This started when I was reading about disaster shelters. Nick Beckstead has a paper considering whether they could be a useful avenue for mitigating existential risks.[1] He concludes that there could be a couple of special scenarios, meriting further research, in which they would be; but by and large, new refuges don't seem like a great investment, because so many existing shelters and similar arrangements could already protect people from many global catastrophes. Specifically, the world already has a lot of government bunkers, private shelters, people working on submarines, and 100-200 uncontacted peoples, all of which are likely to produce survivors of certain otherwise devastating events.
A highly lethal engineered pandemic is among the biggest risks considered from biotechnology. It could potentially wipe out billions of people and lead to a collapse of civilization. But it is extremely unlikely to kill every last person: at least a few hundred or thousand would probably survive among those with access to existing bunkers or other disaster shelters, those working on submarines, and the dozens of tribes and other peoples living in remote isolation. Repopulating the Earth and rebuilding civilization would not be fast or easy, but these survivors could probably do it over many generations.
So are humans immune from all existential risks, thanks to preppers, "sardines"[2] and uncontacted peoples? No. There are certain globally catastrophic events that would likely spare no one. A superintelligent malevolent AI could probably hunt everyone down. The feared nanotechnological "gray goo" scenario could consume all matter on the planet. A nuclear war extreme enough to contaminate all land with radioactivity - even though it would likely have immediate survivors - might create such a mess that no humans would last long-term. There are probably others as well.
I've gone out on a bit of a limb here to claim that biorisks aren't an existential risk. I'm not a biotech expert, so there could be some biorisks that I'm not aware of. For example, could there be some kind of engineered virus that contaminates all food sources on the planet? I don't know and would be interested to hear from folks about that. This could be similar to a long-lasting global nuclear fallout in that it would have immediate survivors but not long-term survivors. However, mostly the biorisks I have seen people focus on seem to be lethal virulent engineered pandemics that target humans. As I've said, it seems unlikely this would kill all the humans in bunkers/shelters, submarines and on remote parts of the planet.
Even if there is some kind of lesser-known biotech risk which could be existential, my bottom-line claim is that there seems to be an important line between real existential risks that would annihilate all humans and near-existential risks that would spare some people in disaster shelters and shelter-like situations. I haven't seen this line discussed much and I think it could help with better prioritizing global problem areas for the EA community.
[1]: "How much could refuges help us recover from a global catastrophe?" https://web.archive.org/web/20181231185118/https://www.fhi.ox.ac.uk/wp-content/uploads/1-s2.0-S0016328714001888-main.pdf
[2]: I just learned that sailors use this term for submariners, which is pretty fun. https://www.operationmilitarykids.org/what-is-a-navy-squid-11-slang-nicknames-for-navy-sailors/
↑ comment by Linch · 2021-10-22T01:48:45.487Z
It's hard to have a discussion about this in the open, because many EAs (and presumably some non-EAs) with biosecurity expertise strongly believe that this topic is too dangerous to discuss openly in detail, owing to information hazards and related issues.
Speaking for myself, I briefly looked into the theory of information hazards and thought through some of the empirical consequences. My personal view is that while the costs of public dialogue about various x-risk topics (including biorisk) are likely underestimated, so are the benefits, and on balance more should be shared rather than less. I think Will Bradshaw and (I'm less confident) Anders Sandberg share* this view.
Unfortunately, it's hard to have a frank open conversation about biorisk before having a frank meta-conversation about the value of open conversations about biorisk, so here we are.
(EDIT: Note however that I am likely personally not aware of many of the empirical considerations that pro-secrecy biorisk people are aware of, which makes this conversation somewhat skewed)
*both of whom, unlike me, did nontrivial work in advancing the theory of infohazards, in addition to having biosecurity expertise.
↑ comment by Evan R. Murphy · 2021-10-23T07:23:31.551Z
Thanks, Linch. I didn't realize I might be treading near information hazards. It's good to know and an interesting point about the pros and cons of having such conversations openly.