Enlightened Concerns of Tomorrow
post by cassidynelson · 2018-03-15T05:29:29.854Z · EA Forum
I've written the following as a response to Steven Pinker's new book "Enlightenment Now", and specifically the dedicated chapter he wrote on existential threats.
Steven Pinker’s works span the fields of cognitive psychology, linguistics and human nature. His book “The Better Angels of Our Nature: Why Violence Has Declined” provided a strong voice against the prevailing rhetoric that human violence has been increasing. His assertions have not been without criticism, but for many, his evidence-based arguments have been a welcome dissident voice filled with seemingly justified optimism against prevailing unsupported cynicism about human nature.
Pinker’s latest book “Enlightenment Now: The Case for Reason, Science, Humanism and Progress” is a continuation within the theme of optimism. Starting with a summary of the building block ideals of the Enlightenment, he weaves data and statistics to tell a story of positive human progress. Pinker lays a foundation for discussing contemporary topics of concern, ranging from inequality to terrorism, the environment to the quality of life, arguing that by all relevant measures most human affairs have improved drastically, and asserting that knowledge and technology will alleviate most of our persisting worries in time. He writes, “… there is no limit to the betterments we can attain if we continue to apply knowledge to enhance human flourishing.”
However, carrying this optimism forward brazenly, he is quick to dismiss the notion that humans are at risk of future catastrophes. In a dedicated chapter entitled “Existential Threats” he caricatures those worried about global catastrophic risks as Luddites who are sounding an unfounded alarm about emerging and anthropogenic threats to humanity. He lists the Future of Humanity Institute as one of the “Techno-philanthropist […] bankrolled research institutes dedicated to discovering new existential threats and figuring out how to save the world from them.”
This focus on existential threats seems potentially dangerous from Pinker’s point of view, and he argues that concern about catastrophic risks can itself have unintended negative consequences. He cites the 1960s nuclear arms race and the 2003 invasion of Iraq as examples of times when fears about hypothetical disasters endangered humanity instead of protecting it. He later argues that human behaviour changes for the worse when contemplating possible demise, and that the “cumulative psychological effects of the drumbeat of doom” should be given consideration. He does not, however, suggest a mechanism by which to weigh these potentially negative consequences against possible catastrophic events, or propose a threshold at which we should concern ourselves with potential existential threats.
Pinker goes on to argue that the notion of a civilisation able to destroy itself is misconceived. He likens present concerns about existential risks from Artificial Intelligence (AI) to a “21st-century version of the Y2K bug.” He seems to misunderstand some basic tenets of AI safety research, writing that “Understanding does not obey Moore’s Law: knowledge is acquired by formulating explanations and testing them against reality, not by running an algorithm faster and faster.” He suggests the main concern in AI safety is the development of Artificial General Intelligence (AGI) and infers that the relative temporal distance of AGI makes it of little concern. With regard to developing a ‘Doomsday Computer’, he suggests: “The way to deal with this threat is straightforward: don’t build one.” However, this overlooks the concern within the AI safety community that such technological developments may not be intentional or foreseen, but inadvertently created.
Global Catastrophic Biological Risks
In Pinker’s discussion of bioterrorism, he becomes ensnared in the normalcy bias, suggesting that the minor role biological weapons have played in modern warfare since their international prohibition by the 1972 Biological Weapons Convention is sufficient reason to believe they will never pose an existential threat. This is a common problem when thinking about unprecedented risks, because their never-before-seen character makes them difficult for most humans to consider. Nonetheless, the counsel of science suggests our intuition can be an unreliable guide to reality, and the counsel of history warns that the past does not always augur the future well.
Engineered pathogens of pandemic potential are a novel threat with consequences that are difficult to ascertain, but which may be trajectory-altering for humankind. While Pinker discusses possible augmentations to pathogens that increase their transmissibility, virulence and durability, he also states “… breeding such a fine-tuned germ would require Nazi-like experiments on living humans that even terrorists (to say nothing of teenagers) are unlikely to carry off.” This ignores the recent apposite case of horsepox virus being synthesized de novo and resurrected by a team of scientists in Canada for $100,000. The prospect of small-scale actors engineering “fine-tuned germs” seems more conceivable every day.
Pinker asserts that the fact that risk assessments for highly improbable events vary by orders of magnitude is in itself a reason to set aside worries about global catastrophic events. He states bluntly, “You can’t worry about everything.” This is perhaps true in a technical sense, but there is a large difference between “everything” and low probability, high impact risks. Indeed, it has been demonstrated that uncertainty in risk assessments does not negate cause for consideration, but instead adds to the reason for concern.1 Furthermore, there is a strong case for considering low probability, high impact risks that may threaten human extinction and all future generations.2 Given the astronomical number of lives that could exist in the future,3 reducing the risk of their non-existence even by a small factor is a worthwhile endeavour.
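The expected-value logic behind that last point can be shown with a toy back-of-the-envelope calculation. The figures below are purely illustrative assumptions, not estimates drawn from the cited papers:

```python
# Toy expected-value sketch: all numbers are hypothetical placeholders.
# The point is only that when the population at stake is astronomically
# large, even a tiny absolute reduction in extinction risk corresponds
# to an enormous number of expected future lives.

future_lives = 1e16        # assumed potential future lives (illustrative)
risk_reduction = 1e-4      # assumed absolute reduction in extinction risk

expected_lives_protected = future_lives * risk_reduction
print(f"Expected future lives protected: {expected_lives_protected:.0e}")
```

Even under far more conservative assumptions than these, the product remains vast, which is why small reductions in existential risk can still be worthwhile.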
Global catastrophic risks are an emerging phenomenon in a fast-evolving risk landscape filled with uncertainty. As with all new technologies and unprecedented advances, the lack of historical precedent carries little weight against considering them plausible existential threats to humanity. Criticism is a valuable tool that invites debate and refinement; hasty dismissal, however, does not allow for balanced discourse on a topic that could be of existential importance.
Pinker concedes that he can offer no assurance that global catastrophe will never happen, instead insisting that “we can treat them not as apocalypses in waiting but as problems to be solved.” However, these solutions rely upon the hard work and due concern of individuals to mitigate risks that threaten the existence of civilisation and future generations.
1 Ord T, Hillerbrand R, Sandberg A. Probing the improbable: methodological challenges for risks with low probabilities and high stakes. Journal of Risk Research. 2010 Mar 1;13(2):191-205.
2 Millett P, Snyder-Beattie A. Existential Risk and Cost-Effective Biosecurity. Health Security. 2017 Aug 1;15(4):373-83.
3 Bostrom N. Existential risk prevention as global priority. Global Policy. 2013 Feb 1;4(1):15-31.
Thanks to Greg Lewis for his comments.