MichaelA's Shortform

post by MichaelA · 2019-12-22T05:35:17.473Z · score: 10 (4 votes) · EA · GW · 56 comments


comment by MichaelA · 2020-05-05T04:54:40.433Z · score: 18 (8 votes) · EA(p) · GW(p)

To provide us with more empirical data on value drift [EA(p) · GW(p)], would it be worthwhile for someone to work out how many EA Forum users each year have stopped being users the next year? E.g., how many users in 2015 haven't used it since?

Would there be an easy way to do that? Could CEA do it easily? Has anyone already done it?

One obvious issue is that it's not necessary to read the EA Forum in order to be "part of the EA movement". And this applies even more strongly to reading the EA Forum while logged in, to commenting, and to posting, which are presumably the activities there'd actually be data on.

But it still seems like this could provide useful evidence. And it seems like this evidence would have a different pattern of limitations to some other evidence we have (e.g., from the EA Survey), such that combining these lines of evidence could help us get a clearer picture of the things we really care about.
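
As a rough sketch of what that calculation could look like (the data format, identifiers, and numbers below are purely hypothetical, not based on CEA's actual data):

```python
from collections import defaultdict

def yearly_retention(activity):
    """activity: iterable of (user_id, year) pairs, each indicating that a user
    was active (logged-in reading, commenting, or posting) in that year.
    Returns, for each year, the fraction of that year's active users who were
    also active the following year."""
    users_by_year = defaultdict(set)
    for user_id, year in activity:
        users_by_year[year].add(user_id)

    retention = {}
    for year in sorted(users_by_year):
        this_year = users_by_year[year]
        next_year = users_by_year.get(year + 1, set())
        retention[year] = len(this_year & next_year) / len(this_year)
    return retention

# Made-up example: "a" is active in 2015 and 2016, "b" only in 2015.
print(yearly_retention([("a", 2015), ("b", 2015), ("a", 2016)]))
# -> {2015: 0.5, 2016: 0.0}
```

The computation itself seems easy for anyone with access to the underlying activity data; the harder part would be interpreting the results, for the reasons noted above.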

comment by MichaelA · 2020-02-24T17:51:57.889Z · score: 18 (8 votes) · EA(p) · GW(p)

Collection [EA · GW] of sources that seem very relevant to the topic of civilizational collapse and/or recovery

Civilization Re-Emerging After a Catastrophe - Karim Jebari, 2019 (see also my commentary on that talk [EA · GW])

Civilizational Collapse: Scenarios, Prevention, Responses - Denkenberger & Ladish, 2019

Update on civilizational collapse research [EA · GW] - Ladish, 2020 (personally, I found Ladish's talk more useful; see the above link)

Modelling the odds of recovery from civilizational collapse [EA · GW] - Michael Aird (i.e., me), 2020

The long-term significance of reducing global catastrophic risks - Nick Beckstead, 2015 (Beckstead never actually writes "collapse", but has very relevant discussion of probability of "recovery" and trajectory changes following non-extinction catastrophes)

How much could refuges help us recover from a global catastrophe? - Nick Beckstead, 2015 (he also wrote a related EA Forum post [EA · GW])

Various EA Forum posts by Dave Denkenberger [EA · GW] (see also ALLFED's site)

Aftermath of Global Catastrophe - GCRI, no date (this page has links to other relevant articles)

A (Very) Short History of the Collapse of Civilizations, and Why it Matters [EA · GW] - David Manheim, 2020

A grant application from Ladish, and Oliver Habryka's thoughts on it [EA(p) · GW(p)] - 2019

Civilisational collapse has a bright past – but a dark future - Luke Kemp, 2019

Are we on the road to civilisation collapse? - Luke Kemp, 2019

Civilization: Institutions, Knowledge and the Future - Samo Burja, 2018

Secret of Our Success - Henrich, 2015 (not about collapse, but it has many relevant insights, in my opinion) (see also the Slate Star Codex review)

Is there a subfield of economics devoted to "fragility vs resilience"? [EA · GW] (and the answers there) - steve6320 and various commenters, 2020

I also have some as-yet unpublished work on collapse & recovery that I'm happy to share upon request.

Things about existential risk or GCRs [EA(p) · GW(p)] more broadly, but with relevant parts

Toby Ord on the precipice and humanity’s potential futures - 2020 (the first directly relevant part is in the section on nuclear war)

The Precipice - Ord, 2020

Long-Term Trajectories of Human Civilization - Baum et al., 2019 (the authors never actually write "collapse", but their section 4 is very relevant to the topic)

Towards Comprehensive Existential Risk Assessment: A Bayesian Network Model And Proposal For Assessment - Rozendal, 2019, working paper

Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter - Cotton-Barratt, Daniel, Sandberg, 2020

Existential Risk Strategy Conversation with Holden Karnofsky, Eliezer Yudkowsky, and Luke Muehlhauser - 2014

Causal diagrams of the paths to existential catastrophe [EA · GW] - Michael Aird, 2020

Stuart Armstrong interview - 2014 (the relevant section is 7:45-14:30)

Existential Risk Prevention as Global Priority - Bostrom, 2012

The Future of Humanity - Bostrom, 2007 (covers similar points to the above paper)

How Would Catastrophic Risks Affect Prospects for Compromise? - Tomasik, 2013/2017

Crucial questions for longtermists [EA · GW] - Michael Aird, 2020

Things that sound relevant, but which I haven't read/watched/listened to yet

Catastrophe, Social Collapse, and Human Extinction - Robin Hanson, 2007

The Fragile World Hypothesis: Complexity, Fragility, and Systemic Existential Risk - David Manheim

Existential Risks: Exploring a Robust Risk Reduction Strategy - Karim Jebari, 2015

Islands as refuges for surviving global catastrophes - Turchin & Green, 2018

Videos and slides from a Princeton Workshop on Historical Systemic Collapse - 2019

Feeding Everyone No Matter What - Denkenberger & Pearce, 2014

Why and how civilisations collapse - Kemp [CSER]

https://en.wikipedia.org/wiki/Societal_collapse

https://en.wikipedia.org/wiki/Collapse:_How_Societies_Choose_to_Fail_or_Succeed [book]

https://en.wikipedia.org/wiki/The_Knowledge:_How_to_Rebuild_Our_World_from_Scratch - Dartnell [book] (there's also this TEDx Talk by the author, but I didn't find that very useful from a civilizational collapse perspective)

The Collapse of Complex Societies - Joseph Tainter, 1988

1177 B.C.: The Year Civilization Collapsed - Eric Cline, 2014

On Collapse Risk (C-Risk) [EA · GW] - Pawntoe4, 2020

I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

comment by gavintaylor · 2020-06-28T19:20:37.497Z · score: 5 (3 votes) · EA(p) · GW(p)

Guns, Germs, and Steel - I felt this provided a good perspective on the ultimate factors leading up to agriculture and industry.

comment by MichaelA · 2020-06-28T22:56:11.360Z · score: 2 (1 votes) · EA(p) · GW(p)

Great, thanks for adding that to the collection!

comment by MichaelA · 2020-09-18T07:01:28.123Z · score: 3 (2 votes) · EA(p) · GW(p)

Suggested by a member of the History and Effective Altruism Facebook group:

comment by MichaelA · 2020-06-26T07:17:14.720Z · score: 17 (6 votes) · EA(p) · GW(p)

Collection [EA · GW] of EA analyses of political polarisation

EA considerations regarding increasing political polarization [EA · GW] - Alfred Dreyfus, 2020

Adapting the ITN framework for political interventions & analysis of political polarisation [EA · GW] - OlafvdVeen, 2020

Thoughts on electoral reform [EA · GW] - Tobias Baumann, 2020

Risk factors for s-risks - Tobias Baumann, 2019

(Perhaps some Slate Star Codex posts? I can't remember for sure.)

Notes

I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

Also, I'm aware that there has been a vast amount of non-EA analysis of this topic. The reasons I'm collecting only EA analyses here are that:

  • their precise focuses or methodologies may be more relevant to other EAs than would be the case with non-EA analyses
  • links to non-EA work can be found in most of the things I list here
  • I'd guess that many collections of non-EA analyses of these topics already exist (e.g., in reference lists)
comment by MichaelA · 2020-03-26T10:58:13.599Z · score: 16 (10 votes) · EA(p) · GW(p)

Collection [EA · GW] of EA analyses of how social movements rise, fall, can be influential, etc.

Movement collapse scenarios [EA · GW] - Rebecca Baron

Why do social movements fail: Two concrete examples. [EA · GW] - NunoSempere

What the EA community can learn from the rise of the neoliberals [? · GW] - Kerry Vaughan

How valuable is movement growth? [? · GW] - Owen Cotton-Barratt (and I think this is sort-of a summary of that article)

Long-Term Influence and Movement Growth: Two Historical Case Studies [EA · GW] - Aron Vallinder, 2018

Some of the Sentience Institute's research, such as its "social movement case studies"* and the post How tractable is changing the course of history?

A Framework for Assessing the Potential of EA Development in Emerging Locations [EA · GW]* - jahying

EA considerations regarding increasing political polarization [EA · GW] - Alfred Dreyfus, 2020

Hard-to-reverse decisions destroy option value [? · GW] - Schubert & Garfinkel, 2017

These aren't quite "EA analyses", but Slate Star Codex has several relevant book reviews and other posts, such as:

It appears Animal Charity Evaluators did relevant research, but I haven't read it; they described it as having been "of variable quality", and they've discontinued it.

In this comment [EA · GW], Pablo Stafforini refers to some relevant work that sounds like it's non-public.

See also my collection of work on value drift [EA(p) · GW(p)], and my list of some history topics it might be very valuable to investigate [EA · GW].

*Asterisks indicate I haven't read that source myself, and thus that the source might not actually be a good fit for this list.

Notes

I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

Also, I'm aware that there are a lot of non-EA analyses of these topics. The reasons I'm collecting only EA analyses here are that:

  • their precise focuses or methodologies may be more relevant to other EAs than would be the case with non-EA analyses
  • links to non-EA work can be found in most of the things I list here
  • I'd guess that many collections of non-EA analyses of these topics already exist (e.g., in reference lists)
comment by vaidehi_agarwalla · 2020-07-14T01:19:47.229Z · score: 7 (2 votes) · EA(p) · GW(p)

I have a list here that has some overlap but also some new things: https://docs.google.com/document/d/1KyVgBuq_X95Hn6LrgCVj2DTiNHQXrPUJse-tlo8-CEM/edit#

comment by MichaelA · 2020-07-14T02:50:22.178Z · score: 2 (1 votes) · EA(p) · GW(p)

That looks very helpful - thanks for sharing it here!

comment by Shri_Samson · 2020-08-31T01:02:32.644Z · score: 3 (2 votes) · EA(p) · GW(p)

This is probably too broad, but here's Open Philanthropy's list of case studies on the History of Philanthropy, which includes ones they have commissioned. Most are not done by EAs, with the exception of Some Case Studies in Early Field Growth by Luke Muehlhauser.

Edit: fixed links

comment by MichaelA · 2020-08-31T06:00:55.162Z · score: 2 (1 votes) · EA(p) · GW(p)

Yeah, I think those are relevant, thanks for mentioning them!

It looks like the links lead back to your comment for some reason (I think I've done similar in the past). So, for other readers, here are the links I think you mean: 1, 2.

(Also, FWIW, I think if an analysis is by a non-EA but commissioned by an EA, I'd say that essentially counts as an "EA analysis" for my purposes. This is because I expect that such work's "precise focuses or methodologies may be more relevant to other EAs than would be the case with [most] non-EA analyses".)

comment by MichaelA · 2020-09-08T09:06:06.152Z · score: 13 (7 votes) · EA(p) · GW(p)

Reflections on data from a survey about things I’ve written 

I recently requested [EA · GW] people take a survey on the quality/impact of things I’ve written. So far, 22 people have generously taken the survey. (Please add yourself to that tally!)

Here I’ll display summaries of the first 21 responses (I may update this later), and reflect on what I learned from this.[1] 

I had also made predictions about what the survey results would be, to give myself some sort of ramshackle baseline to compare the results against. I was going to share those predictions, but then felt no one would be interested; let me know if you'd like me to add them in a comment.

For my thoughts on how worthwhile this was and whether other researchers/organisations should run similar surveys, see Should surveys about the quality/impact of research outputs be more common?  [EA · GW]

(Note that many of the things I've written were related to my work with Convergence Analysis, but my comments here reflect only my own opinions.)

The data

Q1-Q4: [charts of responses not reproduced here]

Q5: “If you think anything I've written has affected your beliefs, please say what that thing was (either titles or roughly what the topic was), and/or say how it affected your beliefs.”

(I didn’t ask for permission to share people’s comments, so, for this and the other comment questions, I’ll just highlight some recurring themes or seemingly noteworthy specifics.)

Q6: [chart of responses not reproduced here]

Q7: “If you think anything I've written has affected your decisions or plans, please say what that thing was (either titles or roughly what the topic was), and/or say how it affected your decisions or plans.”

Q8: [chart of responses not reproduced here]

Q8, text box: “If you answered "Yes" to either of the above, could you say a bit about why?”

  • 15/21 respondents filled in this text box
  • Some respondents indicated things “on their end” (e.g., busyness, attention span), or that they’d have said yes to one or both of those questions for most authors rather than just for me in particular
  • Some respondents mentioned topics just not seeming relevant to their interests
  • Some respondents mentioned my posts being long, being rambly, or failing to have a summary
  • Some respondents mentioned they were already well-versed in the areas I was writing about and didn’t feel my posts were necessary for them

Q9: “Do you have any other feedback on specific things I've written, my general writing style, my topic choices, or anything else?”

  • 10/21 respondents answered this
  • Several non-specific positive comments/encouragements
  • Several positive or neutral comments on me having a lot of output
  • Several comments suggesting I should be more concise, use summaries more consistently, and/or be clearer about what the point of what I’m writing is
  • Some comments indicating appreciation of my summaries, collections, and efforts to make ideas accessible
  • Some comments on my writing style and clarity being good
  • Some comments that my original research wasn’t very impressive
  • One comment that I seem to be hung up on defining things precisely/prescriptively
    • (I don’t actually endorse linguistic prescriptivism, and remember occasionally trying to make that explicit. But I’ll take this as useful data that I’ve sometimes accidentally given that impression, and try to adjust accordingly.)

Q10: “If you would like to share your name, please do so below. But this is 100% voluntary - you're not at all obliged to do so :)”

  • 6/21 respondents gave their name/username
  • 2 gave their email in case I wanted to follow up

Some takeaways from all this 

  • Responses were notably more positive than expected for some questions, and notably less positive for others
    • I don’t think this should notably change my bottom-line view of the overall quality and impact of my work to date
    • But it does make me a little less uncertain about that all-things-considered view, as I now have slightly more data that roughly supports it
    • In turn, this updates me towards being a little more confident that it makes sense for me to focus on pursuing an EA research career for now (rather than, e.g., switching to operations or civil service roles)
      • This is because I’m now slightly less worried that I’m being strongly influenced by overconfidence or motivated reasoning. (I already wanted to do research or writing before learning about EA.)
  • I should definitely more consistently include summaries, and/or in other ways signal early and clearly what the point of a post is
    • I was already aiming to move in this direction, and had predicted responses would often mention this, but this has still given me an extra push
  • I should look out for ways in which I might appear linguistically prescriptive or overly focused on definitions/precision
  • I should more seriously consider moving more towards concision, even at the cost of precision, clarity, or comprehensiveness
    • Though I’m still not totally sold on that
    • I’m also aware that this shortform comment is not a great first step!
  • I should consider moving more towards concision, even at the cost of quantity/speed of output
    • With extra time on a given post, I could perhaps find ways to be more concise without sacrificing other valuable things
  • I should feel less like I “have to” produce writings rapidly
    • This point is harder to explain briefly, so I’ll just scratch the surface here
    • I don’t actually expect this to substantially change my behaviours, as that feeling wasn’t the main reason for my large amount of output
    • But if my output slows for some other reason, I think I’ll now not feel (as) bad about that
  • People have found my summaries and collections very useful, and some people have found my original research not so useful/impressive
    • The “direction” of this effect is in line with my expectations, but the strength was surprising
    • I’ve updated towards more confidence that my summaries and (especially) my collections were valuable and worth making, and this may slightly increase the already-high chance that I’ll continue creating that sort of thing
    • But this is also slightly confusing, as my original research/ideas and/or aptitude for future original research seems to have stood me in good stead in various job and grant selection processes
      • And I don’t have indications that my summaries or collections helped there, though they may have
  • Much of my work to date may be less useful for more experienced/engaged EAs than for less experienced/engaged EAs
    • This is in line with my sense that I was often trying to make ideas more accessible, make getting up to speed easier, etc.
  • There seemed to be a weak correlation between how recently something was posted and how often it was positively mentioned
    • This broadly aligns with trends from other data sources (e.g., researchers reaching out to me, upvotes)
    • This could suggest that:
      • my work is getting better
      • people are paying more attention to things written by me, regardless of their quality
      • people just remember the recent stuff more
    • I’d guess all three of those factors play some role

(I also have additional thoughts that are fuzzier or even less likely to be of interest to anyone other than me.)

[1] There are of course myriad reasons to not read into this data too much, including that: 

  • it’s from a sample of only 21 people
  • the sample was non-representative, and indeed self-selecting (so it may, for example, disproportionately represent people who like my work)
  • the responses may be biased towards not hurting my feelings

That said, I think I can still learn something from this data, especially given flaws in other data sources I have. (E.g., comments from people who choose to randomly and non-anonymously reach out to me may be even more positively biased.)

If you’ve made it this far, you may also be interested in the above-mentioned Should surveys about the quality/impact of research outputs be more common? [EA · GW]

comment by HowieL · 2020-09-11T14:42:53.030Z · score: 9 (5 votes) · EA(p) · GW(p)

"People have found my summaries and collections very useful, and some people have found my original research not so useful/impressive"

I haven't read enough of your original research to know whether it applies in your case but just flagging that most original research has a much narrower target audience than the summaries/collections, so I'd expect fewer people to find it useful (and for a relatively broad survey to be biased against it).

That said, as you know, I think your summaries/collections are useful and underprovided.

comment by MichaelA · 2020-09-11T17:50:23.376Z · score: 2 (1 votes) · EA(p) · GW(p)

Good point. 

Though I guess I suspect that, if the reason a person finds my original research not so useful is just because they aren't the target audience, they'd be more likely to either not explicitly comment on it or to say something about it not seeming relevant to them. (Rather than making a generic comment about it not seeming useful.) 

But I guess this seems less likely in cases where: 

  • the person doesn't realise that the key reason it wasn't useful is that they weren't the target audience, or
  • the person feels that what they're focused on is substantially more important than anything else (because then they'll perceive "useful to them" as meaning a very similar thing to "useful")

In any case, I'm definitely just taking this survey as providing weak (though useful) evidence, and combining it with various other sources of evidence.

comment by HowieL · 2020-09-11T18:38:34.831Z · score: 1 (1 votes) · EA(p) · GW(p)

Seems reasonable

comment by MichaelA · 2020-04-18T08:55:01.888Z · score: 11 (5 votes) · EA(p) · GW(p)

Epistemic status: Unimportant hot take on a paper I've only skimmed.

Watson and Watson write:

Conditions capable of supporting multicellular life are predicted to continue for another billion years, but humans will inevitably become extinct within several million years. We explore the paradox of a habitable planet devoid of people, and consider how to prioritise our actions to maximise life after we are gone.

I react: Wait, inevitably? Wait, why don't we just try to not go extinct? Wait, what about places other than Earth?

They go on to say:

Finally, we offer a personal challenge to everyone concerned about the Earth’s future: choose a lineage or a place that you care about and prioritise your actions to maximise the likelihood that it will outlive us. For us, the lineages we have dedicated our scientific and personal efforts towards are mistletoes (Santalales) and gulls and terns (Laridae), two widespread groups frequently regarded as pests that need to be controlled. The place we care most about is south-eastern Australia – a region where we raise a family, manage a property, restore habitats, and teach the next generations of conservation scientists. Playing favourites is just as much about maintaining wellbeing and connecting with the wider community via people with shared values as it is about maximising future biodiversity.

I react: Wait, seriously? Your recipe for wellbeing is declaring the only culture-creating life we know of (ourselves) irreversibly doomed, and focusing your efforts instead on ensuring that mistletoe survives the ravages of deep time?

Even if your focus is on maximising future biodiversity, I'd say it still makes sense to set your aim a little higher - try to keep us afloat to keep more biodiversity afloat. (And it seems very unclear to me why we'd value biodiversity intrinsically, rather than individual nonhuman animal wellbeing, even if we cared more about nature than humans, but that's a separate story.)

This was a reminder to me of how wide the gulf can be between different people's ways of looking at the world.

It also reminded me of this quote from Dave Denkenberger:

In 2011, I was reading this paper called Fungi and Sustainability, and the premise was that after the dinosaur killing asteroid, there would not have been sunlight and there were lots of dead trees and so mushrooms could grow really well. But its conclusion was that maybe when humans go extinct, the world will be ruled by mushrooms again. I thought, why don’t we just eat the mushrooms and not go extinct?
comment by MichaelA · 2020-04-02T15:56:43.210Z · score: 10 (4 votes) · EA(p) · GW(p)

If a typical mammalian species survives for ~1 million years, should a 200,000 year old species expect another 800,000 years, or another million years?

tl;dr I think it's "another million years", or slightly longer, but I'm not sure.

In The Precipice, Toby Ord writes:

How much of this future might we live to see? The fossil record provides some useful guidance. Mammalian species typically survive for around one million years before they go extinct; our close relative, Homo erectus, survived for almost two million.[38] If we think of one million years in terms of a single, eighty-year life, then today humanity would be in its adolescence - sixteen years old, just coming into our power; just old enough to get ourselves into serious trouble.

(There are various extra details and caveats about these estimates in the footnotes.)

Ord also makes similar statements on the FLI Podcast, including the following:

If you think about the expected lifespan of humanity, a typical species lives for about a million years [I think Ord meant "mammalian species"]. Humanity is about 200,000 years old. We have something like 800,000 or a million or more years ahead of us if we play our cards right and we don’t lead to our own destruction. The analogy would be 20% of the way through our life[...]

I think this is a strong analogy from a poetic perspective. And I think that highlighting the typical species' lifespan is a good starting point for thinking about how long we might have left. (Although of course we could also draw on many other facts for that analysis, as Ord discusses in the book.)

But I also think that there's a way in which the lifespan analogy might be a bit misleading. If a human is 70, we expect they have less time left to live than if a human is 20. But I'm not sure whether, if a species is 700,000 years old, we should expect that species to go extinct sooner than a species that is 200,000 years old will.

My guess would be that a ~1 million year lifespan for a typical mammalian species would translate into a roughly 1 in a million chance of extinction each year, which doesn't rise or fall very much in a predictable way over most of the species' lifespan. Specific events, like changes in climate or another species arriving/evolving, could easily change the annual extinction rate. But I'm not aware of an analogy here to how ageing increases the annual risk of humans dying from various causes.

I would imagine that, even if a species has been around for almost or more than a million years, we should still perhaps expect a roughly 1 in a million chance of extinction each year. Or perhaps we should even expect them to have a somewhat lower annual chance of extinction, and thus a higher expected lifespan going forwards, based on how long they've survived so far?
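
To spell out that reasoning, here's a minimal sketch assuming a constant annual extinction probability p (a simplification I'm introducing for illustration, not something Ord states):

```latex
% With a constant annual extinction probability p, the species' lifespan T
% (in years) is geometrically distributed, and the geometric distribution
% is memoryless:
\[
  \Pr(T > a + t \mid T > a) = \frac{(1-p)^{a+t}}{(1-p)^{a}} = (1-p)^{t} = \Pr(T > t)
\]
% So the expected remaining lifespan is the same at any age a:
\[
  \mathbb{E}[\,T - a \mid T > a\,] = \frac{1}{p} \approx 10^{6} \text{ years when } p = 10^{-6}
\]
```

Under that model, a 200,000-year-old species and a 700,000-year-old species have the same expected remaining lifespan, which is where the human-lifespan analogy breaks down.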

(But I'm also not an expert on the relevant fields - not even certain what they would be - and I didn't do extra research to inform this shortform comment.)

I don't think that Ord actually intends to imply that species' "lifespans" work like humans' lifespans do. But the analogy does seem to imply it. And in the FLI interview, he does seem to briefly imply that, though of course there he was speaking off the cuff.

I'm also not sure how important this point is, given that humans are very atypical anyway. But I thought it was worth noting in a shortform comment, especially as I expect that, in the wake of The Precipice being great, statements along these lines may be quoted regularly over the coming months.

comment by MichaelA · 2020-03-26T06:01:13.142Z · score: 10 (8 votes) · EA(p) · GW(p)

My review of Tom Chivers' review of Toby Ord's The Precipice

I thought The Precipice was a fantastic book; I'd highly recommend it. And I agree with a lot about Chivers' review of it for The Spectator. I think Chivers captures a lot of the important points and nuances of the book, often with impressive brevity and accessibility for a general audience. (I've also heard good things about Chivers' own book.)

But there are three parts of Chivers' review that seem to me like they're somewhat un-nuanced, or overstate/oversimplify the case for certain things, or could come across as overly alarmist.

I think Ord is very careful to avoid such pitfalls in The Precipice, and I'd guess that falling into such pitfalls is an easy and common way for existential risk related outreach efforts to have less positive impacts than they otherwise could, or perhaps even to backfire. I understand that a review gives one far less space to work with than a book, so I don't expect anywhere near the same level of nuance and detail. But I think that overconfident or overdramatic statements of uncertain matters (for example) can still be avoided.

I'll now quote and comment on the specific parts of Chivers' review that led to that view of mine.

An alleged nuclear close call

Firstly, in my view, there are three flaws with the opening passage of the review:

Humanity has come startlingly close to destroying itself in the 75 or so years in which it has had the technological power to do so. Some of the stories are less well known than others. One, buried in Appendix D of Toby Ord’s splendid The Precipice, I had not heard, despite having written a book on a similar topic myself. During the Cuban Missile Crisis, a USAF captain in Okinawa received orders to launch nuclear missiles; he refused to do so, reasoning that the move to DEFCON 1, a war state, would have arrived first.
Not only that: he sent two men down the corridor to the next launch control centre with orders to shoot the lieutenant in charge there if he moved to launch without confirmation. If he had not, I probably would not be writing this — unless with a charred stick on a rock.

First issue: Toby Ord makes it clear that "the incident I shall describe has been disputed, so we cannot yet be sure whether it occurred." Ord notes that "others who claimed to have been present in the Okinawa missile bases at the time" have since challenged this account, although there is also "some circumstantial evidence" supporting the account. Ultimately, Ord concludes "In my view this alleged incident should be taken seriously, but until there is further confirmation, no one should rely on it in their thinking about close calls." I therefore think Chivers should've made it clear that this is a disputed story.

Second issue: My impression from the book is that, even in the account of the person claiming this story is true, the two men sent down the corridor did not turn out to be necessary to avert the launch. (That said, the book isn't explicit on the point, so I'm unsure.) Ord writes that Bassett "telephoned the Missile Operations Centre, asking the person who radioed the order to either give the DEFCON 1 order or issue a stand-down order. A stand-down order was quickly given and the danger was over." That is the end of Ord's retelling of the account itself (rather than discussion of the evidence for or against it).

Third issue: I think it's true that, if a nuclear launch had occurred in that scenario, a large-scale nuclear war probably would've occurred (though it's not guaranteed, and it's hard to say). And if that happened, it seems technically true that Chivers probably wouldn't have written this review. But I think that's primarily because history would've just unfolded very, very differently. Chivers seems to imply this is because civilization probably would've collapsed, and done so so severely that even technologies such as pencils would be lost, and that they'd still be lost all these decades on (such that, if he was writing this review, he'd do so with "a charred stick on a rock").

This may seem like me taking a bit of throwaway rhetoric or hyperbole too seriously, and that may be so. But I think among the key takeaways of the book were vast uncertainties around whether certain events would actually lead to major catastrophes (e.g., would a launch lead to a full-scale nuclear war?), whether catastrophes would lead to civilizational collapse (e.g., how severe and long-lasting would the nuclear winter be, and how well would we adapt?), how severe collapses would be (e.g., to pre-industrial or pre-agricultural levels?), and how long-lasting collapses would be (from memory, Ord seems to think recovery is in fact fairly likely).

So I worry that a sentence like that one makes the book sound somewhat alarmist, doomsaying, and naive/simplistic, whereas in reality it seems to me quite nuanced and open about the arguments for why existential risk from certain sources may be "quite low" - and yet still extremely worth attending to, given the stakes.

To be fair, or to make things slightly stranger, Chivers does later say:

Perhaps surprisingly, [Ord] doesn’t think that nuclear war would have been an existential catastrophe. It might have been — a nuclear winter could have led to sufficiently dreadful collapse in agriculture to kill everyone — but it seems unlikely, given our understanding of physics and biology.

(Also, as an incredibly minor point, I think the relevant appendix was Appendix C rather than D. But maybe that was different in different editions or in an early version Chivers saw.)

"Numerically small"

Secondly, Chivers writes:

[Ord] points out that although the difference between a disaster that kills 99 per cent of us and one that kills 100 per cent would be numerically small, the outcome of the latter scenario would be vastly worse, because it shuts down humanity’s future.

I don't recall Ord ever saying anything to the effect that the death of 1 percent of the population would be "numerically small". Ord very repeatedly emphasises and reminds the reader that something really can count as deeply or even unprecedentedly awful, and well worth expending resources to avoid, even if it's not an existential catastrophe. This seems to me a valuable thing to do, as otherwise the x-risk community could easily be seen as coldly dismissive of any sub-existential catastrophes. (Plus, such catastrophes really are very bad and well worth expending resources to avoid - this is something I would've said anyway, but seems especially pertinent in the current pandemic.)

I think saying "the difference between a disaster that kills 99 per cent of us and one that kills 100 per cent would be numerically small" cuts against that goal, and again could paint Ord as more simplistic or extremist than he really is.

"Blowing ourselves up"

Finally (for the purpose of my critiques), Chivers writes:

We could live for a billion years on this planet, or billions more on millions of other planets, if we manage to avoid blowing ourselves up in the next century or so.

To me, "avoid blowing ourselves up" again sounds quite informal or naive or something like that. It doesn't leave me with the impression that the book will be a rigorous and nuanced treatment of the topic. Plus, Ord isn't primarily concerned with us "blowing ourselves up" - the specific risks he sees as the largest are unaligned AI, engineered pandemics, and "unforeseen anthropogenic risk".

And even in the case of nuclear war, Ord is quite clear that it's the nuclear winter that's the largest source of existential risk, rather than the explosions themselves (though of course the explosions are necessary for causing such a winter). In fact, Ord writes "While one often hears the claim that we have enough nuclear weapons to destroy the world many times over, this is loose talk." (And he explains why this is loose talk.)

So again, this seems like a case where Ord actively separates his clear-headed analysis of the risks from various naive, simplistic, alarmist ideas that are somewhat common among some segments of the public, but where Chivers' review makes it sound (at least to me) like the book will match those sorts of ideas.

All that said, I should again note that I thought the review did a lot right. In fact, I have no quibbles at all with anything from that last quote onwards.

comment by Aaron Gertler (aarongertler) · 2020-03-27T01:11:31.088Z · score: 5 (4 votes) · EA(p) · GW(p)

This was an excellent meta-review! Thanks for sharing it. 

I agree that these little slips of language are important; they can easily compound into very stubborn memes. (I don't know whether the first person to propose a paperclip AI regrets it, but picking a different example seems like it could have had a meaningful impact on the field's progress.)

comment by MichaelA · 2020-03-30T02:07:53.101Z · score: 1 (1 votes) · EA(p) · GW(p)

Agreed.

These seem to often be examples of hedge drift [LW · GW], and their potential consequences seem like examples of memetic downside risks [LW · GW].

comment by MichaelA · 2020-02-24T08:31:28.865Z · score: 10 (5 votes) · EA(p) · GW(p)

Collection [EA · GW] of all prior work I found that seemed substantially relevant to information hazards

Information hazards: a very simple typology [EA · GW] - Will Bradshaw, 2020

Information hazards and downside risks [? · GW] - Michael Aird (me), 2020

Information hazards - EA concepts

Information Hazards in Biotechnology - Lewis et al., 2019

Bioinfohazards [EA · GW] - Crawford, Adamson, Ladish, 2019

Information Hazards - Bostrom, 2011 (I believe this is the paper that introduced the term)

Terrorism, Tylenol, and dangerous information [LW · GW] - Davis_Kingsley, 2018

Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical [LW · GW] - Gentzel, 2018

Horsepox synthesis: A case of the unilateralist's curse? - Lewis, 2018

Mitigating catastrophic biorisks - Esvelt, 2020

The Precipice (particularly pages 135-137) - Ord, 2020

Information hazard - LW Wiki

Thoughts on The Weapon of Openness [? · GW] - Will Bradshaw, 2020

Exploring the Streisand Effect [EA · GW] - Will Bradshaw, 2020

Informational hazards and the cost-effectiveness of open discussion of catastrophic risks [EA · GW] - Alexey Turchin, 2018

A point of clarification on infohazard terminology [LW · GW] - eukaryote, 2020

Somewhat less directly relevant

The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse? - Shevlane & Dafoe, 2020 (commentary here [LW · GW])

The Vulnerable World Hypothesis - Bostrom, 2019 (footnotes 39 and 41 in particular)

Managing risk in the EA policy space [EA · GW] - weeatquince, 2019 (touches briefly on information hazards)

Strategic Implications of Openness in AI Development - Bostrom, 2017 (sort-of relevant, though not explicitly about information hazards)

[Review] On the Chatham House Rule (Ben Pace, Dec 2019) [LW · GW] - Pace, 2019

I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

comment by MichaelA · 2020-03-20T15:18:33.194Z · score: 1 (1 votes) · EA(p) · GW(p)

Interesting example: Leo Szilard and cobalt bombs

In The Precipice, Toby Ord mentions the possibility of "a deliberate attempt to destroy humanity by maximising fallout (the hypothetical cobalt bomb)" (though he notes such a bomb may be beyond our current abilities). In a footnote, he writes that "Such a 'doomsday device' was first suggested by Leo Szilard in 1950". Wikipedia similarly says:

The concept of a cobalt bomb was originally described in a radio program by physicist Leó Szilárd on February 26, 1950. His intent was not to propose that such a weapon be built, but to show that nuclear weapon technology would soon reach the point where it could end human life on Earth, a doomsday device. Such "salted" weapons were requested by the U.S. Air Force and seriously investigated, but not deployed.[citation needed] [...]
The Russian Federation has allegedly developed cobalt warheads for use with their Status-6 Oceanic Multipurpose System nuclear torpedoes. However many commentators doubt that this is a real project, and see it as more likely to be a staged leak to intimidate the United States.

That's the extent of my knowledge of cobalt bombs, so I'm poorly placed to evaluate that action by Szilard. But this at least looks like it could be an unusually clear-cut case of one of Bostrom's subtypes of information hazards:

Attention hazard: The mere drawing of attention to some particularly potent or relevant ideas or data increases risk, even when these ideas or data are already “known”.
Because there are countless avenues for doing harm, an adversary faces a vast search task in finding out which avenue is most likely to achieve his goals. Drawing the adversary’s attention to a subset of especially potent avenues can greatly facilitate the search. For example, if we focus our concern and our discourse on the challenge of defending against viral attacks, this may signal to an adversary that viral weapons—as distinct from, say, conventional explosives or chemical weapons—constitute an especially promising domain in which to search for destructive applications. The better we manage to focus our defensive deliberations on our greatest vulnerabilities, the more useful our conclusions may be to a potential adversary.

It seems that Szilard wanted to highlight how bad cobalt bombs would be, that no one had recognised - or at least not acted on - the possibility of such bombs until he tried to raise awareness of them, and that since he did so there may have been multiple government attempts to develop such bombs.

I was a little surprised that Ord didn't discuss the potential information hazards angle of this example, especially as he discusses a similar example with regards to Japanese bioweapons in WWII elsewhere in the book.

I was also surprised by the fact that it was Szilard who took this action. This is because one of the main things I know Szilard for is being arguably one of the earliest (the earliest?) examples of a scientist bucking standard openness norms due to, basically, concerns of information hazards potentially severe enough to pose global catastrophic risks. E.g., a report by MIRI/Katja Grace states:

Leó Szilárd patented the nuclear chain reaction in 1934. He then asked the British War Office to hold the patent in secret, to prevent the Germans from creating nuclear weapons (Section 2.1). After the discovery of fission in 1938, Szilárd tried to convince other physicists to keep their discoveries secret, with limited success.
comment by MichaelA · 2020-05-10T04:39:16.252Z · score: 9 (3 votes) · EA(p) · GW(p)

Collection [EA · GW] of evidence about views on longtermism, time discounting, population ethics, significance of suffering vs happiness, etc. among non-EAs

Appendix A of The Precipice - Ord, 2020 (see also the footnotes, and the sources referenced)

The Long-Term Future: An Attitude Survey [EA · GW] - Vallinder, 2019

Older people may place less moral value on the far future [EA · GW] - Sanjay, 2019

Making people happy or making happy people? Questionnaire-experimental studies of population ethics and policy - Spears, 2017

The Psychology of Existential Risk: Moral Judgments about Human Extinction - Schubert, Caviola & Faber, 2019

Psychology of Existential Risk and Long-Termism [? · GW] - Schubert, 2018 (space for discussion here [EA · GW])

Descriptive Ethics – Methodology and Literature Review - Althaus, ~2018 (this is something like an unpolished appendix to Descriptive Population Ethics and Its Relevance for Cause Prioritization [EA · GW], and it would make sense to read the latter post first)

A Small Mechanical Turk Survey on Ethics and Animal Welfare - Brian Tomasik, 2015

Work on "future self continuity" might be relevant (I haven't looked into it)

Some evidence about the views of EA-aligned/EA-adjacent groups

Survey results: Suffering vs oblivion - Slate Star Codex, 2016

Survey about preferences for the future of AI - FLI, ~2017

Some evidence about the views of EAs

Facebook poll relevant to preferences for one's own suffering vs bliss - Jay Quigley, 2016

See also my collection of sources relevant to moral circles, moral boundaries, or their expansion [EA(p) · GW(p)], and my collection of sources relevant to the idea of “moral weight” [EA(p) · GW(p)].

comment by MichaelA · 2020-05-07T07:55:52.071Z · score: 9 (6 votes) · EA(p) · GW(p)

Collection [EA · GW] of sources relevant to moral circles, moral boundaries, or their expansion

Works by the EA community or related communities

Moral circles: Degrees, dimensions, visuals [EA · GW] - Michael Aird (i.e., me), 2020

Why I prioritize moral circle expansion over artificial intelligence alignment [EA · GW] - Jacy Reese, 2018

The Moral Circle is not a Circle [EA · GW] - Grue_Slinky, 2019

The Narrowing Circle - Gwern, 2019 (see here [EA · GW] for Aaron Gertler’s summary and commentary)

Radical Empathy - Holden Karnofsky, 2017

Various works from the Sentience Institute, including:

Extinction risk reduction and moral circle expansion: Speculating suspicious convergence - Aird, work in progress

-Less relevant, or with only a small section that’s directly relevant-

Why do effective altruists support the causes we do? [EA · GW] - Michelle Hutchinson, 2015

Finding more effective causes [EA · GW] - Michelle Hutchinson, 2015

Cosmopolitanism [EA · GW] - Topher Hallquist, 2014

Three Heuristics for Finding Cause X [EA · GW] - Kerry Vaughan, 2016

The Drowning Child and the Expanding Circle [EA · GW] - Peter Singer, 1997

The expected value of extinction risk reduction is positive [EA · GW] - Brauner and Grosse-Holz, 2018

Crucial questions for longtermists: Overview - Michael Aird (me), work in progress

Mass media

Should animals, plants, and robots have the same rights as you? - Sigal Samuel (for Vox’s Future Perfect), 2019

Academic works

(There appears to be a substantial and continuing amount of psychological work on this topic; the papers I list here are just a fairly random subset to get you started.)

Toward a Psychology of Moral Expansiveness - Crimston et al., 2018

Moral expansiveness: Examining variability in the extension of the moral world - Crimston et al., 2016 (my unpolished commentary on this is here) (brief summary here)

Centripetal and centrifugal forces in the moral circle: Competing constraints on moral learning - Graham et al., 2017

Expanding the moral circle: Inclusion and exclusion mindsets and the circle of moral regard - Laham, 2009

Ideological differences in the expanse of the moral circle - Waytz et al., 2019

The Expanding Circle - Peter Singer, 1981

-Less relevant, or with only a small section that’s directly relevant-

The Better Angels of Our Nature - Steven Pinker, 2011

The moral standing of animals: Towards a psychology of speciesism - Caviola, Everett, & Faber, 2019

I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

See also this comment [EA · GW], my collection of sources relevant to the idea of “moral weight” [EA(p) · GW(p)], and my collection of evidence about views on longtermism, time discounting, population ethics, etc. among non-EAs [EA(p) · GW(p)].

comment by Jamie_Harris · 2020-05-24T18:15:53.000Z · score: 8 (3 votes) · EA(p) · GW(p)

The only other very directly related resource I can think of is my own presentation on moral circle expansion, and various other short content on Sentience Institute's website, e.g. our FAQ and some of the talks or videos. But I think that the academic psychology literature you refer to is very relevant here. Good starting-point articles are the "moral expansiveness" article you link to above and "Toward a Psychology of Moral Expansiveness."

Of course, depending on definitions, a far wider literature could be relevant, e.g. almost anything related to animal advocacy, robot rights, consideration of future beings, consideration of people on the other side of the planet etc.


There's some wider content on "moral advocacy" or "values spreading," of which work on moral circle expansion is a part:

Arguments for and against moral advocacy - Tobias Baumann, 2017

Values Spreading is Often More Important than Extinction Risk - Brian Tomasik, 2013

Against moral advocacy - Paul Christiano, 2013


Also relevant: "Should Longtermists Mostly Think About Animals? [EA · GW]"

comment by MichaelA · 2020-05-24T23:38:57.950Z · score: 1 (1 votes) · EA(p) · GW(p)

Thanks for adding those links, Jamie!

I've now added the first few into my lists above.

comment by Aaron Gertler (aarongertler) · 2020-05-12T07:43:17.355Z · score: 3 (2 votes) · EA(p) · GW(p)

I continue to appreciate all the collections you've been posting! I expect to find reasons to link to many of these in the years to come.

comment by MichaelA · 2020-05-12T08:06:40.252Z · score: 2 (2 votes) · EA(p) · GW(p)

Good to hear!

Yeah, I hope they'll be mildly useful to random people at random times over a long period :D

Although I also expect that most people they'd be mildly useful for would probably never be aware they exist, so there may be a better way to do this.

Also, if and when EA coordinates on one central wiki, these could hopefully be folded into or drawn on for that, in some way.

comment by MichaelA · 2020-02-28T17:23:56.509Z · score: 8 (6 votes) · EA(p) · GW(p)

Collection [EA · GW] of some definitions of global catastrophic risks (GCRs)

Bostrom & Ćirković (pages 1 and 2):

The term 'global catastrophic risk' lacks a sharp definition. We use it to refer, loosely, to a risk that might have the potential to inflict serious damage to human well-being on a global scale.
[...] a catastrophe that caused 10,000 fatalities or 10 billion dollars worth of economic damage (e.g., a major earthquake) would not qualify as a global catastrophe. A catastrophe that caused 10 million fatalities or 10 trillion dollars worth of economic loss (e.g., an influenza pandemic) would count as a global catastrophe, even if some region of the world escaped unscathed. As for disasters falling between these points, the definition is vague. The stipulation of a precise cut-off does not appear needful at this stage. [emphasis added]

Open Philanthropy Project/GiveWell:

risks that could be bad enough to change the very long-term trajectory of humanity in a less favorable direction (e.g. ranging from a dramatic slowdown in the improvement of global standards of living to the end of industrial civilization or human extinction).

Global Challenges Foundation:

threats that can eliminate at least 10% of the global population.

Wikipedia (drawing on Bostrom's works):

a hypothetical future event which could damage human well-being on a global scale, even endangering or destroying modern civilization. [...]
any risk that is at least "global" in scope, and is not subjectively "imperceptible" in intensity.

Yassif (appearing to be writing for the Open Philanthropy Project):

By our working definition, a GCR is something that could permanently alter the trajectory of human civilization in a way that would undermine its long-term potential or, in the most extreme case, threaten its survival. This prompts the question: How severe would a pandemic need to be to create such a catastrophic outcome? [This is followed by interesting discussion of that question.]

Beckstead (writing for Open Philanthropy Project/GiveWell):

the Open Philanthropy Project’s work on global catastrophic risks focuses on both potential outright extinction events and global catastrophes that, while not threatening direct extinction, could have deaths amounting to a significant fraction of the world’s population or cause global disruptions far outside the range of historical experience.

(Note that Beckstead might not be saying that global catastrophes are defined as those that "could have deaths amounting to a significant fraction of the world’s population or cause global disruptions far outside the range of historical experience". He might instead mean that Open Phil is focused on the relatively extreme subset of global catastrophes which fit that description. It may be worth noting that he later quotes Open Phil's other, earlier definition of GCRs, which I listed above.)

I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

My half-baked commentary

My impression is that, at least in EA-type circles, the term "global catastrophic risk" is typically used for events substantially larger than things which cause "10 million fatalities or 10 trillion dollars worth of economic loss (e.g., an influenza pandemic)".

E.g., the Global Challenges Foundation's definition implies that the catastrophe would have to be able to eliminate at least ~750 million people, which is 75 times higher than the number Bostrom & Ćirković give. And I'm aware of at least some existential-risk [EA · GW]-focused EAs whose impression is that the rough cutoff would be 100 million fatalities.

With that in mind, I also find it interesting to note that Bostrom & Ćirković gave the "10 million fatalities" figure as indicating something clearly is a GCR, rather than as the lower threshold that a risk must clear in order to be a GCR. From their loose definition, it seems entirely plausible that, for example, a risk with 1 million fatalities might be a GCR.

That said, I do agree that "The stipulation of a precise cut-off does not appear needful at this stage." Personally, I plan to continue to use the term in a quite loose way, but probably primarily for risks that could cause much more than 10 million fatalities.

comment by MichaelA · 2020-05-30T01:40:12.035Z · score: 7 (2 votes) · EA(p) · GW(p)

There is now a Stanford Existential Risk Initiative, which (confusingly) describes itself as:

a collaboration between Stanford faculty and students dedicated to mitigating global catastrophic risks (GCRs). Our goal is to foster engagement from students and professors to produce meaningful work aiming to preserve the future of humanity by providing skill, knowledge development, networking, and professional pathways for Stanford community members interested in pursuing GCR reduction.

And they write:

What is a Global Catastrophic Risk?
We think of global catastrophic risks (GCRs) as risks that could cause the collapse of human civilization or even the extinction of the human species.

That is much closer to a definition of an existential risk [EA · GW] (as long as we assume that the collapse is not recovered from) than of a global catastrophic risk. Given that fact, and the clash between the term the initiative uses in its name and the term it uses when describing what it will focus on, it appears this initiative is conflating these two terms/concepts.

This is unfortunate, and could lead to confusion, given that there are many events that would be global catastrophes without being existential catastrophes. An example would be a pandemic that kills hundreds of millions but that doesn't cause civilizational collapse [EA(p) · GW(p)], or that causes a collapse humanity later fully recovers from. (Furthermore, there may be existential catastrophes that aren't "global catastrophes" in the standard sense, such as "plateauing — progress flattens out at a level perhaps somewhat higher than the present level but far below technological maturity" (Bostrom).)

For further discussion, see Clarifying existential risks and existential catastrophes [EA · GW].

(I should note that I have positive impressions of the Center for International Security and Cooperation (which this initiative is a part of), that I'm very glad to see that this initiative has been set up, and that I expect they'll do very valuable work. I'm merely critiquing their use of terms.)

comment by MichaelA · 2020-03-19T06:50:21.091Z · score: 4 (3 votes) · EA(p) · GW(p)

Some more definitions, from or quoted in 80k's profile on reducing global catastrophic biological risks

Gregory Lewis, in that profile itself:

Global catastrophic risks (GCRs) are roughly defined as risks that threaten great worldwide damage to human welfare, and place the long-term trajectory of humankind in jeopardy. Existential risks are the most extreme members of this class.

Open Philanthropy Project:

[W]e use the term “global catastrophic risks” to refer to risks that could be globally destabilising enough to permanently worsen humanity’s future or lead to human extinction.

Schoch-Spana et al. (2017), on GCBRs, rather than GCRs as a whole:

The Johns Hopkins Center for Health Security's working definition of global catastrophic biological risks (GCBRs): those events in which biological agents—whether naturally emerging or reemerging, deliberately created and released, or laboratory engineered and escaped—could lead to sudden, extraordinary, widespread disaster beyond the collective capability of national and international governments and the private sector to control. If unchecked, GCBRs would lead to great suffering, loss of life, and sustained damage to national governments, international relationships, economies, societal stability, or global security.
comment by MichaelA · 2020-04-28T01:59:00.739Z · score: 1 (1 votes) · EA(p) · GW(p)

From an FLI podcast interview with two researchers from CSER:

"Ariel Conn: [...] I was hoping you could quickly go over a reminder of what an existential threat is and how that differs from a catastrophic threat and if there’s any other terminology that you think is useful for people to understand before we start looking at the extreme threats of climate change."

Simon Beard: So, we use these various terms as kind of terms of art within the field of existential risk studies, in a sense. We know what we mean by them, but all of them, in a way, are different ways of pointing to the same kind of outcome — which is something unexpectedly, unprecedentedly bad. And, actually, once you’ve got your head around that, different groups have slightly different understandings of what the differences between these three terms are.

So, for some groups, it’s all about just the scale of badness. So, an extreme risk is one that does a sort of an extreme level of harm; A catastrophic risk does more harm, a catastrophic level of harm. And an existential risk is something where either everyone dies, human extinction occurs, or you have an outcome which is an equivalent amount of harm: Maybe some people survive, but their lives are terrible. Actually, at the Center for the Study of Existential Risk, we are concerned about this classification in terms of the cost involved, but we also have coupled that with a slightly different sort of terminology, which is really about systems and the operation of the global systems that surround us.

Most of the systems — be this physiological systems, the world’s ecological system, the social, economic, technological, cultural systems that surround those institutions that we build on — they have a kind of normal space of operation where they do the things that you expect them to do. And this is what human life, human flourishing, and human survival are built on: that we can get food from the biosphere, that our bodies will continue to operate in a way that’s consistent with and supporting our health and our continued survival, and that the institutions that we’ve developed will still work, will still deliver food to our tables, will still suppress interpersonal and international violence, and that we’ll basically, we’ll be able to get on with our lives.

If you look at it that way, then an extreme risk, or an extreme threat, is one that pushes at least one of these systems outside of its normal boundaries of operation and creates an abnormal behavior that we then have to work really hard to respond to. A catastrophic risk is one where that happens, but then that also cascades. Particularly in global catastrophe, you have a whole system that encompasses everyone all around the world, or maybe a set of systems that encompass everyone all around the world, that are all operating in this abnormal state that’s really hard for us to respond to.

And then an existential catastrophe is one where the systems have been pushed into such an abnormal state that either you can’t get them back or it’s going to be really hard. And life as we know it cannot be resumed; We’re going to have to live in a very different and very inferior world, at least from our current way of thinking." (emphasis added)

comment by MichaelA · 2020-04-23T06:51:07.875Z · score: 1 (1 votes) · EA(p) · GW(p)

Sears writes:

The term ‘global catastrophic risk’ (GCR) is increasingly used in the scholarly community to refer to a category of threats that are global in scope, catastrophic in intensity, and non-zero in probability (Bostrom and Cirkovic, 2008). [...] The GCR framework is concerned with low-probability, high-consequence scenarios that threaten humankind as a whole (Avin et al., 2018; Beck, 2009; Kuhlemann, 2018; Liu, 2018)

(Personally, I don't like that second sentence. I'm not sure what "threaten humankind" is meant to mean, but I don't think I'd count something that e.g. causes huge casualties on just one continent, or 20% casualties spread globally, as threatening humankind. Or if I did, I'd mean something like "threatens some humans", in which case I'd also count risks much smaller than GCRs. So that sentence sounds to me like it sort-of conflates GCRs with existential risks.)

comment by MichaelA · 2020-02-24T08:45:38.844Z · score: 8 (6 votes) · EA(p) · GW(p)

Collection [EA · GW] of all prior work I found that explicitly uses the terms differential progress / intellectual progress / technological development

Differential progress / intellectual progress / technological development [EA · GW] - Michael Aird (me), 2020

Differential technological development - summarised introduction [EA · GW] - james_aung, 2020

Differential Intellectual Progress as a Positive-Sum Project - Tomasik, 2013/2015

Differential technological development: Some early thinking - Beckstead (for GiveWell), 2015/2016

Differential progress - EA Concepts

Differential technological development - Wikipedia

Existential Risk and Economic Growth [EA · GW] - Aschenbrenner, 2019 (summary by Alex HT here [EA · GW])

On Progress and Prosperity [EA · GW] - Christiano, 2014

How useful is “progress”? - Christiano, ~2013

Improving the future by influencing actors' benevolence, intelligence, and power [EA · GW] - Aird, 2020

Differential intellectual progress - LW Wiki

Existential Risks: Analyzing Human Extinction Scenarios - Bostrom, 2002 (section 9.4) (introduced the term differential technological development, I think)

Intelligence Explosion: Evidence and Import - Muehlhauser & Salamon (for MIRI), 2012 (section 4.2) (introduced the term differential intellectual progress, I think)

The Precipice - Ord, 2020 (page 206)

Superintelligence - Bostrom, 2014

Some sources that are quite relevant but that don’t explicitly use those terms

Strategic Implications of Openness in AI Development - Bostrom, 2017

Related concepts

The growth of our "power" (or "science and technology") vs our "wisdom" (see, e.g., page 34 of The Precipice)

The "pacing problem" (see, e.g., footnote 57 in Chapter 1 of The Precipice)

I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

comment by MichaelA · 2020-04-07T02:06:34.649Z · score: 7 (3 votes) · EA(p) · GW(p)

List of things I've written or may write that are relevant to The Precipice

Things I’ve written

Upcoming posts

  • Existential security and related concepts
  • What would it mean for humanity to protect its potential, but use it poorly?
  • Arguments for and against Toby Ord's "grand strategy for humanity"
  • Does protecting humanity's potential guarantee its fulfilment?
  • A typology of strategies for influencing the future

Working titles of things I plan/vaguely hope to write

Note: If you might be interested in writing about similar ideas, feel very free to reach out to me. It’s very unlikely I’ll be able to write all of these posts by myself, so potentially we could collaborate, or I could just share my thoughts and notes with you and let you take it from there.

  • My thoughts on Toby Ord's policy & research recommendations
  • Civilizational collapse and recovery: Toby Ord's views and my doubts
  • The Terrible Funnel: Estimating odds of each step on the x-risk causal path (working title)
    • The idea here would be to adapt something like the "Great Filter" or "Drake Equation" reasoning to estimating the probability of existential catastrophe, using how humanity has fared in prior events that passed or could've passed certain "steps" on certain causal chains to catastrophe [EA · GW].
    • E.g., even though we've never faced a pandemic involving a bioengineered pathogen, perhaps our experience with how many natural pathogens have moved from each "step" to the next one can inform what would likely happen if we did face a bioengineered pathogen, or if it did get to a pandemic level.
    • This idea seems sort of implicit in The Precipice, but isn't really spelled out there. Also, as is probably obvious, I need to do more to organise my thoughts on it myself. (A rough sketch of the kind of decomposition I have in mind is below, after this list.)
    • This may include discussion of how Ord distinguishes natural and anthropogenic risks, and why the standard arguments for an upper bound for natural extinction risks don’t apply to natural pandemics. Or that might be a separate post.
  • Developing - but not deploying - drastic backup plans (see my comment here [EA(p) · GW(p)])
  • “Macrostrategy”: Attempted definitions and related concepts
    • This would relate in part to Ord’s concept of “grand strategy for humanity”
  • Collection of notes
  • A post summarising the ideas of existential risk factors and existential security factors?
    • I suspect I won’t end up writing this, but I think someone should. For one thing, it’d be good to have something people can reference/link to that explains that idea (sort of like the role EA Concepts serves).
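
To give a rough sense of the kind of decomposition I have in mind for the "Terrible Funnel" idea (this is purely illustrative; the steps and symbols are mine, not taken from The Precipice or any existing model), one could break the probability of a particular type of existential catastrophe into a chain of conditional "step" probabilities, in the style of the Drake Equation:

```latex
% Purely illustrative decomposition, in the style of the Drake Equation.
% Each factor is one "step" on a causal chain to existential catastrophe.
P(\text{x-catastrophe via engineered pandemic})
  \approx P(\text{release})
  \times P(\text{outbreak} \mid \text{release})
  \times P(\text{pandemic} \mid \text{outbreak})
  \times P(\text{collapse} \mid \text{pandemic})
  \times P(\text{no recovery} \mid \text{collapse})
```

The hope would be that historical base rates for natural pathogens (e.g., how often outbreaks have become pandemics) could inform some of the middle factors, even where we have no direct data on the first factor.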

Some selected Precipice-related works by others

comment by MichaelA · 2020-09-23T08:34:09.160Z · score: 6 (4 votes) · EA(p) · GW(p)

Here I list all the EA-relevant books I've read (well, mainly listened to as audiobooks) since learning about EA, in roughly descending order of how useful I perceive/remember them being to me. I share this in case others might find it useful, as a supplement to other book recommendation lists. (I found Rob Wiblin, Nick Beckstead, and Luke Muehlhauser [LW(p) · GW(p)]'s lists very useful.) 

That said, this isn't exactly a recommendation list, because some of the factors making these books more/less useful to me won't generalise to most other people, and because I'm including all relevant books I've read (not just the top picks). 

Google Doc version here. Let me know if you want more info on why I found something useful or not so useful, where you can find the book, etc.

See also this list of EA-related podcasts [EA · GW] and this list of sources of EA-related videos [EA · GW].

  1. The Precipice
    • Superintelligence may have influenced me more, but that’s just due to the fact that I read it very soon after getting into EA, whereas I read The Precipice after already learning a lot. I’d now recommend The Precipice first.
  2. Superforecasting
  3. How to Measure Anything
  4. Rationality: From AI to Zombies
    • I.e., “the sequences”
  5. Superintelligence
    • Maybe this would've been a little further down the list if I’d already read The Precipice.
  6. Expert Political Judgment
    • I read this after Superforecasting and still found it very useful.
  7. Normative Uncertainty
    • This is MacAskill’s thesis, rather than a book
    • I’d now instead recommend the book by him and others on the same topic
  8. Secret of Our Success by Henrich
  9. Human-Compatible
  10. The Book of Why
  11. Blueprint
    • This is useful primarily in relation to some specific research I’m doing, rather than more generically.
  12. Moral Tribes
  13. Algorithms to Live By
  14. The Better Angels of Our Nature
  15. Thinking, Fast and Slow
    • This might be the most useful of all these books for people who have little prior familiarity with the ideas, but I happened to already know a decent portion of what was covered.
  16. Against the Grain
    • I read this after Sapiens and thought the content would overlap a lot, but it actually provided a lot of independent value.
  17. Sapiens
  18. Destined for War
  19. The Dictator’s Handbook
  20. Age of Ambition
  21. Moral Mazes
  22. The Myth of the Rational Voter
  23. The Hungry Brain
    • If I recall correctly, I found this surprisingly useful for purposes unrelated to the topics of weight, hunger, etc.; e.g., it gave me a better understanding of the liking-wanting distinction.
  24. The Quest: Energy, Security, and the Remaking of the Modern World
  25. Harry Potter and the Methods of Rationality
    • Fiction
    • I also just found this very enjoyable (I was somewhat amused and embarrassed by how enjoyable and thought-provoking I found this, to be honest)
    • This overlaps in many ways with Rationality: From AI to Zombies, so it would be more valuable to someone who hadn't already read those sequences (but then I'd recommend such a person read most of those sequences)
    • Within the 2 hours before I go to sleep, I try not to stimulate my brain too much; e.g. I'd avoid listening to most of the books on this list during that time. But I found that I could listen to this during that time without it keeping my brain too active. This is a perk, as that period of my day is less crowded with other things to do.
      • Same goes for Steve Jobs, Power Broker, Animal Farm, and Consider the Lobster.
  26. Steve Jobs by Walter Isaacson
    • Surprisingly useful, given that I don’t plan to emulate Jobs’ life at all and don’t work in relevant industries.
  27. Enlightenment Now
  28. The Undercover Economist Strikes Back
  29. Inadequate Equilibria
    • Halfway between a book and a series of posts
  30. Radical Markets
  31. Command and Control
  32. How to Be a Dictator: The Cult of Personality in the Twentieth Century
  33. Climate Matters: Ethics in a Warming World by John Broome
  34. The Power Broker
    • Very interesting, but very long and probably not super useful.
  35. Science in the Twentieth Century
  36. Animal Farm
    • Fiction
  37. Consider the Lobster
    • To be honest, I'm not sure why Wiblin recommended this. But I benefitted from many of his other recommendations.

(Hat tip to Aaron Gertler for sort-of prompting me to post this list [EA(p) · GW(p)].)

comment by MichaelA · 2020-09-04T08:48:15.669Z · score: 6 (3 votes) · EA(p) · GW(p)

If anyone reading this has read anything I’ve written on the EA Forum or LessWrong, I’d really appreciate you taking this brief, anonymous survey. Your feedback is useful whether your opinion of my work is positive, mixed, lukewarm, meh, or negative. 

And remember what mama always said: If you’ve got nothing nice to say, self-selecting out of the sample for that reason will just totally bias Michael’s impact survey.

(If you're interested in more info on why I'm running this survey and some thoughts on whether other people should do similar, I give that here [EA · GW].)

comment by MichaelA · 2020-03-29T06:48:32.151Z · score: 5 (3 votes) · EA(p) · GW(p)

What are the implications of the offence-defence balance for trajectories of violence?

Questions: Is a change in the offence-defence balance part of why interstate (and intrastate?) conflict appears to have become less common? Does this have implications for the likelihood and trajectories of conflict in future (and perhaps by extension x-risks)?

Epistemic status: This post is unpolished, un-researched, and quickly written. I haven't looked into whether existing work has already explored questions like these; if you know of any such work, please comment to point me to it.

Background/elaboration: Pinker argues in The Better Angels of Our Nature that many types of violence have declined considerably over history. I'm pretty sure he notes that these trends are neither obviously ephemeral nor inevitable. But the book, and other research pointing in similar directions, seems to me (and I believe others?) to at least weakly support the ideas that:

  • if we avoid an existential catastrophe, things will generally continue to get better
  • apart from the potential destabilising effects of technology, conflict seems to be trending downwards, somewhat reducing the risks of e.g. great power war, and by extension e.g. malicious use of AI (though of course a partial reduction in risks wouldn't necessarily mean we should ignore the risks)

But How Does the Offense-Defense Balance Scale? (by Garfinkel and Dafoe, of the Center for the Governance of AI; summary here) says:

It is well-understood that technological progress can impact offense-defense balances. In fact, perhaps the primary motivation for developing the concept has been to understand the distinctions between different eras of military technology.
For instance, European powers’ failure to predict the grueling attrition warfare that would characterize much of the First World War is often attributed to their failure to recognize that new technologies, such as machine guns and barbed wire, had shifted the European offense-defense balance for conquest significantly toward defense.

And:

holding force sizes fixed, the conventional wisdom holds that a conflict with mid-nineteenth century technology could be expected to produce a better outcome for the attacker than a conflict with early twentieth century technology. See, for instance, Van Evera, ‘Offense, Defense, and the Causes of War’.

The paper tries to use these sorts of ideas to explore how emerging technologies will affect trajectories, likelihood, etc. of conflict. E.g., the very first sentence is: "The offense-defense balance is a central concept for understanding the international security implications of new technologies."

But it occurs to me that one could also do historical analysis of just how much these effects have played a role in the sort of trends Pinker notes. From memory, I don't think Pinker discusses this possible factor in those trends. If this factor played a major role, then perhaps those trends are substantially dependent on something "we" haven't been thinking about as much - perhaps we've wondered about whether the factors Pinker discusses will continue, whereas they're less necessary and less sufficient than we thought for the overall trend (decline in violence/interstate conflict) that we really care about.

And at a guess, that might mean the trend is more fragile or "conditional" than we might've thought. It might mean we really can't rely on that "background trend" continuing, or at least somewhat offsetting the potentially destabilising effects of new tech - perhaps a lot of the trend, or the last century or two of it, was largely about how tech changed things, so if the way tech changes things changes, the trend could easily reverse entirely.

I'm not at all sure about any of that, but it seems it would be important and interesting to explore. Hopefully someone already has, in which case I'd appreciate someone pointing me to that exploration.

(Also note that what the implications of a given offence-defence balance even are is apparently a somewhat complicated/debatable matter. E.g., Garfinkel and Dafoe write: "While some hold that shifts toward offense-dominance obviously favor conflict and arms racing, this position has been challenged on a number of grounds. It has even been suggested that shifts toward offense-dominance can increase stability in a number of cases.")

comment by MichaelA · 2020-02-27T07:59:57.438Z · score: 5 (4 votes) · EA(p) · GW(p)

Collection [EA · GW] of sources I've found that seem very relevant to the topic of downside risks [LW · GW]/accidental harm

Information hazards and downside risks [? · GW] - Michael Aird (me), 2020

Ways people trying to do good accidentally make things worse, and how to avoid them - Rob Wiblin and Howie Lempel (for 80,000 Hours), 2018

How to Avoid Accidentally Having a Negative Impact with your Project - Max Dalton and Jonas Vollmer, 2018

Sources that seem somewhat relevant

https://en.wikipedia.org/wiki/Unintended_consequences (in particular, "Unexpected drawbacks" and "Perverse results", not "Unintended benefits")

(See also my lists of sources related to information hazards [EA(p) · GW(p)], differential progress [EA(p) · GW(p)], and the unilateralist's curse [EA(p) · GW(p)].)

I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

comment by MichaelA · 2020-02-24T08:53:54.076Z · score: 5 (4 votes) · EA(p) · GW(p)

Collection [EA · GW] of all prior work I've found that seemed substantially relevant to the unilateralist’s curse

Unilateralist's curse [EA Concepts]

Horsepox synthesis: A case of the unilateralist's curse? [Lewis] (usefully connects the curse to other factors)

The Unilateralist's Curse and the Case for a Principle of Conformity [Bostrom et al.’s original paper]

Hard-to-reverse decisions destroy option value [CEA]

Framing issues with the unilateralist's curse [EA(p) · GW(p)] - Linch, 2020

Somewhat less directly relevant

Managing risk in the EA policy space [EA · GW] [EA Forum] (touches briefly on the curse)

Ways people trying to do good accidentally make things worse, and how to avoid them [80k] (only one section on the curse)

I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

comment by MichaelA · 2019-12-22T05:35:17.611Z · score: 5 (4 votes) · EA(p) · GW(p)

Potential downsides of EA's epistemic norms (which overall seem great to me)

This is adapted from this comment [EA(p) · GW(p)], and I may develop it into a proper post later. I welcome feedback on whether it'd be worth doing so, as well as feedback more generally.

Epistemic status: During my psychology undergrad, I did a decent amount of reading on topics related to the "continued influence effect" (CIE) of misinformation. My Honours thesis (adapted into this paper) also partially related to these topics. But I'm a bit rusty (my Honours was in 2017, and I haven't reviewed the literature since then).

This is a quick attempt to summarise some insights from psychological findings on the continued influence effect of misinformation (and related areas) that (speculatively) might suggest downsides to some of EA's epistemic norms (e.g., just honestly contributing your views/data points to the general pool and trusting people will update on them only to the appropriate degree, or clearly acknowledging counterarguments even when you believe your position is strong).

From memory, this paper reviews research on CIE, and I perceived it to be high-quality and a good intro to the topic.

From this paper's abstract:

Information that initially is presumed to be correct, but that is later retracted or corrected, often continues to influence memory and reasoning. This occurs even if the retraction itself is well remembered. The present study investigated whether the continued influence of misinformation can be reduced by explicitly warning people at the outset that they may be misled. A specific warning--giving detailed information about the continued influence effect (CIE)--succeeded in reducing the continued reliance on outdated information but did not eliminate it. A more general warning--reminding people that facts are not always properly checked before information is disseminated--was even less effective. In an additional experiment, a specific warning was combined with the provision of a plausible alternative explanation for the retracted information. This combined manipulation further reduced the CIE but still failed to eliminate it altogether. (emphasis added)

This seems to me to suggest some value in including "epistemic status" messages up front, but also that doing so doesn't make it totally "safe" to make posts before having familiarised oneself with the literature and checked one's claims.

Here are a couple of other seemingly relevant quotes from papers I read back then:

  • "retractions [of misinformation] are less effective if the misinformation is congruent with a person’s relevant attitudes, in which case the retractions can even backfire [i.e., increase belief in the misinformation]." (source) (see also this source)
  • "we randomly assigned 320 undergraduate participants to read a news article presenting either claims both for/against an autism-vaccine link [a "false balance"], link claims only, no-link claims only or non-health-related information. Participants who read the balanced article were less certain that vaccines are safe, more likely to believe experts were less certain that vaccines are safe and less likely to have their future children vaccinated. Results suggest that balancing conflicting views of the autism-vaccine controversy may lead readers to erroneously infer the state of expert knowledge regarding vaccine safety and negatively impact vaccine intentions." (emphasis added) (source)
    • This seems relevant to norms around "steelmanning" and explaining reasons why one's own view may be inaccurate. Those overall seem like very good norms to me, especially given EAs typically write about issues where there truly is far less consensus than there is around things like the autism-vaccine "controversy" or climate change. But it does seem those norms could perhaps lead to overweighting of the counterarguments when they're actually very weak, perhaps especially when communicating to wider publics who might read and consider posts less carefully than self-identifying EAs/rationalists would. But those are all my own speculative generalisations of the findings on "falsely balanced" coverage.
comment by MichaelA · 2020-08-11T23:41:20.711Z · score: 4 (2 votes) · EA(p) · GW(p)

The old debate over "giving now vs later" is now sometimes phrased as a debate about "patient philanthropy". 80,000 Hours recently wrote a post [EA · GW] using the term "patient longtermism", which seems intended to:

  • focus only on how the debate over patient philanthropy applies to longtermists [EA · GW]
  • generalise the debate to also include questions about work (e.g., should I do a directly useful job now, or build career capital and do directly useful work later?)

They contrast this with the term "urgent longtermism", which describes the view that favours doing more of our donating and work sooner. 

I think the terms "patient longtermism" and "urgent longtermism" are both useful. One reason I think "urgent longtermism" is useful is that it doesn't sound pejorative, whereas "impatient longtermism" would.

I suggest we also use three additional terms:

  1. Patient altruism [? · GW]
    1. Like "patient philanthropy" and unlike "patient longtermism", this term is cause-neutral. 
    2. But like "patient longtermism" and unlike "patient philanthropy", this term clearly relates to both work and donations, not merely to donations.
      1. Discussions about "patient philanthropy" do often make some reference to optimal timing of work, but it's not usually central. Also, the term "philanthropy" is typically used just for donations.
  2. Urgent altruism
    1. Again, this is partly to avoid negative connotations, as is my next suggestion.
  3. Urgent philanthropy
comment by MichaelDickens · 2020-08-12T20:08:09.506Z · score: 5 (3 votes) · EA(p) · GW(p)

I don't think "patient" and "urgent" are opposites, in the way Phil Trammell originally defined patience. He used "patient" to mean a zero pure time preference, and "impatient" to mean a nonzero pure time preference. You can believe it is urgent that we spend resources now while still having a pure time preference. Trammell's paper argued that patient actors should give later, irrespective of how much urgency you believe there is. (Although he carved out some exceptions to this.)

comment by MichaelA · 2020-08-13T01:40:16.252Z · score: 2 (1 votes) · EA(p) · GW(p)

Yes, Trammell writes:

We will call someone “patient” if he has low (including zero) pure time preference with respect to the welfare he creates by providing a good.

And I agree that a person with a low or zero pure time preference may still want to use a large portion of their resources now, for example due to thinking now is a much "hingier"/"higher leverage" time than average, or thinking value drift will be high.
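
As a very rough illustration of that point (toy notation of my own, not Trammell's model): consider a philanthropist with pure time preference ρ deciding between giving now and investing to give one period later.

```latex
% Toy comparison of giving now vs. one period later (illustrative only).
% h_t  = "hingeyness"/cost-effectiveness multiplier at time t
% r    = investment return, d = probability of value drift or expropriation
% rho  = pure time preference (zero for a "patient" actor in Trammell's sense)
\text{give now iff} \quad h_0 \;>\; \frac{(1 + r)\,(1 - d)}{1 + \rho}\, h_1
```

Even for a "patient" actor with ρ = 0, the comparison can favour giving now if h_0 is sufficiently larger than h_1, or if d is high. That's the point above: a zero pure time preference doesn't by itself settle the now-vs-later question.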

You highlighting this makes me doubt whether 80,000 Hours should've used "patient longtermism" as they did [EA · GW], whether they should've used "patient philanthropy" as they arguably did*, and whether I should've proposed the term "patient altruism" for the position that we should give/work later rather than now (roughly speaking).

On the other hand, if we ignore Trammell's definition of the term, I think "patient X" does seem like a natural fit for the position that we should do X later, rather than now. 

Do you have other ideas for terms to use in place of "patient"? Maybe "delayed"? (I'm definitely open to renaming the tag [? · GW]. Other people can as well.) 

*80k write:

If the case for patient philanthropy is as strong as Phil believes, many of us should be trying to improve the world in a very different way than we are now.

He points out that on top of being able to dispense vastly more, whenever your trustees decide to use your gift to improve the world, they’ll also be able to rely on the much broader knowledge available to future generations. [...]

And there’s a third reason to wait as well. What are the odds that we today live at the most critical point in history, when resources happen to have the greatest ability to do good? It’s possible. But the future may be very long, so there has to be a good chance that some moment in the future will be both more pivotal and more malleable than our own.

Of course, there are many objections to this proposal. If you start a foundation you hope will wait around for centuries, might it not be destroyed in a war, revolution, or financial collapse?

Or might it not drift from its original goals, eventually just serving the interest of its distant future trustees, rather than the noble pursuits you originally intended?

Or perhaps it could fail for the reverse reason, by staying true to your original vision — if that vision turns out to be as deeply morally mistaken as the Rhodes’ Scholarships initial charter, which limited it to ‘white Christian men’.

Alternatively, maybe the world will change in the meantime, making your gift useless. At one end, humanity might destroy itself before your trust tries to do anything with the money. Or perhaps everyone in the future will be so fabulously wealthy, or the problems of the world already so overcome, that your philanthropy will no longer be able to do much good.

Are these concerns, all of them legitimate, enough to overcome the case in favour of patient philanthropy? [...]

  • Should we have a mixed strategy, where some altruists are patient and others impatient?

This suggests to me that 80k is, at least in that post, taking "patient philanthropy" to refer not just to a low or zero pure time preference, but instead to a low or zero rate of discounting overall, or to a favouring of giving/working later rather than now.

comment by MichaelA · 2020-04-10T06:20:43.396Z · score: 4 (1 votes) · EA(p) · GW(p)

Collection [EA · GW] of work on value drift

Review of 'value drift' estimates, and several new estimates [EA · GW] - Ben Todd, 2020

EA Survey 2018 Series: How Long Do EAs Stay in EA? [EA · GW] - Peter Hurford, 2019

Empirical data on value drift [EA · GW] - Joey Savoie, 2018

Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift [EA · GW] - Darius Meissner, 2018

A Qualitative Analysis of Value Drift in EA [EA · GW] - Marisa Jurczyk, 2020

Value Drift & How to Not Be Evil Part I & Part II - Daniel Gambacorta, 2019

Keeping everyone motivated: a case for effective careers outside of the highest impact EA organizations [EA · GW] - FJehn, 2019

EA Survey 2018 Series: Do EA Survey Takers Keep Their GWWC Pledge? [EA · GW] - Peter Hurford, 2019

Value drift in effective altruism - Effective Thesis, no date

Will Future Civilization Eventually Achieve Goal Preservation? - Brian Tomasik, 2017/2020

Let Values Drift [LW · GW] - G Gordon Worley III, 2019 (note: I haven't read this)

On Value Drift - Robin Hanson, 2018 (note: I haven't read this)

Somewhat relevant, but less so

Value uncertainty [LW · GW] - Michael Aird (me), 2020

An idea for getting evidence on value drift in EA [EA(p) · GW(p)] - Michael Aird, 2020

Estimating the Philanthropic Discount Rate [EA · GW] - Michael Dickens, 2020

The case for investing to give later [EA · GW] - Sjir Hoeijmakers, 2020

I intend to add to this list over time. If you know of other relevant work, please mention it in a comment. One place to check is the relevant EA Forum tag [? · GW]. (As of July 2020, this list contains everything with that tag, but that might change in future.)

See also my collection of EA analyses of how social movements rise, fall, can be influential, etc. [EA(p) · GW(p)]

comment by MichaelA · 2020-03-30T15:04:46.291Z · score: 4 (3 votes) · EA(p) · GW(p)

Collection [EA · GW] of sources related to dystopias and "robust totalitarianism"

The Precipice - Toby Ord (Chapter 5 has a section on Dystopian Scenarios)

The Totalitarian Threat - Bryan Caplan (if that link stops working, a link to a Word doc version can be found on this page) (some related discussion on the 80k podcast here; use the "find" function)

Reducing long-term risks from malevolent actors [EA · GW] - David Althaus and Tobias Baumann, 2020

The Centre for the Governance of AI’s research agenda - Allan Dafoe (this contains discussion of "robust totalitarianism", and related matters)

A shift in arguments for AI risk - Tom Sittler (this has a brief but valuable section on robust totalitarianism) (discussion of the overall piece here [LW · GW])

Existential Risk Prevention as Global Priority - Nick Bostrom (this discusses the concepts of "permanent stagnation" and "flawed realisation", and very briefly touches on their relevance to e.g. lasting totalitarianism)

The Future of Human Evolution - Bostrom, 2009 (I think some scenarios covered there might count as dystopias, depending on definitions)

The Vulnerable World Hypothesis - Bostrom, 2019

80,000 Hours interview with Tyler Cowen - 2018

Various works of fiction, most notably Orwell's 1984

Some sources on dictatorships/totalitarianism in general (without a focus on long-term future consequences)

Dikötter, F. (2019). How to Be a Dictator: The Cult of Personality in the Twentieth Century. Bloomsbury Publishing.

Glad, B. (2002). Why tyrants go too far: Malignant narcissism and absolute power. Political Psychology, 23(1), 1-2.*

Chang, J., & Halliday, J. (2007). Mao: The unknown story. Vintage.*

*Asterisks indicate I haven't read that source myself, and thus that the source might not actually be a good fit for this list.

I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

comment by MichaelA · 2020-04-08T08:51:39.965Z · score: 3 (2 votes) · EA(p) · GW(p)

Collection [EA · GW] of ways of classifying existential risk pathways/mechanisms

Each of the following works shows, or can be read as showing, a different model/classification scheme/taxonomy:

Personally, I think the model/classification scheme in Defence in Depth is probably the most useful. But I think at least a quick skim of the above sources is useful; I think they each provide an additional useful angle or tool for thought.

I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

Wait, exactly what are you actually collecting here?

The scope of this collection is probably best revealed by checking out the above sources.

But to further clarify, here are two things I don't mean, which aren't included in the scope:

  • Classifications into things like "AI risk vs biorisk", or "natural vs anthropogenic"
    • Such categorisation schemes are clearly very important, but they're also well-established and you probably don't need a list of sources that show them.
  • Classifications into different "types of catastrophe", such as Ord's distinction between extinction, unrecoverable collapse [EA(p) · GW(p)], and unrecoverable dystopia [EA(p) · GW(p)]
    • This is also very important, and maybe I should make such a collection at some point, but it's a separate matter to this.
comment by MichaelA · 2020-06-10T01:29:01.645Z · score: 2 (1 votes) · EA(p) · GW(p)

On a 2018 episode of the FLI podcast about the probability of nuclear war and the history of incidents that could've escalated to nuclear war, Seth Baum said:

a lot of the incidents were earlier within, say, the ’40s, ’50s, ’60s, and less within the recent decades. That gave me some hope that maybe things are moving in the right direction.

I think we could flesh out this idea as the following argument:

  • Premise 1. We know of fewer incidents that could've escalated to nuclear war from the 70s onwards than from the 40s-60s.
  • Premise 2. If we know of fewer such incidents from the 70s onwards than from the 40s-60s, this is evidence that there really were fewer incidents from the 70s onwards than from the 40s-60s.
  • Premise 3. If there were fewer such incidents from the 70s onwards than from the 40s-60s, the odds of nuclear war are lower than they were in the 40s-60s.
  • Conclusion. The odds of nuclear war are (probably) lower than they were in the 40s-60s.

I don't really have much independent knowledge regarding the first premise, but I'll take Baum's word for it. And the third premise seems to make sense.

But I wonder about the second premise, which Baum's statements seem to sort-of take for granted (which is fair enough, as this was just one quick, verbal statement from him). In particular, I wonder whether the observation "I know about fewer recent than older incidents" is actually what we'd expect to see even if the rate hadn't changed, simply because security-relevant secrets only gradually get released and filter into the public record. If so, should we avoid updating our beliefs about the rate based on that observation?

These are genuine rather than rhetorical questions. I don't know much about how we come to know about these sorts of incidents; if someone knows more, I'd appreciate their views on what we can make of knowing about fewer recent incidents.
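
To make the worry concrete, here's a minimal toy simulation (all numbers are made up and purely illustrative): if incidents occur at a constant rate but each one only becomes public after a random declassification lag, then, looking from today, we'd expect to know about fewer incidents from recent decades than from earlier ones even though the underlying rate never changed.

```python
import numpy as np

rng = np.random.default_rng(0)

CURRENT_YEAR = 2020
RATE = 1.0      # hypothetical: expected incidents per year, held constant over time
MEAN_LAG = 25   # hypothetical: mean years before an incident becomes publicly known

known_per_decade = {}
for year in range(1945, CURRENT_YEAR + 1):
    n_incidents = rng.poisson(RATE)                        # constant underlying rate
    lags = rng.exponential(MEAN_LAG, n_incidents)          # declassification delays
    n_known_now = int(np.sum(year + lags <= CURRENT_YEAR))  # only some are public by today
    decade = (year // 10) * 10
    known_per_decade[decade] = known_per_decade.get(decade, 0) + n_known_now

for decade, count in sorted(known_per_decade.items()):
    print(f"{decade}s: {count} publicly known incidents")
```

In runs like this, the counts of known incidents typically fall off toward recent decades purely because of the reporting lag, which is exactly the pattern Premise 2 would need to distinguish from a genuine decline in the underlying rate.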

This also seems relevant to some points made earlier on that podcast. In particular, Robert de Neufville said:

We don’t have incidents from China’s nuclear program, but that doesn’t mean there weren’t any, it just means it’s hard to figure out, and that scenario would be really interesting to do more research on.

(Note: This was just one of many things Baum said, and was a quick, verbal comment. He may in reality already have thought in depth about the questions I raised. And in any case, he definitely seems to think the risk of nuclear war is significant enough to warrant a lot of attention.)

comment by MichaelA · 2020-05-08T07:07:11.250Z · score: 2 (2 votes) · EA(p) · GW(p)

Collection [EA · GW] of sources relevant to the idea of “moral weight”

Comparisons of Capacity for Welfare and Moral Status Across Species [EA · GW] - Jason Schukraft, 2020

Preliminary thoughts on moral weight [LW · GW] - Luke Muehlhauser, 2018

Should Longtermists Mostly Think About Animals? [EA · GW] - Abraham Rowe, 2020

2017 Report on Consciousness and Moral Patienthood - Luke Muehlhauser, 2017 (the idea of “moral weights” is addressed briefly in a few places)

Notes

As I’m sure you’ve noticed, this is a very small collection. I intend to add to it over time. If you know of other relevant work, please mention it in a comment.

(ETA: The following speculation appears false; see comments below.) It also appears possible this term was coined, for this particular usage, by Muehlhauser, and that in other communities other labels are used to discuss similar concepts. Please let me know if you have any information about either of those speculations of mine.

See also my collection of sources relevant to moral circles, moral boundaries, or their expansion [EA(p) · GW(p)] and my collection of evidence about views on longtermism, time discounting, population ethics, etc. among non-EAs [EA(p) · GW(p)].

comment by Jason Schukraft · 2020-05-08T13:17:13.595Z · score: 15 (5 votes) · EA(p) · GW(p)

A few months ago I compiled a bibliography of academic publications about comparative moral status. It's not exhaustive and I don't plan to update it, but it might be a good place for folks to start if they're interested in the topic.

comment by MichaelA · 2020-05-08T23:40:20.545Z · score: 2 (2 votes) · EA(p) · GW(p)

Ah great, thanks!

Do you happen to recall if you encountered the term "moral weight" outside of EA/rationality circles? The term isn't in the titles in the bibliography (though it may be in the full papers), and I see one that says "Moral status as a matter of degree?", which would seem to refer to a similar idea. So this seems like additional weak evidence that "moral weight" might be an idiosyncratic term in the EA/rationality community (whereas when I first saw Muehlhauser use it, I assumed he took it from the philosophical literature).

comment by Jason Schukraft · 2020-05-09T01:36:49.792Z · score: 13 (4 votes) · EA(p) · GW(p)

The term 'moral weight' is occasionally used in philosophy (David DeGrazia uses it from time to time, for instance) but not super often. There are a number of closely related but conceptually distinct issues that often get lumped together under the heading moral weight:

  1. Capacity for welfare, which is how well or poorly a given animal's life can go
  2. Average realized welfare, which is how well or poorly the life of a typical member of a given species actually goes
  3. Moral status, which is how much the welfare of a given animal matters morally

Differences in any of those three things might generate differences in how we prioritize interventions that target different species.

Rethink Priorities is going to release a report on this subject in a couple of weeks. Stay tuned for more details!

comment by MichaelA · 2020-05-09T09:35:31.646Z · score: 2 (2 votes) · EA(p) · GW(p)

Thanks, that's really helpful! I'd been thinking there's an important distinction between that "capacity for welfare" idea and that "moral status" idea, so it's handy to know the standard terms for that.

Looking forward to reading that!