Posts

What questions could COVID-19 provide evidence on that would help guide future EA decisions? 2020-03-27T05:51:25.107Z · score: 7 (2 votes)
What's the best platform/app/approach for fundraising for things that aren't registered nonprofits? 2020-03-27T03:05:46.791Z · score: 5 (1 votes)
Fundraising for the Center for Health Security: My personal plan and open questions 2020-03-26T16:53:45.549Z · score: 13 (6 votes)
Will the coronavirus pandemic advance or hinder the spread of longtermist-style values/thinking? 2020-03-19T06:07:03.834Z · score: 9 (5 votes)
[Link and commentary] Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society 2020-03-14T09:04:10.955Z · score: 14 (5 votes)
Suggestion: EAs should post more summaries and collections 2020-03-09T10:04:01.629Z · score: 20 (12 votes)
Quotes about the long reflection 2020-03-05T07:48:36.639Z · score: 41 (20 votes)
Where to find EA-related videos 2020-03-02T13:40:18.971Z · score: 15 (7 votes)
Causal diagrams of the paths to existential catastrophe 2020-03-01T14:08:45.344Z · score: 33 (15 votes)
Morality vs related concepts 2020-02-10T08:02:10.570Z · score: 14 (9 votes)
Differential progress / intellectual progress / technological development 2020-02-07T15:38:13.544Z · score: 28 (16 votes)
What are information hazards? 2020-02-05T20:50:25.882Z · score: 11 (10 votes)
Four components of strategy research 2020-01-30T19:08:37.244Z · score: 16 (12 votes)
When to post here, vs to LessWrong, vs to both? 2020-01-27T09:31:37.099Z · score: 12 (6 votes)
Potential downsides of using explicit probabilities 2020-01-20T02:14:22.150Z · score: 23 (13 votes)
[Link] Charity Election 2020-01-19T08:02:09.114Z · score: 8 (5 votes)
Making decisions when both morally and empirically uncertain 2020-01-02T07:08:26.681Z · score: 11 (5 votes)
Making decisions under moral uncertainty 2020-01-01T13:02:19.511Z · score: 33 (12 votes)
MichaelA's Shortform 2019-12-22T05:35:17.473Z · score: 4 (1 votes)
Are there other events in the UK before/after EAG London? 2019-08-11T06:38:12.163Z · score: 9 (7 votes)

Comments

Comment by michaela on Fundraising for the Center for Health Security: My personal plan and open questions · 2020-03-31T11:35:05.900Z · score: 5 (3 votes) · EA · GW

I'm planning to do it via SoGive in the next few days. I'll report back once I see how much it raised and if there were any hints of people coming to understand more about effective giving/major risks (seems very doubtful, but worth a shot!).

Comment by michaela on Toby Ord’s ‘The Precipice’ is published! · 2020-03-30T16:42:37.002Z · score: 3 (2 votes) · EA · GW

I got it on Google Play: https://play.google.com/store/books/details/Toby_Ord_The_Precipice?id=W7rEDwAAQBAJ

Comment by michaela on MichaelA's Shortform · 2020-03-30T15:04:46.291Z · score: 1 (1 votes) · EA · GW

Collection of sources related to dystopias and "robust totalitarianism"

The Precipice - Toby Ord (Chapter 5 has a full section on Dystopian Scenarios)

The Totalitarian Threat - Bryan Caplan (a link to a Word doc version can be found on this page) (some related discussion on the 80k podcast here; use the "find" function)

The Centre for the Governance of AI’s research agenda - Allan Dafoe (this contains discussion of "robust totalitarianism", and related matters)

A shift in arguments for AI risk - Tom Sittler (this has a brief but valuable section on robust totalitarianism) (discussion of the overall piece here)

Existential Risk Prevention as Global Priority - Nick Bostrom (this discusses the concepts of "permanent stagnation" and "flawed realisation", and very briefly touches on their relevance to e.g. lasting totalitarianism)

I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

Comment by michaela on MichaelA's Shortform · 2020-03-30T02:07:53.101Z · score: 1 (1 votes) · EA · GW

Agreed.

These seem to often be examples of hedge drift, and their potential consequences seem like examples of memetic downside risks.

Comment by michaela on MichaelA's Shortform · 2020-03-29T06:48:32.151Z · score: 1 (1 votes) · EA · GW

What are the implications of the offence-defence balance for trajectories of violence?

Questions: Is a change in the offence-defence balance part of why interstate (and intrastate?) conflict appears to have become less common? Does this have implications for the likelihood and trajectories of conflict in future (and perhaps by extension x-risks)?

Epistemic status: This post is unpolished, un-researched, and quickly written. I haven't looked into whether existing work has already explored questions like these; if you know of any such work, please comment to point me to it.

Background/elaboration: Pinker argues in The Better Angels of Our Nature that many types of violence have declined considerably over history. I'm pretty sure he notes that these trends are neither obviously ephemeral nor inevitable. But the book, and other research pointing in similar directions, seem to me (and, I believe, to others) to at least weakly support the ideas that:

  • if we avoid an existential catastrophe, things will generally continue to get better
  • apart from the potential destabilising effects of technology, conflict seems to be trending downwards, somewhat reducing the risks of e.g. great power war, and by extension e.g. malicious use of AI (though of course a partial reduction in risks wouldn't necessarily mean we should ignore the risks)

But How Does the Offense-Defense Balance Scale? (by Garfinkel and Dafoe, of the Center for the Governance of AI; summary here) says:

It is well-understood that technological progress can impact offense-defense balances. In fact, perhaps the primary motivation for developing the concept has been to understand the distinctions between different eras of military technology.
For instance, European powers’ failure to predict the grueling attrition warfare that would characterize much of the First World War is often attributed to their failure to recognize that new technologies, such as machine guns and barbed wire, had shifted the European offense-defense balance for conquest significantly toward defense.

And:

holding force sizes fixed, the conventional wisdom holds that a conflict with mid-nineteenth century technology could be expected to produce a better outcome for the attacker than a conflict with early twentieth century technology. See, for instance, Van Evera, ‘Offense, Defense, and the Causes of War’.

The paper tries to use these sorts of ideas to explore how emerging technologies will affect trajectories, likelihood, etc. of conflict. E.g., the very first sentence is: "The offense-defense balance is a central concept for understanding the international security implications of new technologies."

But it occurs to me that one could also do historical analysis of just how much these effects have played a role in the sort of trends Pinker notes. From memory, I don't think Pinker discusses this possible factor in those trends. If this factor played a major role, then perhaps those trends are substantially dependent on something "we" haven't been thinking about as much - perhaps we've wondered about whether the factors Pinker discusses will continue, whereas they're less necessary and less sufficient than we thought for the overall trend (decline in violence/interstate conflict) that we really care about.

And at a guess, that might mean that that trend is more fragile or "conditional" than we might've thought. It might mean that we really really can't rely on that "background trend" continuing, or at least somewhat offsetting the potentially destabilising effects of new tech - perhaps a lot of the trend, or the last century or two of it, was largely about how tech changed things, so if the way tech changes things changes, the trend could very easily reverse entirely.

I'm not at all sure about any of that, but it seems it would be important and interesting to explore. Hopefully someone already has, in which case I'd appreciate someone pointing me to that exploration.

(Also note that what the implications of a given offence-defence balance even are is apparently a somewhat complicated/debatable matter. E.g., Garfinkel and Dafoe write: "While some hold that shifts toward offense-dominance obviously favor conflict and arms racing, this position has been challenged on a number of grounds. It has even been suggested that shifts toward offense-dominance can increase stability in a number of cases.")

Comment by michaela on How tractable is changing the course of history? · 2020-03-28T14:30:22.663Z · score: 1 (1 votes) · EA · GW

I just finished reading the full post this links to. Interesting work, thanks for posting it.

I'm not sure if you're still pursuing this sort of question or plan to return to it later, but if you are, or if other readers are, a book I imagine would be quite relevant and interesting is Tetlock and Belkin's Counterfactual Thought Experiments in World Politics. The Amazon description reads:

Political scientists often ask themselves what might have been if history had unfolded differently: if Stalin had been ousted as General Party Secretary or if the United States had not dropped the bomb on Japan. Although scholars sometimes scoff at applying hypothetical reasoning to world politics, the contributors to this volume-including James Fearon, Richard Lebow, Margaret Levi, Bruce Russett, and Barry Weingast-find such counterfactual conjectures not only useful, but necessary for drawing causal inferences from historical data. Given the importance of counterfactuals, it is perhaps surprising that we lack standards for evaluating them. To fill this gap, Philip Tetlock and Aaron Belkin propose a set of criteria for distinguishing plausible from implausible counterfactual conjectures across a wide range of applications. The contributors to this volume make use of these and other criteria to evaluate counterfactuals that emerge in diverse methodological contexts including comparative case studies, game theory, and statistical analysis. Taken together, these essays go a long way toward establishing a more nuanced and rigorous framework for assessing counterfactual arguments about world politics in particular and about the social sciences more broadly.

Unfortunately I haven't read this book, and doubt I'll get to it anytime soon, partly because I don't think there's an audiobook version. But it sounds like it'd be quite useful for the topic of how tractable changing the course of history is, so I'd love it if someone were to read the book and summarise/apply its most relevant lessons.

Comment by michaela on What's the best platform/app/approach for fundraising for things that aren't registered nonprofits? · 2020-03-28T02:24:42.527Z · score: 1 (1 votes) · EA · GW

Great! I've sent Sanjay an email - my thanks to both of you.

Comment by michaela on What questions could COVID-19 provide evidence on that would help guide future EA decisions? · 2020-03-27T06:20:00.373Z · score: 1 (1 votes) · EA · GW

Here's another example of a prior statement of something like the idea I'm proposing should be investigated. This is from Carrick Flynn talking about AI policy and strategy careers:

If you are in this group whose talents and expertise are outside of these narrow areas, and want to contribute to AI strategy, I recommend you build up your capacity and try to put yourself in an influential position. This will set you up well to guide high-value policy interventions as clearer policy directions emerge. [...]
Depending on how slow these “entangled” research questions are to unjam, and on the timelines of AI development, there might be a very narrow window of time in which it will be necessary to have a massive, sophisticated mobilization of altruistic talent. This makes being prepared to mobilize effectively and take impactful action on short notice extremely valuable in expectation. (emphasis in original)

And Richard Ngo discusses similar ideas, again in relation to AI policy.

Comment by michaela on What posts do you want someone to write? · 2020-03-27T06:10:55.778Z · score: 1 (1 votes) · EA · GW

Posts investigating/discussing any of the questions listed here. These are questions which would be "valuable for someone to research, or at least theorise about, that the current pandemic in some way 'opens up' or will provide new evidence about, and that could inform EAs’ future efforts and priorities".

If anyone has thought of such questions, please add them as answers to that post.

An example of such a question which I added: "What lessons can be drawn from [events related to COVID-19] for how much to trust governments, mainstream experts, news sources, EAs, rationalists, mathematical modelling by people without domain-specific expertise, etc.? What lessons can be drawn for debates about inside vs outside views, epistemic modesty, etc.?"

Comment by michaela on What questions could COVID-19 provide evidence on that would help guide future EA decisions? · 2020-03-27T06:05:17.013Z · score: 3 (2 votes) · EA · GW

Some people have suggested that one way to have a major, long-term influence on the world is for an intellectual movement to develop a body of ideas and have adherents to those ideas in respected positions (e.g., university professorships, high-level civil service or political staffer roles), with these ideas likely lying dormant for a while, but then potentially being taken up when there are major societal disruptions of some sort. I’ve heard these described as making sure there are good ideas “lying around” when an unexpected crisis occurs.

As an example, Kerry Vaughan describes how stagflation “helped to set the stage for alternatives to Keynesian theories to take center stage.” He also quotes Milton Friedman as saying: “the role of thinkers, I believe, is primarily to keep options open, to have available alternatives, so when the brute force of events make a change inevitable, there is an alternative available to change it.”

What evidence did COVID-19, reactions to it, and reactions that seem likely to occur in future, provide for or against these ideas? For example:

  • Was there a major appetite in governments for lasting changes that EA-aligned (or just very sensible and forward-thinking) civil servants were able to seize upon?
  • Were orgs like FHI, CSER, and GCRI, or other aligned academics, called upon by governments, media, etc., in a way that (a) seemed to depend on them having spent years developing rigorous versions of ideas about GCRs, x-risks, etc., and (b) seems likely to shift narratives, decisions, etc. in a lasting way?

And to more precisely inform future decisions, it’d be good to get some sense of:

  • How likely is it that similar benefits could’ve been seized by people “switching into” those pathways, roles, etc. during the crisis, without having built up the credibility, connections, research, etc. in advance?
  • If anyone did manage to influence substantial changes that seem likely to last, what precise factors, approaches, etc. seemed to help them do so?
  • Were there apparent instances where someone was almost able to influence such a change? If so, what seemed to block them? How could we position ourselves in future to avoid such blockages?

Comment by michaela on What questions could COVID-19 provide evidence on that would help guide future EA decisions? · 2020-03-27T06:01:32.524Z · score: 3 (2 votes) · EA · GW

Some people have previously suggested that "warning shots" in the form of things somewhat like, but less extreme than, global or existential catastrophes could increase the extent to which people prepare for future GCRs and existential risks.

What evidence does/did COVID-19, reactions to it, and reactions to it that seem likely to occur in future provide for or against that idea?

And what evidence do these things give about how well society generalises the lesson from such warning shots? E.g., does/will society learn from COVID-19 that it’s important to make substantial preparations for other types of low-likelihood, high-stakes possibilities like AI risk? This could be seen as trying to gather more evidence (or at least thoughts) relevant to the following statement from Nick Beckstead (2015):

Overspecificity of reactions to warning shots: It may be true that, e.g., the 1918 flu pandemic served as a warning shot for more devastating pandemics that happened in the future. For example, it frequently gets invoked in support of arguments for enhancing biosecurity. But it seems significantly less true that the 1918 flu pandemic served as a warning shot for risks from nuclear weapons, and it is not clear that the situation would change if one were talking about a pandemic more severe than the 1918 flu pandemic.

Comment by michaela on What questions could COVID-19 provide evidence on that would help guide future EA decisions? · 2020-03-27T05:59:22.491Z · score: 3 (2 votes) · EA · GW

Will governments and broader society now adequately prioritise pandemics, or some subset of them such as natural pandemics or respiratory disease pandemics? Does this mean that pandemics (or that subset) are mostly “covered”, and thus that “the EA portfolio” should move instead towards other things (e.g., any still-overlooked types of pandemics, other x-risks, etc.)?

Conversely, should EA now place more emphasis on pandemics, because the “window” or “appetite” for people to work on such matters is currently larger than normal? If so, how long will that last? (E.g., if someone is just starting their undergrad, should they plan with that window/appetite in mind, or should they assume attention will shift away again by the time they’re in a position to take relevant roles?)

Comment by michaela on What questions could COVID-19 provide evidence on that would help guide future EA decisions? · 2020-03-27T05:57:24.793Z · score: 7 (5 votes) · EA · GW

What lessons can be drawn from these events for how much to trust governments, mainstream experts, news sources, EAs, rationalists, mathematical modelling by people without domain-specific expertise, etc.? What lessons can be drawn for debates about inside vs outside views, epistemic modesty, etc.?

E.g., I think these events should probably update me somewhat further towards:

  • expecting governments to think and/or communicate quite poorly about low-probability, high-stakes events.
  • believing in something like, or a moderate form of, "Rationalist/EA exceptionalism"
  • trusting inside-views that seem clever even if they're from non-experts and I lack the expertise to evaluate them

But I'm still wary of extreme versions of those conclusions. And I also worry about something like a "stopped clock is right twice a day" situation - perhaps this was something like a "fluke", and "early warnings" from the EA/rationalist community would typically not turn out to seem so prescient.

(I believe there’s been a decent amount of discussion of this sort of thing on LessWrong.)

Comment by michaela on Fundraising for the Center for Health Security: My personal plan and open questions · 2020-03-27T03:07:45.840Z · score: 1 (1 votes) · EA · GW

Good point. It's indeed not possible to do a "charity" fundraiser for CHS on Facebook. But I can set up a "Fundraiser for personal causes", collect the money myself, and then donate that. (I've tweaked the post to reflect that being my new tentative plan A.)

I've now asked a more general question of what the best platform/app/approach would be for situations like this (where the intended recipient is not a registered nonprofit). I'd be interested in people's thoughts on that.

Comment by michaela on What's the best platform/app/approach for fundraising for things that aren't registered nonprofits? · 2020-03-27T03:06:20.737Z · score: 1 (1 votes) · EA · GW

Two options that occur to me:

1) Use that Facebook "personal cause" fundraiser feature, and say at the end of the description something like:

(Note: Because the CHS is part of a university, it's not a registered non-profit in the usual sense, and thus I can't set up a Facebook fundraiser where the money goes directly to them. Instead, this would give me the money, and then I'd donate it, get the money matched, and post receipts in here to confirm.
If you can handle the effort of a few clicks, you could donate directly at http://www.centerforhealthsecurity.org/giving
Donating directly also prevents Facebook taking a cut of 0.33AUD and 1.77%. If you do that, please post here or message me, so I can still feel all good about myself, which is of course what really matters.)

2) Just make a FB post with the description I would've used, and the link to donate directly, and encourage people to comment if they've donated. And perhaps I could share the post again later to bring it back to the top of people's feeds.

I'd be interested in people's thoughts on those options.

Comment by michaela on Suggestion: EAs should post more summaries and collections · 2020-03-26T14:33:00.505Z · score: 3 (2 votes) · EA · GW

I agree that having some central directory or collection of the summaries/collections would be ideal. And I think all of those suggestions for achieving that are good.

Also, I think EA Concepts is great. And I think people who haven't checked it out should, or should keep it in mind when they encounter a concept they're unfamiliar with. (Conceptually also performs a similar function. It isn't explicitly EA-focused, but it was made by EAs and covers a lot of concepts EAs like to use.) However:

  • EA Concepts doesn't cover everything
  • The entries are quite short, which is of course valuable in some ways, but also means there could be value in longer summaries that build on, go beyond, and add detail to what's in those entries
  • The entries seem slightly old now, so they may not reflect the latest work, nor contain links to it. A forum post will later suffer the same issue, but it allows comments, so people could comment to add discussion of or links to more recent work. (That said, it seems relatively rare for people to comment on older posts, which I think is a shame.)

This suggests that one possible solution would be for the people behind EA Concepts to crowdsource (and then vet) new entries, and/or updated versions of existing entries.

Comment by michaela on MichaelA's Shortform · 2020-03-26T10:58:13.599Z · score: 4 (3 votes) · EA · GW

Collection of EA analyses of how social movements rise, fall, can be influential, etc.

Movement collapse scenarios - Rebecca Baron

Why do social movements fail: Two concrete examples. - NunoSempere

What the EA community can learn from the rise of the neoliberals - Kerry Vaughan

Some of the Sentience Institute's research, such as its "social movement case studies" and the post How tractable is changing the course of history?

These aren't quite "EA analyses", but Slate Star Codex has several relevant book reviews and other posts, such as https://slatestarcodex.com/2019/03/18/book-review-inventing-the-future/

It appears Animal Charity Evaluators did relevant research, but I haven't read it, they described it as having been "of variable quality", and they've discontinued it.

Notes

I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.

Also, I'm aware that there are a lot of non-EA analyses of these topics. The reasons I'm collecting only EA analyses here are that:

  • their precise focuses or methodologies may be more relevant to other EAs than would be the case with non-EA analyses
  • links to non-EA work can be found in most of the things I list here
  • I'd guess that many collections of non-EA analyses of these topics already exist (e.g., in reference lists)

Comment by michaela on What to know before talking with journalists about EA · 2020-03-26T06:55:23.994Z · score: 3 (2 votes) · EA · GW

This is a very good point. Also, one of the coauthors of that paper was my Honours supervisor (Ullrich Ecker), and I'm pretty confident that that general body of research holds up pretty well (though not 100% that it holds up perfectly). That's based primarily on my impression of the studies' methodologies (having read quite a few), on there being a variety of studies from different authors finding similar results, and on there being plausible theories to explain the findings. On the other hand, I don't think I've seen replications using exactly the same methodologies as earlier studies (more like tweaking them in small ways and seeing how effects generalise) - not sure that's a problem; just saying.

I've also briefly discussed the relevance of this area of research to EA's epistemic norms here, and I may try to go into more detail on that in future if I have time and people think it'd be valuable.

Comment by michaela on MichaelA's Shortform · 2020-03-26T06:01:13.142Z · score: 9 (7 votes) · EA · GW

My review of Tom Chivers' review of Toby Ord's The Precipice

I thought The Precipice was a fantastic book; I'd highly recommend it. And I agree with a lot of Chivers' review of it for The Spectator. I think Chivers captures a lot of the important points and nuances of the book, often with impressive brevity and accessibility for a general audience. (I've also heard good things about Chivers' own book.)

But there are three parts of Chivers' review that seem to me like they're somewhat un-nuanced, overstate/oversimplify the case for certain things, or could come across as overly alarmist.

I think Ord is very careful to avoid such pitfalls in The Precipice, and I'd guess that falling into such pitfalls is an easy and common way for existential-risk-related outreach efforts to have less positive impacts than they otherwise could, or perhaps even backfire. I understand that a review gives one far less space to work with than a book, so I don't expect anywhere near the same level of nuance and detail. But I think that overconfident or overdramatic statements of uncertain matters (for example) can still be avoided.

I'll now quote and comment on the specific parts of Chivers' review that led to that view of mine.

An alleged nuclear close call

Firstly, in my view, there are three flaws with the opening passage of the review:

Humanity has come startlingly close to destroying itself in the 75 or so years in which it has had the technological power to do so. Some of the stories are less well known than others. One, buried in Appendix D of Toby Ord’s splendid The Precipice, I had not heard, despite having written a book on a similar topic myself. During the Cuban Missile Crisis, a USAF captain in Okinawa received orders to launch nuclear missiles; he refused to do so, reasoning that the move to DEFCON 1, a war state, would have arrived first.
Not only that: he sent two men down the corridor to the next launch control centre with orders to shoot the lieutenant in charge there if he moved to launch without confirmation. If he had not, I probably would not be writing this — unless with a charred stick on a rock.

First issue: Toby Ord makes it clear that "the incident I shall describe has been disputed, so we cannot yet be sure whether it occurred." Ord notes that "others who claimed to have been present in the Okinawa missile bases at the time" have since challenged this account, although there is also "some circumstantial evidence" supporting the account. Ultimately, Ord concludes "In my view this alleged incident should be taken seriously, but until there is further confirmation, no one should rely on it in their thinking about close calls." I therefore think Chivers should've made it clear that this is a disputed story.

Second issue: My impression from the book is that, even in the account of the person claiming this story is true, the two men sent down the corridor did not turn out to be necessary to avert the launch. (That said, the book isn't explicit on the point, so I'm unsure.) Ord writes that Bassett "telephoned the Missile Operations Centre, asking the person who radioed the order to either give the DEFCON 1 order or issue a stand-down order. A stand-down order was quickly given and the danger was over." That is the end of Ord's retelling of the account itself (rather than discussion of the evidence for or against it).

Third issue: I think it's true that, if a nuclear launch had occurred in that scenario, a large-scale nuclear war probably would've occurred (though it's not guaranteed, and it's hard to say). And if that happened, it seems technically true that Chivers probably wouldn't have written this review. But I think that's primarily because history would've just unfolded very, very differently. Chivers seems to imply this is because civilization probably would've collapsed, and done so so severely that even technologies such as pencils would be lost and that they'd still be lost all these decades on (such that, if he was writing this review, he'd do so with "a charred stick on a rock").

This may seem like me taking a bit of throwaway rhetoric or hyperbole too seriously, and that may be so. But I think among the key takeaways of the book were vast uncertainties around whether certain events would actually lead to major catastrophes (e.g., would a launch lead to a full-scale nuclear war?), whether catastrophes would lead to civilizational collapse (e.g., how severe and long-lasting would the nuclear winter be, and how well would we adapt?), how severe collapses would be (e.g., to pre-industrial or pre-agricultural levels?), and how long-lasting collapses would be (from memory, Ord seems to think recovery is in fact fairly likely).

So I worry that a sentence like that one makes the book sound somewhat alarmist, doomsaying, and naive/simplistic, whereas in reality it seems to me quite nuanced and open about the arguments for why existential risk from certain sources may be "quite low" - and yet still extremely worth attending to, given the stakes.

To be fair, or to make things slightly stranger, Chivers does later say:

Perhaps surprisingly, [Ord] doesn’t think that nuclear war would have been an existential catastrophe. It might have been — a nuclear winter could have led to sufficiently dreadful collapse in agriculture to kill everyone — but it seems unlikely, given our understanding of physics and biology.

(Also, as an incredibly minor point, I think the relevant appendix was Appendix C rather than D. But maybe that was different in different editions or in an early version Chivers saw.)

"Numerically small"

Secondly, Chivers writes:

[Ord] points out that although the difference between a disaster that kills 99 per cent of us and one that kills 100 per cent would be numerically small, the outcome of the latter scenario would be vastly worse, because it shuts down humanity’s future.

I don't recall Ord ever suggesting that the death of 1 per cent of the population would be "numerically small". Ord very repeatedly emphasises and reminds the reader that something really can count as deeply or even unprecedentedly awful, and well worth expending resources to avoid, even if it's not an existential catastrophe. This seems to me a valuable thing to do; otherwise the x-risk community could easily be seen as coldly dismissive of any sub-existential catastrophes. (Plus, such catastrophes really are very bad and well worth expending resources to avoid - this is something I would've said anyway, but it seems especially pertinent in the current pandemic.)

I think saying "the difference between a disaster that kills 99 per cent of us and one that kills 100 per cent would be numerically small" cuts against that goal, and again could paint Ord as more simplistic or extremist than he really is.

"Blowing ourselves up"

Finally (for the purpose of my critiques), Chivers writes:

We could live for a billion years on this planet, or billions more on millions of other planets, if we manage to avoid blowing ourselves up in the next century or so.

To me, "avoid blowing ourselves up" again sounds quite informal or naive or something like that. It doesn't leave me with the impression that the book will be a rigorous and nuanced treatment of the topic. Plus, Ord isn't primarily concerned with us "blowing ourselves up" - the specific risks he sees as the largest are unaligned AI, engineered pandemics, and "unforeseen anthropogenic risk".

And even in the case of nuclear war, Ord is quite clear that it's the nuclear winter that's the largest source of existential risk, rather than the explosions themselves (though of course the explosions are necessary for causing such a winter). In fact, Ord writes "While one often hears the claim that we have enough nuclear weapons to destroy the world many times over, this is loose talk." (And he explains why this is loose talk.)

So again, this seems like a case where Ord actively separates his clear-headed analysis of the risks from various naive, simplistic, alarmist ideas that are somewhat common among some segments of the public, but where Chivers' review makes it sound (at least to me) like the book will match those sorts of ideas.

All that said, I should again note that I thought the review did a lot right. In fact, I have no quibbles at all with anything from that last quote onwards.

Comment by michaela on What are examples of EA work being reviewed by non-EA researchers? · 2020-03-24T09:38:54.714Z · score: 3 (2 votes) · EA · GW

Ah, that makes sense. I was thinking more about the detailed points reviewers might make about specifics from particular EA research, rather than getting data on the general quality of EA research to inform how seriously to take other such research (which also seems very/more valuable).

Comment by michaela on What posts do you want someone to write? · 2020-03-24T08:57:49.615Z · score: 1 (1 votes) · EA · GW

That all seems to make sense.

Comment by michaela on What are examples of EA work being reviewed by non-EA researchers? · 2020-03-24T08:52:27.900Z · score: 3 (2 votes) · EA · GW

Good question!

Does anyone have good examples of respected* scholars who have reviewed EA research and either praised it highly or found it lackluster? 

Presumably you'd also be interested in examples where such scholars reviewed EA research and came to a conclusion in between high praise and finding it lackluster? I expect most academics find a lot of work in general to be somewhere around just "pretty good".

Comment by michaela on What posts do you want someone to write? · 2020-03-24T08:48:00.920Z · score: 3 (2 votes) · EA · GW

I also think this'd be useful.

Though I wonder why you suggest that someone should ask these questions of a lot of EA orgs in particular. Did you also mean orgs that aren't explicitly "EA orgs" but that many EAs see as high-value donation opportunities? And is it possible it'd also be valuable to ask non-EA foundations about their practices and thoughts on this matter, at least as an interesting, quite different example?

Comment by michaela on What posts do you want someone to write? · 2020-03-24T08:02:09.818Z · score: 1 (1 votes) · EA · GW

I read in an as-yet-unpublished post that the best approach for getting published in a major outlet without being on their staff is not to just write something and then send it to various publications, but rather to pick an outlet and optimise the piece (or versions of it) for that outlet's style, topic choices, readership, etc. (I'm not sure what the evidence base for that claim was, and have 0 relevant knowledge of my own.)

If that is a good approach, one could still potentially pick a few outlets and write somewhat different versions for each, rather than putting all one's eggs in one basket. Or write one optimised version at a time, and not invest additional effort until that one is rejected. And one version could also be posted to the EA Forum and/or Medium and/or similar places in the meantime. (Unless that would reduce the odds of publication by a major outlet?)

Comment by michaela on The best places to donate for COVID-19 · 2020-03-23T17:04:33.248Z · score: 8 (3 votes) · EA · GW

Thanks for researching and writing this, and for doing so so quickly. I don't feel I'm in much of a position to comment on this post's accuracy or reasoning, but I can at least say it seems useful. Partly informed by this, I currently plan to donate a small amount to, and fundraise for, the Center for Health Security (as outlined here).

Comment by michaela on Is COVID an opportunity for non-EAs to give effectively? · 2020-03-23T16:55:48.903Z · score: 1 (1 votes) · EA · GW

Personally, I think the Center for Health Security might be a pretty good bet, as they genuinely are doing prominent COVID-related work, but were also already an excellent donation opportunity from an x-risk/longtermist perspective (in my view, based largely on Founders Pledge's recommendation and Open Phil's donations, rather than separate new info of my own). So I'd guess that CHS passes the "related to COVID" test for non-EAs.

CHS may fail the "doing something tangible and easy to understand" test. If you think that'd be an issue for your networks, maybe the best option from Sanjay's post would be Univursa Health or Development Media International (just based on my impression after reading that post - I have no prior knowledge of Univursa).

I would also be cautious about just trying to point out to non-EAs that non-COVID things will be especially neglected, as they may find that reasoning callous or misguided if their first exposure to it is during this crisis. But you could try to first suggest what the best COVID/pandemic-related donation opportunities are, and then point out in a very non-pushy way how other things may be especially neglected right now, and thus especially valuable to donate to. Sort of like an intellectual point that they can take or leave, with you having first accepted and respected their starting point of interest in COVID specifically.

This is roughly what I plan to do in a Facebook fundraiser for CHS. Inspired partly by your question, I wrote up my planned text for the fundraiser, my rationale for it, and my open questions here.

Note: I lean towards longtermism. For people more convinced by arguments for other cause areas such as global health and development or animal welfare, I think there's probably more of a real tension between (a) trying to gently steer people interested in supporting work on COVID towards more effective donation opportunities, versus (b) recommending what you truly think is at least highly effective.

But maybe (not sure) this would be a time for just doing (a) anyway, given that there's a lot of scope for that right now. This could be based on moral uncertainty, or could be a sort of moral trade with longtermists. Alongside this, such people could personally continue to donate their resources to whatever they think is highest value.

Comment by michaela on Advice for getting the most out of one-on-ones · 2020-03-21T10:46:40.916Z · score: 4 (4 votes) · EA · GW

This seems like great advice to me. This part particularly rings true:

First, speaking from experience, I find that EAs are more likely than average to hold a meeting with you even if you don't have anything tangible to offer them. When you think about it, by helping you have more of an impact, they're also increasing their own impact, which is motivating for most EAs. Don't let not having anything to offer immediately keep you from reaching out to someone you think you could have a valuable conversation with!

I was very surprised by how many "EA celebrities"* were happy to meet with me at EAG, despite me being pretty new to EA and not having much to offer them. And they seemed not just begrudgingly willing based on a cost-benefit analysis of the possibility they'd increase my impact, but genuinely enthusiastic about being helpful, asking me about my current plans, etc.

So definitely don't be too shy about reaching out to people!

*Source: My own strange, nerdy perceptions, rather than language anyone else has used :D

Comment by michaela on MichaelA's Shortform · 2020-03-20T15:18:33.194Z · score: 1 (1 votes) · EA · GW

Interesting example: Leo Szilard and cobalt bombs

In The Precipice, Toby Ord mentions the possibility of "a deliberate attempt to destroy humanity by maximising fallout (the hypothetical cobalt bomb)" (though he notes such a bomb may be beyond our current abilities). In a footnote, he writes that "Such a 'doomsday device' was first suggested by Leo Szilard in 1950". Wikipedia similarly says:

The concept of a cobalt bomb was originally described in a radio program by physicist Leó Szilárd on February 26, 1950. His intent was not to propose that such a weapon be built, but to show that nuclear weapon technology would soon reach the point where it could end human life on Earth, a doomsday device. Such "salted" weapons were requested by the U.S. Air Force and seriously investigated, but not deployed.[citation needed] [...]
The Russian Federation has allegedly developed cobalt warheads for use with their Status-6 Oceanic Multipurpose System nuclear torpedoes. However many commentators doubt that this is a real project, and see it as more likely to be a staged leak to intimidate the United States.

That's the extent of my knowledge of cobalt bombs, so I'm poorly placed to evaluate that action by Szilard. But this at least looks like it could be an unusually clear-cut case of one of Bostrom's subtypes of information hazards:

Attention hazard: The mere drawing of attention to some particularly potent or relevant ideas or data increases risk, even when these ideas or data are already “known”.
Because there are countless avenues for doing harm, an adversary faces a vast search task in finding out which avenue is most likely to achieve his goals. Drawing the adversary’s attention to a subset of especially potent avenues can greatly facilitate the search. For example, if we focus our concern and our discourse on the challenge of defending against viral attacks, this may signal to an adversary that viral weapons—as distinct from, say, conventional explosives or chemical weapons—constitute an especially promising domain in which to search for destructive applications. The better we manage to focus our defensive deliberations on our greatest vulnerabilities, the more useful our conclusions may be to a potential adversary.

It seems that Szilard wanted to highlight how bad cobalt bombs would be, that no one had recognised - or at least not acted on - the possibility of such bombs until he tried to raise awareness of them, and that since he did so there may have been multiple government attempts to develop such bombs.

I was a little surprised that Ord didn't discuss the potential information hazards angle of this example, especially as he discusses a similar example with regards to Japanese bioweapons in WWII elsewhere in the book.

I was also surprised by the fact that it was Szilard who took this action. This is because one of the main things I know Szilard for is being arguably one of the earliest (the earliest?) examples of a scientist bucking standard openness norms due to, basically, concerns of information hazards potentially severe enough to pose global catastrophic risks. E.g., a report by MIRI/Katja Grace states:

Leó Szilárd patented the nuclear chain reaction in 1934. He then asked the British War Office to hold the patent in secret, to prevent the Germans from creating nuclear weapons (Section 2.1). After the discovery of fission in 1938, Szilárd tried to convince other physicists to keep their discoveries secret, with limited success.

Comment by michaela on Finding equilibrium in a difficult time · 2020-03-19T09:44:47.432Z · score: 2 (2 votes) · EA · GW

This is great.

Highlight for me:

The work EAs do is still important. Donations are still important. All the problems we’ve been working on are still here, and will still be here when this is over. 

I do know this, on an intellectual level, but I need to remind my emotions about it now and again at the moment. So that was a welcome paragraph.

Highlight I immediately sent to my partner:

If I really sang “Happy birthday” twice every time I washed my hands, I’d probably lose my mind. There are many lists of alternative songs with roughly 20-second choruses including “Jolene” and “Love Shack.”

I expect both of those will be on heavy internal rotation for her for the next little while.

Comment by michaela on MichaelA's Shortform · 2020-03-19T06:50:21.091Z · score: 4 (3 votes) · EA · GW

Some more definitions, from or quoted in 80k's profile on reducing global catastrophic biological risks

Gregory Lewis, in that profile itself:

Global catastrophic risks (GCRs) are roughly defined as risks that threaten great worldwide damage to human welfare, and place the long-term trajectory of humankind in jeopardy. Existential risks are the most extreme members of this class.

Open Philanthropy Project:

[W]e use the term “global catastrophic risks” to refer to risks that could be globally destabilising enough to permanently worsen humanity’s future or lead to human extinction.

Schoch-Spana et al. (2017), on GCBRs, rather than GCRs as a whole:

The Johns Hopkins Center for Health Security's working definition of global catastrophic biological risks (GCBRs): those events in which biological agents—whether naturally emerging or reemerging, deliberately created and released, or laboratory engineered and escaped—could lead to sudden, extraordinary, widespread disaster beyond the collective capability of national and international governments and the private sector to control. If unchecked, GCBRs would lead to great suffering, loss of life, and sustained damage to national governments, international relationships, economies, societal stability, or global security.

Comment by michaela on Should recent events make us more or less concerned about biorisk? · 2020-03-19T06:24:35.751Z · score: 3 (2 votes) · EA · GW

I think there are sort-of four subquestions here:

1. Do these events provide evidence that we should've been more worried all along about pandemics in general (not necessarily from a longtermist/x-risk perspective)?

2. Do these events provide evidence that we should've been more worried all along about existential risk from pandemics?

3. Do these events increase the actual risk from future pandemics in general (not necessarily from a longtermist/x-risk perspective)?

4. Do these events increase the actual existential risk from future pandemics?

With that in mind, here are my wild speculations as to the answers, informed by very little actual expertise.

I'm fairly confident the answer to 3 is no. It seems quite likely to me that these events will at least somewhat decrease the actual risk from future pandemics in general, because of the "warning shot" effect you mention.

I think 4 is a very interesting question. I would guess that there's enough overlap between what's good for pandemics in general and what's good for existential risks from pandemics that these events will reduce those risks, again due to the "warning shot" effect.

I would also guess that we'll see something more like resources being added to the pool of pandemic preparedness, rather than resources being taken away from longtermist-style pandemic preparedness in order to fuel more "small scale" (by x-risk standards) or "short term" pandemic preparedness. This is partly informed by my second-hand impression that there currently aren't many resources in specifically longtermist-style pandemic preparedness anyway (to the extent that the two categories are even separate).

But I could imagine being wrong about all of that.

I think the answers to 1 and 2 depend on what you previously believed. I think for most people, the answer to both should be "yes" - most people seemed to have very much dismissed, or mostly just not thought about, risks from pandemics, so a very real example seems likely to remind them that things that don't usually happen really do happen sometimes.

But it seems to me that what we're seeing here is remarkably like what I've been hearing from EAs, longtermists, and biorisk people since I got into EA, from various podcasts and articles and conversations. So for these people, it might not be "new evidence", just something that fits with their existing models (which doesn't mean they expected precisely this to happen at precisely this point).

Comment by michaela on Will the coronavirus pandemic advance or hinder the spread of longtermist-style values/thinking? · 2020-03-19T06:11:35.147Z · score: 5 (5 votes) · EA · GW

Very speculative and anecdotal

I think I personally find myself emotionally tugged away from longtermism a little by these events. When there's so much destruction happening "right before my eyes" and in a short enough future that it can really emotionally resonate, it's like on some level my brain/emotions are telling me "How could you be worried about AI risk or a future bioengineered pandemic at a time like this! There are people dying right now. This already is a catastrophe!"

And it's slightly hard to feed into my emotions the fact that a very different scale of catastrophe, and a much more permanent type, could still possibly happen at some point. (Again, I'm not dismissing that the current pandemic really is a catastrophe, and I do believe it makes sense to reallocate substantial effort to it right now.)

On the other hand, this pandemic also seems to validate various things longtermists have been saying for a while, such as about how civilization is perhaps more fragile than people imagine, how we need to improve the speed at which we can develop vaccines, etc. And it provides an emotionally powerful reminder of just how bad and real a catastrophe can be, which might make it easier for people to feel how bad it is that we could have a catastrophe that's even worse, and that in fact destroys civilization as a whole.

I think I'd tentatively guess that this pandemic will make the general public slightly more "longtermist" in their values in general. I'd also guess that it'll make the general public substantially more in favour - for present-focused reasons - of things that also happen to be good from a longtermist perspective (e.g., increased spending on future pandemic preparedness in general).

But I'm not sure how it'll affect people who are already quite longtermist. From my sample size of 1 (myself), it seems it won't really change behaviours, but will slightly reduce the emotional resonance of longtermism right now (as opposed to just general focus on GCRs).

Comment by michaela on Should recent events make us more or less concerned about biorisk? · 2020-03-19T05:57:29.588Z · score: 4 (4 votes) · EA · GW

(Come to think of it, putting some thought now into how to mobilise those forces to avert the next pandemic is probably warranted, since I think there's a pretty good chance all that energy dissipates without much to show for it within a few years of this pandemic ending.)

I agree with this. I generally suspect it's important to give people "things to do" when they're currently riled up/inspired/motivated about something, and that in the absence of things to do they'll just gradually revert to their prior sets of interests and focuses (or those of the people they're around). I suspect it would be very valuable for people to currently think of concrete things that a wide range of people (not just biorisk experts) can productively do in relation to biorisk after this pandemic has been handled, and be ready to spread the word about those things during and right after the pandemic, so we can capitalise on the momentum.

(I have no firm data or expertise to back this view up.)

Comment by michaela on What are some 1:1 meetings you'd like to arrange, and how can people find you? · 2020-03-18T16:31:40.087Z · score: 6 (5 votes) · EA · GW

Who are you?

I'm Michael Aird. I'm a researcher/writer with the existential risk strategy group Convergence Analysis. Before that, I studied and published a paper in psychology, taught at a high school, and won a stand-up comedy award which ~30 people in the entire world would've heard of (a Golden Doustie, if you must know).

People can talk to me about

  • Things related to topics I've written about, such as:
  • I might be able to help you think about what longtermism-related research, career options, etc. to pursue, based on my extended hunt. But I'm pretty new to the area myself.
  • EA clubs/events/outreach in schools. I don't do this anymore, but could share resources or tips from when I did.

I'd like to talk to other people about

  • Pretty much any topic I've written about!
  • How to evaluate or predict the impacts of longtermist/x-risk-related interventions
  • Relatedly, theory of change for research or research organisations (especially longtermist and/or relatively abstract research)
  • Feedback on my work - about anything from minor style points to entire approaches or topic choices
  • Other topics people think it'd be useful for me to learn about, research, and/or write about

How to get in touch:

Send me a message here, or email me at michaeljamesaird at gmail dot com, and we can arrange a Hangouts meeting if you wish.

Comment by michaela on Virtual EA Global: News and updates from CEA · 2020-03-18T16:15:42.466Z · score: 1 (1 votes) · EA · GW

Sounds great, and thanks for the response!

Comment by michaela on What are some 1:1 meetings you'd like to arrange, and how can people find you? · 2020-03-18T15:54:01.557Z · score: 1 (1 votes) · EA · GW

Potentially there could be both. And if people enter themselves in the master directory, they could get a pop up informing them of their local group's directory or way of networking, and could be asked if they're happy for their info to be automatically added there as well. And vice versa if they add their info to the local group's directory (or equivalent), as long as the local group's approach involves people adding their info in some way.

That way both the centralised and local versions could grow together, but people would still have the choice to just be involved in one or the other if they prefer.

Just a thought - not sure how easy/useful it'd be to actually institute.

Personally, I think I'd benefit from and appreciate something like that system, if someone else put in the work to make it happen :D

Comment by michaela on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T15:49:09.913Z · score: 1 (1 votes) · EA · GW

What are your thoughts on how to evaluate or predict the impact of longtermist/x-risk interventions, or specifically of efforts to generate and spread insights on these matters? E.g., how do you think about decisions like which medium to write in and whether to focus on generating ideas vs publicising ideas vs fundraising?

Comment by michaela on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T15:46:33.295Z · score: 1 (1 votes) · EA · GW

How would your views change (if at all) if you thought it was likely that there are intelligent beings elsewhere in the universe that "are responsive to moral reasons and moral argument" (quote from your book)? Or if you thought it's likely that, if humans suffer an existential catastrophe, other such beings would evolve on Earth later, with enough time to potentially colonise the stars?

Do your thoughts on these matters depend somewhat on your thoughts on moral realism vs antirealism/subjectivism?

Comment by michaela on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T15:41:35.948Z · score: 1 (1 votes) · EA · GW

You break down a "grand strategy for humanity" into reaching existential security, the long reflection, and then actually achieving our potential. I like this, and think it would be a good strategy for most risks.

But do you worry that we might not get a chance for a long reflection before having to "lock in" certain things to reach existential security?

For example, perhaps to reach existential security given a vulnerable world, we put in place "greatly amplified capacities for preventive policing and global governance" (Bostrom), and this somehow prevents a long reflection - either through permanent totalitarianism or just through something like locking in extreme norms of caution and stifling of free thought. Or perhaps in order to avoid disastrously misaligned AI systems, we have to make certain choices that are hard to reverse later, so we have to have at least some idea up-front of what we should ultimately choose to value.

(I've only started the book; this may well be addressed there already.)

Comment by michaela on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T15:29:53.982Z · score: 2 (2 votes) · EA · GW

You seem fairly confident that we are at "the precipice", or "a uniquely important time in our story". This seems very plausible to me. But how long of a period are you imagining for the precipice?

The claim is much stronger if you mean something like a century than if you mean something like a few millennia. But even if the "hingey" period is a few millennia, I imagine that our being somewhere in it could still be quite an important fact.

(This might be answered past chapter 1 of the book.)

Comment by michaela on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T15:25:19.289Z · score: 3 (2 votes) · EA · GW

In your book, you define an existential catastrophe as "the destruction of humanity's longterm potential". Would defining it instead as "the destruction of the vast majority of the longterm potential for value in the universe" capture the concept you wish to refer to? Would it perhaps capture that concept in a slightly more technically accurate/explicit way, at the cost of being less accessible or emotionally resonant?

I wonder this partly because you write:

It is not that I think only humans count. Instead, it is that humans are the only beings we know of that are responsive to moral reasons and moral argument - the beings who can examine the world and decide to do what is best. If we fail, that upwards force, that capacity to push towards what is best or what is just, will vanish from the world.

It also seems to me that "the destruction of the vast majority of the longterm potential for value in the universe" would be meaningfully closer to what I'm really interested in avoiding than "the destruction of humanity's potential", if/when AGI, aliens, or other intelligent life evolving on Earth becomes (or is predicted to become) an important shaper of events (either now or in the distant future).

Comment by michaela on Virtual EA Global: News and updates from CEA · 2020-03-18T15:12:53.774Z · score: 15 (8 votes) · EA · GW

Great thinking, and impressive work putting this together so quickly!

The agenda for the broadcast is here. A live host will provide updates and commentary during a varied programme of pre-recorded videos.

That made me wonder whether the videos are pre-recorded videos from past events, or new videos that were pre-recorded rather than live-streamed. But the agenda seems to indicate neither, as the titles are unfamiliar to me and the sessions are listed as live streams. Are they indeed pre-recorded?

Also, does the nature of this event mean the videos are likely to be on YouTube sooner than normal?

Comment by michaela on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T14:47:01.482Z · score: 1 (1 votes) · EA · GW

Good question!

Just a thought: Assuming this question is intended to essentially be about natural vs anthropogenic risks, rather than also comparing against other things like animal welfare and global poverty, it might be simplest to instead ask: "Is there any specific natural existential risk that is significant enough that more than 1% of longtermist [or "existential risk focused"] resources should be devoted to it? 0.1%? 0.01%?"

Comment by michaela on Suggestion: EAs should post more summaries and collections · 2020-03-16T07:20:41.130Z · score: 2 (2 votes) · EA · GW

Yeah, I think that'd be another great thing for people to do. Although I'd add that that might be especially valuable when there's no existing non-EA lit review on the topic (e.g., when the topic is somewhat obscure, or you're taking a particular angle on a topic that EAs are unusually interested in).

E.g., perhaps there's already a fairly good lit review on hiring practices in general, and you could add some value by writing a more summarised or updated version. But you might also be able to capture a lot of that value just by making a link post to that review on the forum and noting/summarising a few key points. Meanwhile, there might be no lit review that's focused on hiring practices for start-up-style nonprofits in particular, so writing that might be especially worthwhile (if that's roughly the subtopic/angle you were interested in anyway).

On the other hand, I think I would've guessed that the topic "how to create a highly impactful/disruptive research team" would be quite far from obscure, and would already have a solid, up-to-date lit review covering what EA people would want to know. But this post suggests there wasn't an existing lit review with that particular angle, and the post seemed quite interesting and useful to me. So there are probably more "gaps" than I would naively expect, and thus substantial value in lit reviews on certain topics I would naively expect are already "covered".

Comment by michaela on Toby Ord’s ‘The Precipice’ is published! · 2020-03-12T16:49:44.669Z · score: 1 (1 votes) · EA · GW

Thanks. Yes, I'll get the ebook then.

Comment by michaela on Toby Ord’s ‘The Precipice’ is published! · 2020-03-11T07:56:47.073Z · score: 6 (5 votes) · EA · GW

Excited to get stuck into this!

I generally prefer audiobooks, but on 80k Toby mentioned that about half of the book is interesting footnotes and appendices. Will the audiobook version have all of that? And how would it work (e.g., are all the footnotes just read at the end, or read alongside the relevant part of the main text)?

Comment by michaela on Causal diagrams of the paths to existential catastrophe · 2020-03-10T14:39:44.238Z · score: 1 (1 votes) · EA · GW

Hmm, I'm not sure I fully understand what you mean. But hopefully the following somewhat addresses it:

One possibility is that two different researchers might have different ideas of what the relevant causal pathways actually are. For a simple example, one researcher might not think of the possibility that a risk could progress right from the initial harms to the existential catastrophe, without a period of civilizational collapse first, or might think of that but dismiss it as not even worth considering because it seems so unlikely. A different researcher might think that that path is indeed worth considering.

If either of the researchers tried to make an explicit causal diagram of how they think the risk could lead to existential catastrophe, the other one would probably notice that their own thoughts on the matter differ. This would likely help them see where the differences in their views lie, and the researcher who'd neglected that path might immediately say "Oh, good point, hadn't thought of that!", or they might discuss why that seems worth considering to one of the researchers but not to the other.

(I would guess that in practice this would typically occur for less obvious paths than that, such as specific paths that can lead to or prevent the development/spread of certain types of information.)

Another possibility is that two different researchers have essentially the same idea of what the relevant causal pathways are, but very different ideas of the probabilities of progression from certain steps to other steps. In that case, merely drawing these diagrams, in the way they're shown in this post, wouldn't be sufficient for them to spot why their views differ.

But having the diagrams in front of them could help them talk through how likely they think each particular path or step is. Or they could each assign an actual probability to each path or step. Either way, they should then be able to see why and where their views differ.
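
For instance, here's a minimal sketch of what "assigning an actual probability to each path or step" and then comparing could look like in practice. The stage names and all the numbers below are hypothetical and purely illustrative - they aren't estimates of any actual risk, and they're not taken from the original post:

```python
# Purely illustrative: two researchers assign probabilities of progression to
# the same chain of steps for a hypothetical risk. All numbers are made up.
researcher_a = {
    "hazardous info developed": 0.10,
    "info implemented": 0.30,
    "initial harms occur": 0.20,
    "civilizational collapse": 0.15,
    "existential catastrophe": 0.05,
}
researcher_b = dict(researcher_a)
researcher_b["civilizational collapse"] = 0.60  # the one step they disagree on

def overall_estimate(step_probs):
    """Multiply the per-step probabilities to get the chance of progressing
    through every step of the chain."""
    result = 1.0
    for p in step_probs.values():
        result *= p
    return result

print(f"Researcher A's overall estimate: {overall_estimate(researcher_a):.2e}")
print(f"Researcher B's overall estimate: {overall_estimate(researcher_b):.2e}")

# Comparing step by step makes the source of the disagreement explicit.
for step in researcher_a:
    if researcher_a[step] != researcher_b[step]:
        print(f"They differ on '{step}': {researcher_a[step]} vs {researcher_b[step]}")
```

The point isn't the arithmetic, which is trivial; it's that writing the steps and numbers down forces the disagreement to surface at a specific step, rather than staying hidden inside two differing bottom-line estimates.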

In all of these cases, ideally, the researchers would go beyond just noticing where their views differ and instead discuss why each of them believes what they believe about the point on which they differ.

Does that answer your question?

If by "how to compare them" you mean "how to tell which one is better", then that's something that this tool alone can't do. But by facilitating clear, explicit thought and discussion, this tool could potentially help well-informed people form views about which diagrams/models are more valid or useful.

Comment by michaela on Where to find EA-related videos · 2020-03-10T12:08:14.880Z · score: 2 (2 votes) · EA · GW

Thanks! Added ACE and J-PAL to the list :)

I didn't add Econimate because a brief look suggests it's not especially EA- or extreme-poverty-related (e.g., not necessarily more so than MRU's playlist on development economics, which I also didn't include). But it does look interesting and fairly relevant to various EA-type interests, so still cool that you commented it here!

Comment by michaela on Causal diagrams of the paths to existential catastrophe · 2020-03-09T16:28:37.696Z · score: 3 (2 votes) · EA · GW

Thanks! I hope so.

By "comparing different positions on a specific existential risk", it seems to me that you could mean either:

1. Comparing what different "stages" of a specific risk would be like

  • e.g., comparing what it'd be like if we're at the "implementation of hazardous information" vs the "harmful events" stage of an engineered pathogen risk

2. Comparing different people's views on what stage a specific risk is currently at

  • e.g., identifying that one person believes the information required to develop an engineered pathogen just hasn't been developed, while another believes that it's been developed but has yet to be shared or implemented

3. Comparing different people's views on a specific risk more generally

  • e.g., identifying that two people roughly agree on the chances an engineered pathogen could be developed, but disagree on how likely it is that it'd be implemented or that a resulting outbreak would result in collapse/extinction, and that's why they overall disagree about the risk levels
  • e.g., identifying that two people roughly agree on the overall risks from an engineered pandemic, but this obscures the fact that they disagree, in ways that roughly cancel out, on the probabilities of progression from each stage to the next stage. This could be important because it could help them understand why they advocate for different interventions.

(Note that I just randomly chose to go with pathogen examples here - as I say in the post, these diagrams can be used for a wide range of risks.)

I think that, if these diagrams can be useful at all (which I hope they can!), they can be useful for 1 and 3. And I think perhaps you had 3 in mind, as that's perhaps most similar to what the state space model you linked to accomplishes. (I'd guess these models could also be useful for 2, but I'm not sure how often informed people would have meaningful disagreements about what stage a specific risk is currently at.)

Hopefully my examples already make it somewhat clear why I think that these diagrams could help with 1 and 3, and why that's important. Basically, I think most things that help people make more of their thinking explicit, or that prompt/force them to do so, will help them identify precisely where they agree and disagree with each other. (I think this also applies to stating one's probabilities/credences explicitly, as I sort of allude to in passing in a few places here.)

Another way to put that is that these things help or prompt people to "factor out" the various inputs into their bottom-line conclusions, so we can more easily point to the inputs that seem most uncertain or contestable, or conversely realise "Oh, that's actually a great point - I should add that to my own internal model of the situation". I think visualisation also generally makes that sort of thing easier and more effective.

And I think these diagrams can also work as "a summary of all causal pathways to this risk", if I'm interpreting you correctly. For example, you could further flesh out my final diagram from this post (not the Defence in Depth diagram) to represent basically all the major causal pathways to existential catastrophes from bioengineering. And then you could also have people assign probabilities to moving from each stage to each other stage it connects to, or even contest which connections are shown (e.g., suggest how one step could "bypass" what I showed it as connecting to in order to connect to later steps). And then they could debate these things.

(But as I say in the post, I think if we wanted to get quite granular, we'd probably want to ultimately use something like Guesstimate. And there are also various other models and approaches we could use to complement these diagrams.)
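
To make the "assign probabilities to moving from each stage to each other stage it connects to" idea slightly more concrete, here's a rough sketch with a deliberately simplified two-path structure and entirely made-up numbers (a more careful version would presumably live in something like Guesstimate, as noted above):

```python
# Entirely illustrative numbers for a simplified two-path diagram:
# initial harms -> civilizational collapse -> existential catastrophe, plus a
# "bypass" route straight from initial harms to existential catastrophe.
p_initial_harms = 0.02               # P(initial harms occur)
p_collapse_given_harms = 0.10        # P(collapse | initial harms)
p_catastrophe_given_collapse = 0.30  # P(existential catastrophe | collapse)
p_bypass_given_harms = 0.01          # P(catastrophe with no prior collapse | initial harms)

# Treating the two routes as mutually exclusive given initial harms,
# the total risk is the sum of the probabilities along each path.
via_collapse = p_initial_harms * p_collapse_given_harms * p_catastrophe_given_collapse
via_bypass = p_initial_harms * p_bypass_given_harms

print(f"Risk via collapse: {via_collapse:.6f}")
print(f"Risk via bypass:   {via_bypass:.6f}")
print(f"Total risk:        {via_collapse + via_bypass:.6f}")
```

Again, the value is less in the numbers themselves than in making each connection explicit - including whether the "bypass" connection belongs in the diagram at all - so that people can see exactly which input is driving a disagreement.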

Comment by michaela on A List of Things For People To Do · 2020-03-09T10:42:15.011Z · score: 1 (1 votes) · EA · GW

Thanks for making this list!

An additional set of suggestions I've just made, a year and a day after your post, is to:

  • Make a link post to a summary of useful ideas/topics, if a decent summary exists but isn’t already on the EA Forum or LessWrong
  • Write a summary of the idea/topic themselves, if no decent summary exists at all, or one exists but doesn't quite capture how EAs want to use that thing
  • Post a "collection" of quotes, sources, definitions, terms, etc. on a useful idea/topic.

I suggest this particularly for people who are just learning about or researching a topic anyway, and happen to spot a gap. Collections in particular can be very quick to do, in that case. And I think that both summaries and collections can make the road easier for those who follow.

Posting such summaries and collections is more of an incidental extra thing people can do than a thing people could primarily focus on. But it could perhaps serve a similar role to something like occasional volunteering.