What actions would obviously decrease x-risk?

post by reallyeli · 2019-10-06T21:00:24.025Z · score: 22 (12 votes) · EA · GW · No comments

This is a question post.



Consider all the actions that individuals or groups could take tomorrow. Are there any that would "obviously" (meaning: you believe it with high probability, and you expect that belief to be uncontroversial) result in decreased x-risk?

(For example, consider reducing the size of Russia and the US's nuclear stockpiles. I'm curious if this is on the list.)

(I say "individuals or groups" because I am interested in what actions we could take if we all coordinated perfectly. For example, neither Russia nor the US can unilaterally reduce both their stockpiles, and perhaps it would increase x-risk for one of them to lower only its own, but the group consisting of the US and Russian governments could theoretically agree to lower both stockpiles.)


answer by Ben_West · 2019-10-07T17:27:04.515Z · score: 16 (7 votes) · EA(p) · GW(p)

ALLFED and related projects like seed banks seem pretty uncontroversially likely to reduce the risk of human extinction.

comment by Max_Daniel · 2019-10-07T17:57:47.500Z · score: 9 (6 votes) · EA(p) · GW(p)

If "uncontroversially" means something like "we can easily see that their net effect is to reduce extinction risk," then I disagree. To give just two examples, the known availability of alternative foods might decrease the perceived cost of nuclear war, thus making it more likely; and it might instill a sense of false confidence in decision-makers, effectively diverting attention and funding from more effective risk-reduction measures. I'm perhaps willing to believe that, after weighing up all these considerations, we can all agree that the net effect is to reduce risk, but I think this is far from obvious.

comment by Ben_West · 2019-10-09T14:32:28.503Z · score: 3 (2 votes) · EA(p) · GW(p)
effectively diverting attention and funding from more effective risk-reduction measures

Yeah, if you count "may distract from an even better intervention" as a reason why something is "not obviously good", then I think that basically nothing is obviously good. (Which might be true, just pointing out that this criticism seems pretty general.)

comment by Max_Daniel · 2019-10-09T17:14:15.698Z · score: 7 (4 votes) · EA(p) · GW(p)

I agree, and in fact "nothing is obviously good" describes my (tentative) view reasonably well, at least if (i) the bar for 'obviously' is sufficiently high and (ii) 'good' is to be understood as roughly 'maximizing long-term aggregate well-being.'

Depending on the question one is trying to answer, this might not be a useful perspective. However, I think that when our goal is to actually select and carry out an altruistic action, this perspective is the right one: I'd want to simply compare the totality of the (wellbeing-relevant) consequences with the relevant counterfactual (e.g., no action, or another action), and it would seem arbitrary to me to exclude certain effects because they are due to a general or indirect mechanism.

(E.g., suppose for the sake of argument that I'm going to die in a nuclear war that would not have happened in a world without seed banks. I'd think that my death makes the world worse, and I'd want someone deciding about seed banks today to take this disvalue into account; this does not depend on whether the mechanism is that nuclear bombs can be assembled from seeds, or that seed banks have crowded out nuclear deproliferation efforts, or whatever.)

comment by Khorton · 2019-10-09T17:24:33.871Z · score: 2 (1 votes) · EA(p) · GW(p)

I think it's a better idea to first identify ideas that are better than doing nothing - which in itself can be difficult! - and then prioritize those.

I think there are talented people who could be convinced to work on the long term future if they are given a task to do which is uncontroversially better than doing nothing. I agree it's better to prioritize actions than just work on the first one you think of, but starting with a bar of 'optimal' seems too high.

comment by Max_Daniel · 2019-10-09T19:20:10.529Z · score: 6 (4 votes) · EA(p) · GW(p)

I agree. However, your reply makes me think that I didn't explain my view well: I do, in fact, believe that it is not obvious that, say, setting up seed banks is "better than doing nothing" - and more generally, that nothing is obviously better than doing nothing.

I suspect that my appeal to "diverting attention and funding" as a reason for this view might have been confusing. What I had in mind was not an argument about opportunity cost: it's true that an actor who set up a seed bank could perhaps have done better by doing something else instead (say, donating to ALLFED), but that was not my point.

Instead, I was thinking of effects on future decisions (potentially by other actors), as illustrated by the following example:

  • Compare the world in which, at some time t0, some actor A decides to set up a seed bank (say, world w1) with the world w2 in which A decides to do nothing at t0.
  • It could be the case that, in w2, at some later time t1, a different actor B makes a decision that:
    • Causes a reduction in the risk of extinction from nuclear war that is larger than the effect of setting up a seed bank at t0. (This could even be, say, the decision to set up two seed banks.)
    • Happened only because A did not set up a seed bank at t0, and so in particular does not occur in world w1. (Perhaps a journalist in w2 wrote a piece decrying the lack of seed banks, which inspired B - who thus far was planning to become an astronaut - to devote her career to setting up seed banks.)

Of course, this particular example is highly unlikely. And worlds w1 and w2 would differ in lots of other aspects. But I believe considering the example is sufficient to see that extinction risk from nuclear war might be lower in world w2 than in w1, and thus that setting up a seed bank is not obviously better than doing nothing.

answer by G Gordon Worley III · 2019-10-07T19:21:22.580Z · score: 10 (5 votes) · EA(p) · GW(p)

Develop and deploy a system to protect Earth from impacts from large asteroids, etc.

comment by Ben_Harack · 2019-10-18T03:42:22.655Z · score: 14 (7 votes) · EA(p) · GW(p)

While I'm sympathetic to this view (since I held it for much of my life), I have also learned that there are very significant risks to developing this capacity naively.

To my knowledge, one of the first people to talk publicly about this was Carl Sagan, who discussed this in his television show Cosmos (1980), and in these publications:

Harris, A., Canavan, G., Sagan, C. and Ostro, S., 1994. The Deflection Dilemma: Use Vs. Misuse of Technologies for Avoiding Interplanetary Collision Hazards.

Ben's summary:

  • Their primary concern and point is that a system built to defend humanity from natural asteroids would actually expose us to more risk (of anthropogenic origin) than it would mitigate (of natural origin).
  • Opportunities for misuse of the system depend almost solely on its capability to produce delta-v changes in asteroids (equivalently framed as “response time”). A system capable of ~1 m/s of delta-v would see roughly 100 opportunities for misuse for each opportunity to defend Earth from an asteroid.
  • They say that a high capability system (capable of deflection with only a few days notice) would be imprudent to build at this time.

Sagan, C. and Ostro, S.J., 1994. Dangers of asteroid deflection. Nature, 368(6471), p.501.

Sagan, C., 1992. Between enemies. Bulletin of the Atomic Scientists, 48(4), p.24.

Sagan, C. and Ostro, S.J., 1994. Long-range consequences of interplanetary collisions. Issues in Science and Technology, 10(4), pp.67-72.

Two interesting quotes from the last one:

  • “There is no other way known in which a small number of nuclear weapons can destroy global civilization.”
  • “No matter what reassurances are given, the acquisition of such a package of technologies by any nation is bound to raise serious anxieties worldwide.”

More recently, my collaborator Kyle Laskowski and I have reviewed the relevant technologies (and likely incentives) and have come to a somewhat similar position, which I would summarize as: the advent of asteroid manipulation technologies exposes humanity to catastrophic risk; if left ungoverned, these technologies would open the door to existential risk. If governed, this risk can be reduced to essentially zero. (However, other approaches, such as differential technological development and differential engineering projects do not seem capable of entirely closing off this risk. Governance seems to be crucial.)

So, we presented a poster at EAG 2019 SF, Governing the Emerging Risk Posed By Asteroid Manipulation Technologies, where we summarized these ideas. We're currently expanding this into a paper. If anyone is keenly interested in this topic, reach out to us (contact info is on the poster).

comment by MichaelA · 2020-04-08T09:22:47.231Z · score: 2 (2 votes) · EA(p) · GW(p)

You may already be aware of this, and/or the window of relevance may have passed, but just thought I'd mention that Toby Ord discusses a similar matter in The Precipice. He seems to come to roughly similar conclusions to you and to Sagan et al., assuming I'm interpreting everyone correctly.

E.g. he writes:

There is active debate about whether more should be done to develop deflection methods ahead of time. A key problem is that methods for deflecting asteroids away from Earth also make it possible to deflect asteroids towards Earth. This could occur by accident (e.g. while capturing asteroids for mining), or intentionally (e.g. in a war, or in a deliberate attempt to end civilization). Such a self-inflicted asteroid impact is extremely unlikely, yet may still be the bigger risk.

This seems like an interesting and important point, and an example of how important it can be to consider issues like downside risks, the unilateralist’s curse, etc. - perhaps especially in the area of existential risk reduction. And apparently even with what we might see as one of the rare "obviously" good options!

Something I find slightly odd, and that might conflict with yours or Sagan et al.'s views, was that Ord also wrote:

One reason [such a self-inflicted asteroid impact] is unlikely is that several of the deflection methods (such as nuclear explosions) are powerful enough to knock the asteroid off course, but not refined enough to target a particular country with it. For this reason, these might be the best methods to pursue.

I don't really know anything about this area, but it seems strange to hear that the option involving nuclear explosions is the safer one. And I wonder if the increased amount of explosives, development of tech for delivering it to asteroids, etc., could increase risks independently of asteroid-deflection, such as if it can be repurposed for just directly harming countries on Earth. Or perhaps it could reduce the safety benefits we'd get from having colonies on other moons/planets/asteroids/etc.?

Again, though, this is a field I know almost nothing about. And I assume Ord considered these points. Also, obviously there are many nuclear weapons and delivery mechanisms already.

comment by Linch · 2019-10-15T06:01:12.165Z · score: 7 (3 votes) · EA(p) · GW(p)

This is the only answer here I'm moderately confident is correct. A pity the EV is so low!

answer by Khorton · 2019-10-06T22:04:05.419Z · score: 9 (7 votes) · EA(p) · GW(p)

I'd suggest work that would allow vaccines to be developed much more quickly falls into this category - it was mentioned in the 80,000 Hours podcast with Tom Kalil.

"I was able to get some additional funding for this new approach [to develop vaccines more quickly] and my primary motivation for it was, maybe it’ll help in Ebola, but almost certainly if it works it will improve our ability to respond to future emerging infectious diseases, or maybe even a world of engineered pathogens."


comment by Max_Daniel · 2019-10-07T18:03:22.356Z · score: 10 (7 votes) · EA(p) · GW(p)

Again, I disagree this is obvious. Just some ways in which this could be negative:

  • It could turn out that some of the research required for rapid vaccine development can be misused or exacerbate other risk.
  • The availability of rapid vaccine manufacturing methods could lead to a false sense of confidence among decisionmakers, leading to them effectively neglecting other important prevention and mitigation measures against biorisk.

answer by Pablo_Stafforini · 2019-10-07T23:30:11.320Z · score: 8 (6 votes) · EA(p) · GW(p)

In this talk on 'Crucial considerations and wise philanthropy', Nick Bostrom tentatively mentions some actions that appear to be robustly x-risk reducing, including promoting international peace and cooperation, growing the effective altruism movement, and working on solutions to the control problem.

comment by alexlintz · 2019-10-08T17:47:39.680Z · score: 3 (2 votes) · EA(p) · GW(p)

Just to play devil's advocate with some arguments against peace (in a not so well thought out way)... There's a book called 'The Great Leveler' which puts forward the hypothesis that the only times widespread redistribution has happened are after wars. This means that without war we might expect consistently rising inequality. This effect has been due to mass mobilization ('Taxing the Rich' asserts that there has only been mass political willpower to increase redistribution when veterans could claim they had served and deserved compensation) and destruction of capital (in Europe much of the capital was destroyed in WW2 -> massive decrease in inequality; the US less so on both fronts) (haven't read the book though).

Spinning this further, we could be approaching a time where great power war would not have this effect. This is because less labor is required and it would be higher skilled. Perhaps there would be little use for low-skilled grunts in near-future wars (or already). If we also saw less destruction of capital (maybe information warfare is the way of the future?), then we lose the mechanisms which made war a leveller in the past. So we might be in the last era where a great power war (one of the only things we know reduces inequality) would be able to reduce inequality. If inequality continues to increase, we could see suboptimal societal values which could continue on indefinitely and/or cause a large amount of suffering in the medium run. This could also lead to more domestic unrest in the medium run, which would imply a peace-now vs. peace-later trade-off. Depending on how hingey the moment is for the long-term future, it could be better to have peace later.

ALSO, the UN was created post-WW2. Maybe we only have appetite for major international cooperation after nasty wars?

Anyway... Even after considering all that, peace and cooperation is probably good on net, but not as obvious as it may seem.
(Wrote this on mobile, sorry for any errors and lack of having read more than a few pages of the books I cited)

comment by Pablo_Stafforini · 2019-10-08T18:30:02.131Z · score: 7 (5 votes) · EA(p) · GW(p)
ALSO, UN was created post WW2. Maybe we only have appetite for major international cooperation after nasty wars?

This seems like a point worth highlighting, especially vis-à-vis Bostrom's own views about the importance of global governance in 'The Vulnerable World Hypothesis'. It's also worth noting that the League of Nations was created in the aftermath of WW1.

comment by Ben_Harack · 2019-10-18T03:50:48.298Z · score: 8 (5 votes) · EA(p) · GW(p)

This line of inquiry (that rebuilding after wars is quite different from other periods of time) is explored in G. John Ikenberry's After Victory: Institutions, Strategic Restraint, and the Rebuilding of Order After Major Wars. A quick and entertaining summary of the book - and how it has held up since its publication - was written by Ikenberry in 2018: Reflections on After Victory.

comment by Pablo_Stafforini · 2019-10-18T11:50:21.766Z · score: 4 (2 votes) · EA(p) · GW(p)

Thank you for those references!

answer by RomeoStevens · 2019-10-06T23:59:48.192Z · score: 6 (4 votes) · EA(p) · GW(p)

Increasing the ease/decreasing the formality of world leaders talking to each other as per the Red Phone. World leaders mostly getting educated at the same institutions helps enormously with communication as well, though it does increase other marginal risks due to correlated blind spots.

Biorisk mitigation becoming much higher status a field and thus attracting more top talent.

Pakistan not having nukes.

comment by Max_Daniel · 2019-10-07T18:01:17.524Z · score: 9 (6 votes) · EA(p) · GW(p)

Again, I disagree that any of these is obvious:

  • Ease of communication also opens up more opportunities for rash decisions and premature messages, can reduce the time available for decisions, and creates the potential for this infrastructure to be misused by malign actors.
  • Biorisk mitigation being higher status could contribute to making the dangers of bioweapons more widely known among malign actors, thus making it more likely that they will be developed.
  • Pakistan not having nukes would alter the geopolitical situation in South Asia in major ways, with repercussions for the relationships between the major powers India, China, and the US. I find it highly non-obvious what the net effect of this would be.

comment by RomeoStevens · 2019-10-07T18:04:51.202Z · score: 2 (5 votes) · EA(p) · GW(p)

I'd suggest keeping brainstorming and debates about obviousness thresholds separate as the latter discourages people from ideating.

comment by Pablo_Stafforini · 2019-10-07T23:37:28.699Z · score: 8 (5 votes) · EA(p) · GW(p)

Personally, I don't find that skeptical comments like Max's discourage me from ideating. And the suggestion to keep ideation and evaluation separate might discourage the latter, since it's actually not obvious how to operationalize 'keeping separate'.

comment by Khorton · 2019-10-08T21:14:07.731Z · score: 6 (4 votes) · EA(p) · GW(p)

I've previously read a study that suggested evaluation during brainstorming led to fewer ideas - I don't remember where. Personally, I feel less inclined to post when I know someone will tell me my idea is wrong.

Edit: A Harvard Business Review article about brainstorming and 'evaluation anxiety' led me to this article, which I have not been able to read yet.



comment by anonymous_ea · 2019-10-08T23:05:56.829Z · score: 16 (8 votes) · EA(p) · GW(p)

A general comment about this thread rather than a reply to Khorton in particular: The original post didn't suggest that this should be a brainstorming thread, and I didn't interpret it like that. I interpreted it as a question looking for answers that the posters believe, rather than only hypothesis generation/brainstorming.

comment by Larks · 2019-10-09T14:07:48.188Z · score: 4 (2 votes) · EA(p) · GW(p)

When I was studying maths it was made clear to us that some things were obvious, but not obviously obvious. Furthermore, many things I thought were obvious were in fact not obvious, and some were not even true at all!

comment by Max_Daniel · 2019-10-07T23:26:54.101Z · score: 4 (3 votes) · EA(p) · GW(p)

Thanks for this suggestion. I agree that in general brainstorming and debates are best kept separate. I also wouldn't want to discourage anyone from posting an answer to this question - in fact, I'm unusually interested in more answers to this question. I'm not sure if you were saying that you in particular feel discouraged from ideating as a response to seeing my comment, but I'm sorry if so. I'm wondering if you would have liked me to explain why I was expressing my disagreement, and to explicitly say that I value you suggesting answers to the original question (which I do)?

comment by G Gordon Worley III (gworley3) · 2019-10-07T19:19:04.263Z · score: 1 (4 votes) · EA(p) · GW(p)

Further, the OP gives a specific notion of obviousness to use here:

"obviously" (meaning: you believe it with high probability, and you expect that belief to be uncontroversial)

This doesn't leave a lot of room for debate about what is "obvious" unless you want to argue that a person doesn't believe it with high probability and they are wrong about their own belief about how controversial it is.

answer by atlas · 2019-10-07T20:25:20.041Z · score: 2 (2 votes) · EA(p) · GW(p)
  • Most actions that seem to make arms races or war more unlikely, e.g. the world's major powers committing to strengthening international institutions and multilateralism.
  • Any well-connected and well-resourced actor dedicating themselves to research ways to improve decision-making that affects the long-term in large institutions.
  • Everyone in the AI research community taking a few weeks to engage deeply with AI risk arguments.
