International cooperation as a tool to reduce two existential risks.

post by johl@umich.edu · 2021-04-19T16:51:36.974Z · EA · GW · 4 comments

Contents

    Epistemic Status
    Main Points:
    What is a global public good and why are global public goods usually underprovided?
      Why are GPGs underprovided?
      What disciplines are useful for international cooperation research?
    Aggregator Functions
      What is an aggregator function?
      Aggregator functions applied to two existential risks
    How could international cooperation reduce undiscovered existential risks?
    Importance / Neglectedness / Tractability
      Importance: High
      Neglectedness: Medium
      Tractability: Medium
    Ways I might be wrong
      (1) You believe international cooperation will not be a bottleneck to reducing existential risk.
      (2) You prefer more easily quantifiable and less risky causes to work on and donate to:
      (3) You believe that most moral worth lies in living (rather than future, unborn) people and/or animals
      (4) You believe international cooperation is a slippery slope to totalitarianism
    Recommendations
      In the EA community, I recommend:
      Individuals who want to contribute to this cause could do so in the following ways:
      Goals for an international agreement on AI safety:
      Goals for an international agreement to reduce engineered pandemic risk:
      What would success in international cooperation look like?
    Appendix 1: Further Questions
    Appendix 2: Related EA forum posts and videos
    Appendix 3: Papers / books I have found helpful in learning about international cooperation on existential risk
      Books
      Papers
    Appendix 4: What is the track record of international cooperation?
      Successes:
      Failures:
    Bibliography
    Endnotes

Epistemic Status

Main Points:

What is a global public good and why are global public goods usually underprovided?

Why are GPGs underprovided?

What disciplines are useful for international cooperation research?

Aggregator Functions

What is an aggregator function?

Aggregator functions applied to two existential risks

Unaligned Artificial Intelligence
Engineered Pandemics

Both of these risks also have commonalities with the Type-1 vulnerabilities defined in Bostrom’s Vulnerable World Hypothesis paper[12], since a single actor could cause catastrophic outcomes for large populations. Fortunately, both AGI and engineered pandemics are currently much more difficult to develop than the “easy nukes” in Bostrom’s example.
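To make the aggregator-function idea more concrete, here is a minimal sketch contrasting a summation aggregator with the weakest-link aggregator discussed in (Buchholz and Sandler, 2021). The country names and contribution levels are invented for illustration; the point is that under a weakest-link aggregator, global protection is only as good as the least-protective country's effort.

```python
# Illustrative only: contribution levels are hypothetical, on a 0-1 scale,
# where 1.0 means a country fully implements a given safety/biosecurity measure.
contributions = {"Country A": 0.9, "Country B": 0.8, "Country C": 0.2}

def summation(levels):
    # Summation GPG: total provision is the sum of all contributions
    # (e.g., cumulative emissions reductions).
    return sum(levels)

def weakest_link(levels):
    # Weakest-link GPG: effective provision equals the smallest contribution
    # (e.g., disease eradication, or preventing any single risky AI/EP actor).
    return min(levels)

levels = list(contributions.values())
print(f"Summation aggregator:    {summation(levels):.2f}")
print(f"Weakest-link aggregator: {weakest_link(levels):.2f}")  # 0.20 -- Country C sets the level
```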

How could international cooperation reduce undiscovered existential risks?

I’ll continue to focus primarily on AI/EP risks in this post, in order to highlight the more tangible and actionable benefits from international cooperation. However, I believe the benefits of international cooperation extend far beyond reducing these risks[13], or for that matter, any existential risks we are currently aware of. In (Bostrom, 2019), Bostrom points out that we don’t know how dangerous and accessible future technologies might be[14]. For example, he notes that we were simply lucky that nuclear weapons required a large amount of resources to build; if they could have been built with “a piece of glass, a metal object, and a battery”, their discovery would have spelled global catastrophe. By improving global institutions now, before such technologies are discovered, responses to future risky technologies could be more robust than the uncoordinated actions of ~200 sovereign countries. Since improving international cooperation could reduce future risks larger than even the most dire ones we currently face, the benefits from reducing known existential risks may be just a small fraction of the total value generated by improving international cooperation. For more on the benefits of global governance for yet-unknown existential risks, I refer the reader to (Bostrom, 2019).

Importance / Neglectedness / Tractability

Importance: High

Comparison to other methods of reducing existential risk:

Finally, despite my focus on existential risk in this post, international coordination could also be impactful for non-existential, non-longtermist causes. Improved cooperation on medical research, climate change, and trade could improve the longevity, environment, and wealth of the current generation. Using the 80,000 Hours framework, I assign a 14 to the importance factor, as I believe improvements in global cooperation on AI/EP could reduce existential risk by approximately 0.5-1%. This corresponds to a very high importance relative to other priority causes.

Neglectedness: Medium

How much funding does the area have?
Organizations that research or advocate for international cooperation on AI Safety and Engineered Pandemics:

The total annual budget of the above institutions is approximately $57M[32]. This figure is likely an overestimate, since these organizations are not devoted entirely to international cooperation on AI/EP risks.
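As a rough sketch of how such an estimate could be adjusted downward, one can multiply each organization's budget by a guess at the fraction of its work aimed specifically at international cooperation on AI/EP risks. All figures below are placeholders, not the actual budgets behind the ~$57M total.

```python
# Hypothetical upper-bound annual budgets (USD millions); placeholders only.
budgets_musd = {"Org 1": 25, "Org 2": 15, "Org 3": 10, "Org 4": 5}

# Guessed fraction of each organization's work devoted to international
# cooperation on AI/EP risks (also hypothetical).
relevance = {"Org 1": 0.3, "Org 2": 0.5, "Org 3": 0.2, "Org 4": 0.4}

upper_bound = sum(budgets_musd.values())
adjusted = sum(budgets_musd[org] * relevance[org] for org in budgets_musd)
print(f"Upper bound: ${upper_bound}M; relevance-adjusted estimate: ${adjusted:.0f}M")
```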

Research occurring in this area:
How much attention are governments paying to this area?

Using the 80,000 Hours framework, I assign a 6 to the neglectedness factor. This corresponds to a moderate level of neglectedness relative to other priority causes.
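For readers unfamiliar with the scoring, here is a rough sketch of how I understand the 80,000 Hours framework to combine factor scores: each factor is on a logarithmic scale, so the overall score is simply the sum of the factor scores. The tractability score below is a placeholder, since I only give a qualitative "medium" rating in the next section.

```python
# Scores in the 80,000 Hours problem framework are (roughly) log-scale,
# so the overall score is the sum of the factor scores.
importance = 14      # assigned above: ~0.5-1% reduction in existential risk
neglectedness = 6    # assigned above: moderate neglectedness
tractability = 4     # placeholder -- only a qualitative "medium" rating is given below

total = importance + neglectedness + tractability
print(f"Overall problem score (illustrative): {total}")
```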

Tractability: Medium

Ways I might be wrong

In this section, I discuss why you might not prioritize international cooperation for AI safety and engineered pandemics.

My argument for international cooperation has at least four areas of potential weakness. I believe objection (1) is the most plausible, as there are at least three ways in which it could be true. I am especially uncertain about governments' propensity to cooperate in the absence of further research into, and advocacy for, international cooperation. I am also uncertain whether the relevant governments can monitor either of these risks comprehensively enough to reduce existential risk.

(1) You believe international cooperation will not be a bottleneck to reducing existential risk.

This could be true if:

(a) There are good outcomes in the counterfactual: You believe governments will cooperate regardless of increased research/advocacy into cooperation on AI/EP
(b) You believe regional enforcement capabilities and individual values preclude effective international agreements
(c) The aggregator function is wrong: reductions of these two existential risks are not weakest-link GPGs

(2) You prefer more easily quantifiable and less risky causes to work on and donate to:

(3) You believe that most moral worth lies in living (rather than future, unborn) people and/or animals

(4) You believe international cooperation is a slippery slope to totalitarianism

Recommendations

In the EA community, I recommend:

Individuals who want to contribute to this cause could do so in the following ways:

Below, I include what I view as the key components of any future international agreements specifically on AI/EP safety. Countries that cooperate on these points could recommend trade sanctions against non-cooperators to incentivize participation.

Goals for an international agreement on AI safety:

Goals for an international agreement to reduce engineered pandemic risk:

What would success in international cooperation look like?

I believe international cooperation on existential risk would no longer be neglected, and would soon reach diminishing marginal returns, if events on a scale similar to those below occurred:

Appendix 1: Further Questions

Appendix 3: Papers / books I have found helpful in learning about international cooperation on existential risk

Books

Papers

Appendix 4: What is the track record of international cooperation?

A look into the history of GPGs is helpful in estimating the tractability of improving GPG provision. Roughly, GPG provision efforts have succeeded when (1) the benefits from cooperating clearly outweigh the costs, and (2) every country benefits from cooperation.
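As a toy formalization of these two conditions (the payoff numbers are invented for illustration): condition (1) asks whether aggregate benefits exceed aggregate costs, and condition (2) asks whether each individual country gains from cooperating.

```python
# Hypothetical per-country benefits and costs of joining an agreement
# (arbitrary units; invented for illustration).
countries = {
    "A": {"benefit": 10, "cost": 4},
    "B": {"benefit": 8,  "cost": 3},
    "C": {"benefit": 5,  "cost": 6},   # C loses from cooperating
}

total_benefit = sum(c["benefit"] for c in countries.values())
total_cost = sum(c["cost"] for c in countries.values())

condition_1 = total_benefit > total_cost                                   # global gains exceed costs
condition_2 = all(c["benefit"] > c["cost"] for c in countries.values())    # every country gains

print(f"(1) Benefits clearly outweigh costs: {condition_1}")
print(f"(2) Every country benefits:          {condition_2}")  # False here: C is a potential holdout
```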

Successes:

Montreal Protocol
Eradication of smallpox

Failures:

Bibliography

Endnotes


  1. As defined on p. 3 in (Bostrom, 2001) ↩︎

  2. p. 16 (Reisen, 2004) ↩︎

  3. This example was borrowed from (Barrett, 2007) p.2 ↩︎

  4. (Buchholz and Sandler, 2021) p.10 ↩︎

  5. (Barrett, 2007) p.20 ↩︎

  6. Other aggregator functions are discussed at length in (Buchholz and Sandler, 2021), p. 10-14. ↩︎

  7. See Appendix 4 for further details on the eradication of smallpox. ↩︎

  8. Barrett discusses the details of disease eradication in the context of polio in (Barrett, 2010). ↩︎

  9. (Ord, 2020) p. 167 ↩︎

  10. (Bostrom, 2014) p. 77 ↩︎

  11. (Bostrom, 2014) p.100. This argument cuts both ways: if the first superintelligence is aligned to human values, it could also stop all progress on unaligned AI. ↩︎

  12. (Bostrom, 2019) p. 458 ↩︎

  13. Thanks to Michael Wulfsohn for raising this point. ↩︎

  14. (Bostrom, 2019) p.455 ↩︎

  15. (Ord, 2020) p. 167 ↩︎

  16. Ord estimates the risks of existential catastrophe from AI and engineered pandemics as approximately 10% and 3.3%, respectively, over the next century. (Ord, 2020) p. 167 ↩︎

  17. (Ord, 2020) pp. 96-97 ↩︎

  18. See the prior section: “Aggregator functions applied to two existential risks”, above, for why this path dependence exists. ↩︎

  19. See Recommendations section for example criteria of a “strong level of cooperation” ↩︎

  20. The amount of funding for OpenAI, one research lab working on developing safe AI ↩︎

  21. The approximate number of viewers for the Super Bowl in 2021, typically the most-watched American television event each year. See footnote 22 for further details. ↩︎

  22. $1B could have bought all of the Super Bowl commercials in 2021 (assuming $5M for a 30-second advertisement and 50 minutes of ads), with $500M left over for production costs. Even more cost-effective advertising is probably achievable via Facebook or other targeted advertising. ↩︎

  23. I have not listed organizations that advocate for international coordination on other risks, such as nuclear war or climate change, since the focus of this post is AI/EP risk. ↩︎

  24. Despite its name, NTI also funds programs to reduce biosecurity risk. I excluded the Global Nuclear Policy Program and International Fuel Cycle Strategies line items, to arrive at an upper bound on pandemic response and prevention funding. Funding amounts can be found on p. 31 of (Nuclear Threat Initiative, 2019). ↩︎

  25. I assumed that the Johns Hopkins Bloomberg School of Public Health’s $598M budget (Johns Hopkins, 2021) was allocated to research centers on a pro rata basis, based on the number of faculty in each research center. There are 12 faculty members ranked Senior Scholar or above at the Center for Health Security, and 837 total faculty at the School of Public Health; (12 / 837) * $598M yields a midpoint estimate of approximately $9M. ↩︎

  26. (Partnership on AI, 2019) p. 9 ↩︎

  27. (Future of Life Institute, 2019) ↩︎

  28. (Open Philanthropy, 2017). Used exchange rate of $1.39 per £1. ↩︎

  29. I could not find an exact budget. I estimated these numbers based on the budget of FHI, a similar research center at Oxford. ↩︎

  30. (Leverhulme, 2021) Calculated as 1/10 of the 10 million pound grant from the Leverhulme Trust. See footnote 28 for exchange rates. ↩︎

  31. (Global Catastrophic Risk Institute, 2020), retrieved April 14, 2021 ↩︎

  32. I used the upper bound of my estimates for the Johns Hopkins Center for Health Security and Centre for the Study of Existential Risk budgets. ↩︎

  33. (Ord, 2020) p. 280 ↩︎

  34. (Global Priorities Institute, 2020) p. 43 ↩︎

  35. (Zheng, 2020) ↩︎

  36. (Frischknecht, 2003), p. 2 ↩︎

  37. See Appendix 4 for further details on the eradication of smallpox. ↩︎

  38. (US State Department, 2019), p.47 ↩︎

  39. In the Preventive Policing header in (Bostrom, 2019), Bostrom explores the tradeoff between enforcement efficacy and privacy. While a “high-tech panopticon” probably would not be necessary to reduce AI/EP risks to acceptable levels today, if the means to unleash existential catastrophes come into the hands of many, citizens may choose to trade civil liberties for increased safety. ↩︎

  40. “Such as automated blurring of intimate body parts, and...the option to redact identity-revealing data such as faces and name tags”, (Bostrom, 2019). ↩︎

  41. The Communist Party of China has set a goal to be the global leader in AI by 2030 (O’Meara, 2020). ↩︎

  42. (Yudkowsky, 2008) pp. 333-338. Yudkowsky lists several reasons to invest in “local” efforts, which he defines as actions that require a “concentration of will, talent and funding to overcome a threshold”. Yudkowsky argues that “majoritarian” action (like international cooperation) may be possible, but local action (like technical research) is probably faster and easier. My view is that both majoritarian and local actions should be undertaken to reduce AI risk, especially when there is not common knowledge of all actors’ potentially risky activities. ↩︎

  43. (Ord, 2020) p. 201-202 ↩︎

  44. “if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.” ↩︎

  45. (Bostrom, 2014) pp. 102-106 ↩︎

  46. These final two measures are cited in (Nouri and Chyba, 2008), pp. 463-464. ↩︎

  47. (Wilson, 2013), pp. 351-364 discusses what such a treaty might include. ↩︎

  48. (Barrett, 2007d) pp. 79-82 ↩︎

  49. (Barrett, 2007) p. 52 ↩︎

  50. (MacAskill, 2018) This statement is made at 3 minutes, 3 seconds. ↩︎

  51. See p. 1252 of (Allwood et al., 2014), which defines Annex I countries. ↩︎

  52. (Dessai, 2001) p. 5 ↩︎

4 comments

Comments sorted by top scores.

comment by lukeprog · 2021-04-19T23:25:45.583Z · EA(p) · GW(p)

I think EAs focused on x-risks are typically pretty gung-ho about improving international cooperation and coordination, but it's hard to know what would actually be effective for reducing x-risk, rather than just e.g. writing more papers about how cooperation is desirable. There are a few ideas I'm exploring in the AI governance area, but I'm not sure how valuable and tractable they'll look upon further inspection. If you're curious, some concrete ideas in the AI space are laid out here and here.

Replies from: johl@umich.edu
comment by johl@umich.edu · 2021-04-21T18:02:11.336Z · EA(p) · GW(p)

Great points. I wonder if building awareness of x-risk in the general public (i.e. outside EAs) could help increase tractability and make research papers on cooperation more likely to get put into practice.

I'm curious which ideas you're exploring too. I saw your post on the topic from last year. Reading some of the research linked there has been super helpful!

Thanks for linking these resources too. Looking forward to reading them.

comment by jackva · 2021-04-19T20:45:04.212Z · EA(p) · GW(p)

Interesting post! My colleague Stephen Clare (at Founders Pledge) is currently doing an investigation into this topic, it will be great to exchange.

Replies from: johl@umich.edu
comment by johl@umich.edu · 2021-04-21T14:59:59.972Z · EA(p) · GW(p)

Thank you! Sounds great. DM'd!