Existential Risk and Economic Growth

post by leopold · 2019-09-03T13:23:35.536Z · score: 113 (42 votes) · EA · GW · 19 comments

As a summer research fellow at FHI, I’ve been working on using economic theory to better understand the relationship between economic growth and existential risk. I’ve finished a preliminary draft; see below. I would be very interested in hearing your thoughts and feedback!

Draft: leopoldaschenbrenner.com/xriskandgrowth

Abstract:
Technological innovation can create or mitigate risks of catastrophes—such as nuclear war, extreme climate change, or powerful artificial intelligence run amok—that could imperil human civilization. What is the relationship between economic growth and these existential risks? In a model of endogenous and directed technical change, with moderate parameters, existential risk follows a Kuznets-style inverted U-shape. This suggests we could be living in a unique “time of perils,” having developed technologies advanced enough to threaten our permanent destruction, but not having grown wealthy enough yet to be willing to spend much on safety. Accelerating growth during this “time of perils” initially increases risk, but improves the chances of humanity's survival in the long run. Conversely, even short-term stagnation could substantially curtail the future of humanity. Nevertheless, if the scale effect of existential risk is large and the returns to research diminish rapidly, it may be impossible to avert an eventual existential catastrophe.

19 comments

Comments sorted by top scores.

comment by Max_Daniel · 2019-09-03T13:40:34.799Z · score: 33 (13 votes) · EA(p) · GW(p)

I thought this was one of the most exciting pieces of research I've seen in the last few years. It also makes me really eager to see GPI hopefully making more work in a similar vein happen.

[Disclaimer: I co-organized the summer research fellowship, as part of which Leopold worked on this research, though I didn't supervise him.]

comment by trammell · 2019-09-03T14:16:19.470Z · score: 27 (12 votes) · EA(p) · GW(p)

As the one who supervised him, I too think it's a super exciting and useful piece of research! :)

I also like that its setup suggests a number of relatively straightforward extensions for other people to work on. Three examples:

  • Comparing (1) the value of an increase to B (e.g. a philanthropist investing / subsidizing investment in safety research) and (2) the value of improved international coordination (moving to the "global impatient optimum" from a "decentralized allocation" of x-risk mitigation spending at, say, the country level) to (3) a shock to growth and (4) a shock to the "rate of pure time preference" on which society chooses to invest in safety technology. (The paper currently just compares (3) and (4).)
  • Seeing what happens when you replace the N^(epsilon - beta) term in the hazard function with population raised to a new exponent, say N^(mu), to allow for some risky activities and/or safety measures whose contribution to existential risk depends not on the total spent on them but on the amount per capita spent on them, or something in between (a schematic version is sketched after this list).
  • Seeing what happens when you use a more general population growth function.
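
A schematic way to write the second generalization (this is my notation, not necessarily the paper's exact specification of the hazard function):

```latex
% Hazard rate with a generalized population-scale exponent \mu
% (schematic; the paper's exact functional form may differ).
\delta_t \;=\; \bar{\delta}\, c_t^{\epsilon}\, h_t^{-\beta}\, N_t^{\epsilon-\beta}
\quad\longrightarrow\quad
\delta_t \;=\; \bar{\delta}\, c_t^{\epsilon}\, h_t^{-\beta}\, N_t^{\mu},
```

where c_t and h_t denote per-capita consumption and safety spending. Setting mu = epsilon - beta recovers the original scale effect, mu = 0 makes risk depend only on per-capita quantities, and intermediate values interpolate between the two.
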
comment by Benjamin_Todd · 2019-09-26T06:38:42.118Z · score: 15 (8 votes) · EA(p) · GW(p)

Yes, great paper and exciting work. Here are some further questions I'd be interested in (apologies if they result from misunderstanding the paper - I've only skimmed it once).

1) I'd love to see more work on Phil's first bullet point above.

Would you guess that, due to the global public good problem and impatience, people with a low rate of pure time preference will generally believe society is a long way from the optimal allocation to safety, and therefore that increasing investment in safety is currently much higher impact than increasing growth?


2) What would the impact of uncertainty about the parameters be? Should we act as if we're generally in the eta > beta (but not much greater) regime, since that's where altruists could have the most impact?


3) You look at the chance of humanity surviving indefinitely - but don't we care more about something like the expected number of lives?

Might we be in the eta >> beta regime but humanity still have a long future in expectation (i.e. tens of millions of years rather than billions)? It might then still be very valuable to further extend the lifetime of civilisation, even if extinction is ultimately inevitable.

Or are there regimes where focusing on helping people in the short-term is the best thing to do?

Would looking at expected lifetime rather than probability of making it have other impacts on the conclusions? e.g. I could imagine it might be worth trading acceleration for a small increase in risk, so long as it allows more people to live in the interim in expectation.
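
One way to make question 3 precise (a standard survival-analysis formulation, not notation taken from the paper): with hazard rate delta_t and population N_t, the probability of surviving to time t and the expected number of future life-years are

```latex
% Survival probability and expected life-years given a hazard path \delta_t
% (standard formulation; not notation from the paper).
S(t) \;=\; \exp\!\Big(-\!\int_0^t \delta_s \, ds\Big),
\qquad
\mathbb{E}[\text{life-years}] \;=\; \int_0^\infty N_t\, S(t)\, dt .
```

Even when S(t) goes to zero as t goes to infinity (as in the eta >> beta regime), this integral can still be large, and interventions that lower delta_t only temporarily can still raise it substantially.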



comment by leopold · 2019-10-26T15:25:17.309Z · score: 8 (2 votes) · EA(p) · GW(p)

Hi Ben, thanks for your kind words, and so sorry for the delayed response. Thanks for your questions!

  1. Yes, this could definitely be the case. In terms of what the most effective intervention is, I don’t know. I agree that more work on this would be beneficial. One important consideration would be what intervention has the potential to raise the level of safety in the long run. Safety spending might only lead to a transitory increase in safety, or it could enable R&D that improves the level of safety in the long run. In the model, even slightly faster growth for a year means people are richer going forward forever, which in turn means people are willing to spend more on safety forever.

  2. At least in terms of thinking about the impact of faster/slower growth, it seemed like the eta > beta case was the one we should focus on as you say (and this is what I do in the paper). When eta < beta, growth was unambiguously good; when eta >> beta, existential catastrophe was inevitable.

  3. In terms of expected number of lives, it seems like the worlds in which humanity survives for a very long time are dramatically more valuable than any world in which existential catastrophe is inevitable. Nevertheless, I want to think more about potential cases where existential catastrophe might be inevitable, but there could still be a decently long future ahead. In particular, if we think humanity’s “growth mode” might change at some stage in the future, the relevant consideration might be the probability of reaching that stage, which could change the conclusions.

comment by Benjamin_Todd · 2019-11-05T00:10:09.118Z · score: 2 (1 votes) · EA(p) · GW(p)

Thank you!

comment by leopold · 2019-09-03T22:15:12.353Z · score: 4 (3 votes) · EA(p) · GW(p)

Thank you for your kind words!

comment by riceissa · 2019-11-03T08:59:03.583Z · score: 20 (8 votes) · EA(p) · GW(p)

I read the paper (skipping almost all the math) and Philip Trammell's blog post. I'm not sure I understood the paper, and in any case I'm pretty confused about the topic of how growth influences x-risk, so I want to ask you a bunch of questions:

  1. Why do the time axes in many of the graphs span hundreds of years? In discussions about AI x-risk, I mostly see something like 20-100 years as the relevant timescale in which to act (i.e. by the end of that period, we will either go extinct or else build an aligned AGI and reach a technological singularity). Looking at Figure 7, if we only look ahead 100 years, it seems like the risk of extinction actually goes up in the accelerated growth scenario.

  2. What do you think of Wei Dai's argument [LW(p) · GW(p)] that safe AGI is harder to build than unsafe AGI and we are currently putting less effort into the former, so slower growth gives us more time to do something about AI x-risk (i.e. slower growth is better)?

  3. What do you think of Eliezer Yudkowsky's argument [LW · GW] that work for building an unsafe AGI parallelizes better than work for building a safe AGI, and that unsafe AGI benefits more in expectation from having more computing power than safe AGI, both of which imply that slower growth is better from an AI x-risk viewpoint?

  4. What do you think of Nick Bostrom's urn analogy for technological developments? It seems like in the analogy, faster growth just means pulling out the balls at a faster rate without affecting the probability of pulling out a black ball. In other words, we hit the same amount of risk but everything just happens sooner (i.e. growth is neutral).

  5. Looking at Figure 7, my "story" for why faster growth lowers the probability of extinction is this: The richer people are, the less they value marginal consumption, so the more they value safety (relative to consumption). Faster growth gets us sooner to the point where people are rich and value safety. So faster growth effectively gives society less time in which to mess things up (however, I'm confused about why this happens; see the next point). Does this sound right? If not, I'm wondering if you could give a similar intuitive story.

  6. I am confused why the height of the hazard rate in Figure 7 does not increase in the accelerated growth case. I think equation (7) for the hazard rate might be the cause of this, but I'm not sure. My own intuition says accelerated growth not only condenses along the time axis, but also stretches along the vertical axis (so that the area under the curve is mostly unaffected).

    As an extreme case, suppose growth halted for 1000 years. It seems like in your model, the graph for hazard rate would be constant at some fixed level, accumulating extinction probability during that time. But my intuition says the hazard rate would first drop near zero and then stay constant, because there are no new dangerous technologies being invented. At the opposite extreme, suppose we suddenly get a huge boost in growth and effectively reach "the end of growth" (near period 1800 in Figure 7) in an instant. Your model seems to say that the graph would compress so much that we almost certainly never go extinct, but my intuition says we do experience a lot of risk for extinction. Is my interpretation of your model correct, and if so, could you explain why the height of the hazard rate graph does not increase?

    This reminds me of the question of whether it is better to walk or run in the rain (keeping distance traveled constant). We can imagine a modification where the raindrops are motionless in the air.
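
A toy numerical illustration of the distinction raised in point 6 (this is not the paper's model; the hazard paths and the acceleration factor below are made up purely for illustration): cumulative risk is the time-integral of the hazard rate, so pure time-compression of the hazard path lowers it, whereas compression combined with a proportional vertical rescaling leaves it unchanged.

```python
# Toy illustration (not the paper's model): does acceleration merely compress
# the hazard-rate path along the time axis, or also stretch it vertically?
import numpy as np

t = np.linspace(0, 1000, 100_001)   # years
dt = t[1] - t[0]

def survival_prob(hazard):
    """P(no catastrophe over the horizon) = exp(-integral of the hazard rate)."""
    return np.exp(-hazard.sum() * dt)

# Baseline: an inverted-U ("time of perils") hazard path peaking around year 300.
baseline = 1e-3 * np.exp(-((t - 300) / 150) ** 2)

a = 2.0  # acceleration factor (arbitrary choice for illustration)
compressed = 1e-3 * np.exp(-((a * t - 300) / 150) ** 2)  # same heights, shorter perils period
compressed_and_rescaled = a * compressed                  # heights also scaled up by a

print(f"baseline survival probability:      {survival_prob(baseline):.3f}")
print(f"time-compressed only:               {survival_prob(compressed):.3f}")               # higher
print(f"compressed and vertically rescaled: {survival_prob(compressed_and_rescaled):.3f}")  # ~ baseline
```

In the first variant the area under the hazard curve shrinks and survival improves, which is roughly the behavior shown in Figure 7; in the second the area is preserved and acceleration is neutral, which matches the intuition in point 6 and the urn analogy in point 4.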

comment by zdgroff · 2019-09-04T20:13:00.936Z · score: 14 (8 votes) · EA(p) · GW(p)

I think this is an extremely impressive piece of work in economics proper, not to mention a substantial contribution to longtermism research. Nice going.

comment by leopold · 2019-09-04T23:40:10.334Z · score: 2 (2 votes) · EA(p) · GW(p)

Thanks Zach!

comment by Larks · 2019-09-17T01:26:21.198Z · score: 5 (3 votes) · EA(p) · GW(p)

Thanks very much for writing this, I found it really interesting. I like the way you follow the formalism with many examples.

I have a very simple question, probably due to my misunderstanding - looking at your simulations, you have the fraction of workers and scientists working on consumption going asymptotically to zero, but the terminal growth rate of consumption is positive. Is this a result of consumption economies of scale growing fast enough to offset the decline in worker fraction?

comment by leopold · 2019-09-22T19:34:02.183Z · score: 4 (3 votes) · EA(p) · GW(p)

Thanks!

Regarding your question, yes, you have the right idea. Growth of consumption per capita is growth in consumption technology plus growth in consumption work per capita; thus, while the fraction of workers in the consumption sector declines exponentially, consumption technology grows (due to increasing returns) quickly enough to offset that. This yields positive asymptotic growth of consumption per capita overall (on the specific asymptotic paths you are referring to). Note that the absolute total number of people working on consumption *research* is still increasing on the asymptotic path: while the fraction of scientists in the consumption sector declines exponentially, there is still overall population growth. This yields the asymptotic growth in consumption technology (though this growth is slower than what would be feasible, since scientists are being shifted away from consumption). Does that make sense?
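
A schematic version of that decomposition (my notation, with a simplified linear production technology rather than the paper's exact equations): writing per-capita consumption as consumption technology times the fraction of workers in the consumption sector,

```latex
% Per-capita consumption growth decomposed into technology growth and
% labor-share growth (schematic; not the paper's exact production function).
c_t \;=\; A^{C}_t \, \ell^{C}_t
\quad\Longrightarrow\quad
g_{c} \;=\; g_{A^{C}} + g_{\ell^{C}},
```

so if the consumption labor share declines exponentially at some rate kappa while consumption technology grows faster than kappa, per-capita consumption growth remains positive asymptotically, as described above.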

comment by SoerenMind · 2019-09-04T16:06:29.057Z · score: 4 (3 votes) · EA(p) · GW(p)

This sounds really cool. Will have to read properly later. How would you recommend a time-pressured reader go through this? Are you planning a summary?

comment by trammell · 2019-10-26T15:23:00.005Z · score: 11 (3 votes) · EA(p) · GW(p)

Still no summary of the paper as a whole, but if you're interested, I just wrote a really quick blog post which summarizes one takeaway. https://philiptrammell.com/blog/45

comment by leopold · 2019-09-04T17:54:03.390Z · score: 8 (5 votes) · EA(p) · GW(p)

Thanks. I generally try to explain the intuition of what is going on in the body of the text—I would recommend focusing on that rather than on the exact mathematical formulations. I am not planning to write a summary at the moment, sorry.

comment by Denkenberger · 2019-09-07T06:40:34.211Z · score: 2 (1 votes) · EA(p) · GW(p)

I'm getting site not secure errors on all 4 browsers for the draft. Could you please make it more accessible?

comment by leopold · 2019-09-07T11:24:43.985Z · score: 3 (2 votes) · EA(p) · GW(p)

Sorry to hear that! I’m not sure why it’s doing that—it’s just hosted on Github. Try this direct link: https://leopoldaschenbrenner.github.io/xriskandgrowth/ExistentialRiskAndGrowth050.pdf

comment by Denkenberger · 2019-09-10T05:57:09.079Z · score: 3 (2 votes) · EA(p) · GW(p)

Thanks - it worked!

comment by Donald Hobson · 2019-09-18T21:13:25.765Z · score: 1 (1 votes) · EA(p) · GW(p)

I think that existential risk is still something that most governments aren't taking seriously. If major world governments had a model that contained a substantial probability of doom, there would be a lot more funding. Look at how, during the Cold War, anything and everything that might possibly help got funded. I see this failure to take the risk seriously as being caused by a mix of human psychology and historical coincidence. I would not expect it to apply to all civilizations.

comment by ishi · 2019-09-05T17:18:41.533Z · score: 0 (3 votes) · EA(p) · GW(p)

That's an interesting paper, and the little I skimmed was fairly straightforward if you can get through the dialect or notation, which is standard in economics papers and which I'd call neoclassical. (I got up to about page 20, the discussion of the effects of scientists/workers switching to safety production rather than consumption production.)

This raises a few issues for me. You have probably seen https://arxiv.org/abs/1410.5787. Given debates about other environmental risks, such as GMOs and nuclear energy, it's unclear even what counts as 'safety' or 'precautionary' production versus consumption production. It's also unclear how much 'science can come to the rescue' (discussed in many places, such as AAAS).

There are also behavioral issues: even if your model (like the Kuznets curve) is basically correct and one can calculate 'effectively altruistic' policies, whether they will be supported by the public/government, and whether they will entice scientists and other workers to switch to 'green jobs' (whether technical or, say, organic farming), is a sociopolitical issue.

(It's possible that other sorts of models, or variants of yours using some behavioral data, might be able both to assess the effects of policies, as you do, and to include factors describing the plausibility that they will be adopted. I googled you at Columbia and see you have also studied public opinion spread via Twitter, etc., which gives ideas about the dynamics of behavioral variables. Presumably these are already implicit in your various parameters beta, epsilon, etc. I guess they are also implicit in the discount factors discussed by Nordhaus and others, though they may have their own dynamics rather than being constants.)

A lot of current climate activists (e.g. Extinction Rebellion) promote 'degrowth' and lifestyle change (diet, transport, etc.), partly because they think that may be more important than growth, and because they don't trust that growth will be applied to 'safety' rather than to activities that contribute to AGW risks. Also, many of them don't trust economic models, and many if not most people don't understand them much. (I can only get a rough understanding myself, partly because going through the mathematical details is often beyond my competency and partly because I have other things to do; I'm trying to sketch simpler models that capture the main ideas and might be comprehensible to, and useful for, a wider audience.) As noted, a variant of your model could probably include some of these sociopolitical issues.

Anyway, a thought-provoking paper.