'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions

post by Ben_Snodin · 2020-08-18T10:25:02.333Z · EA · GW · 8 comments


  Introduction and preliminary comments
  Conclusions that depend on contentious model assumptions
    Alternative assumptions that lead to different conclusions
  Conclusions that may depend on contentious model assumptions
  Conclusions that we believe are robust
  Objections to the model
    Common objections that we believe are not strong
    Moderately strong / possibly strong objections
    Strongest objections
  Why we think work in this area is useful
  Appendix: the role of population growth


This is the second post in a three-part series summarising, critiquing, and suggesting variants and extensions of Leopold Aschenbrenner’s paper called Existential Risk and Growth. The series is written by me and Alex Holness-Tofts, with input from Phil Trammell (any errors in this post are due to me). Alex summarised the model and results from the paper in the first post [EA · GW].

The main purpose of this post is to help the reader understand how seriously they should take the conclusions from the paper, and to highlight some conclusions that we think are robust.

Key points

Introduction and preliminary comments

In this post we

The primary audience we have in mind is people who are interested in knowing what conclusions can be drawn from the paper, and how robust these conclusions are. The post is written such that, hopefully, non-economists with a range of technical knowledge can understand it.

I wrote this post in consultation with Phil Trammell and Alex Holness-Tofts. “We” in this post refers to our collective judgement.

I should mention that Alex and I don’t have economics backgrounds, although Phil does, and, in addition, Phil supervised the writing of the paper. Ideally, someone who is an expert both in economic growth theory and existential risk would do a really deep analysis of the model presented in the paper, but in the absence of this we feel that giving our thoughts on this is useful.

Note that we are aware of a few minor errors in the current draft of the paper. It seems possible that fixing these will change one or more of the conclusions that follow from the model assumptions. However, for the purposes of this post we’ll assume that this isn’t the case, and address the model conclusions as currently given in the paper (alongside conclusions that aren’t explicitly mentioned in the paper).

Finally, while we are spending a lot of this post emphasising the limitations of the model, we want to stress that we think that the work done in the paper is valuable, and that further work in this area has the potential to be valuable. We discuss our reasons for thinking this in the last section of the post.

Conclusions that depend on contentious model assumptions

Some of the following conclusions depend on the chosen values of the model parameters 𝜀, 𝛽, and 𝛾. To get a feel for these parameters, note that 𝜀 controls how strongly the total production of consumption goods influences existential risk, while 𝛽 controls how strongly the total production of safety goods influences existential risk. For both parameters, a bigger value means a bigger effect. 𝛾 is the constant in the agent’s isoelastic utility function. Loosely speaking, 𝛾 controls how sharply the utility gain from an extra bit of consumption falls off as consumption increases. A bigger 𝛾 means a sharper fall-off. We discuss the role of 𝜀 and 𝛽 in more detail in the next subsection. (The first post in the series [EA · GW] also gives much more detail and context regarding these parameters.)
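To get an intuitive feel for 𝛾, here is a minimal numerical sketch (entirely our own illustration, not code from the paper) of the isoelastic utility function and how the utility gain from extra consumption falls off as 𝛾 grows:

```python
import math

def isoelastic_utility(c, gamma):
    """Isoelastic utility of consumption c, normalised so that the
    gamma -> 1 limit recovers log utility."""
    if gamma == 1.0:
        return math.log(c)
    return (c ** (1.0 - gamma) - 1.0) / (1.0 - gamma)

# Utility gain from one extra unit of consumption at c = 10: a larger gamma
# means a sharper fall-off in the value of additional consumption.
gains = {g: isoelastic_utility(11, g) - isoelastic_utility(10, g)
         for g in (0.5, 1.0, 2.0)}
```

Under this (standard) functional form, the richer the agent already is, the less an extra unit of consumption is worth relative to reducing risk, and a larger 𝛾 strengthens that effect.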

The conclusions we’ll discuss in this section, and which depend on contentious model assumptions, are:

  1. The scale effect (𝜀 vs 𝛽) is centrally important for determining whether we will avert existential catastrophe

    • If we live in a world where 𝜀 > 𝛽 and 𝛾 is small, or where 𝜀 >> 𝛽, existential catastrophe is inevitable
    • Otherwise, i) there’s a chance that existential catastrophe is averted, and ii) there is an existential risk Kuznets curve, which means that history will contain a 'time of perils'. Note that this conclusion is highly relevant for the case for existential risk reduction, which many people in the Effective Altruism community think is a highly valuable cause area.
  2. Empirically, 𝜀 < 𝛽 is unlikely, while 𝜀 > 𝛽 and 𝜀 >> 𝛽 both seem plausible

  3. Unless we're confident that 𝜀 >> 𝛽, we might as well act on the hope that 𝜀 > 𝛽; in that world we might be able to unlock an astronomically valuable future, whereas our future will necessarily be curtailed if 𝜀 >> 𝛽.
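To make the distinction between “inevitable” and “averted with some probability” in conclusion 1 precise: the probability of surviving to time T is exp(−∫₀ᵀ δ_t dt), so catastrophe is averted with positive probability exactly when the cumulative hazard stays bounded as T grows. A toy sketch (our own, with made-up hazard paths):

```python
import math

def survival_probability(hazard_fn, T):
    """Crude discrete approximation of exp(-integral of the hazard rate)."""
    cumulative_hazard = sum(hazard_fn(t) for t in range(1, T + 1))
    return math.exp(-cumulative_hazard)

# A hazard rate decaying like 1/t has a divergent cumulative hazard, so the
# survival probability tends to zero (catastrophe inevitable); one decaying
# like 1/t**2 has a bounded cumulative hazard, so some chance of survival
# remains no matter how large T gets.
doomed = survival_probability(lambda t: 1.0 / t, 100_000)
hopeful = survival_probability(lambda t: 1.0 / t ** 2, 100_000)
```

The model’s question of whether we avert catastrophe is, in this framing, a question about whether the optimal policy makes the hazard rate fall fast enough.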

A reasonable question that you might ask is “how much should the work in this paper move me towards believing these conclusions?”.

A starting point is that if you believe that the assumptions the model makes are an accurate reflection of the world, you should take the conclusions seriously.

Given our own beliefs about the assumptions, we think that these conclusions are not very robust. We think that the work in the paper should move you a bit, but not much, towards believing these conclusions, assuming that you were previously ambivalent towards them.

Alternative assumptions that lead to different conclusions

A key assumption is that the hazard rate is determined by the following equation:

δ_t = δ̄ C_t^𝜀 / S_t^𝛽

where δ_t is the hazard rate at time t, C_t is total consumption production at time t, S_t is total safety production at time t, and 𝜀, 𝛽 and δ̄ are constant model parameters.

This is a vast simplification compared to the complex ways we might imagine the state of the world influencing existential risk in reality. To be clear, this isn’t to say that it might not turn out to be roughly right - we just don’t have much empirical evidence here.

Because of this equation, in the model the values that we choose for 𝜀 and 𝛽 are crucially important for the prospects of humanity’s long-run survival, even assuming that people coordinate perfectly and don’t discount future value.
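A toy numerical sketch (ours, not the paper’s code) of why the scale effect, i.e. the sign of 𝜀 − 𝛽, dominates under the assumed hazard form δ = δ̄·C^𝜀/S^𝛽:

```python
def hazard(C, S, eps, beta, delta_bar=1.0):
    """Assumed hazard rate: delta_bar * C**eps / S**beta."""
    return delta_bar * C ** eps / S ** beta

# If consumption and safety production both grow by the same factor, the
# hazard rate scales by that factor raised to the power (eps - beta): it
# rises without bound when eps > beta and falls towards zero when eps < beta.
growth = 1.02 ** 100  # e.g. a century of 2% growth in both C and S
risky = hazard(growth, growth, eps=2.0, beta=1.0)  # eps > beta: hazard grew
safe = hazard(growth, growth, eps=1.0, beta=2.0)   # eps < beta: hazard fell
```

This is why, within the model, balanced growth alone cannot save us in the 𝜀 >> 𝛽 case: safety production has to outpace consumption production by a widening margin.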

There are some quite plausible ways that the real world might break this assumed equation for the hazard rate, in ways that mean that the conclusions listed above no longer follow:

Conclusions that may depend on contentious model assumptions

  1. If we’re in a world where there’s an existential risk Kuznets curve:

    • A period of faster economic growth reduces overall existential risk in the long run (even though, if it happens on the upward part of the Kuznets curve, it increases risk in the short term). Conversely, slower economic growth increases overall existential risk.
    • On the other hand, a temporary boom followed by a bust, where the economy reverts back to its normal path after a period above trend, increases existential risk (a temporary bust followed by a boom decreases it).
  2. More generally, making people better off makes them care about preserving their future more relative to increasing consumption today, and so increases the amount society is willing to spend on existential risk reduction.

  3. Reducing the rate at which people discount their future utility is probably an easier way to reduce existential risk than increasing the growth rate (depending on what you think the likely range for 𝛾 is).

  4. Moving from a “rule of thumb” allocation where spending on existential risk reduction stays constant over time to an optimal allocation (whether future value is discounted or not) can create the possibility of averting existential catastrophe, where previously existential catastrophe was inevitable.
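To see why the discount rate (point 3 in the list above) can be such a powerful lever, note that an agent discounting future utility at rate 𝜌 weights utility t years ahead by exp(−𝜌t). A toy calculation with arbitrary numbers of our own choosing:

```python
import math

def weight(rho, t):
    """Weight placed on utility t years in the future under discount rate rho."""
    return math.exp(-rho * t)

# Halving the discount rate from 2% to 1% multiplies the weight placed on
# utility 200 years out by a factor of e**2 (roughly 7.4); in the model,
# weighting the far future more heavily translates into much greater
# willingness to pay for existential risk reduction.
boost = weight(0.01, 200) / weight(0.02, 200)
```

Because the weight is exponential in 𝜌·t, even modest changes in 𝜌 swamp comparable changes in the growth rate over long horizons.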

In general, it’s not clear to us whether or not these conclusions would change under reasonable modifications to the hazard rate equation, or to other parts of the model.

That being said, given what we know at the moment, we think that these conclusions are perhaps slightly more robust than the ones listed in the previous section, but still not very robust. And as with the previous conclusions, we think that the work in the paper should move you a bit, but not much, towards believing these conclusions, assuming that you were previously ambivalent towards them.

Conclusions that we believe are robust

  1. Making people better off makes them care about preserving their future more relative to increasing consumption today, and so increases the amount society would optimally spend on existential risk reduction (this increased inclination (assuming optimality) for society to spend money to preserve its own existence can be thought of as a civilisational analogue of the smaller-scale increased willingness of individuals to spend money to extend their lives).
  2. If you think that i) economic growth is more or less inevitable, ii) the sign of the impact of economic growth on existential risk won’t change over time, and iii) you’re not sure whether the sign is positive or negative, then you might as well speed up economic growth. In that world, most of the value comes from worlds where economic growth reduces existential risk, because in worlds where economic growth increases existential risk, we’re doomed anyway.
    • Also, this is generally true for inevitable trends with an unknown, but constant, impact on risk
    • (Phil has made this point in his blog; note, though, that it’s not at all clear that the sign of the impact of economic growth on existential risk won’t change over time)
  3. Moving from a “rule of thumb” allocation, where spending on existential risk reduction stays constant over time, to an optimal allocation that varies over time (whether people discount future value or not) can make a substantial difference to existential risk.
  4. You don’t have to believe in a suspiciously long or detailed list of ad-hoc claims about past and future technological developments in order to conclude that world history includes a ‘time of perils’; it appears in the simple world described in the paper with a moderate 𝜀 > 𝛽 scale effect.

We consider the final point to be especially important because, in our view, the plausibility of a ‘time of perils’ is extremely action-relevant from a longtermist perspective. We can consider this time of perils as an existential risk version of the environmental Kuznets curve, where environmental degradation rises at first, but eventually falls as society becomes richer and more willing to spend on measures that improve environmental quality.

Note also that conclusion 3 is in contrast to what is found in some previous work (Martin and Pindyck (2015, 2019) and Aurland-Bredesen (2019)) on catastrophe mitigation, which concludes that you don’t want to change the fraction of total output spent on safety much as you get richer. The difference comes from the fact that this previous work considers fractional losses to consumption, rather than existential catastrophe.

Objections to the model

Common objections that we believe are not strong

Objection: Why is it sensible to assume risk depends on the level of production in the consumption and safety sectors, rather than the rate of increase of production in the consumption and safety sectors?
Response: Consider the implications of spending 10x more on electricity production, of which 20% is power plants burning more fossil fuels and 80% is some extremely expensive (with our current tech) carbon capture that makes all the new production zero-net-emissions. Or spending 10x more on AI development than we currently do, of which only 20% goes to adding new capabilities and 80% goes to running extremely computationally expensive (with our current tech) proof-checking procedures that verify that the new code does exactly what we expect. It seems like that wouldn’t increase risk, even though the rate of increase of spending on consumption production is high. The intuition that speed is risky, if you pick it apart, is at least largely driven by the thought that risks come from increasing production in the consumption sector before you have had time to research the corresponding safety measures (increasing safety technology) and then to maintain the relevant safety infrastructure (safety production).

Objection: There is no capital in the model, but in reality capital is an important determinant of production, as well as labour.
Response: We think this is unlikely to change the important results of the model (although it would be interesting to include capital in the model, in order to model philanthropic investment).

Objection: In the model, it is assumed that people live forever and that they don’t have any descendants that they care about. This is clearly unrealistic.
Response: While there is a class of economic models that model generations more realistically (called overlapping generations models), we don’t think that using this kind of approach would change the important results of the model.

Objection: The model says that, for parameter values that are likely to be realistic (specifically, if 𝜀 > 𝛽, or 𝛾 is large), society will eventually allocate almost everyone into the safety sector. But it seems implausible that in the long run almost everyone will be working on safety.
Response: We’re not sure that that scenario is so implausible. You could imagine a future where advanced technology led to the production of really great consumer goods, but also to the possibility of very dangerous goods, such that almost everyone had to be employed, say, monitoring the use of the dangerous goods (or doing research into how to monitor them better).

Objection: Given differential technological progress, it might matter a whole lot whether a potentially risky technology is developed sooner or later. If it is developed later, the probability increases that we will have found corresponding risk-mitigating technologies in the meantime. But it seems that, according to the model, speeding up growth always decreases long-run risk (at least, given our uncertainty in the model parameters).
Response: Slowing growth within the model corresponds to reducing the population growth rate. This leads to a reduction in both the rate of consumption technology discovery and the rate of safety technology discovery (ignoring changes to employee allocation). So, within the model, to reduce growth is not to cause differential technological progress. Rather, within the model, promoting differential technological progress means allocating a greater share of scientists to safety technology.

Moderately strong / possibly strong objections

Objection: The model is an approximation of how the world is now and how we think it might be in the future. But how sure are we that things won’t have qualitatively changed in, say, 500 years’ time?
Response: Sure, we do need to assume that the world hasn’t changed so radically in 500 years’ time that the model is no longer applicable. If you think society will radically change in some way, you’d have to think carefully about whether the assumptions the model makes would still apply. One thing in favour of the model, though, is that it isn’t very closely fitted to the way society is currently organised, so you might expect it to be more resilient to radical changes to society than it would otherwise be.

Objection: Why should existential risk depend on the total amount of consumption rather than per capita consumption? Surely what matters is the amount of resources each individual has, rather than the total amount of stuff produced?
Response: If you think that the important thing for existential risk is how much each individual produces rather than how much everyone produces collectively, then you shouldn’t find the model very convincing. On the surface though, it seems plausible to us that total consumption would be more important. In any case, note that Phil has created an alternative model that is relevant here. Phil’s model assumes a fixed population (and exogenous productivity growth), which means that per capita consumption only differs from total consumption by a constant factor, so that it doesn’t really matter which one existential risk depends on. That model results in similar conclusions to Aschenbrenner’s one, except that existential catastrophe is no longer inevitable for 𝜀 >> 𝛽, provided that 𝛾 is not too small (but NB this result is provisional while potential errors in Aschenbrenner’s paper are checked). See the Appendix for more discussion of the role of population in the model.

Objection: I think the level of technological development should be important for existential risk, but the model doesn’t account for this
Response: The choice to use the level of consumption production rather than the level of consumption technology as an input to the hazard rate equates, roughly speaking, to assuming that existential risk comes from things being produced rather than research being done (e.g. due to lab accidents) or technology being available. It would be interesting to explore a model where the hazard rate depends explicitly on the level of technological development.

Objection: In the model, population grows forever at a constant rate. But, in reality we expect population growth to rapidly decline over the next century. This being the case, how can the model tell us anything about reality?
Objection: In the model, economic growth is ultimately driven by population growth. Is there strong evidence for this being true in the real world?
Response: Phil has created an alternative model that doesn’t assume exponential population growth, and found that this does lead to different results in some cases. Specifically, the model results in similar conclusions to Aschenbrenner’s one, except that existential catastrophe is no longer inevitable for 𝜀 >> 𝛽, provided that 𝛾 is not too small (but NB this result is provisional while potential errors in Aschenbrenner’s paper are checked). See the Appendix for a more detailed discussion.

Objection: In the model, society allocates workers between the four possible occupations (consumption worker, consumption scientist, safety worker, safety scientist) according to whatever maximises utility. Isn’t this vastly optimistic? It’s hard to imagine that we are currently at an optimal allocation, even for people with a non-zero discount rate.
Response: We are probably not particularly close to the optimal allocation at the moment. If you believe the model assumptions other than Optimal Allocation, you can consider that the model gives us what happens in a “best case” for coordination on existential risk mitigation. It would be interesting to look at the non-optimal allocation case. This might give you information about how valuable moving towards the optimal allocation would be.

Objection: In the model, we assume that everyone knows the true 𝜀 and 𝛽 values, so that society can allocate resources accordingly. But in reality it seems like it would be hard to be very confident about what these values are.
Response: It might be interesting to model this uncertainty explicitly. If you think that people have roughly the right central estimates, but with some uncertainty, our guess is that this wouldn’t change the big picture results very much, but it might change the detailed results in interesting ways. If you think that people are biased in one direction, it seems clear that this can qualitatively change the results (for example, if people think they’re in a regime where they should be transferring labour from safety to consumption, when in fact the opposite is true). In that case, modelling this explicitly could shed light on the value of educating people about existential risk.

Objection: It doesn’t seem sensible to have the same 𝛼 parameter for the good production functions in the safety and consumption sectors.
Response: Our guess is that allowing the two sectors to have different parameters rather than a single 𝛼 parameter doesn’t change the important results of the model. However, it might be interesting (and fairly easy) to try this out.

Strongest objections

Objection: 𝜀 and 𝛽 aren’t really fixed / the hazard rate is determined in a far more complex way than what is in the model. In the end, the likely path of the hazard rate will come down to details about which technologies get developed in which order, how dangerous they turn out to be, and things like how stable and robust important institutions are. The model can’t capture these things.
Objection: In the 𝜀 >> 𝛽 case, we face a choice between extinction on the one hand, and devoting so much of our resources to safety that we live lives worse than death on the other. But surely with perfect coordination (which we assume in the model), there’s some chance we could organise society in such a way that we have lots of “safe” technology / goods and none of the “dangerous” ones, such that we live lives worth living and in safety. A cartoon example might be a world where advanced technologies are all banned, and we move all resources currently spent on luxury goods to carbon capture.
Objection: According to the model, the scale effect, i.e. 𝜀 vs 𝛽, is centrally important for determining whether we will avert existential catastrophe. But isn’t this just an artefact of the model, which follows from the assumed relationship between existential risk, total consumption, and total safety production?
Response: We think these all point towards important things, which we discussed in the earlier section called “Conclusions that depend on contentious model assumptions” (note that we don’t necessarily agree with every assertion made in these objections).

Objection: Eliezer Yudkowsky's argument [LW · GW] that work for building an unsafe AGI parallelizes better than work for building a safe AGI, and that unsafe AGI benefits more in expectation from having more computing power than safe AGI, both imply that slower growth is better from an AI x-risk viewpoint.
Response: This can be thought of as a concrete example of the general point (also raised in the three previous objections) that the hazard rate equation in the model represents quite a strong assumption about the relationship between technology and existential risk. If you think that the argument described in the objection is definitely right, the model probably can’t tell you very much.

Why we think work in this area is useful

While we think that the work in the paper represents very weak evidence for the conclusions that follow only from the detailed assumptions of the model, we want to make the argument that work of the kind done in the paper, on theoretical models connecting economic growth to existential risk, is useful.

We claim that weak evidence that shifts your beliefs a bit is still valuable, especially when it addresses questions as important as the ones we’re considering here. Similarly, there is value in conclusions that are weaker (in the sense that assuming that they’re true changes your beliefs about the world less than “stronger” conclusions would) but more robust, like the ones we’ve drawn in the “Conclusions that we believe are robust” section of this post.

Currently, most work on understanding existential risk makes granular and specific arguments about particular technologies or political situations. More general and abstract models, like Aschenbrenner’s, give us a different flavour of insight into existential risk. Whatever your preferred approach, a mixed strategy that uses both may give more robust conclusions (particularly where the two approaches converge).

Exploring both approaches in parallel also has high information value given the high stakes and our uncertainty about which approach might be more fruitful.

Generally, the stronger your beliefs about the link between economic growth and existential risk, the less useful work like this will be. For example, if you’re certain that potentially dangerous Artificial General Intelligence will arrive in the next 20 years, and that economic growth will speed up its arrival, but not speed up safety work, you probably won’t learn as much from models of economic growth and existential risk as someone who is more agnostic on this point. On the other hand, if you’re more uncertain about when transformative technologies are likely to be developed, and what the effect of economic growth on these technologies and their regulation might be, theoretical models connecting economic growth to existential risk would shift your beliefs more.

Since there is a wide range of views out there, both within the Effective Altruism community and outside it, we think that this work is likely to be useful for many people.

A separate consideration is one of signalling / marketing: it might be the case that creating an academic literature on the impact of economic growth on existential risk will mean that academics, philanthropists and, eventually, policymakers and the general public take existential risk more seriously.


Thanks to Toby Newberry and Hamish Hobbs (and probably others) for feedback on earlier drafts of this.


Martin, Ian W. R. and Robert S. Pindyck, “Averting Catastrophes: The Strange Economics of Scylla and Charybdis,” American Economic Review, October 2015, 105 (10), 2947-2985.

_ and _, “Welfare Costs of Catastrophes: Lost Consumption and Lost Lives,” July 2019. Working Paper 26068, National Bureau of Economic Research.

Aurland-Bredesen, Kine Josefine, “The Optimal Economic Management of Catastrophic Risk.” PhD dissertation, 2019.

Appendix: the role of population growth

One pair of features of the model that you might find unintuitive is that

  1. population growth is necessary for sustained economic growth (without population growth, economic growth eventually stagnates), and
  2. population grows at a constant rate forever.

Feature (1) is inherited from the economic growth model that the Existential Risk and Growth model is based on, which is known as the Jones model; whether (1) is true or not in the real world doesn’t seem to be a settled issue. Meanwhile, feature (2) contradicts population growth forecasts pretty clearly.

At first glance, feature (2) in particular might seem to count against the model pretty strongly.

However, the combined effect of these features on economic growth is just to make the economy grow consistently over time at some fairly steady rate (unless there’s a dramatic change in the number of people employed in the consumption sector). The exact rate of economic growth doesn’t seem to be very important for the conclusions we draw from the model, so it appears that the effect that these features have on economic growth is pretty unimportant, as long as you think that the economy will continue to grow at a roughly constant rate.

Still, we might worry that exponentially increasing population drives a lot of the results through a means other than its effect on the economic growth rate. The argument is this: the hazard rate depends on total consumption production, while utility depends on per capita consumption production. As the population increases, either total consumption has to increase in proportion, or per capita consumption production needs to decrease. But there’s a minimum acceptable level of per capita consumption production (in order to keep utility positive), so eventually we’ll reach a point where total consumption has to increase in proportion to population increase. So it’s not surprising that catastrophe is inevitable in the 𝜀 >> 𝛽 case - this only comes about because we artificially forced population to keep increasing.

But, it turns out that catastrophe would be inevitable in the 𝜀 >> 𝛽 case even without population growth, because without population growth you can’t shift people into the safety sector fast enough to reduce the hazard rate fast enough to prevent existential catastrophe eventually happening.

However, while the simple argument given above is wrong, Phil has worked through the consequences of a model without exponentially increasing population and with exogenous economic growth (so, economic growth that is not driven by population growth), and found that this does change some of the results. In particular, existential catastrophe is no longer inevitable for 𝜀 >> 𝛽, provided that 𝛾 is not too small (but NB this result is provisional while potential errors in Aschenbrenner’s paper are checked).


Comments sorted by top scores.

comment by CarlShulman · 2020-08-23T20:21:39.358Z · EA(p) · GW(p)

My main issue with the paper is that it treats existential risk policy as the result of a global collective utility-maximizing decision based on people's tradeoffs between consumption and danger. But that is assuming away approximately all of the problem.

If we extend that framework to determine how much society would spend on detonating nuclear bombs in war, the amount would be zero and there would be no nuclear arsenals. The world would have undertaken adequate investments in surveillance, PPE, research, and other capacities in response to data about previous coronaviruses such as SARS to stop COVID-19 in its tracks. Renewable energy research funding would be vastly higher than it is today, as would AI technical safety. As advanced AI developments brought AI catastrophic risks closer, there would be no competitive pressures to take risks with global externalities in development either by firms or nation-states.

Externalities massively reduce the returns to risk reduction, with even the largest nation-states being only a small fraction of the world, individual politicians much more concerned with their term of office and individual careers than national-level outcomes, and individual voters and donors constituting only a minute share of the affected parties. And conflict and bargaining problems are entirely responsible for war and military spending, central to the failure to overcome externalities with global climate policy, and core to the threat of AI accident catastrophe.

If those things were solved, and the risk-reward tradeoffs well understood, then we're quite clearly in a world where we can have very low existential risk and high consumption. But if they're not solved, the level of consumption is not key: spending on war and dangerous tech that risks global catastrophe can be motivated by the fear of competitive disadvantage/local catastrophe (e.g. being conquered) no matter how high consumption levels are.

Replies from: trammell, Ben_Snodin
comment by trammell · 2020-08-24T09:14:33.056Z · EA(p) · GW(p)

I agree that the world underinvests in x-risk reduction (/overspends on activities that increase x-risk as a side effect) for all kinds of reasons. My impression would be that the two most important reasons for the underinvestment are that existential safety is a public good on two fronts:

  • long-term (but people just care about the short term, and coordination with future generations is impossible), and
  • global (but governments just care about their own countries, and we don't do global coordination well).

So I definitely agree that it's important that there are many actors in the world who aren't coordinating well, and that accounting for this would be an important next step.

But my intuition is that the first point is substantially more important than the second, and so the model assumes away much but not close to all of the problem. If the US cared about the rest of the world equally, that would multiply its willingness to pay for an increment of x-risk mitigation by maybe an order of magnitude. But if it had zero pure time preference but still just cared about what happened within its borders (or something), that would seem to multiply the WTP by many orders of magnitude.

Replies from: CarlShulman
comment by CarlShulman · 2020-08-24T16:24:10.171Z · EA(p) · GW(p)

I'd say it's the other way around, because longtermism increases both rewards and costs in prisoner's dilemmas. Consider an AGI race or nuclear war. Longtermism can increase the attraction of control over the future (e.g. wanting to have a long term future following religion X instead of Y, or communist vs capitalist). During the US nuclear monopoly some scientists advocated for preemptive war based on ideas about long-run totalitarianism. So the payoff stakes of C-C are magnified, but likewise for D-C and C-D.

On the other hand, effective bargaining and cooperation between players today is sufficient to reap almost all the benefits of safety (most of which depend more on not investing in destruction than investing in safety, and the threat of destruction for the current generation is enough to pay for plenty of safety investment).

And coordinating on deals in the interest of current parties is closer to the current world than fanatical longtermism.

But the critical thing is that risk comes not just from underinvestment in safety but from investments in catastrophically risky moves, driven by competitive games that the optimal-allocation assumption rules out.

Replies from: trammell
comment by trammell · 2020-08-25T10:43:07.776Z · EA(p) · GW(p)

Sure, I see how making people more patient has more-or-less symmetric effects on risks from arms race scenarios. But this is essentially separate from the global public goods issue, which you also seem to consider important (if I'm understanding your original point about "even the largest nation-states being only a small fraction of the world"), which is in turn separate from the intergenerational public goods issue (which was at the top of my own list).

I was putting arms race dynamics lower than the other two on my list of likely reasons for existential catastrophe. E.g. runaway climate change worries me a bit more than nuclear war; and mundane, profit-motivated tolerance for mistakes in AI or biotech (both within firms and at the regulatory level) worry me a bit more than the prospect of technological arms races.

That's not a very firm belief on my part; I could easily be convinced that arms races should rank higher than the mundane, profit-motivated carelessness. But I'd be surprised if the latter were approximately none of the problem.

Replies from: CarlShulman
comment by CarlShulman · 2020-08-25T18:29:17.007Z · EA(p) · GW(p)
But this is essentially separate from the global public goods issue, which you also seem to consider important (if I'm understanding your original point about "even the largest nation-states being only a small fraction of the world"),

The main dynamic I have in mind there is 'country X being overwhelmingly technologically advantaged/disadvantaged' treated as an outcome on par with global destruction, driving racing, and the necessity for international coordination to set global policy.

I was putting arms race dynamics lower than the other two on my list of likely reasons for existential catastrophe. E.g. runaway climate change worries me a bit more than nuclear war; and mundane, profit-motivated tolerance for mistakes in AI or biotech (both within firms and at the regulatory level) worry me a bit more than the prospect of technological arms races.

Biotech threats are driven by violence. On AI, for rational regulators of a global state, a 1% or 10% chance of destroying society looks like enough to mobilize immense resources and to delay deployment of dangerous tech for safety engineering and testing. There are separate epistemic and internal coordination failures that undermine the rational part of the rational social planner model (e.g. US coronavirus policy has predictably failed to serve US interests, or even the reelection aims of current officeholders; Tetlockian forecasting is underused), and these loom large: it's hard to come up with a rational planner model that explains the observed level of preparation for pandemics and AI disasters.

I'd say that, given epistemic rationality in social policy-setting, you'd be left with a big international coordination/brinksmanship issue, but you would get strict regulation against blowing up the world for small increments of profit.

comment by Ben_Snodin · 2020-08-24T08:37:56.179Z · EA(p) · GW(p)

I haven't thought about this angle very much, but it seems like a good angle which I didn't talk about much in the post, so it's great to see this comment.

I guess the question is whether you can take the model, including the optimal allocation assumption, as corresponding to the world as it is plus some kind of (imagined) quasi-effective global coordination in a way that seems realistic. It seems like you're pretty skeptical that this is possible (my own inside view is much less certain about this but I haven't thought about it that much).

One thing that comes to mind is that you could incorporate individual states' spending on dangerous tech for self-defence into the hazard rate equation through epsilon. It seems like the risk from this should probably increase with consumption (it's easier to fund if you're rich), so that doesn't seem unreasonable. I'm not sure whether this gets to the core of the issue you've raised, though.
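To make this concrete, here is a purely illustrative sketch. The schematic hazard rate delta = epsilon * B^beta / C^alpha (consumption-sector output B raising risk, safety output C reducing it) is only loosely in the spirit of the paper's model, and the consumption-dependent term epsilon(B) = epsilon0 + k*B, standing in for self-defence spending that scales with wealth, is my own hypothetical assumption; all parameter values are made up.

```python
def hazard_rate(consumption, safety, epsilon0=0.01, k=0.001,
                beta=0.5, alpha=0.5):
    """Schematic hazard rate with a consumption-dependent epsilon.

    epsilon(consumption) = epsilon0 + k * consumption captures the idea
    that richer states can more easily fund dangerous self-defence tech.
    Functional forms and parameters are illustrative, not the paper's.
    """
    epsilon = epsilon0 + k * consumption
    return epsilon * consumption**beta / safety**alpha

# With epsilon rising in consumption, doubling consumption raises the
# hazard by more than the bare beta exponent alone would imply.
low = hazard_rate(consumption=100.0, safety=50.0)
high = hazard_rate(consumption=200.0, safety=50.0)
print(high / low > 2**0.5)  # True: growth is riskier than with a fixed epsilon
```

The point of the sketch is just that letting epsilon depend on consumption changes how risk scales with growth, which is one way a "world -> model" choice drives the conclusions.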

I suppose you can also think about this through the "beta and epsilon aren't really fixed" lens that I put more emphasis on in the post. Within the model, it seems like greater (or lesser) coordination generally implies a more (or less) favourable epsilon and beta.

comment by Jan_Kulveit · 2020-08-27T10:41:09.392Z · EA(p) · GW(p)

I posted a short version of this, but I think people found it unhelpful, so I'm posting a somewhat longer version.

  • I have seen a fair number of papers and talks broadly in the genre of "academic economics"
  • My intuition, based on that, is that they often consist of projecting complex reality onto a space of a single-digit number of real dimensions plus a bunch of differential equations
  • The culture of the field often signals that solving the equations is the profound/important part, and that how you do the "world -> 10d" projection is less interesting
  • In my view, for practical decision-making and world-modelling it's usually the opposite: the really hard and potentially profound part is the projection. Solving the maths is often in some sense easy, at least in comparison to the best maths humans are doing
  • While I think the enterprise is overall worth pursuing, people should in my view have a relatively strong prior that, for any conclusion which depends on the "world -> reals" projection, there could be many alternative projections leading to different conclusions; while I like the effort in this post to dig into how stable the conclusions are, in my view people who don't have cautious intuitions about the space of "academic economics models" could still easily over-update or trust the robustness too much
  • If people are not sure, an easy test is something like "try to modify the projection in any way so that the conclusions do not hold". This will usually not produce an interesting or strong argument, since it's just trying some semi-random moves in the model space, but it can lead to better intuition.
  • I tried a few such tests in a cheap and lazy way (e.g. what would this model tell me about running at night on a forested slope?) and my intuitions were:
  • I agree with the caution in the present post: the work in the paper represents very weak evidence for conclusions that follow only from the detailed assumptions of the model. (At the same time it can be an excellent academic economics paper)
  • I'm more worried about other writing about the results, such as the linked post on Phil's blog, which in my reading signals "these results are robust" more strongly than is safe
  • Harder and more valuable work is to point to the most significant ways in which the projection fails (aspects of reality it ignores, etc.). In this case that was done by Carl Shulman, and it's worth discussing further
  • In practice I do have some worries about a meme like "ah, we don't know, but given that we don't know, speeding up progress is likely good (as proved in this good paper)" being created in the EA memetic ecosystem. (To be clear, I don't think the meme would reflect what Leopold or Ben believe)

comment by Jan_Kulveit · 2020-08-25T09:44:31.855Z · EA(p) · GW(p)

In my view

  • a safe way to read the paper is as academic economics: the paper says what happens if you solve a particular set of equations
  • while the variable names used in the equations appear to point toward reality, it is in fact almost completely unclear whether the model is a reasonable map of at least some aspect of the territory

Overall I think a good check for EAs deciding whether they should update based on this result is:

  • would you be able to make a different set of assumptions of the same type, also reasonable at first glance, that leads to the opposite conclusions?

where, if the answer is "no", I would suggest that people basically should not update.