# Estimating the Philanthropic Discount Rate

post by MichaelDickens · 2020-07-03 · EA Forum

*Cross-posted to my website. I have tried to make all the formatting work on the EA Forum, but if anything doesn't look right, try reading on my website instead.*

## Summary

- How we should spend our philanthropic resources over time depends on how much we discount the future. A higher discount rate means we should spend more now; a lower discount rate tells us to spend less now and more later.
- We (probably) should not assign less moral value to future beings, but we should still discount the future based on the possibility of extinction, expropriation, value drift, or changes in philanthropic opportunities.
- According to the Ramsey model, if we estimate the discount rate based on those four factors, that tells us how quickly we should consume our resources^{[1]}.
- We can decrease the discount rate, most notably by reducing existential risk and guarding against value drift. We still have a lot to learn about the best ways to do this.
- According to a simple model, improving our estimate of the discount rate might be the top effective altruist priority.

# Introduction

Effective altruists can become more effective by carefully considering how they should spread their altruistic consumption over time. This subject receives some attention in the EA community, but a lot of low-hanging fruit still exists, and EAs could probably do substantially more good by further optimizing their consumption schedules (for our purposes, "consumption" refers to money spent trying to improve the world).

So, how should altruists use their resources over time? In 1928, Frank Ramsey developed what is now known as the Ramsey model. In this model, a philanthropic actor has some stock of invested capital that earns interest over time. They want to know how to maximize utility by spending this capital over time. The key question is, at what rate should they spend to maximize utility?

(Further suppose this philanthropic actor is the sole funder of a cause. If other actors also fund this cause, that substantially changes considerations because you have to account for how they spend their money^{[2]}. For the purposes of this essay, I will assume the cause we care about only has one funder, or that all funders can coordinate.)

Specifically, we assume the actor's capital grows according to a constant (risk-free) interest rate $r$. Additionally, we discount future utility at some rate $\delta$, so that if performing some action this year would produce 1 utility, next year it will only give us $e^{-\delta}$ utility. The actor then needs to decide at what rate to consume their capital.

Total utility equals the sum of discounted utilities at each moment in time. In mathematical terms, we write it as

$$U = \int_0^\infty e^{-\delta t} u(c(t)) \, dt$$

where $c(t)$ gives the amount of resources to be consumed (that is, spent on altruistic endeavors) at time $t$, and $u(c)$ gives utility of consumption.

This model makes many simplifications—see Ramsey (1928)^{[3]} and Greaves (2017)^{[4]} for a detailing of the required assumptions, of both an empirical and a philosophical nature. To keep this essay relatively simple, I will take the Ramsey model as given, but it should be noted that changing these assumptions could change the results.

It is common to assume that actors have constant relative risk aversion (CRRA), which means their level of risk aversion doesn't change based on how much money they have. Someone with logarithmic utility of consumption has CRRA, as does anyone whose utility function looks like $u(c) = \frac{c^{1-\eta}}{1-\eta}$ for some constant $\eta$.

An actor with CRRA maximizes utility by following this consumption schedule^{[3:1]}:

$$c(t) = \frac{\delta - (1 - \eta) r}{\eta} \cdot W(t)$$

where $W(t)$ is the capital stock at time $t$, $r$ is the interest rate, and $\eta$ is elasticity of marginal utility. Higher $\eta$ indicates greater risk aversion. $\eta = 1$ corresponds to logarithmic utility.

(Original result is due to Ramsey (1928), but credit to Philip Trammell^{[5]} for this specific formulation.)

The scale factor $\frac{\delta - (1 - \eta) r}{\eta}$ tells us what proportion of the portfolio to spend during each period in order to maximize utility. A higher discount rate means we should spend more now, while a lower discount rate tells us to save more for later. Intuitively, if we discount the future more heavily, that means we care relatively less about future spending, so we should spend more now (and vice versa).

According to the Ramsey model, following a different consumption schedule than the above results in sub-maximal utility. If we spend too much early on, we prevent our assets from growing as quickly as they should. And if we spend too little, we don't reap sufficient benefits from our assets. Therefore, we would like to know the value of $\delta$ so we know how to optimally spread our spending over time. (The parameters $r$ and $\eta$ matter as well, but in this essay, I will focus on $\delta$.)
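To make the rule concrete, here is a minimal numeric sketch. It assumes the standard CRRA result (due to Ramsey, in Trammell's formulation) that the optimal fraction of capital to consume each year is $(\delta - (1-\eta)r)/\eta$; the function name is my own, not from the cited sources.

```python
# Sketch of the CRRA-optimal spending rule, assuming the optimal
# annual spending fraction is (delta - (1 - eta) * r) / eta.

def optimal_spending_fraction(r, delta, eta):
    """r: interest rate, delta: discount rate, eta: elasticity of marginal utility."""
    return (delta - (1 - eta) * r) / eta

# With logarithmic utility (eta = 1), the rule reduces to spending
# exactly the discount rate each year:
print(optimal_spending_fraction(r=0.05, delta=0.01, eta=1))  # 0.01
```

Note that with $\eta = 1$ the interest rate drops out entirely: a log-utility actor spends the fraction $\delta$ of their portfolio regardless of returns, which matches the intuition that a higher discount rate means spending more now.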

If we have a pure time preference, that means we discount future utility because we consider the future less morally valuable, and not because of any empirical facts. Ramsey called a pure time preference "ethically indefensible." But even if we do not admit any pure time preference, we may still discount the value of future resources for four core reasons:

- All resources become useless (I will refer to this as "economic nullification").
- We lose access to our own resources.
- We continue to have access to our own resources, but do not use them in a way that our present selves would approve of.
- The best interventions might become less cost-effective over time as they get more heavily funded, or might become more cost-effective as we learn more about how to do good.

("Resources" can include money, stocks, gold, or any other valuable and spendable asset. I will mostly treat resources as equivalent to money.)

In the next section, I explain why we might care about the long-run discount rate in addition to the current discount rate. In "Breaking down the current discount rate", I consider the current discount rate in terms of the above four core reasons and roughly estimate how much we might discount based on each reason. In "Breaking down the long-run discount rate", I do the same for the discount rate into the distant future. In "Can we change the discount rate?", I briefly investigate the value of reducing the discount rate as an effective altruistic activity. Similarly, in "Significance of mis-estimating the discount rate", I find that simply improving our estimate of the discount rate could possibly be a top effective altruist cause. Finally, the conclusion provides some takeaways and suggests promising areas for future research.

In this essay, I deal with some complicated subjects that deserve a much more detailed treatment. I provide answers to questions whenever possible, but these answers should be interpreted as extremely preliminary guesses, not confident claims. The primary purpose of this essay is merely to provide a starting point for discussion and raise some important and neglected research questions.

This essay addresses the philanthropic discount rate, referring specifically to the discount rate that effective altruists should use. This relates to the economic concept of the social discount rate, which (to simplify) is the rate at which governments should discount the value of future spending. Effective altruists tend to have substantially different values and beliefs than governments, resulting in substantially different discount rates. But if we know the social discount rate, we can use it to "reverse-engineer" the philanthropic discount rate by subtracting out any factors governments use that we do not believe philanthropists should care about, and then adding in any factors governments tend to neglect (e.g., perhaps we believe most people underestimate the probability of extinction). For now, I will not attempt this approach, but this would make a good subject for future research. For a more detailed survey of the social discount rate and the considerations surrounding it, see Greaves (2017)^{[4:1]}.

When attempting to make predictions, I will frequently refer to Metaculus questions. Metaculus is a website that "poses questions about the occurrence of a variety of future events, on many timescales, to a community of participating predictors" with the aim of helping humanity make better predictions. It has a reasonably impressive track record. Although Metaculus' short-term track record might not extrapolate well to the long-term questions referenced in this essay, the aggregated predictions made by Metaculus are probably more reliable than uninformed guesses^{[6]}. Metaculus predictions can change over time as more users make predictions, so the numbers I quote in this essay might not reflect the most up-to-date information. In order to avoid double-counting my personal opinion, I have not registered my own predictions on any of the linked Metaculus questions.

Sjir Hoeijmakers, senior researcher at Founders Pledge, has written a similar essay [EA · GW] about how we should discount the future. I read his post before publishing this, but I wrote this essay before I knew he was working on the same topic, so any overlap in content is coincidental.

## Significance of a declining long-run discount rate

The basic Ramsey model assumes a fixed discount rate. But it seems plausible that the discount rate declines over time. How does that affect how we should allocate our spending across time?

In short, we should spend more when the discount rate is high, and decrease our rate of spending as the discount rate falls. See Appendix for proof.

The pace of this decline in spending heavily depends on model assumptions. If we use a continuously declining discount rate (as in the Appendix), the optimal consumption rate does not have a closed-form solution, but we can verify numerically that with reasonable parameters, the optimal rate at time t = 0 only slightly exceeds the optimal long-run rate (e.g., 0.11% vs. 0.10%). But if we use a discrete state-based model (as in Trammell^{[5:1]} section 3), under some reasonable parameters, the current consumption rate equals the current discount rate.

Given these reasonable but conflicting models, it is unclear how much we should consume today as a function of the current and long-run discount rates. More investigation is required, but until then, it makes sense to attempt to estimate both the current and long-run discount rates.

Additionally, some arguments suggest that we do not live at a particularly influential time [EA · GW]. If true, that means most estimates of the current discount rate are way too high, the current rate probably resembles the long-run rate, and the long-run rate should be used in calculating optimal consumption.

# Breaking down the current discount rate

In this part, I examine some plausible reasons why each of the four types of events (economic nullification, expropriation, value drift, change in opportunities) could occur, and roughly reason about how they should factor into the discount rate.

## Economic nullification

An economic nullification event is one in which all our resources become worthless. Let's break this down into three categories: extinction, superintelligent AI, and economic collapse. Other types of events might result in economic nullification, but these three seem the most significant.

### Extinction

Even if we do not prioritize extinction risk reduction as a top cause area^{[7]}, we should factor the probability of extinction into the discount rate. In possible futures where civilization goes extinct, we have no way of creating value.

We only have very rough estimates of the probability of extinction. I will cite three sources that appear to give among the best-quality estimates we have right now.

- Pamlin and Armstrong (2015), 12 Risks That Threaten Human Civilization estimated a 0.13% probability of extinction in the next century from all causes excluding AI, and a 0-10% chance of extinction due to AI^{[8]}.
- Sandberg and Bostrom (2008)'s Global Catastrophic Risks Survey estimated a 19% probability of extinction before 2100, based on a survey of participants at the Global Catastrophic Risks Conference.
- "Database of existential risk estimates (or similar)", a Google Doc compiled by Michael Aird, includes a list of predictions on the probability of extinction. As of 2020-06-19, these predictions (excluding the two I already cited) give a median annual probability of 0.13% and a mean of 0.20% (see my copy of the sheet for calculations)^{[9]}.

These estimates translate into an annual extinction probability of 0.0013% to 0.26%, depending on which numbers we use.

For more, see Rowe and Simon (2018), "Probabilities, methodologies and the evidence base in existential risk assessments.", particularly the appendix, which provides a list of estimates of the probability of extinction or related events^{[10]}.

Michael Aird (2020), "Database of existential risk estimates" [EA · GW] (an EA Forum post accompanying the above-linked spreadsheet), addresses the fact that we only have extremely rough estimates of the extinction probability. He reviews some of the implications of this fact, and ultimately concludes that attempting to construct such estimates is still worthwhile. I think he explains the relevant issues pretty well, so I won't address this problem other than to say that I basically endorse Aird's analysis.

### Superintelligent AI

If we develop a superintelligent AI system, this could result in extinction. Alternatively, it could result in such a fantastically positive outcome that any money or resources we have now become useless. Even though a "friendly" AI does not constitute an existential threat, it could still put us in a situation where everyone's money loses its value, so we should include this possibility in the discount rate.

AI Impacts reviewed AI timeline surveys, in which AI experts estimated their probabilities of seeing human-level AI by a certain date. We can use these survey results to calculate the implied probability of artificial general intelligence P(AGI)^{[11]}.

Let's take the 2013 FHI survey as an example. This survey gives a median estimate of a 10% chance of AGI by 2020 and a 50% chance by 2050. A 10% chance between 2013 and 2020 suggests an annual probability of 1.37%, and a 50% chance between 2013 and 2050 implies a 1.11% annual probability.

The 10% and 50% estimates given by each of the surveys reviewed by AI Impacts imply annual probabilities ranging from a minimum of 0.56% to a maximum of 1.78%, with a mean of 1.13% and a standard deviation of 0.32 percentage points.

Three relatively recent surveys asked participants for predictions rather than probabilities, and these imply P(AGI) ranging from 0.51% to 1.78%.

Metaculus predicts that AGI has a 50% chance of emerging by 2043 (with 168 predictions), implying a 2.97% annual probability of AGI.
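The 2.97% figure follows from assuming a constant annual probability $p$ of AGI: a 50% chance over the 23 years from 2020 to 2043 means $(1 - p)^{23} = 0.5$. A one-line check (the 2020 start year is my reading of the prediction window):

```python
# If AGI has a constant annual probability p and a 50% chance of arriving
# within 23 years (2020 to 2043), then (1 - p)**23 = 0.5. Solve for p:
p = 1 - 0.5 ** (1 / 23)
print(round(p * 100, 2))  # 2.97
```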

A superintelligent AI could lead to an extremely bad outcome (extinction) or an extremely good one (post-scarcity), or it could land us somewhere in the middle, where we can still use our resources to improve the world, and therefore money has value. Or the AI might be able to use our accumulated resources to continue producing value—in fact, this seems likely. So we should only treat the probability of AGI as a discount insofar as we expect it to result in extinction or post-scarcity.

What is the probability of an extreme outcome (good or bad)? Again, we do not have any good estimates of this. As an upper bound, we can simply assume a 100% chance that a superintelligent AI results in an extreme outcome. Combining this with the AI Impacts survey review gives an estimated 1.78% annual probability of an extreme outcome due to AI, equating to a 1.78% discount factor.

As a lower bound, assume only extinction can result in extreme outcomes, and that the extreme upside (post-scarcity) cannot happen. Taking the upper end of the extinction risk estimate from Pamlin and Armstrong (2015) gives a 0.1% annual probability of extinction, and thus a 0.1% annual probability of an extreme outcome due to AI. So based on these estimates, our discount factor due to AI falls somewhere between 0.1% and 2.97% (or possibly lower), and this may largely or entirely overlap with the discount factor due to extinction.

Metaculus gives a 57% probability (with 77 predictions) that an AGI will lead to a "positive transition." Müller & Bostrom (2016)^{[12]} surveyed AI experts and came up with a 78% probability on a similar resolution. This gives us some idea of to what extent the discount due to AGI overlaps with the discount due to extinction.

We could spend time examining plausible AI scenarios and how these impact the discount rate, but I will move on for now. For more on predictions of AI timelines (and the problems thereof), see Muehlhauser (2015), What Do We Know about AI Timelines?

### Economic collapse

Money could become useless if the global economy experiences a catastrophic collapse, even if civilization ultimately recovers.

Depending on the nature of the event, it may be possible to guard against an economic collapse. For example, hyperinflation destroys the value of cash and bonds, but might leave stocks, gold, and real estate relatively unaffected, so investors in these assets could still preserve (some of) their wealth.

We have seen some countries experience severe economic turmoil, such as Germany after WWI and Zimbabwe in 2008, but these would not have resulted in complete loss of capital for a highly diversified investor (i.e., one who holds some gold or other real assets).

Almost any severe economic collapse would result in a *near*-complete loss of resources, not a *complete* loss. We should only discount future worlds where we see a complete loss, because any partial loss of capital can get rolled into the interest rate.

Pamlin and Armstrong (2015) include catastrophic economic collapse as one of their 12 risks that threaten civilization, but do not provide a probability estimate.

## Expropriation and value drift

Obviously, expropriation and value drift are not the same thing. But over longer time periods, it is not always clear whether an old institution ceased to exist due to outside forces or because its leaders lost focus.

I am not aware of any detailed investigations on the rate of institutional failure. Philip Trammell stated on the 80,000 Hours Podcast:

I did a cursory look at what seemed to me like the more relevant foundations and institutions that were set up over the past thousand years or something. [...] I came up with a very tentative value drift/expropriation rate of half a percent per year for ones that were explicitly aiming to last a long time with a relatively well defined set of values.

According to Sandberg (n.d.)^{[13]}, nations have a 0.5% annual probability of ceasing to exist. Most institutions don't last as long as nations, but an institution that's designed to be long-lasting might outlast its sovereign country. So perhaps we could infer an institutional failure rate of somewhere around 0.5%.

### Expropriation

According to Dimson, Marsh, and Staunton's Global Investment Returns Yearbook 2018 (henceforth "DMS"), from 1900 to 2018, only two major countries (out of 23) experienced a nationwide expropriation of investors' assets: Russia and China (in both cases because of a communist revolution). This gives a historical annual 0.05% probability of expropriation when countries are weighted by market capitalization (0.07% when countries are equal-weighted).
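The equal-weighted figure can be backed out directly, treating each country-year as one observation (a rough sketch; DMS's own methodology may differ):

```python
# Equal-weighted expropriation rate: 2 events across 23 countries
# over the 118 years from 1900 to 2018, one country-year per observation.
rate = 2 / (23 * 118)
print(round(rate * 100, 2))  # 0.07 (percent per year)
```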

Both expropriation events occurred in unstable countries that DMS classify as having been "emerging" at the time (defined as having a GDP per capita under $25,000, adjusted for inflation). Thus, it seems investors have some ability to predict in advance whether their country has a particularly high risk of expropriation. We can probably assume that developed countries such as the United States have an expropriation risk of less than 0.05% because no developed-country expropriations occurred in DMS's sample.

Note that some other countries (such as Cuba) did expropriate citizens' funds, but are not included in DMS. DMS's sample covers 98% of world market cap, so the remaining countries matter little on a cap-weighted basis. Furthermore, if investors can predict in advance that they live in a high-risk country, this holds doubly so for frontier markets like Cuba.

So it seems the risk of nationwide expropriation in developed countries is so small that it's a rounding error compared to other factors like value drift.

What about the risk that your personal assets are expropriated? If governments only expropriate assets from certain people or institutions, the risk to any particular individual is relatively small, simply because that individual will probably not be among the targeted group. But as these sorts of events do not appear in stock market returns, we cannot estimate the risk based on DMS data, and the risk is harder to estimate in general. As individual expropriation happens fairly rarely, I would expect that investors experience greater risk from nationwide expropriation. As a naive approach, we could double the 0.05% figure from before to get a 0.1% all-in annual probability of expropriation, although I suspect this overstates the risk.

More frequently, governments seize some but not all of citizens' assets, for example when the United States government forced all citizens to sell their gold at below-market rates. Such events do not existentially threaten one's financial position, so they should not be considered as part of the expropriation rate for our purposes.

Metaculus predicts that donor-advised funds (DAFs) have a somewhat higher probability of expropriation, although this is based on a limited number of predictions, and it only applies to philanthropists who use DAFs.

Investors can protect against expropriation by domiciling their assets in multiple countries. Probably the safest legal way to do this is to buy foreign real estate, which is the most difficult asset for governments to expropriate. But in general, investors cannot easily shield their assets from expropriation. In Deep Risk, William Bernstein concludes that the benefits of avoiding expropriation probably do not justify the costs for individual investors. The same is probably true for philanthropists.

### Value drift

When discussing value drift, we must distinguish between individuals and institutions. Both types of actors must make decisions about how to use their money over time, but they experience substantially different considerations. Most obviously, individuals cannot continue donating money for multiple generations.

For the purposes of this essay, we care more about the institutional rate of value drift:

- Effective altruist institutions have much more money. Indeed, sufficiently wealthy individuals typically create institutions to manage their money.
- Insofar as individuals have a higher value drift rate, they can mitigate this by giving their money to long-lived institutions. (Although for many individuals, most of their donations will come from future income, and donating future income now poses some challenges, to say the least.)
- Individual effective altruists typically share values and goals with many other people. A single individual ceasing to donate to a cause almost never existentially threatens the goals of that cause.

That said, I will briefly address individual value drift. We don't know much about it, but we have some information:

- According to the 2018 EA Survey, 40% of Giving What We Can pledge-signers do not report keeping up with the pledge [EA · GW] (although this is partially due to lack of reporting)
- An analysis of the 2014-2018 EA Surveys [EA · GW] suggests about a 60% 4-5 year survival rate.
- A poll of one individual's contacts [EA · GW] found a 45% 5-year survival rate.

Each of these sources suggests something like a 10% annual value drift rate. This is much higher than any other rate estimated in this essay. On the bright side, one survey found that wealthier individuals tend to have a lower rate of value drift, which means the dollar-weighted value drift rate might not be quite as bad as 10%.
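The "something like 10%" figure comes from annualizing the multi-year survival rates above, again assuming a constant annual drift rate (the helper function is my own):

```python
def annual_drift_rate(survival, years):
    """Implied constant annual value drift rate from a multi-year survival rate."""
    return 1 - survival ** (1 / years)

# ~60% survival over ~4.5 years (EA Survey analysis):
print(round(annual_drift_rate(0.60, 4.5) * 100, 1))  # ~10.7 (percent)
# 45% survival over 5 years (poll of one individual's contacts):
print(round(annual_drift_rate(0.45, 5) * 100, 1))  # ~14.8 (percent)
```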

For long-lived institutions, it's hard to measure the value drift rate in isolation. We can more easily measure the combined expropriation/value drift rate. As discussed above, some preliminary evidence suggests a rate of about 0.5%. Further investigation could substantially refine this estimate.

## Changes in opportunities

I've saved the best for last, because changes in opportunities appears to be the most important factor in the discount rate.

First, I should note that it doesn't really make sense to model the rate of changes in opportunities as part of the discount rate. Future *utility* doesn't become less valuable due to changes in opportunities; rather, *money* becomes less (or more) effective at producing utility. It might make more sense to treat changes in opportunities as part of the utility function^{[14]}, or to create a separate parameter for it. Perhaps we can spend money on research to improve the value of future opportunities, and we could account for this. Unfortunately, that would probably mean we no longer have a closed-form solution for the optimal consumption rate. So for the sake of making the math easier, let's pretend it makes sense to include changes in opportunities within the discount rate, and assume the rate of change is fixed and we can't do anything to change it. A future project can relax this assumption and see how it changes results.

Our top causes could get better over time as we learn more about how to do good, or they could get worse as the best causes become fully funded. We have some reason to believe both of these things are happening. Which effect is stronger?

Let's start by looking at GiveWell top charities, where we have a particularly good (although nowhere near perfect) idea of how much good they do.

This table lists the most cost-effective charity for each year according to GiveWell's estimates, in terms of cost per life-saved equivalent (CPLSE). The "real" column adjusts each CPLSE estimate to November 2015 dollars.

| Year | Organization | CPLSE nominal | CPLSE real |
|---|---|---|---|
| 2012 | Against Malaria Foundation | $2004 | $2066 |
| 2013 | Against Malaria Foundation | $3401 | $3463 |
| 2014 | Deworm the World | $1625 | $1633 |
| 2015 | Against Malaria Foundation | $1783 | $1783 |
| 2016 | Deworm the World | $901 | $886 |
| 2017 | Deworm the World | $851 | $819 |
| 2018 | Deworm the World | $652 | $592 |
| 2019 | Deworm the World | $480 | $443 |

We cannot take these expected value estimates literally, but they might tell us something about the direction of change.

GiveWell does not provide cost-effectiveness estimate spreadsheets for earlier years, but its earlier estimates tended to be lower, e.g., "under $1000 per infant death averted" for VillageReach in 2009. For a time, GiveWell's estimates increased over time due to (according to GiveWell) excessive optimism in the earlier calculations. However, the estimates have been near-monotonically decreasing since 2013 (every year except 2014-2015). Metaculus predicts (with 117 predictions) the 2021 real cost-effectiveness estimate to lie between the values for 2018 and 2019, suggesting a positive but small change in cost. It predicts (with 49 predictions) that GiveWell's 2031 real cost-effectiveness estimate will be $454, nearly the same as 2019, implying that Metaculus expects GiveWell's estimates to stabilize.
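Taking the table's real CPLSE figures literally (which, again, we should not do with much confidence), we can compute the implied annualized rate of change from 2012 to 2019:

```python
# Rough annualized rate of change in GiveWell's real CPLSE estimates,
# from $2066 (2012) to $443 (2019), a span of 7 years.
annual_change = (443 / 2066) ** (1 / 7) - 1
print(round(annual_change * 100, 1))  # ~ -19.7, i.e. costs fell ~20% per year
```

A ~20% annual decline in cost per life-saved equivalent is almost certainly too optimistic to extrapolate, both because the early estimates may have been too pessimistic and because Metaculus expects the estimates to stabilize.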

Has the increased cost-effectiveness come from an improvement in the top charities' programs, or simply from changes in estimates? I did not examine this in detail, but according to GiveWell's 2018 changelog, the improvements in Deworm the World occurred primarily due to a reduction in cost per child dewormed per year. Perhaps we should classify this more as an operational improvement than as learning, but it falls in the same general category.

What about the value of finding new top charities? According to GiveWell, its current recommended charities are probably more cost-effective than its 2011 top recommendation of VillageReach. Since 2014, GiveWell has not found any charities that it ranks as more cost-effective than Deworm the World, but we should expect some nontrivial probability that it finds one in the future.

Other cause areas have a much weaker knowledge base than global poverty. Even if top global poverty charities were getting less cost-effective over time due to limited learning, I would still expect us to be able to find interventions in animal welfare or existential risk that work substantially better than our current best ideas. These cause areas probably have a relatively high annual "learning rate", which we should subtract from the discount rate (possibly resulting in a negative discount).

Under plausible assumptions, some cause areas could have a learning rate on the order of magnitude of 10% (translating to a -10% discount), or could have a 10% rate of opportunities disappearing.

## Combined estimate

This section summarizes all the estimates given so far. I came up with these based on limited information, and they should not be taken as reliable. But this can give us a starting point for thinking about the discount rate.

| Category | Rate |
|---|---|
| extinction | 0.001% – 0.2% |
| superintelligent AI | 0.001% – 3% |
| economic collapse | ? |
| expropriation | 0% – 0.05% |
| institutional value drift | 0.5% |
| individual value drift | 10% |
| changes in opportunities | -10% – 10% |

Recall that the estimate for superintelligent AI does not indicate chance of developing AI, but the chance that AI is developed *and* money becomes useless as a result.

Adding these up gives an institutional discount rate of 0.5% – 2.3%, excluding the discount due to changes in opportunities. Introducing this extra discount dramatically widens the confidence interval.

My current best guess:

- Philanthropists who prioritize global poverty experience a slightly positive discount due to changes in opportunities, and probably expect a relatively low probability of extinction, suggesting an all-in discount rate of around 0.5% – 1%.
- Philanthropists who prioritize more neglected cause areas experience a substantially positive learning rate, and therefore a negative all-in discount rate. This suggests consumption should be postponed until the learning rate substantially diminishes, although in practice, there is no clear line between "consumption" and "doing research to learn more about how to do good."

# Breaking down the long-run discount rate

## Economic nullification

Again, let's consider three possible causes of economic nullification: extinction, superintelligent AI, and economic collapse.

### Extinction

If we use a moderately high estimate for the current probability of extinction (say, 0.2% per year), it seems implausible that this probability could remain at a similar level for thousands of years. A 0.2% annual extinction probability translates into a 1 in 500 million chance that humanity lasts longer than 10,000 years. Humanity has already survived for about 200,000 years, so on priors, this tiny probability seems extremely suspect.

Pamlin and Armstrong (2015)'s more modest estimate of 0.0013% translates to a more plausible 88% chance of surviving for 10,000 years, and a 27% chance of making it 100,000 years.
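These conversions are simple compounding arithmetic. A short sketch (annual rates taken from the text above) reproduces them:

```python
def survival_probability(annual_extinction_rate, years):
    """P(no extinction over `years` years at a constant annual rate)."""
    return (1 - annual_extinction_rate) ** years

# 0.2% per year over 10,000 years: roughly a 1 in 500 million chance of survival.
p = survival_probability(0.002, 10_000)
print(f"1 in {1 / p:,.0f}")

# 0.0013% per year (Pamlin and Armstrong's estimate):
print(round(survival_probability(0.000013, 10_000), 2))   # 0.88
print(round(survival_probability(0.000013, 100_000), 2))  # 0.27
```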

One of these three claims must be true:

- The annual probability of extinction is quite low, on the order of 0.001% per year or less.
- Currently, we have a relatively high probability of extinction, but if we survive through the current crucial period, then this probability will dramatically decrease.
- The current relatively high probability of extinction will maintain indefinitely. Therefore, humanity is highly likely to go extinct over an "evolutionary" timespan (10,000 to 100,000 years), and all but guaranteed not to survive (something like 1 in a googol chance) over a "geological" time scale (10+ million years).

In "Are we living at the most influential time in history?" [EA · GW] (2018), Will MacAskill offers some justification for (but does not strongly endorse) the first claim on this list. The second claim seems to represent the most common view among long-term-focused effective altruists.

If we accept the first or second claim, this implies existential risk has nearly zero impact on the long-run discount rate. The third claim allows us to use a nontrivial long-term discount due to existential risk. I find it the least plausible of the three—not because of any particularly good inside-view argument, but because it seems unlikely on priors.

### Superintelligent AI

With AGI, we can construct the same ternary choice that we did with extinction:

- We have a low annual probability of developing AGI.
- The probability is currently relatively high, but will decrease over time.
- The probability is high and will remain high in perpetuity.

Again, I find the third option the least plausible. Surely if we have not developed superintelligent AI after 1000 years, there must be some fundamental barrier preventing us from building it. In this case, I find the first option implausible as well. Based on what we know about AI, it seems the probability that we develop it in the near future must be high (for our purposes, a 0.1% annual probability qualifies as high). The Open Philanthropy Project agrees with this view, claiming "a nontrivial likelihood (at least 10% with moderate robustness, and at least 1% with high robustness) that transformative AI will be developed within the next 20 years."

If we accept one of the first two claims, then we should use a low long-run discount rate due to the possibility of developing superintelligent AI.

### Economic collapse

Unlike in the previous cases, I find it at least somewhat plausible that the probability of catastrophic economic collapse could remain high in perpetuity. Over the past several thousand years, many parts of the world have experienced periods of extreme turmoil where most investors lost all of their assets. Although investors today can more easily diversify globally across many assets, this increased globalization plausibly also increases the probability of a worldwide collapse.

Unlike extinction, and probably unlike the development of AGI, a global economic collapse could be a repeatable event. If civilization as we know it ends but humanity survives, we could slowly rebuild society and eventually re-establish an interconnected global economy. And if we can establish a global economy for a second time, it can probably also collapse for a second time. Perhaps civilization could experience 10,000-year long "mega cycles" of technological development, globalization, and collapse.

This is not to say I am *confident* that the future will look like this. I merely find it *somewhat plausible*.

Let's say we believe with 10% probability that the future will experience a catastrophic economic collapse on average once every 10,000 years. This translates into a 0.001% annual probability of economic collapse. This probably matters more than the long-run probability of extinction or AGI, but is still so small as to not be worth considering for our purposes.
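The arithmetic behind that figure, as a one-line sketch (values from the text):

```python
# 10% credence in a future where collapses recur on average once per
# 10,000 years, treated as an expected annual probability.
credence = 0.10
rate_if_collapse_prone = 1 / 10_000  # one collapse per 10,000 years on average
annual_probability = credence * rate_if_collapse_prone
print(f"{annual_probability:.3%}")  # 0.001%
```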

## Expropriation and value drift

Based on historical evidence, it appears that institutions' ability to preserve themselves or their values follows something like an exponential distribution: as we look back further in time, we see dramatically fewer institutions from that time that still exist today. Thus, it seems plausible that the rate of value drift could remain substantially greater than zero in the long run.

Expropriation/value drift might not follow an exponential curve—we know extremely little about this. An exponential distribution seems plausible on priors, but it also seems plausible that the rate could decrease over time as institutions learn more about how to preserve themselves. Similarly, organizations that avoid value drift will tend to gain power over time relative to those that don't. On this basis, we might expect the value drift rate to decline over time as value-stable institutions gain an increasing share of the global market.

## Changes in opportunities

In the long run, the learning rate must approach 0. There must be some best action to take, and we can never do better than that best action. Over time, we will gain increasing confidence in our ability to identify that best action. Either we eventually converge on the best action, or we hit some upper limit on how much it's possible to learn. Either way, the learning rate must approach 0.

We can also expect giving opportunities to get worse over time as the best opportunities become fully funded. The utility of donations might asymptote toward the utility of general consumption—that is, in the long run, you might not be able to do more good by donating money than you can by spending it on yourself. Or new opportunities might continue to emerge, and might even get better over time. It seems conceivable that they could continue getting better in perpetuity, although I'm not sure how that would work. But in any case, the available opportunities cannot get worse in perpetuity. Money might have less marginal utility in the future as people become better off, but the Ramsey model already accounts for this in the $\eta$ parameter—for example, $\eta = 1$ indicates logarithmic utility of money, which means exponentially growing people's wealth only linearly increases utility.

## Combined estimate

In summary:

- The outside view suggests a low long-run extinction rate.
- It's hard to say anything of substance about the long-run rate of economic collapse or expropriation/value drift.
- It seems the rate of changes in opportunities must approach 0.

It seems plausible that value drift is the largest factor in the long run, which perhaps suggests a 0.5% long-run discount rate if we assume 0.5% value drift. But this estimate seems much weaker than the (already-weak) approximation for the current discount rate.

# Can we change the discount rate?

So far, we have assumed we cannot change the discount rate. But the cause of existential risk reduction focuses on reducing the discount rate by decreasing the probability of extinction. Presumably we could also reduce the expropriation and value drift rates if we invested substantial effort into doing so.

## The significance of reducing value drift

Effective altruists invest substantial effort in reducing existential risk (although, arguably, society at large does not invest nearly enough). But we know almost nothing about how to reduce value drift. Some research has been done [EA · GW] on value drift among individuals in the effective altruism community, but it's highly preliminary, and I am not aware of any comparable research on institutional value drift.

Arguably, existential risk matters a lot more than value drift. Even in the absence of any philanthropic intervention, people generally try to make life better for themselves. If humanity does not go extinct, a philanthropist's values might eventually actualize, depending on their values and on the direction humanity takes.

Under most (but not all) plausible value systems and beliefs about the future direction of humanity, existential risk looks more important than value drift. The extent to which it looks more important depends on how much better one expects the future world to be (conditional on non-extinction) with philanthropic intervention than with its default trajectory.

A sampling of some beliefs that could affect how much one cares about value drift:

- If economic growth continues as it has but we do not see any transformative events (such as development of superintelligent AI), global poverty will probably disappear in the next few centuries, if not sooner.
- Even if humanity eradicates global poverty, we might continue disvaluing non-human animals' well-being and subjecting them to great unnecessary suffering. Philanthropic efforts in the near term could substantially alter this trajectory.
- Some people, particularly people interested in AI safety, believe that if we avoid extinction, we will almost certainly develop a friendly AI which will carry all sentient life into paradise. If that's true, we really only care about preventing extinction, and particularly about ensuring we don't make an unfriendly AI.
- It might be critically important to do a certain amount of AI safety research before AGI emerges, and this research might not happen without support from effective altruist donors.

Beliefs #1 and #3 imply relatively less concern about value drift (compared to extinction), while #2 and #4 imply relatively more.

Note that even if you expect good outcomes to be realized in the long run, you still care about how value drift impacts philanthropists' ability to do good in the next few decades or centuries.

I do not think it is obvious that reducing the probability of extinction does more good per dollar than reducing the value drift rate, which naively suggests the effective altruist community should invest relatively more into reducing value drift. But I find it plausible that, upon further analysis, it would become clear that existential risk matters much more.

Aside: I spent some time constructing an explicit quantitative model of the significance of value drift versus existential risk. I will not reproduce the model here, but it bore out the intuition that the ratio (importance of value drift):(importance of extinction risk) is basically proportional to the ratio (welfare of future worlds by default):(welfare of future worlds with philanthropic intervention), with some consideration given to the probabilities of extinction and value drift.

## Reducing risk by creating multiple funds

Unlike self-interested investors, philanthropists don't just care about how much money they have. They also care about the assets of other value-aligned people. This allows philanthropists to protect against certain risks in ways self-interested investors cannot.

To mitigate expropriation risk, different value-aligned philanthropists can invest their assets in different countries. To some extent, this already happens automatically: if Alice lives in France and Bob lives in Australia, and they share the same values, they already naturally split their assets between the two countries. If, say, France undergoes a communist revolution and nationalizes all citizens' assets, Bob still has his portfolio, so Alice and Bob have only lost half the money they care about. If enough value-aligned philanthropists exist across many countries, total expropriation can probably only occur in the case of an economic nullification-like event, such as the formation of a one-world communist government.

The same applies to value drift. If a set of philanthropic investors share values but one member of the group becomes more selfish over time, only a small portion of the collective altruistic portfolio has been lost. It seems to me that the probability of value drift is mostly independent across individuals, although I can think of some exceptions (e.g., if ties weaken within the effective altruism community, this could increase the overall rate of value drift). Therefore, the probability of total value drift rapidly decreases as the number of philanthropists increases. But there's still the possibility that the EA community as a whole could experience value drift.
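The benefit of spreading funds across actors can be sketched numerically. Assuming, purely for illustration, that each of $n$ funds drifts independently with the same probability, the chance of losing *everything* to drift falls geometrically with $n$:

```python
def p_total_drift(per_fund_drift_probability, n_funds):
    """Probability that all n independent funds experience value drift."""
    return per_fund_drift_probability ** n_funds

# With a 10% per-fund drift probability (the individual rate cited earlier),
# total drift becomes vanishingly unlikely as funds are added:
for n in (1, 3, 10):
    print(n, p_total_drift(0.10, n))
```

This is exactly why correlated drift (e.g., weakening ties across the whole EA community) matters so much more than any single fund's drift: it breaks the independence assumption this calculation relies on.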

We should consider the special case where asset ownership is fat tailed—that is, a small number of altruists control almost all the wealth. In practice, wealth does follow a fat-tailed distribution, with the Open Philanthropy Project controlling a majority of (explicitly) effective altruist assets, and large donors constituting a much bigger fraction of the pie than small donors^{[15]}. Asset concentration substantially increases the damage caused by expropriation or value drift. The larger philanthropists can mitigate this by giving their money to smaller actors, effectively diversifying against value drift/expropriation risk. Although gifts of this sort are technically feasible and do occur in small portions, large philanthropists rarely (if ever) distribute the majority of their assets to other value-aligned actors for the purpose of reducing concentration risk. I would guess they do not distribute their funds primarily because (1) large philanthropists do not trust others to persistently share their values, (2) they do not trust others to do a good job identifying the best giving opportunities, and (3) they do not take concentration risk particularly seriously. At the least, large philanthropists should take concentration risk more seriously, although I do not know what to do about the other two points.

If large philanthropists do want to spread out their money, it makes sense that they should take care to ensure they only give it to competent, value-aligned associates.

Alternatively, institutions can diversify by spinning off separate organizations. This avoids the competence and value-alignment problems because they can form the new organizations with existing staff members, but it introduces a new set of complications.

Observe that even when assets are distributed across multiple funds, expropriation and value drift still reduce the expected rate of return on investments in a way that looking at historical market returns does not account for. This is a good trade—decreasing the discount rate and decreasing the investment rate by the same amount probably increases utility in most situations—but it isn't as good as eliminating the risks entirely.

Relatedly, wealthy individuals often create foundations to manage their donations, which (among other benefits) reduces value drift by providing checks on donation decisions (by involving paid staff in the decisions, or by psychologically reinforcing commitment to altruistic behavior). Converting wealthy-individual money into foundation money probably works extremely well at decreasing the value drift rate, and fortunately, it's already common practice.

## What about individual value drift?

As we saw, the existing (limited) evidence suggests about a 10% value drift rate among individual effective altruists. When individuals stop donating, this does not constitute a complete loss of capital because other value-aligned altruists can continue to provide funding; but it does hurt the effective investment rate of return.

Imagine if philanthropists could invest in an asset with 10 percentage points higher return than the market (at the same level of risk). That would represent a *phenomenal* opportunity. But that's exactly what we can get by reducing the value drift rate. We can't get the individual value drift rate all the way down to 0%, but it's so high right now that we could probably find a lot of impactful ways to reduce it. Reducing this rate from 10% to 5% might require less effort than reducing the probability of extinction from (say) 0.2% to 0.19%. These numbers are not based on any meaningful analysis, but they seem plausible given the extreme neglectedness of this cause area.
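To see why drift reduction resembles a return premium, consider a minimal compounding sketch. The 5% market return and 30-year horizon are illustrative numbers, not taken from the essay:

```python
def effective_growth(market_return, drift_rate, years):
    """Altruistic capital multiplier when a fraction `drift_rate` defects each year."""
    return ((1 + market_return) * (1 - drift_rate)) ** years

# With 10% annual drift, altruistic capital shrinks even at a 5% market return:
print(round(effective_growth(0.05, 0.10, 30), 2))  # 0.18
# Halving the drift rate mostly preserves capital over the same horizon:
print(round(effective_growth(0.05, 0.05, 30), 2))  # 0.93
```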

Marisa Jurczyk offers some suggestions [EA · GW] on future research that could help reduce individual value drift.

# Significance of mis-estimating the discount rate

As Weitzman (2001)^{[16]} wrote, "the choice of an appropriate discount rate is one of the most critical problems in all of economics." Changing the estimated discount rate substantially changes the implied optimal behavior.

Some might argue that we simply cannot estimate the discount rate, and it remains fundamentally unknowable. While I agree that we have no idea what discount rate to use, I do not believe we should equivocate [LW · GW] between (1) the radically uncertain state of knowledge if we don't think about the discount rate at all, (2) the highly uncertain state of knowledge if we think about it a little bit, and (3) what our state of knowledge could be if we invested much more in estimating the discount rate. Philanthropists' behavior necessarily entails some (implicit) discount rate; it is better to use a poor estimate than no estimate at all.

Aird (2020), "Database of existential risk estimates" [EA · GW], argues for the importance of better estimating the probability of extinction. Our estimates for value drift and changes in opportunities appear even rougher than for extinction, so working on improving these might be easier and therefore more cost-effective.

Some economic literature exists on estimating the discount rate (such as Weitzman (2001)^{[16:1]}, Nordhaus (2007)^{[17]}, and Stern (2007)^{[18]}), but philanthropists do not always discount for the same reasons as self-interested actors, so for our purposes, these estimates provide limited value.

How much should we value marginal research on estimating the philanthropic discount rate?

## Extended Ramsey model with estimated discount rate

Intuitively, it seems that mis-estimating the discount rate could result in substantially wrong decisions about how much to spend vs. save, and this could matter a lot. Some quantitative analysis with a simple model supports this intuition.

In the introduction, I presented the Ramsey model as a simple theoretical approach for determining how to spend resources over time. Let's return to this model. Additionally, let's assume we experience logarithmic utility of consumption, because doing so produces the simplest possible formula for the consumption schedule.

An actor maximizes utility by following this consumption schedule^{[3:2]}:

$$c(t) = \delta \cdot W(t), \qquad W(t) = W(0)\, e^{(r - \delta) t}$$

$\delta$ gives the proportion of assets to be consumed each period^{[19]}, and $W(t)$ tells us the size of the portfolio at time $t$ (recall that $r$ is the investment rate of return). According to the chosen set of assumptions, the optimal consumption rate exactly equals the discount rate.
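This claim can be checked numerically. The sketch below discretizes the integral of discounted log utility and confirms that, among a few candidate consumption proportions, the one equal to the discount rate scores highest. All parameter values (r = 5%, δ = 1%, the candidate list, the step size) are illustrative choices, not taken from the essay:

```python
import math

def discounted_log_utility(consume_fraction, r=0.05, delta=0.01,
                           years=2000, dt=0.1):
    """Discretized integral of e^(-delta*t) * log(c(t)) dt,
    where c(t) = consume_fraction * W(t) and the rest is invested at rate r."""
    w, total = 1.0, 0.0
    for step in range(int(years / dt)):
        t = step * dt
        c = consume_fraction * w
        total += math.exp(-delta * t) * math.log(c) * dt
        w *= 1 + (r - consume_fraction) * dt  # invest the remainder
    return total

candidates = [0.005, 0.008, 0.01, 0.012, 0.02]
best = max(candidates, key=discounted_log_utility)
print(best)  # 0.01 — the candidate equal to delta
```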

Suppose a philanthropist attempts to follow this optimal consumption schedule. Suppose they estimate the discount rate as $\hat\delta$, which might differ from the true $\delta$. In that case, the philanthropist's total long-run utility is given by

$$U(\hat\delta) = \int_0^\infty e^{-\delta t} \ln\!\big(\hat\delta\, W(t)\big)\, dt = \frac{\ln \hat\delta}{\delta} + \frac{r - \hat\delta}{\delta^2}$$

(taking $W(0) = 1$, so that $W(t) = e^{(r - \hat\delta) t}$).

To see how quickly utility increases as we move $\hat\delta$ closer to $\delta$, we should look at the derivative of utility with respect to $\hat\delta$:

$$\frac{dU}{d\hat\delta} = \frac{1}{\delta \hat\delta} - \frac{1}{\delta^2} = \frac{\delta - \hat\delta}{\delta^2 \hat\delta}$$

What does this mean, exactly?

Suppose we have a choice between (1) moving $\hat\delta$ closer to $\delta$ or (2) improving how effectively we use money by changing our utility function from $\ln(c)$ to $\ln(b c)$ for some increasing "impact factor" $b$. When should we prefer (1) over (2)?

We should prefer improving $\hat\delta$ whenever utility increases faster by decreasing $|\hat\delta - \delta|$ than by increasing $b$, that is, whenever $|dU/d\hat\delta| > |dU/db|$ for some particular values of $\delta$, $\hat\delta$, and $b$ (using absolute values because we only care about the magnitude of change, not the direction).

The formula for $dU/d\hat\delta$ is hard to comprehend intuitively. But if we plug in some values for $\delta$, $\hat\delta$, and $b$, we see that $|dU/d\hat\delta| > |dU/db|$ for most reasonable inputs. For example, $\delta = 0.005, \hat\delta = 0.008$ (a mis-estimate of 0.3 percentage points) gives $|dU/d\hat\delta| = 15{,}000$, compared with $dU/db = 1/(b\delta) = 200$ at $b = 1$. A closer estimate of $\hat\delta = 0.006$ gives $|dU/d\hat\delta| \approx 6{,}700$. Therefore, according to this model, improving $\hat\delta$ looks highly effective.
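As a sanity check, the two derivatives can be evaluated directly with illustrative values ($\delta = 0.5\%$, $\hat\delta = 0.8\%$, $b = 1$):

```python
def dU_ddelta_hat(delta, delta_hat):
    """d/d(delta_hat) of U = ln(delta_hat)/delta + (r - delta_hat)/delta^2."""
    return 1 / (delta * delta_hat) - 1 / delta**2

def dU_db(delta, b):
    """d/db of the ln(b)/delta term contributed by impact factor b."""
    return 1 / (b * delta)

print(round(abs(dU_ddelta_hat(0.005, 0.008))))  # 15000
print(round(dU_db(0.005, 1)))                   # 200
```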

We also care about the rate at which we can improve $\hat\delta$ and $b$. Presumably, moving $\hat\delta$ closer to $\delta$ becomes something like exponentially more difficult over time—we could model this process as $|\hat\delta - \delta| = e^{-kx}$, where $x$ is effort spent researching the correct discount rate and $k$ is some constant. Then we need a function for the difficulty of increasing the impact factor $b$, perhaps $b(x) = \ln(1 + x)$.

Ultimately, we would need a much more complicated formulation to somewhat-accurately model our ability to improve the discount rate, and we cannot draw strong conclusions from the basic Ramsey model. But in our simple model, $|dU/d\hat\delta|$ is much larger than $|dU/db|$ for reasonable parameters, which does at least hint that improving our estimate of the discount rate—and adjusting our spending schedules accordingly—could be a highly effective way of increasing utility, especially given the weakness of our current estimates, and how much low-hanging fruit probably still exists. This preliminary result seems to justify spending a substantially larger fraction of altruistic resources on estimating $\delta$.

## A plan for a (slightly) more realistic model

The model in the previous section assumes that a philanthropist can choose between saving and consumption at each moment in time, and can also spend out of an entirely separate budget to improve $\hat\delta$. This makes the optimization problem easier, but doesn't really make sense.

Under a more realistic model, the philanthropist can choose between three options: (1) saving, (2) consumption, and (3) improving $\hat\delta$. That is, research on estimating the discount rate comes out of the same budget as general consumption.

Under this model, the philanthropist wishes to maximize

$$\int_0^\infty e^{-\delta t}\, u(c(t))\, dt$$

with the constraint that $c(t)$ cannot be a function of $\delta$; it can only be a function of $\hat\delta$. Additionally, we can define a function $\hat\delta(Y(t))$ giving the best estimate of $\delta$ as a function of $Y(t)$, where $Y(t)$ gives cumulative spending on determining $\delta$ up to time $t$.

Solving this problem requires stronger calculus skills than I possess, so I will leave it as an open question for future research.

Some other useful model extensions:

- Allow the philanthropist to invest in risky assets. As a starting point, see Levhari and Srinivasan (1969), Optimal Savings Under Uncertainty.
- Make the discount rate a function of resources spent on reducing it (such as via x-risk research). That is, $\delta(t) = f(x(t))$, where $x(t)$ is cumulative spending on reducing the discount rate.

## Weitzman-Gollier puzzle

According to Gollier and Weitzman (2010), in the face of uncertainty about the discount rate, "[t]he long run discount rate declines over time toward its lowest possible value." There exists some disagreement in the economic literature as to whether the discount rate should trend toward its lowest or its highest possible value. This disagreement is known as the Weitzman-Gollier puzzle (WGP). I have not studied this disagreement well enough to have an informed opinion, but Greaves (2017)^{[4:2]} claims "there is a widespread consensus" that "something like" the lowest possible long-run discount rate should be used.

How much we care about this puzzle for the purposes of this essay depends on how we interpret long-term discount rates. If current consumption is only a function of the current discount rate, then WGP doesn't matter. If instead we believe that the long-run rate affects how much we should consume today, then Weitzman-Gollier becomes relevant. I already argued that we should expect the discount rate to decline over time (e.g., as extinction risk decreases and institutions become more robust), so Weitzman-Gollier provides an additional argument in favor of this policy.

## Some arguments against prioritizing improving the discount rate estimate

**Argument from long-term convergence:** Over a sufficiently long time horizon, it seems our estimate will surely converge on the true discount rate, even if we don't invest much in figuring it out. At that time, and in perpetuity after that, we can follow the optimal spending rate. If we prioritize figuring out $\delta$ now, that only helps us from now until when we would have solved for $\delta$ anyway. (But on the other hand, improving our estimate in the short term could still increase utility by a lot.)

**Argument from intuitive meaningfulness:** Improving our estimate of the discount rate feels somehow less *meaningful* than actively reducing the discount rate (e.g., by reducing risk of extinction). In some sense, by improving our estimate, we aren't really *doing* anything. Obviously we do increase expected utility by better spreading out our spending over time, but this doesn't feel like the same sort of benefit as improving the effectiveness of our spending, or expanding the community to increase the pool of donations. Even if the Ramsey model supports improving $\hat\delta$ as possibly the most effective intervention, this model entails a lot of assumptions, so we should pay attention to intuitions that contradict the model.

**Argument from model uncertainty:** Causes like global poverty prevention look good across many models and even many value systems (although we don't really know if global poverty prevention is even net positive). Under the Ramsey model, improving $\hat\delta$ still looks good across a lot of value systems—it benefits you to improve the spending schedule no matter what utility function you use—but we don't know if it holds up in non-Ramsey-like models. Furthermore, it's a new idea that has not been subjected to much scrutiny.

**Argument from market efficiency:** According to the efficient market hypothesis (EMH), the correct discount rate should be embedded in market prices. Market forces don't always apply to philanthropic actors, but it seems plausible that something like a weaker version of EMH might still hold. Thus, we might expect the "philanthropic market" to basically correctly determine the discount rate, even if no individual actor has high confidence in their particular estimate. On the other hand, in practice, the philanthropic market appears far less efficient than the for-profit sector (or else the effective altruist approach would be much more popular!).

## Applying the importance/tractability/neglectedness framework

Let's qualitatively consider improving the discount rate and see how it fits in the importance/tractability/neglectedness framework.

### Importance

If we use philanthropic resources slightly too slowly, we lose out on the benefits of this marginal consumption, and continue losing out every year in perpetuity (or at least until we correct our estimate of the discount rate).

If we use resources too quickly, this eats into potential investment returns, decreasing the size of our future portfolio and hamstringing philanthropists' ability to do good in the future.

Under the Ramsey model, slightly refining the discount rate estimate greatly increases utility. But the previous section does provide some arguments against the importance of a correct discount rate.

Improving our estimate of the discount rate only matters in situations where we provide all the funding for a cause, or where we can coordinate with all (or most) other funders. If we only control a small portion of funds and other funders do not follow optimal consumption, then we simply want to bring overall spending closer to the optimal rate, which requires us to consume either all or none of our resources. In this situation, we do not need to exactly estimate the discount rate—we only need to know whether other funders use a discount that's too low or too high. But we do care about the exact rate in smaller causes (probably including existential risk, and possibly farm animal welfare) where we can coordinate with other donors.

### Tractability

Estimating the discount rate appears much easier than, say, ending global poverty. I can easily come up with several ways we could improve our estimate:

- Better surveys or studies on the probability of extinction, or better attempts to synthesize an estimate out of existing surveys.
- Research on historical movements to learn more about why they failed or succeeded.
- Theoretical research on how philanthropists should consume as a function of the discount rate.
- Theoretical research on how to break down the discount rate.

This suggests we could substantially improve our estimate with relatively little effort.

### Neglectedness

Some academic literature exists on estimating the discount rate, although much of this literature doesn't entirely apply to effective altruists. Within EA, I am only aware of one prior attempt to estimate the discount rate (from Trammell^{[5:2]}), and this was only given as a rough guideline. Even within academia, one could fairly describe this area of research as neglected; within EA, it has barely even been mentioned. The sheer neglectedness of this issue suggests that even a tiny amount of effort could substantially improve our estimate.

All things considered, it seems likely to me that the effective altruism community substantially under-invests in trying to determine the correct discount rate, but the simple extension to the Ramsey model perhaps overstates the case.

# Conclusion

In this essay, I have reviewed a number of philanthropic opportunities that, according to the simplistic Ramsey model, could substantially improve the world. Some of these are already widely discussed in the EA community, others receive a little attention, and some are barely known at all. These opportunities include:

- Reducing existential risk.
- Reducing individual value drift.
- Improving the ability of individuals to delegate their income to value-stable institutions.
- Making expropriation and value drift less threatening by spreading altruistic funds more evenly across actors and countries.
- Reducing the institutional value drift/expropriation rate.
- More accurately estimating the discount rate in order to know how best to use resources over time.

Before writing this essay, I created some basic models of the cost-effectiveness of each of these. The models are sufficiently complicated, and provide sufficiently little explanatory value, that I will not present them here. Suffice it to say the models suggest that #6—improving the estimate of the discount rate—does the most good per dollar spent. Obviously this heavily depends on model assumptions (and my models made a lot of assumptions). The takeaway is that, based on what we currently know, any of these six opportunities could plausibly represent the best effective altruist cause right now.

Let's briefly address each of these opportunities.

**Existential risk** already receives much attention in the EA community, so I have little to add.

A few EAs have written about **individual value drift**, most notably Marisa Jurczyk [EA · GW], who also provided some qualitative suggestions for how to reduce value drift. But, as Jurczyk noted, "[t]he study of EAs’ experiences with value drift is rather neglected, so further research is likely to be highly impactful and beneficial for the community."

If individuals want to **delegate their donations to institutions**, they run into the problem that most of their donations come from future income, and they cannot move this income from the future to the present. Donors have a few options for "leveraging" donations, but none of them look particularly feasible. If we identified better ways to help individuals delegate their future donations, that could provide a lot of value.

To my knowledge, the idea of **spreading altruistic funds** has never been meaningfully discussed. It poses substantial challenges in practice, and I can see why institutions generally don't want to do it. But I do think this idea has potential if we can figure out how to make it work.

Many types of institutions, not just effective altruists, should care about **reducing the institutional value drift/expropriation rate**. It's possible that there already exists literature on this subject, although I'm not aware of any. More research in this area could prove highly valuable.

I discussed **improving our estimate of the discount rate** in the previous section. According to my preliminary investigation, this could be a highly impactful area of research.

This table provides my (extremely) rough guesses as to the importance, tractability, and neglectedness of these cause areas relative to each other. When I say, for example, that I believe existential risk has low neglectedness, that's relative to the other causes on this list, not in general. (Existential risk is highly neglected compared to, say, developed-world education.)

|  | Importance | Tractability | Neglectedness |
| --- | --- | --- | --- |
| existential risk | high | low | low |
| individual value drift | low | medium | medium |
| delegating individuals' donations | low | medium | medium |
| spreading altruistic funds | medium | high | high |
| institutional value drift/expropriation | medium | medium | medium |
| estimating discount rate | medium | high | high |

(While revising this essay, I basically completely re-did this table twice. My opinion might completely change again by next week. So don't treat these as well-informed guesses.)

Finally, questions that merit future investigation:

- What implications do we get if we change various model assumptions?
- How does the discount rate for effective altruists compare to the more traditional social discount rate, and what is the significance of this comparison? What do we get if we attempt to derive our discount rate from the social discount rate?
- How should we derive optimal consumption from the current and long-term discount rates?
- What coefficient of relative risk aversion ($\eta$) and investment rate of return ($r$) should be used? Should we expect them to change in the long run?
- Why do effective altruist organizations report such high discount rates?

Literature already exists on some of these, e.g., Hakansson (1970)^{[20]} on modifying the Ramsey model to allow for risky investments. Future work could review some of this literature and draw implications for effective altruists' behavior.

Thanks to Mindy McTeigue and Philip Trammell for providing feedback on this essay.

# Appendix: Proof that spending should decrease as the discount rate decreases

In the basic Ramsey model, the discount factor (call it $D(t)$) is given by $D(t) = e^{-\delta t}$. If we generalize the discount factor and allow it to obey any function, we can rewrite total utility as

$$\int_0^\infty D(t) \, u(c(t)) \, dt$$

Let $\delta(t)$ be the discount rate, where $\delta(t) = -D'(t)/D(t)$. (Observe that when $D(t) = e^{-\delta t}$, $\delta(t) = \delta$.) We want the discount rate to decline with time. Many possible functions could give a declining discount rate, but for the sake of illustration, let's use $\delta(t) = \delta + \beta/t$. With this discount function, the discount rate gradually decreases over time to a minimum of $\delta$. $\beta$ is a scale parameter that determines how rapidly the discount rate decreases. This corresponds to discount factor $D(t) = e^{-\delta t} t^{-\beta}$. This is similar to the "Gamma discount" used by Weitzman (2001)^{[16:2]}^{[21]}.
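As a sanity check, here is a minimal Python sketch (with illustrative parameter values, not values from the essay) confirming that the discount factor $D(t) = e^{-\delta t} t^{-\beta}$ implies the declining discount rate $\delta(t) = \delta + \beta/t$:

```python
import math

# Illustrative parameters: long-run rate delta and scale parameter beta.
DELTA, BETA = 0.01, 0.5

def discount_factor(t):
    """Declining-rate discount factor D(t) = exp(-delta * t) * t^(-beta)."""
    return math.exp(-DELTA * t) * t ** (-BETA)

def implied_discount_rate(t, h=1e-6):
    """The discount rate is -D'(t)/D(t); estimate D'(t) by central difference."""
    dD = (discount_factor(t + h) - discount_factor(t - h)) / (2 * h)
    return -dD / discount_factor(t)

for t in [1.0, 10.0, 100.0]:
    # The numerical rate matches delta + beta/t and declines toward DELTA.
    print(t, implied_discount_rate(t), DELTA + BETA / t)
```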

Under this discount rate, the optimal consumption rate declines over time. We can prove this by following the same proof steps as Trammell^{[5:3]}, but using a different discount factor.

Trammell defines y(t) as "the resources allocated at time 0 for investment until, followed by spending at, t." He observes that utility is maximized when the derivative of discounted utility with respect to y(t) equals some constant k, and then solves for y(t). If we solve for y(t) with a generalized time-dependent discount factor, we get

$$y(t) = \left( \frac{D(t)}{k} \right)^{1/\eta} e^{(1-\eta) r t / \eta}$$

Observing that $\int_0^\infty y(t) \, dt$ must equal total initial capital (normalized here to 1) allows us to solve for $k$. Plugging in $D(t) = e^{-\delta t} t^{-\beta}$, solving the integral, and rearranging gives

$$k = \Gamma\!\left(1 - \frac{\beta}{\eta}\right)^{\eta} \left( \frac{\delta - (1-\eta) r}{\eta} \right)^{\beta - \eta}$$

where $\Gamma$ is the Gamma function.

Plugging this into the formula for y(t) gives

$$y(t) = \frac{e^{-A t} \, t^{-\beta/\eta}}{\Gamma(1 - \beta/\eta) \, A^{\beta/\eta - 1}}, \qquad \text{where } A = \frac{\delta - (1-\eta) r}{\eta}$$

Observe that $c(t) = e^{rt} y(t)$. Therefore, $c(t)$ is proportional to $e^{(r - \delta) t / \eta} \, t^{-\beta/\eta}$.

Let $c_v(t)$ be optimal consumption according to the variable-discount model, and similarly with $c_f(t)$ for the fixed-discount model. Recall that $c_f(t)$ is proportional to $e^{(r - \delta) t / \eta}$, so $c_v(t) \propto c_f(t) \, t^{-\beta/\eta}$. If $\beta > 0$, then $t^{-\beta/\eta}$ is decreasing in $t$, so $c_v(t)$ grows more slowly than $c_f(t)$. The fixed-discount case has a constant consumption rate, so the variable-discount case must have a decreasing consumption rate.
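The declining consumption rate can also be checked numerically. The following sketch (parameter values are illustrative; it assumes the allocation takes the form $y(t) \propto e^{-At} t^{-\beta/\eta}$) computes spending as a fraction of remaining wealth at several times and confirms it decreases:

```python
import math

# Illustrative parameters (not from the essay); eta > beta so y(t) is integrable at 0.
delta, beta, r, eta = 0.01, 0.5, 0.05, 1.5
A = (delta - (1 - eta) * r) / eta  # decay rate in y(t) = C * exp(-A*t) * t^(-beta/eta)

# Midpoint grid on (0, 200]; the t^(-beta/eta) singularity at 0 is integrable.
dt = 0.001
ts = [dt * (i + 0.5) for i in range(int(200 / dt))]
y = [math.exp(-A * t) * t ** (-beta / eta) for t in ts]
total = sum(y) * dt
y = [v / total for v in y]  # normalize so allocations sum to initial wealth of 1

def consumption_rate(t_target):
    """Spending at t_target as a fraction of remaining wealth.

    Both spending and remaining wealth grow by the same factor e^(r*t),
    which cancels, so we can work with the time-0 allocations directly.
    """
    i = int(t_target / dt)
    spent = sum(y[:i]) * dt
    return y[i] / (1 - spent)

rates = [consumption_rate(t) for t in (1, 5, 20, 50)]
print(rates)  # strictly decreasing, approaching A from above
```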

Some brief observations about this variable-discount model:

- When $\beta = 0$, it behaves identically to the fixed-discount case with discount rate $\delta$.
- Like the fixed-discount model, when $\delta \leq (1 - \eta) r$, the model suggests we should save indefinitely and never consume. This condition does not depend on $t$—that is, this model will never recommend consuming for a while and then ceasing consumption once the discount rate drops below a certain level.
- Optimal consumption at time 0 is not defined because $D(t) = e^{-\delta t} t^{-\beta} \to \infty$ as $t \to 0$.
- Knowing optimal consumption does not tell us the optimal consumption *rate*. I do not believe the optimal consumption rate has a closed-form solution.
- The optimal consumption schedule depends on what one considers the "start time", and one's beliefs about optimal consumption can be inconsistent across time. Loewenstein and Prelec (1992)^{[22]} discuss this and other related issues. However, this problem does not seriously affect the model as I have portrayed it^{[23]}.

# Notes

The Ramsey model also depends on two other parameters: the interest rate and the elasticity of marginal utility of consumption. Those parameters are beyond the scope of this essay. ↩︎

I won't go into detail, but we have good theoretical reasons to expect most actors to spend impatiently, so for most causes, we plausibly want to invest all our money because other actors already over-spend according to our values. See Trammell^{[5:4]} for more. ↩︎

Ramsey (1928). A Mathematical Theory of Saving. ↩︎ ↩︎ ↩︎

Greaves (2017). Discounting for public policy: A survey. ↩︎ ↩︎ ↩︎

Trammell (2020). Discounting for Patient Philanthropists. Working paper (unpublished). Accessed 2020-06-17. ↩︎ ↩︎ ↩︎ ↩︎ ↩︎

See Mullins (2018), Retrospective Analysis of Long-Term Forecasts. This report found that "[a]ll forecast methodologies provide more accurate predictions than uninformed guesses." ↩︎

In fact, if we do prioritize reducing existential risk, the model as presented in this essay does not work, because the discount rate due to extinction is no longer a constant. ↩︎

The report gave point probability estimates for all causes other than AI. But for AI, it gave a probability range, because "Artificial Intelligence is the global risk where least is known" (p. 164). ↩︎

I calculated these summary statistics without regard to the quality of the individual predictions. Two of the individual predictions provided lower bounds, not point predictions, but I treated them as point predictions anyway. ↩︎

Note that the provided hyperlink goes to a working version of the paper, because as far as I can tell, the final paper is not available for free online. ↩︎

Some people distinguish between superintelligent AI and AGI, where the latter merely has human-level intelligence, not superhuman-level. For simplicity, I treat the two terms as interchangeable. ↩︎

Müller & Bostrom (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. ↩︎

Sandberg (n.d.). Everything is transitory, for sufficiently large values of "transitory." ↩︎

Opportunities getting worse with increased spending is accounted for by the concavity of the utility function. But it might make sense to only include EA spending in the utility function, and treat other parties' spending as a separate parameter. ↩︎

Wealth in general is fat-tailed, but it appears even more fat-tailed in EA, where the single largest donor controls more than half the wealth. As of this writing, the richest person in the world controls "only" 0.03% of global wealth ($113 billion out of $361 trillion). ↩︎

Weitzman (2001). Gamma Discounting. ↩︎ ↩︎ ↩︎

Nordhaus (2007). The Challenge of Global Warming: Economic Models and Environmental Policy. ↩︎

Stern Review (2007). The Economics of Climate Change. ↩︎

Technically this is a continuous model so there are no discrete periods, but you know what I mean. ↩︎

Hakansson (1970). Optimal Investment and Consumption Strategies Under Risk for a Class of Utility Functions. ↩︎

A proper discount factor should represent a probability distribution, which means it should have D(0) = 1 and should integrate to 1; but these details don't matter for the purposes of this proof. ↩︎

Loewenstein and Prelec (1992). Anomalies in Intertemporal Choice: Evidence and an Interpretation. ↩︎

The traditional problem of hyperbolic discounting is that it causes one's preferences to change over time, even if no information changes. For example, given the choice between receiving $100 in six months' time and $120 in seven months, people tend to choose the latter. But if you wait six months and then ask them if they'd rather receive $100 now or $120 in a month, they generally choose the former, even though fundamentally this is the exact same choice.

The model under discussion in this essay does not suffer from this problem. In traditional hyperbolic discounting, discount rates decline as a function of their *distance from the present*. But in this model, discount rates decline as a result of changes in *facts about reality, independent of the time of consideration*. That is, although discount rates decrease hyperbolically, actors at different points in time agree on the value of the discount rate at any particular time, because that discount rate is a function of the extinction/expropriation/value drift risk, not of pure time preference. ↩︎

## 20 comments

Comments sorted by top scores.

## comment by MichaelA · 2020-08-29T17:48:30.313Z · EA(p) · GW(p)

(It's possible it'd be worth updating the sections on value drift in light of the estimates Ben Todd collects and makes in this new post [EA · GW]. Or maybe just adding a link. Or maybe this comment suffices.)

## comment by Larks · 2020-07-06T21:15:27.634Z · EA(p) · GW(p)

Great post, thanks very much for writing.

Such events do not existentially threaten one's financial position, so they should not be considered as part of the expropriation rate for our purposes.

Could you give some sense for why you think this is the case? Naively I would have thought that a double chance of getting half your assets expropriated would be approximately as bad as losing all of them. There will be diminishing marginal utility, but surely not enough to totally neglect this issue.

According to Sandberg (n.d.)[13] [EA · GW], nations have a 0.5% annual probability of ceasing to exist. Most institutions don't last as long as nations, but an institution that's designed to be long-lasting might outlast its sovereign country. So perhaps we could infer an institutional failure rate of somewhere around 0.5%.

This seems like an upper bound for what we care about. Many countries and institutions that have existed for centuries have done so at the cost of wholesale change in their values. The 21st century catholic church promotes quite different things than it did in the 11th century, and the US federal government of 2020 doesn't have that much in common with the articles of confederation.

Similarly, organizations that avoid value drift will tend to gain power over time relative to those that don't.

I'm not sure this is true in the sense you need it to be. Consider evolution - we haven't seen species that have low rates of change (like sharks) come to dominate the world. They have gained power relative to proto-mammals (as the latter no longer exist) but have lost power relative to the descendants of those proto-mammals. Similarly, a human organisation that resisted memetic pressure and remained true to its values will find itself competing with other organisations that do not have to pay the value-integrity costs, despite outlasting its rivals of yesteryear.

Replies from: MichaelDickens## ↑ comment by MichaelDickens · 2020-07-07T00:01:26.424Z · EA(p) · GW(p)

Naively I would have thought that a double chance of getting half your assets expropriated would be approximately as bad as losing all of them.

Diminishing marginal utility means these two events are pretty different. According to the standard assumption of constant relative risk aversion, losing all your assets produces -infinity utility. I don't think this is a realistic assumption, but it's required to make the optimal consumption problem have an analytic solution. I've done some rough numeric analysis where the utility function is bounded below at 0 instead of at -infinity, and based on what I've seen, it generally recommends about the same consumption schedule. (I only did a super preliminary analysis, so I'm not confident about this.)

Similarly, organizations that avoid value drift will tend to gain power over time relative to those that don't.

Perhaps it would be more accurate to say that an organization that avoids value drift and also consumes its resources slowly (more slowly than `r - g`) will gain resources over time.

## ↑ comment by MichaelA · 2020-07-08T00:35:00.044Z · EA(p) · GW(p)

Perhaps it would be more accurate to say that an organization that avoids value drift and also consumes its resources slowly (more slowly than `r - g`) will gain resources over time.

To check I'm understanding, is the key mechanism here the idea that they can experience compounding returns that are greater than overall economic growth, and therefore come to control a larger portion of the world's resources over time?

Replies from: MichaelDickens## ↑ comment by MichaelDickens · 2020-07-08T22:03:19.533Z · EA(p) · GW(p)

That is correct.

## comment by MichaelA · 2020-07-06T05:45:32.858Z · EA(p) · GW(p)

**Miscellaneous thoughts and questions**

1.

First, I should note that it doesn't really make sense to model the rate of changes in opportunities as part of the discount rate. Future utility doesn't become less valuable due to changes in opportunities; rather, money becomes less (or more) effective at producing utility.

I agree with the latter sentence. But isn't basically the same thing true for the other factors you discuss (everything except pure time preference)? It seems like all of those factors are about how effectively we can turn money into utility, rather than about the value of future utility. And is that really a reason that it doesn't make sense to include those factors in the "discount rate" (as opposed to the "pure time discounting rate")?

As you write:

But even if we do not admit any pure time preference, we may still discount the value of future resources for four core reasons:

[...]

Or perhaps, given the text that follows the "First, I should note" passage, you really meant to be talking about something like how changes in opportunities may often be caused by donations themselves, rather than something that exogenously happens over time?

2.

Over a sufficiently long time horizon, it seems our estimate will surely converge on the true discount rate, even if we don't invest much in figuring it out.

Could you explain why you say this? Is it a generalised notion that humanity will converge on true beliefs about all things, if given enough time? (If so, I find it hard to see why we should be confident of that, as it seems there could also be stasis or more Darwinian dynamics.) Or is there some specific reason to suspect convergence on the truth regarding discount rates *in particular*?

3.

Arguably, existential risk matters a lot more than value drift. Even in the absence of any philanthropic intervention, people generally try to make life better for themselves. If humanity does not go extinct, a philanthropist's values might eventually actualize, depending on their values and on the direction humanity takes. Under most (but not all) plausible value systems and beliefs about the future direction of humanity, existential risk looks more important than value drift. The extent to which it looks more important depends on how much better one expects the future world to be (conditional on non-extinction) with philanthropic intervention than with its default trajectory.

I think these are important points. I've collected some relevant "crucial questions" and sources in my draft series on Crucial questions for longtermists. E.g., in relation to the question "How close to optimal would trajectories be “by default” (assuming no existential catastrophe)?" It's possible you or other readers would find that draft post, or the sources linked to from it, interesting (and I'd also welcome feedback).

4.

Such events do not existentially threaten one's financial position, so they should not be considered as part of the expropriation rate for our purposes.

Could you explain why we should only consider things that could wipe out one's assets, rather than things that result in loss of "some but not all" of one's assets, in the expropriation rate for our purposes? Is it something to do with the interest rate already being boosted upwards to account for risks of losing some but not all of one's assets, but for some reason not being boosted upwards to account for events that wipe out one's assets? If so, could you explain why *that* would be the case?

(This may be a naive question; I lack a background in econ, finance, etc. Feel free to just point me to a Wikipedia article or whatever.)

5.

Observe that even when assets are distributed across multiple funds, expropriation and value drift still reduce the expected rate of return on investments in a way that looking at historical market returns does not account for. This is a good trade—decreasing the discount rate and decreasing the investment rate by the same amount probably increases utility in most situations

I didn't understand these sentences. If you think you'd be able to explain them without too much effort, I'd appreciate that. (But no worries if not - my confusion may just reflect my lack of relevant background, which you're not obliged to make up for!)

Replies from: MichaelDickens## ↑ comment by MichaelDickens · 2020-07-06T19:24:30.827Z · EA(p) · GW(p)

Thanks for the comments! I will respond to each of your numbered points.

The possibility of, say, extinction is a discount on utility, not on money. To see this, we can extend the formula for utility at time $t$. Suppose there are two possibilities for the future: extinction and non-extinction. The probability that we end up in the non-extinction world is $e^{-\delta t}$, so the expected utility due to non-extinction is $e^{-\delta t} u(c(t))$. We could also add to this the utility of the extinction world, call it $u_e(t)$. Then total expected utility is $e^{-\delta t} u(c(t)) + (1 - e^{-\delta t}) u_e(t)$.

Then, we can say $u_e(t) = 0$ to get the formula used in my essay. Or we can just say that we should ignore the $(1 - e^{-\delta t}) u_e(t)$ term because there's nothing we can do to change it (that's assuming $u_e(t)$ is not changeable, which is obviously not true in real life, but it's true in the standard Ramsey model).

This wasn't a particularly well-thought out statement, but it was basically on the assumption that we should converge on true beliefs over time.

Thanks for the link!

If you dig into this a little more, it becomes apparent that the Ramsey model with constant relative risk aversion doesn't really make sense. In theory, people should accept only a zero probability of losing all their assets, because losing everything would result in negative infinity utility. But in practice, some small probability is acceptable, and in fact unavoidable. And people don't try to get the probability of bankruptcy as low as possible, either.

But according to the theoretical model, asset prices move according to geometric Brownian motion, which means they can never go to 0. Therefore, losing all your assets is a distinct thing from assets having a negative return, and it has to happen due to some special event that's not part of normal asset price changes. I realize this is kind of hand-wavey, but this is a commonly-used model in economics so at least I have good company in my handwaviness.

Example: Suppose we have some discount rate $\delta$ and investment return $r$. Say we have the chance to decrease both by the same amount. We would accept that deal, because decreasing $\delta$ has a bigger effect on utility than decreasing $r$. (You can construct situations where this is false, but it's usually true.)

Replies from: MichaelA## ↑ comment by MichaelA · 2020-07-07T10:51:42.069Z · EA(p) · GW(p)

Thanks for this reply!

1.

The possibility of, say, extinction is a discount on utility, not on money

By that, do you mean that extinction makes future utility less valuable? Or that it means there may be less future utility (because there are no humans to experience utility), for reasons unrelated to how effectively money can create utility?

(Sorry if this is already well-explained by your equations.)

2.

it was basically on the assumption that we should converge on true beliefs over time.

I think my quick take would be that that's a plausible assumption, and that I definitely expect convergence *towards* the truth *on average* across areas, but that there seems a non-trivial chance of indefinitely failing to land *on the truth itself* in a given area. If that quick take is a reasonable one, then I think this might push slightly more in favour of work to estimate the philanthropic discount rate, as it means we'd have less reason to expect humanity to work it out eventually "by default".

4. To check I roughly understood, is the following statement approximately correct? "The chance of events that leave one with no assets at all can't be captured in the standard theoretical model, so we have to use a separate term for it, which is the expropriation rate. Whereas the chance of events that result in the loss of some but not all of one's assets is already captured in the standard theoretical model, so we don't include it in the expropriation rate."

Replies from: MichaelDickens## ↑ comment by MichaelDickens · 2020-07-07T19:34:29.954Z · EA(p) · GW(p)

Future utility is not less valuable, but the possibility of extinction means there is a chance that future utility will not actualize, so we should discount the future based on this chance.

That's pretty much right. I would add that another reason why complete loss of capital is "special" is because it is possible to recover from any non-complete loss via sufficiently high investing returns. But if you have $0, no matter how good a return you get, you'll still have $0.

## comment by MichaelA · 2020-07-06T05:43:10.117Z · EA(p) · GW(p)

**Thoughts on value drift and movement collapse**

1. You talk about value drift in several places, and also list as one of your "questions that merit future investigation:"

Research on historical movements and learn more about why they failed or succeeded

I share the view [EA(p) · GW(p)] that those are important topics in general and in relation to the appropriate discount rate, and would also be excited to see work on that question.

Given the importance and relevance of these topics, you or other readers may therefore find useful my collections of sources on value drift [EA(p) · GW(p)], and of EA analyses of how social movements rise, fall, can be influential, etc. [EA(p) · GW(p)] (The vast majority of these sources were written by other people; I primarily just collect them.)

2.

It seems to me that the probability of value drift is mostly independent across individuals, although I can think of some exceptions (e.g., if ties weaken within the effective altruism community, this could increase the overall rate of value drift).

Wouldn't one big exception be movement collapse? Or a shift in movement priorities towards something less effective, which then becomes ossified due to information cascades, worse epistemic norms, etc.? Both scenarios seem unpleasantly plausible to me. And they seem perhaps not *far* less likely than a given EA's values drifting, conditional on EA remaining intact and effective (but I haven't thought about those relative likelihoods much at all).

3.

On the bright side, one survey found that wealthier individuals tend to have a lower rate of value drift, which means the dollar-weighted value drift rate might not be quite as bad as 10%.

That's interesting. Can you recall which survey that was?

Replies from: MichaelDickens## ↑ comment by MichaelDickens · 2020-07-06T19:27:45.167Z · EA(p) · GW(p)

Wouldn't one big exception be movement collapse?

Yeah, that's basically an extreme form of "ties weaken within the effective altruism community". I agree that this seems like an unpleasantly plausible outcome.

It was the GWWC survey [EA · GW].

## comment by MichaelA · 2020-07-06T05:38:11.746Z · EA(p) · GW(p)

**Some additional thoughts on existential and extinction risk**

1.

Michael Aird (2020), "Database of existential risk estimates" [EA · GW] (an EA Forum post accompanying the above-linked spreadsheet), addresses the fact that we only have extremely rough estimates of the extinction probability. He reviews some of the implications of this fact, and ultimately concludes that attempting to construct such estimates is still worthwhile. I think he explains the relevant issues pretty well, so I won't address this problem other than to say that I basically endorse Aird's analysis.

I'm very glad you seem to have found this database useful as one input into this valuable-seeming project!

If any readers are interested in my arguments/analysis on that matter, I'd actually recommend instead my EAGxVirtual Unconference talk. It's basically a better structured version of my post (though lacking useful links), as by then I'd had a couple extra months to organise my thoughts on the topic.

2.

If we use a moderately high estimate for the current probability of extinction (say, 0.2% per year), it seems implausible that this probability could remain at a similar level for thousands of years. A 0.2% annual extinction probability translates into a 1 in 500 million chance that humanity lasts longer than 10,000 years. Humanity has already survived for about 200,000 years, so on priors, this tiny probability seems extremely suspect.

I'm not sure I see the reasoning in that last sentence. It seems like you're saying that something which has a 1 in 500 million chance of happening is unlikely - which is basically true "by definition" - and that we know this *because* humanity already survived about 200,000 years - which seems somewhat irrelevant, and in any case unnecessary to point out? Wouldn't it be sufficient merely to note that an annual 0.2% chance of A happening (whatever A is) means that, over a long enough time, it's *extremely* likely that either A has happened already or the annual chance actually went down?

Relatedly, you write:

The third claim allows us to use a nontrivial long-term discount due to existential risk. I find it the least plausible of the three—not because of particularly any good inside-view argument, but because it seems unlikely on priors.

Can't we just say it *is* unlikely - it logically must involve extremely low probabilities, even if we abstract away all the specifics - rather than that it *seems* unlikely on priors, or based on some reference class forecasting, or the like?

(Maybe I'm totally misunderstanding what you're getting at here.)

3.

One of these three claims must be true:

1. The annual probability of extinction is quite low, on the order of 0.001% per year or less.

2. Currently, we have a relatively high probability of extinction, but if we survive through the current crucial period, then this probability will dramatically decrease.

[...] The second claim seems to represent the most common view among long-term-focused effective altruists.

Personally, I see it as something like "There's a 5-90% chance that people like Toby Ord are basically right, and thus that 2 is true. I'm not very confident about that, and 1 is also very plausible. But this is enough to make the expected value of existential risk reduction very high (as long as there are tractable reduction strategies which wouldn't be adopted "by default")."

I suspect that something like that perspective - in which view 2 is not given far more credence than view 1, but ends up seeming especially decision-relevant - is quite common among longtermist EAs. (Though I'm sure there are also many with more certainty in view 2.)

(This isn't really a key point - just sharing how I see things.)

4. When you say "long-run discount rate", do you mean "discount-rate that applies *after* the short-run", or "discount rate that applies *from now till a very long time from now*"? I'm guessing you mean the former?

I ask because you say:

If we accept the first or second claim, this implies existential risk has nearly zero impact on the long-run discount rate

But it seems like the second claim - a high extinction risk now, which declines later - could still imply a non-trivial *total *existential risk across all time (e.g., 25%), just with this mostly concentrated over the coming decades or centuries.

## ↑ comment by MichaelDickens · 2020-07-06T20:43:50.305Z · EA(p) · GW(p)

Do you have a transcript of your EAGx talk?

Replies from: MichaelA## ↑ comment by MichaelDickens · 2020-07-06T19:45:03.014Z · EA(p) · GW(p)

My point was that we know humanity is capable of lasting 200,000 years, because it already did that. So on priors, we should expect humanity to last about another 200,000 years. We might update this prior downward based on facts like "we have nukes now" or "we might develop unfriendly AI soon". But if we assume a 0.2% annual probability of extinction, that gives a 1 in 10^174 chance of surviving 200,000 years, which requires an absurdly strong update away from the prior.

Can't we just say it is unlikely - it logically must involve extremely low probabilities

I find it really implausible that 10^-174 is the true probability that humanity survives 200,000 years. I don't think we are 10^-174 confident about anything ever.
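This arithmetic is easy to verify (a minimal check, working in logarithms to avoid floating-point underflow):

```python
import math

# Probability of surviving 200,000 years at a 0.2% annual extinction rate,
# expressed as a base-10 logarithm.
log10_survival = 200_000 * math.log10(1 - 0.002)
print(log10_survival)  # roughly -174
```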

Personally, I see it as something like "There's a 5-90% chance that people like Toby Ord are basically right, and thus that 2 is true. I'm not very confident about that, and 1 is also very plausible. But this is enough to make the expected value of existential risk reduction very high (as long as there are tractable reduction strategies which wouldn't be adopted "by default")."

The conclusion does not follow, for two reasons. The value of reducing x-risk might actually be lower if x-risk is higher. For an explanation, see the appendix of this paper: https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12318 (I think you need an account to download, but you can also get the paper on sci-hub.) But there are good arguments that decreasing the discount rate is more important than increasing consumption, which is also discussed in that paper.

"Long-run" means "discount rate that applies after the short-run".

Replies from: MichaelA## ↑ comment by MichaelA · 2020-07-08T01:37:37.777Z · EA(p) · GW(p)

(Possibly somewhat rambly, sorry)

**2.** I think I now have a better sense of what you mean.

**2a.** It sounds like, when you wrote:

The current relatively high probability of extinction will maintain indefinitely.

...you'd include "The high probability maintains for a while, and then we do go extinct" as a case where the high probability maintains indefinitely?

This seems an odd way of phrasing things to me, given that, if we go extinct, the probability that we *go* extinct at any time after that is 0, and the probability that we *are* extinct at any time after that is 1. So whatever the current probability is, it would change after that point. (Though I guess we could talk about the probability that we *will be extinct at the end of a time period*, which would be high - 1 - post-extinction, so if that probability is currently high it could then stay high indefinitely, even if the actual probability changes.)

I thought you were instead talking about a case where the probability stays relatively high for a very long time, without us going extinct. (That seemed to me like the most intuitive interpretation of the current probability maintaining indefinitely.) That's why I was saying that that's just unlikely "by definition", basically.

Relatedly, when you wrote:

Currently, we have a relatively high probability of extinction, but if we survive through the current crucial period, then this probability will dramatically decrease.

Would that hypothesis include cases where we *don't* survive through the current period?

My view would basically be that the probability might be low now or might be relatively high. And if it *is* relatively high, then it must be either that it'll go down before a long time passes or that we'll become extinct. I'm not currently sure whether that means I split my credence over the 1st and 2nd views you outline only, or over all 3.

**2b.** It also sounds like you were actually focusing on an argument that the "natural" extinction rate must be low, given how long humanity has survived thus far. This would be similar to an argument Ord gives in *The Precipice*, and one that's also given in this paper (which I haven't actually read), which says in the abstract:

Using only the information that *Homo sapiens* has existed at least 200,000 years, we conclude that the probability that humanity goes extinct from natural causes in any given year is almost guaranteed to be less than one in 14,000, and likely to be less than one in 87,000.

That's an argument I agree with. I also see it as a reason to believe that, if we handle all the anthropogenic extinction risks, the extinction risk level from then on would be much lower than it might now be.

Though I'm not sure I'd draw from it the implication you draw: it seems totally plausible we could enter a state with a new, higher "background" extinction rate, which is also driven by our activities. And it seems to me that the only obvious reasons to believe this state wouldn't last a long time are (a) the idea that humanity will likely strive to get out of this state, and (b) the simple fact that, if the rate is high enough and lasts for long enough, extinction happening at some point becomes very likely. (One can also argue against believing that we'd enter such a state in the first place, or that we've done so thus far - I'm just talking about why we might not believe the state would *last a long time*, if we *did *enter it.)

So when you say:

if we assume a 0.2% annual probability of extinction, that gives a 1 in 10^174 chance of surviving 200,000 years, which requires an absurdly strong update away from the prior.

Wouldn't it make more sense to instead say something like: "The non-anthropogenic annual human extinction rate seems likely to be less than 1 in 87,000. To say the current total annual human extinction rate is 1 in 500 (0.2%) requires updating away from priors by a factor of 174 (87,000/500)." (Perhaps this should instead be phrased as "...requires thinking that humans have caused the total rate to increase by a factor of 174.")

Updating by a factor of 174 seems far more reasonable than the sort of update you referred to.

And then lasting 200,000 years at such an annual rate is indeed extremely implausible, but I don't think anyone's really arguing against that idea. The implication of a 0.2% annual rate, which isn't reduced, would just be that extinction becomes very likely in much less than 200,000 years.
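The arithmetic behind the two framings can be checked directly. A quick sketch in Python, using only the numbers quoted in this thread (the 0.2% annual rate, the 1-in-87,000 natural-rate bound, and the 200,000-year span):

```python
import math

annual_rate = 0.002   # hypothesized 0.2% annual extinction probability
years = 200_000       # rough span over which Homo sapiens has survived

# Probability of surviving that long at a constant 0.2% annual rate
p_survive = (1 - annual_rate) ** years
print(f"P(survive 200,000 years) ~ 10^{math.log10(p_survive):.0f}")  # ~10^-174

# The reframed comparison: hypothesized total rate vs. the natural-rate bound
natural_bound = 1 / 87_000
print(f"update factor ~ {annual_rate / natural_bound:.0f}")  # ~174

# Expected survival time at a constant 0.2% annual rate
print(f"expected survival ~ {1 / annual_rate:.0f} years")  # 500 years
```

This makes the point above concrete: at a constant 0.2% annual rate, extinction is expected within centuries, not hundreds of millennia, so the implausibility attaches to *surviving* 200,000 years at that rate, not to the rate itself.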

**3.**

The conclusion does not follow, for two reasons. The value of reducing x-risk might actually be lower if x-risk is higher.

I haven't read that paper, but Ord makes what I think is a similar point in *The Precipice*. But, if I recall correctly, that was in a simple model, and he thought that in a more realistic model it does seem important how high the risk is now.

Essentially, I think x-risk work may be most valuable if the "background" x-risk level is quite low, but currently the risk levels are unusually high, such that (a) the work is urgent (we can't just punt to the future, or there'd be a decent chance that future wouldn't materialise), and (b) if we do succeed in that work, humanity is likely to last for a long time.

If instead the risk is high now but this is because there are new and large risks that emerge in each period, and what we do to fix them doesn't help with the later risks, then that indeed doesn't necessarily suggest x-risk work is worth prioritising.

And if instead the risk is pretty low across all time, that can still suggest x-risk work is worth prioritising, because we have a lower chance of succumbing to a risk in any given period but would lose more in expectation if we do. (And that's definitely an interesting and counterintuitive implication of that argument that Ord mentions.) But I think being in that situation would push somewhat more in favour of things like investing, movement-building, etc., rather than working on x-risks "directly" "right now".

So if we're talking about the view that "Currently, we have a relatively high probability of extinction, **but if we survive through the current crucial period, then this probability will dramatically decrease**", I *think* more belief in that view does push more in favour of work on x-risks now.

(I could be wrong about that, though.)

**4.** Thanks for the clarification!

## comment by MichaelA · 2020-07-06T05:34:48.058Z · EA(p) · GW(p)

**Existential risk ≠ extinction risk ≠ global catastrophic risk**

*For an expanded version of the following points, see Clarifying existential risks and existential catastrophes [EA · GW] and/or 3 suggestions about jargon in EA [EA · GW].*

There are some places where you seem to use the terms "existential risk" and "extinction risk" as interchangeable. For example, you write:

I do not think it is obvious that reducing the probability of extinction does more good per dollar than the value drift rate, which naively suggests the effective altruist community should invest relatively more into reducing value drift. But I find it plausible that, upon further analysis, it would become clear that existential risk matters much more.

Additionally, it seems that, to get your "annual extinction probability" estimate, some of the estimates you use from the spreadsheet I put together are actually existential risk, global catastrophic risk [EA(p) · GW(p)], or collapse risk. For example, you seem to use Ord's estimate of total *existential* risk, Rees' estimate of the odds that *our present civilization* *on earth *will survive to the end of the present century, and Simpson's estimate that “Humanity’s prognosis for the coming century is well approximated by a *global catastrophic risk* of 0.2% per year" (emphases added).

But, as both Bostrom and Ord make clear in their writings on existential risk, extinction is not the only possible type of existential catastrophe. There could also be an unrecoverable collapse [EA(p) · GW(p)] or an unrecoverable dystopia [EA(p) · GW(p)]. And many global catastrophes would not be existential catastrophes.

I see this as important because:

- Overlooking that there are possible types of existential catastrophe other than extinction might lead to us doing too little to protect against them.
- Relatedly, using the term "existential risk" when one really means "extinction risk" might make existential risk less effective as jargon [EA · GW] that can efficiently convey this key thing many EAs care about.
- Existential risk and global catastrophic risk are both very likely at least a bit higher than extinction risk (since they include a large number of possible events). And I'd guess collapse risk might be higher as well. So you may end up with an overly high extinction risk estimate in your discount rate.
- Alternatively, if *existential* risk is actually the most appropriate thing to include in your discount rate (rather than extinction risk), using estimates of extinction risk alone may lead to your discount rate being too *low*. This is because extinction risk estimates overlook the risk of unrecoverable collapse or dystopia.
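To illustrate the direction of the bias with purely made-up numbers (these are illustrative assumptions, not estimates from the post or this thread):

```python
# Hypothetical annual probabilities -- illustrative only
extinction_risk  = 0.0010  # extinction only
existential_risk = 0.0015  # extinction + unrecoverable collapse/dystopia
other_factors    = 0.0050  # expropriation, value drift, opportunity changes

# Since existential risk includes extinction plus other unrecoverable
# outcomes, it is at least as high. Substituting one concept for the
# other therefore biases the discount rate in a predictable direction:
rate_from_extinction  = extinction_risk + other_factors   # 0.0060
rate_from_existential = existential_risk + other_factors  # 0.0065
```

So using an existential-risk estimate where extinction risk is the right concept inflates the discount rate, and the reverse substitution deflates it.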

To be clear, I have no problems with sources that just talk about extinction risk. Often, that's the appropriate scope for a given piece of work. I just have a pet peeve with people *really *talking about extinction risk, but using the *term *existential risk, or vice versa.

Also to be clear, you're far from the only person who's done that, and this isn't really a criticism of the substance of the post (though it may suggest that the estimates should be tweaked somewhat).

## comment by MichaelA · 2020-07-06T05:32:06.387Z · EA(p) · GW(p)

Thanks for this post! It seems to me like quite an interesting and impressive overview of this important topic. I look forward to reading more work from you related to patient-philanthropy-type things (assuming you intend to pursue more work on these topics?).

A bunch of questions and points came to mind as I was reading, which I'll split into a few separate comments. Sorry for the impending flood of words - take it as a signal of how interesting I found your post!

Firstly, as it happens, I was *also* working on a post with a somewhat similar scope to this one and to Sjir Hoeijmakers' one [EA · GW]. My post was already drafted, but not published, and is entitled Crucial questions about optimal timing of work and donations. It has a somewhat different focus, and primarily just overviews some important questions and arguments, without making this post's valiant effort to actually provide estimates and recommendations.

My draft's marginal value is probably lower than I'd expected, given that you and Sjir have now published your perhaps more substantive work! But feel free to take a look, in case it might be useful - and I'd also welcome feedback. (That goes for both Michael Dickens and other readers.)

I suspect what I'll do is make a few tweaks to my draft in light of the two new posts, and then publish it as another perspective or way of framing things, despite some overlap in content and purpose.

## comment by Grayden · 2020-09-13T14:23:11.743Z · EA(p) · GW(p)

Really interesting article!

I don’t think the reduction in CPLSE is a good estimate of the change in opportunities for the following reasons: (1) CPLSE is very EA specific and EA is a very different movement now compared to 2012; (2) I’m sure AMF and Deworm the World have improved, but I don’t think they would have improved if their founders had sat on the sidelines waiting for research on malaria nets / deworming without actually getting out there and trying things.

My own instinct is that opportunities become more expensive over time. As world GDP increases, average prosperity increases and it becomes incrementally harder to help the ‘poorest’ person.

## comment by vernonbarth · 2020-07-20T08:35:14.960Z · EA(p) · GW(p)

The discount rate you use should be your required rate of return. If you purely want to help these startups, then you should give them some kind of interest-free loan (0%). If you want to cover inflation and be a little less philanthropic, then use a 2% discount rate or so. And so on...
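A minimal sketch of what those different required rates of return imply, assuming simple annual compounding and a hypothetical $1,000 of value delivered 10 years out (the horizon and amount are illustrative assumptions):

```python
def present_value(future_amount, annual_rate, years):
    """Discount a future amount back to today at a constant annual rate."""
    return future_amount / (1 + annual_rate) ** years

# $1,000 of impact delivered 10 years from now, under a few sample rates
for rate in (0.00, 0.02, 0.05):
    pv = present_value(1000, rate, 10)
    print(f"{rate:.0%} discount rate: PV = ${pv:,.2f}")
```

At 0% the future impact is valued at face value; at 2% it is worth about $820 today; at 5%, about $614. The higher the required return, the more the choice tilts toward spending now rather than later.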