The case for investing to give later

post by SjirH · 2020-07-03T15:23:29.260Z · EA · GW · 28 comments

Contents

  Introduction
  Summary
  Defining ‘investing to give later’
  Financial returns on investment
  Risks of loss or value drift
    Loss
    Value drift
  Availability of opportunities
    Diminishing returns
    A changing world
  Exogenous learning
  Parameter uncertainty
  Compounding returns on giving now
  Synthesis of factors

Edit 07/07/20 at 1.30 pm BST: Thank you for your comments so far! In particular, there were a few useful comments pointing out that some of the 'conservative' estimates were too optimistic. As a result, I've updated some of those and replaced the sheet model with this Guesstimate model.

This model more accurately represents the uncertainty I have about these estimates and doesn't require the use of vague terms such as "conservative". Furthermore, it includes the "parameter uncertainty" factor, which is found by comparing the estimated yearly impact multiplier with the 10-yearly one: it seems to be an important factor that shouldn't be left out of the analysis.

For clarity, I'd also like to further emphasize that the primary purpose of this post is not to make a case for my current estimates, but to invite input on (1) the overall model, (2) missing factors and (3) which factors to prioritize for research. Indeed, my current estimates should be taken with a huge grain of salt, as I have hardly spent any time on most of them so far: refining these estimates is the purpose of the remainder of this research project. However, as mentioned above, comments on where you think my current estimates are wrong are obviously welcome as well!

Introduction

After having to pause the project for a while, we have recently resumed work on the idea of a long-term investment fund [EA · GW] at Founders Pledge. The next step is a research project on the impact of ‘investing to give later’ as a philanthropic strategy more generally. This will help us decide whether to launch the fund and to what extent to prioritise it.

In this post, I outline the key factors that bear on this question as I currently see them, after preliminary research, and draw a tentative conclusion. Please note that these represent my personal views, and not currently those of Founders Pledge.

I would appreciate any thoughts on (1) important factors that are missing, (2) faults in my reasoning and methodology, (3) which factors to prioritize for further investigation, and (4) resources or connections that would be helpful in doing that. Please leave these in the comments on this post or reach out at sjir@founderspledge.com.

Before launching into the content, I’d like to particularly thank Phil Trammell for the substantial contribution of his work on patient philanthropy (see also this 80,000 Hours podcast) to this project so far and for discussion and comments. I’m also grateful to Michael Dickens, Sasha Cooper, and my colleagues Aidan Goth and John Halstead for comments on a draft of this post. Finally, as will be clear from the linked sources below, this work leans heavily on earlier work by other people in the effective altruism community.

Edit 03/07/20 at 7 pm BST: It so happens that Michael Dickens was working on a very similar topic in parallel, and has just published his work [EA · GW] as well. We both wrote our posts before seeing each other's, so any overlap in content is coincidental.

Summary

I go into seven key factors that determine how investing to give later compares to giving now for an individual investor-philanthropist in the current situation. Together, these paint a relatively strong picture in favour of investment: it seems plausible that one can have positive real, net financial returns and a growing share of the economy in expectation - taking into account value drift and expropriation risks - and that there are significant benefits to investment in terms of learning. Parameter uncertainty likely further strengthens the case for investment.

Risks of asset loss, risks of value drift, and uncertainty about the availability of high-impact giving opportunities over time weaken our confidence in this conclusion, but not to the extent that they change it. There are, however, some ‘investment-like’ forms of giving, such as encouraging others to invest, which might compete with direct financial investment.

Excluding those ‘investment-like’ giving opportunities, my current best guess estimate is that an investor-philanthropist will on average be able to multiply the total impact of their funds by ~1.01 in one year and by >>10[1] in ten years. This is if they intend to spend the funds on longtermist objectives; I intend to add estimates from a short-termist perspective at a later point.

Defining ‘investing to give later’

Here we consider ‘investing to give later’ to be a philanthropic strategy in which one engages in for-profit investing with the intention to donate the principal and expected profits at a later time point.

In particular, we’ll consider the case from the point of view of an altruistic and strategic individual investor-philanthropist with limited resources (<$100 million) and in the immediate situation, i.e. whether this individual should invest for the coming few years rather than spend this year, assuming he/she cannot coordinate with others. This is distinct from the question of the optimal spending rate in the effective altruism community as a whole, which Phil Trammell explores in his paper, or the question of at which point in time and to what extent an investor-philanthropist should change strategy. If we conclude here that investment is the optimal choice right now, this doesn’t imply (1) that this will hold if a lot of value-aligned others also start doing it or (2) that this will hold at a later time point.

Financial returns on investment

Arguably the largest advantage of investing is that it can exponentially grow financial resources, which can be used for good at a later point. The S&P 500 has had an inflation-adjusted annualized return of ~7% since its inception in 1926. We need to adjust this for selection bias, as there have been multiple markets in other countries that have done a lot worse, or have even ceased to exist (e.g. the Rio de Janeiro Stock Exchange). A recent Credit Suisse report attempts this for global equity returns and finds an annualized real return of ~5% from 1900 to 2019.

There is controversy about whether inflation-adjusted prices using the Consumer Price Index are accurate: it’s likely the CPI is biased upwards, and hence that average recent global returns have been (much) higher than 5%.

For the purpose of this discussion, we’ll stick with 5% as a conservative estimate for real expected returns on index fund investing. This, in turn, is a lower bound on expected real returns from investing more generally: higher returns seem possible with other types of investments, e.g. leveraged or venture capital investment, if one is able to exploit risk, information and/or market access premiums. We hence conservatively assume that a skilled investor can achieve 7% expected real returns. Note that these higher returns come at the expense of higher risk, but that risk matters much less for altruistically-minded investors, though it still needs to be taken into account to some extent.

In addition to imperfect information and access, the existence of this opportunity can largely be explained by the pure time preference of most market actors and their risk aversion. An altruistic and strategic philanthropist is much less risk-averse (the extent to which may depend on the cause area) and doesn’t have a (strong) pure time preference: even if he/she cares more about the current generation, this is likely for person-affecting reasons, and person-affecting views are concerned with personhood rather than time.

Risks of loss or value drift

The gained financial value will not be fully converted into philanthropic value if it is lost before it can be spent (for other reasons than investment losses) or if it’s spent on less valuable activities. This could happen in a variety of ways, e.g. via legal challenges, theft, government taxation, existential catastrophes, or value drift of an investment vehicle’s management.

Loss

Existential risk estimates can be used as a lower bound for the loss rate. This is especially true from a longtermist perspective, as almost by definition your money will be worth a lot less after an existential catastrophe has happened. It also seems reasonable from a short-termist/person-affecting perspective, as many plausible existential risks will directly lead to a loss of assets. Toby Ord’s best-guess estimate in The Precipice is a risk of one in six in the coming century, which converts into a yearly rate of ~0.2%. From Ord’s argumentation it seems he thinks the risk is increasing (most of the risk comes from future technologies), so we should perhaps revise this lower bound downwards to ~0.1% for investment at the current moment. On the other hand, many global catastrophic risks short of existential catastrophe could be sufficient for asset loss, which our best-guess estimate should take into account.
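
The century-to-year conversion above can be sketched as follows, assuming a constant yearly hazard across the century (a simplification, since Ord himself argues the risk is unevenly distributed):

```python
# Convert a cumulative 100-year risk into the constant yearly hazard
# rate that would produce it, assuming independent yearly hazard.
def yearly_rate(century_risk, years=100):
    return 1 - (1 - century_risk) ** (1 / years)

# Ord's best-guess existential risk of one in six this century:
rate = yearly_rate(1 / 6)
print(f"{rate:.2%}")  # 0.18%, i.e. the ~0.2% cited above
```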

As an upper bound for the loss rate, we can look at the reference class of exiting nonprofits: a strategic investor-philanthropist should easily be able to outlive the average nonprofit, which has to deal with a lot more pressures (including fundraising) and for which asset loss is only one of the potential reasons to cease to exist. In the US, the yearly exit rate of nonprofits is ~4% (between 3% and 5%).

There are reasons to estimate the relevant loss rate much closer to the lower than the upper bound. Taking an inside view perspective, the short-term risk of loss is arguably low: beyond global catastrophic risks and war it’s hard to imagine many scenarios (other than investment risks) in which an investor-philanthropist would lose their assets in the current environment of (seemingly) stable property rights.[2] Furthermore, for a proportion of the scenarios one could imagine there would be warning signs (e.g. of an impending war), which would allow the investor-philanthropist to change their strategy in time.[3] Note also that scenarios in which the investor-philanthropist is forced to spend their money should not be counted as loss risks.[4]

Value drift

The risk of value drift is even harder to estimate, but an important factor. For instance, these three sources (1 [EA · GW],2 [EA · GW],3 [EA · GW]) collectively suggest a yearly value drift rate of ~10% for individuals within the effective altruism community.

However, the short-term value drift rate also seems much easier to influence positively, most easily via a proper design[5] of a legal vehicle used by the investor-philanthropist. One can, for instance, commit one’s funds to be given to charitable entities by investing from a donor-advised fund, and appoint a committee of trustees to spread the risk of value drift.

Given the availability of these strategies, my best-guess estimate for the short-term[6] value drift rate is currently 2% for a strongly committed and strategic investor-philanthropist. However, I have a lot of uncertainty about this[7] and this estimate depends a lot on the hypothetical investor-philanthropist in question, so I invite the reader to make their own estimates based on the case they are considering.[8]
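
As a rough sketch of how these rates interact, one can combine the illustrative figures used so far (7% expected real return, a ~0.2% loss rate, and a 2% value drift rate) into a net yearly multiplier on expected impact-weighted capital; these inputs are placeholders rather than settled estimates:

```python
# Net expected yearly multiplier on impact-weighted capital, treating
# loss and value drift as independent survival probabilities applied
# to the financial return. All three inputs are rough placeholders.
expected_return = 0.07   # assumed skilled-investor real return
loss_rate = 0.002        # lower-bound loss rate (existential risk)
drift_rate = 0.02        # best-guess short-term value drift

net = (1 + expected_return) * (1 - loss_rate) * (1 - drift_rate)
print(f"net yearly multiplier: {net:.3f}")

# Even with these haircuts, expected impact-weighted capital grows.
assert net > 1
```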

Availability of opportunities

Independent of whether we are able to detect them, the value of the best available funding opportunities changes over time and is dependent on the amount of capital one has.

Diminishing returns

First, at any point in time, marginal social returns to philanthropic capital are plausibly diminishing with respect to the amount of capital one spends. There might be exceptions to this, however: certain projects might have increasing marginal returns and some might even require a minimum amount of capital to have any chance of success at all. Creating a new global institution could be an example.

More importantly, when one is trying to improve the world rather than one's own life, one should consider one’s spent philanthropic capital as a marginal contribution to all of the capital that is spent in a value-aligned way. This means that diminishing returns will probably only play a significant role at very large amounts of spent personal philanthropic capital (>$100m). Hints of this can be observed empirically by looking at the scale of room for funding of GiveWell’s recommendations, or considering that Open Philanthropy is now able to spend more than $200m on a yearly basis on funding opportunities that meet their bar.
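
To illustrate how slowly per-dollar returns diminish when one's donation is marginal to a large aligned pool, here is a toy model with an assumed logarithmic social value function and a hypothetical $500m baseline of aligned spending (both assumptions are for illustration only):

```python
import math

# Toy model: suppose the social value of total aligned spending S is
# logarithmic, v(S) = log(S). The marginal value of one's own
# donation d, on top of everyone else's spending B, is then
# v(B + d) - v(B), which is nearly linear in d while d << B.
def marginal_value(donation, baseline):
    return math.log(baseline + donation) - math.log(baseline)

baseline = 500e6  # hypothetical total aligned yearly spending
small = marginal_value(1e6, baseline)
large = marginal_value(100e6, baseline)

# Per-dollar value declines, but only modestly, until the donation
# is a sizeable fraction of the pool.
assert small / 1e6 > large / 100e6            # returns do diminish...
assert (large / 100e6) / (small / 1e6) > 0.9  # ...but slowly at this scale
```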

We should distinguish the above diminishing marginal returns to personal philanthropic spending from the diminishing marginal returns that could result from increased concurrent spending by other aligned philanthropists, or by actors that have at least similar instrumental goals. Here, too, there could be exceptions due to some major philanthropic projects only being feasible with a minimum amount of total capital dedicated to them. Furthermore, total aligned spending is not guaranteed to increase over time. In the case of the effective altruism community, an increase seems likely [EA · GW] for at least the foreseeable future, especially given the plans of Open Philanthropy/Good Ventures to increase spending over time and the recent arrival of Ben Delo as another UHNW effective altruist donor. This is less clear for a more general category of instrumentally aligned spending.

A changing world

Second, and probably most importantly, there might be more or less impactful opportunities available because the world is exogenously in flux.

From a longtermist perspective, this consideration is strongly related to the debate on ‘hingeyness’ [EA · GW], though here we exclude the exogenous learning factor discussed below. One intuition says that the earlier in time, the more of the future you still have to influence, so the better the opportunities that will be available to you. On the other hand, this might be a very small effect, given the potential length of time still left to us. Moreover, it seems most likely to be dominated by other (more local) factors, such as whether humanity has just developed the power to destroy itself, or is going through a period of exceptional economic growth. I won’t address the full debate here in detail: it seems far from settled, and it currently seems wise to explore interventions that are optimal across the spectrum of views. More importantly, most arguments about this being a special time (e.g. those from existential risk) concern the whole century or the next few centuries we are living in rather than this specific year, so it seems reasonable to assume the next few years will be quite similar to this year (in expectation) in terms of their hingeyness.

From a short-termist/person-affecting and human-centric perspective, a relevant question is how fast diseases and poverty might be eradicated exogenously, as this might influence how much good you can do. A rough but useful proxy for how fast this is happening is the number of people living in extreme poverty. This was 730 million in 2015 and was projected by the World Bank to reduce to 480 million by 2030, which implies a yearly rate of reduction of ~2.7%. Another data point is the global growth rate of ~2%. Neither, however, is a direct proxy for the availability of cost-effective opportunities to help people. A more direct but also noisier way of looking at this question is by considering GiveWell’s cost-effectiveness estimates over time. In 2012, they estimated the cheapest way to save a life[9] to be $2300 for bednets, and in 2019 this was still $2300 (also for bednets). Looking at their cost-per-life-saved-equivalent numbers, Michael Dickens has even estimated a strong decrease from $2066 to $443 in the same time period, though this is arguably more strongly influenced by learning than by new or better giving opportunities becoming available. Taken together, my best guess is that the cost of helping people is currently increasing relatively slowly, if at all.[10]
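
The implied annualized reduction rate can be checked directly from the World Bank figures cited above:

```python
# Annualized rate of reduction implied by the World Bank projection:
# 730 million people in extreme poverty in 2015, 480 million in 2030.
start, end, years = 730e6, 480e6, 15
rate = 1 - (end / start) ** (1 / years)
print(f"{rate:.2%}")  # 2.76%, i.e. the ~2.7% yearly reduction cited above
```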

Exogenous learning

Another important advantage of investing over giving now is that it will allow an investor-philanthropist to learn about better giving opportunities over time. Here we are talking about the ability of the investor-philanthropist to identify the best opportunities that are available, rather than whether those opportunities are available (see above).

We should distinguish between two forms of learning: endogenous and exogenous. Endogenous learning is the learning that the investor-philanthropist brings about themselves, e.g. by funding research or trying things out. Opportunities for endogenous learning can be a reason to give now rather than to invest (see the section on compounding returns on giving below).

Exogenous learning includes advances in the scientific community, new philanthropic interventions being invented and/or tried out, moral progress, and more. It also captures the time needed for relevant knowledge to become available, e.g. an experiment might take time, research might need to be done in a certain order, or there might be a talent constraint in a research area that takes time to be resolved. When learning is done exogenously, there are advantages to waiting and hence investing.

90% of all scientists who ever lived are alive today, and we should expect big gains in knowledge across the board, though maybe not as big as that number would suggest at first sight. More importantly, effective altruism is still a very young endeavour, so an investor-philanthropist should expect a high rate of exogenous learning in the effective altruism community in the short to medium term.

From a short-termist/person-affecting perspective, as an illustration, consider that GiveWell has only been around for 13 years, Animal Charity Evaluators only for 8 years and Founders Pledge research only for 3 years, with new high-impact giving opportunities being discovered by these organisations on a regular basis. On the other hand, looking back at the example above, GiveWell has had broadly similar top giving recommendations since 2012, though they have only recently started considering policy interventions.

From a longtermist perspective, exogenous learning is an (even) more important factor. Firstly, longtermism as a more formal idea has only very recently [EA · GW] been developed, though institutional work has been done on it at least since the founding of the Future of Humanity Institute in Oxford in 2005. Secondly, the number of people working full time on identifying the best thing to do from a longtermist perspective is probably fewer than 200[11]. Thirdly, funding opportunity research specifically is even younger than for short-termism, with the first institutional research probably being carried out by Open Philanthropy around 2014, and existential risk currently being the only well-established intervention area.

In addition to uncertainty about the best funding opportunities to achieve some defined goal, there is a lot of uncertainty and debate even about what the goals should be: should we be short-termist/person-affecting or longtermist; to what extent should we include animals in our moral concern; should we ultimately mostly/only aim for some (broad) measure of subjective well-being or are there other things that are important to consider as well? Given how recent it has been for many of these ideas to gain serious traction (e.g. Peter Singer’s Animal Liberation was only published in 1975, though the idea had obviously been around for a lot longer), it’s likely we have a lot to learn still and could hope to learn more relatively soon.

Lastly, we might learn more about investing to give later as a philanthropic strategy itself. To my knowledge, Phil Trammell’s paper is the first formal document to outline the case for this, and in this research project I will likely only be able to scratch the surface. From the factors presented here, the difference in impact potential between investing and giving now could plausibly be very large, certainly over longer time scales. And there is a relevant asymmetry: unless one has good reason to believe there is an extraordinary and timely giving opportunity available right now, it seems the expected cost of investing and waiting on more information (for a limited time) if giving now turns out to be better is lower than the expected cost of giving now and foregoing the opportunity to ever invest if investing turns out to be better.

Parameter uncertainty

There is a lot of uncertainty about many if not all of the parameters discussed above, e.g. the expected financial returns, the expropriation and value drift rates, and the learning rate. If these parameters combined are likely to cause compounding positive effects, then this uncertainty itself can further increase the expected value of investing over time.

In mathematical terms: for any constant q > 0, a world in which the yearly rate of social return is fixed at r = q delivers a smaller expected multi-year social return than a world in which r is distributed as a non-degenerate random variable R > 0 with E[R] = q. This follows from Jensen’s inequality, since the multi-year return is a convex function of the yearly rate.

My model suggests this to be an important factor: there is a large difference between the one-year and ten-year impact multiplier for investing to give later.
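
A small Monte Carlo sketch of this effect, with an illustrative uniform distribution for the rate; note the uncertainty must be persistent across years for the effect to appear, as independent yearly draws would average out:

```python
import random

# Fixed-rate world: the social return is q every year.
q, years = 0.05, 10
fixed = (1 + q) ** years  # about 1.63

# Uncertain world: each scenario draws one persistent rate R with
# E[R] = q (here uniform on [0, 0.10], purely illustrative) and
# compounds it for the full horizon. Persistence matters: with
# independent yearly draws the expectation would equal the fixed case.
random.seed(0)
samples = [(1 + random.uniform(0.0, 0.10)) ** years
           for _ in range(100_000)]
uncertain = sum(samples) / len(samples)  # about 1.68 analytically

# Jensen's inequality: convex compounding means uncertainty about a
# persistent rate raises the expected multi-year multiplier.
assert uncertain > fixed
```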

Compounding returns on giving now

A final factor to consider in favour of giving now is whether there might be giving opportunities that themselves have larger compound (social) returns than investing does.

From a short-termist/person-affecting and human-focused perspective, an argument that is often brought up is that direct global health and poverty interventions may have compound gains for beneficiaries that outweigh compound investment gains. Phil Trammell explains in his paper (section 5.1.2 and 5.1.3) why this is almost certainly not the case, at least over longer timescales, for both theoretical and empirical reasons. The main conceptual point is that even though someone in poverty might obtain gains above the world growth rate (~2%) for a few years, these gains are (1) likely to disperse rapidly and (2) partially used for consumption without compounding returns. If the beneficiary of a donation is not themselves reprioritizing the spending of their gained resources (health, money, knowledge, etc.) on the best available giving opportunities in the world (or investing those resources), then the yearly gains of those resources will at some point be bounded from above by the growth rate.

If one is only concerned with effects in the very short term (e.g. 10 years) though, it could be that the short-term higher returns are a reason to give now rather than invest. However, without such short limits on the time horizon of effects, this by itself doesn’t affect the comparison: investing even only for 1 year and then giving would be better than giving now, as the extra early gains from investing would outstrip the bounded future gains from giving now.
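
A toy comparison may help here. Assume (the numbers are illustrative, not from the sources above) that a beneficiary's gains compound at 20% per year for five years before falling back to the ~2% world growth rate, while investment earns 7% per year:

```python
# Toy comparison: give $1 now vs invest it for one year, then give.
# Assumed numbers: beneficiary gains compound at 20%/yr for 5 years,
# then fall to the ~2% world growth rate; investment earns 7%/yr.
# We track total value at a distant horizon.
def value(invest_years, horizon=50, r=0.07, boost=0.20, g=0.02,
          boost_years=5):
    v = (1 + r) ** invest_years              # grow while invested
    giving_span = horizon - invest_years     # years after the gift
    boosted = min(boost_years, giving_span)
    v *= (1 + boost) ** boosted              # beneficiary's fast gains
    v *= (1 + g) ** (giving_span - boosted)  # then bounded by growth
    return v

give_now = value(0)
invest_first = value(1)

# Once the beneficiary's gains are bounded by growth (2% < 7%),
# the extra year of investing dominates at any long enough horizon:
assert invest_first > give_now
print(f"ratio: {invest_first / give_now:.3f}")  # 1.07/1.02, about 1.049
```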

There are, however, some giving opportunities which are ‘investment-like’ and could in principle have higher compounding returns than investment, even in the longer term, and both from a short-termist/person-affecting and longtermist perspective.

One obvious candidate is encouraging other people with aligned values to invest: at a high enough success rate per dollar spent on this, it is clear that this would beat direct investment, and the main uncertainty is whether such a success rate can be achieved. More general effective altruism movement-building may also qualify, but it is only certain to beat direct investment if it leads to a high enough rate of people joining and a large enough proportion of those people choosing to invest.

Secondly, there is capacity building, including endogenous learning: if certain activities (e.g. research) increase the impact one can have from that moment onwards until one's last dollar is spent, those activities have compound gains that when large enough could exceed those of investment. The case for those activities becomes even stronger if they also affect the ability of others to have a positive impact. However, we should take into account the counterfactual: if this capacity building will happen anyway (i.e. can be achieved exogenously), and will be funded by someone who wouldn’t otherwise have invested, this would likely balance the scales in favour of investing.

Synthesis of factors

In this Guesstimate model, I have made preliminary estimates for the importance of each of the factors from a longtermist[12] perspective. I have combined these to come to a very tentative estimate of the expected impact multiplier for investment relative to giving now, both over 1 year and over 10 years.

This approach has important limitations, and this is only a first iteration, but I think it’s important to have an explicit model to (1) bring the discussion out of a vaguer “there are arguments on both sides” realm and come to decision-relevant best guesses and (2) be better able to identify which uncertainty would be most valuable to resolve and prioritize research efforts as a result. I encourage you to duplicate the model and make your own estimates, and to suggest alternatives and improvements to the (very rudimentary) methodology!

Excluding those ‘investment-like’ giving opportunities, my current best guess estimate is that a longtermist investor-philanthropist will on average be able to multiply the total impact of their funds by ~1.01 in one year and by >>10[1:1] in ten years.


  1. Guesstimate's Monte Carlo simulation sample limit of 5000 means the accuracy of its ten-year estimate is very limited: rerunning 10-20 times yields estimates ranging from 13 to 13000. ↩︎ ↩︎

  2. I plan to look further into whether and to what extent this is the case. ↩︎

  3. Incidentally, such a time would arguably (in expectation) have an above-average availability of high-impact giving opportunities, cf. the current situation as discussed under ‘Availability of opportunities’. ↩︎

  4. There is an interesting open question of whether the loss rate is (positively or negatively) correlated with the size of one’s investments. There seem to be arguments on both sides [LW(p) · GW(p)]. ↩︎

  5. An important consideration in the design is the (seeming) trade-off between preventing value drift and allowing for potential value improvement over time. An example of a way to balance those - in the case of a charitable investment fund - would be to appoint trustees that select their own successors. ↩︎

  6. The short-term yearly value drift risk intuitively seems a lot lower than the yearly value drift risk beyond an investor’s lifetime. However, if an investor is able to appoint a successor that he/she deems to have at least as good values as him-/herself, the short- and longer-term risks are arguably very similar. ↩︎

  7. To improve my estimates for the risk of loss and the risk of value drift and to learn more about what might help to minimize these risks, I aim to do a deeper dive both into relevant reference classes (e.g. UK charitable entities that invest a significant proportion of their assets) and case studies (e.g. waqfs). ↩︎

  8. Granted, there are obvious difficulties in estimating one’s own expected value drift rate. ↩︎

  9. For comparability between the estimates I’m considering benefits from direct mortality reduction. ↩︎

  10. As an aside, note that the cost-effectiveness of the giving opportunities available to us might be correlated with the financial returns we are able to make on investment or the risk of expropriation at that particular time. This could both strengthen (it allows for mission hedging) or weaken (we might need the capital exactly when returns are low) the case for investment. However, note that this correlation would likely be a local one in time, whereas if one invests over a longer timescale, most of the financial returns may accrue by or beyond this point, meaning that this would only have a limited influence on the total philanthropic value achieved. ↩︎

  11. This is a guess; I would welcome any available data on this. ↩︎

  12. I haven't yet made estimates from a short-termist/person-affecting perspective, but intend to do so at a later point. ↩︎

28 comments

Comments sorted by top scores.

comment by jackva · 2020-07-06T08:36:09.721Z · EA(p) · GW(p)

Thanks for writing this, this is fascinating!

To me, the assumptions around the issue of risk of loss seem quite optimistic for a couple of reasons:

  • From accounting for the fact that not only existential catastrophes but also catastrophic risks could cause expropriation, you double the rate from 0.1% to 0.2%. But the universe of scenarios where existential catastrophe is avoided but there is enough destabilization vis-a-vis the status quo to drive expropriation (or other ways in which the investment becomes unusable) seems much larger than, not merely on the same order as, existential catastrophes (which is what the mere doubling implies).
  • It is unclear to me how the exit rate of non-profits is a relevant reference class here, given that a lot of the risk is not at the unit level but at the systemic level, so things like economic upheavals / hyperinflation etc. seem a relevant consideration (e.g. "how many non-profit investors survived the Great Depression with their assets intact?")
  • As you write, property rights seem stable now, but that -- in its current level of stability more or less globally -- is a relatively new development and not necessarily a given.

From these considerations, 1% seems like a realistic guess, but it seems -- at least to me -- unlikely to be conservative in the sense of "with high likelihood being pessimistically biased against the argument".

A related "windows of wisdom" argument would be that ability to act in the future might be especially valuable in times where expropriation takes place / there is a certain turmoil, so investing in non-financial assets that do not require the current market order to persist could be relatively more valuable from that angle.

comment by SjirH · 2020-07-07T13:46:46.116Z · EA(p) · GW(p)

Thanks! I largely agree with your comment on the risk of loss and have incorporated it into the new model.

comment by Carl_Shulman · 2020-08-12T19:00:36.325Z · EA(p) · GW(p)
  • My biggest issue is that I don't think returns to increased donations are flat, with the highest returns coming from entering into neglected areas where EA funds are already, or would be after investment, large relative to the existing funds, and I see returns declining closer to logarithmically than flat with increased EA resources;
    • This is not correctly modeled in your guesstimate, despite it doing a Monte Carlo draw over different rates of diminishing returns, because it ignores the correlations between diminishing returns and impact of existing spending: if EA makes truly outsized altruistic returns, it will be by doing things that are much better than typical, and so the accounts on which more neglected activities are the best thing to do now have higher current philanthropic returns as well as faster diminishing returns
    • Likewise, high investment returns are associated with moving along the diminishing returns curve in the future, as diminishing marginal returns are not exogenous when EA is a large share of activity in an area; by drawing investment returns and diminishing returns from separate variables, your results wind up dominated by cases where explosive growth in EA funds is accompanied by flat marginal returns that are extremely implausible because of the missing correlations
    • These reflect a general problem with Guesstimate models: it's easy to create independent draws of variables that are not in fact independent of each other, and get answers that are exponentially off as one considers longer time frames or more variables
  • Regarding prognostications of future equity returns, I think it's worthwhile to follow other fundamental projections in breaking down equity returns into components such as P/E, economic growth, growth in corporate profits as a share of the economy etc; in particular, this reveals that some past sources of equity returns can't be extrapolated indefinitely, e.g. 100%+ corporate profit shares are not possible and huge profit shares would likely be accompanied by higher corporate or investment taxes, while early stock returns involved low rates of stock ownership and high transaction costs
  • When there are diminishing returns to spending in a given year, being forced to spend assets too quickly in response to a surprise does lower efficiency of spending, so regulatory changes requiring increased disbursement rates can be harmful
  • Mission hedging and tying funding to epistemic claims can be very important for altruistic investing; e.g. if scenarios where AI risk is higher are correlated with excess returns for AI firms, then an allocation to address that risk might overweight AI securities
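The correlation pitfall Shulman describes in the second and third bullets can be sketched numerically. The toy Monte Carlo below compares the mean 50-year impact multiplier when the investment growth rate and the rate of diminishing returns are drawn independently versus tied together; all parameter ranges are purely hypothetical and chosen only to show the mechanism, not taken from the Guesstimate model:

```python
import random

random.seed(0)
N = 100_000
YEARS = 50

def mean_multiplier(correlated: bool) -> float:
    """Mean impact multiplier over N simulated worlds. Each world draws
    a long-run yearly growth rate and a 'decay' rate for how fast
    marginal impact declines as the fund grows (both hypothetical)."""
    total = 0.0
    for _ in range(N):
        growth = random.uniform(0.02, 0.12)
        if correlated:
            # Worlds with explosive fund growth also move faster down
            # the diminishing-returns curve: decay is tied to growth.
            decay = 0.5 * growth + random.uniform(0.0, 0.02)
        else:
            # Independent draw, as in a naive Monte Carlo model.
            decay = random.uniform(0.01, 0.08)
        total += ((1 + growth) * (1 - decay)) ** YEARS
    return total / N

mean_independent = mean_multiplier(correlated=False)
mean_correlated = mean_multiplier(correlated=True)
print(mean_independent, mean_correlated)
```

The independent version's mean is dominated by implausible lucky draws that pair high growth with flat marginal returns, so it comes out far higher than the correlated version — the gap Shulman argues inflates long-horizon Guesstimate results.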
comment by abergal · 2020-07-05T10:43:47.894Z · EA(p) · GW(p)

I am worried that investing precludes compounding effects from spending on movement building now that don't have to do with investment. In particular:

  • Maybe we should care more about the fraction of the world's population that's longtermist than the fraction of the world's wealth that we control.
  • Maybe a substantial fraction of the world population can become susceptible to longtermism only via slow diffusion from other longtermists, and cannot be converted through money alone.

That is to say: if there's a sufficient compounding effect from movement building that we can't replace with money, then maybe we should spend a lot now on movement building.

I haven't thought through how much of an effect this is, but something with this flavor feels intuitively compelling to me because we're in a situation now where it would be nice if e.g. key political figures were longtermists, but there's no obvious way to spend money to make that happen.

comment by SjirH · 2020-07-07T14:00:38.854Z · EA(p) · GW(p)
That is to say: if there's a sufficient compounding effect from movement building that we can't replace with money, then maybe we should spend a lot now on movement building.

I agree in principle, though it seems harder to ensure for other categories of movement-building that they will lead to prolonged compounding: encouraging investment seems the most straightforward way to make that happen, but not necessarily the only way.

comment by MichaelA · 2020-07-08T01:44:30.349Z · EA(p) · GW(p)

Relevant quote from Philip Trammell's interview on the 80,000 Hours podcast:

Philip Trammell: [...] in this write-up, I do try to make it clear that by investment, I really am explicitly including things like fundraising and at least certain kinds of movement building which have the same effect of turning resources now, not into good done now, but into more resources next year with which good will be done. I would be just a little careful to note that this has to be the sort of movement building advocacy work that really does look like fundraising in the sense that you’re not just putting more resources toward the cause next year, but toward the whole mindset of either giving to the cause or investing to give more in two years’ time to the cause. You might spend all your money and get all these recruits who are passionate about the cause that you’re trying to fund, but then they just do it all next year.
Robert Wiblin: The fools!
Philip Trammell: Right. And I don’t know exactly how high fidelity in this respect movement building tends to be or EA movement building in particular has been. So that’s one caveat. [Michael's note: Somewhat less relevant from here onwards.] I guess another one is that when you’re actually investing, you’re generally creating new resources. You’re actually building the factories or whatever. Whereas when you’re just doing fundraising, you’re movement building, you’re just diverting resources from where they otherwise would have gone.
Robert Wiblin: You’re redistributing from some efforts to others.
Philip Trammell: Yeah. And so you have to think that what people otherwise would have done with the resources in question is of negligible value compared to what they’ll do after the funds had been put in your pot. And you might think that if you just look at what people are spending their money on, the world as a whole… I mean you might not, but you might. And if you do, it might seem like this is a safe assumption to make, but the sorts of people you’re most likely to recruit are the ones who probably were most inclined to do the sort of thing that you wanted anyway on their own. My intuition is that it’s easy to overestimate the real real returns to advocacy and movement building in this respect. But I haven’t actually looked through any detailed numbers on this. It’s just a caveat I would raise.

(I think he also discusses similar matters in his write-up, but I can't remember for sure.)

comment by vincentweisser · 2020-09-29T16:29:02.248Z · EA(p) · GW(p)

Fantastic initiative! One potentially interesting approach could also be to invest in strongly EA-aligned startups (pathogen detection, AI safety, etc.). Basically like Good Ventures' investments, with a stronger focus on investing and donating much later.

Could also entail funding something like Jade Leung’s long-termist project incubator or something similar to a 100% EA-aligned YCombinator or EntrepreneurFirst. This could become the go-to for EAs who want to start a for-profit venture but also want to enrich a fund that is completely aligned with their vision.

Basically, an EA-aligned longtermist rolling fund that can make both for-profit investments and non-profit donations.

comment by Wayne_Chang · 2020-07-10T03:48:08.294Z · EA(p) · GW(p)

A 7% real investment return over the long term is, in my opinion, highly aggressive. World real GDP growth from 1960 through 2019 was 3.5%. Since the proposed fund expects to invest over “centuries or millennia,” any growth rate faster than GDP eventually takes over the world. Piketty’s r > g can’t work if wealth remains concentrated in a fund with no regular distributions.

Even in the shorter run, it’s unrealistic to expect the fund to implement a leveraged equity-only strategy (or analogous VC strategy):

1) A leveraged approach may not survive (i.e. it may experience -100% returns). Even if the chance is small in any given year, this becomes increasingly likely over a longer horizon. Dynamic leverage strategies can be implemented to reduce this risk, but these likely reduce returns too.

2) A high-risk strategy will result in extremely painful drawdowns. In bad times, any fiduciary running the fund will face enormous pressure to shift to a more conservative strategy. During the Great Depression, US equities declined by nearly 90% during the course of just 3 years, even without leverage. Sticking to the same approach in the face of a potentially worse decline is nearly unimaginable.

3) A consistently leveraged portfolio approach has never been done before over long investment periods. Foundation/university endowments are probably in the most analogous position, and few apply leverage. Harvard tried a modest 5% leverage during the 2000s, and it blew up during the Financial Crisis.

4) Any successful strategy will be mimicked and thus face increasing competition and declining returns. If the fund grows to any significant size, it will start facing competition from itself. For example, Yale’s legendary endowment has seen its returns decline from a ~9.5% real rate over the past 20 years to a ~5.5% one over the past decade. Similarly, given Berkshire Hathaway’s large size, it’s now increasingly difficult for Warren Buffett to beat the stock market.

Indeed, the proposed fund may actually have to be quite conservative for it to survive over time (through broad diversification even into low-return assets) and be accepted by the world (to avoid scrutiny or excess taxation). In my opinion, when investing over centuries with an unprecedented strategy, I would characterize a 2-4% real return (broad asset class diversification that keeps up with world GDP) as reasonable, and a 5%+ real return (all equity with or without leverage) as aggressive.
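The "takes over the world" point above can be made concrete with a quick compounding check. The starting values below are assumptions for illustration (a hypothetical $1B fund, rough 2020 world GDP); the growth rates are the 7% questioned here and the 3.5% historical GDP figure:

```python
# Illustrative only: how long until a 7% fund outgrows a 3.5% world economy?
fund = 1e9          # hypothetical starting fund, $1B
world_gdp = 85e12   # rough 2020 world GDP in dollars (assumption)
fund_growth = 0.07  # the contested 7% real return
gdp_growth = 0.035  # long-run real world GDP growth

years = 0
while fund < world_gdp:
    fund *= 1 + fund_growth
    world_gdp *= 1 + gdp_growth
    years += 1
print(years)
```

Even from a tiny base, the fund overtakes total world output within a few centuries — well inside the "centuries or millennia" horizon the proposal contemplates, which is why a sustained GDP-beating return looks implausible.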

comment by Carl_Shulman · 2020-08-12T18:29:25.100Z · EA(p) · GW(p)

I agree risks of expropriation and costs of market impact rise as a fund gets large relative to reference classes like foundation assets (eliciting regulatory reaction) let alone global market capitalization. However, each year a fund gets to reassess conditions and adjust its behavior in light of those changing parameters, i.e. growing fast while this is all things considered attractive, and upping spending/reducing exposure as the threat of expropriation rises. And there is room for funds to grow manyfold over a long time before even becoming as large as the Bill and Melinda Gates Foundation, let alone being a significant portion of global markets. A pool of $100B, far larger than current EA financial assets, invested in broad indexes and borrowing with margin loans or foundation bonds would not importantly change global equity valuations or interest rates.

Regarding extreme drawdowns, they are the flip side of increased gains, so they are a question of whether investors have the courage of their convictions regarding the altruistic returns curve that funds use to set risk-aversion. Historically, Kelly criterion leverage on a high-Sharpe portfolio could have provided some reassurance by staying ahead of a standard portfolio over very long time periods, even with great local swings.
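For readers unfamiliar with the Kelly reference: for a log-utility investor facing approximately normal excess returns, the growth-optimal leverage is the equity premium divided by the variance of returns. The parameters below are hypothetical round numbers, not a recommendation:

```python
# Kelly-style optimal leverage under assumed (hypothetical) parameters.
equity_premium = 0.05   # expected real excess return over the borrowing rate
volatility = 0.16       # annualized standard deviation of returns

kelly_leverage = equity_premium / volatility**2          # ~2x leverage
growth_at_kelly = equity_premium**2 / (2 * volatility**2)  # added log-growth
print(kelly_leverage, growth_at_kelly)
```

Under these assumptions the Kelly-optimal portfolio is roughly 2x leveraged, which is why a high-Sharpe portfolio can historically have stayed ahead of an unleveraged one over very long horizons despite severe interim drawdowns.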

comment by MichaelA · 2020-08-29T17:46:18.152Z · EA(p) · GW(p)

It's possible it'd be worth updating the section on value drift, and maybe the estimates you use, in light of the estimates Ben Todd collects and makes in this new post [EA · GW].

(Though his estimates are actually quite similar to yours, I think.)

comment by MichaelA · 2020-07-06T03:35:29.535Z · EA(p) · GW(p)

Thanks for this post! I found it quite interesting and useful.

One thing that stood out to me in particular was the distinction you made between exogenous learning and endogenous learning. It often seems hard to tease apart "doing good now" - or whatever we wish to call it - from "punting to the future", and to determine which is better. And this seems to in part be due to the ways that doing good now can also help us do good later, and thus have similar effects to punting. (I plan to write a post related to this soon.) So I think that future discussions on the topic can likely benefit from that explicit conceptual distinction between how our knowledge will improve if we simply wait and how our knowledge will improve if we do something that improves our knowledge.

I also liked the distinction between changes in availability of opportunities and changes in how much we know about opportunities (learning), for similar reasons.

It happens to be that I was also working on a post with a somewhat similar scope to this one, and to some extent to Michael Dickens' post. My post was already drafted, but not published, and is entitled Crucial questions about optimal timing of work and donations. I'd say the key differences in scope are that my draft surveys a somewhat broader set of questions, and makes less of an effort to actually provide estimates or recommendations (it more so overviews some important questions and arguments, without taking a stance).

My draft's marginal value is probably lower than I'd expected, given that this good work by you and Dickens has now been published! But feel free to take a look, in case it might be useful - and I'd also welcome feedback. (That goes for both Sjir and other readers.)

I suspect what I'll do is make a few tweaks to my draft in light of the two new posts, and then publish it as another perspective or way of framing things, despite some overlap in content and purpose.

comment by SjirH · 2020-07-07T13:48:40.747Z · EA(p) · GW(p)

Thank you MichaelA; happy to hear this was useful to you. I look forward to reading your post as well.

comment by Mati_Roy · 2020-11-05T11:44:01.706Z · EA(p) · GW(p)

I can't find the donate button on Founders Pledge. Do you have no more room for additional funding?

comment by SjirH · 2020-11-07T23:21:11.238Z · EA(p) · GW(p)

We certainly do, though we normally receive our funding from a small group of closely-connected funders rather than collecting donations publicly. But if you're interested in making a donation, please do reach out to info@founderspledge.com :).

comment by Wayne_Chang · 2020-07-11T00:52:42.961Z · EA(p) · GW(p)

I don’t think it makes sense to compound the model distributions (e.g. from 1 year to 10 years). Doing so leads to non-intuitive results that are difficult to justify.

1) Compounded model results (e.g. 10x impact in 10 years) are highly sensitive to the arbitrarily assumed shape, range, and skewness parameters of the variable distributions. Also, these results will vary wildly from simulation to simulation depending on the sequence of random draws. This points to the model's fragility and leads to unnecessary confusion.

2) The parameter estimates may use annualized growth rates, but they need not correspond to an annual time frame. Indeed, it is more realistic to make estimates for longer horizons because short-term noise averages out (i.e. Law of Large Numbers). In other words, it is far easier to estimate a variable's expected mean than its underlying distribution. Estimates for the expected mean will already be highly uncertain. I don't think it's possible to reasonably defend distribution assumptions of the variables themselves.

The exercise is to compare giving-today vs. investing-to-give-later. The post usefully identifies key variables in this consideration. I think the most it can do is propose useful estimates of these variables’ expectations over the long run (i.e. their averages over time) and their key uncertainties (i.e. Knightian uncertainty and non-quantifiable distribution parameters). If the product of these expectations is above 1, it makes sense to give later. If it falls below 1, it makes sense to give now. Reasonable areas of uncertainty can be further discussed and debated. Already, there will be much irreconcilable (rational) disagreement. Compounding returns using arbitrary distribution parameters won’t (and shouldn’t) reconcile any differences and likely confuses the matter.

comment by Denkenberger · 2020-07-09T05:46:50.204Z · EA(p) · GW(p)

Thanks for the useful model. I think you should report ranges, because one would expect, with a one-year multiplier of 1.01, a 10-year multiplier of 1.01^10 ≈ 1.10. Even with the ranges, it seems counterintuitive to me. If you take the 5th-percentile multiplier of 0.84, that gives a 10-year multiplier of 0.84^10 ≈ 0.17, which is close to the Guesstimate result. However, if you take the 95th-percentile multiplier of 1.3, that gives a 10-year multiplier of about 14, which is very different from the Guesstimate value of about 1000. I assume this is because of the fat tail. This shows that this is a high-risk strategy from the perspective of the donor: more than 50% of the time they have a smaller impact by investing. But they have some chance of having an enormous impact with investing.
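The mean-versus-median gap Denkenberger points to can be reproduced with a small simulation. The lognormal below is an illustrative stand-in for the Guesstimate distribution (median multiplier ~1.01, spread chosen only to show the fat-tail mechanism, not fitted to the actual model):

```python
import math
import random

random.seed(1)
N = 200_000
YEARS = 10

# Hypothetical fat-tailed yearly impact multiplier: lognormal with
# median ~1.01 and a wide spread (parameters are assumptions).
MU, SIGMA = math.log(1.01), 0.5

products = []
for _ in range(N):
    # Compound 10 independent yearly draws by summing their logs.
    log_total = sum(random.gauss(MU, SIGMA) for _ in range(YEARS))
    products.append(math.exp(log_total))

products.sort()
median_10y = products[N // 2]
mean_10y = sum(products) / N
share_below_1 = sum(p < 1 for p in products) / N
print(median_10y, mean_10y, share_below_1)
```

The compounded mean ends up several times the compounded median, driven by rare huge draws, while a large share of runs still finish below 1 — the "high risk, some chance of enormous impact" profile described above.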

comment by MichaelA · 2020-07-06T03:36:12.085Z · EA(p) · GW(p)

Some thoughts on value drift:

1. I've collected a bunch of relevant sources here [EA(p) · GW(p)], which you or other readers may find useful.

2.

For instance, these three sources (1 [EA · GW],2 [EA · GW],3 [EA · GW]) collectively suggest a yearly value drift rate of ~10% for individuals within the effective altruism community.
However, the short-term value drift rate also seems much easier to influence positively [...]
Given the availability of these strategies, I currently see 2% as a conservative estimate for the short-term[5] [EA · GW] value drift rate for a strongly committed and strategic investor-philanthropist.

I think it would be reasonable for one's best guess of the value drift rate for a "strongly committed and strategic investor-philanthropist" to be notably below the ~10% suggested by those three sources. This is because (a) those three sources don't provide very robust evidence, and (b), as you note, a strategic investor-philanthropist could make a conscious effort to reduce their value drift rate.

But (b) also seems a quite speculative and non-robust argument, at this stage. So it doesn't seem to me that 2% should be called a "conservative" estimate. It also seems like 0.5%, the "best guess" used in the spreadsheet, is a very low estimate, given the evidence we have. Is there other evidence you have in mind that leads you to see 2% as conservative, and 0.5% as a best guess?

(To be clear, I currently, tentatively lean towards the idea that EAs should likely move more in the direction of patient philanthropy. And I don't think these points would overturn that. But they might temper it somewhat.)

3.

this estimate depends a lot on the hypothetical investor-philanthropist in question, so I invite the reader to make their own estimates based on the case they are considering
[Footnote:] Granted, there are obvious difficulties in estimating one’s own expected value drift rate.

One difficulty that seems to me especially worth noting is the end-of-history illusion: "a psychological illusion in which individuals of all ages believe that they have experienced significant personal growth and changes in tastes up to the present moment, but will not substantially grow or mature in the future".

I would expect this to cause a systematic bias towards underestimating one's own likelihood of value drift. (But it's hard to say how strong that bias would be, or whether it's outweighed by other factors.)

comment by jackmalde · 2020-07-06T11:55:11.130Z · EA(p) · GW(p)

I wonder if we need to make a very clear distinction between value drift in the case of an individual investing their own money with the intention of donating it later on, and value drift in the case of an individual legally-binding themselves to donate, for example by giving to a donor-advised fund.

In the latter case, which seems to be the most relevant in this context, I think the linked sources and the ~10% individual value drift figure are pretty irrelevant. A priority should probably be estimating a specific value drift rate in the case of legally bound giving, which will require some historical research into donor-advised funds or similar legal vehicles.

So MichaelA when you say "0.5% is very low given the evidence we have", I'm not convinced we actually have any relevant evidence at all, or at least I haven't seen it be presented.

comment by MichaelA · 2020-07-07T07:26:16.929Z · EA(p) · GW(p)

I definitely agree that:

  • The distinction you raise is important
  • The linked sources are most relevant to value drift among "individual[s] investing their own money with the intention of donating it later on"
  • That what's most relevant here is instead value drift among "individual[s] legally-binding themselves to donate, for example by giving to a donor-advised fund"
  • And that it would be valuable to do historical research relevant to the latter kind of value drift

(And I think those points are not merely true but important.)

But I also think that:

  • The linked sources seem somewhat relevant to the latter type of value drift, and worth using as a starting point, if we have little else to go on.
    • Consider that we always have to generalise from one context to another, and any historical research we do that seems more relevant to the "legally binding" or "donor-advised" aspects of the matter at hand might also be less relevant to the "EA" and "modern society" aspects.
  • The (I think?) purely speculative arguments as to why the latter type of value drift would occur at a lower rate do seem worth bringing up, and worth using to update one's estimates. However, it's not clear to me that those arguments are more robust than trying to generalise from the semi-relevant data we have would be.
  • Under such conditions, my conservative guess at the relevant value drift rate would be close to the 10% level, not 5 times lower.
  • If I was to decide that the linked sources' data was totally irrelevant, then it'd seem this post doesn't really provide any relevant data, only speculative argument. (Though there is data elsewhere that's arguably relevant, e.g. regarding waqfs.) Under those conditions, I think the range of value drift rates I'd see as plausible would stretch from close to 0% to close to 100%, and thus my conservative guess might have to be quite high.
comment by jackmalde · 2020-07-07T11:00:06.079Z · EA(p) · GW(p)

That all makes sense. I do think we need to make a clear distinction between 'individual' value drift and 'legally bound' value drift, but you're probably right that the ~10% may be the best starting point we have for the latter.

It might be that the only way to get a decent estimate of legally bound value drift in an EA setting is to actually set up a fund and see what happens. I suspect it would make sense to start cautiously with putting money into the fund until low value drift has been demonstrated (which would admittedly take some time - perhaps a few generations). Overall I suspect it would be worth setting up such a fund for its informational value.

comment by SjirH · 2020-07-07T13:49:41.967Z · EA(p) · GW(p)

Thanks both! I largely agree and have incorporated an updated estimate into the new model (see above).

comment by Ben_Kuhn · 2020-07-05T12:29:06.020Z · EA(p) · GW(p)

Some of your "conservative" parameter estimates are surprising to me.

For instance, your conservative estimate of the effect of diminishing marginal returns is 2% per year or 10% over 5y. If (say) the total pool of EA-aligned funds grows by 50% over the next 5 years due to additional donors joining—which seems extremely plausible—it seems like that should make the marginal opportunity much more than 10% less good.

You also wrote

we’ll stick with 5% as a conservative estimate for real expected returns on index fund investing

but used 7% as your conservative estimate in the spreadsheet and in the bottom-line estimates you reported.

comment by SjirH · 2020-07-05T18:35:49.389Z · EA(p) · GW(p)
If (say) the total pool of EA-aligned funds grows by 50% over the next 5 years due to additional donors joining—which seems extremely plausible—it seems like that should make the marginal opportunity much more than 10% less good.

I'm not sure whether it would, considering, for example, the large room for funding that GiveWell opportunities have had for multiple years (and will likely keep having) and their seemingly hardly-diminishing cost-effectiveness on the margin (though data are obviously noisy here/there are other explanations).

But I do take your point that this is not a very conservative estimate. I'll update them from 1%/2% to 2%/4%, thank you!

but used 7% as your conservative estimate in the spreadsheet and in the bottom-line estimates you reported.

See the rest of the paragraph you refer to: the 5% is my conservative estimate for index investing, the 7% for investing more generally.

comment by Carl_Shulman · 2020-08-12T18:35:46.134Z · EA(p) · GW(p)

GiveWell top charities are relatively extreme in the flatness of their returns curves among areas EA is active in, which is related to their being part of a vast funding pool of global health/foreign aid spending, which EA contributions don't proportionately increase much.

In other areas like animal welfare and AI risk EA is a very large proportional source of funding. So this would seem to require an important bet that areas with relatively flat marginal returns curves are and will be the best place to spend.

comment by Grayden · 2020-09-13T14:22:04.655Z · EA(p) · GW(p)

A couple of points:

1) “We hence conservatively assume that a skilled investor can achieve 7% expected real returns” – I’m an investor (hopefully a skilled one), but I would certainly not think of 7% as conservative. Yes, historically, real equity returns have been c.5%. That is indeed the correct prior to use when forecasting, but you then need to overlay other things about the future. Importantly, while the historical real risk-free rate was around 2% for much of the period you quote (source: http://www.econ.yale.edu/~shiller/data/chapt26.xlsx), it is now less than -1% (source: https://www.federalreserve.gov/releases/h15), which should lower your estimate straight away from 5% to 2%. You can boost expected returns through leverage (though, as you correctly say, this does have a cost). I would disagree about venture capital investment having higher returns. This may be the case on a post-tax basis, but it is not on a pre-tax basis (which is what is most relevant for non-profits). I would not assume you are able to capture any premium from ‘information’. There is a whole industry competing for this and it is hard to do.

2) Your Guesstimate model assumes exogenous learning of +9.3% p.a. This input dwarfs all other variables, so it would be helpful if you could expand on how you reached it. It’s hard to critique something that is not explained (at least as far as I can see), but I think you may have fallen into the trap of looking at historical efficiency improvements brought about by the scaling up of technology. As technology improves, the price comes down. But that price only comes down if you develop and manufacture the technology. Moore’s Law didn’t start until humanity built the first computers.

comment by alexherwix · 2020-07-21T13:55:45.646Z · EA(p) · GW(p)

Thanks for the post; it is interesting to see how other people are thinking about this question, and I see it as valuable, although I am also somewhat critical of this whole endeavor.

Maybe I am too naive or not thinking deeply enough, but with all of these giving now vs. giving later discussions I am somewhat worried about the mindset underlying such considerations. While I appreciate people investing time and resources into trying to understand how to have the biggest impact, just taking the perspective of a single investor comes across as somewhat narrow-minded and selfish. What you basically seem to be calculating is the optimal degree of free riding that you can get away with to maximize the impact of your own dollars. Maybe it's good to know where that optimal point seems to be, but I am somewhat worried about this becoming the underlying philosophy of longtermist giving.

For instance, longtermism is itself a rather new idea, and people thinking about how they can invest as little as possible seems... yes, to some degree rational, but also pretty risky in terms of ensuring success, given the many options for failure that exist in our world. I note that "capacity building" interventions are often explicitly excluded from these giving-later considerations, but giving off the whole vibe of "let's free-ride as much as possible" doesn't seem to bode well for such initiatives either. There is such a thing as image, perception, and momentum, and it really feels like this is strongly neglected in these kinds of discussions.

Having said that, I am in favor of longtermist thinking, but I would encourage taking a broader "community level" perspective. Wouldn't it be more effective to think about optimal rates of investment into community growth, then look for ways to reach those numbers and distribute them fairly, rather than focusing on the best outcome for an individual investor and then circling back to what this means for the community? I mean, your whole calculation depends on the possible return on investment that you can get from giving now vs. giving later. If we don't have a clear sense of what that RoI is right now, how can we make good individual decisions?

Open to be shown the errors in my thinking!

comment by MichaelDickens · 2020-07-27T22:56:45.892Z · EA(p) · GW(p)

What you basically seem to be calculating is the optimal degree of free riding that you can get away with to maximize the impact of your own dollars.

If other people spend too much now and not enough later, then by investing, you do more good for the world than if you spent now. This maximizes the impact of your own dollars without reducing the impact of anyone else's, so it increases the total well-being of the world. And it's the optimal strategy if your goal is to maximize total well-being.

comment by alexherwix · 2020-07-28T14:48:56.975Z · EA(p) · GW(p)

Thanks for the counterpoint, I think that's an interesting perspective and in the abstract valid.

Nevertheless, as far as I can tell, in practice these discussions don't seem to focus on assessing whether "other people spend too much now and not enough later" beyond the general assertion that people tend to discount the future and the conclusion that, thus, there are opportunities to gain comparatively by investing.

However, what I haven't really seen are good arguments that people are actually spending too much now and not enough later[1] or models which model this aspect in some way. In another comment [EA · GW] I have outlined in more detail, why I think that it is important to explicitly consider the "nature" of problem solving when making such analyses and decisions.

Long story short, I think current models of giving now vs. giving later are way too simple, and additional considerations about problem solving in general lead me to believe that giving later should not become "the default" for longtermist giving - at least until we have set up an appropriate infrastructure to effectively identify and address problems as they arise. However, I don't want to misrepresent the position of giving-later advocates, who have often acknowledged that giving now that takes the form of "investments" (as I am suggesting) is somewhat exempt from the discussion. I agree that there might be substantial room for investments as part of wise philanthropic activity; I just don't think it's a winning strategy by itself. Thus, what I mostly seem to disagree with is the framing and emphasis of the debate.

Circling back to my comment on free riding: simply postponing giving into the future under the assumption that other people will figure out what to do by then seems dangerous unless appropriate measures are taken to ensure that actual progress does happen at a reasonable rate, as the world could also become much worse (e.g. climate change). However, postponing giving into the future makes the individual who is postponing comparatively better off in the future, which would be a plus. Thus, there is an interesting dilemma here, where altruists who are not 100% aligned could get into conflict about who should invest when, and how much, to maximize overall expected value.

To avoid potential conflicts as much as possible, care should be taken to communicate why specific decisions to give now or later were made and how this is expected to affect the community as a whole. For instance, I would expect an organization considering giving later at a large scale, like Founders Pledge, to clearly articulate its strategy and what the EA community can expect from it now and in the future, in a way that can be checked for value alignment over time. Otherwise, it seems totally plausible that opaque behavior could be perceived as free riding on the investments of the community as a whole.


  1. To me that notion actually seems a little paradoxical: giving later seems to imply that there will be better opportunities in the future, but at the same time we seem to expect less giving then. Economics 101 would suggest that better opportunities would attract more buyers. Thus, wouldn't we need some other type of argument, one that considers the nature of the problem under consideration, to justify giving later? ↩︎