Crucial questions about optimal timing of work and donations

post by MichaelA · 2020-08-14T08:43:28.710Z · score: 37 (12 votes) · EA · GW · 4 comments

Contents

  Introduction
  How will “leverage over the future” change over time?
    What should be our prior regarding how leverage over the future will change? What does the “outside view” say?
    How will our knowledge about what we should do change over time?
    How will the neglectedness of longtermist causes change over time?
    What “windows of opportunity” might there be? When might those windows open and close? How important are they?
    Are we biased towards thinking the leverage over the future is currently unusually high? If so, how biased?
      How often have people been wrong about such things in the past?
    If leverage over the future is higher at a later time, would longtermists notice?
  How effectively can we “punt to the future”?
    What would be the long-term growth rate of financial investments?
    What would be the long-term rate of expropriation of financial investments? How does this vary as investments grow larger?
    What would be the long-term “growth rate” from punting activities other than financial investment?
    Would the people we’d be punting to act in ways we’d endorse?
  Which “direct” actions might have compounding positive impacts?
  Do marginal returns to “direct work” done within a given time period diminish? If so, how steeply?
  Directions for future work

This post is part of Convergence Analysis’s broader project on crucial questions for longtermists. The overview of this project [EA · GW] explains its purpose and scope, and outlines the crucial questions we’ve identified; we recommend reading that post before this one.

Introduction

Suppose your top altruistic priority - or one of them - is improving the expected value of the long-term future, such as by reducing existential risks [EA · GW].[1] There are a wide range of strategies you could take to achieve these goals, and a wide range of questions [EA · GW] worth asking to inform your choice of strategy. One important set of decisions you’ll have to make relates to the optimal timing of work and donations.[2]

For example, if you’re thinking about doing good through donations:

  1. Should you donate to effective charities now?
  2. Or should you invest to donate more later in your life?
  3. Or should you put your money in a foundation that’s meant to disburse it effectively sometime after your death?

And if you’re thinking about doing good through your work:

  1. Should you try to influence “current” events that affect the future “directly”?
    • E.g., improve the chances that, if AGI is developed in the next decade, that transition goes well
    • Following MacAskill [EA · GW], this post will refer to such strategies as involving “direct work”.[3]
  2. Or should you try to build your ability to do “direct work” later in your life?
    • E.g., gain networks and skills that position you for reducing risks from AGI development decades from now
  3. Or should you try to “punt to the future” [EA(p) · GW(p)]?
    • E.g., engage in movement-building or abstract strategic research

This post will overview the crucial questions that we (Convergence) believe do or should influence different longtermists’ views and choices regarding the best timing of work and donations. People’s beliefs about these questions can thus be seen as cruxes underlying their beliefs about optimal timing.

We hope this can serve as something of an orientation to, “research agenda” for, and structured reading list regarding the matter of optimal timing for longtermist work and donations.

Here are the questions we’ll cover:

Some things to note:

How will “leverage over the future” change over time?

What will be the “hinge of history [EA · GW]”, the “most influential time”, or the “precipice”?[7] During which period will direct work (as opposed to punting to the future [EA(p) · GW(p)]) have the highest leverage? How long will that period be? Will there be multiple such periods? Is one period now? How much higher than usual (or higher than now) will the leverage during that period be?

In general:

  1. The more a person thinks that leverage is now unusually high, the more inclined they may be towards doing or supporting direct work (e.g., diplomacy to reduce risks of great power wars, or ALLFED’s work to improve resilience to catastrophes that could occur in the near-term).

    • E.g., Toby Ord wrote a book whose central theme was that (a) we are probably currently at “the precipice”, and (b) that fact strengthens the argument for currently prioritising working on existential risk reduction.
    • This particular implication will be stronger the shorter the person thinks the current high-leverage period will be. E.g., if someone thinks that we’re now in a high-leverage period, but that leverage will remain high for centuries, their views and choices about timing would likely be determined by other questions discussed in this post (e.g., how strongly and lastingly the impacts of various actions “compound”).
  2. The more a person thinks that leverage isn’t now unusually high, the more inclined they may be to try to punt to the future (e.g., through investment, movement-building).

    • E.g., MacAskill writes that, “If we think that today is not exceptionally different from times in the past”, we have good reason to find promising the actions of “saving in a long-term foundation, or movement-building, with the aim of increasing the amount of resources longtermist altruists have at a future, more hingey time”.
    • Strategic views and choices should also depend on how long from now one thinks higher leverage periods would be, if they aren’t now. As hypothetical examples:
      • People who think the highest leverage period will be just decades from now might favour very similar strategies to those favoured by people who think we’re now in a multi-decade high-leverage period. E.g., supporting AI alignment work that’s based on current systems.
      • People who think the highest leverage period will be a century or more from now might favour strategies like setting up a foundation that will donate in effective and value-aligned ways later. They likely won’t simply save to give later in their own lifetimes, because they wouldn’t be likely to live to see the higher-leverage period.
      • People who think the highest leverage period will probably be millennia from now, but might be now, might favour acting as though the highest leverage period is now, because they think millennia-long chains of impact would be too hard to predict.
  3. The more a person thinks that there will be no substantial difference between how high leverage is now vs. at any future time, the more likely it is that the person’s time-related strategic choices will be determined by other questions discussed in this post.

Some other topics or questions this question is especially related to include:

(For sources relevant to those three matters, follow the links from the overview of this project [EA · GW].)

Some relevant existing work includes:

What follows are some of the more fine-grained “sub-questions” that inform many people’s beliefs about how “leverage over the future” will change over time.

What should be our prior regarding how leverage over the future will change? What does the “outside view” say?

MacAskill [EA · GW], commenters on his post, and Ord have provided lengthy and technical discussion of this question, which seems central to MacAskill’s and Ord’s differing views. I won’t summarise that discussion here.

How will our knowledge about what we should do change over time?

One reason longtermists and altruists may have more leverage later than they have now is if they later have better knowledge about what to do. For example, MacAskill [EA · GW] writes:

Perhaps we’re at a really transformative moment now, and we can, in principle, do something about it, but we’re so bad at predicting the consequences of our actions, or so clueless about what the right values are, that it would be better for us to save our resources and give them to future longtermists who have greater knowledge and are better able to use their resources, even at that less pivotal moment.

MacAskill also writes:

There are at least three ways in which our knowledge is changing or improving over time, and it’s worth distinguishing them:

  1. Our basic scientific and technological understanding, including our ability to turn resources into things we want.
  2. Our social science understanding, including our ability to make predictions about the expected long-run effects of our actions.
  3. Our values.

(MacAskill provides the caveat that “It’s more contentious whether we’re improving on (3) — for this argument one’s meta-ethics becomes crucial.”)

Similar points are also discussed by Ord and Cotton-Barratt (both using the term nearsightedness), Christiano, Tomasik, Dickens [EA · GW], Shlegeris [EA · GW], and Shulman [EA(p) · GW(p)].

Hoeijmakers [EA · GW] makes an important distinction between endogenous and exogenous learning:

Endogenous learning is the learning that the investor-philanthropist brings about themselves, e.g. by funding research or trying things out. [...]

Exogenous learning includes advances in the scientific community, new philanthropic interventions being invented and/or tried out, moral progress, and more. It also captures the time needed for relevant knowledge to become available, e.g. an experiment might take time, research might need to be done in a certain order, or there might be a talent constraint in a research area that takes time to be resolved.

The possibility for exogenous learning is the focus of this question. The more exogenous learning one expects, the later the optimal timing for work and donations is likely to be. In contrast, the possibility to cause endogenous learning can be a reason to “act soon”, and is related to the questions (covered below):

How will the neglectedness of longtermist causes change over time?

One reason longtermists may have less leverage later than they have now is if the sorts of work they’d wish to see done become less neglected over time. Reasons this could happen include:

Similar points are discussed by MacAskill [EA(p) · GW(p)], Shulman [EA(p) · GW(p)], Trammell, and Cotton-Barratt.

Conversely, the neglectedness of longtermist causes might increase over time, for reasons including the possible collapse or fizzling out [EA(p) · GW(p)] of the EA and longtermist movements. This could allow for more leverage later. This is discussed by MacAskill [EA(p) · GW(p)].

As noted by MacAskill [EA(p) · GW(p)], the implications of answers to this question also depend on how steeply marginal returns to (various types of) direct work diminish.

As with changes in knowledge:

What “windows of opportunity” might there be? When might those windows open and close? How important are they?

There may be some problems which can’t be effectively worked on until a certain time, or can’t be as effectively worked on before a certain time as after that time. That is, for some problems, there’s a window of opportunity that opens at a particular time. In some cases, the window may already be open. For example, it seems a dedicated and resourceful group of people in 2020 stand a far better chance of deliberately influencing how quantum computing will be used than such a group of people in the 16th century would’ve. In other cases, the window may be yet to open. For example, perhaps it will be easier to influence space governance [EA · GW] once humanity is closer to colonising space.

Additionally, there may be some problems which can’t be effectively worked on after a certain time, or can’t be as effectively worked on after a certain time. That is, the window of opportunity might close; there might be a deadline. For example, it’s impossible to prevent an existential catastrophe after it has occurred. For another example, people are continually making decisions about things like what jobs to take, where to donate to, how to design systems, and what policies to advocate for or implement. Once each decision is made (or implemented), the window for influencing it closes. Thus, work that would influence many such decisions could be more valuable the sooner it is done.

Relatedly, Ord writes that:

[One major effect which can make earlier labour matter more is] if it helps to change course. If we are moving steadily in the wrong direction, we would do well to change our course, and this has a larger benefit the earlier we do so. For example, perhaps effective altruists are building up large resources in terms of specialist labour directed at combatting a particular existential risk, when they should be focusing on more general purpose labour. Switching to the superior course sooner matters more, so efforts to determine the better course and to switch onto it matter more the earlier they happen.

Shlegeris [EA · GW] makes similar points in relation to work on AI safety (see his “Analogy to security”). And similar points seem to often be raised in relation to why present-day work on AI policy may be important. For example, Moës states:

So these [AI] policies are getting written right now, which at first is quite soft and then becomes harder and harder policies, and now to the point that at least in the EU, you have regulations for AI on the agenda, which is one of the hardest form[s] of legislation out there. Once these are written it is very difficult to change them. It’s quite sticky. There is a lot of path dependency in legislation. So this first legislation that passes, will probably shape the box in which future legislation can evolve. Its constraints, the trajectory of future policies, and therefore it’s really difficult to take future policies in another direction. So for people who are concerned about AGI, it’s important to be already present right now.

That said, it’s also worth noting that, the more decisions one can influence and path-dependencies one can create, the larger the downside risks [? · GW] an action might have. For example, one might lock in suboptimal choices or crowd out other efforts (see Wiblin & Lempel).

Generally speaking, the likelier it is that there’s a not-yet-open window of opportunity for working on a particular problem, and the longer it’s likely to be until that window opens, the more that pushes in favour of:

  1. Punting to the future, rather than supporting or doing direct work
  2. Punting to the further future than one would’ve otherwise punted to
  3. Prioritising work on other problems, whose windows of opportunity are more likely to be open, or to open sooner

In contrast, generally speaking, the likelier it is that there’s a window of opportunity for working on a particular problem that’s open but will close in future, and the sooner that window is likely to close, the more that pushes in favour of:

  1. Doing or supporting direct work
  2. Punting to the relatively near future (if one plans to punt)
    • For example, investing or movement-building in ways targeted to “pay off” in a few years, rather than many decades from now
  3. Prioritising work on that particular problem, rather than work on problems that are less likely to have windows that will close in future, or whose windows are likely to close later[8]
    • For example, mitigating risks that could strike in the next few years, rather than risks that seem larger but that are likely 5+ years away
    • For another example, prioritising strategies for AI alignment that work if timelines to transformative AI turn out to be short, relative to strategies that work if timelines are longer

Related points are also discussed by Dickens [EA · GW], Denkenberger, and Dixon [EA · GW].

There are multiple reasons why it can make sense to prioritise work on problems that are likelier to have windows of opportunity that’ll close relatively soon. One reason is that, compared to other problems, these problems may ultimately receive less work in total. This may increase the marginal returns to work on these problems, if marginal returns to work diminish. This point is discussed by Cotton-Barratt.

This question is especially related to, or perhaps hard to disentangle from, the questions:

Additionally, answers to this question could inform answers to the question “How effectively can we ‘punt to the future’?”

Are we biased towards thinking the leverage over the future is currently unusually high? If so, how biased?

MacAskill [EA · GW] discusses this question. For example, he writes:

Informally, the core argument against HoH [the Hinge of History Hypothesis] is that, in trying to figure out when the most influential time is, we should consider all of the potential billions of years through which civilisation might exist. Out of all those years, there is just one time that is the most influential. According to HoH, that time is… right now. If true, that would seem like an extraordinary coincidence, which should make us suspicious of whatever reasoning led us to that conclusion, and which we should be loath to accept without extraordinary evidence in its favour.

[...] it seems to me there’s a strong risk of bias in our assessment of the evidence regarding how influential our time is, for a few reasons:

Salience. It’s much easier to see the importance of what’s happening around us now, which we can see and is salient to us, than it is to assess the importance of events in the future, involving technologies and institutions that are unknown to us today, or (to a lesser extent) the importance of events in the past, which we take for granted and involve unsalient and unfamiliar social settings.

Confirmation. For those of us, like myself, who would very much like for the world to be taking much stronger action on extinction risk mitigation (even if the probability of extinction is low) than it is today, it would be a good outcome if people (who do not have longtermist values) think that the risk of extinction is high, even if it’s low. So we might be biased (subconsciously) to overstate the case in our favour. And, in general, people have a tendency towards confirmation bias: once they have a conclusion (“we should take extinction risk a lot more seriously”), they tend to marshall arguments in its favour, rather than carefully assess arguments on either side, more than they should. Though we try our best to avoid such biases, it’s very hard to overcome them.

Track record. People have a poor track record of assessing the importance of historical developments. And in particular, it seems to me, technological advances are often widely regarded as being more dangerous than they are. Some examples include assessment of risks from nuclear power, horse manure from horse-drawn carts, GMOs, the bicycle, the train, and many modern drugs.[4]

I don’t like putting weight on biases as a way of dismissing an argument outright (Scott Alexander gives a good run-down of reasons why here). But being aware that long-term forecasting is an area that’s very difficult to reason correctly about should make us quite cautious when updating from our prior.

A similar point is also briefly discussed by Baumann [EA · GW].

This question is especially related to the question “If leverage is higher at a later time, would longtermists notice?”

How often have people been wrong about such things in the past?

Some of MacAskill’s above-quoted arguments would seem to predict that people in history would’ve often, mistakenly, believed themselves to be in high-leverage periods. So evidence on how often people have made such predictions, ideally relative to how often they’ve considered themselves to not be in high-leverage periods, could help us assess how biased we might be towards thinking leverage is currently unusually high.

Focusing on existential risk estimates rather than specifically discussions of leverage, Fodor [EA · GW] writes:

[T]here is a very long history of predicting the end of the world (or the end of civilisation, or other existential catastrophes), so the baseline for accuracy of such claims is poor

On the other hand, Gwern [EA(p) · GW(p)] argues that some people in history have thought their time was less “special” or less of an “exception” than it really was (though note that this isn’t quite the same matter as how high leverage over the future was in those times). And Trammell [EA(p) · GW(p)] writes:

On my cursory understanding of history, it’s likely that for most of history people saw themselves as part of a stagnant or cyclical process which no one could really change, and were right. But I don’t have any quotes on this, let alone stats. I’d love to know what proportion of people before ~1500 thought of themselves as living at a special time.

Bostrom provides some support for the idea that most people through history saw development during their times as stagnant or cyclical.

If leverage over the future is higher at a later time, would longtermists notice?

Lewis [EA(p) · GW(p)] writes:

The invest for the future strategy[9] seems to rely on our descendants improving their epistemic access to the point where they can reliably determine whether they're at a 'hinge' or not, and deploying resources appropriately. There are grounds for pessimism about this ability ever being attained. Perhaps history (or the universe as a whole) is underpowered for these inferences.

[...] If we grant the ground truth is occasional 'crucial moments', but we expect evidence at-the-time for living in one of these is scant, my intuition is the optimal strategy would [be] to husband resources to spend these disproportionately when the evidence gives some (but not decisive) indication one of these crucial moments is now.

Depending on how common these 'probably false alarms' are (plus things like how reliably can we steward resources for long periods of time), this might amount to monomaniacal work on immediate challenges. E.g., the prior is (say) 1/million this decade, but if the evidence suggests it is 1%, perhaps we should drop everything to work on it, if we won't expect our credence to be this high again for another millenia.

MacAskill’s reply [EA(p) · GW(p)] included:

I think if that were one's credences, what you say makes sense. But it seems hard for me to imagine a (realistic) situation where I think that it's 1% chance of HoH this decade, but I'm confident that the chance will [be] much lower than that for all of the next 99 decades.
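Lewis’s reasoning can be made concrete with a rough expected-value sketch. The 1% and 1/million credences come from his example; the break-even framing and the 50-year horizon are my own illustrative additions, not from the exchange above:

```python
# A rough sketch of Lewis's "probably false alarm" argument.
# His illustrative numbers; the 50-year break-even comparison is mine.

credence_now = 0.01      # evidence suggests a 1% chance this is a crucial moment
credence_future = 1e-6   # prior-level credence expected in later decades

# If impact is roughly proportional to the credence that a period is crucial,
# resources deployed now are worth this many times as much per dollar as
# resources deployed at a prior-level future time:
advantage_ratio = credence_now / credence_future  # 10,000x

# For investing-to-give-later to break even over, say, 50 years, the fund
# would need to grow by that full ratio, i.e. at this annual rate:
years = 50
required_annual_return = advantage_ratio ** (1 / years) - 1

print(f"{required_annual_return:.1%}")  # roughly 20% per year
```

On these (made-up) numbers, no plausible investment return catches up with deploying resources during the apparent crucial moment, which is the intuition behind “drop everything to work on it”.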

Yudkowsky makes similar points to Lewis’ in relation to the idea that There's No Fire Alarm for Artificial General Intelligence [LW · GW]. For example, he states:

So far as I can presently estimate, now that we've had AlphaGo and a couple of other maybe/maybe-not shots across the bow, and seen a huge explosion of effort invested into machine learning and an enormous flood of papers, we are probably going to occupy our present epistemic state until very near the end.

By saying we're probably going to be in roughly this epistemic state until almost the end, I don't mean to say we know that AGI is imminent, or that there won't be important new breakthroughs in AI in the intervening time. I mean that it's hard to guess how many further insights are needed for AGI, or how long it will take to reach those insights. After the next breakthrough, we still won't know how many more breakthroughs are needed, leaving us in pretty much the same epistemic state as before. Whatever discoveries and milestones come next, it will probably continue to be hard to guess how many further insights are needed, and timelines will continue to be similarly murky.[10]

How effectively can we “punt to the future”?

Even if leverage over the future will stay the same or slightly decrease over time, it may still be wise to punt to the future if that can allow more direct work, or more impactful direct work, to be done later. For example, a person may choose to:

Conversely, even if leverage over the future will be far greater at some later date, it may still be best to “act now”, or relatively soon. This could be the case if punting to the future, or punting too far into the future, is unlikely to fully succeed, for example due to value drift [EA(p) · GW(p)].

Thus, a person’s beliefs about optimal timing for work and donations may depend in part on their beliefs about how effectively we can punt to the future. What follows are some of the more fine-grained “sub-questions” that inform many people’s beliefs about that question.

Some relevant existing work includes:

What would be the long-term growth rate of financial investments?

Beliefs about the value of financially investing in order to support larger amounts of direct work later (perhaps even after one’s own death), compared to the value of giving now or soon, seem to be driven in part by:

What would be the long-term rate of expropriation of financial investments? How does this vary as investments grow larger?

Growth in financial investments could be offset (or partially offset) by the annual probability that investments would be expropriated.

However, Trammell argues that that probability doesn’t actually matter, as:

investors will generally be compensated for [that probability] with a higher interest rate [...] A historical case against long-term investing thus requires a demonstration that the expropriation rate grows with fund size.

On that matter, he writes: “As the fund grows dizzyingly large, [...] People might grow more inclined to seize it, for example, or it might grow better able to defend itself”. He then explores the question more thoroughly. For readers interested in this question, I recommend reading his section 6.1.3 (see also Gwern’s reply [LW(p) · GW(p)]).
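Trammell’s compensation argument can be illustrated with a toy model. This is my own sketch, not his formal treatment; it assumes expropriation is a constant, independent per-year probability of losing the entire fund:

```python
# Toy model: how an annual expropriation probability interacts with
# compound growth, and how a compensating interest rate offsets it.

def expected_value(principal, interest_rate, expropriation_prob, years):
    """Expected fund value after `years`, if each year the fund either
    grows by `interest_rate` or is fully expropriated (to zero) with
    probability `expropriation_prob`."""
    per_year = (1 + interest_rate) * (1 - expropriation_prob)
    return principal * per_year ** years

# Without compensation: 5% growth, 1% annual expropriation risk, 100 years.
p = 0.01
uncompensated = expected_value(1.0, 0.05, p, 100)

# Trammell's point: if investors are compensated for the risk, the interest
# rate rises so that expected growth is unchanged. The compensating rate r
# solves (1 + r)(1 - p) = 1.05:
r_compensated = (1.05 / (1 - p)) - 1  # about 6.06%
compensated = expected_value(1.0, r_compensated, p, 100)

print(uncompensated, compensated)  # the second matches risk-free 5% compounding
```

On this model, the expropriation rate drops out of the expected-value calculation once the interest rate prices it in, which is why Trammell says the case against long-term investing needs the expropriation rate to *grow with fund size* rather than merely be positive.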

Quotes on some relevant historical case studies are provided by Hanson and Gwern.

What would be the long-term “growth rate” from punting activities other than financial investment?

Besides financial investing, punting activities include movement-building and building one’s own career capital. For each of those types of activities, and each specific activity fitting within one of those types, one could ask what function of growth it would cause (in impact, resources dedicated towards doing good, or whatever). In particular, one could ask whether the growth will indeed be positive; whether it’ll be more like a lump sum increase, compounding growth, or some other function; and how large the lump sum, rate of compounding, or rate of some other function would be.

For example, Trammell states that, to count as “investment” for his purposes, movement-building has to:

look like fundraising in the sense that you’re not just putting more resources toward the cause next year, but toward the whole mindset of either giving to the cause or investing to give more in two years’ time to the cause. [The contrasting scenario would be one in which you] might spend all your money and get all these recruits who are passionate about the cause that you’re trying to fund, but then they just do it all next year.

One factor to consider is the potential reputational and motivational impacts of dedicating large amounts of resources to punting activities (relative to the resources dedicated to direct work), and especially to punting activities designed to “compound” by causing further punting activities. Wiblin notes the risk of coming to look like “some kind of multilevel marketing scheme or some kind of Ponzi scheme”. This sort of risk could reduce the long-term effective “growth rate” of punting activities. This factor relates to the question (covered later) of “Which ‘direct’ actions might have compounding positive impacts?”

Christiano and Bergal [EA · GW] discuss points related to this question.

Would the people we’d be punting to act in ways we’d endorse?

If we punt to the future, examples of the people we’d be punting to might be:

Punting to the future is a less attractive strategy the less one expects the people we’d be punting to would act in ways that we’d (a) endorse currently, (b) endorse after some process of learning and reflection, and/or (c) endorse if we had “better” values.

This question can be further broken down into (at least) the following questions:

These sorts of questions are discussed by Tomasik, by the section of the GPI research agenda on “Intergenerational governance”, by Dickens [EA · GW], by Hoeijmakers [EA · GW] (also here [EA · GW]), by Ngo [EA(p) · GW(p)], by commenters in this thread [EA(p) · GW(p)], by Christiano, and in some of these sources on value drift [EA(p) · GW(p)].

This question is especially related to, or perhaps hard to disentangle from, the topic “Importance of, and best approaches to, improving institutions and/or decision-making”, and from the questions:

(For sources relevant to the latter four questions, follow the links from the overview of this project [EA · GW].)

Which “direct” actions might have compounding positive impacts?

One argument often given for certain forms of punting to the future (e.g., financial investment to fuel later giving, or certain types of movement-building) is that they could provide compounding resources or impacts over time. This could cause much greater total impact than direct work done now would. But it also seems possible that certain forms of direct work could likewise have effects that compound over time. It’s thus worth asking which “direct” actions could have compounding impacts, and how strongly and lastingly those impacts would compound.

For example, some have argued that doing “object-level” research now, such as research into specific AI alignment problems, could help:

Reasons why direct work could have those sorts of compounding benefits include that such work could:

Conversely, it also seems possible that roughly the opposite effects could occur. For example, certain direct work conducted now could be perceived as pointless or premature, and this could make it harder to attract funding, attract talent, and so on. In any case, it seems likely that different “direct” actions would differ in whether and to what extent they’d cause compounding benefits (or harms).

These sorts of points are discussed by Ord, Trammell, Gleave, Shlegeris [EA · GW], and Shulman [EA(p) · GW(p)].

Arguably, this question could be reframed as, or replaced by, questions such as:

Do marginal returns to “direct work” done within a given time period diminish? If so, how steeply?

It’s possible that there are diminishing returns to additional direct work (either in general or in a particular area) within a given time.[14] For example, perhaps in each given year, the first $100 million spent on global catastrophic biological risk mitigation can support the continuation of the most cost-effective efforts, while the next $100 million can’t achieve as much value. This point is noted by Shulman [EA(p) · GW(p)], Trammell, and Yudkowsky & Muehlhauser (though see also MacAskill [EA(p) · GW(p)]).
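To see why diminishing returns of this kind favour spreading work over time, here is a toy model of my own (the logarithmic form is an illustrative assumption, not a claim about real cost-effectiveness curves):

```python
import math

# Toy model: suppose the value produced by spending x (in units of $100M)
# on a cause within a single year is log(1 + x), so each marginal dollar
# within a year achieves less than the one before it.

def value(spend):
    return math.log(1 + spend)

# Spending $200M in one year vs $100M in each of two years:
all_at_once = value(2.0)                 # log(3)
spread_out = value(1.0) + value(1.0)     # 2 * log(2)

print(all_at_once, spread_out)  # spreading the same total over two years yields more
```

If returns within a period were instead linear, timing would be irrelevant on this dimension, so the steepness of the diminishing-returns curve matters directly for how much “act now vs later” considerations bite.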

Relatedly, Ord writes that:

[One major effect which can make earlier labour matter more is] a matter of serial depth. Some things require a long succession of stages each of which must be complete before the next begins. If you are building a skyscraper, you will need to build the structure for one story before you can build the structure for the next. You will therefore want to allow enough time for each of these stages to be completed and might need to have some people start building soon.

Similarly, if a lot of novel and deep research needs to be done to avoid a risk, this might involve such a long pipeline that it could be worth starting it sooner to avoid the diminishing marginal returns that might come from labour applied in parallel. This effect is fairly common in computation and labour dynamics (see The Mythical Man Month), but it is the factor that I am least certain of here.

We obviously shouldn’t hoard research labour (or other resources) until the last possible year, and so there is a reason based on serial depth to do some of that research earlier. But it isn’t clear how many years ahead of time it needs to start getting allocated (examples from the business literature seem to have a time scale of a couple of years at most) or how this compares to the downsides of accidentally working on the wrong problem. [I added line breaks to that quote]

This question is especially related to, or perhaps hard to disentangle from, the questions:

Additionally, answers to this question could:

Directions for future work

This post aimed to serve as something of an orientation to, research agenda for, and structured reading list regarding the matter of optimal timing for longtermist work and donations. But this post is certainly not the final say on the matter. We’d be excited to do or see future work which:

We’d very much appreciate input and feedback that could help us or others pursue such future work. Please feel free to get in touch with us if you are looking to do work on these questions.

This post was based in part on ideas and earlier writings by Justin Shovelain [LW · GW] and David Kristoffersson [LW · GW], and benefitted from their input. I’m also grateful for feedback from Michael Dickens, Phil Trammell, Arden Koehler, and Alex Holness-Tofts. This does not imply these people’s endorsement of all aspects of this post.


  1. If you don’t subscribe to longtermism [EA · GW], many of the points and links in this post should still be relevant to you, though some might not be. ↩︎

  2. In some ways, decisions about optimal timing of work and donations can also overlap or interact with decisions about exploring vs. exploiting. ↩︎

  3. Unfortunately, this term is also used in other ways, most notably to distinguish between jobs that are “directly” impactful and those that can be impactful via allowing one to donate money. And “direct work”, as we and MacAskill use the term, may still be “indirect” in other senses, such as being quite “meta”. We would thus be happy to hear suggestions of alternative terms. We also considered “act-now strategies” and “present-influence strategies”, but both have their own issues. ↩︎

  4. Some alternative phrases [EA(p) · GW(p)] include “hingeyness”, “pivotality”, “criticality”, “influentialness”, “importance”, “significance”, and “momentousness”.

    “Leverage” was suggested by Siebe Rozendal [EA(p) · GW(p)]. I prefer that term, because I think it best highlights that, as MacAskill [EA · GW] notes, the focus here is “on how much influence a person at a time can have, rather than how much influence occurs during a time period. It could be the case, for example, that the 20th century was a bigger deal than the 17th century, but that, because there were 1/5th as many people alive during the 17th century, a longtermist altruist could have had more direct impact in the 17th century than in the 20th century”. ↩︎

  5. See also Section 2.3: Discounting in GPI’s research agenda. ↩︎

  6. I began writing the present post in March, and its core structure and points have been the same since April. Estimating the Philanthropic Discount Rate [EA · GW] and The case for investing to give later [EA · GW] were posted in July, at which point I read them, added links to them in appropriate places in this post, and added an idea from the latter post in the section “How will our knowledge about what we should do change over time?” Reading those posts did not lead to other major changes to this post. ↩︎

  7. Another phrase similar to “the precipice” is “the time of perils”.

    Here’s Will MacAskill’s proposal [EA · GW] for defining “most influential time”: “a time ti is more influential (from a longtermist perspective) than a time tj iff you would prefer to give an additional unit of resources,[1] that has to be spent doing direct work (rather than investment), to a longtermist altruist living at ti rather than to a longtermist altruist living at tj.”

    Note that we’re focused here on what will be the highest leverage period remaining, “because we can try to save resources to affect future times, but we know we can’t affect past times” (MacAskill [EA · GW]). That said, as MacAskill also discusses, “past hingeyness might still be relevant for assessing hingeyness today”. ↩︎

  8. That said, there could also be cases in which a problem’s window of opportunity closing soon would be a reason not to work on that problem. This could occur if making a difference “in time” would be very unlikely, or very costly. For example, it seems to not make sense to prioritise strategies for AI alignment that are optimised for extremely short timelines, and one reason for this is that we may stand little chance anyway if timelines are that short. ↩︎

  9. Note that Lewis seems to mean “punting to the future” in general, rather than just financial investment. ↩︎

  10. See also Failures in technology forecasting? A reply to Ord and Yudkowsky [LW · GW] and Discontinuous progress in history: an update [LW · GW]. ↩︎

  11. 80,000 Hours provides strong arguments that the best roles to take for building career capital that’s relevant to future impactful roles will often themselves be “directly” impactful roles. But there are likely to be some situations where the role that’s the very best for the goal of “directly” having an impact isn’t also the very best for the goal of building valuable career capital. In those situations, how much weight one gives to each of those goals would matter. ↩︎

  12. I expect Deep-time organizations: Learning institutional longevity from history is also relevant, but I haven’t read beyond its abstract. ↩︎

  13. To take this sort of idea to its extreme, we might wonder whether there are ways we can even avoid having to punt to people at all, by having our intentions automatically implemented somehow. ↩︎

  14. It’s also possible there are diminishing returns to additional work or spending on (some) punting activities. For example, perhaps adding another EA movement-builder matters less once there are already 1000 active in that year than when there are just 100 active in that year. This possibility seems worth exploring, but we will set it aside for this post. ↩︎

4 comments


comment by MichaelA · 2020-08-14T10:12:14.887Z · score: 4 (2 votes) · EA(p) · GW(p)

(Speaking for myself, not any of my employers, as per usual)

Here are my personal, tentative takeaways after reading and thinking about this topic off and on for several months: 

  • The case for punting and the case for doing/supporting "direct work" primarily for its "punting-like" benefits (e.g., value of information, field-building) both seem pretty strong.
  • The case for doing direct work primarily for its more "direct" benefits seems less strong.
  • If memory serves, I think:
    • I hadn't thought about these matters much at all last year
    • Then, when I heard things like Trammell's 80k episode, I began to feel that the arguments for punting were stronger than the arguments for doing/supporting direct work
    • Then, in the course of working on this post, I became more confident about the arguments for punting, and started to think that the key value of direct work might be its punting-like benefits (and that decisions about direct work - e.g., which org to donate to - should perhaps be based primarily on those types of benefits)
  • I think "EA in general" had undervalued the arguments for punting until 2020. But I think that a major shift has occurred in 2020 (see e.g. the many recent posts under the Patient Altruism [? · GW] tag).
    • Our discourse may now roughly appropriately balance the case for punting and the case for "direct work now".
    • It's hard for me to comment on whether our actions strike the appropriate balance. [I edited this set of points in response to MichaelDickens' comment below.]
  • I think EAs may still pay too little attention to the idea that direct work might be valuable primarily for its punting-like benefits, and that that may be the key factor to consider when making decisions about direct work
  • I'm quite unsure about how we should allocate resources between punting vs direct work selected for its punting-like benefits
  • Next year, I think I'll give 10% of my income to "direct work" orgs/projects/people, which I'll select primarily based on their potential punting-like benefits (e.g., mentoring early-career researchers). And I'll invest as much as I easily can beyond that 10% (which I expect to be >10%) for giving later, once I've accrued interest on it and I know more.
    • A good counterargument to me doing that is that I may undergo value drift. To partially address that, I might use a donor advised fund.
    • It's also very possible I should invest the 10% as well. A non-negligible factor in me planning to support direct work with 10% of my income is simply that I want to (rather than that I'm confident it's morally best).
comment by MichaelDickens · 2020-08-14T21:31:44.861Z · score: 4 (2 votes) · EA(p) · GW(p)

I think "EA in general" had undervalued the arguments for punting until 2020. But I think that a major shift has occurred in 2020 (see e.g. the many recent posts under the Patient Altruism tag), and we might now be at approximately the right point.

If punting is indeed the right move, then this only seems true with regard to the discourse, not with regard to people's actual behavior. For example, Open Phil spends somewhere around 3% of its budget per year, which is too high on pure "patient longtermist" considerations--Phil Trammell's paper suggested an optimal spend rate of ~0.5% in general, but possibly lower than that if you believe other philanthropists are spending too quickly. (Global poverty donors in particular should be giving 0% per year. This claim seems pretty robustly true.)

Edited to add: I think a rate above 0.5% can be justified based on issues with value drift/expropriation, see https://forum.effectivealtruism.org/posts/3QhcSxHTz2F7xxXdY/estimating-the-philanthropic-discount-rate [EA · GW]. AFAIK, nobody has really put work into determining the optimal spending rate, so we don't know what it is even if we accept the arguments for urgency. My best guess based on my limited research is that the optimal urgent spending rate is something like 1.5% for institutions and 6% for individuals (based on a 0.5% annual probability of existential catastrophe, 0.5% expropriation rate, 0.5% institutional value drift rate, and 5% individual value drift rate).
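As a rough illustration of how those component rates might combine (assuming, for simplicity, that the hazard rates just add up to an effective discount/spending rate — an illustrative simplification, not an exact result from any of the linked models):

```python
# Sketch: treat the "urgent" spending rate as roughly the sum of independent
# annual hazard rates (catastrophe, expropriation, value drift).
# The figures are those given in the comment above; the additive model is an
# assumption for illustration only.

catastrophe_rate = 0.005     # annual probability of existential catastrophe
expropriation_rate = 0.005   # annual expropriation rate
institutional_drift = 0.005  # institutional value drift rate
individual_drift = 0.05      # individual value drift rate

institutional_rate = catastrophe_rate + expropriation_rate + institutional_drift
individual_rate = catastrophe_rate + expropriation_rate + individual_drift

print(f"Institutions: {institutional_rate:.1%}")  # 1.5%
print(f"Individuals:  {individual_rate:.1%}")     # 6.0%
```

This reproduces the 1.5% and 6% figures, with the individual/institution gap driven entirely by the assumed difference in value drift rates.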

comment by MichaelA · 2020-08-14T23:49:05.482Z · score: 4 (2 votes) · EA(p) · GW(p)

Ah, good point that we should distinguish the discourse from the behaviours, and that what I said is clearer for the discourse than for the behaviours. I actually intended those sentences to just be about the discourse, but I didn't make that clear. (I've now edited those sentences.)

Also, whether people's discourse is at an appropriate point is probably less decision-relevant than whether their actions are, because: 

  • it might be more worthwhile to try to push their actions towards the appropriate balance than to push their discourse towards the appropriate balance
  • we might want to oversteer one way or the other to compensate for what other people are doing (and this is somewhat less true regarding what people are saying)

Unfortunately, I find it very hard to say whether EAs' actions are, in aggregate, overemphasising "direct work now", overemphasising punting, or striking roughly the right balance. (Alternative terms would be "too urgent" vs "too patient" [EA(p) · GW(p)] vs roughly right.) This is because I don't have a strong sense of what balance EAs are currently striking or of what balance they should be striking. (Though I've found your work helpful on the latter point.)

Also, I realise now that I'm basing my assessment of EA's discourse primarily on what I see on the forum and what I hear from the EAs I speak to, who are mostly highly engaged. This probably gives me a misleading picture, as ideas probably diffuse faster to these groups than to EAs in general.

comment by MichaelA · 2020-08-14T09:25:23.678Z · score: 2 (1 votes) · EA(p) · GW(p)

There are two subquestions that didn't feel important/commonly discussed enough to be worth including in the (already long!) post itself, but that felt important/commonly discussed enough to not simply delete. So I'll add them here. 

The first of these subquestions fits under "How will “leverage over the future” change over time?" The second fits under "How effectively can we “punt to the future”?"

How has leverage changed over history?

This is relevant to MacAskill's “inductive argument against HoH” [EA · GW].

Would punting be less likely to be effective in worlds where it’d be most useful?  

Plausibly, resources that can be dedicated towards longtermist causes are especially valuable if a global catastrophe [EA · GW] is likely to occur. But also plausibly, the likelier it is that such a catastrophe would occur, the likelier it is that punting actions will turn out to fail. This could occur due to, for example, resources being wiped out, the rule of law being disrupted, or relevant social movements unravelling.

Likewise, plausibly, resources that can be dedicated towards longtermist causes are especially valuable if EA, longtermism, and/or related values are likely to become less widespread or disappear entirely. But also plausibly, the likelier it is that that happens, the less likely it is that the people we’d be punting to would act in ways we’d endorse (reducing the effectiveness of our punting).

It seems possible that examples like these point towards a more general correlation between how valuable successful punting would be and how likely punting is to fail. In other words, this may suggest punting would be least likely to work in the worlds where it'd be most valuable. This may reduce the expected value of punting. (But this is all somewhat speculative.)

I believe Kit [EA(p) · GW(p)] and Shulman [EA(p) · GW(p)] discuss similar ideas, though I may be misinterpreting them.