# Are we living at the most influential time in history?

post by William_MacAskill · 2019-09-03T04:55:31.501Z · 147 comments

## Contents

  Introduction
Getting the definitions down
Strong longtermism even if HoH is not true
Arguments for HoH
Arguments against HoH
#1: The outside-view argument against HoH
#2: The Inductive Argument against HoH
#3: The simulation update argument against HoH
Might today be merely an enormously influential time?
Possible other hinge times
Implications


I don’t claim originality for any content here; people who’ve been influential on this include Nick Beckstead, Phil Trammell, Toby Ord, Aron Vallinder, Allan Dafoe, Matt Wage, and, especially, Holden Karnofsky and Carl Shulman. Everything tentative; errors all my own.

EDIT: Since writing this post, I've written an article on the same topic, forthcoming in a Festschrift in honor of Derek Parfit, available at my website here. The broad thrust is the same, but the details are improved. I'd encourage you to read that article rather than this blog post.

The most important change is to the priors-based argument. The priors-based argument I give in the text below is very poorly stated, and probably led to unnecessary confusion. I've revised the argument, and I think the new version better represents the perspective I was arguing for here.

I've made a few other changes, too:

• I frame the argument in terms of the most influential people, rather than the most influential times. It’s the more natural reference class, and is more action-relevant.
• I use the term ‘influential’ rather than ‘hingey’.
• I define ‘influentialness’ (aka ‘hingeyness’) in terms of ‘how much expected good you can do’, not just ‘how much expected good you can do from a longtermist perspective’. Again, that’s the more natural formulation, and, importantly, one way in which we could fail to be at the most influential time (in terms of expected good done by direct philanthropy) is if longtermism is false and, say, we only discover the arguments that demonstrate that in a few decades’ time.
• The paper includes a number of graphs, which I think helps make the case clearer.

The main things that are missing from the article version are (i) the discussion of the simulation argument, which appears in the post below, and (ii) the arguments and counterarguments about priors, which are in the comments.

# Introduction

Here are two distinct views:

Strong Longtermism := The primary determinant of the value of our actions is the effects of those actions on the very long-run future.
The Hinge of History Hypothesis (HoH) := We are living at the most influential time ever.

It seems that, in the effective altruism community as it currently stands, those who believe longtermism generally also assign significant credence to HoH; I’ll precisify ‘significant’ as >10% when ‘time’ is used to refer to a period of a century, but my impression is that many longtermists I know would assign >30% credence to this view.  It’s a pretty striking fact that these two views are so often held together — they are very different claims, and it’s not obvious why they should so often be jointly endorsed.

This post is about separating out these two views and introducing a view I call outside-view longtermism, which endorses longtermism but finds HoH very unlikely. I won’t define outside-view longtermism here, but the spirit is that — as our best guess — we should expect the future to continue the trends of the past, and we should be sceptical of the idea that now is a particularly unusual time. I think that outside-view longtermism is currently a neglected position within EA and deserves some defense and exploration.

Before we begin, I’ll note I’m not making any immediate claim about the actions that follow from outside-view longtermism. It’s plausible to me that whether we have 30% or just 0.1% credence in HoH, we should still be investing significant resources into the activities that would be best were HoH true. The most obvious implication, however, is regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building. If you think that some future time will be much more influential than today, then a natural strategy is to ensure that future decision-makers, who you are happy to defer to, have as many resources as possible when some future, more influential, time comes. So in what follows I’ll sometimes use this as the comparison activity.

# Getting the definitions down

We’ve defined strong longtermism informally above and in more detail in this post.

For HoH, defining ‘most influential time’ is pretty crucial. Here’s my proposal:

a time t_i is more influential (from a longtermist perspective) than a time t_j iff you would prefer to give an additional unit of resources,[1] that has to be spent doing direct work (rather than investment), to a longtermist altruist living at t_i rather than to a longtermist altruist living at t_j.

(I’ll also use the term ‘hingier’ to be synonymous with ‘more influential’.)

This definition gets to the nub of the matter, for me. It seems to me that, for most times in human history, longtermists ought, if they could, to have been investing their resources (via values-spreading as well as literal investment) in order that they have greater influence at hingey moments when one’s ability to influence the long-run future is high. It’s a crucial question for longtermists whether now is a very hingey moment, and so whether they should be investing or doing direct work.

It’s significant that my definition focuses on how much influence a person at a time can have, rather than how much influence occurs during a time period. It could be the case, for example, that the 20th century was a bigger deal than the 17th century, but that, because there were 1/5th as many people alive during the 17th century, a longtermist altruist could have had more direct impact in the 17th century than in the 20th century.

It’s also significant that, on this definition, you need to take into account the level of knowledge and understanding of the average longtermist altruist at the time. This seems right to me. For example, hunter-gatherers could contribute more to tech speed-up than people now (see Carl Shulman’s post here); but they wouldn’t have known, or been in a position to know, that trying to innovate was a good way to benefit the very long-run future. (In that post, Carl mentions some reasons for thinking that such impact was knowable, but prior to the 17th century people didn’t even have the concept of expected value, so I’m currently sceptical.)

So I’m really bundling two different ideas into the concept of ‘most influential’: how pivotal a particular moment in time is, and how much we’re able to do something about that fact.  Perhaps we’re at a really transformative moment now, and we can, in principle, do something about it, but we’re so bad at predicting the consequences of our actions, or so clueless about what the right values are, that it would be better for us to save our resources and give them to future longtermists who have greater knowledge and are better able to use their resources, even at that less pivotal moment. If this were true, I would not count this time as being exceptionally influential.

# Strong longtermism even if HoH is not true

I mentioned that it’s surprising that strong longtermism and significant credence in HoH are so often held together. But here’s one reason why you might think you should put significant credence in HoH iff you believe longtermism: You might accept that most value is in the long-run future, but think that, at most times in history so far, we’ve been unable to do anything about that value. So it’s only because HoH is true that longtermism is true. But I don’t think that’s a good argument, for a few reasons.

First, given the stakes involved, it’s plausible that even a small chance of being at a period of unusually high extinction or lock-in risk is enough for working on extinction risk or lock-in scenarios to be higher expected value than short-run activities. So, you can reasonably think that (i) HoH is unlikely (e.g. 0.1% likely), but that (ii) when combined with the value of being able to influence the value of the long-run future, a small chance of HoH being true is enough to make strong longtermism true.

Second, even if we’re merely at a relatively hingey time — just not the most hingey time — as long as there are some actions that have persistent long-run effects that are positive in expected value, that’s plausibly sufficient for strong longtermism to be true.

Third, you could even be certain that HoH is false, and that there are currently no direct activities with persistent impacts, but still believe that longtermism is true if, as is natural to suppose, you have the option of investing resources, enabling future longtermist altruists to take action at a time which is more influential.

# Arguments for HoH

In this post, I’m going to simply state, but not discuss, some views on which something like HoH would be entailed, and some arguments for thinking HoH is likely. Each of these views and arguments requires a lot more discussion, and many have had a lot more discussion elsewhere.

There are two commonly held views that entail something like HoH:

The Value Lock-in view

Most starkly, according to a view regarding AI risk most closely associated with Nick Bostrom and Eliezer Yudkowsky: it’s likely that we will develop AGI this century, and it’s likely that AGI will quickly transition to superintelligence. How we handle that transition determines how the entire future of civilisation goes: if the superintelligence ‘wins’, then the entire future of civilisation is determined in accord with the superintelligence’s goals; if humanity ‘wins’, then the entire future of civilisation is determined in accord with the goals of whoever controls the superintelligence, which could be everyone, or could be a small group of people. If this story is right, and we can influence which of these scenarios occurs, then this century is the most influential time ever.

A related, but more general, argument is that the most pivotal point in time is when we develop techniques for engineering the motivations and values of the subsequent generation (such as through AI, but also perhaps through other technology, such as genetic engineering or advanced brainwashing technology), and that we’re close to that point. (H/T Carl Shulman for stating this more general view to me).

The Time of Perils view

According to the Time of Perils view, we live in a period of unusually high extinction risk, where we have the technological power to destroy ourselves but lack the wisdom to be able to ensure we don’t; after this point annual extinction risk will go to some very low level. Support for this view could come from both outside-view and inside-view reasoning: the outside-view argument would claim that extinction risk has been unusually high since the advent of nuclear weapons; the inside-view argument would point to extinction risk from forthcoming technologies like synthetic biology.

The ‘unusual’ is important here. Perhaps extinction risk is high at this time, but will be even higher at some future times. In which case those future times might be even hingier than today. Or perhaps extinction risk is high, but will stay high indefinitely, in which case the future is not huge in expectation, and the grounds for strong longtermism fall away.

And, for the Time of Perils view to really support HoH, it’s not quite enough to show that extinction risk is unusually high; what’s needed is that extinction risk mitigation efforts are unusually cost-effective. So part of the view must be not only that extinction risk is unusually high at this time, but also that longtermist altruists are unusually well-placed to decrease those risks — perhaps because extinction risk reduction is unusually neglected.

Outside-View Arguments

The Value Lock-In and Time of Perils views are the major views on which HoH — or something similar — would be supported. But there are also a number of more general, and more outside-view-y, arguments that might be taken as evidence in favour of HoH:

1. That we’re unusually early on in human history, and earlier generations in general have the ability to influence the values and motivations of later generations.[2]
2. That we’re at an unusually high period of economic and technological growth.
3. That the long-run trend of economic growth means we should expect extremely rapid growth into the near future, such that we should expect to hit the point of fastest-ever growth fairly soon, before slowing down.
4. That we’re unusually well-connected and able to cooperate in virtue of being on one planet.
5. That we’re unusually likely to become extinct in virtue of being on one planet.

My view is that, in the aggregate, these outside-view arguments should move one substantially from one’s prior towards HoH, but not all the way to significant credence in HoH.[3]

# Arguments against HoH

#1: The outside-view argument against HoH

Informally, the core argument against HoH is that, in trying to figure out when the most influential time is, we should consider all of the potential billions of years through which civilisation might exist. Out of all those years, there is just one time that is the most influential. According to HoH, that time is… right now. If true, that would seem like an extraordinary coincidence, which should make us suspicious of whatever reasoning led us to that conclusion, and which we should be loath to accept without extraordinary evidence in its favour. We don’t have such extraordinary evidence in its favour. So we shouldn’t believe in HoH.

I’ll take each of the key claims in this argument in turn:

1. It’s a priori extremely unlikely that we’re at the hinge of history
2. The belief that we’re at the hinge of history is fishy
3. Relative to such an extraordinary claim, the arguments that we’re at the hinge of history are not sufficiently extraordinarily powerful

Claim 1

That HoH is a priori unlikely should be pretty obvious. It’s hard to know exactly what ur-prior to use for this claim, though. One natural thought is that we could use, say, 1 trillion years’ time as an early estimate for the ‘end of time’ (due to the last naturally occurring star formation), and a 0.01% chance of civilisation surviving that long. Then, as a lower bound, there are an expected 1 million centuries to come, and the natural prior on the claim that we’re in the most influential century ever is 1 in 1 million. This would be too low in one important way, namely that the number of future people is decreasing every century, so it’s much less likely that the final century will be more influential than the first century. But even if we restricted ourselves to a uniform prior over the first 10% of civilisation’s history, the prior would still be as low as 1 in 100,000.

(This is a very rough argument. I really don’t know what the right ur-prior is to set here, and I’d be keen to see further discussion, as it potentially changes one’s posterior on HoH by an awful lot.)
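The rough arithmetic here can be made explicit. The sketch below just restates the illustrative figures from the argument above (a 1 trillion year horizon, a 0.01% survival probability); they are not serious estimates:

```python
# Rough sketch of the ur-prior arithmetic, using the text's illustrative numbers.
years_to_last_star_formation = 1e12   # ~1 trillion years as an early 'end of time'
p_survive_that_long = 1e-4            # 0.01% chance civilisation survives that long

centuries_if_surviving = years_to_last_star_formation / 100      # 1e10 centuries
expected_centuries = p_survive_that_long * centuries_if_surviving  # ~1 million

# Uniform prior over all expected centuries:
prior_most_influential = 1 / expected_centuries          # 1 in 1,000,000

# Uniform prior restricted to the first 10% of civilisation's expected history:
prior_within_first_tenth = 1 / (0.10 * expected_centuries)  # 1 in 100,000
```

Even the restricted version leaves the prior at 1 in 100,000, which is the figure the argument below works with.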

[Later Edit (Mar 2020): The way I state the choice of prior in the text above was mistaken, and therefore caused some confusion. The way I should have stated the prior choice, to represent what I was thinking of, is as follows:

The prior probability of us living in the most influential century, conditional on Earth-originating civilization lasting for n centuries, is 1/n.

The unconditional prior probability over whether this is the most influential century would then depend on one's priors over how long Earth-originating civilization will last for. However, for the purpose of this discussion we can focus on just the claim that we are at the most influential century AND that we have an enormous future ahead of us. If the Value Lock-In or Time of Perils views are true, then we should assign a significant probability to that claim. (i.e. they are claiming that, if we act wisely this century, then this conjunctive claim is probably true.) So that's the claim we can focus our discussion on.

It's worth noting that my proposal follows from the Self-Sampling Assumption, which is roughly (as stated by Teru Thomas in 'Self-location and objective chance' (ms)): "A rational agent’s priors locate him uniformly at random within each possible world." I believe that SSA is widely held: the key question in the anthropic reasoning literature is whether it should be supplemented with the self-indication assumption (giving greater prior probability mass to worlds with large populations). But we don't need to debate SIA in this discussion, because we can simply assume some prior probability distribution over the size of the total population; the question of whether we're at the most influential time does not require us to get into debates over anthropics.]

Claim 2

Lots of things are a priori extremely unlikely yet we should have high credence in them: for example, the chance that you just dealt this particular (random-seeming) sequence of cards from a well-shuffled deck of 52 cards is 1 in 52! ≈ 1 in 10^68, yet you should often have high credence in claims of that form.  But the claim that we’re at an extremely special time is also fishy. That is, it’s more like the claim that you just dealt a deck of cards in perfect order (2 to Ace of clubs, then 2 to Ace of diamonds, etc) from a well-shuffled deck of cards.

Being fishy is different from just being unlikely. The difference between unlikelihood and fishiness is the availability of alternative, not wildly improbable, hypotheses on which the outcome or evidence is reasonably likely. If I deal the random-seeming sequence of cards, I don’t have reason to question my assumption that the deck was shuffled, because there’s no alternative background assumption on which the random-seeming sequence is a likely occurrence. If, however, I deal the deck of cards in perfect order, I do have reason to significantly update towards the deck not having been shuffled, because the probability of getting cards in perfect order if the cards were not shuffled is reasonably high. That is: P(cards not shuffled)P(cards in perfect order | cards not shuffled) >> P(cards shuffled)P(cards in perfect order | cards shuffled), even if my prior credence was that P(cards shuffled) > P(cards not shuffled), so I should update towards the cards having not been shuffled.
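To see the structure of the card example, here is a sketch with made-up numbers: the prior that the deck wasn’t shuffled, and the chance that an unshuffled (e.g. fresh-from-the-box) deck is in perfect order, are both hypothetical figures chosen for illustration:

```python
import math

# Probability of any particular 52-card sequence from a well-shuffled deck:
p_sequence_given_shuffled = 1 / math.factorial(52)   # ~1.2e-68

# Hypothetical numbers: a tiny prior that the deck wasn't shuffled, and a
# decent chance an unshuffled deck comes out in perfect order.
p_not_shuffled = 1e-6
p_perfect_given_not_shuffled = 0.5

# Posterior odds of 'not shuffled' vs 'shuffled' after seeing perfect order:
posterior_odds = (p_not_shuffled * p_perfect_given_not_shuffled) / \
                 ((1 - p_not_shuffled) * p_sequence_given_shuffled)

# Even with a one-in-a-million prior on 'not shuffled', the posterior odds
# overwhelmingly favour it -- the hallmark of a fishy observation.
```

The random-seeming sequence has exactly the same tiny probability, but there is no alternative hypothesis that makes it likely, so it triggers no such update.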

Similarly, if it seems to me that I’m living in the most influential time ever, this gives me good reason to suspect that the reasoning process that led me to this conclusion is flawed in some way, because P(I’m reasoning poorly)P(seems like I’m living at the hinge of history | I’m reasoning poorly) >> P(I’m reasoning correctly)P(seems like I’m living at the hinge of history | I’m reasoning correctly). In contrast, I wouldn’t have the same reason to doubt my underlying assumptions if I concluded that I was living in the 1047th most influential century.

The strength of this argument depends in part on how confident we are on our own reasoning abilities in this domain. But it seems to me there’s a strong risk of bias in our assessment of the evidence regarding how influential our time is, for a few reasons:

• Salience. It’s much easier to see the importance of what’s happening around us now, which we can see and is salient to us, than it is to assess the importance of events in the future, involving technologies and institutions that are unknown to us today, or (to a lesser extent) the importance of events in the past, which we take for granted and involve unsalient and unfamiliar social settings.
• Confirmation. For those of us, like myself, who would very much like the world to be taking much stronger action on extinction risk mitigation than it is today (even if the probability of extinction is low), it would be a good outcome if people (who do not have longtermist values) believed that the risk of extinction is high, even if it’s low. So we might be (subconsciously) biased to overstate the case in our favour. And, in general, people have a tendency towards confirmation bias: once they have a conclusion (“we should take extinction risk a lot more seriously”), they tend to marshal arguments in its favour more than they should, rather than carefully assessing arguments on either side. Though we try our best to avoid such biases, they are very hard to overcome.
• Track record. People have a poor track record of assessing the importance of historical developments. And in particular, it seems to me, technological advances are often widely regarded as being more dangerous than they are. Some examples include assessment of risks from nuclear power, horse manure from horse-drawn carts, GMOs, the bicycle, the train, and many modern drugs.[4]

I don’t like putting weight on biases as a way of dismissing an argument outright (Scott Alexander gives a good run-down of reasons why here). But being aware that long-term forecasting is an area that’s very difficult to reason correctly about should make us quite cautious when updating from our prior.

If you accept you should have a very low prior in HoH, you need to be very confident that you’re good at reasoning about the long-run significance of events (such as the magnitude of risk from some new technology) in order to have a significant posterior credence in HoH, rather than concluding we’re mistaken in some way. But we have no reason to believe that we’re very reliable in our reasoning in these matters. We don’t have a good track record of making predictions about the importance of historical events, and some track record of being badly wrong. So, if a chain of reasoning leads us to the conclusion that we’re living in the most important century ever, we should think it more likely that our reasoning has gone wrong than that the conclusion really is true. Given the low base rate, and given our faulty tools for assessing the claim, the evidence in favour of HoH is almost certainly a false positive.

Claim 3

I’ve described some of the arguments for thinking that we’re at an unusually influential time in the previous section above.

I won’t discuss the object-level of these arguments here, but it seems hard to see how these arguments could be strong enough to move us from the very low prior all the way to significant credence in HoH. To illustrate: a randomised controlled trial with a p-value of 0.05, under certain reasonable assumptions, corresponds to a Bayes factor of around 3; a Bayes factor of 100 is regarded as ‘decisive’ evidence. In order to move from a prior of 1 in 100,000 to a posterior of 1 in 10, one would need a Bayes factor of 10,000 — extraordinarily strong evidence.

But, so this argument goes, the evidence we have for either the Value Lock-in view or the Time of Perils view consists of informal arguments. They aren’t based on data (because they generally concern future events) nor, in general, are they based on trend extrapolation, nor on very well-understood underlying mechanisms, such as physical mechanisms. And the range of deep critical engagement with those informal arguments, especially from ‘external’ critics, has so far been limited. So it’s hard to see why we should give them much more evidential weight than, say, a well-done RCT with a p-value of 0.05 — let alone an evidential weight 3,000 times that amount.

An alternative path to the same conclusion is as follows. Suppose that, if we’re at the hinge of history, we’d certainly have seeming evidence that we’re at the hinge of history; so say that P(E | HoH ) ≈ 1. But if we weren’t at the hinge of history, what would be the chances of us seeing seeming evidence that we are at the hinge of history? It’s not astronomically low; perhaps P(E | ¬HoH ) ≈ 0.01. (This would seem reasonable to believe if we found just one century in the past 10,000 years where people would have had strong-seeming evidence in favour of the idea that they were at the hinge of history. This seems conservative. Consider: the periods of the birth of Christ and early Christianity; the times of Moses, Mohammed, Buddha and other religious leaders; the Reformation; the colonial period; the start of the industrial revolution; the two world wars and the defeat of fascism; and countless other events that would have seemed momentous at the time but have since been forgotten in the sands of history. These might have all seemed like good evidence to the observers at the time that they were living at the hinge of history, had they thought about it.) But, if so, then our Bayes factor is 100 (or less): enough to push us from 1 in 100,000 to 1 in 1000 in HoH, but not all the way to significant credence.
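The Bayes-factor arithmetic in the last two paragraphs can be checked with a small odds-form calculation (illustrative only, using the 1-in-100,000 prior from the earlier discussion):

```python
def update(prior, bayes_factor):
    """Odds-form Bayesian update: posterior probability from a prior and a Bayes factor."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

prior_hoh = 1 / 100_000

# A 'decisive' Bayes factor of 100 only lifts the posterior to ~1 in 1000:
p_after_decisive = update(prior_hoh, 100)

# Bayes factor needed to reach a posterior of 1 in 10:
needed = (0.1 / 0.9) / (prior_hoh / (1 - prior_hoh))   # ~11,000, i.e. order 10^4
```

So even treating the informal arguments as decisive evidence leaves the posterior far short of significant credence in HoH.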

#2: The Inductive Argument against HoH

In addition to the previous argument, which relies on priors and claims we shouldn’t move drastically far from those priors, there’s a positive argument against HoH, which gives us evidence against HoH, whatever our priors. This argument is based on induction from past times.

If, when looking into the past, we saw hinginess steadily decrease, that would be a good reason for thinking that now is hingier than all times to come, and so we should take action now rather than pass resources on to future longtermists.  If we had seen hinginess steadily increase, then we have some reason for thinking that the hingiest times are yet to come; if we had a good understanding of the mechanism of why hinginess is increasing, and knew that mechanism was set to continue into the future, that would strengthen that argument further.

I suggest that in the past, we have seen hinginess increase. I think that most longtermists I know would prefer that someone living in 1600 passed resources onto us, today, rather than attempting direct longtermist influence. (I certainly would prefer this.) One reason for thinking this would be if one thinks that now is simply a more pivotal point in time, because of our current level of technological progress. However, the stronger reason, it seems to me, is that our knowledge has increased so considerably since then. (Recall that on my definition a particularly hingey time depends both on how pivotal the period in history is and the extent to which a longtermist at the time would know enough to do something about it.) Someone in 1600 couldn’t have had knowledge of AI, or population ethics, or the length of time that humanity might continue for, or of expected utility theory, or of good forecasting practices; they would have had no clue about how to positively influence the long-run future, and might well have done harm. Much the same is true of someone in 1900 (though they would have had access to some of those concepts). It’s even true of someone in 1990, before people became aware of risks around AI. So, in general, hinginess is increasing, because our ability to think about the long-run effects of our actions, evaluate them, and prioritise accordingly, is increasing.

But we know that we aren’t anywhere close to having fully worked out how to think about the long-run effects of our actions, evaluate them, and prioritise accordingly. We should confidently expect that in the future we will come across new crucial considerations — as serious as the idea of population ethics, or AI risk — or major revisions of our views. So, just as we think that people in the past should have passed resources onto us rather than do direct work, so, this argument goes, we should pass resources into the future rather than do direct longtermist work. We should think, in virtue of future people’s far better epistemic state, that some future time is more influential.

There are at least three ways in which our knowledge is changing or improving over time, and it’s worth distinguishing them:

1. Our basic scientific and technological understanding, including our ability to turn resources into things we want.
2. Our social science understanding, including our ability to make predictions about the expected long-run effects of our actions.
3. Our values.

It’s clear that we are improving on (1) and (2). All other things being equal, this gives us reason to give resources to future people to use rather than to use those resources now. The importance of this, it seems to me, is very great.  Even just a few decades ago, a longtermist altruist would not have thought of risk from AI or synthetic biology, and wouldn’t have known that they could have taken action on them. Even now, the science of good forecasting practices is still in its infancy, and the study of how to make reliable long-term forecasts is almost nonexistent.

It’s more contentious whether we’re improving on (3) — for this argument one’s meta-ethics becomes crucial. Perhaps the Victorians would have had a very poor understanding of how to improve the long-run future by the lights of their own values, but they would have still preferred to do that than to pass resources onto future people, who would have done a better job of shaping the long-run future but in line with a different set of values. So if you endorse a simple subjectivist view, you might think that even in such an epistemically impoverished state you should still prefer to act now rather than pass the baton on to future generations with aims very different from yours (and even then you might still want to save money in a Victorian-values foundation to grant out at a later date). This view also makes the a priori unlikelihood of living at the hinge of history much less: from the perspective of your idiosyncratic values, now is the only time that they are instantiated in physical form, so of course this time is important!

In contrast, if you are more sympathetic to moral realism (or a more sophisticated form of subjectivism), as I am, then you’ll probably be more sympathetic to the idea that future people will have a better understanding of what’s of value than you do now, and this gives another reason for passing the baton on to future generations. For just some ways in which we should expect moral progress: Population ethics was first introduced as a field of enquiry in the 1980s (with Parfit’s Reasons and Persons); infinite ethics was only first seriously discussed in moral philosophy in the early 1990s (e.g. Vallentyne’s Utilitarianism and Infinite Utility), and it’s clear we don’t know what the right answers are; moral uncertainty was only first discussed in modern times in 2000 (with Lockhart’s Moral Uncertainty and its Consequences) and had very little attention until around the 2010s (with Andrew Sepielli’s PhD and then my DPhil), and again we’ve only just scraped the surface of our understanding of it.

So, just as we think that the intellectual impoverishment of the Victorians means they would have done a terrible job of trying to positively influence the long-run future, we should think that, compared to future people, we are thrashing around in ignorance. In which case we don’t have the level of understanding required for ours to be the most influential time.

#3: The simulation update argument against HoH

The final argument[5] is:

1. If it seems to you that you’re at the most influential time ever, you’re differentially much more likely to be in a simulation. (That is: P(simulation | seems like HoH ) >> P(not-simulation | seems like HoH).)
2. The case for focusing on AI safety and existential risk reduction is much weaker if you live in a simulation than if you don’t. (In general, I’d aver that we have very little understanding of the best things to do if we’re in a simulation, though there’s a lot more to be said here.)
3. So we should not make a major update in the most action-relevant proposition, which is that we’re both at the hinge of history and not in a simulation.

The primary reason for believing (1) is that the most influential time in history would seem likely to be a very common subject of study by our descendants, and much more common than other periods in time. (Just as crucial periods in time, like the industrial revolution, get vastly more study by academics today than less pivotal periods, like 4th century Indonesia.) The primary reasons for believing (2) are that if we’re in a simulation it’s much more likely that the future is short, that extending our future doesn’t change the total amount of lived experiences (because the simulators will just run some other simulation afterwards), and that we’re missing some crucial consideration around how to act.

This argument is really just a special case of argument #1: if it seems like you’re at the most influential point in time ever, probably something funny is going on. The simulation idea is just one way of spelling out ‘something funny going on’. I’m personally reticent to make major updates in the direction of living in a simulation on the basis of this rather than updates to more banal hypotheses like just some inside-view arguments not actually being very strong; but others might disagree on this.

# Might today be merely an enormously influential time?

In response to the arguments I’ve given above, you might say: “Ok, perhaps we don’t have good reasons for thinking that we’re at the most influential time in history. But the arguments support the idea that we’re at an enormously influential time. And very little changes whether you think that we’re at the most influential time ever, or merely at an enormously influential time, even though some future time is even more influential again.”

However, I don’t think this response is a good one, for three reasons.

First, the implication that we’re among the very most influential times is susceptible to very similar arguments to the ones that I gave against HoH. The idea that we’re in one of the top-10 most influential times is 10x more a priori likely than the claim that we’re in the most influential time, and it’s perhaps more than 10x less fishy. But it’s still extremely a priori unlikely, and still very fishy. So that should make us very doubtful of the claim, in the absence of extraordinarily powerful arguments in its favour.

Second, some views that are held in the effective altruism community seem to imply not just that we’re at some very influential time, but that we’re at the most influential time ever. On the fast takeoff story associated with Bostrom and Yudkowsky, once we develop AGI we rapidly end up with a universe determined in line with a singleton superintelligence’s values, or in line with the values of those who manage to control it. Either way, it’s the decisive moment for the entire rest of civilisation.  But if you find the claim that we’re at the most influential time ever hard to swallow, then you have, by modus tollens, to reject that story of the development of superintelligence.

Third, even if we’re at some enormously influential time right now, if there’s some future time that is even more influential, then the most obvious EA activity would be to invest resources (whether via financial investment or some sort of values-spreading) in order that our resources can be used at that future, more high-impact, time. Perhaps there’s some reason why that plan doesn’t make sense; but, currently, almost no-one is even taking that possibility seriously.

# Possible other hinge times

If now isn’t the most influential time ever, when is? I’m not going to claim to be able to answer that question, but in order to help make alternative possibilities more vivid I’ve put together a list of times in the past and future that seem particularly hingey to me.

Of course, it’s much more likely, a priori, that if HoH is false, then the most influential time is in the future. And we should also care more about the hingeyness of future times than of past times, because we can try to save resources to affect future times, but we know we can’t affect past times.[6] But past hingeyness might still be relevant for assessing hingeyness today: If hingeyness has been continually decreasing over time, that gives us some reason for thinking that the present time is more influential than any future time; if it’s been up and down, or increasing over time, that might give us evidence for thinking that some future time will be more influential.

Looking through history, some candidates for particularly influential times might include the following (though in almost every case, it seems to me, the people of the time would have been too intellectually impoverished to have known how hingey their time was and been able to do anything about it[7]):

• The hunter-gatherer era, which offered individuals the ability to have a much larger impact on technological progress than today.
• The Axial age, which offered opportunities to influence the formation of what are today the major world religions.
• The colonial period, which offered opportunities to influence the formation of nations, their constitutions and values.
• The formation of the USA, especially at the time just before, during and after the Philadelphia Convention when the Constitution was created.
• World War II, and the resultant comparative influence of liberalism vs fascism over the world.
• The post-WWII formation of the first somewhat effective intergovernmental institutions like the UN.
• The Cold War, and the resultant comparative influence of liberalism vs communism over the world.

In contrast, if the hingiest times are in the future, it’s likely that this is for reasons that we haven’t thought of. But there are future scenarios that we can imagine now that would seem very influential:

• If there is a future and final World War, resulting in a unified global culture, the outcome of that war could partly determine what values influence the long-run future.
• If one religion ultimately outcompetes both atheism and other religions and becomes a world religion, then the values embodied in that religion could partly determine what values influence the long-run future.[8]
• If a world government is formed, whether during peacetime or as a result of a future World War, then the constitution embodied in that could constrain development over the long-run future, whether by persisting indefinitely, having knock-on effects on future institutions, or by influencing how some other lock-in event takes place.
• The time at which settlement of other solar systems begins could be highly influential for longtermists. For example, the ownership of other solar systems could be determined by an auction among nations and/or companies and individuals (much as the USA purchased Alaska and a significant portion of the Midwest in the 19th century[9]); or by an essentially lawless race between nations (as happened with European colonisation); or through war (as has happened throughout history). If the returns from interstellar settlement pay off only over very long timescales (which seems likely), and if most of the decision-makers of the time still intrinsically discount future benefits, then longtermists at the time would be able to cheaply buy huge influence over the future.
• The time when the settlement of other galaxies begins, which might obey similar dynamics to the settlement of other solar systems.

# Implications

I said at the start that it’s non-obvious what follows, for the purposes of action, from outside-view longtermism. The most obvious course of action that might seem comparatively more promising is investment, such as saving in a long-term foundation, or movement-building, with the aim of increasing the amount of resources longtermist altruists have at a future, more hingey time. And, if one finds my second argument compelling, then research, especially into social science and moral and political philosophy, might also seem unusually promising.

These are activities that seem like they would have been good strategies across many times in the past. If we think that today is not exceptionally different from times in the past, this gives us reason to think that they are good strategies now, too.

[1] The question of what ‘resources’ in this context are is tricky. As a working definition, I’ll use 1 megajoule of stored but useable energy, where I’ll allow the form of stored energy to vary over time: so it could be in the form of grain in the past, oil today, and antimatter in the future.

[2] H/T to Carl Shulman for this wonderful quote from C.S. Lewis, The Abolition of Man: “In order to understand fully what Man’s power over Nature, and therefore the power of some men over other men, really means, we must picture the race extended in time from the date of its emergence to that of its extinction. Each generation exercises power over its successors: and each, in so far as it modifies the environment bequeathed to it and rebels against tradition, resists and limits the power of its predecessors. This modifies the picture which is sometimes painted of a progressive emancipation from tradition and a progressive control of natural processes resulting in a continual increase of human power. In reality, of course, if any one age really attains, by eugenics and scientific education, the power to make its descendants what it pleases, all men who live after it are the patients of that power. They are weaker, not stronger: for though we may have put wonderful machines in their hands we have pre-ordained how they are to use them. And if, as is almost certain, the age which had thus attained maximum power over posterity were also the age most emancipated from tradition, it would be engaged in reducing the power of its predecessors almost as drastically as that of its successors. And we must also remember that, quite apart from this, the later a generation comes — the nearer it lives to that date at which the species becomes extinct—the less power it will have in the forward direction, because its subjects will be so few. There is therefore no question of a power vested in the race as a whole steadily growing as long as the race survives. The last men, far from being the heirs of power, will be of all men most subject to the dead hand of the great planners and conditioners and will themselves exercise least power upon the future.

The real picture is that of one dominant age—let us suppose the hundredth century A.D.—which resists all previous ages most successfully and dominates all subsequent ages most irresistibly, and thus is the real master of the human species. But then within this master generation (itself an infinitesimal minority of the species) the power will be exercised by a minority smaller still. Man’s conquest of Nature, if the dreams of some scientific planners are realized, means the rule of a few hundreds of men over billions upon billions of men. There neither is nor can be any simple increase of power on Man’s side. Each new power won by man is a power over man as well. Each advance leaves him weaker as well as stronger. In every victory, besides being the general who triumphs, he is also the prisoner who follows the triumphal car.”

[3] Quantitatively: These considerations push me to put my posterior on HoH into something like the [0.1%, 1%] interval. But this credence interval feels very made-up and very unstable.

[4] These are just anecdotes, and I’d love to see someone undertake a thorough investigation of how often people tend to overreact vs underreact to technological developments, especially in terms of risk-assessment and safety. As well as for helping us understand how likely we are to be biased, this is relevant to how much we should expect other actors in the coming decades to invest in safety with respect to AI and synthetic biology.

[5] I note that this argument has been independently generated quite a number of times by different people.

[6] Though if one endorses non-causal decision theory, those times might still be decision-relevant.

[7] An exception might have been some of the US founding fathers. For example, John Adams, the second US President, commented that: “The institutions now made in America will not wholly wear out for thousands of years. It is of the last importance, then, that they should begin right. If they set out wrong, they will never be able to return, unless by accident, to the right path." (H/T Christian Tarsney for the quote.)

[8] If you’re an atheist, it’s easy to think it’s inevitable that atheists will win out in the end. But because of differences in fertility rate, the global proportion of fundamentalists is predicted to rise and the proportion of atheists is predicted to decline. What’s more, religiosity is moderately heritable, so these differences could compound into the future.  For discussion, see Shall the religious inherit the earth? by Eric Kaufman.

[9] Some numbers on this: The Louisiana Purchase cost $15 million at the time, or $250 million in today’s money, for what is now 23.3% of US territory. https://www.globalpolicy.org/component/content/article/155/25993.html Alaska cost $120 million in today’s money; its GDP today is $54 billion per year. https://fred.stlouisfed.org/series/AKNGSP

comment by Toby_Ord · 2019-09-06T20:57:46.608Z · EA(p) · GW(p)

Hi Will,

It is great to see all your thinking on this down in one place: there are lots of great points here (and in the comments too). By explaining your thinking so clearly, it makes it much easier to see where one departs from it.

My biggest departure is on the prior, which actually does most of the work in your argument: it creates the extremely high bar for evidence, which I agree probably couldn’t be met. I’ve mentioned before that I’m quite sure the uniform prior is the wrong choice here and that this makes a big difference. I’ll explain a bit about why I think that.

As a general rule if you have a domain like this that extends indefinitely in one direction, the correct prior is one that diminishes as you move further away in that direction, rather than picking a somewhat arbitrary end point and using a uniform prior on that. People do take this latter approach in scientific papers, but I think it is usually wrong to do so. Moreover in your case in particular, there are also good reasons to suspect that the chance of a century being the most influential should diminish over time. Especially because there are important kinds of significant event (such as the value lock-in or an existential catastrophe) where early occurrence blocks out later occurrence.

This directly leads to diminishing credence over time. E.g. if there is a known constant chance of such a key event happening in any century, conditional on it not having happened before, then the chance it first happens in any given century diminishes exponentially as time goes on. Or if this chance is unknown and could be anything between zero and one, then instead of an exponential decline, it diminishes more slowly (analogous to Weitzman discounting). The most famous model of this is Laplace’s Law of Succession, where if your prior for the unknown constant hazard rate per time period is uniform on the interval between 0 and 1, then the chance it happens in the nth period if it hasn’t before is 1/(n+2) — a hyperbola. I think hazard rates closer to zero and one are more likely than those in between, so I prefer the bucket-shaped Jeffreys prior (= Beta(0.5, 0.5) for the maths nerds out there), which gives a different hyperbola of 1/(2n+2) (and makes my case a little bit harder than if I’d settled for the uniform prior).

A raw application of this would say that since Homo sapiens has been around for 2,000 centuries (without, let us suppose, having had such a one-off critical time yet), the chance it happens this century is 1 in 2,002 (or 1 in 4,002). [Actually I’ll just say 1 in 2,000 (or 1 in 4,000), as the +2 is just an artefact of how we cut up the time periods and can be seen to go to zero when we use continuous time.] This is a lot more likely than your 1 in a million or 1 in 100,000. And it gets even more so when you run it in terms of persons or person-years (as I believe you should), i.e. measure time with a clock that ticks as each lifetime ends, rather than one that ticks each second. E.g. about 1/20th of all people who have ever lived are alive now, so the next century is not really 1/2,000th of human history but more like 1/20th of it. On this clock and with this prior, one would expect a 1/20 (or 1/40) chance of a pivotal event (first) occurring.
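The predictive probabilities in the last two paragraphs can be checked with a short calculation (a sketch of my own, not from the comment; the function name is made up):

```python
from fractions import Fraction

def next_period_probability(n, a=Fraction(1)):
    """Chance a one-off event first happens in the next period, after n
    event-free periods, under a Beta(a, a) prior on a constant hazard rate.
    The posterior after n failures is Beta(a, a + n), with mean a / (2a + n).
    a = 1 is Laplace's uniform prior; a = 1/2 is the Jeffreys prior."""
    a = Fraction(a)
    return a / (2 * a + n)

# 2,000 event-free centuries of Homo sapiens:
print(next_period_probability(2000))                  # 1/2002 (Laplace)
print(next_period_probability(2000, Fraction(1, 2)))  # 1/4002 (Jeffreys)
```

Re-running the same function with n = 20 "ticks" of the person-weighted clock reproduces the 1/20-ish figures in the paragraph above.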

Note that while your model applied a kind of principle of indifference uniformly across time, saying each century was equally likely (a kind of outside view), my model makes similar-sounding assumptions. It assumes that each century is equally likely to have such a high-stakes pivotal event (conditional on it not already having happened), and if you do the maths, this also corresponds to each order of magnitude of time having an equal (unconditional) chance of the pivotal event happening in it (i.e. instead of equal chance in century 1, century 2, century 3… it is equal chance in centuries 1 to 10, centuries 10 to 100, centuries 100 to 1,000), which actually seems more intuitive to me. Then there is the wrinkle that I don’t assign it across clock time, but across persons or person-years (e.g. where I say ‘century’ you could read it as ‘1 trillion person-years’). All these choices are inspired by very similar motivations to how you chose your prior.

[As an interesting side-note, this kind of prior is also what you get if you apply Richard Gott’s version of the Doomsday Argument to estimate how long we will last (say, instead of the toy model you apply), and this is another famous way of doing outside-view forecasting.]

I doubt I can easily convince you that the prior I’ve chosen is objectively best, or even that it is better than the one you used. Prior-choice is a bit of an art, rather like choice of axioms. But I hope you see that it does show that the whole thing comes down to whether you choose a prior like you did, or another reasonable alternative. My prior gives a prior chance of HoH of about 5% or 2.5%, which is thousands of times more likely than yours, and can easily be bumped up by the available evidence to probabilities >10%. So your argument doesn’t do well on sensitivity analysis over prior-choice. Additionally, if you didn’t know which of these priors to use and used a mixture with mine weighted in to a non-trivial degree, this would also lead to a substantial prior probability of HoH. And this is only worse if instead of using a 1/n hyperbola like I did, you had arguments that it declined more quickly, like 1/n^2 or an exponential. So it only goes through if you are very solidly committed to a prior like the one you used.
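The sensitivity point in this last paragraph is easy to make concrete (a toy calculation of my own; the specific numbers are illustrative assumptions, not from either comment):

```python
# Illustrative stand-in numbers: ~1e-6 for a uniform prior on HoH over a
# long future, and 0.05 for the person-weighted LLS-style prior above.
def mixture_prior(p_uniform, p_lls, weight_on_lls):
    """Prior P(HoH) under a credal mixture of the two prior-setting rules."""
    return (1 - weight_on_lls) * p_uniform + weight_on_lls * p_lls

# Even a modest 10% weight on the LLS-style prior raises the mixture to
# roughly 0.005 -- thousands of times the uniform prior on its own:
print(mixture_prior(1e-6, 0.05, 0.10))
```

The mixture is dominated by whichever component assigns the higher probability, which is why the argument is so sensitive to prior choice.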

comment by William_MacAskill · 2019-09-13T00:38:50.308Z · EA(p) · GW(p)

Hi Toby,

Thanks so much for this very clear response, it was a very satisfying read, and there’s a lot for me to chew on. And thanks for locating the point of disagreement — prior to this post, I would have guessed that the biggest difference between me and some others was on the weight placed on the arguments for the Time of Perils and Value Lock-In views, rather than on the choice of prior. But it seems that that’s not true, and that’s very helpful to know. If so, it suggests (advertisement to the Forum!) that further work on prior-setting in EA contexts is very high-value.

I agree with you that under uncertainty over how to set the prior, because we’re clearly so distinctive in some particular ways (namely, that we’re so early on in civilisation, that the current population is so small, etc), my choice of prior will get washed out by models on which those distinctive features are important; I characterised these as outside-view arguments, but I’d understand if someone wanted to characterise that as prior-setting instead.

I also agree that there’s a strong case for making the prior over persons (or person-years) rather than centuries. In your discussion, you go via number of persons (or person-years) per century to the comparative importance of centuries. What I’d be inclined to do is just change the claim under consideration to: “I am among the (say) 100,000 most influential people ever”. This means we still take into account the fact that, though more populous centuries are more likely to be influential, they are also harder to influence in virtue of their larger population.  If we frame the core claim in terms of being among the most influential people, rather than being at the most influential time, the core claim seems even more striking to me. (E.g. a uniform prior over the first 100 billion people would give a prior of 1 in 1 million of being in the 100,000 most influential people ever. Though of course, there would also be an extra outside-view argument for moving from this prior, which is that not many people are trying to influence the long-run future.)

However, I don’t currently feel attracted to your way of setting up the prior.  In what follows I’ll just focus on the case of a values lock-in event, and for simplicity I’ll just use the standard Laplacean prior rather than your suggestion of a Jeffreys prior.

In significant part my lack of attraction is because the claims — that (i) there’s a point in time where almost everything about the fate of the universe gets decided; (ii) that point is basically now; (iii) almost no-one sees this apart from us (where ‘us’ is a very small fraction of the world) — seem extraordinary to me, and I feel I need extraordinary evidence in order to have high credence in them. My prior-setting discussion was one way of cashing out why these seem extraordinary. If there’s some way of setting priors such that claims (i)-(iii) aren’t so extraordinary after all, I feel like a rabbit is being pulled out of a hat.

Then I have some specific worries with the Laplacean approach (which I *think* would apply to the Jeffreys prior, too, but I'm yet to figure out what a Fisher information matrix is, so I don't totally back myself here).

But before I mention the worries, I'll note that it seems to me that you and I are currently talking about priors over different propositions. You seem to be considering the propositions, ‘there is a lock-in event this century’ or ‘there is an extinction event this century’; I’m considering the proposition ‘I am at the most influential time ever’ or ‘I am one of the most influential people ever.’ As is well-known, when it comes to using principle-of-indifference-esque reasoning, if you use that reasoning over a number of different propositions then you can end up with inconsistent probability assignments. So, at best, one should use such reasoning in a very restricted way.

The reason I like thinking about my proposition (‘are we at the most important time?’ or ‘are we one of the most influential people ever?’) for the restricted principle of indifference, is that:

(i) I know the frequency of occurrence of ‘most influential person’, for each possible total population of civilization (past, present and future). Namely, it occurs once out of the total population. So I can look at each possible population size for the future, look at my credence in each possible population occurring, and in each case know the frequency of being the most influential person (or, more naturally, in the 100,000 most influential people).

(ii) it’s the most relevant proposition for the question of what I should do. (e.g. Perhaps it’s likely that there’s a lock-in event, but we can’t do anything about it and future people could, so we should save for a later date.)

Anyway, the worries about the Laplacean (and Jeffreys) prior:

First, the Laplacean prior seems to get the wrong answer for lots of similar predicates. Consider the claims: “I am the most beautiful person ever” or “I am the strongest person ever”, rather than “I am the most important person ever”. If we used the Laplacean prior in the way you suggest for these claims, the first person would assign 50% credence to being the strongest person ever, even if they knew that there was probably going to be billions of people to come. This doesn’t seem right to me.

Second, it also seems very sensitive to our choice of start date. If the proposition under question is, ‘there will be a lock-in event this century’, I’d get a very different prior depending on whether I chose to begin counting from: (i) the dawn of the information age; (ii) the beginning of the industrial revolution; (iii) the start of civilisation; (iv) the origin of homo sapiens; (v) the origin of the genus homo; (vi) the origin of mammals, etc.

Of course, the uniform prior has something similar, but I think it handles the issue gracefully. e.g. On priors, I should think it’s 1 in 5 million likely that I’m the funniest person in Scotland; 1 in 65 million that I’m the funniest person in Britain, 1 in 7.5 billion that I’m the funniest person in the world. Similarly, with whether I’m the most influential person in the post-industrial era, the post-agricultural era, etc.

Third, the Laplacean prior doesn’t add up to 1 across all people. For example, suppose you’re the first person and you know that there will be 3 people. Then, on the Laplacean prior, the total probability for being the most influential person ever is ½ + ½(⅓) + ½(⅔)(¼) = ¾.  But I know that someone has to be the most influential person ever. This suggests the Laplacean prior is the wrong prior choice for the proposition I’m considering, whereas the simple frequency approach gets it right.
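The arithmetic in the paragraph above can be verified directly (a minimal check of my own; the function name is invented for illustration):

```python
from fractions import Fraction

def laplace_total(N):
    """Total probability that someone among N people is 'the most influential
    ever', if person k gets Laplace's 1/(k+2) chance conditional on no
    earlier person having been it."""
    total, survive = Fraction(0), Fraction(1)
    for k in range(N):
        p_k = Fraction(1, k + 2)   # Laplace's rule after k 'failures'
        total += survive * p_k
        survive *= 1 - p_k
    return total

print(laplace_total(3))   # 3/4 -- i.e. 1/2 + (1/2)(1/3) + (1/2)(2/3)(1/4), not 1
# The simple frequency approach gives each of N people 1/N, which sums to 1.
```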

So even if one feels skeptical of the uniform prior, I think the Laplacean way of prior-setting isn't a better alternative. In general: I'm sympathetic to having a model where early people are more likely to be more influential, but a model which is uniform over orders of magnitude seems too extreme to me.

(As a final thought: Doesn’t this form of prior-setting also suffer from the problem of there being too many hypotheses?  E.g. consider the propositions:

A - There will be a value lock-in event this century
B - There will be a lock-in of hedonistic utilitarian values this century

C - There will be a lock-in of preference utilitarian values this century

D - There will be a lock-in of Kantian values this century

E - There will be a lock-in of fascist values this century

On the Laplacean approach, these would all get the same probability assignment - which seems inconsistent. And then just by stacking priors over particular lock-in events, we can get a probability that it’s overwhelmingly likely that there’s some lock-in event this century. I’ve put this comment in parentheses, though, as I feel *even less* confident about my worry here than my other worries listed.)

Replies from: Toby_Ord, Owen_Cotton-Barratt
comment by Toby_Ord · 2019-09-16T10:50:31.544Z · EA(p) · GW(p)

Thanks for this very thorough reply. There are so many strands here that I can't really hope to do justice to them all, but I'll make a few observations.

1) There are two versions of my argument. The weak/vague one is that a uniform prior is wrong and the real prior should decay over time, such that you can't make your extreme claim from priors. The strong/precise one is that it should decay as 1/n^2 in line with a version of LLS. The latter is more meant as an illustration. It is my go-to default for things like this, but my main point here is the weaker one. It seems that you agree that it should decay, and that the main question now is whether it does so fast enough to make your prior-based points moot. I'm not quite sure how to resolve that. But I note that from this position, we can't reach either your argument that from priors this is way too unlikely for our evidence to overturn (and we also can't reach my statement of the opposite of that).

2) I wouldn't use the LLS prior for arbitrary superlative properties where you fix the total population. I'd use it only if the population over time was radically unknown (so that the first person is much more likely to be strongest than the thousandth, because there probably won't be a thousand) or where there is a strong time dependency such that it happening at one time rules out later times.

3) You are right that I am appealing to some structural properties beyond mere superlatives, such as extinction or other permanent lock-in. This is because these things happening in a century would be sufficient for that century to have a decent chance of being the most influential (technically this still depends on the influenceability of the event, but I think most people would grant that conditional on next century being the end of humanity, it is no longer surprising at all if this or next century were the most influential). So I think that your prior setting approach proves too much, telling us that there is almost no chance of extinction or permanent lock-in next century (and even after updating on evidence). This feels fishy. A bit like Bostrom's 'presumptuous philosopher' example. I think it looks even more fishy in your worked example where the prior is low precisely because of an assumption about how long we will last without extinction: especially as that assumption is compatible with, say, a 50% chance of extinction in the next century. (I don't think this is a knockdown blow here: but I'm trying to indicate the part of your argument I think would be most likely to fall and roughly why).

4) I agree there is an issue to do with too many hypotheses. And a related issue with what is the first timescale on which to apply a 1/2 chance of the event occurring. I think these can be dealt with together. You modify the raw LLS prior by some other kind of prior you have for each particular type of event (which you need to have, since some are sub-events of others and rationality requires you to assign lower probability to them). You could operationalise this by asking over what time frame you’d expect a 1/2 chance of that event occurring. Then LLS isn’t acting as an indifference principle, but rather just as a way of keeping track of how to update your ur-prior in light of how many time periods have elapsed without the event occurring. I think this should work out somewhat similarly, just with a stretched PDF that still decays as 1/n^2, but am not sure. There may be a literature on this.

comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2019-09-14T22:29:37.667Z · EA(p) · GW(p)

I appreciate your explicitly laying out issues with the Laplace prior! I found this helpful.

The approach to picking a prior here which I feel least uneasy about is something like: "take a simplicity-weighted average over different generating processes for distributions of hinginess over time". This gives a mixture with some weight on uniform (very simple), some weight on monotonically-increasing and monotonically-decreasing functions (also quite simple), some weight on single-peaked and single-troughed functions (disproportionately with the peak or trough close to one end), and so on…

If we assume a big future and you just told me the number of people in each generation, I think my prior might be something like 20% that the most hingey moment was in the past, 1% that it was in the next 10 centuries, and the rest after that. After I notice that hingeyness is about influence, and causality gives a time asymmetry favouring early times, I think I might update to >50% that it was in the past, and 2% that it would be in the next 10 centuries.

(I might start with some similar prior about when the strongest person lives, but then when I begin to understand something about strength the generating mechanisms which suggest that the strongest people would come early and everything would be diminishing thereafter seem very implausible, so I would update down a lot on that.)

Replies from: Toby_Ord
comment by Toby_Ord · 2019-09-16T10:18:42.947Z · EA(p) · GW(p)

I'm sympathetic to the mixture of simple priors approach and value simplicity a great deal. However, I don't think that the uniform prior up to an arbitrary end point is the simplest as your comment appears to suggest. e.g. I don't see how it is simpler than an exponential distribution with an arbitrary mean (which is the max entropy prior over R+ conditional on a finite mean). I'm not sure if there is a max entropy prior over R+ without the finite mean assumption, but 1/x^2 looks right to me for that.

Also, re having a distribution that increases over a fixed time interval giving a peak at the end, I agree that this kind of thing is simple, but note that since we are actually very uncertain over when that interval ends, that peak gets very smeared out. Enough so that I don't think there is a peak at the end at all when the distribution is denominated in years (rather than centiles through human history or something). That said, it could turn into a peak in the middle, depending on the nature of one's distribution over durations.

comment by CarlShulman · 2019-09-07T02:19:19.233Z · EA(p) · GW(p)
I doubt I can easily convince you that the prior I’ve chosen is objectively best, or even that it is better than the one you used. Prior-choice is a bit of an art, rather like choice of axioms. But I hope you see that it does show that the whole thing comes down to whether you choose a prior like you did, or another reasonable alternative... Additionally, if you didn’t know which of these priors to use and used a mixture with mine weighted in to a non-trivial degree, this would also lead to a substantial prior probability of HoH.

I think this point is even stronger, as your early sections suggest. If we treat the priors as hypotheses about the distribution of events in the world, then past data can provide evidence about which one is right, and (the principle of) Will's prior would have given excessively low credence to humanity's first million years being the million years when life traveled to the Moon, humanity becoming such a large share of biomass, the first 10,000 years of agriculture leading to the modern world, and so forth. So those data would give us extreme evidence for a less dogmatic prior being correct.

comment by bgarfinkel (bmg) · 2019-09-13T13:55:42.598Z · EA(p) · GW(p)
If we treat the priors as hypotheses about the distribution of events in the world, then past data can provide evidence about which one is right, and (the principle of) Will's prior would have given excessively low credence to humanity's first million years being the million years when life traveled to the Moon, humanity becoming such a large share of biomass, the first 10,000 years of agriculture leading to the modern world, and so forth.

On the other hand, the kinds of priors Toby suggests would also typically give excessively low credence to these events taking so long. So the data doesn't seem to provide much active support for the proposed alternative either.

It also seems to me like different kinds of priors are probably warranted for predictions about when a given kind of event will happen for the first time (e.g. the first year in which someone is named Steve) and predictions about when a given property will achieve its maximum value (e.g. the year with the most Steves). It can therefore be consistent to expect the kinds of "firsts" you list to be relatively bunched up near the start of human history, while also expecting relevant "mosts" (such as the most hingey year) to be relatively spread out.

That being said, I find it intuitive that periods with lots of "firsts" should tend to be disproportionately hingey. I think this intuition could be used to construct a model in which early periods are especially likely to be hingey.

comment by William_MacAskill · 2019-09-13T00:49:21.255Z · EA(p) · GW(p)

I don't think I agree with this, unless one is able to make a comparative claim about the importance (from a longtermist perspective) of these events relative to future events' importance - which is exactly what I'm questioning.

I do think that weighting earlier generations more heavily is correct, though; I don't feel that much turns on whether one construes this as prior choice or an update from one's prior.

comment by MichaelDickens · 2020-06-16T04:32:10.910Z · EA(p) · GW(p)

A related outside-view argument for the HoH being more likely to occur in earlier centuries:

1. New things must happen more frequently in earlier centuries because over time, we will run out of new things to do.
2. HoH will probably occur due to some significant thing (or things) happening.
3. HoH must coincide with the first occurrence of this thing, because later occurrences of the same thing or similar things cannot be more important.

If we accept these premises, this justifies using a diminishing prior like Laplace.

comment by bgarfinkel (bmg) · 2019-09-12T20:19:40.441Z · EA(p) · GW(p)
As a general rule if you have a domain like this that extends indefinitely in one direction, the correct prior is one that diminishes as you move further away in that direction, rather than picking a somewhat arbitrary end point and using a uniform prior on that.

Just a quick thought on this issue: Using Laplace's rule of succession (or any other similar prior) also requires picking a somewhat arbitrary start point. You suggest 200000BC as a start point, but one could of course pick earlier or later years and get out different numbers. So the uniform prior's sensitivity to decisions about how to truncate the relevant time interval isn't a special weakness; it doesn't seem to provide grounds for preferring the Laplacian prior.

I think that for some notion of an "arbitrary superlative," a uniform prior also makes a lot more intuitive sense than a Laplacian prior. The Laplacian prior would give very strange results, for example, if you tried to use it to estimate the hottest day on Earth, the year with the highest portion of Americans named Zach, or the year with the most supernovas.

Moreover in your case in particular, there are also good reasons to suspect that the chance of a century being the most influential should diminish over time.

I agree with this intuition, but I suppose I see it as a reason to shift away from a uniform prior rather than to begin from something as lopsided as a Laplacian. I think that this intuition is also partially (but far from entirely) counterbalanced by the countervailing intuitions Will lists for expecting influence to increase over time.

Replies from: ESRogs, Toby_Ord
comment by ESRogs · 2020-11-05T17:52:02.632Z · EA(p) · GW(p)

Just a quick thought on this issue: Using Laplace's rule of succession (or any other similar prior) also requires picking a somewhat arbitrary start point.

Doesn't the uniform prior require picking an arbitrary start point and end point? If so, switching to a prior that only requires an arbitrary start point seems like an improvement, all else equal. (Though maybe still worth pointing out that all arbitrariness has not been eliminated, as you've done here.)

comment by Toby_Ord · 2019-09-16T09:57:50.337Z · EA(p) · GW(p)

You are right that having a fuzzy starting point for when we started drawing from the urn causes problems for Laplace's Law of Succession, making it less appropriate without modification. However, note that in terms of people who have ever lived, there isn't that much variation as populations were so low for so long, compared to now.

I see your point re 'arbitrary superlatives', but am not sure it goes through technically. If I could choose a prior over the relative timescale from the beginning to the final year of humanity, I would intuitively have peaks at both ends. But denominated in years, we don't know where the final year is and have a distribution over this that smears that second peak out over a long time. This often leaves us just with the initial peak and a monotonic decline (though not necessarily of the functional form of LLS). That said, this interacts with your first point, as the beginning of humanity is also vague, smearing that peak out somewhat too.

comment by SoerenMind · 2019-09-12T19:40:51.455Z · EA(p) · GW(p)

So your prior says, unlike Will’s, that there are non-trivial probabilities of very early lock-in. That seems plausible and important. But it seems to me that your analysis not only uses a different prior but also conditions on “we live extremely early”, which I think is problematic.

Will argues that it’s very weird we seem to be at an extremely hingy time. So we should discount that possibility. You say that we’re living at an extremely early time and it’s not weird for early times to be hingy. I imagine Will’s response would be “it’s very weird we seem to be living at an extremely early time then” (and it’s doubly weird if it implies we live in an extremely hingy time).

If living at an early time implies something that is extremely unlikely a priori for a random person from the timeline, then there should be an explanation. These 3 explanations seem exhaustive:

1) We’re extremely lucky.

2) We aren’t actually early: E.g. we’re in a simulation or the future is short. (The latter doesn’t necessarily imply that xrisk work doesn’t have much impact because the future might just be short in terms of people in our anthropic reference class).

3) Early people don’t actually have outsized influence: E.g. the hazard/hinge rate in your model is low (perhaps 1/N where N is the length of the future). In a Bayesian graphical model, there should be a strong update in favor of low hinge rates after observing that we live very early (unless another explanation is likely a priori).

Both 2) and 3) seem somewhat plausible a priori so it seems we don’t need to assume that a big coincidence explains how early we live.

Replies from: Toby_Ord
comment by Toby_Ord · 2019-09-16T10:06:00.848Z · EA(p) · GW(p)

I don't think I'm building in any assumptions about living extremely early -- in fact I think it makes as little assumption on that as possible. The prior you get from LLS or from Gott's doomsday argument says the median number of people to follow us is as many as have lived so far (~100 billion), that we have an equal chance of being in any quantile, and so for example we only have a 1 in a million chance of living in the first millionth. (Though note that since each order of magnitude contributes an equal expected value and there are infinitely many orders of magnitude, the expected number of people is infinite / has no mean.)

Replies from: SoerenMind
comment by SoerenMind · 2019-09-18T13:32:48.657Z · EA(p) · GW(p)

If you're just presenting a prior I agree that you've not conditioned on an observation "we're very early". But to the extent that your reasoning says there's a non-trivial probability of [we have extremely high influence over a big future], you do condition on some observation of that kind. In fact, it would seem weird if any Copernican prior could give non-trivial mass to that proposition without an additional observation.

I continue my response here [EA(p) · GW(p)] because the rest is more suitable as a higher-level comment.

Replies from: Liam_Donovan
comment by Liam_Donovan · 2019-12-12T09:48:54.727Z · EA(p) · GW(p)

What is a Copernican prior? I can't find any google results

comment by SoerenMind · 2019-12-13T20:33:27.214Z · EA(p) · GW(p)

It's just an informal way to say that we're probably typical observers. It's named after Copernicus because he found that the Earth isn't as special as people thought.

I don't know the history of the term or its relationship to Copernicus, but I can say how my forgotten source defined it. Suppose you want to ask, "How long will my car run?" Suppose it's a weird car that has a different engine and manufacturer than other cars, so those cars aren't much help. One place you could start is with how long it has currently been running for. This is based on the prior that you're observing it, on average, halfway through its life. If it's been running for 6 months so far, you would guess 1 year. There surely exists a more rigorous definition than this, but that's the gist.

comment by Linch · 2020-02-09T13:19:07.890Z · EA(p) · GW(p)

Wikipedia gives the physicist's version, but EAs (and maybe philosophers?) use it more broadly.

https://en.wikipedia.org/wiki/Copernican_principle

The short summary I use to describe it is that "we" are not that special, for various definitions of the word we.

Some examples on FB.

comment by Linch · 2019-09-06T23:54:00.809Z · EA(p) · GW(p)

>> And it gets even more so when you run it in terms of persons or person years (as I believe you should). i.e. measure time with a clock that ticks as each lifetime ends, rather than one that ticks each second. e.g. about 1/20th of all people who have ever lived are alive now, so the next century it is not really 1/2,000th of human history but more like 1/20th of it.

And if you use person-years, you get something like 1/7 - 1/14! [1]

>> I doubt I can easily convince you that the prior I’ve chosen is objectively best, or even that it is better than the one you used. Prior-choice is a bit of an art, rather like choice of axioms.

I'm pretty confused about how these dramatically different priors are formed, and would really appreciate it if somebody (maybe somebody less busy than Will or Toby?) could give pointers on how to read up more on forming these sorts of priors. As you allude to, this question seems to map to anthropics, and I'm curious how much the priors here necessarily map to your views on anthropics. Eg, am I reading the post and your comment correctly that Will takes an SIA view and you take an SSA view on anthropic questions?

In general, does anybody have pointers on how best to reason about anthropic and anthropic-adjacent questions?

comment by SoerenMind · 2019-09-18T13:26:59.739Z · EA(p) · GW(p)

P(high influence) isn't tiny. But if I understand correctly, that's just because

P(high influence | short future) isn't tiny whereas

P(high influence | long future) is still tiny. (I haven't checked the math, correct me if I'm wrong).

So your argument doesn't seem to save existential risk work. The only way to get a non-trivial P(high influence | long future) with your prior seems to be by conditioning on an additional observation "we're extremely early". As I argued here [EA(p) · GW(p)], that's somewhat sketchy to do.

Replies from: Toby_Ord, ofer
comment by Toby_Ord · 2019-09-19T10:01:40.556Z · EA(p) · GW(p)

I don't have time to get into all the details, but I think that while your intuition is reasonable (I used to share it) the maths does actually turn out my way. At least on one interpretation of what you mean. I looked into this when wondering if the doomsday argument suggested that the EV of the future must be small. Try writing out the algebra for a Gott style prior that there is an x% chance we are in the first x%, for all x. You get a Pareto distribution that is a power law with infinite mean. While there is very little chance on this prior that there is a big future ahead, the size of each possible future compensates for that, such that each order of magnitude of increasing size of the future contributes an equal expected amount of population to the future, such that the sum is infinite.
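
The algebra Toby describes can be sketched numerically. Under a Gott-style prior in which, given ~n people so far, Pr(total population > T) = n/T for T ≥ n (a Pareto tail with exponent 1, so a PDF of n/T^2), each order of magnitude of possible future size contributes the same expected population, and the mean diverges. A minimal check, with the ~100 billion figure from elsewhere in the thread:

```python
import math

N_SO_FAR = 100e9  # ~100 billion people have lived so far, as in the thread

def p_total_exceeds(t, n=N_SO_FAR):
    """Gott-style prior: equal chance of being at any quantile,
    so Pr(total population > t) = n / t for t >= n."""
    return n / t

def expected_pop_between(a, b, n=N_SO_FAR):
    """Expected population mass in [a, b]: the density is n / t^2,
    so the integral of t * (n / t^2) dt over [a, b] is n * ln(b / a)."""
    return n * math.log(b / a)

# A 1-in-10 chance that the total is at least 10x what has come so far:
print(p_total_exceeds(10 * N_SO_FAR))  # 0.1

# Each successive order of magnitude contributes the same expected
# population (n * ln 10), so summing over all of them diverges:
for k in range(3):
    a, b = N_SO_FAR * 10**k, N_SO_FAR * 10**(k + 1)
    print(expected_pop_between(a, b) / N_SO_FAR)  # ln(10) ≈ 2.303 each time
```

So no single future has high probability, but the contributions per order of magnitude never shrink, which is exactly why the series diverges.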

I'm not quite sure what to make of this, and it may be quite brittle (e.g. if we were somehow certain that there weren't more than 10^100 people in the future, the expected population wouldn't be all that high), but as a raw prior I really think it is both an extreme outside view, saying we are equally likely to live at any relative position in the sequence *and* that there is extremely high (infinite) EV in the future -- not because it thinks there is any single future whose EV is high, but because the series diverges.

This isn't quite the same as your claim (about influence), but does seem to 'save existential risk work' from this challenge based on priors (I don't actually think it needed saving, but that is another story).

Replies from: SoerenMind, SoerenMind
comment by SoerenMind · 2019-09-19T14:53:58.303Z · EA(p) · GW(p)

Interesting point!

The diverging series seems to be a version of the St Petersburg paradox, which has fooled me before. In the original version, you have a 2^-k chance of winning 2^k for every positive integer k, which leads to infinite expected payoff. One way in which it's brittle is that, as you say, the payoff is quite limited if we have some upper bound on the size of the population. Two other mathematical ways are 1) if the payoff is just 1.99^k or 2) if it is 2^0.99k.
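
The brittleness is easy to see numerically: in the classic version each term 2^-k · 2^k equals 1, so partial sums grow without bound, while either tweak turns the series into a convergent geometric one. A quick sketch (writing each term 2^-k · payoff(k) as ratio^k to avoid overflow):

```python
def partial_ev(ratio, K):
    """Partial sum through term K of the expected payoff, where each term
    2^-k * payoff(k) simplifies to ratio^k (ratio = payoff base / 2)."""
    return sum(ratio**k for k in range(1, K + 1))

# Classic St Petersburg: payoff 2^k, so every term is exactly 1 and the
# partial sums grow linearly in K (divergence):
print(partial_ev(2 / 2, 100))          # 100.0

# Payoff 1.99^k: geometric with ratio 0.995, converging to 0.995/0.005 = 199:
print(partial_ev(1.99 / 2, 10_000))    # ≈ 199

# Payoff 2^(0.99 k): geometric with ratio 2^-0.01, also finite (roughly 144):
print(partial_ev(2**-0.01, 10_000))
```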

comment by SoerenMind · 2019-09-23T14:53:42.445Z · EA(p) · GW(p)

On second thoughts, I think it's worth clarifying that my claim is still true even though yours is important in its own right. On Gott's reasoning, P(high influence | world has 2^N times the # of people who've already lived) is still just 2^-N (that's 2^-(N-1) if summed over all k>=N). As you said, these tiny probabilities are balanced out by asymptotically infinite impact.

I'll write up a separate objection to that claim but first a clarifying question: Why do you call Gott's conditional probability a prior? Isn't it more of a likelihood? In my model it should be combined with a prior P(number of people the world has). The resulting posterior is then the prior for further enquiries.

comment by Ofer (ofer) · 2019-09-19T15:33:33.055Z · EA(p) · GW(p)
So your argument doesn't seem to save existential risk work. The only way to get a non-trivial P(high influence | long future) with your prior seems to be by conditioning on an additional observation "we're extremely early". As I argued here [EA(p) · GW(p)], that's somewhat sketchy to do.

As you wrote [EA(p) · GW(p)], the future being short "doesn’t necessarily imply that xrisk work doesn’t have much impact because the future might just be short in terms of people in our anthropic reference class".

Another thought that comes to mind is that there may exist many evolved civilizations whose behavior is correlated with our behavior. If so, our deciding to work hard on reducing x-risks makes it more likely that those other civilizations would also decide, during their early centuries, to work hard on reducing x-risks.

comment by WilliamKiely · 2019-09-07T21:56:27.096Z · EA(p) · GW(p)

Under Toby's prior, what is the prior probability that the most influential century ever is in the past?

Replies from: Toby_Ord
comment by Toby_Ord · 2019-09-10T09:16:17.709Z · EA(p) · GW(p)

Quite high. If you think it hasn't happened yet, then this is a problem for my prior that Will's doesn't have.

More precisely, the argument I sketched gives a prior whose PDF decays roughly as 1/n^2 (which corresponds to the chance of it first happening in the next period after n absences decaying as ~1/n). You might be able to get some tweaks to this such that it is less likely than not to happen by now, but I think the cleanest versions predict it would have happened by now. The clean version of Laplace's Law of Succession, measured in centuries, says there would only be a 1/2,001 chance it hadn't happened before now, which reflects poorly on the prior, but I don't think it quite serves to rule it out. If you don't know whether it has happened yet (e.g. you are unsure of things like Will's Axial Age argument), this would give some extra weight to that possibility.
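
Both figures here can be checked directly. Under the clean Laplace setup (a uniform prior on the per-century probability p), Pr(no event in n centuries) is the integral of (1-p)^n over [0,1], which is 1/(n+1), and Pr(first occurrence exactly at century n) is 1/n - 1/(n+1) = 1/(n(n+1)), i.e. the ~1/n^2 decay mentioned above. A sketch using exact rationals:

```python
from fractions import Fraction

def p_no_event(n):
    """Pr(no event in the first n centuries) under a uniform prior on the
    per-century probability: integral of (1-p)^n dp from 0 to 1 = 1/(n+1)."""
    return Fraction(1, n + 1)

def p_first_event_at(n):
    """Pr(first occurrence exactly at century n)
    = 1/n - 1/(n+1) = 1/(n*(n+1)), which decays roughly as 1/n^2."""
    return p_no_event(n - 1) - p_no_event(n)

print(p_no_event(2000))       # 1/2001, the figure quoted above
print(p_first_event_at(10))   # 1/110
print(p_first_event_at(100))  # 1/10100
```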

comment by William_MacAskill · 2019-09-13T00:44:49.916Z · EA(p) · GW(p)

Given this, if one had a hyperprior over different possible Beta distributions, shouldn't 2000 centuries of no event occurring cause one to update quite hard against the (0.5, 0.5) or (1, 1) hyperparameters, and in favour of a prior that was massively skewed towards the per-century probability of no-lock-in-event being very low?

(And noting that, depending exactly on how the proposition is specified, I think we can be very confident that it hasn't happened yet. E.g. if the proposition under consideration was 'a values lock-in event occurs such that everyone after this point has the same values'.)
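
Will's hyperprior suggestion can be sketched concretely. Under a Beta(a, b) prior on the per-century probability, the marginal likelihood of n consecutive event-free centuries is B(a, b+n)/B(a, b), so a mixture over Beta hypotheses updates hard toward the ones skewed to low per-century probabilities. A minimal sketch, where the (0.1, 10) hypothesis is my own illustrative stand-in for a prior "massively skewed towards the per-century probability being very low":

```python
from math import lgamma, exp

def log_beta(x, y):
    """Log of the Beta function B(x, y)."""
    return lgamma(x) + lgamma(y) - lgamma(x + y)

def p_n_failures(a, b, n):
    """Marginal Pr(n consecutive 'no event' centuries) under a Beta(a, b)
    prior on the per-century probability: B(a, b + n) / B(a, b)."""
    return exp(log_beta(a, b + n) - log_beta(a, b))

n = 2000  # centuries of history without a lock-in event
hypotheses = [(1, 1), (0.5, 0.5), (0.1, 10)]  # last one: skewed toward low p
likelihoods = {h: p_n_failures(h[0], h[1], n) for h in hypotheses}

# Posterior over hypotheses, starting from equal hyperprior weights:
total = sum(likelihoods.values())
for h, lik in likelihoods.items():
    print(h, lik, lik / total)
# Beta(1,1) assigns only 1/2001 to 2000 empty centuries, Beta(0.5,0.5)
# roughly 0.013, and the skewed-low hypothesis roughly 0.59 -- so the
# posterior concentrates on the skewed-low hypothesis, as Will suggests.
```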

Replies from: Toby_Ord
comment by Toby_Ord · 2019-09-16T09:49:49.792Z · EA(p) · GW(p)

That's interesting. Earlier I suggested that a mixture of different priors that included some like mine would give a result very different to your result. But you are right to say that we can interpret this in two ways: as a mixture of ur priors or as a mixture of priors we get after updating on the length of time so far. I was implicitly assuming the latter, but maybe the former is better and it would indeed lessen or eliminate the effect I mentioned.

Your suggestion is also interesting as a general approach, choosing a distribution over these Beta distributions instead of debating between certainty in (0,0), (0.5, 0.5), and (1,1). For some distributions over Beta parameters the maths is probably quite tractable. That might be an answer to the right meta-rational approach rather than an answer to the right rational approach, or something, but it does seem nicely robust.

comment by Tobias_Baumann · 2019-09-10T10:16:08.654Z · EA(p) · GW(p)

I don't understand this. Your last comment suggests that there may be several key events (some of which may be in the past), but I read your top-level comment as assuming that there is only one, which precludes all future key events (i.e. something like lock-in or extinction). I would have interpreted your initial post as follows:

Suppose we observe 20 past centuries during which no key event happens. By Laplace's Law of Succession, we now think that the odds are 1/22 in each century. So you could say that the odds that a key event "would have occurred" over the course of 20 centuries is 1 - (1-1/22)^20 = 60.6%. However, we just said that we observed no key event, and that's what our "hazard rate" is based on, so it is moot to ask what could have been. The probability is 0.

This seems off, and I think the problem is equating "no key event" with "not hingy", which is too simple because one can potentially also influence key events in the distant future. (Or perhaps there aren't even any key events, or there are other ways to have a lasting impact.)
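
For what it's worth, the arithmetic in this reconstruction checks out:

```python
p = 1 / 22                         # Laplace's rule after 20 event-free centuries
would_have = 1 - (1 - p) ** 20     # chance a key event "would have occurred"
print(round(100 * would_have, 1))  # 60.6
```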

comment by lewish · 2020-11-08T11:51:22.836Z · EA(p) · GW(p)

I know this is an old thread, and I'm not totally sure how this affects the debate here, but for what it's worth I think applying principle of indifference-type reasoning here implies that the appropriate uninformative prior is an exponential distribution.

I apply the principle of indifference (or maybe of invariance, following Jaynes (1968)) as follows: If I wake up tomorrow knowing absolutely nothing about the world and am asked about the probability of 10 days into the future containing the most important time in history conditional on it being in the future, I should give the same answer as if I were to be woken up 100 years from now and were asked about the day 100 years and 10 days from now. I would need some further information (e.g. about the state of the world, of human society, etc.) to say why one would be more probable than the other, and here I'm looking for a prior from a state of total ignorance.

This invariance can be generalized as: Pr(X>t+k|X>t) = Pr(X>t'+k|X>t') for all k, t, t'. This happens to be the memoryless property, and the exponential distribution is the only continuous distribution that has this property. Thus if we think that our priors from a state of total ignorance should satisfy this requirement, our prior needs to be an exponential distribution. I imagine there are other ways of characterizing similar indifference requirements that imply memorylessness.
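
The memoryless property is easy to verify for the exponential: with survival function Pr(X>t) = e^(-λt), the conditional probability Pr(X>t+k|X>t) = e^(-λk) does not depend on t. A small sketch:

```python
import math

def survival(t, rate=1.0):
    """Exponential survival function: Pr(X > t) = exp(-rate * t)."""
    return math.exp(-rate * t)

def p_still_future(t, k, rate=1.0):
    """Pr(X > t + k | X > t) = survival(t + k) / survival(t)."""
    return survival(t + k, rate) / survival(t, rate)

# Memorylessness: the answer depends only on k, not on how long we've waited.
print(p_still_future(0, 10))    # exp(-10), the same as...
print(p_still_future(100, 10))  # ...after having already waited 100 units
```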

This is not to say our current beliefs should follow this distribution: we have additional information about the world, and we should update on this information. It’s also possible that the principle of indifference might be applied in a different way to give a different uninformative prior as in the Bertrand paradox.

(The Jaynes paper: https://bayes.wustl.edu/etj/articles/prior.pdf)

comment by Tobias_Baumann · 2019-09-07T11:02:00.992Z · EA(p) · GW(p)

The following is yet another perspective on which prior to use, which questions whether we should assume some kind of uniformity principle:

As has been discussed in other comments and the initial text, there are some reasons to expect later times to be hingier (e.g. better knowledge) and there are some reasons to expect earlier times to be hingier (e.g. because of smaller populations). It is plausible that these reasons skew one way or another, and this effect might outweigh other sources of variance in hinginess.

That means that the hingiest times are disproportionately likely to be either a) the earliest generation (e.g. humans in pre-historic population bottlenecks) or b) the last generation (i.e. the time just before some lock-in happens). Our time is very unlikely to be the hingiest in this perspective (unless you think that lock-in happens very soon). So this suggests a low prior for HoH; however, what matters is arguably comparing present hinginess to the future, rather than to the past. And in this perspective it would be not-very-unlikely that our time is hingier than all future times.

In other words, rather than there being anything special about our time, it could just be the case that a) hinginess generally decreases over time and b) this effect is stronger than other sources of variance in hinginess. I'm fairly agnostic about both of these claims, and Will argued against a), but it's surely likelier than 1 in 100000 (in the absence of further evidence), and arguably likelier even than 5%. (This isn't exactly HoH because past times would be even hingier.)

Replies from: Habryka
comment by Habryka · 2019-09-07T22:02:17.371Z · EA(p) · GW(p)

At least in Will's model, we are among the earliest human generations, so I don't think this argument holds much force, unless you posit a very fast-diminishing prior (which so far nobody has done).

comment by CarlShulman · 2019-09-04T22:38:00.815Z · EA(p) · GW(p)

Thanks for this post Will, it's good to see some discussion of this topic. Beyond our previous discussions, I'll add a few comments below.

hingeyness

I'd like to flag that I would really like to see a more elegant term than 'hingeyness' become standard for referring to the ease of influence in different periods.

Even just a few decades ago, a longtermist altruist would not have thought of risk from AI or synthetic biology, and wouldn’t have known that they could have taken action on them.

I would dispute this. Possibilities of AGI and global disaster were discussed by pioneers like Turing, von Neumann, Good, Minsky and others from the founding of the field of AI.

The possibility of engineered plagues causing an apocalypse was a grave concern of forward thinking people in the early 20th century as biological weapons were developed and demonstrated. Many of the anti-nuclear scientists concerned for the global prospects of humanity were also concerned about germ warfare.

Both of the above also had prominent fictional portrayals to come to mind for longtermist altruists engaging in a wide-ranging search. If there had been a longtermist altruist movement trying to catalog risks of human extinction I think they would have found both of the above, and could have worked to address them (there was reasonable scientific uncertainty about AI timelines, and people could reasonably have developed a lot more of the theory and analysis for AI alignment related topics at the time; on biological weapons arms control could have been much more effective, better governance of DURC developed, etc).

I think this goes to a broader question about the counterfactual to use for your HoH measure: there wasn't any longtermist altruist community as such in these periods, so the actual returns of all longtermist altruist strategies were zero. To talk about what they would have been one needs to consider a counterfactual in which we anachronistically introduce at least some minimal version of longtermist altruism, and what one includes in that intervention will affect the result one extracts from the exercise.

So, in general, hinginess is increasing, because our ability to think about the long-run effects of our actions, evaluate them, and prioritise accordingly, is increasing.

I agree we are learning more about how to effectively exert resources to affect the future, but if your definition is concerned with the effect of a marginal increment of resources (rather than the total capacity of an era), then you need to wrestle with the issue of diminishing returns. Smallpox eradication was extraordinarily high return compared to the sorts of global health interventions being worked on today with a more crowded field. Founding fields like AI safety or population ethics is much better on a per capita basis than expanding them by 1% after they have developed more. The longtermist of 1600 would indeed have mostly 'invested' in building a movement and eventually in things like financial assets when movement-building returns fell below financial returns, but they also should have made concrete interventions like causing the leveraged growth of institutions like science and the Enlightenment that looked to have a fair chance of contributing to HoH scenarios over the coming centuries, and those could have paid off.

This is analogous to the general point in financial markets that asset classes with systematically high returns only have them before those returns are widely agreed on to be valuable and accessible. So startup founders or CEOs can earn large excess returns in expected value for their huge concentrated positions in their firms (in their founder shareholding and stock-based compensation) because of asymmetric information and incentive problems: investors want the founder or CEO to have a concentrated position to ensure good management, but the risk-adjusted value of a concentrated position is less for the same expected value, so the net arrangement delivers a lot of excess expected value.

A world in which everyone has shared correct values and strong knowledge of how to improve things is one in which marginal longtermist resources are gilding the lily. Insofar as longtermist altruists happen to find themselves with some advantages (e.g. high education in an era of educational inequality, and longtermism-relevant values or knowledge in particular), those advantages are a potentially important asset to make use of.

The simulation update argument against HoH

I would note that the creation of numerous simulations of HoH-type periods doesn't reduce the total impact of the actual HoH folk. E.g. say that HoH folk get to influence 10^60 future people, and also get their lives simulated 10^50 times (with no ability to impact things beyond their own lives), while folk in a non-HOH Earthly period get to influence 10^55 future people and get simulated 10^42 times. Because simulations account for a small minority of the total influence, the expected value of an action (or the evidential value of a strategy across all like minds) is still driven primarily by the non-simulated cases. Seeming HoH folk may be simulated more often, but still have most of their influence through unsimulated shaping of history.

If simulations were so numerous that most of the value in history lay in simulations, rather than in basement-level influence, then things might be different. But I think argument #3 doesn't work for this reason.

Third, even if we’re at some enormously influential time right now, if there’s some future time that is even more influential, then the most obvious EA activity would be to invest resources (whether via financial investment or some sort of values-spreading) in order that our resources can be used at that future, more high-impact, time. Perhaps there’s some reason why that plan doesn’t make sense; but, currently, almost no-one is even taking that possibility seriously.

I think this overstates the case. Diminishing returns to expenditures in a particular time favor a nonzero disbursement rate (e.g. with logarithmic returns to spending at a given time 10x HoH levels would drive a 10x expenditure for a given period).
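A minimal sketch of the logarithmic-returns point (my own illustration; the hingeyness weights and budget are made up): maximizing the sum of v_i * log(x_i) subject to a fixed budget allocates spending in proportion to the weights, so a 10x-hingier period gets 10x the spending, not the whole budget.

```python
# With log returns, maximize sum(v_i * log(x_i)) subject to sum(x_i) = budget.
# The first-order condition v_i / x_i = lambda gives x_i proportional to v_i.
def optimal_spend(hingeyness_weights, budget):
    total = sum(hingeyness_weights)
    return [budget * v / total for v in hingeyness_weights]

# A period 10x as hingey gets 10x the expenditure, not everything:
spend = optimal_spend([1.0, 10.0], budget=110.0)
assert spend == [10.0, 100.0]
```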

Most resources associated with EA are in investments. As Michael Dickens writes, small donors are holding most of their assets as human capital and not borrowing against it, while large donors such as Good Ventures are investing the vast majority of their financial assets for future donations. Insofar as people who have not yet entered EA but will do so are part of the broad EA portfolio, the annual disbursement rate of the total portfolio is even lower, perhaps 1-2% or less. And investment returns mean that equal allocation of NPV of current assets between time periods yields larger total spending in future periods (growing with the investment rate).

Moreover, quite a lot of EA donations actually consist in field- and movement-building (EA, longtermism, x-risk reduction), to the point of drawing criticism about excessive inward focus in some cases. Insofar as those fields are actually built they will create and attract resources with some flexibility to address future problems, and look like investments (this is not universal; e.g. GiveDirectly cash transfers had a larger field-building element when GiveDirectly was newer, but it is hard to recover increased future altruistic capacities later from cash transfers).

Looking through history, some candidates for particularly influential times might include the following (though in almost every case, it seems to me, the people of the time would have been too intellectually impoverished to have known how hingey their time was and been able to do anything about it):

I would distinguish between an era being important (on the metric of how much an individual or unit of resource could do) because its population was low, because there was important potential for a lock-in event in a period, and because of high visibility/tractability of longtermist altruists affecting such events (although the effects of that on marginal returns are nonobvious because of crowding, and the highest returns being on neglected assets).

The population factor gets ~monotonically and astronomically worse over time. The chance of lock-in should be distributed across eras (more by technological levels than calendar years), with more as technology advances towards actual high-fidelity stabilization as a possibility (via extinction or lock-in), and less over time thereafter due to pre-emption (if there is a 1/1000 per-year chance of stabilization in extinction or a locked-in civilization, then the world will almost certainly be in a stable state a million years hence, so the expected per-year chance of stabilization needs to decline enormously on average over the coming era, in addition to falling per capita influence; this is related to Laplace's rule of succession: under some conditions, the longer we go without an event happening, the less likely it is to happen on the next timestep, even aside from the object-level reasons re the speed of light and lock-in tech).
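A quick numerical sketch of both points (my own illustration, using the 1/1000-per-year figure above and the standard (events+1)/(trials+2) form of Laplace's rule):

```python
import math

# With a constant 1/1000 per-year chance of stabilization (extinction or lock-in),
# the probability of remaining un-stabilized a million years out is vanishingly small:
log_p_unstabilized = 1_000_000 * math.log(1 - 1 / 1000)
assert log_p_unstabilized < -1000  # i.e. P(survival) < e^-1000

# Laplace's rule of succession: P(event on the next step) = (events + 1) / (trials + 2).
# With no stabilization yet observed, the estimated per-step hazard falls over time:
def laplace_next_step(events: int, trials: int) -> float:
    return (events + 1) / (trials + 2)

assert laplace_next_step(0, 100) > laplace_next_step(0, 100_000)
```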

So I would say both the population and pre-emption (by earlier stabilization) factors intensely favor earlier eras in per resource hingeyness, constrained by the era having any significant lock-in opportunities and the presence of longtermists.

When I check that against the opportunities of past periods that does make sense to me. It seems quite plausible that 1960 was a much better time for a marginal altruist to take object-level actions to reduce long-run x-risk (and better in expected terms without the benefit of hindsight regarding things like nuclear doomsday devices and AI and BW timelines) by building relevant fields with less crowding (building a good EAish movement looks even better; 'which is the hingeyest period' is distinct from 'is hingeyness declining faster than ROI for financial instruments or movement-building').

The growth of a longtermist altruist movement in particular would mean marginal per capita hingeyness (drawn around longtermist interests) should seriously decline going forward.

In contrast, if the hingiest times are in the future, it’s likely that this is for reasons that we haven’t thought of. But there are future scenarios that we can imagine now that would seem very influential:

For the later scenarios here you're dealing with much larger populations. If the plausibility of important lock-in is similar for solar colonization and intergalactic colonization eras, but the population of the latter is billions of times greater, it doesn't seem to be at all an option that it could be the most HoH period on a per resource unit basis.

comment by William_MacAskill · 2019-09-05T03:48:33.157Z · EA(p) · GW(p)
So I would say both the population and pre-emption (by earlier stabilization) factors intensely favor earlier eras in per resource hingeyness, constrained by the era having any significant lock-in opportunities and the presence of longtermists.

I think this is a really important comment; I see I didn't put these considerations into the outside-view arguments, but I should have done as they make for powerful arguments.

The factors you mention are analogous to the parameters that go into the Ramsey model for discounting: (i) a pure rate of time preference, which can account for risk of pre-emption; (ii) a term to account for there being more (and, presumably, richer) future agents and some sort of diminishing returns as a function of how many future agents (or total resources) there are. Then given uncertainty about these parameters, in the long run the scenarios that dominate the EV calculation are where there’s been no pre-emption and the future population is not that high. e.g. There's been some great societal catastrophe and we're rebuilding civilization from just a few million people. If we think the inverse relationship between population size and hingeyness is very strong, then maybe we should be saving for such a possible scenario; that's the hinge moment.
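For reference, the Ramsey discounting rule being alluded to here (a standard textbook form, with the usual symbols rather than anything from the post) is:

```latex
% Ramsey rule for the social discount rate
% \delta : pure rate of time preference (here standing in for pre-emption risk)
% \eta   : elasticity of marginal utility (how fast returns diminish in resources)
% g      : growth rate of (per-agent) resources
r = \delta + \eta g
```

Uncertainty over these parameters is what lets low-probability, low-population rebuilding scenarios dominate the long-run expected value.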

For the later scenarios here you're dealing with much larger populations. If the plausibility of important lock-in is similar for solar colonization and intergalactic colonization eras, but the population of the latter is billions of times greater, it doesn't seem to be at all an option that it could be the most HoH period on a per resource unit basis.

I agree that other things being equal a time with a smaller population (or: smaller total resources) seems likelier to be a more influential time.  But ‘doesn't seem to be at all an option’ seems overstated to me.

Simple case: consider a world where there just aren’t options to influence the very long-run future. (Agents can make short-run perturbations but can’t affect long-run trajectories; some sort of historical determinism is true). Then the most influential time is just when we have the best knowledge of how to turn resources into short-run utility, which is presumably far in the future.

Or, more importantly, consider a world where hingeyness is essentially 0 up until a certain point far in the future. If our ability to positively influence the very long-run future were no better than a dart-throwing chimp's until we’ve got computers the size of solar systems, then the most influential times would also involve very high populations.

More generally, per-resource hingeyness increases with:

• Availability of pivotal moments one can influence, and their pivotality
• Knowledge / understanding of how to positively influence the long-run future

And hingeyness decreases with:

• Population size
• Level of expenditure on long-term influence
• Chance of being pre-empted already

If knowledge or availability of pivotal moments at a time is 0, then hingeyness at the time is 0, and lower populations can’t outweigh that.

Replies from: CarlShulman, Tobias_Baumann, CarlShulman
comment by CarlShulman · 2019-09-05T16:32:03.719Z · EA(p) · GW(p)

> Then given uncertainty about these parameters, in the long run the scenarios that dominate the EV calculation are where there’s been no pre-emption and the future population is not that high. e.g. There's been some great societal catastrophe and we're rebuilding civilization from just a few million people. If we think the inverse relationship between population size and hingeyness is very strong, then maybe we should be saving for such a possible scenario; that's the hinge moment.

I agree (and have used in calculations about optimal disbursement and savings rates) that the chance of a future altruist funding crash is an important reason for saving (e.g. medium-scale donors can provide insurance against a huge donor like the Open Philanthropy Project not entering an important area or being diverted). However, the particularly relevant kind of event for saving is the possibility of a 'catastrophe' that cuts other altruistic funding or similar while leaving one's savings unaffected. Good Ventures going awry fits that bill better than a nuclear war (which would also destroy a DAF saving for the future with high probability).

Saving extra for a catastrophe that destroys one's savings and the broader world at the same rate is a bet on proportional influence being more important in the poorer smaller post-disaster world, which seems like a weaker consideration. Saving or buying insurance to pay off in those cases, e.g. with time capsule messages to post-apocalyptic societies, or catastrophe bonds/insurance contracts to release funds in the event of a crash in the EA movement, get more oomph.

I'll also flag that we're switching back and forth here between the question of which century has the highest marginal impact per unit resources and which periods are worth saving/expending how much for.

>Then given uncertainty about these parameters, in the long run the scenarios that dominate the EV calculation are where there’s been no pre-emption and the future population is not that high.

I think this is true for what little EV of 'most important century' remains so far out, but that residual is very small. Note that Martin Weitzman's argument for discounting the future at the lowest possible rate (where we consider even very unlikely situations where discount rates remain low to get a low discount rate for the very long-term) gives different results with an effectively bounded utility function. If we face a limit like '~max value future' or 'utopian light-cone after a great reflection' then we can't make up for increasingly unlikely scenarios with correspondingly greater incremental probability of achieving ~ that maximum: diminishing returns mean we can't exponentially grow our utility gained from resources indefinitely (going from 99% of all wealth to 99.9% or 99.999% and so on will yield only a bounded increment to the chance of a utopian long-term). A related limit to growth (although there is some chance it could be avoided, making it another drag factor) comes if the chances of expropriation rise as one's wealth becomes a larger share of the world (a foundation with 50% of world wealth would be likely to face new taxes).

comment by Tobias_Baumann · 2019-09-05T11:04:43.795Z · EA(p) · GW(p)
inverse relationship between population size and hingeyness

Maybe it's a nitpick but I don't think this is always right. For instance, suppose that from now on, population size declines by 20% each century (indefinitely). I don't think that would mean that later generations are more hingey? Or, imagine a counterfactual where population levels are divided by 10 across all generations – that would mean that one controls a larger fraction of resources but can also affect fewer beings, which prima facie cancels out.

It seems to me that the relevant question is whether the present population size is small compared to the future, i.e. whether the present generation is a "population bottleneck". (Cf. Max Daniel's comment.) That's arguably true for our time (especially if space colonisation becomes feasible at some point) and also in the rebuilding scenario you mentioned.

comment by CarlShulman · 2019-09-05T06:27:14.811Z · EA(p) · GW(p)
But ‘doesn't seem to be at all an option’ seems overstated to me.

In expectation, just as a result of combining likelihoods of a hinge that are comparable within a few OOM across eras/transitions with populations that differ by far more than a few OOM. I was not ruling out specific scenarios, in the sense that it is possible that a random lottery ticket is the winner and worth tens of millions of dollars, but it is not an option for best investment.

Generally, I'm thinking in expectations since they're more action-guiding.

comment by William_MacAskill · 2019-09-05T03:13:26.225Z · EA(p) · GW(p)

Hi Carl,

Thanks so much for taking the time to write this excellent response, I really appreciate it, and you make a lot of great points.  I’ll divide up my reactions into different comments; hopefully that helps ease of reading.

I'd like to flag that I would really like to see a more elegant term than 'hingeyness' become standard for referring to the ease of influence in different periods.

This is a good idea. Some options: influentialness; criticality; momentousness; importance; pivotality; significance.

I’ve created a straw poll here to see as a first pass what the Forum thinks.


Replies from: Jonas Vollmer, CarlShulman, RyanCarey
comment by Jonas Vollmer · 2020-09-25T12:28:44.772Z · EA(p) · GW(p)

Now it's officially on BBC: https://www.bbc.com/future/article/20200923-the-hinge-of-history-long-termism-and-existential-risk

But here’s another adjective for our times that you may not have heard before: “hingey”.

Although it also says:

(though MacAskill now prefers the term “influentialness”, as it sounds less flippant)

comment by CarlShulman · 2019-09-05T16:10:28.429Z · EA(p) · GW(p)

Thinking further, I would go with importance among those options for 'total influence of an era' but none of those terms capture the 'per capita/resource' element, and so all would tend to be misleading in that way. I think you would need an explicit additional qualifier to mean not 'this is the century when things will be decided' but 'this is the century when marginal influence is highest, largely because ~no one tried or will try.'

comment by RyanCarey · 2019-09-05T15:36:39.882Z · EA(p) · GW(p)

Criticality is confusing because it describes the point when a nuclear reaction becomes self-sustaining, and relates to "critical points" in the related area of dynamical systems, which is somewhat different from what we're talking about.

I think Hingeyness should have a simple name because it is not a complicated concept - It's how much actions affect long-run outcomes. In RL, in discussion of prioritized experience replay, we would just use something like "importance". I would generally use "(long-run) importance" or "(long-run) influence" here, though I guess pivotality (from Yudkowsky's "pivotal act") is alright in a jargon-liking context (like academic papers).

Edit: From Carl's comment, and from rereading the post, the per-resource component seems key. So maybe per-resource importance.

comment by William_MacAskill · 2019-09-05T03:31:43.900Z · EA(p) · GW(p)
I think this overstates the case. Diminishing returns to expenditures in a particular time favor a nonzero disbursement rate (e.g. with logarithmic returns to spending at a given time 10x HoH levels would drive a 10x expenditure for a given period)

Sorry, I wasn’t meaning we should be entirely punting to the future, and in case it’s not clear from my post my actual all-things-considered views is that longtermist EAs should be endorsing a mixed strategy of some significant proportion of effort spent on near-term longtermist activities and some proportion of effort spent on long-term longtermist activities.

I do agree that, at the moment, EA is mainly investing (e.g. because of Open Phil and because of human capital and because much actual expenditure is field-building-y, as you say). But it seems like at the moment that’s primarily because of management constraints and weirdness of borrowing-to-give (etc), rather than a principled plan to spread giving out over some (possibly very long) time period. Certainly the vibe in the air is ‘expenditure (of money or labour) now is super important, we should really be focusing on that’.

(I also don’t think that diminishing returns is entirely true: there are fixed costs and economies of scale when trying to do most things in the world, so I expect s-curves in general. If so, that would favour a lumpier disbursement schedule.)

Replies from: CarlShulman
comment by CarlShulman · 2019-09-05T17:34:33.016Z · EA(p) · GW(p)
I do agree that, at the moment, EA is mainly investing (e.g. because of Open Phil and because of human capital and because much actual expenditure is field-building-y, as you say). But it seems like at the moment that’s primarily because of management constraints and weirdness of borrowing-to-give (etc), rather than a principled plan to spread giving out over some (possibly very long) time period.

I agree that many small donors do not have a principled plan and are trying to shift the overall portfolio towards more donation soon (which can have the effect of 100% now donation for an individual who is small relative to the overall portfolio).

However, I think that institutionally there are in fact mechanisms to regulate expenditures:

• Evaluations of investments in movement-building involve estimations of the growth of EA resources that will result, and comparisons to financial returns; as movement-building returns decline they will start to fall under the financial return benchmark and no longer be expanded in that way
• The Open Philanthropy Project has blogged about its use of the concept of a 'last dollar' opportunity cost of funds, asking for current spending whether in expectation it will do more good than saving it for future opportunities; assessing last dollars opportunity cost involves use of market investment returns, and the value of savings as insurance for the possibility of rare conditions that could provide enhanced returns (a collapse of other donors in core causes rather than a glut, major technological developments, etc)
• Some other large and small donors likewise take into account future opportunities
• Advisory institutions such as 80,000 Hours, charity evaluators, grantmakers, and affiliated academic researchers are positioned to advise change if donors start spending down too profligately (I for one stand ready for this wrt my advice to longtermist donors focused on existential risk)

All that said, it's valuable to improve broader EA community understanding of intertemporal tradeoffs, and estimation of the relevant parameters to determine disbursement rates better.

comment by William_MacAskill · 2019-09-05T03:27:35.223Z · EA(p) · GW(p)
I agree we are learning more about how to effectively exert resources to affect the future, but if your definition is concerned with the effect of a marginal increment of resources (rather than the total capacity of an era), then you need to wrestle with the issue of diminishing returns.

I agree with this, though if we’re unsure about how many resources will be put towards longtermist causes in the future, then the expected value of saving will come to be dominated by the scenario where very few resources are devoted to it. (As happens in the Ramsey model for discounting if one includes uncertainty over future growth rates and the possibility of catastrophe.) This consideration gets stronger if one thinks the diminishing marginal returns curve is very steep.

E.g. perhaps in 150 years’ time, EA and Open Phil and longtermist concern will be dust; in which case those who saved for the future (and ensured that there would be at least some sufficiently likeminded people to pass their resources onto) will have an outsized return. And perhaps returns diminish really steeply, so that what matters is guaranteeing that there are at least some longtermists around. If the outsized return in this scenario is large enough, then even a low probability of this scenario might be the dominant consideration.

Founding fields like AI safety or population ethics is much better on a per capita basis than expanding them by 1% after they have developed more.

Strongly agree, though by induction it seems we should think there will be more such fields in the future.

The longtermist of 1600 would indeed have mostly 'invested' in building a movement and eventually in things like financial assets when movement-building returns fell below financial returns, but they also should have made concrete interventions like causing the leveraged growth of institutions like science and the Enlightenment that looked to have a fair chance of contributing to HoH scenarios over the coming centuries, and those could have paid off.

You might think the counterfactual is unfair here, but I wouldn’t regard it as accessible to someone in 1600 to know that they could make contributions to science and the Enlightenment as a good way of influencing the long-run future.

This is analogous to the general point in financial markets that asset classes with systematically high returns only have them before those returns are widely agreed on to be valuable and accessible...
A world in which everyone has shared correct values and strong knowledge of how to improve things is one in which marginal longtermist resources are gilding the lily.

Though if we’re really clueless right now (perhaps not much better than the person in 1600) then perhaps that’s the best we can do.

And it would seem that the really high-value scenario is where (i) knowledge is very high but (ii) concern for the very long-run future is very low (but not nonexistent, allowing for resources to be passed onto those times).

In terms of the financial analogy, that would be like how someone with strange preferences, who gets extraordinary utility from eating bread and potatoes, gets a much higher return (when measured in utility gained) from a regular salary than other people would.

And in general I'm more inclined to believe stories of us having extraordinary impact if that primarily results from a difference in what we care about compared with others, rather than from having greater insight.

I will say, though: the argument “we’re at an unusual period where longtermist (/impartial consequentialish) concern is very low but not nonexistent” as a reason for now being a particularly influential time seems pretty good to me, and wasn’t one that I included in my list of arguments in favour of HoH.

Replies from: CarlShulman
comment by CarlShulman · 2019-09-05T06:31:52.526Z · EA(p) · GW(p)
You might think the counterfactual is unfair here, but I wouldn’t regard it as accessible to someone in 1600 to know that they could make contributions to science and the Enlightenment as a good way of influencing the long-run future.

Is longtermism accessible today? That's a philosophy of a narrow circle, as Baconian science and the beginnings of the culture of progress were in 1600. If you are a specialist focused on moral reform and progress today with unusual knowledge, you might want to consider a counterpart in the past in a similar position for their time.

comment by William_MacAskill · 2019-09-05T03:18:23.386Z · EA(p) · GW(p)
To talk about what they would have been one needs to consider a counterfactual in which we anachronistically introduce at least some minimal version of longtermist altruism, and what one includes in that intervention will affect the result one extracts from the exercise.

I agree there’s a tricky issue of how exactly one constructs the counterfactual. The definition I’m using is trying to get it as close as possible to a counterfactual we really face: how much to spend now vs how much to pass resources onto future altruists. I’d be interested if others thought of very different approaches. It’s possible that I’m trying to pack too much into the concept of ‘most influential’, or that this concept should be kept separate from the idea of moving resources around to different times.

I feel that involving the anachronistic insertion of a longtermist altruist into the past, if anything, makes my argument harder to make, though. If I can’t guarantee that the past person I’m giving resources to would even be a longtermist, that makes me less inclined to give them resources. And if I include the possibility that longtermism might be wrong and that the future-person that I pass resources onto will recognise this, that’s (at least some) argument to me in favour of passing on resources. (Caveat subjectivist meta-ethics, possibility of future people’s morality going wayward, etc.)

Replies from: Habryka
comment by Habryka · 2019-09-05T04:55:43.717Z · EA(p) · GW(p)
I’d be interested if others thought of very different approaches. It’s possible that I’m trying to pack too much into the concept of ‘most influential’, or that this concept should be kept separate from the idea of moving resources around to different times.

I tried engaging with the post for 2-3 hours and was working on a response, but ended up kind of bouncing off at least in part because the definition of hingyness didn't seem particularly action-relevant to me, mostly for the reasons that Gregory Lewis and Kit outlined in their comments.

I also think a major issue with the current definition is that I don't know of any technology or ability to reliably pass on resources to future centuries, which introduces a natural strong discount factor into the system, but which seems like a major consideration in favor of spending resources now instead of trying to pass them on (and likely fail, as illustrated in Robin Hanson's original "giving later" post).

comment by William_MacAskill · 2019-09-05T03:16:53.906Z · EA(p) · GW(p)
I would dispute this. Possibilities of AGI and global disaster were discussed by pioneers like Turing, von Neumann, Good, Minsky and others from the founding of the field of AI.

Thanks, I’ve updated on this since writing the post and think my original claim was at least too strong, and probably just wrong. I don’t currently have a good sense of, say, if I were living in the 1950s, how likely I would be to figure out AI as the thing, rather than focus on something else that turned out not to be as important (e.g. the focus on nanotech by the Foresight Institute (a group of idealistic futurists) in the late 80s could be a relevant example).

Replies from: CarlShulman
comment by CarlShulman · 2019-09-05T06:36:47.672Z · EA(p) · GW(p)

I'd guess a longtermist altruist movement would have wound up with a flatter GCR portfolio at the time. It might have researched nuclear winter and dirty bombs earlier than in OTL (and would probably invest more in nukes than today's EA movement), and would have expedited the (already pretty good) reaction to the discovery of asteroid risk. I'd also guess it would have put a lot of attention on the possibility of stable totalitarianism as lock-in.

comment by SiebeRozendal · 2019-09-15T11:49:50.435Z · EA(p) · GW(p)
I'd like to flag that I would really like to see a more elegant term than 'hingeyness' become standard for referring to the ease of influence in different periods.

Some ideas: "Leverage", "temporal leverage", "path-dependence", "moment" (in relation to the concept from physics), "path-criticality" (meaning how many paths are closed off by decisions in the current time). Anyone else with ideas?

Replies from: MichaelA
comment by MichaelA · 2019-10-12T08:43:21.308Z · EA(p) · GW(p)

I like "leverage" (which I'd imagine being used in ways like "the highest leverage time in history" or "the time in history where an altruist can have the highest leverage"). Compared to the other options Will suggested, "leverage" seems to me to somewhat more clearly signal the "per capita/resource" element highlighted above [EA(p) · GW(p)] (or more simply the sense that one isn't just saying that x time is important, but also that something can predictably be done at x time to influence the future).

One potential downside is that it's possible "leverage" would cause a bit of confusion for some people, if the financial sense of "leverage" comes to their mind more readily than the sort of "Give me a lever long enough and a fulcrum on which to place it, and I shall move the world" sense.

comment by William_MacAskill · 2019-09-05T03:28:31.558Z · EA(p) · GW(p)
I would note that the creation of numerous simulations of HoH-type periods doesn't reduce the total impact of the actual HoH folk

Agree that it might well be that even though one has a very low credence in HoH, one should still act in the same way. (e.g. because if one is not at HoH, one is a sim, and your actions don’t have much impact).

The sim-arg could still cause you to change your actions, though. It’s somewhat plausible to me, for example, that the chance of being a sim if you’re at the very most momentous time is 1000x higher than the chance of being a sim if you’re at the 20th most hingey time, but the most hingey time is not 1000x more hingey than the 20th most hingey time. In which case the hypothesis that you’re at the 20th most hingey time has a greater relative importance than it had before.

Replies from: Johannes_Treutlein, Olle Häggström
comment by Johannes_Treutlein · 2019-09-07T22:50:01.337Z · EA(p) · GW(p)

Your argument seems to combine SSA style anthropic reasoning with CDT. I believe this is a questionable combination as it gives different answers from an ex-ante rational policy or from updateless decision theory (see e.g. https://www.umsu.de/papers/driver-2011.pdf). The combination is probably also dutch-bookable.

Consider the different hingeynesses of times as the different possible worlds and your different real or simulated versions as your possible locations in that world. Say both worlds are equally likely a priori and there is one real version of you in both worlds, but the hingiest one also has 1000 subjectively indistinguishable simulations (which don't have an impact). Then SSA tells you that you are much less likely to be a real person in the hingiest time than a real person in the 20th hingiest time. Using these probabilities to calculate your CDT-EV, you conclude that the effects of your actions on the 20th hingiest time dominate.

Alternatively, you could combine CDT with SIA. Under SIA, being a real person in either time is equally likely. Or you could combine the SSA probabilities with EDT. EDT would recommend acting as if you were controlling all simulations and the real person at once, no matter whether you are in the simulation or not. In either case, you would conclude that you should do what is best for the hingiest time (given that they are equally likely a priori).

Unlike the SSA+CDT approach, either of these latter approaches would (in this case) yield the actions recommended by someone coordinating everyone's actions ex ante.
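The contrast can be made concrete with a toy calculation (my own rendering of the example above: two equally likely worlds, with 1000 impactless simulations in the hingiest one):

```python
# World A (hingiest): 1 real copy of you + 1000 impactless simulations.
# World B (20th hingiest): 1 real copy of you, no simulations.
prior = {"A": 0.5, "B": 0.5}
copies = {"A": 1001, "B": 1}  # subjectively indistinguishable locations

# SSA: pick a world by the prior, then a random copy within that world.
p_real_ssa = {w: prior[w] / copies[w] for w in prior}
assert p_real_ssa["A"] < p_real_ssa["B"]  # CDT+SSA: the 20th-hingiest world dominates

# SIA: weight every copy equally across worlds.
total_copies = sum(prior[w] * copies[w] for w in prior)
p_real_sia = {w: prior[w] / total_copies for w in prior}
assert p_real_sia["A"] == p_real_sia["B"]  # the two real copies are equally likely
```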

comment by Olle Häggström · 2019-09-06T17:16:52.713Z · EA(p) · GW(p)

Is this slightly off? The factor that goes into the expected impact is the chance of being a non-sim (not the chance of being a sim), so for the argument to make sense, you might wish to replace "the chance of being a sim [...] is 1000x higher than..." by "the chance of being a non-sim is just 1/1000 of..."?

comment by Gregory Lewis (Gregory_Lewis) · 2019-09-03T14:56:17.815Z · EA(p) · GW(p)

Excellent work; some less meritorious (and borderline repetitious) remarks:

1) One corollary of this line of argument is that even if one is living at a 'hinge of history', one should not reasonably believe this, given the very adverse prior and the likely weak confirmatory evidence one would have access to.

2) The invest for the future strategy seems to rely on our descendants improving their epistemic access to the point where they can reliably determine whether they're at a 'hinge' or not, and deploying resources appropriately. There are grounds for pessimism about this ability ever being attained. Perhaps history (or the universe as a whole) is underpowered for these inferences.

3) With the benefit of hindsight we could assess the distribution of hingeyness/influence across previous times, which would give a steer as to whether we should think there are hingey periods of vastly outsized influence in the first place.

4) If we grant that the ground truth is occasional 'crucial moments', but we expect evidence at the time for living in one of these to be scant, my intuition is the optimal strategy would be to husband resources and spend them disproportionately when the evidence gives some (but not decisive) indication that one of these crucial moments is now.

Depending on how common these 'probably false alarms' are (plus things like how reliably we can steward resources for long periods of time), this might amount to monomaniacal work on immediate challenges. E.g., the prior is (say) 1/million this decade, but if the evidence suggests it is 1%, perhaps we should drop everything to work on it, if we won't expect our credence to be this high again for another millennium.

5) Minor: Although partly priced in to considerations about how 'early' we are, there are also issues of conditional dependence. If extinction risk is 1% this century but 10% the next, one should probably spend somewhat disproportionately on the first one (and other cases where getting access to a 'bigger hinge' relies on going the right way on an earlier, smaller, one).

comment by William_MacAskill · 2019-09-05T04:02:10.863Z · EA(p) · GW(p)

The way I'd think about it is that we should be uncertain about how justifiably confident people can be that they're at the HoH. If our current credence in HoH is low, then the chance that it might be justifiably much higher in the future should be the significant consideration. At least if we put aside simulation worries, I can imagine evidence which would lead me to have high confidence that I'm at the HoH.

E.g., the prior is (say) 1/million this decade, but if the evidence suggests it is 1%, perhaps we should drop everything to work on it, if we won't expect our credence to be this high again for another millennium.

I think if those were one's credences, what you say makes sense. But it seems hard for me to imagine a (realistic) situation where I think there's a 1% chance of HoH this decade, but I'm confident that the chance will be much, much lower than that for all of the next 99 decades.

For what it's worth, my intuition is that pursuing a mixed strategy is best; some people aiming for impact now, in case now is a hinge, and some people aiming for impact in many many years, at some future hinge moment.

comment by gwern · 2019-09-10T03:10:34.431Z · EA(p) · GW(p)

One of the amusing things about the 'hinge of history' idea is that some people make the mediocrity argument about their present time - and are wrong.

Isaac Newton, for example, 300 years ago appears to have made an anthropic argument: he held that claims that he lived in a special time (some kind of 'Revolution', say, given the visible acceleration of progress and the recent invention of technologies) were wrong, and that in reality there was an ordinary rate of innovation; the recent invention of many things merely showed that humans had a very short past and were still making up for lost time (because comets routinely drove intelligent species extinct).

And Lucretius ~1800 years before Newton (probably relaying older Epicurean arguments) made his own similar argument, arguing that Greece & Rome were not any kind of exception compared to human history - certainly humans hadn't existed for hundreds of thousands or millions of years! - and if Greece & Rome seemed innovative compared to the dark past, it was merely because "our world is in its youth: it was not created long ago, but is of comparatively recent origin. That is why at the present time some arts are still being refined, still being developed."

One could read these mistakes in a very Kurzweilian fashion: if progress is accelerating or even just stable, every era *can* be (much) more innovative and influential on the future than every preceding era was, and the mediocrity argument wrong every time.

comment by trammell · 2019-09-12T22:02:05.377Z · EA(p) · GW(p)

Interesting finds, thanks!

Similarly, people sometimes claim that we should discount our own intuitions of extreme historic importance because people often feel that way, but have so far (at least almost) always been wrong. And I’m a bit skeptical of the premise of this particular induction. On my cursory understanding of history, it’s likely that for most of history people saw themselves as part of a stagnant or cyclical process which no one could really change, and were right. But I don’t have any quotes on this, let alone stats. I’d love to know what proportion of people before ~1500 thought of themselves as living at a special time.

Replies from: CarlShulman
comment by CarlShulman · 2019-09-13T17:27:41.191Z · EA(p) · GW(p)

My read is that Millenarian religious cults have often existed in nontrivial numbers, but as you say the idea of systematic, let alone accelerating, progress (as opposed to past golden ages or stagnation) is new and coincided with actual sustained noticeable progress. The Wikipedia page for Millenarianism lists ~all religious cults, plus belief in an AI intelligence explosion.

So the argument seems, to first order, to reduce to the question of whether credence in an AI growth boom (to rates much faster than those of the Industrial Revolution) is caused by the same factors as religious cults rather than by secular scholarly opinion, and to the historical share of the population holding those Millenarian sentiments (and their power). But if one takes a narrower scope (not exceptionally important transformations of the world as a whole, but more local phenomena like the collapse of empires or how long new dynasties would last), one frequently sees smaller distortions of relative importance driven by propaganda (not that it was necessarily believed by outside observers).

comment by William_MacAskill · 2019-09-13T01:05:44.562Z · EA(p) · GW(p)

Thanks for these links. I’m not sure if your comment was meant to be a criticism of the argument, though? If so: I’m saying “prior is low, and there is a healthy false positive rate, so don’t have high posterior.” You’re pointing out that there’s a healthy false negative rate too — but that won’t cause me to have a high posterior?

And, if you think that every generation is increasing in influentialness, that’s a good argument for thinking that future generations will be more influential and we should therefore save.

comment by Paul_Christiano · 2019-09-15T22:46:33.132Z · EA(p) · GW(p)

I think the outside view argument for acceleration deserves more weight. Namely:

• Many measures of "output" track each other reasonably closely: how much energy we can harness, how many people we can feed, GDP in modern times, etc.
• Output has grown 7-8 orders of magnitude over human history.
• The rate of growth has itself accelerated by 3-4 orders of magnitude. (And even early human populations would have seemed to grow very fast to an observer watching the prior billion years of life.)
• It's pretty likely that growth will accelerate by another order of magnitude at some point, given that it's happened 3-4 times before and faster growth seems possible.
• If growth accelerated by another order of magnitude, a hundred years would be enough time for 9 orders of magnitude of growth (more than has occurred in all of human history).
• Periods of time with more growth seem to have more economic or technological milestones, even if they are less calendar time.
• Heuristics like "the next X years are very short relative to history, so probably not much will happen" seem to have a very bad historical track record when X is enough time for lots of growth to occur, and so it seems like a mistake to call them the "outside view."
• If we go a century without a doubling of growth rates, it will be (by far) the most that output has ever grown without significant acceleration.
• Data is noisy and data modeling is hard, but it is difficult to construct a model of historical growth that doesn't have a significant probability of massive growth within a century.
• I think the models that are most conservative about future growth are those where stable growth is punctuated by rapid acceleration during "revolutions" (with the agricultural acceleration around 10,000 years ago and the industrial revolution causing continuous acceleration from 1600-1900).
• On that model human history has had two revolutions, with about two orders of magnitude of growth between them, each of which led to >10x speedup of growth. It seems like we should have a significant probability (certainly >10%) of another revolution occurring within the next order of magnitude of growth, i.e. within the next century.
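As a rough numeric sketch of the acceleration arithmetic in the bullets above (the 2% baseline growth figure is an illustrative assumption, not a number from this comment):

```python
import math

# How many orders of magnitude of growth fit into a century,
# before and after a 10x acceleration in the growth rate?
baseline_rate = 0.02               # assumed current output growth, ~2%/yr
accelerated_rate = baseline_rate * 10

def orders_of_magnitude(rate, years):
    """Log10 of total output growth under constant annual compounding."""
    return years * math.log10(1 + rate)

print(orders_of_magnitude(baseline_rate, 100))     # ~0.86 OOM
print(orders_of_magnitude(accelerated_rate, 100))  # ~7.9 OOM
```

Under these assumptions a single accelerated century yields roughly 8 orders of magnitude of growth, i.e. on the order of all growth in human history to date; the exact figure depends on the baseline rate and compounding convention chosen.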
comment by Stefan_Schubert · 2019-09-12T23:43:36.951Z · EA(p) · GW(p)

Meta-comment: the level of discussion here has been fantastic. It's nice that these complex issues are discussed in this format; publicly and relatively informally (though other formats obviously have their advantages too). Thanks to all contributors.

Replies from: SiebeRozendal
comment by SiebeRozendal · 2019-09-15T11:40:53.189Z · EA(p) · GW(p)

Exactly! It reminds me a lot of the Polymath Project in which maths problems were solved collaboratively. I really wish EA made more use of this - I think Will's recent choice to post his ideas to the Forum is turning out to be an excellent choice.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2019-09-15T11:54:24.947Z · EA(p) · GW(p)

Cf. this LessWrong-post on the Parliamentary Model for moral uncertainty which explicitly mentions the Polymath Project.

comment by Robert_Wiblin · 2019-09-13T18:15:12.833Z · EA(p) · GW(p)

Great discussion here, top quality comments. To make one aspect of this a bit clearer I made this figure of different 'hingeiness' trajectories and their implications:

Will adds: "In this post I’m just saying it’s unlikely we’re at A2, rather than at some other point in that curve, or on a different curve, and the evidence we have doesn’t give us strong enough evidence to think we’re at A2.

But then yeah it’s a really good point that even if one thinks hinginess is increasing locally, and feels confident about that, it doesn’t mean we’re atop the last peak.

A related point from the graphs: even if hinginess is locally decreasing faster than the real rate of interest, that’s still not sufficient for spending, if there will be some future time when hinginess starts increasing or staying the same or slowing to less than the real rate of interest (as long as you can save for that long)."

Replies from: SiebeRozendal
comment by SiebeRozendal · 2019-09-15T11:52:31.061Z · EA(p) · GW(p)

Upvote for using graphics to elucidate discussion on the Forum. Haven't seen it often and it's very helpful!

comment by Pablo (Pablo_Stafforini) · 2019-09-03T10:06:56.469Z · EA(p) · GW(p)

As a side note, Derek Parfit was an early advocate of what you call the 'Hinge of History Hypothesis'. He even uses the expression 'hinge of history' in the following quote (perhaps that's the inspiration for your label):

We live during the hinge of history. Given the scientific and technological discoveries of the last two centuries, the world has never changed as fast. We shall soon have even greater powers to transform, not only our surroundings, but ourselves and our successors. If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period. Our descendants could, if necessary, go elsewhere, spreading through this galaxy. (On What Matters, vol. 2, Oxford, 2011, p. 616)

Interestingly, he had expressed similar views already in 1984, though back then he didn't articulate why he believed that the present time is uniquely important:

the part of our moral theory... that covers how we affect future generations... is the most important part of our moral theory, since the next few centuries will be the most important in human history. (Reasons and Persons, Oxford, 1984, p. 351)
comment by William_MacAskill · 2019-09-04T04:55:57.309Z · EA(p) · GW(p)

Thanks, Pablo! Yeah, the reference was deliberate — I’m actually aiming to turn a revised version of this post into a book chapter in a Festschrift for Parfit. But I should have given the great man his due! And I didn’t know he’d made the ‘most important centuries’ claim in Reasons and Persons, that’s very helpful!

comment by Toby_Ord · 2019-09-06T14:55:54.474Z · EA(p) · GW(p)

Thanks Pablo, I also didn't know he had claimed this at the very time he was introducing population ethics and extinction risk.

comment by Pablo (Pablo_Stafforini) · 2019-09-03T13:05:57.370Z · EA(p) · GW(p)
The most obvious implication, however, is regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building.

In his excellent Charity Cost Effectiveness in an Uncertain World, first published in 2013, Brian Tomasik calls this approach 'Punting to the Future'. Unless there are strong reasons for introducing a new label, I suggest sticking to Brian's original name, both to avoid unnecessary terminological profusion and to credit those who pioneered discussion of this idea.

comment by lukeprog · 2019-09-04T02:48:39.940Z · EA(p) · GW(p)

Great post!

Even just a few decades ago, a longtermist altruist would not have thought of risk from AI or synthetic biology, and wouldn’t have known that they could have taken action on them.

Minor point, but I think this is unclear. On AI see e.g. here [LW · GW]. On synbio I'm less familiar but I'm guessing someone more than a few decades ago was able to think thoughts like "Once we understand cell biology really well, seems like we might be able to engineer pathogens much more destructive than those served up by nature."

comment by Howie_Lempel (HowieL) · 2019-09-05T17:39:47.489Z · EA(p) · GW(p)
On synbio I'm less familiar but I'm guessing someone more than a few decades ago was able to think thoughts like "Once we understand cell biology really well, seems like we might be able to engineer pathogens much more destructive than those served up by nature."

+1. I don't know the intellectual history well but the risk from engineered pathogens should have been apparent 4 decades ago in 1975 if not (more likely, IMO) earlier.

A fairly random sample of writing on the topic:

• Jack London's 1910 short story "An Unparalleled Invasion" [CW: really racist] imagines genocide through biological warfare and the possibility that a "hybridization" between pathogens created "a new and frightfully virulent germ" (I don't think he's suggesting the hybridization was intentional but it's a bit ambiguous).
• the possibility of engineering pathogens was seriously discussed 4 decades ago at the Asilomar Conference in 1975.
• There's a 1982 sci-fi book by a famous writer where a vengeful molecular biologist releases a pathogen engineered to be GCR-or-worse.
• In 1986, a U.S. Defense Department official was quoted as saying "The technology that now makes possible so-called 'designer drugs' also makes possible designer BW."
• In 2000 (admittedly just 2 decades ago) ~x-risk from engineered pathogens was explicitly worried about in "Why the Future Doesn't Need Us."
Replies from: CarlShulman, CarlShulman
comment by CarlShulman · 2019-09-05T18:06:33.197Z · EA(p) · GW(p)

Szilard anticipated nuclear weapons (and launched a large and effective strategy to cause the liberal democracies to get them ahead of totalitarian states, although with regret), and was also concerned about germ warfare (along with many of the anti-nuclear scientists). See this 1949 story he wrote. Szilard seems very much like an agenty sophisticated anti-xrisk actor.

comment by CarlShulman · 2020-03-09T22:12:09.603Z · EA(p) · GW(p)

Plus the Soviet bioweapons program was actively at work to engineer pathogens for enhanced destructiveness during the 70s and 80s using new biotechnology (and had been using progressively more advanced methods through the 20th century).

comment by William_MacAskill · 2019-09-04T04:56:51.773Z · EA(p) · GW(p)

Huh, thanks for the great link! I hadn’t seen that before, and had been under the impression that though some people (e.g. Good, Turing) had suggested the intelligence explosion, no-one really worried about the risks. Looks like I was just wrong about that.

comment by Max_Daniel · 2019-09-04T11:22:15.070Z · EA(p) · GW(p)

Just a quick thought: I wonder whether the hingiest times were during periods of potential human population bottlenecks. E.g., Wikipedia says:

A 2005 study from Rutgers University theorized that the pre-1492 native populations of the Americas are the descendants of only 70 individuals who crossed the land bridge between Asia and North America.
[...]
In 2000, a Molecular Biology and Evolution paper suggested a transplanting model or a 'long bottleneck' to account for the limited genetic variation, rather than a catastrophic environmental change. This would be consistent with suggestions that in sub-Saharan Africa numbers could have dropped at times as low as 2,000, for perhaps as long as 100,000 years, before numbers began to expand again in the Late Stone Age.

(Note that the Wikipedia article doesn't seem super well done, and also that it appears there has been significant scholarly controversy around population bottleneck claims. I don't want to claim that there in fact were population bottlenecks; I'm just curious what the implications in terms of hinginess would be if there were.)

As a first pass, it seems plausible to me that e.g. the action of any one of those 70 humans could have made the difference between this group surviving or not, with potentially momentous consequences. (What if the Vikings, or even later European colonialists, had found a continent without a human population?) Similarly, compared to any human today, if at some point the global human population really was just 2,000, then as a first pass - just based on a crude prior determined by the total population - it seems that one of these 2,000 people could have been enormously influential. Depending on how concentrated the population was and how much of a "close call" it was that modern humans didn't go extinct, it might even be the case that some of these people's actions had - without them realizing it - significant impacts on the probability of human survival (say, shifting the probability by more than 0.1%).

Some unstructured closing thoughts:

• Scholars often use "history" in a narrow sense to refer to the period of time for which we have written descriptions. My impression is this would exclude periods of population bottlenecks - they'd all be in "prehistory." It's not clear to me if you intended to exclude prehistory based on the title of your post.
• Even if there were drastic population bottlenecks and these were in fact the hingiest times, it's not clear what would follow from this. E.g., it might be defensible to claim that prehistory is outside the relevant reference class.
• During a human population bottleneck, the distinction between "direct work" and "investing" on which your definitions rest might cease to make sense. Quite possibly, the best thing one of the 70 people in North America could have done is helping to hunt a bison or some other garden-variety action that helps the group survive - this seems good from the point of view of "longtermist altruism"/direct work, "investment", and selfish self-interest. This is a drastic example, but the direct work vs. investing distinction might also be quite blurry in less drastic times.
• The potential example of 70 people settling North America also makes me wonder about the distribution of influence across people for any given period of time. Your definition currently talks about "a longtermist altruist living at t_i" - but if different longtermist altruists would have vastly different amounts of influence at time t_i, it becomes unclear how to understand this definition. Do I randomly draw a member of the human population at that time according to a uniform distribution, and then imagine they are a longtermist altruist? Do we refer to the person with the median influence? The maximum influence? Etc. (A more contemporary example: If I'm someone who could launch a nuclear weapon, then presumably I have a lot more influence than a poor peasant in the Chinese or Indian countryside. The latter observation points to a potential problem with spelling out your definition in terms of the median member of the world population: Today's "longtermist altruists" are very unusual people relative to the world population; it's not clear how much influence a rural farmer in China, India, or Bangladesh has today even if, say, the Bostrom/Yudkowsky story about AI is correct.)
Replies from: Max_Daniel, Pablo_Stafforini, Tobias_Baumann
comment by Max_Daniel · 2019-09-04T15:26:06.989Z · EA(p) · GW(p)

On a second thought, maybe what we should do is: take some person at t_i (bracketing for a moment whether we draw someone uniformly at random, or take the one with most influence, or whatever) and then look at the difference between their actual actions (or the actions we'd expect them to take in the possible world we're considering, if the values of the person are also determined by our sampling procedure) and the actions they'd take if we "intervene" to assume this person in fact was a longtermist altruist.

This definition would suggest that hinginess in the periods I mentioned wasn't that high: It's true that one of 70 people helping to hunt a bison made a big difference when compared to doing nothing; however, probably there is approximately zero difference between what that person actually did and what they would have done if they had been a longtermist altruist: they'd have helped hunt a bison in both cases.

comment by Pablo (Pablo_Stafforini) · 2019-09-26T19:10:59.873Z · EA(p) · GW(p)

I just realized that there are actually two separate reasons for thinking that the hingiest times in history were periods of population bottlenecks. First, because tiny populations are much more vulnerable to extinction than much larger populations are. Second, because in smaller populations an individual person has a larger share of influence than they do in larger populations, holding total influence constant.

Compare population bottlenecks to one of Will's examples:

It could be the case [...] that the 20th century was a bigger deal than the 17th century, but that, because there were 1/5th as many people alive during the 17th century, a longtermist altruist could have had more direct impact in the 17th century than in the 20th century.

Unlike the 17th century, which is hingier only because comparatively fewer people exist, periods of population bottlenecks are hingier both because of their unusually low population and because they are "a bigger deal" than other periods.

comment by Tobias_Baumann · 2019-09-04T14:50:07.082Z · EA(p) · GW(p)

Do you think that this effect only happens in very small populations settling new territory, or is it generally the case that a smaller population means more hinginess? If the latter, then that suggests that, all else equal, the present is hingier than the future (though the past is even hingier), if we assume that future populations are bigger (possibly by a large factor). While the current population is not small in absolute terms, it could plausibly be considered a population bottleneck relative to a future cosmic civilisation (if space colonisation becomes feasible).

Replies from: Max_Daniel
comment by Max_Daniel · 2019-09-04T15:20:09.374Z · EA(p) · GW(p)

I think as a super rough first pass it makes sense to think that, all else equal, smaller populations mean more hinginess.

I feel uncertain to what extent this is just because we should then expect any single person to own a greater share of total resources at some point in time. One extreme assumption would be that the relative distribution of resources at any given point in time is the prior for everyone's influence over the long-run future, perhaps weighted by how much they care about the long run. On that extreme assumption, this would probably mean that the maximum influence over all agents is higher today because global inequality is presumably higher than during population bottlenecks or in fact any past period. However, I think that assumption is too extreme: it's not the case that every generation can propagate their values indefinitely, with the share of their influence staying constant; for example, it might be that certain developments are determined by environmental conditions or other factors that are independent from any human's values. This turns on quite controversial questions around environmental/technological determinism that probably have a nuanced rather than simple answer.

comment by Kit · 2019-09-04T18:46:18.174Z · EA(p) · GW(p)

This was very thought-provoking. I expect I'll come back to it a number of times.

I suspect that how the model works depends a lot on exactly how this definition is interpreted:

a time t_i is more influential (from a longtermist perspective) than a time t_j iff you would prefer to give an additional unit of resources,[1] that has to be spent doing direct work (rather than investment), to a longtermist altruist living at t_i rather than to a longtermist altruist living at t_j.

In particular, I think you intend direct work to include extinction risk reduction, and to be opposite to strategies which punt decisions to future generations. However, extinction risk reduction seems like the mother of all punting strategies, so it seems naturally categorised as not direct work for the purpose of considering whether to punt. Due to this, I expect some weirdness around the categorisation, and would guess that a precise definition would be productive.

(Added formatting and bold to the quote for clarity.)

Replies from: JanBrauner, Stefan_Schubert
comment by JanBrauner · 2019-09-06T21:13:48.559Z · EA(p) · GW(p)

How I see it:

Extinction risk reduction (and other types of "direct work") affects all future generations similarly. If the most influential century is still to come, extinction risk reduction also affects the people alive during that century (by making sure they exist). Thus, extinction risk reduction has a "punting to future generations that live in hingey times" component. However, extinction risk reduction also affects all the unhingey future generations directly, and the effects are not primarily mediated through the people alive in the most influential centuries.

(Then, by definition, if ours is not a very hingey time, direct work is not a very promising strategy for punting. The effect on people alive during the "most influential times" has to be small by definition. If direct work did strongly enable the people living in the most influential century (e.g. by strongly increasing the chance that they come into existence), it would also enable many other generations a lot. This would imply that the present was quite hingey after all, in contradiction to the assumption that the present is unhingey.)

Punting strategies, in contrast, affect future generations primarily via their effect on the people alive in the most influential centuries.

Replies from: Kit
comment by Kit · 2019-09-22T17:21:36.669Z · EA(p) · GW(p)
Punting strategies, in contrast, affect future generations primarily via their effect on the people alive in the most influential centuries.

That seems like a sufficiently precise definition. Whether there are any interventions in that category seems like an open question. (Maybe it is a lot more narrow than Will's intention.)

comment by Stefan_Schubert · 2019-09-05T11:35:36.454Z · EA(p) · GW(p)

I agree that it seems important to get more clarity over the direct work vs buck-passing/punting distinction.

extinction risk reduction seems like the mother of all punting strategies

Building capacity for future extinction risk reduction work may be seen as more "meta"/"buck-passing"/"punting" still.

There has been an interesting discussion on direct vs meta-level work to reduce existential risk; see Toby Ord and Owen Cotton-Barratt.

Replies from: Kit
comment by Kit · 2019-09-05T13:12:46.569Z · EA(p) · GW(p)

Thanks! I hadn't seen the Cotton-Barratt piece before.

Extinction risk reduction punts on the question of which future problems are most important to solve, but not how best to tackle the problem of extinction risk specifically. Building capacity for future extinction risk reduction work punts on how best to tackle the problem of extinction risk specifically, but not the question of which future problems are most important to solve. They seem to do more/less punting than one another along different dimensions, so, depending on one's definition of direct vs punting, each could be more of a punt than the other. I'm not clear on whether this means we should pick a dimension to talk about, or whether there is no meaningful single spectrum of directness vs punting.

comment by richard_ngo · 2019-09-05T21:54:20.914Z · EA(p) · GW(p)

Nice post :) A couple of comments:

even if we’re at some enormously influential time right now, if there’s some future time that is even more influential, then the most obvious EA activity would be to invest resources (whether via financial investment or some sort of values-spreading) in order that our resources can be used at that future, more high-impact, time. Perhaps there’s some reason why that plan doesn’t make sense; but, currently, almost no-one is even taking that possibility seriously.

To me it seems that the biggest constraint on being able to invest in future centuries is the continuous existence of a trustworthy movement from now until then. I imagine that a lot of meta work implicitly contributes towards this; so the idea that the HoH is far in the future is an argument for more meta work (and more meta work targeted towards EA longevity in particular). But my prior on a given movement remaining trustworthy over long time periods is quite low, and becomes lower the more money it is entrusted with.

But there are future scenarios that we can imagine now that would seem very influential:

To the ones you listed, I would add:

• The time period during which we reach technological completion, since from then on the stochasticity from the rate of technological advancement becomes a much less important factor.
• As you mentioned previously, the time period during which we develop comprehensive techniques for engineering the motivations and values of the subsequent generation - if it actually happens to not be very close to us. (E.g. it might require a much more developed understanding of sociology than we currently have to carry out in practice).
comment by WilliamKiely · 2019-09-04T11:56:06.769Z · EA(p) · GW(p)
1. It’s a priori extremely unlikely that we’re at the hinge of history
Claim 1

I want to push back on the idea of setting the "ur-prior" at 1 in 100,000, which seems far too low to me. I also will critique the method that arrived at that number, and propose a method of determining the prior that seems superior to me.

(One note before that: I'm going to ignore the possibility that the hingiest century could be in the past and assume that we are just interested in the question of how probable it is that the current century is hingier than any future century.)

First, to argue that 1 in 100,000 is too low: The hingiest century of the future must occur before civilization goes extinct. Therefore, one's prior that the current century is the hingiest century of the future must be at least as high as one's credence that civilization will go extinct in the current century. I think this is already (significantly) greater than 1 in 100,000.

I'll come back to this idea when I propose my method of determining the prior, but first to critique yours:

The method you used to come up with the 1 in 100,000 prior that our current century is hingier than any future century was to estimate the expected number of centuries that civilization will survive (1,000,000) and then to try to "[restrict] ourselves to a uniform prior over the first 10%" of that expected number of centuries because "the number of future people is decreasing every century."

(Note that while I think the adjustment from 10^-6 to 10^-5 is an adjustment for a good reason in the right direction, I think it can be left out of the prior: You can update on the fact that "the number of future people is decreasing every century" (and other things) later after determining the prior.)

Now to critique the method Will used of arriving at the 1 in 1,000,000 prior. It basically starts with an implicit probability distribution for when civilization is going to go extinct (good), but then compresses that into an average expected number of centuries that civilization is going to survive and (mistakenly) essentially assumes that civilization is going to last precisely that long. It then computes one over the average expected number of centuries to get the base rate that a given century is the hingiest (determining a base rate is good, but this isn't the right way).

I propose that a better method is that one should start with the same implicit probability distribution for the expected lifespan of civilization, except make it explicit, and do the same base rate calculation but for each discrete possible length of civilization (1 century, 2 centuries, etc) instead of compressing the probability distribution for the expected lifespan of civilization into an average expected number of centuries.

That is, I'd argue that one's prior that the current century is the hingiest century of the future should be equal to one's credence that civilization will go extinct in the current century plus 1/2 times one's credence that civilization will go extinct in the second century (since there will then be two possible centuries and we are calculating a base rate), plus 1/3 times one's credence that civilization will go extinct in the third century (this is the third base rate we are summing), etc.

From my "1000 Century Model", assuming a 1% risk of extinction per century for 1000 centuries, the prior that the first century is the hingiest is ~4.65%.

From my "90% Likely to Survive 999 Centuries Model", assuming a 10% chance of extinction in the first century, a 0% chance of extinction per century thereafter until the 1000th century, and a 100% chance of extinction in the 1000th century, my method gives a prior of ~10.09% that the first century is the hingiest. On the other hand, since the expected number of centuries is ~900, MacAskill's method gives an initial prior of ~0.111% and a prior of ~1.111% after "[restricting] ourselves to a uniform prior over the first 10% [of expected centuries]". Both priors calculated using MacAskill's method are below the 10% rate of extinction in the first century, which (I claim again) obviously means they are too low.
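The two toy models above can be reproduced in a few lines. This is a sketch of my understanding of the proposed method (the function name and the choice to lump residual survival mass into century 1000 are my own assumptions): a world that goes extinct in century n has n equally likely candidate hingiest centuries, so it contributes a 1/n base rate to the prior that century 1 is the hingiest.

```python
def hingiest_first_century_prior(extinction_dist):
    """extinction_dist maps n (century of extinction) -> probability of that n.

    A world lasting n centuries contributes a flat 1/n base rate that
    century 1 is the hingiest.
    """
    return sum(p / n for n, p in extinction_dist.items())

# "1000 Century Model": 1% extinction risk per century for 1000 centuries,
# with the residual survival probability lumped into century 1000.
risk = 0.01
dist = {n: (1 - risk) ** (n - 1) * risk for n in range(1, 1000)}
dist[1000] = 1 - sum(dist.values())
print(round(hingiest_first_century_prior(dist), 4))  # 0.0465, i.e. ~4.65%

# "90% Likely to Survive 999 Centuries Model": 10% extinction in century 1,
# otherwise guaranteed extinction in century 1000.
print(round(hingiest_first_century_prior({1: 0.10, 1000: 0.90}), 4))  # 0.1009
```

By contrast, MacAskill's method compresses the second distribution into its mean (~900 centuries) and takes 1/900 ≈ 0.111%, which falls below the 10% first-century extinction risk.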

Replies from: Kit, WilliamKiely, MichaelA
comment by Kit · 2019-09-04T17:56:20.386Z · EA(p) · GW(p)

Using a distribution over possible futures seems important. The specific method you propose seems useful for getting a better picture of max_i P(century i is the most leveraged). However, what we want in order to make decisions is something more akin to max_i E[leverage of century i]. The most obvious difference is that scenarios in which the future is short and there is little one can do about it score highly on expected ranking and low on expected value. I am unclear on whether a flat prior makes sense for expectancy, but it seems more reasonable than for probability.

Of course, even max_i E[leverage of century i] does not accurately reflect what we are looking for. Similarly to Gregory_Lewis' comment, the decision-relevant thing (if 'punting to the future' is possible at all) is closer still to max_i E[what we will assess the leverage of century i to be at the time], i.e. whether we will have higher expected leverage in some future century according to our beliefs at that time. Thinking this through, I also find it plausible that even this does not make sense when using the definitions in the post, and will make a related top-level comment.

Replies from: Habryka
comment by Habryka · 2019-09-05T04:41:43.967Z · EA(p) · GW(p)

While I agree with you that this quantity is not that action-relevant, it is what Will is analyzing in the post, and I think that William Kiely's suggested prior seems basically reasonable for answering that question. As Will said explicitly in another comment:

Agree that it might well be that even though one has a very low credence in HoH, one should still act in the same way. (e.g. because if one is not at HoH, one is a sim, and your actions don’t have much impact).

I do think that the focus on this question is the part of the post that I am least satisfied by, and that makes it hardest to engage with it, since I don't really know why we care about the question of "are we in the most influential time in history?". What we actually care about is the effectiveness of our interventions to give resources to the future, and the marginal effectiveness of those resources in the future, both of which are quite far removed from that question (because of the difficulties of sending resources to the future, and the fact that the answer to that question makes overall only a small difference for the total magnitude of the impact of any individual's actions).

Replies from: Kit
comment by Kit · 2019-09-05T10:08:17.684Z · EA(p) · GW(p)

I agree that, among other things, discussion of mechanisms for sending resources to the future would be needed to make such a decision. I figured that all these other considerations were deliberately excluded from this post to keep its scope manageable.

However, I do think that one can interpret the post as making claims about a more insightful kind of probability: the odds with which the current century is the one which will have the highest leverage-evaluated-at-the-time (in contrast to an omniscient view / end-of-time evaluation, which is what this thread mostly focuses on). I think that William_MacAskill's main arguments are broadly compatible with both of these concepts, so one could get more out of the piece by interpreting it as about the more useful concept.

Formally, one could see the thing being analysed as

P( E[leverage of century 1 | K_1] = max_i E[leverage of century i | K_i] ),

where K_i is the knowledge available at the beginning of century i. If we and all future generations may freely move resources across time, and some things that are maybe omitted from the leverage definition are held constant, this expression tells us with what odds we are correct to do 'direct work' today as opposed to transfer resources one century forward. (Confusion about what 'direct work' means noted here [EA(p) · GW(p)].)

However, you seem to be right that as soon as you don't hold other very important factors (such as how well one can send resources to the future) constant, those additional terms go inside the maximisation evaluation, and hence the above expression still isn't that useful. (In particular, it can't just be multiplied by an independent factor to get to a useable expression.)

(Also, I feel like I'm mathing from the hip here, so quite possibly I've got this quite wrong.)

comment by WilliamKiely · 2019-09-04T18:07:48.279Z · EA(p) · GW(p)

Another reason to think that MacAskill's method of determining the prior is flawed that I forgot to write down:

If one uses the same approach to come up with a prior that the second, third, fourth, ..., Xth century is the hingiest century of the future, and then adds these priors together, one ought to get 100%. This is true because exactly one of the set of all future centuries must be the hingiest century of the future. Yet with MacAskill's method of determining the priors, when one sums all the individual priors that the hingiest century is century X, one gets a number far greater than 100%. That is, MacAskill's estimate is that there are 1 million expected centuries ahead, so he uses a prior of 1 in 1 million that the first century is the hingiest (before the arbitrary 10x adjustment). However, his model assumes that it's possible that civilization could last as long as 10 billion centuries (1 trillion years). So what is his prior that e.g. the 2 billionth century is the hingiest? 1 in 1 million also? Surely this isn't reasonable, for if one uses a prior of 1 in 1 million for all 10 billion possible centuries, then one's priors that each of those centuries is the hingiest sum to 10,000 (i.e. 1,000,000%), when by definition they ought to sum to exactly 1 (100%).

My method of determining the prior doesn't have this problem. On the contrary, as Column J of my linked spreadsheet from the previous comment shows, the prior probability that the Hingiest Century is somewhere in the Century 1-1000 range (which I calculate by summing the individual priors for those thousand centuries) approaches 100% as the probability that civilization goes extinct in those first 1000 centuries approaches 100%.
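This consistency check can be verified numerically. A sketch, reusing the 1%-per-century toy model from my earlier comment (residual survival mass lumped into century 1000, an assumption of mine): century k can be the hingiest only in worlds surviving at least k centuries, each of which contributes its flat 1/n base rate.

```python
def hingiest_prior(k, extinction_dist):
    # Century k can be the hingiest only in worlds lasting at least k
    # centuries; each such n-century world contributes a 1/n base rate.
    return sum(p / n for n, p in extinction_dist.items() if n >= k)

risk = 0.01
dist = {n: (1 - risk) ** (n - 1) * risk for n in range(1, 1000)}
dist[1000] = 1 - sum(dist.values())  # residual survival mass

total = sum(hingiest_prior(k, dist) for k in range(1, 1001))
print(round(total, 6))  # 1.0 -- the priors over all centuries sum to 100%
```

Algebraically this is guaranteed: summing over k gives each n-century world a total contribution of n × (1/n) = 1, weighted by its probability, whereas a flat 1/E[n] prior over every possible century has no such normalisation.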

comment by William_MacAskill · 2019-09-13T01:07:29.453Z · EA(p) · GW(p)

Thanks, William!

Yeah, I think I messed up this bit. I should have used the harmonic mean rather than the arithmetic mean when averaging over possibilities of how many people will be in the future. Doing this brings the chance of being among the most influential people ever close to the chance of being the most influential person ever in a small-population universe. But then we get the issue that being the most influential person ever in a small-population universe is much less important than being the most influential person in a big-population universe. And it’s only the latter that we care about.

So what I really should have said (in my too-glib argument) is: for simplicity, just assume a high-population future, which are the action-relevant futures if you're a longtermist. Then take a uniform prior over all times (or all people) in that high-population future. So my claim is: “In the action-relevant worlds, the frequency of ‘most important time’ (or ‘most important person’) is extremely low, and so should be our prior.”

Replies from: WilliamKiely
comment by WilliamKiely · 2019-09-13T06:03:57.416Z · EA(p) · GW(p)

Thanks for the reply, Will. I go by Will too by the way.

for simplicity, just assume a high-population future, which are the action-relevant futures if you're a longtermist

This assumption seems dubious to me because it seems to ignore the nontrivial possibility that there is something like a Great Filter in our future that requires direct-work to overcome (or could benefit from direct-work).

That is, maybe if we get one challenge in our near-term future right (e.g. handing off the future to benevolent AGI), then it will be more or less inevitable that life will flourish for billions of years, and if we fail to overcome that challenge then we will go extinct fairly soon. As long as you put a nontrivial probability on such a challenge existing in the short-term future and being tractable, then even longtermist altruists in small-population worlds (possibly ours) who try punting to the future / passing the buck instead of doing direct work, and thus fail to make it past the Great-Filter-like challenge, can (I claim, contrary to you by my understanding) be said to be living in an action-relevant world despite living in a small-population universe. This is because they had the power (even though they didn't exercise it) to make the future a big-population universe.

comment by MichaelA · 2019-10-12T12:37:30.507Z · EA(p) · GW(p)
From my "1000 Century Model", assuming a 1% per century risk of extinction every year for 1000 years

Did you mean to say "assuming a 1% risk of extinction per century for 1000 centuries"? That seems to better fit the rest of what you said, and what's in your model, as best I can tell.

Replies from: WilliamKiely
comment by WilliamKiely · 2019-10-13T23:36:01.380Z · EA(p) · GW(p)

Did you mean to say "assuming a 1% risk of extinction per century for 1000 centuries"?

Yes, thank you for the correction!

comment by William_MacAskill · 2019-09-13T01:02:40.773Z · EA(p) · GW(p)

There were a couple of recurring questions, so I’ve addressed them here.

What’s the point of this discussion — isn’t passing on resources to the future too hard to be worth considering? Won’t the money be stolen, or used by people with worse values?

In brief: Yes, losing what you’ve invested is a risk, but (at least for relatively small donors) it’s outweighed by investment returns.

Longer: The concept of ‘influentialness of a time’ is the same as the cost-effectiveness (from a longtermist perspective) of the best opportunities accessible to longtermists at a time.  Suppose I think that the best opportunities in, say, 100 years, are as good as the best opportunities now. Then, if I have a small amount of money, I can get (say) at least a 2% return per year on those funds. But I shouldn’t think that the chance of my funds being appropriated (or otherwise lost) is as high as 2% per year. So the expected amount of good I do is greater by saving.

So if you think that hingeyness (as I’ve defined it) is about the same in 100 years as it is now, or greater, then there’s a strong case for investing for 100 years before spending the money.
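The back-of-the-envelope above can be sketched as follows (the 1% annual loss risk is an illustrative assumption, chosen only to be below the 2% return; the variable names are mine):

```python
years = 100
annual_return = 0.02     # assumed achievable return for a small donor
annual_loss_risk = 0.01  # assumed yearly chance funds are appropriated or lost

# Expected multiplier on good done by investing for a century vs spending now,
# holding the cost-effectiveness of the best opportunities constant:
ev_multiplier = ((1 + annual_return) * (1 - annual_loss_risk)) ** years
print(round(ev_multiplier, 2))  # 2.65 -- saving wins while loss risk < return

# If the loss risk rises to match the 2% return, saving roughly breaks even:
print(round(((1 + 0.02) * (1 - 0.02)) ** years, 2))  # 0.96
```

The crossover is exactly the claim in the comment: investing beats spending now whenever the annual chance of losing the funds is below the annual return (and hingeyness is not declining).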

(Caveat that once we consider larger amounts of money, diminishing returns for expenditure becomes an issue, and chance of appropriation increases.)

What’s your view on anthropics? Isn’t that relevant here?

I’ve been trying to make claims that aren’t sensitive to tricky issues in anthropic reasoning. The claim that, if there are n people ordered by some relation F (like ‘more important than’), the prior probability that you are the most-F (‘most important’) person is 1/n doesn’t distinguish between anthropic principles, because I’ve already conditioned on the number of people in the world. So I think anthropic principles aren’t directly relevant for the argument I’ve made, though obviously they are relevant more generally.

Replies from: Kit, Wei_Dai
comment by Kit · 2019-09-13T09:10:32.942Z · EA(p) · GW(p)

I was very surprised to see that 'funds being appropriated (or otherwise lost)' is the main concern with attempting to move resources 100 years into the future. Before seeing this comment, I would have been confident that the primary difficulty is in building an institution which maintains acceptable values† for 100 years.

Some of the very limited data [EA · GW] we have on value drift within individual people suggests losses of 11% and 18% per year for two groups over 5 years. I think these numbers are a reasonable estimate for people who have held certain values for 1-6 years, with long-run drop-off for individuals being lower.

A more relevant but less precise outside view is my intuitions about how long charities which have clear founding values tend to stick to those values after their founders leave. I think of this as ballpark a decade on average, though hopefully we could do better if investing time and money in increasing this.

Perhaps yet more relevant and yet less precise is the history of institutions through the eras which have built themselves around some values which they thought of as non-negotiable (in the same way that we might see impartiality as non-negotiable). For example, religious institutions. My vague, non-historian impression is that, even considering institutions founded with concrete values at their core, very few still had those values†† 100 years later, if they existed in the same form at all.

The thing I'd find most convincing in outweighing these outside views is simply an outline for how EAs can get this institutional value drift thing close to zero. I can imagine such a plan seeming obvious to others, but it currently looks like a potentially intractable problem to me.†††

† Possible example [EA · GW] of acceptable values.

†† I'm excluding 'maximise profits' as a value!

††† This all becomes fairly simple upon the rise of any technology which would enable permanent lock-in. However, it seems that this would be a time to deploy a lot of resources immediately, so ways to move money into the future at that time seem less helpful. This seems like weak evidence for an unfortunate correlation between hingeyness and ability to move resources into the future.

comment by William_MacAskill · 2019-09-13T19:51:21.656Z · EA(p) · GW(p)

Sorry - 'or otherwise lost' qualifier was meant to be a catch-all for any way of the investment losing its value, including (bad) value-drift.

I think there's a decent case for (some) EAs doing better at avoiding this than e.g. typical foundations:

• If you have precise values (e.g. classical utilitarianism) then it's easier to transmit those values across time - you can write your values down clearly as part of the constitution of the foundation, and it's easier to find and identify younger people to take over the fund who also endorse those values. In contrast, for other foundations, the ultimate aims of the foundation are often not clear, and too dependent on a particular empirical situation (e.g. Benjamin Franklin's funds were to 'to provide loans for apprentices to start their businesses' (!!)).
• If you take a lot of time carefully choosing who your successors are (and those people take a lot of time over who their successors are).

Then to reduce appropriation, one could spread the funds across many different countries and different people who share your values. (Again, easier if you endorse a set of values that are legible and non-idiosyncratic.)

It might still be true that the chance of the fund becoming valueless gets large over time (if, e.g. there's a 1% risk of it losing its value per year), but the size of the resources available also increases exponentially over time in those worlds where it doesn't lose its value.

Caveat also tricky questions on when 'value drift' is a bad thing rather than the future fund owners just having a better understanding of the right thing to do than the founders did, which often seems to be true for long-lasting foundations.

Replies from: Kit
comment by Kit · 2019-09-14T22:36:10.535Z · EA(p) · GW(p)

Got it. Given the inclusion of (bad) value drift in 'appropriated (or otherwise lost)', my previous comment should just be interpreted as providing evidence to counter this claim:

But I shouldn’t think that the chance of my funds being appropriated (or otherwise lost) is as high as 2% per year.

[Recap of my previous comment] It seems that this quote predicts a lower rate than there has ever† been before. Such predictions can be correct! However, a plan for making the prediction come true is needed.

It seems that the plan should be different to what essentially all†† the people with higher rates of (bad) value drift did. These particular suggestions (succession planning and including an institution's objectives in its charter) seem qualitatively similar to significant minority practices in the past. (e.g. one of my outside views uses the reference class of 'charities with clear founding values'. For the 'institutions through the eras' one, religious groups with explicit creeds and explicit succession planning were prominent examples I had in mind.) The open question then seems to be whether EAs will tend to achieve sufficient improvement in such practices to bring (bad) value drift down by around an order of magnitude relative to what has been achieved historically. This seems unlikely to me, but not implausible. In particular, the idea that it is easier to design a constitution based on classical utilitarianism than for other goals people have had is very interesting.

Aside: investing heavily in these practices seems easier for larger donors. The quote seems very hard to defend for donors too small to attract a highly dedicated successor.

This discussion has made me think that insofar as one does punt to the future, making progress on how to reduce institutional value drift would be a very valuable project, even if I'm doubtful about how much progress is possible.

† It seems appropriate to exclude all groups coordinating for mutual self-interest, such as governments. (This is broader than my initial carving out of for-profits.)
†† However, it seems useful to think about a much wider set of mission-driven organisations than foundations because the sample of 100-year-old foundations is tiny.

Replies from: Max_Daniel
comment by Max_Daniel · 2019-09-15T13:27:14.048Z · EA(p) · GW(p)
It seems that this quote predicts a lower rate than there has ever† been before.

Just to make sure I understand - you're saying that, historically, the chance of funds (that were not intended just to advance mutual self-interest) being appropriated has always been higher than 2% per year?

If so, I'm curious what this is based on. - Do you have specific cases of appropriation in mind? Are you mostly appealing to charities with clear founding values and religious groups, both of which you mention later? [Asking because I feel like I don't have a good grasp on the probability we're trying to assess here.]

Replies from: Kit
comment by Kit · 2019-09-15T14:05:04.832Z · EA(p) · GW(p)

Not appropriated: lost to value drift. (Hence, yes, the historical cases I draw on are the same as in my comment 3 up in this thread.) I'm thinking of this quantity as something like the proportion of resources which will in expectation be dedicated 100 years later to the original mission as envisaged by the founders, annualised.

comment by Max_Daniel · 2019-09-13T11:18:43.264Z · EA(p) · GW(p)

I think you make good points, and overall I feel quite sympathetic to the view you expressed. Just one quick thought pushing a bit in the other direction:

†† I'm excluding 'maximise profits' as a value!

But perhaps this example is quite relevant? To put it crudely, perhaps we can get away with keeping the value "do the most good" stable. This seems more analogous to "maximize profits" than to any specification of value that refers to a specific content of "doing good" (e.g., food aid to country X, or "abolish factory farming", or "reduce existential risk").

More generally, the crucial point seems to be: the content and specifics of values might change, but some of this change might be something we endorse. And perhaps there's a positive correlation between the likelihood of a change in values and how likely we'd be to agree with it upon reflection. [Exploring this fully seems quite complex both in terms of metaethics and empirical considerations.]

Replies from: Kit
comment by Kit · 2019-09-13T18:12:26.542Z · EA(p) · GW(p)

Thanks. I agree that we might endorse some (or many) changes. Hidden away in my first footnote is a link to a pretty broad set of values. To expand: I would be excited to give (and have in the past given) resources to people smarter than me who are outcome-oriented, maximizing, cause-impartial and egalitarian, as defined by Will here [EA · GW], even (or especially) if they plan to use them differently to how I would. Similarly, keeping the value 'do the most good' stable maybe means something like keeping the outcome-oriented, maximizing, cause-impartial and egalitarian values stable.

For clarity, I excluded profit maximisation because incentives to pursue this goal seem powerful in a way that might never apply to effective altruism, however broadly it is construed. (The 'impartial' part seems especially hard to keep stable.) In particular, profit maximisation does not even need to be propagated: e.g. if a company does some random other stuff for a while, its stakeholders will still have a moderate incentive to maximise profits, so will typically return to doing this. A similar statement is that 'maximise profits' is the default state of things. No matter how broad our conception of 'do the most good' can be made, it seems likely to lack this property (except for lock-in scenarios).

comment by Wei_Dai · 2019-09-13T01:39:58.860Z · EA(p) · GW(p)

The concept of ‘influentialness of a time’ is the same as the cost-effectiveness (from a longtermist perspective) of the best opportunities accessible to longtermists at a time. [...] So if you think that hingeyness (as I’ve defined it) is about the same in 100 years as it is now, or greater, then there’s a strong case for investing for 100 years before spending the money.

Are you referring to average or marginal cost-effectiveness here? If "average", then this seems wrong. From the perspective of making a decision on whether to spend on longtermist causes now or later, what matters is the marginal cost-effectiveness of the best opportunities available now versus later. For example, it could well be the case that the next century is more influential than this century (has higher average cost-effectiveness) but because longtermism has gained a lot more ground in terms of popularity, all the highly cost-effective interventions are already done so the money I've invested will have to be spent on marginal interventions that are less cost-effective than the marginal opportunities available today.

If you're referring to marginal cost-effectiveness instead, then your conception of "influentialness of a time" seems really counterintuitive. For example suppose people in the next century manage to build a Singleton that locks in aligned values, thus largely preventing x-risks for all time, but because longtermism is extremely popular, there aren't any interventions with even medium cost-effectiveness left unfunded. It would be quite counterintuitive to say that century has low "influentialness".

In any case, if the ultimate motivation for this discussion here is to make the "spend now or later" decision, why not talk directly about "marginal cost-effectiveness"?

comment by CarlShulman · 2019-09-05T17:03:45.434Z · EA(p) · GW(p)
My view is that, in the aggregate, these outside-view arguments should substantially update one from one’s prior towards HoH, but not all the way to significant credence in HoH.
[3] Quantitatively: These considerations push me to put my posterior on HoH into something like the [0.1%, 1%] interval. But this credence interval feels very made-up and very unstable.

What credence do you give to 'this century is the most HoH-ish there will ever be henceforth'? That claim soaks up credence from trends towards diminishing influence over time, and our time is among the very first to benefit from longtermist altruists actually existing to get non-zero returns from longtermist strategies while facing plausible x-risks. The combination of those two factors seems to give a good shot at 'most HoH century', and substantially better than that at 'most HoH century remaining'.

comment by CarlShulman · 2020-11-01T21:00:59.699Z · EA(p) · GW(p)

Wouldn't your framework also imply a similarly overwhelming prior against saving? If long term saving works with exponential growth then we're again more important than virtually everyone who will ever live, by being in the first n billion people who had any options for such long term saving. The prior for 'most important century to invest' and 'most important century to donate/act directly' shouldn't be radically uncoupled.

comment by CarlShulman · 2019-09-05T16:40:29.562Z · EA(p) · GW(p)
Here are two distinct views:
Strong Longtermism := The primary determinant of the value of our actions is the effects of those actions on the very long-run future.
The Hinge of History Hypothesis (HoH) :=  We are living at the most influential time ever.
It seems that, in the effective altruism community as it currently stands, those who believe longtermism generally also assign significant credence to HoH; I’ll precisify ‘significant’ as >10% when ‘time’ is used to refer to a period of a century, but my impression is that many longtermists I know would assign >30% credence to this view.  It’s a pretty striking fact that these two views are so often held together — they are very different claims, and it’s not obvious why they should so often be jointly endorsed.

Two clear and common channels I have seen are:

• Longtermism leads to looking around for things that would have lasting impacts (e.g. Parfit and Singer attending to existential risk, and noticing that a large portion of all technological advances have been in the last few centuries, and a large portion of the remainder look likely to come in the next few centuries, including the technologies for much higher existential risk)
• People pay attention to the fact that the last few centuries have accounted for so much of all technological progress, and the likely gains to be had in the next few centuries (based on our knowledge of physical laws, existence proofs, from biology, and trend extrapolation), noticing things that can have incredibly long-lasting effects that dwarf short-run concerns
Replies from: MichaelA
comment by MichaelA · 2020-04-10T06:55:16.083Z · EA(p) · GW(p)

Interesting comment.

I think personally I had a sort of amplifying feedback loop between longtermism and assigning a "significant" credence to HoH (not actually sure what credence I assign to it, but it probably at least sometimes feels >10%). Something very roughly like the following:

1. I had a general inclination towards utilitarianism and a large moral circle, which got me into EA.

2. EA introduced me to arguments about longtermism and existential risks this century being high enough to be a global priority (which could perhaps be quite "low" by usual standards)

3. I started becoming convinced by those arguments, and thus learning more about them, and beginning to switch my focus to x-risk reduction.

4. Learning and thinking more about x-risks made the potential scale and quality of the future if we avoid them more salient, which made longtermism more emotionally resonant. This then feeds back into 2 and 3.

5. Learning and thinking more about x-risks and longtermism also exposed me to more arguments against concerns about x-risks, and meant I was positioned to respond to them not with "Ok, let's shift some probability mass towards the best thing to work on being global poverty and/or animal welfare" but instead "Ok, let's shift some probability mass towards the best thing to work on being longtermist efforts other than current work on x-risks." This led me to think more about various ways longtermism could be acted on, and thus more ways the future could be excellent or terrible, and thus more reasons why longtermism feels important.

I'm not saying this is an ideal reasoning process. Some of it arguably looks a little like motivated reasoning or entering something of an echo chamber. But I think that's roughly the process I went through.

comment by SoerenMind · 2019-09-03T14:49:36.840Z · EA(p) · GW(p)

Important post!

I like your simulation update against HoH. I was meaning to write a post about this. Brian Tomasik has a great paper that quantitatively models the ratio of our influence on the short vs long-term. Though you've linked it, I think it's worth highlighting it more.

How the Simulation Argument Dampens Future Fanaticism

The paper cleverly argues that the simulation argument combined with anthropics either strongly dampens the expected impact of far-future altruism or strongly increases the impact of short-term altruism. That conclusion seems fairly robust to the choice of decision- and anthropic theory and uncertainty over some empirical parameters. He doesn't directly discuss how the "seems like HoH" observation affects his conclusions, but I think it makes them stronger. (I recommend Brian's simplified calculations here [LW(p) · GW(p)].)

I assume this paper didn't get as much discussion as it deserves because Brian posted [LW · GW] it in the dark days of LW.

Replies from: SoerenMind
comment by SoerenMind · 2019-09-03T15:25:31.306Z · EA(p) · GW(p)

2.

For me, the HoH update is big enough to make the simulation hypothesis a pretty likely explanation. It also makes it less likely that there are alternative explanations for "HoH seems likely". See my old post here [LW · GW] (probably better to read this comment though).

Imagine a Bayesian model with a variable S="HoH seems likely" (to us) and 3 variables pointing towards it: "HoH" (prior: 0.001), "simulation" (prior=0.1), and "other wrong but convincing arguments" (prior=0.01). Note that it seems pretty unlikely there will be convincing but wrong arguments a priori (I used 0.01) because we haven't updated on the outside view yet.

Further, assume that all three causes, if true, are equally likely to cause "HoH seems likely" (say with probability 1, but the probability doesn't affect the posterior).

Apply Bayes' rule: we've observed "HoH seems likely". The denominator in Bayes' rule is P(HoH seems likely) ≈ 0.111 (roughly the sum of the three priors, because the priors are small). The numerator for each hypothesis H equals 1 × P(H).

Bayes' rule gives an equal update (ca. 1/0.111 ≈ 9x) in favor of every hypothesis, bringing the probability of "simulation" up to nearly 90%.

Note that this probability decreases if we find, or think there are, better explanations for "HoH seems likely". This is plausible but not overwhelmingly likely, because we already have a decent explanation with prior 0.1. If we didn't have one, we would still have a lot of pressure to explain "HoH seems likely". The existence of the plausible explanation "simulation", with prior 0.1, "explains away" the need for other explanations, such as those falling under "wrong but convincing argument".

This is just an example; feel free to plug in your own numbers or critique the model.
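The toy model described above can be checked in a few lines. This is a minimal sketch using the comment's illustrative priors; the numbers are the commenter's examples, not empirical estimates:

```python
# Toy Bayesian model from the comment above. Priors are the comment's
# illustrative numbers, not empirical estimates.
priors = {
    "HoH": 0.001,
    "simulation": 0.1,
    "wrong but convincing arguments": 0.01,
}

# Each hypothesis, if true, is assumed to produce the observation
# "HoH seems likely" with the same likelihood (taken to be 1).
likelihood = 1.0

# Denominator of Bayes' rule: P(HoH seems likely), computed exactly.
evidence = sum(likelihood * p for p in priors.values())

# Posterior for each hypothesis: an equal ~9x update in favor of all three.
posteriors = {h: likelihood * p / evidence for h, p in priors.items()}

print(round(evidence, 3))                  # 0.111
print(round(posteriors["simulation"], 2))  # 0.9
print(round(posteriors["HoH"], 3))         # 0.009
```

As the comment notes, adding another good explanation would pull the simulation posterior back down; raising the prior on "other wrong but convincing arguments" shows the explaining-away effect directly.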

comment by Pablo (Pablo_Stafforini) · 2019-09-03T09:34:49.456Z · EA(p) · GW(p)

I liked this post. One comment:

Or perhaps extinction risk is high, but will stay high indefinitely, in which case the future is not huge in expectation, and the grounds for strong longtermism fall away.

I don't think this necessarily follows. If the present generation can reduce the risk of extinction for all future generations, the present value of extinction reduction may still be high enough to vindicate strong longtermism. For example, suppose that each century will be exposed to a 2% constant risk of extinction, and that we can bring that risk down to 1% by devoting sufficient resources to extinction risk reduction. Assuming a stable population of 10 billion, then thanks to our efforts an additional 500 billion lives will exist in expectation, and most of these lives will exist more than 10,000 years from now. Relaxing the stable population assumption strengthens this conclusion.
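Pablo's arithmetic can be verified directly. A minimal sketch, using his illustrative numbers (2% risk per century reduced to 1%, stable population of 10 billion per century):

```python
# Sketch of the expected-value calculation in the comment above, using
# its illustrative numbers: a constant 2% extinction risk per century,
# reduced to 1%, with a stable population of 10 billion per century.

def expected_lives(risk, population=10e9, horizon=10_000):
    """Expected total lives: population times expected centuries survived."""
    return sum(population * (1 - risk) ** t for t in range(horizon))

baseline = expected_lives(0.02)  # ~500 billion (1/0.02 = 50 expected centuries)
reduced = expected_lives(0.01)   # ~1,000 billion (100 expected centuries)
extra = reduced - baseline       # ~500 billion additional expected lives

# Share of those additional lives more than 100 centuries (10,000 years) out:
extra_late = sum(10e9 * (0.99 ** t - 0.98 ** t) for t in range(100, 10_000))
print(round(extra_late / extra, 2))  # 0.6: most of the extra lives are 10,000+ years away
```

This also confirms the final claim in the comment: about 60% of the additional expected lives occur more than 10,000 years from now.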

comment by William_MacAskill · 2019-09-04T04:56:20.135Z · EA(p) · GW(p)

Agreed, good point; I was thinking just of the case where you reduce extinction risk in one period but not in others.

I’ll note, though, that reducing extinction risk at all future times seems very hard to do. I can imagine, if we’re close to a values lock-in point, we could shift societal values such that they care about future extinction risk much more than they would otherwise have done. But if that's the pathway, then the Time of Perils view wouldn’t provide an argument for HoH independent of the Value Lock-In view.

comment by kbog · 2020-12-31T00:02:16.858Z · EA(p) · GW(p)

I think this argument implicitly assumes a moral objectivist point of view.

I'd say that most people in history have been a lot closer to the hinge of history when you recognize that the HoH depends on someone's values.

If you were a hunter-gatherer living in 20,000 BC then you cared about raising your family and building your weir and you lived at the hinge of history for that.

If you were a philosopher living in 400 BC then you cared about the intellectual progress of the Western world and you lived at the hinge of history for that.

If you were a theologian living in 1550 then you cared about the struggle of Catholic and Protestant doctrines and you lived at the hinge of history for that.

If you're an Effective Altruist living in 2020 then you care about global welfare and existential risk, and you live at the hinge of history for that.

If you're a gay space luxury communist living in 2100 then you care about seizing the moons of production to have their raw materials redistributed to the masses, and you live at the hinge of history for that.

This isn't a necessary relationship. We may say that some of these historical hinges really were important by our lights, and maybe a future hinge will be more important. But generally speaking, the rise and fall of motivations and ideologies is correlated with the sociopolitical opportunity for them to matter. So most people throughout history have lived in hingy times.

comment by lauerjeremy · 2019-10-18T21:55:43.110Z · EA(p) · GW(p)

This is an unusual comment for me, since I will talk about religion. The Baha'i Faith claims, at least as it would be expressed in the terminology used here, that something very close to the following are both true:
- Strong Longtermism, and
- The Hinge of History Hypothesis (HoH).
Conjoining/conflating these two claims is the position criticized by Will in this blog post, a position which is at least to a certain degree defended by Toby in his comments.

My sense is that the Baha'i Faith strongly agrees with Toby (and probably goes much farther than he would) in claiming that both these hypotheses are true, and in addition that certain other hypotheses mentioned below are true. I won't back all this up with quotes now, as I have no idea if anyone here is interested in that level of discussion, and it would anyway require some research time to get right. So what I am stating here amounts to my opinions about Baha'i views.

My views are that, at least at surface level, there is a strong coincidence, one well worth noting, between the views of the Baha'i Faith (in the domain under consideration) and the common views in Effective Altruism that Will intended to criticize. Indeed, it should not be surprising if there were to turn out to be a religious element to the conjunction of these two views, and Will relevantly cites the periods of the birth of Christ and early Christianity, the times of Moses, Mohammed, Buddha and other religious leaders, and the Reformation as potentially very hinge-y periods in history. In one sense, the founders of the Baha'i Faith claim to be merely the latest updates in what they term progressive revelation. But the Baha'i Faith does not claim to be merely a hinge event; it also claims that the vast majority of human welfare and human flourishing will be experienced in the distant future.

Moreover, in my view anyway, the Baha'i Faith makes two subsidiary claims about what Will and Toby call "permanent lock-in events". It claims that the following two lock-in events are not something that may occur in the future but rather are actively unfolding at the present time:
1. That war (as one of the primary lock-in mechanisms observed in human history) has been exhausted (i.e. that in a relevant sense there has been an end to war as a world-historical engine), and that there will emerge in the not-very-distant future a unified global culture that will strongly determine what values influence the very long-run future of human flourishing.
2. That there will be a religion (i.e. the Baha'i Faith) that will out-compete both atheism and other religions and become a world religion, one whose values will therefore strongly determine the long-run future.
As I understand it, these latter are also joint claims, i.e. both are held to be true at the same time, rather than competing or mutually exclusive claims.

That said, the Baha'i Faith does not claim to be the *final* world religion. According to Baha'i theology, other religions will necessarily succeed and replace it in the future while remaining nevertheless "under its shadow". Therefore the Baha'i Faith in effect claims to be the Hinge Religion of the entire scope of what we think of as human history, whose flourishing will continue for many millennia eventually resulting in a far-future Golden Age of human civilization. These claims would seem to make the Baha'i Faith an object of curiosity for Effective Altruists who sympathize with strong long-termism and the hinge of history views.

Full disclosure: I have been a member of the Baha'i Faith since 1985.

comment by Pablo (Pablo_Stafforini) · 2019-09-27T07:50:29.031Z · EA(p) · GW(p)

Kelsey Piper has just published a Vox article, 'Is this the most important century in human history?', discussing this post.

comment by Stefan_Schubert · 2019-09-03T15:13:51.750Z · EA(p) · GW(p)

Thanks, I think this was very good.

Re movement-building as a buck-passing strategy, I guess that the formation of the major world religions can be seen as movement-building, in a sense. Yet my interpretation is that you don't see that as an example of buck-passing, but as a more direct change of world history (you mention it as an example of a particularly influential time). Thus some forms of movement-building are, on this view, seen as buck-passing, and not others (size of the movement is probably a relevant factor here, but no doubt there are others).

Maybe that serves to show that the distinction between directly changing world history and passing the buck for later isn't sharp (maybe it could be seen as a matter of degree). It would be good to see some further analysis of this distinction.

comment by William_MacAskill · 2019-09-04T04:59:37.220Z · EA(p) · GW(p)

Thanks - I agree that this distinction is not as crisp as would be ideal. I’d see religion-spreading, and movement-building, as in practice almost always a mixed strategy: in part one is giving resources to future people, and in part one is also directly altering how the future goes.

But it's more like buck-passing than it is like direct work, so I think I should just not include the Axial age in the list of particularly influential times (given my definition of 'influential').

comment by matthew.vandermerwe · 2019-09-04T13:09:09.604Z · EA(p) · GW(p)
there are an expected 1 million centuries to come, and the natural prior on the claim that we’re in the most influential century ever is 1 in 1 million. This would be too low in one important way, namely that the number of future people is decreasing every century, so it’s much less likely that the final century will be more influential than the first century. But even if we restricted ourselves to a uniform prior over the first 10% of civilisation’s history, the prior would still be as low as 1 in 100,000.

Half-baked thought: you might think that the very very long futures will mostly have been locked in very close to their start—i.e. that timescales for locking in the best futures are much much shorter than the maximum lifespan for civilisation. This would push you towards a prior over an even smaller chunk of the expected future.

Something like this view seems implicit in some ways of talking about the future, and feels plausible to me, though I’m not sure what the best arguments are.

comment by Tobias_Baumann · 2019-09-03T12:02:36.463Z · EA(p) · GW(p)

Great post! It's great to see more thought going into these issues. Personally, I'm quite sceptical about claims that our time is especially influential, and I don't have a strong view on whether our time is more or less hingy than other times. Some additional thoughts:

I got the impression that you assume that some time (or times) are particularly hingy (and then go on to ask whether it's our time). But it is also perfectly possible that no time is hingy, so I feel that this assumption needs to be justified. Of course, there is some variation and therefore there is inevitably a most influential time, but the crux of the matter is whether there are differences by a large factor (not just 1.5x). And that is not obvious; for instance, if we look at how people in the past could have shaped 21st century societies, it is not clear to me whether any time was especially important.

I think a key question for longtermism is whether the evolution of values and power will eventually settle in some steady state (i.e. the end of history). It is plausible that hinginess increases as one gets closer to this point. (But it's not obvious, e.g. there could just be a slow convergence to a world government without any pivotal events.) By contrast, if values and influence drift indefinitely, as they did so far in human history, then I don't see strong reasons to expect certain times to be particularly hingy. So it is crucial to ask whether a (non-extinction) steady state will happen, and how far away we are from it. (See also this related post of mine.)

"I suggest that in the past, we have seen hinginess increase. I think that most longtermists I know would prefer that someone living in 1600 passed resources onto us, today, rather than attempting direct longtermist influence."

Does this take into account that there have been fewer people around in 1600, and many ways to have an influence were far less competitive? I feel that a person in 1600 could have had a significant impact, e.g. via advocacy for the "right" moral views (e.g. publishing good arguments for consequentialism, antispeciesism, etc.) or by pushing for general improvements like reducing violence and increasing cooperation. So I don't quite agree with your take on this, though I wouldn't claim the opposite either – it is not obvious to me whether hinginess increased or decreased. (By your inductive argument, that suggests that it's not clear whether the future will be more or less hingy than the present.)

"A related, but more general, argument, is that the most pivotal point in time is when we develop techniques for engineering the motivations and values of the subsequent generation (such as through AI, but also perhaps through other technology, such as genetic engineering or advanced brainwashing technology), and that we’re close to that point."

Similar to your recent point about how creating smarter-than-human intelligence has long been feasible [EA(p) · GW(p)], I'd guess that, given strong enough motivation, a lock-in would already be feasible via brainwashing, propaganda, and sufficiently ruthless oppression of opposition. (We've had these "technologies" for a long time.) The reason why this doesn't quite work in totalitarian states is that a) what you want to lock in is usually the power of an individual dictator or some group of humans, but there's no way to prevent death, and b) people are not fully aligned with the dictator even at the beginning, which limits what you can do (principal-agent problems etc.). The reason we don't do it in liberal democracies is that a) we strongly disapprove of the necessary methods, b) we value free speech and personal autonomy, and c) most people don't really mind moderate forms of value drift. So it's to a large extent a question of motivation and taboos, and it is quite possible that people will reject the use of future lock-in technologies for similar reasons.

comment by Ofer (ofer) · 2019-09-04T15:36:19.496Z · EA(p) · GW(p)

Interesting post!

But even if we restricted ourselves to a uniform prior over the first 10% of civilisation’s history, the prior would still be as low as 1 in 100,000.

Why should we use a uniform distribution as a prior? If I had to bet on which century would be the most influential for a random alien civilization, my prior distribution for "most influential century" would be a monotonically decreasing function.

comment by Larks · 2019-09-06T02:17:23.939Z · EA(p) · GW(p)

Thanks for writing this all up! A few small comments:

And, for the Time of Perils view to really support HoH, it’s not quite enough to show that extinction risk is unusually high; what’s needed is that extinction risk mitigation efforts are unusually cost-effective. So part of the view must be not only that extinction risk is unusually high at this time, but also that longtermist altruists are unusually well-placed to decrease those risks — perhaps because extinction risk reduction is unusually neglected.

It could even be the case that extinction risks were unusually low right now, but this period is nonetheless unusually critical because of the tractability. For example, suppose the main risk to mankind were asteroids or supervolcanoes. Prior to the 20th century, there was little we could do about them, and after the 21st century we will have mature space colonies, so they will no longer be an extinction risk. Only in the interim can we do anything to reduce the probability, by researching the threats, attempting to redirect asteroids, accelerating colonization, and so on.

The primary reasons for believing (2) are that if we’re in a simulation it’s much more likely that the future is short, and that extending our future doesn’t change the total amount of lived experiences (because the simulators will just run some other simulation afterwards), and that we’re missing some crucial consideration around how to act.

I know you mention acausal decision theories elsewhere, but I think it is worthwhile bringing them up here. If we are in an ancestor simulation, it is rational for us to try to reduce existential risk, because this decision is acausally entangled with the decision of the 'original' people, whose existential risk reduction efforts causally lead to the existence of the simulation.

Similarly, I think your prior over our position needs directly address anthropic Doomsday-type arguments.

In contrast, if you are more sympathetic to moral realism (or a more sophisticated form of subjectivism), as I am, then you’ll probably be more sympathetic to the idea that future people will have a better understanding of what’s of value than you do now, and this gives another reason for passing the baton on to future generations.

I think you might be overstating the case here. Suppose you assigned 20% credence to some sort of subjectivist / Lovecraftian parochialism that places a high value on our actual values right now, 50% to meta-ethical moral realism and predicted moral progress in the future, and 30% to other (e.g. moral realism but not moral progress). It seems this would suggest a nearly 20% credence in now being a hinge period. In contrast, according to the moral realist theory, now is not an especially important time. So for moral uncertainty reasons we should act as if now is an unusually important period.

even then you might still want to save money in a Victorian-values foundation to grant out at a later date

I suspect unfortunately the money may end up being essentially stolen and used for other purposes. There are many examples of this - a classic one is the Ford Foundation, which now promotes goals quite different from that which Henry Ford wanted.

Replies from: Ramiro
comment by Ramiro · 2019-09-06T18:13:41.110Z · EA(p) · GW(p)

I agree with your reasoning concerning uncertainty.

In the arguments against HoH, there’s an appeal to the uncertainty of our evaluations of "Influence". However, the definition of most influential time depends on an evaluation of the opportunity costs of investing in one time vs. another (such as the short-term vs. the long-term).

Uncertainty is a double-edged sword: I get confused when someone argues for "give later" mostly on the grounds of our current uncertainty about impact (actually, uncertainty often induces risk-aversion and presentist bias). Suppose that I currently have a credence of 0.7 in the statement "AMF saves at least a life (30 QALY) for every US$3,000"; if I wait ten years, I can hope my confidence in such statements will increase to something like 0.8. However, my confidence in such an increase is just 0.9 – so, when I aggregate all of this uncertainty, it's almost a draw – 0.72.

(Sorry about using point estimates, but I’m no statistician, and I guess we better keep it simple)
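The aggregation in this comment can be written out explicitly. A minimal sketch, keeping the commenter's illustrative point estimates:

```python
# Point-estimate aggregation from the comment above (illustrative numbers).
credence_now = 0.7    # current credence: AMF saves a life per US$3,000
credence_later = 0.8  # hoped-for credence after waiting ten years
p_increase = 0.9      # confidence that the increase actually happens

# Discounting the hoped-for credence by the chance it materializes:
aggregated_later = round(credence_later * p_increase, 2)
print(aggregated_later)  # 0.72, "almost a draw" with the current 0.7
```

The "give later" case here rests on a 0.02 margin, which is why the commenter calls it almost a draw; a slightly lower confidence in the increase would erase the margin entirely.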

Something similar applies to “start a movement”, and I didn’t even mention cluelessness and value shift.

So, if I donate to a Fund that promises me to invest in the best actions in the long term future, instead of the short-term, I have to trust: a) that the world is not going to end first (so, I have to discount extinction rates); b) the Fund and the underlying financial structure will not end first (or significantly lose its value); c) the Fund will correctly identify a more influential moment, and d) its investment will be aligned with my impartial preferences (as I would decide if I had the same info).

comment by Ryan Baker · 2019-09-16T16:50:17.133Z · EA(p) · GW(p)

Interesting piece. One challenge in extending it to decision-making is "resources": it's not clear whether you mean financial instruments or some kind of physical stockpiling, and the post seems to vacillate between the two.

Financial instruments are probably the default, but as we move to longer and longer-term views, the meaning of these becomes vaguer. Does stockpiling financial instruments really pass "resources" to a future generation? While at the micro-economic level these are readily translatable into actual resources, at the macro level the clarity of that relationship breaks down. Printing a trillion extra dollars doesn't increase available resources; it merely shifts the locus of acknowledged control over the existing resource pool.

Likewise, if a long term philanthropist stockpiled financial instruments to be released in 500 years, keeping them fully dormant, they wouldn't transport physical resources 500 years into the future. What it would do is to create a shock to the system in 500 years of a new source of control. During the 500 years where these instruments were dormant, the rest of society would likely behave as if they did not exist, using all available physical resources during the dormant period without any stockpiling.

In addition, many financial instruments aren't dormant by their very nature, but directive. Investment in a stock directs resources and is an active influence. If this follows societal norms it would have little impact, but also not shift resources into the future.

What it can do is shift influence within the future. If that's a valuable enough goal, you still have to consider hazards. If the future isn't willing to accept this influence, it's not in any sense guaranteed. Financial instruments (and even stockpiled resources) can be seized, and the eventual outcome of their stockpiling can be much different than intended. In small enough quantities, for short enough timespans, it's reasonable to expect them to be treated like they have always been. But there is an additional level of uncertainty that compounds with interest over time and is likely to increase when passing thresholds that draw broader attention.

comment by PeterMcCluskey · 2019-09-05T18:46:20.403Z · EA(p) · GW(p)

>The case for focusing on AI safety and existential risk reduction is much weaker if you live in a simulation than if you don’t.

It's true that a pure utilitarian would expect about an order of magnitude less utility from x-risk reduction if we have a 90% chance of being in a simulation compared to a zero chance of being in a simulation. But the pure utilitarian case for x-risk reduction isn't very sensitive to an order of magnitude change in utility, since the expected utility seems many orders of magnitude larger than what's needed to convince a pure utilitarian to focus on x-risks.

From a more selfish perspective, being in a simulation increases my desire to be involved in events that are interesting to the simulators, in case such people get simulated in more detail.

I'm somewhat concerned that being influenced much by the simulation hypothesis increases the risk that the simulation will be shut down, which seems like weak evidence for caution about altering my behavior much in response to the simulation hypothesis.

For these reasons, and WilliamKiely's comments [EA(p) · GW(p)] about priors, I want to treat HoH as more than 1% likely.

comment by Ben_West · 2022-01-06T19:12:51.930Z · EA(p) · GW(p)

This post introduced the "hinge of history hypothesis" to the broader EA community, and that has been a very valuable contribution. (Although note that the author states that they are mostly summarizing existing work, rather than creating novel insights.)

The definitions are clear, and time has proven that the terms "strong longtermism" and "hinge of history" are valuable when considering a wide variety of questions.

Will has since published an updated article, which he links to in this post, and the topic has received input from others, e.g. this critique [EA · GW] from Buck.

If I were going to introduce a new person to this concept today, I think I might instead link them to Holden's Most Important Century sequence [? · GW], although Will's article still seems like the canonical reference for skepticism about us living at the hinge of history.

comment by WilliamKiely · 2019-09-13T06:43:56.534Z · EA(p) · GW(p)

Claim: The most influential time in the future must occur before civilization goes extinct.

Thoughts on whether this is true or not?

Replies from: SiebeRozendal
comment by SiebeRozendal · 2019-09-15T11:59:15.055Z · EA(p) · GW(p)

Must is a strong word, so that's one reason I don't think it's true. What do you mean by "civilization goes extinct"? Because

1) There might be complex societies beyond Earth

2) New complex societies made up of intelligent beings can arise even after Homo sapiens goes extinct

comment by WilliamKiely · 2019-09-04T04:59:05.144Z · EA(p) · GW(p)

Typo corrections:

Lots of things are a priori extremely [unlikely] yet we should have high credence in them

and

so I should update towards the cards having [not] been shuffled.

and

All other things being equal, this gives us reason to give resources to future people than to use rather than to use those resources now.

#3: The simulation update argument against HoH
comment by William_MacAskill · 2019-09-13T19:56:39.368Z · EA(p) · GW(p)

Thanks! :)

I believe the #3 not showing up is due to it having non-bold text on that line. (the [5] footnote). This is kinda awkwardly unexpected behavior, sorry about that. But I'm not sure what I'd rather the behavior be. The simple rule of "lines with only bold text are counted as h4, otherwise it's treated as a paragraph" probably leads to less surprise than some attempt to do a threshold.

comment by trammell · 2019-09-03T12:31:57.308Z · EA(p) · GW(p)

Cool, thanks for getting all these ideas out there!

Possible correction: You write "P(simulation | seems like HoH ) >> P(not-simulation | seems like HoH)". Shouldn't the term on the right just be "P(simulation | doesn't seem like HoH)"?

Replies from: SoerenMind
comment by SoerenMind · 2019-09-03T15:02:21.706Z · EA(p) · GW(p)

Both seem true and relevant. You could in fact write P(seems like HoH | simulation) >> P(seems like HoH | not simulation), which leads to the other two via Bayes theorem.

Replies from: Lukas_Finnveden, trammell, trammell
comment by Lukas_Finnveden · 2019-09-04T08:39:58.062Z · EA(p) · GW(p)

Not necessarily.

P(simulation | seems like HOH) = P(seems like HOH | simulation)*P(simulation) / (P(seems like HOH | simulation)*P(simulation) + P(seems like HOH | not simulation)*P(not simulation))

Even if P(seems like HoH | simulation) >> P(seems like HoH | not simulation), P(simulation | seems like HOH) could be much less than 50% if we have a low prior for P(simulation). That's why the term on the right might be wrong - the present text is claiming that our prior probability of being in a simulation should be large enough that HOH should make us assign a lot more than 50% to being in a simulation, which is a stronger claim than HOH just being strong evidence for us being in a simulation.
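Lukas's point is easiest to see in odds form. A minimal sketch, where both the priors and the likelihood ratio are made up purely for illustration:

```python
# Odds-form Bayes: a large likelihood ratio for "seems like HoH" need not
# push P(simulation | seems like HoH) above 0.5 if the prior is low.
def posterior_simulation(prior_sim, likelihood_ratio):
    """P(sim | evidence), where likelihood_ratio =
    P(evidence | sim) / P(evidence | not sim)."""
    odds = (prior_sim / (1 - prior_sim)) * likelihood_ratio
    return odds / (1 + odds)

# Illustrative priors only, each with a likelihood ratio of 100:
print(round(posterior_simulation(0.1, 100), 3))     # 0.917: modest prior, strong posterior
print(round(posterior_simulation(0.0001, 100), 3))  # 0.01: a low prior dominates the update
```

So the inequality in the post requires not just a large likelihood ratio but also a prior on the simulation hypothesis that isn't too small, which is exactly the stronger claim Lukas identifies.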

Replies from: SoerenMind
comment by SoerenMind · 2019-09-04T15:48:56.292Z · EA(p) · GW(p)

Agreed, I was assuming that the prior for the simulation hypothesis isn't very low because people seem to put credence in it even before Will's argument.

But I found it worth noting that Will's inequality only follows from mine (the likelihood ratio) plus having a reasonably even prior odds ratio.

Replies from: Lukas_Finnveden
comment by Lukas_Finnveden · 2019-09-04T17:34:06.886Z · EA(p) · GW(p)

Ok, I see.

people seem to put credence in it even before Will’s argument.

This is kind of tangential, but some of the reasons that people put credence in it before Will's argument are very similar to Will's argument, so one has to make sure not to update on the same argument twice. Most of the force from the original simulation argument comes from the intuition that ancestor simulations are particularly interesting. (Bostrom's trilemma isn't nearly as interesting for a randomly chosen time-and-space chunk of the universe, because the most likely solution is that nobody ever had any reason to simulate it.) Why would simulations of early humans be particularly interesting? I'd guess that this bottoms out in them having disproportionately much influence over the universe relative to how cheap they are to simulate, which is very close to the argument that Will is making.

comment by trammell · 2019-09-04T08:51:49.354Z · EA(p) · GW(p)

Also, even if one could say P(simulation | seems like HoH) >> P(not-simulation | seems like HoH), that wouldn't be decision-relevant, since it could just be that P(simulation) >> P(not-simulation) in either case. What matters is which observation (seems like HoH or not) renders it more likely that the observer is being simulated.

comment by trammell · 2019-09-04T08:45:25.832Z · EA(p) · GW(p)

We have no idea if simulations are even possible! We can’t just casually assert “P(seems like HoH | simulation) > P(seems like HoH | not simulation)”! All that we can reasonably speculate is that, if simulations are made, they’re more likely to be of special times than of boring times.

Replies from: Lukas_Finnveden, SoerenMind
comment by Lukas_Finnveden · 2019-09-04T21:07:49.813Z · EA(p) · GW(p)

Did you make a typo here? "if simulations are made, they're more likely to be of special times than of boring times" is almost exactly what “P(seems like HoH | simulation) > P(seems like HoH | not simulation)” is saying. The only assumptions you need to go between them are that the world is more likely to seem like HoH for people living in special times than for people living in boring times, and that the statement "more likely to be of special times than of boring times" is meant relative to the rate at which special times and boring times appear outside of simulations.

Replies from: trammell
comment by trammell · 2019-09-04T21:46:22.928Z · EA(p) · GW(p)

And that P(simulation) > 0.

comment by SoerenMind · 2019-09-04T15:52:20.388Z · EA(p) · GW(p)

comment by DPiepgrass (dpiepgrass) · 2019-10-31T13:50:49.685Z · EA(p) · GW(p)
P(simulation | seems like HoH ) >> P(not-simulation | seems like HoH)

Disagree: as a software engineer, my prior for the simulation hypothesis is extraordinarily low because common sense and the laws of physics indicate convincingly that we don't live in a simulation. (The only plausible exception is if I am the only person in the simulation.)

I like Toby's point—seems like the prior about "one person's influence over the future" should decrease over time, and the point about how a significant fraction of all cognitively modern humans ever are alive today is well taken.

Meanwhile on the topic of "having the prerequisite knowledge necessary to positively impact the long-term future", that quantity has been increasing over time, particularly in the last century, given developments in science, philosophy, rationality etc., and that quantity will certainly increase in the coming centuries provided that civilization survives that long. Therefore, in consideration of how society has neglected X-risks and civilization-destroying risks, this point in time seems very hingey in the sense that we can probably already take actions that predictably and non-negligibly affect cataclysmic risk levels, and these actions may determine whether or not society survives long enough to reach a future time when our cluelessness is reduced, and our knowledge and values are improved.

Something I didn't see mentioned in the above discussion is the idea that hingeyness may be unclear even in hindsight. Certainly before the 19th century there is an argument to be made that one could have little impact on the future unless one was, say, Isaac Newton, and even then one's impact was perhaps just to bring science to people a little earlier than would have happened otherwise. But what's more hingey, the 19th or 20th century? Well, when it comes to X-risks, there was no atomic bomb until after modern physics was discovered in the early 20th century, and therefore no MAD cold war... no risk of superbugs until modern medicine, etc. When it comes to risk against civilization, the 20th century seems more hingey than the 19th, but on other topics (like when the best time to be a scientist or engineer is) it is less obvious.

Certain early choices had a lot of impact. A classic example is the Qwerty keyboard; on the other hand this layout was the choice of just one or two people, a choice that no one else could have influenced—this reminds me of a general problem with the 19th century: opportunities to have an impact were rare, because there was e.g. no government funding for science. Note that a successor keyboard like Dvorak could have been designed by vastly more people, so I wonder if things could have gone differently, e.g. what if someone had gone with the flow like I did with my own keyboard design, would it have sold better? What if it was sold in the 1920s instead of the 1930s? Or consider Esperanto—almost anyone could design a language. I heard that Esperanto was largely forgotten when WWI happened, but what if a commander in the allies knew about it, and observed that troops could communicate better if they had a common language? If we had a common language today, surely the world would be different—it's hard to be sure that it would be better, but today many people have to spend vast amounts of time learning English before they can meaningfully affect the course of history.

So I'd say overall that the 20th century was much more hingey, though it's hard to see how to assign credit—do we credit scientists for what they discovered, politicians for what policies they instituted that created funding for science, public servants for how they moderated new institutions, lawyers for the important cases they argued, activists for helping influence elections that led to policy, engineers for what they created, or companies that funded engineers? And what if communist China ultimately has the greatest impact, either by precipitating another world war, or by overturning democracy and free speech in favor of an authoritarian global regime in which the definition of truth can be chosen by the leadership?

So generally I think the knowledge we gather in the future will be crucial for our long-term future, but the things we do today will lay the foundation for that future, and perhaps this is the best thing to focus on: laying down a good foundation.

Each of us can contribute in our own way. As a software engineering veteran, I hope to contribute by designing foundational software, which could potentially act as an accelerator that brings benefits of the future more quickly to the present (my impact is no doubt eclipsed, however, by Steve Krause of Future of Coding who succeeded, where I failed, in building a community, or by Bret Victor who inspired countless people). If you work in medicine you might work on containing the risk of superbugs; if in politics, there are any number of causes that might help build a stable and prosperous world... we may be clueless now, but there are things we know, like: stability and prosperity good, war and catastrophe bad. And while rationalism is in its infancy, I think we have enough epistemological tools to point us in the right directions (my life might have gone quite differently if I had discovered rationalism and EA and left my religion fifteen years earlier!)

In any case, I'm not sure why we should be concerned with how hingey this century is—at least it's probably more hingey than the last century, and in any case we have to play the hand we're dealt. We are clueless about a great many things, but not about everything, suggesting a two-pronged course of action: first to work on reducing cluelessness (and figuring out how to act in the face of cluelessness), and second to help the future in ways we can understand, such as by reducing catastrophic risks.

comment by WillPearson · 2019-10-17T13:12:24.663Z · EA(p) · GW(p)

As currently defined, longtermists have two possible choices:

1. Direct work to reduce X-risk
2. Investing for the future (by saving or movement building) to then spend on reduction of x-risk at a later date

There are however other actions that may be more beneficial.

Let us look again at the definition of 'influential':

a time ti is more influential (from a longtermist perspective) than a time tj iff you would prefer to give an additional unit of resources,[1] that has to be spent doing direct work (rather than investment), to a longtermist altruist living at ti rather than to a longtermist altruist living at tj.

While 'direct work' is not formally defined, here it seems to refer mainly to near-term existential risk mitigation.

The most obvious implication, however, is regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building.

What happens if the answer is neither option? What other levers do we have on the future? One is that we might be able to take actions that change the expected rate of return. Perhaps the expected rate of return on investments is very bad, but there are actions you can take to increase it to more normal levels? Or there is low-hanging fruit for vastly increasing the expected rate of return on investments in the long term.

So let us introduce another type of influentialness (and another version of hingeyness): influential-i, meaning a time at which the most influential actions are ones that attempt to impact interest rates.

a time ti is more influential-i (from a longtermist perspective) than a time tj iff you would prefer to give an additional unit of resources,[1] that has to be spent doing direct work to alter investment rates (rather than normal investment itself or X-risk reduction), to a longtermist altruist living at ti rather than to a longtermist altruist living at tj.

So what would increase the rate of return on investment? New energy sources with a high energy return on energy invested could do so. For example, if you manage to help invent nuclear fusion, you would increase the amount of cleaner energy available to civilisation, giving future altruists more resources to use to solve problems.

Avoiding vast decreases in the rate of return would mean taking actions that manage to stave off civilisation collapse. Civilisation collapse should be a lot higher on the radar for long-term altruists, as it is:

1. More likely than existential risk (considering civilisations have collapsed in the past and there are inside views for it happening in the future)
2. Likely to cause a collapse in the effective altruism movement as well, as people focus lower on Maslow’s hierarchy of needs.
3. Likely to cause hyperinflation (at the minimum), wiping out savings.
4. Lowering/destroying existing existential risk mitigation efforts (e.g. meteorite monitoring programs).

The long-termist community would do well to look at these options when thinking about the time frame of hundreds of years.

comment by David Mears · 2019-10-14T17:28:10.437Z · EA(p) · GW(p)

Would someone be willing to translate these sentences from philosophy/maths into English? Or let me know how I can work it out for myself?

That is: P(cards not shuffled)P(cards in perfect order | cards not shuffled) >> P(cards shuffled)P(cards in perfect order | cards shuffled), even if my prior credence was that P(cards shuffled) > P(cards not shuffled), so I should update towards the cards having not been shuffled.
Similarly, if it seems to me that I’m living in the most influential time ever, this gives me good reason to suspect that the reasoning process that led me to this conclusion is flawed in some way, because P(I’m reasoning poorly)P(seems like I’m living at the hinge of history | I’m reasoning poorly) >> P(I’m reasoning correctly)P(seems like I’m living at the hinge of history | I’m reasoning correctly).

I think this type of writing puts a very high accessibility bar on these sentences. I fall into the class of people who might be expected to understand these formalisms (I work in programming, a supposedly mathsy job).

Replies from: wuschel
comment by wuschel · 2020-04-27T18:25:14.575Z · EA(p) · GW(p)

Imagine you play cards with your friends. You have the deck in your hand, and you are pretty confident that you have shuffled it. Then you deal yourself the first 13 cards. And what a surprise: you happen to find all the clubs in your hand!

What is more reasonable to assume? That you just happened to draw all the clubs, or that you were wrong about having shuffled the cards? Rather the latter.

Compare this to:

Imagine thinking about the HoH hypothesis. You are pretty confident that you are good at long-term forecasting, and you predict that the most influential time in history is: NOW?!

Here too, so the argument goes, it is more reasonable to assume that your belief that you are good at forecasting the future is flawed.
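A rough numeric sketch of this update (the specific numbers are illustrative assumptions, not from the original argument: a 95% prior that the deck was shuffled, a hand of 13 cards so that "all the clubs" is possible, and near-certainty of drawing all clubs if the deck was left in sorted order):

```python
from math import comb

# Prior: we think the deck was probably shuffled.
p_shuffled = 0.95
p_not_shuffled = 0.05

# Likelihood of drawing all 13 clubs in your first 13 cards:
# if shuffled, this is 1 / C(52, 13); if not shuffled (deck left
# in sorted order), take it to be near-certain.
p_clubs_given_shuffled = 1 / comb(52, 13)
p_clubs_given_not_shuffled = 1.0

# Posterior odds of "not shuffled" vs "shuffled", via Bayes' rule.
odds_not_shuffled = (p_not_shuffled * p_clubs_given_not_shuffled) / (
    p_shuffled * p_clubs_given_shuffled
)
print(odds_not_shuffled)  # astronomically favours "not shuffled"
```

Even with a strong prior in favour of "shuffled", the likelihood ratio swamps it, which is the shape of the argument being applied to HoH.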

comment by MichaelA · 2019-10-13T00:51:32.065Z · EA(p) · GW(p)

Very interesting post and discussion in the comments.

I said at the start that it’s non-obvious what follows, for the purposes of action, from outside-view longtermism. The most obvious course of action that might seem comparatively more promising is investment, such as saving in a long-term foundation, or movement-building, with the aim of increasing the amount of resources longtermist altruists have at a future, more hingey time.

Throughout a lot of this post, I was wondering if the sort of reasoning given in that quote would generalise to an update in favour of building "flexible" rather than "targeted" career capital. It seems to me that flexible career capital could be seen as a form of investment that at least allows you to "punt" to your future self, which could be valuable if a later time within your lifetime is "hingier" or at least provides a clearer view of what investment strategies would be best.

For example, instead of focusing specifically on becoming influential in AI policy in the next two decades, one could focus on developing generic prestige/credentials/connections: these would be useful in the decades after that, useful if later insights suggest that work on other x-risks has higher leverage in this lifetime, and useful for future movement-building activities that can then be informed by new insights (e.g., regarding population ethics or metaethics).

So I'm wondering if that's a sensible generalisation of that reasoning, and, if so, whether that would suggest Will would push somewhat against 80k's move towards prioritising targeted career capital (as shown for example in the update on this page).

comment by Ramiro · 2019-09-06T18:48:10.790Z · EA(p) · GW(p)

Thanks for this post. However, HoH still seems ambiguous to me, particularly when we take uncertainty seriously. For example, what kind of comparison is happening in “T is the most influential time ever” - and, consequently, what kind of probability function does one use to model credence in it?

1) Weak-HoH: “the sentence ‘t is hingey’ is more likely to be true for now (or for the next n years) than for any other similar set t in the future”

If you interpret hingey events as produced by stochastic processes modeled by an exponential distribution, then weak-HoH has a trivial explanation.

If the risk of rain is p= .03 per day, then today is most likely to be the next rainy day - because the risk of it being tomorrow is (.97 * .03) – i.e., the probability of not raining today multiplied by the probability of raining tomorrow, and so on.

So, even though it's very unlikely that we'll go extinct in the next year, if I had to bet on an exact year, 2020 is a priori more likely to be it than 2021 - we can only die once. Something similar for AGI: though I don't think it's gonna happen in the next decade, this century is more likely to be The One than the next - but not more likely than the next 900 years, for example.
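The "next rainy day" logic above is just the geometric distribution; a minimal sketch, using the same p = .03 as in the example:

```python
# Probability that the *next* rainy day is exactly k days from now,
# with an independent 3% chance of rain each day (geometric distribution).
p = 0.03

def prob_next_rainy_day(k):
    """P(the first rain falls on day k), where k = 1 is today."""
    return (1 - p) ** (k - 1) * p

print(prob_next_rainy_day(1))  # 0.03 (today)
print(prob_next_rainy_day(2))  # 0.97 * 0.03, slightly less than today
# Each later day is strictly less likely than the one before it, which
# is the sense in which "today" is the single most likely next rainy day.
```

Nothing about today is special here; it just has no "didn't happen yet" factor in front of it, which is the trivial explanation of weak-HoH being pointed at.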

2) The strongest version of HoH is (A): “Now is THE most important time ever”, which is so unlikely that it looks like a strawman. But (B): “Now is more important than the median / average” is very tempting: first, the prior is high – you need evidence 99 times weaker (1:99 against 1:1 odds) to ascertain (B) instead of (C): “Now is in the first percentile of the importance distribution”. Second, it fits the historical record better – it looks like most of the last 200 kyr were boring in comparison with now (of course, I agree there are some huge biases affecting this assessment). Also, the HoH defender may limit the considered time-span: "the next decade will be the most important in the century / the next 100 years"

3) In (1) and (2), I supposed HoH refers only to the future, but some of the arguments against HoH refer to any time, even the past. What’s the relevance (and meaning) of comparing the influence of now to important times in the past – besides assessing the odds of there being more hingey times in the future?

Influence is asymmetric: the past influences both the present and the future. Also, it seems plausible that hingeness is not a “timeless” or absolute property: 3 different rational individuals, X, Y and Z, each located in different times Tx, Ty and Tz, would have different impartial assessments of the set (Tx, Ty, Tz) – mostly because of uncertainty, the path-dependency of their actions, or value differences. And since “hingeness” is an ordering, not a cardinal relation, it might be hard (if not impossible) to aggregate X’s, Y’s and Z’s assessments.