comment by Ben_Snodin · 2021-05-01T08:18:41.210Z
Some initial thoughts on "Are We Living At The Hinge Of History?"
Below I give a very rough summary of Will MacAskill’s article Are We Living At The Hinge Of History?, along with some very preliminary thoughts on the article and some of the questions it raises.
I definitely don’t think that what I’m writing here is particularly original or insightful: I’ve thought about this for no more than a few days, and any points I make are probably repeating points other people have already made somewhere, and/or are misguided. This seems like an incredibly deep topic, and I feel like I’ve barely scratched the surface of it. Also, this is not a focussed piece of writing trying to make a particular point; it’s just a collection of thoughts on a certain topic.
(If you want to just see what I think, skip down to "Some thoughts on the issues discussed in the article")
A summary of the article
(note that the article is an updated version of the original EA Forum post Are we living at the most influential time in history?)
Definition for the Hinge of History (HH)
The Hinge of History claim (HH): we are among the most influential people ever (past or future). Influentialness is, roughly, how much good a particular person at a particular time can do through direct expenditure of resources (rather than investment)
Two prominent longtermist EA views imply HH
Two worldviews prominent in longtermist EA imply that HH is true:
- Time of Perils view: we live at a time of unusually high extinction risk, and we can do an unusual amount to reduce this risk
- Value Lock-In view: we’ll soon invent a technology that allows present-day agents to lock in their values indefinitely into the future (in the Bostrom-Yudkowsky version of this view, the technology is AI)
Arguments against HH
The base rates argument
Claim: our prior should be that we’re as likely as anyone else, past or future, to be the most influential person ever (Bostrom’s Self-Sampling Assumption (SSA)). Under this prior, it’s astronomically unlikely that any particular person is the most influential person ever.
Then the question is how much should we update from this prior
- The standard of evidence (Bayes factor) required to favour HH is incredibly high. E.g. we need a Bayes factor of ~10^7 to move from a 1 in 100 million credence to a 1 in 10 credence (see the sketch after this list). For comparison, a p=0.05 result from a randomised controlled trial gives a Bayes factor of 3 under certain reasonable assumptions.
- The arguments for Time of Perils or Value Lock-In might be somewhat convincing, but it’s hard to see how they could be convincing enough
- E.g. our track record of understanding the importance of historical events is very poor
- When considering how much to update from the prior, we should be aware that there are biases that will tend to make us think HH is more likely than it really is
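To make the ~10^7 figure concrete, here’s a minimal sketch of the odds-ratio arithmetic (my own Python illustration, not something from the article):

```python
# Bayes factor needed to move a credence from `prior` to `posterior`:
# the ratio of posterior odds to prior odds.
def required_bayes_factor(prior: float, posterior: float) -> float:
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

# Moving from a 1-in-100-million credence in HH to a 1-in-10 credence:
print(f"{required_bayes_factor(1e-8, 0.1):.2e}")  # ~1.11e+07, i.e. ~10^7
```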
Counterargument 1: we only need to be at an enormously influential time, not the most influential, and the implications are ~the same either way
- Counter 1 to counterargument 1: the Bostrom-Yudkowsky view says we’re at the most influential time ever, so you should reject the Bostrom-Yudkowsky view if you’re abandoning the idea that we’re at the most influential time ever. So there is a material difference between “enormously influential time” and “most influential time”.
- Counter 2 to counterargument 1: if we’re not at the most influential time, presumably we should transfer our resources forward to the most influential time, so the difference between “enormously influential time” and “most influential time” is highly action-relevant.
Counterargument 2: the action-relevant thing is the influentialness of now compared to any time we can pass resources on to
- Again the Bostrom-Yudkowsky view is in conflict with this
- But MacAskill concedes that it does seem right that this is the action-relevant thing. So e.g. we could assume we can only transfer resources 1000 years into the future and define Restricted-HH: we are among the most influential people out of the people who will live over the next 1000 years
The inductive argument
- Claim: The influentialness of comparable people has been increasing over time, and we should expect this to continue, so the influentialness of future people who we can pass resources on to will be greater
- Evidence: if we compare the state of knowledge and ethics in 1600 vs today, or in 1920/1970 vs today, it seems clear that we have more knowledge and better ethics now
- And it seems clear that there are huge gaps in our knowledge today, so it doesn’t seem that we should expect this trend to break
Arguments for HH
Argument 1: we’re living on a single planet, implying greater influentialness
- Implies particular vulnerabilities e.g. asteroid strikes
- Implies individual people have an unusually large fraction of total resources
- Implies instant global communication
- Asteroids are not a big risk
- For other prominent risks like AI or totalitarianism, being on multiple planets doesn’t seem to help
- We might well have quite a long future period on earth (1000s or 10,000s of years), which makes being on earth now less special
- And in the early stages of space settlement the picture isn’t necessarily that relevantly different from the single-planet one
Argument 2: we’re now in a period of unusually fast economic and tech progress, implying greater influentialness. We can’t maintain the present-day growth rate indefinitely.
- MacAskill seems sympathetic to the argument, but says it implies not that today is the most important time, but that the most important time might be some time in the next few thousand years
- Also, maybe longtermist altruists are less influential during periods of fast economic growth because rapid change makes it harder to plan reliably
- And comparing economic power across long timescales is difficult
A few other arguments for HH are briefly touched on in a footnote: that existential risk / value lock-in lowers the number of future people in the reference class for the influentialness prior; that we might choose other priors that are more favourable to HH; and that earlier people can causally affect more future people
Some quick meta-level thoughts on the article
- I wish it had a detailed discussion about choosing a prior for influentialness, which I think is really important.
- There’s a comment saying the article ignores the implications that the future annual risk of extinction or lock-in has for present-day influentialness, on the grounds that in Trammell’s model this risk is incorporated into the pure rate of time preference. I find that pretty weird: Trammell’s model is barely referenced elsewhere in the paper, so I don’t really see why we should neglect to discuss something just because it happens to be interpreted in a certain way within his model. Maybe I missed the point here.
- I think it’s a shame that MacAskill doesn’t really give numbers in the article for his prior and posterior, either on HH or Restricted-HH (this EA Forum comment thread by Greg Lewis is relevant).
Some thoughts on the issues discussed in the article
Two main points from the article
It kind of feels like there are two somewhat independent things in the article that are most interesting:
- 1. The claim: we should reject the Time of Perils view, and the Bostrom-Yudkowsky view, because in both cases the implication for our current influentialness is implausible
- 2. The question: what do high level / relatively abstract arguments tell us about whether we can do the most good by expending resources now or by passing resources on to future generations?
Avoiding rejecting the Time of Perils and Bostrom-Yudkowsky views
I think there are a few ways we can go to avoid rejecting the Time of Perils and Bostrom-Yudkowsky views:
- We can find the evidence in favour of them strong enough to overwhelm the SSA prior through conventional Bayesian updating
- We can find the evidence in favour of them weaker than in the previous case, but still strong enough that we end up giving them significant credence in the face of the SSA prior, through some more forgiving method than Bayesian updating
- We can use a different prior, or claim that we should be uncertain between different priors
- Or we can just turn the argument (back?) around, and say that the SSA prior is implausible because it implies such a low probability for the Time of Perils and Bostrom-Yudkowsky views. Toby Ord seems to say something like this in the comments to the EA Forum post (see point 3).
A nearby alternative is to modify the Time of Perils and Bostrom-Yudkowsky views a bit so that they don’t imply we’re among the most influential people ever. E.g. for the Bostrom-Yudkowsky view we could make the value lock-in a bit “softer” by saying that for some reason, not necessarily known/stated, the lock-in would probably end after some moderate (on cosmological scales) length of time. I’d guess that many people might find a modified view more plausible even independently of the influentialness implications.
I’m not really sure what I think here, but I feel pretty sympathetic to the idea that we should be uncertain about the prior and that this maybe lends itself to having not too strong a prior against the Time of Perils and Bostrom-Yudkowsky views.
On the question of whether to expend resources now or later
The arguments MacAskill discusses suggest that the relevant time frame is the next few thousand years (because the next few thousand years seem, in expectation, to have especially high influentialness, and because it might be effectively impossible to pass our resources further into the future than that).
It seems like the pivotal importance of priors on influentialness (or similar) then evaporates: it no longer seems that implausible on the SSA prior that now is a good time to expend resources rather than save. E.g. say there’ll be a 20 year period in the next 1000 years where we want to expend philanthropic resources rather than save them to pass on to future generations. Then a reasonable prior might be that we have a 20/1000 = 1 in 50 chance of being in that period. That’s a useful reference point and is enough to make us skeptical about arguments that we are in such a period, but it doesn’t seem overwhelming. In fact, we’d probably want to spend at least some resources now even purely based on this prior.
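To put this in the same Bayesian terms as the base rates argument, here’s a rough back-of-the-envelope sketch (my own, reusing the odds-ratio arithmetic from the earlier code block; the 50% target credence is an arbitrary illustrative choice):

```python
# Contrast the restricted 1-in-50 prior with the unrestricted SSA prior.
# The 20-year window and 1000-year horizon come from the example above;
# the 50% target credence is an arbitrary illustrative choice.
def required_bayes_factor(prior: float, posterior: float) -> float:
    return (posterior / (1 - posterior)) / (prior / (1 - prior))

prior_now_is_the_time = 20 / 1000  # 1 in 50

# Evidence needed to reach a 50% credence that now is the time to spend:
print(round(required_bayes_factor(prior_now_is_the_time, 0.5)))  # ~49

# Versus the ~10^7 needed under the unrestricted SSA prior:
print(f"{required_bayes_factor(1e-8, 0.1):.2e}")  # ~1.11e+07
```

On this restricted prior the evidential bar drops from ~10^7 to ~50, which is why the pivotal importance of the prior seems to evaporate.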
In particular, it seems like some kind of detailed analysis is needed, maybe along the lines of Trammell’s model or at least using that model as a starting point. I think many of the arguments in MacAskill’s article should be part of that detailed analysis, but, to stress the point, they don’t seem decisive to me.
This comment by Carl Shulman on the related EA Forum post, and its replies, have some relevant discussion of this.
The importance of the idea of moral progress
In the article, the Inductive Argument is supported by the idea of moral progress: MacAskill cites the apparent progress in our moral values over the past 400 years as evidence for the idea that we should expect future generations to have better moral values than we do. Obviously, whether we should expect moral progress in the future is a really complex question, but I’m at least sympathetic to the idea that there isn’t really moral progress, just moral fashions (so societies closer in time to ours seem to have better moral values just because they tend to think more like us).
Of course, if we don’t expect moral progress, maybe it’s not so surprising that we have very high influentialness: if past and future actors don’t share our values, it seems very plausible on the face of it that we’re better off expending our resources now than passing them on to future generations in the hope that they’ll carry out our wishes. So maybe MacAskill’s argument about influentialness should update us away from the idea of moral progress?
But if we’re steadfast in our belief in moral progress, maybe it’s not so surprising that we have high influentialness, because we find ourselves in a world where we are among the very few with a longtermist worldview, which won’t be the case in the future as longtermism becomes a more popular view. (I think Carl Shulman might say something like this in the comments to the original EA Forum post.)
My overall take
- I think “how plausible is this stuff under an SSA prior” is a useful perspective
- Still, thinking about this hasn’t caused me to completely dismiss the Time of Perils view or the Bostrom-Yudkowsky view (I probably already had some kind of strong implausibility prior on those views).
- The arguments in the article are useful for thinking about how much (e.g.) the EA longtermist community should be spending rather than saving now, but a much more detailed analysis seems necessary to come to a firm view on this.
A quote to finish
I like the way the article ends, providing some motivation for the Inductive Argument in a way I find appealing on a gut level:
Just as our powers to grow crops, to transmit information, to discover the laws of nature, and to explore the cosmos have all increased over time, so will our power to make the world better — our influentialness. And given how much there is still to understand, we should believe, and hope, that our descendants look back at us as we look back at those in the medieval era, marvelling at how we could have got it all so wrong.