Posts

Are we living at the most influential time in history? 2019-09-03T04:55:31.501Z
Ask Me Anything! 2019-08-14T15:52:15.775Z
'Longtermism' 2019-07-25T21:27:11.568Z
Defining Effective Altruism 2019-07-19T10:49:54.253Z
Age-Weighted Voting 2019-07-12T15:21:31.538Z
A philosophical introduction to effective altruism 2019-07-10T13:40:19.228Z
Aid Scepticism and Effective Altruism 2019-07-03T11:34:22.630Z
Announcing the new Forethought Foundation for Global Priorities Research 2018-12-04T10:36:06.536Z
Projects I'd like to see 2017-06-12T16:19:52.178Z
Introducing CEA's Guiding Principles 2017-03-08T01:57:00.660Z
[CEA Update] Updates from January 2017 2017-02-13T20:56:21.121Z
Introducing the EA Funds 2017-02-09T00:15:29.301Z
CEA is Fundraising! (Winter 2016) 2016-12-06T16:42:36.985Z
[CEA Update] October 2016 2016-11-15T14:49:34.107Z
Setting Community Norms and Values: A response to the InIn Open Letter 2016-10-26T22:44:30.324Z
CEA Update: September 2016 2016-10-12T18:44:34.883Z
CEA Updates + August 2016 update 2016-10-12T18:41:43.964Z
Should you switch away from earning to give? Some considerations. 2016-08-25T22:37:19.691Z
Some Organisational Changes at the Centre for Effective Altruism 2016-07-23T04:29:02.144Z
Call for papers for a special journal issue on EA 2016-03-14T12:46:39.712Z
Assessing EA Outreach’s media coverage in 2014 2015-03-18T12:02:38.223Z
Announcing a forthcoming book on effective altruism 2014-03-16T13:00:35.000Z
The history of the term 'effective altruism' 2014-03-11T02:03:32.000Z
Where I'm giving and why: Will MacAskill 2013-12-30T23:00:54.000Z
What's the best domestic charity? 2013-12-10T19:16:42.000Z
Want to give feedback on a draft sample chapter for a book on effective altruism? 2013-09-22T04:00:15.000Z
How might we be wildly wrong? 2013-09-04T19:19:54.000Z
Money can buy you (a bit) of happiness 2013-07-29T04:00:59.000Z
On discount rates 2013-07-22T04:00:53.000Z
Notes on not dying 2013-07-15T04:00:05.000Z
Helping other altruists 2013-07-01T04:00:08.000Z
The rules of effective altruism. Rule #1: don’t die 2013-06-24T04:00:29.000Z
Vegetarianism, health, and promoting the right changes 2013-06-07T04:00:43.000Z
On the robustness of cost-effectiveness estimates 2013-05-24T04:00:47.000Z
Peter Singer's TED talk on effective altruism 2013-05-22T04:00:50.000Z
Getting inspired by cost-effective giving 2013-05-20T04:00:41.000Z
$1.25/day - What does that mean? 2013-05-17T04:00:25.000Z
An example of do-gooding done wrong 2013-05-15T04:00:16.000Z
What is effective altruism? 2013-05-13T04:00:31.000Z
Doing well by doing good: careers that benefit others also benefit you 2013-04-18T04:00:02.000Z
To save the world, don’t get a job at a charity; go work on Wall Street 2013-02-27T05:00:23.000Z
Some general concerns about GiveWell 2012-12-23T05:00:10.000Z
GiveWell's recommendation of GiveDirectly 2012-11-30T05:00:28.000Z
Researching what we should 2012-11-12T05:00:37.000Z
The most important unsolved problems in ethics 2012-10-15T02:28:58.000Z
How to be a high impact philosopher, part II 2012-09-27T04:00:27.000Z
How to be a high impact philosopher 2012-05-08T04:00:25.000Z
Practical ethics given moral uncertainty 2012-01-31T05:00:01.000Z
Giving isn’t demanding* 2011-11-25T05:00:04.000Z

Comments

Comment by william_macaskill on Gordon Irlam: an effective altruist ahead of his time · 2020-12-18T13:48:13.364Z · EA · GW

I agree that Gordon deserves great praise and recognition! 

One clarification: My discussion of Zhdanov was based on Gordon's work - he volunteered for GWWC in the early days, and cross-posted about Zhdanov on the 80k blog. In DGB, I failed to cite him, which was a major oversight on my part, and I feel really bad about that. (I've apologized to him about this.) So that discussion shouldn't be seen as independent convergence.

Comment by william_macaskill on Thoughts on whether we're living at the most influential time in history · 2020-11-12T20:09:24.555Z · EA · GW

Thanks Greg  - I asked and it turned out I had one remaining day to make edits to the paper, so I've made some minor ones in a direction you'd like, though I'm sure they won't be sufficient to satisfy you. 

Going to have to get back on with other work at this point, but I think your  arguments are important, though the 'bait and switch' doesn't seem totally fair - e.g. the update towards living in a simulation only works when you appreciate the improbability of living on a single planet.

Comment by william_macaskill on Thoughts on whether we're living at the most influential time in history · 2020-11-10T10:54:22.851Z · EA · GW

Thanks for this, Greg.

"But what is your posterior? Like Buck, I'm unclear whether your view is the central estimate should be (e.g.) 0.1% or 1 / 1 million."

I'm surprised this wasn't clear to you, which has made me think I've done a bad job of expressing myself.  

It's the former, and for the reason given in your explanation (2): the facts that we're early, that we're on a single planet, and that we're at such a high rate of economic growth should collectively give us an enormous update. In the blog post I describe what I call the outside-view arguments, including that we're very early on, and say: "My view is that, in the aggregate, these outside-view arguments should substantially update one from one’s prior towards HoH, but not all the way to significant credence in HoH.[3]
[3] Quantitatively: These considerations push me to put my posterior on HoH into something like the [1%, 0.1%] interval. But this credence interval feels very made-up and very unstable."


I'm going to think more about your claim that in the article I'm 'hiding the ball'. I say in the introduction that "there are some strong arguments for thinking that this century might be unusually influential",  discuss the arguments  that I think really should massively update us in section 5 of the article, and in that context I say "We have seen that there are some compelling arguments for thinking that the present time is unusually influential. In particular, we are growing very rapidly, and civilisation today is still small compared to its potential future size, so any given unit of resources is a comparatively large fraction of the whole. I believe these arguments give us reason to think that the most influential people may well live within the next few thousand years."   Then in the conclusion I say: "There are some good arguments for thinking that our time is very unusual, if we are at the start of a very long-lived civilisation: the fact that we are so early on, that we live on a single planet, and that we are at a period of rapid economic and technological progress, are all ways in which the current time is very distinctive, and therefore are reasons why we may be highly influential too." That seemed clear to me, but I should judge clarity by how  readers interpret what I've written. 

Comment by william_macaskill on Thoughts on whether we're living at the most influential time in history · 2020-11-10T10:43:14.646Z · EA · GW

Actually, rereading my post I realize I had already made an edit similar to the one you suggest  (though not linking to the article which hadn't been finished) back in March 2020:

"[Later Edit (Mar 2020): The way I state the choice of prior in the text above was mistaken, and therefore caused some confusion. The way I should have stated the prior choice, to represent what I was thinking of, is as follows:

The prior probability of us living in the most influential century, conditional on Earth-originating civilization lasting for n centuries, is 1/n.

The unconditional prior probability over whether this is the most influential century would then depend on one's priors over how long Earth-originating civilization will last for. However, for the purpose of this discussion we can focus on just the claim that we are at the most influential century AND that we have an enormous future ahead of us. If the Value Lock-In or Time of Perils views are true, then we should assign a significant probability to that claim. (i.e. they are claiming that, if we act wisely this century, then this conjunctive claim is probably true.) So that's the claim we can focus our discussion on.

It's worth noting that my proposal follows from the Self-Sampling Assumption, which is roughly (as stated by Teru Thomas in 'Self-location and objective chance' (ms)): "A rational agent’s priors locate him uniformly at random within each possible world." I believe that SSA is widely held: the key question in the anthropic reasoning literature is whether it should be supplemented with the self-indication assumption (giving greater prior probability mass to worlds with large populations). But we don't need to debate SIA in this discussion, because we can simply assume some prior probability distribution over sizes of the total population - the question of whether we're at the most influential time does not require us to get into debates over anthropics.]

Comment by william_macaskill on Thoughts on whether we're living at the most influential time in history · 2020-11-09T17:13:44.443Z · EA · GW

Thanks, Greg.  I really wasn't meaning to come across as super confident in a particular posterior (rather than giving an indicative number for a central estimate), so I'm sorry if I did.


"It seems more reasonable to say 'our' prior is rather some mixed gestalt on considering the issue as a whole, and the concern about base-rates etc. should be seen as an argument for updating this downwards, rather than a bid to set the terms of the discussion."

I agree with this (though see the discussion with Lukas for some clarification about what we're talking about when we say 'priors', i.e. whether we are building the fact that we're early into our priors or not).

Comment by william_macaskill on Thoughts on whether we're living at the most influential time in history · 2020-11-09T17:04:27.894Z · EA · GW

Richard’s response is about right. My prior with respect to influentialness is such that either: x-risk is almost surely zero; or we are almost surely not going to have a long future; or x-risk is higher now than it will be in the future, but harder to prevent than it will be in the future; or in the future there will be non-x-risk-mediated ways of affecting similarly enormous amounts of value; or the idea that most of the value is in the future is false.

I do think we should update away from those priors, and I think that update is sufficient to make the case for longtermism. I agree that the location in time that we find ourselves in (what I call ‘outside-view arguments’ in my original post) is sufficient for a very large update.

Practically speaking, thinking through the surprisingness of being at such an influential time made me think: 

  • Maybe I was asymmetrically assessing evidence about how high x-risk is this century. I think that’s right; e.g. I now don’t think that x-risk from nuclear war is as high as 0.1% this century, and I think that longtermist EAs have sometimes overstated the case in favour.
  • If we think that there’s high existential risk from, say, war, we should (by default) think that such high risk will continue into the future. 
  • It’s more likely that we’re in a simulation.

It also made me take more seriously the thoughts that in the future there might be non-extinction-risk mechanisms for producing comparably enormous amounts of (expected) value, and that maybe there’s some crucial consideration(s) that we’re currently missing such that our actions today are low-expected-value compared to actions in the future.

Comment by william_macaskill on Thoughts on whether we're living at the most influential time in history · 2020-11-09T16:06:59.804Z · EA · GW

"Only using a single, simple function for something so complicated seems overconfident to me. And any mix of functions where one of them assigns decent probability to early people being the most influential is enough that it's not super unlikely that early people are the most influential."

I strongly agree with this. The fact that, under a mix of distributions, it becomes not super unlikely that early people are the most influential is really important, and was somewhat buried in the original comments-discussion.

And then we're also very distinctive in other ways: being on one planet, being at such a high-growth period, etc. 

Comment by william_macaskill on Thoughts on whether we're living at the most influential time in history · 2020-11-09T15:59:10.273Z · EA · GW

Thanks, I agree that this is  key. My thoughts: 

  • I agree that our earliness gives a dramatic update in favor of us being influential. I don't have a stable view on the magnitude of that. 
  • I'm not convinced that the negative exponential form of Toby's distribution is the right one, but I don't have any better suggestions.
  • Like Lukas, I think that Toby's distribution gives too much weight to early people, so the update I would make is less dramatic than Toby's.
  • Seeing as Toby's prior is quite sensitive to choice of reference-class, I would want to choose the reference class of all observer-moments, where an observer is a conscious being. This means we're not as early as we would say if we used the distribution of Homo sapiens, or of hominids. I haven't thought about what exactly that means, though my intuition is that it means the update isn't nearly as big.    

So I guess the answer to your question is 'no': our earliness is an enormous update, but not as big as Toby would suggest.

Comment by william_macaskill on Thoughts on whether we're living at the most influential time in history · 2020-11-09T15:51:55.513Z · EA · GW

"If we're doing things right, it shouldn't matter whether we're building earliness into our prior or updating on the basis of earliness."

Thanks, Lukas, I thought this was very clear and exactly right. 

"So now we've switched over to instead making a guess about P(X in E | X in H), i.e. the probability that one of the 1e10 most influential people also is one of the 1e11 earliest people, and dividing by 10. That doesn't seem much easier than making a guess about P(X in H | X in E), and it's not obvious whether our intuitions here would lead us to expect more or less influentialness."

That's interesting, thank you - this statement of the debate has helped clarify things for me.  It does seem to me that doing the update -  going via P(X in E | X in H) rather than directly trying to assess P(X in H | X in E)  - is helpful, but I'd understand the position of someone who wanted just to assess P(X in H | X in E) directly. 

I think it's helpful to assess P(X in E | X in H) because it's not totally obvious how one should update on the basis of earliness. The arrow of causality and the possibility of lock-in over time definitely give reasons in favor of influential people being earlier. But there's still the big question of how great an update that should be. And the cumulative nature of knowledge and understanding gives reasons in favor of thinking that later people are more likely to be more influential.

This seems important to me because, for someone claiming that we should think that we're at the HoH, the update on the basis of earliness is doing much more work than updates on the basis of, say, familiar arguments about when AGI is coming and what will happen when it does.  To me at least, that's a striking fact and wouldn't have been obvious before I started thinking about these things.

Comment by william_macaskill on Thoughts on whether we're living at the most influential time in history · 2020-11-09T15:24:42.889Z · EA · GW

This comment of mine in particular seems to have been downvoted. If anyone were willing, I'd be interested to understand why: is that because (i) the tone is off (seemed too combative?); (ii) the arguments themselves are weak; (iii) it wasn't clear what I'm saying; (iv) it wasn't engaging with Buck's argument; (v) other?

Comment by william_macaskill on Thoughts on whether we're living at the most influential time in history · 2020-11-09T15:19:19.567Z · EA · GW

Yeah, I do think the priors-based argument given in the post was poorly stated, and therefore led to unnecessary confusion. Your suggestion is very reasonable, and I've now edited the post.

Comment by william_macaskill on Thoughts on whether we're living at the most influential time in history · 2020-11-04T16:00:41.863Z · EA · GW

Comment (5/5)

Smaller comments 

  • I agree that one way you can avoid thinking we’re astronomically influential is by believing the future is short, such as by believing you’re in a simulation, and I discuss that in the blog post at some length. But, given that there are quite a number of ways in which we could fail to be at the most influential time (perhaps right now we can do comparatively little to influence the long-term, perhaps we’re too lacking in knowledge to pick the right interventions wisely, perhaps our values are misguided, perhaps longtermism is false, etc), it seems strange to put almost all of the weight on one of those ways, rather than give some weight to many different explanations. 
  • “It’s not clear why you’d think that the evidence for x-risk is strong enough to think we’re one-in-a-million, but not stronger than that.” This seems pretty strange as an argument to me. A credence of one-in-a-thousand is a thousand times higher than a credence of one-in-a-million, so of course if you think the evidence pushes you to a credence of one-in-a-million, it needn’t push you all the way to one-in-a-thousand. This seems important to me. Yes, you can give me arguments for thinking that we’re (in expectation at least) at an enormously influential time - as I say in the blog post and the comments, I endorse those arguments! I think we should update massively away from our prior, in particular on the basis of the current rate of economic growth. But for direct philanthropy to beat patient philanthropy, being at a hugely influential time isn’t enough. Even if this year is hugely influential, next year might be even more influential again; even if this century is hugely influential, next century might be more influential again. And if that’s true, then - as far as the consideration of wanting to spend our philanthropy at the most influential times goes - we have a reason for saving rather than donating right now.
  • You link to the idea that the Toba catastrophe was a bottleneck for human populations. Though I agree that we used to be more at-risk from natural catastrophes than we are today, more recent science has cast doubt on that particular hypothesis. From The Precipice: “the “Toba catastrophe hypothesis” was popularized by Ambrose (1998). Williams (2012) argues that imprecision in our current archeological, genetic and paleoclimatological techniques makes it difficult to establish or falsify the hypothesis. See Yost et al. (2018) for a critical review of the evidence. One key uncertainty is that genetic bottlenecks could be caused by founder effects related to population dispersal, as opposed to dramatic population declines.”
    • Ambrose, S. H. (1998). “Late Pleistocene Human Population Bottlenecks, Volcanic Winter, and Differentiation of Modern Humans.” Journal of Human Evolution, 34(6), 623–51
    • Williams, M. (2012). “Did the 73 ka Toba Super-Eruption have an Enduring Effect? Insights from Genetics, Prehistoric Archaeology, Pollen Analysis, Stable Isotope Geochemistry, Geomorphology, Ice Cores, and Climate Models.” Quaternary International, 269, 87–93.
    • Yost, C. L., Jackson, L. J., Stone, J. R., and Cohen, A. S. (2018). “Subdecadal Phytolith and Charcoal Records from Lake Malawi, East Africa, Imply Minimal Effects on Human Evolution from the ∼74 ka Toba Supereruption.” Journal of Human Evolution, 116, 75–94.
Comment by william_macaskill on Thoughts on whether we're living at the most influential time in history · 2020-11-04T15:59:15.166Z · EA · GW

(Comment 4/5) 

The argument against patient philanthropy

“I sometimes hear the outside view argument used as an argument for patient philanthropy, which it in fact is not.”

I don’t think this works quite in the way you think it does.

It is true that, in a similar vein to the arguments I give against being at the most influential time (where ‘influential’ is a technical term, excluding investing opportunities), you can give an outside-view argument against now being the time at which you can do the most good tout court. As a matter of fact, I believe that’s true: we’re almost certainly not at the point in time, in all history, at which one can do the most good by investing a given unit of resources to donate at a later date. That time could plausibly be earlier than now, because you get greater investment returns, or plausibly later than now, because in the future we might have a better understanding of how to structure the right legal instruments, specify the constitution of one’s foundation, etc.

But this is not an argument against patient philanthropy compared to direct action. In order to think that patient philanthropy is the right approach, you do not need to make the claim that now is the time, out of all times, when patient philanthropy will do the most expected good. You just need the claim that, currently, patient philanthropy will do more good than direct philanthropy. This is a (much, much) weaker claim to make.

And, crucially, there’s an asymmetry between patient philanthropy and direct philanthropy. 

Suppose there are 70 time periods at which you could spend your philanthropic resources (every remaining year of your life, say), and that the scale of your philanthropy is small (so that diminishing returns can be ignored). Then, if the expected cost-effectiveness of the best opportunities varies substantially over time, there will be just one point in time at which your philanthropy will have the most impact, and you should try to max out your philanthropy at that time period, donating all your philanthropy at that time if you can. (Perhaps that isn’t quite possible because you are limited in how much debt you can take out against future income; but still, the number of times you will donate in your life will be small.) So, in 69 out of 70 time periods (or, even if you need to donate a few times, ~67 out of 70 time periods), you should be saving rather than donating. That’s why direct philanthropy needs to make the claim that now is the most, or at least one of the most, potentially-impactful times, out of the relevant time periods when one could donate, whereas patient philanthropy doesn’t.

Second, the inductive argument against now being the optimal time for patient philanthropy is much weaker than the inductive argument against now being the most influential time (in the technical sense of ‘influential’). It’s not clear there is an inductive argument against now being the optimal time for patient philanthropy: there’s at least a plausible argument that, on average, every year the value of patient philanthropy decreases, because one loses one extra year of investment returns. Combined with the fact that one cannot affect the past (well, putting non-causal decision theories to the side ;) ), this gives an argument for thinking that now will be higher-impact for patient philanthropy than all future times.

Personally, I don’t think that argument quite works, because you can still mess up patient philanthropy, so maybe future people will do patient philanthropy better than we do. But it’s an argument that’s much more compelling in the case of patient philanthropy than it is for the influentialness of a time.

Comment by william_macaskill on Thoughts on whether we're living at the most influential time in history · 2020-11-04T15:57:57.776Z · EA · GW

(Comment 3/5) 

Earliness

“Will’s resolution is to say that in fact, we shouldn’t expect early times in human history to be hingey, because that would violate his strong prior that any time in human history is equally likely to be hingey.”

I don’t see why you think I think this. (I also don’t know what “violating” a prior would mean.)

The situation is: I have a prior over how influential I’m likely to be. Then I wake up, find myself in the early 21st century, and make a whole bunch of updates. This includes updates on the facts that: I’m on one planet; I’m at a period of unusually high economic growth and technological progress; and I *seem* to be unusually early on and can’t be very confident that the future is short. So, as I say in the original post and the comments, I update (dramatically) on my estimate of my influentialness, on the basis of these considerations. But by how much? Is it a big enough update to conclude that I should be spending my philanthropy this year rather than next, or this century rather than next century? I say: no. And I haven’t yet seen a quantitative argument for thinking that the answer is ‘yes’, whereas the inductive argument seems to give a positive argument for thinking ‘no’.

One reason for thinking that the update, on the basis of earliness, is not enough, is related to the inductive argument: that it would suggest that hunter-gatherers, or Medieval agriculturalists, could do even more direct good than we can. But that seems wrong. Imagine you can give an altruistic person at one of these times a bag of oats, or sell that bag today at market prices. Where would you do more good? The case in favour of earlier is if you think that speeding up economic growth / technological progress is so good that the greater impact you’d have at earlier times outweighs the seemingly better opportunities we have today. But I don’t think you believe that, and at least the standard EA view is that the benefits of speed-up are small compared to x-risk reduction or other proportional impacts on the value of the long-run future.

Comment by william_macaskill on Thoughts on whether we're living at the most influential time in history · 2020-11-04T15:56:10.361Z · EA · GW

(Comment 2/5)

The outside-view argument (in response to your first argument)

In the blog post, I stated the priors-based argument quite poorly - I thought this bit wouldn’t be where the disagreement was, so I didn’t spend much time on it. How wrong I was about that! For the article version (link), I tidied it up.

The key thing is that the way I’m setting priors is as a function from populations to credences: for any property F, your prior should be such that, if there are n people in a population, the probability that you are among the m most F people in that population is m/n.

This falls out of the self-sampling assumption, that a rational agent’s priors locate her uniformly at random within each possible world. If you reject this way of setting priors then, by modus tollens, you reject the self-sampling assumption. That’s pretty interesting if so! 

On this set-up of the argument (which is what was in my head but I hadn’t worked through), I don’t make any claims about how likely it is that we are part of a very long future. Only that, a priori, the probability that we’re *both* in a very large future *and* one of the most influential people ever is very low. For that reason, there aren’t any implications from that argument to claims about the magnitude of extinction risk this century. We could be comparatively un-influential in many ways: if extinction risk is high this century but continues to be high for very many centuries; if extinction risk is low this century and will be higher in coming centuries; if extinction risk is at any level and we can’t do anything about it, or we are not yet knowledgeable enough to choose actions wisely; or if longtermism is false; etc.

Separately, I still don’t see the case for building earliness into our priors, rather than updating on the basis of finding oneself seemingly-early. Building earliness into your prior means you’ve got to give up on the very-plausible-seeming self-sampling assumption; means you’ve got to treat the predicate ‘is most influential’ differently than other predicates; has technical challenges; and  the case in favour seems to rely on a posteriori observations about how the world works, like those you give in your post.

Comment by william_macaskill on Thoughts on whether we're living at the most influential time in history · 2020-11-04T15:51:24.090Z · EA · GW

(Comment 1/5)

Thanks so much for engaging with this, Buck! :)

I revised the argument of the blog post into a forthcoming article, available at my website (link). I’d encourage people to read that version rather than the blog post, if you’re only going to read one. The broad thrust is the same, but the presentation is better. 

I’ll discuss the improved form of the discussion about priors in another comment. Some other changes in the article version:

  • I frame the argument in terms of the most influential people, rather than the most influential times. It’s the more natural reference class, and is more action-relevant. 
  • I use the term ‘influential’ rather than ‘hingey’. It would be great if we could agree on terminology here; as Carl noted on my last post, ‘hingey’ could make the discussion seem unnecessarily silly.
  • I define ‘influentialness’ (aka ‘hingeyness’) in terms of ‘how much expected good you can do’, not just ‘how much expected good you can do from a longtermist perspective’. Again, that’s the more natural formulation, and, importantly, one way in which we could fail to be at the most influential time (in terms of expected good done by direct philanthropy) is if longtermism is false and, say, we only discover the arguments that demonstrate that in a few decades’ time. 
  • The paper includes a number of graphs, which I think helps make the case clearer.
  • I don’t discuss the simulation argument. (Though that's mainly for space and academic normalcy reasons - I think it's important, and discuss it in the blog post.)
Comment by william_macaskill on How hot will it get? · 2020-04-24T10:37:36.810Z · EA · GW

Something I forgot to mention in my comments before: Peter Watson suggested to me that it's reasonably likely that estimates of climate sensitivity will be revised upwards for the next IPCC report, as the latest generation of models is running hotter. (e.g. https://www.carbonbrief.org/guest-post-why-results-from-the-next-generation-of-climate-models-matter, https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019GL085782 - "The range of ECS values across models has widened in CMIP6, particularly on the high end, and now includes nine models with values exceeding the CMIP5 maximum (Figure 1a). Specifically, the range has increased from 2.1–4.7 K in CMIP5 to 1.8–5.6 K in CMIP6.") This could drive up the probability mass over 6 degrees in your model by quite a bit, so it could be worth doing a sensitivity analysis on that.

Comment by william_macaskill on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T15:35:31.459Z · EA · GW

How much do you worry that MIRI's default non-disclosure policy is going to hinder MIRI's ability to do good research, because it won't be able to get as much external criticism?

Comment by william_macaskill on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T15:34:24.024Z · EA · GW

Suppose you find out that Buck-in-2040 thinks that the work you're currently doing is a big mistake (which should have been clear to you, now). What are your best guesses about what his reasons are?

Comment by william_macaskill on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T15:33:03.058Z · EA · GW

What's the biggest misconception people have about current technical AI alignment work? What's the biggest misconception people have about MIRI?

Comment by william_macaskill on Reality is often underpowered · 2019-10-12T10:20:32.770Z · EA · GW

Thanks Greg - I really enjoyed this post.

I don't think that this is what you're saying, but I think if someone drew the lesson from your post that, when reality is underpowered, there's no point in doing research into the question, that would be a mistake.

When I look at tiny-n sample sizes for important questions (e.g.: "How have new ideas made major changes to the focus of academic economics?" or "Why have social movements collapsed in the past?"), I generally don't feel at all like I'm trying to get a p < 0.05; it feels more like hypothesis generation. So when I find out that Kahneman and Tversky spent 5 years honing the article "Prospect Theory" into a form that could be published in an economics journal, I think "wow, ok, maybe that's the sort of time investment that we should be thinking of". Or when I see social movements collapse because of in-fighting (e.g. pre-Copenhagen UK climate movement), or romantic disputes between leaders (e.g. Objectivism), then - insofar as we just want to take all the easy wins to mitigate catastrophic risks to the EA community - I know that this risk is something to think about and focus on for EA.

For these sorts of areas, the right approach seems to be granular qualitative research - trying to really understand in depth what happened in some other circumstance, and then think through what lessons that entail for the circumstance you're interested in. I think that, as a matter of fact, EA does this quite a lot when relevant. (E.g. Grace on Szilard, or existing EA discussion of previous social movements). So I think this gives us extra reason to push against the idea that "EA-style analysis" = "quant-y RCT-esque analysis" rather than "whatever research methods are most appropriate to the field at hand". But even on qualitative research I think the "EA mindset" can be quite distinctive - certainly I think, for example, that a Bayesian-heavy approach to historical questions, often addressing counterfactual questions, and looking at those issues that are most interesting from an EA perspective (e.g. how modern-day values would be different if Christianity had never taken off), would be really quite different from almost all existing historical research.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-13T19:56:39.368Z · EA · GW

Thanks! :)

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-13T19:51:21.656Z · EA · GW

Sorry - the 'or otherwise lost' qualifier was meant to be a catch-all for any way in which the investment could lose its value, including (bad) value-drift.

I think there's a decent case for (some) EAs doing better at avoiding this than e.g. typical foundations:

  • If you have precise values (e.g. classical utilitarianism) then it's easier to transmit those values across time - you can write your values down clearly as part of the constitution of the foundation, and it's easier to find and identify younger people to take over the fund who also endorse those values. In contrast, for other foundations, the ultimate aims of the foundation are often not clear, and too dependent on a particular empirical situation (e.g. Benjamin Franklin's funds were 'to provide loans for apprentices to start their businesses' (!!)).
  • If you take a lot of time carefully choosing who your successors are (and those people take a lot of time over who their successors are).

Then to reduce appropriation, one could spread the funds across many different countries and different people who share your values. (Again, easier if you endorse a set of values that are legible and non-idiosyncratic.)

It might still be true that the chance of the fund becoming valueless gets large over time (if, e.g. there's a 1% risk of it losing its value per year), but the size of the resources available also increases exponentially over time in those worlds where it doesn't lose its value.

A caveat: there are also tricky questions about when 'value drift' is a bad thing, rather than the future fund owners simply having a better understanding of the right thing to do than the founders did - which often seems to be true for long-lasting foundations.



Comment by william_macaskill on Ask Me Anything! · 2019-09-13T01:14:45.576Z · EA · GW

I think you might be misunderstanding what I was referring to. An example of what I mean: suppose Jane is deciding whether to work for DeepMind on the AI safety team. She’s unsure whether this speeds up or slows down AI development; her credence is imprecise, represented by the interval [0.4, 0.6]. She’s confident, let’s say, that speeding up AI development is bad. Because there’s some precisification of her credences on which taking the job is good, and some on which taking the job is bad, if she uses a Liberal decision rule (= it is permissible for you to perform any action that is permissible according to at least one of the credence functions in your set), it’s permissible for her either to take the job or not to take it.

The issue is that, if you have imprecise credences and a Liberal decision rule, and are a longtermist, then almost all serious contenders for actions are permissible.

So the neartermist would need to have some way of saying that (i) we can carve out the definitely-good part of the action, which is better than not doing the action on all precisifications of the credence; and (ii) we can ignore the other parts of the action (e.g. the flow-through effects) that are good on some precisifications and bad on some precisifications. It seems hard to make that theoretically justified, but I think it matches how people actually think, so it at least has some common-sense motivation.

But you could do it if you could argue for a pseudodominance principle that says: "If there's some interval of time t_i over which action x does more expected good than action y on all precisifications of one's credence function, and there's no interval of time t_j at which action y does more expected good than action x on all precisifications of one's credence function, then you should choose x over y".
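
To make the pseudodominance principle concrete, here is a minimal sketch in code; the actions, intervals, and numbers are hypothetical and purely illustrative, not drawn from the discussion above.

```python
from typing import Dict, List

# For each time interval, list an action's expected good under each
# precisification of the imprecise credence function (same ordering of
# precisifications in every list).
Profile = Dict[str, List[float]]

def pseudodominates(x: Profile, y: Profile) -> bool:
    """True iff x beats y on all precisifications over some interval,
    and y never beats x on all precisifications over any interval."""
    x_wins_somewhere = any(
        all(xg > yg for xg, yg in zip(x[t], y[t])) for t in x
    )
    y_wins_somewhere = any(
        all(yg > xg for xg, yg in zip(x[t], y[t])) for t in x
    )
    return x_wins_somewhere and not y_wins_somewhere

# Toy numbers: x's near-term effects are better on every precisification;
# its long-run effects are good on one precisification and bad on the other.
x = {"near term": [10.0, 12.0], "far future": [1e6, -1e6]}
y = {"near term": [1.0, 2.0], "far future": [-1e6, 1e6]}
print(pseudodominates(x, y))  # True: the principle says choose x over y
```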


(In contrast, it seems you thought I was referring to AI vs some other putative great longtermist intervention. I agree that plausible longtermist rivals to AI and bio are thin on the ground.)

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-13T01:07:29.453Z · EA · GW

Thanks, William! 

Yeah, I think I messed up this bit. I should have used the harmonic mean rather than the arithmetic mean when averaging over possibilities of how many people will be in the future. Doing this brings the chance of being the most influential person ever close to the chance of being the most influential person ever in a small-population universe. But then we get the issue that being the most influential person ever in a small-population universe is much less important than being the most influential person in a big-population universe. And it’s only the latter that we care about.


So what I really should have said (in my too-glib argument) is: for simplicity, just assume a high-population future - those are the action-relevant futures if you're a longtermist. Then take a uniform prior over all times (or all people) in that high-population future. So my claim is: “In the action-relevant worlds, the frequency of ‘most important time’ (or ‘most important person’) is extremely low, and so should be our prior.”
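
As a minimal numerical sketch of the harmonic-vs-arithmetic-mean point (the population figures and credences below are made up purely for illustration):

```python
from statistics import harmonic_mean

populations = [1e10, 1e14, 1e18]  # possible total populations, small to huge
credences   = [1/3, 1/3, 1/3]     # credence in each population scenario

# Correct averaging: P(most influential) = sum_i p_i * (1 / n_i),
# i.e. 1 / (weighted harmonic mean of the populations).
p_correct = sum(p / n for p, n in zip(credences, populations))

# The too-glib averaging: 1 / (arithmetic mean of the populations).
p_glib = 1 / sum(p * n for p, n in zip(credences, populations))

print(p_correct)                       # ~3.3e-11, dominated by the small-population case
print(1 / harmonic_mean(populations))  # same value (equal weights)
print(p_glib)                          # ~3e-18, far too small
```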

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-13T01:05:44.562Z · EA · GW

Thanks for these links. I’m not sure if your comment was meant to be a criticism of the argument, though? If so: I’m saying “prior is low, and there is a healthy false positive rate, so don’t have high posterior.” You’re pointing out that there’s a healthy false negative rate too — but that won’t cause me to have a high posterior?

And, if you think that every generation is increasing in influentialness, that’s a good argument for thinking that future generations will be more influential and we should therefore save.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-13T01:02:40.773Z · EA · GW

There were a couple of recurring questions, so I’ve addressed them here.

What’s the point of this discussion — isn’t passing on resources to the future too hard to be worth considering? Won’t the money be stolen, or used by people with worse values?

In brief: Yes, losing what you’ve invested is a risk, but (at least for relatively small donors) it’s outweighed by investment returns. 

Longer: The concept of ‘influentialness of a time’ is the same as the cost-effectiveness (from a longtermist perspective) of the best opportunities accessible to longtermists at a time. Suppose I think that the best opportunities in, say, 100 years are as good as the best opportunities now. Then, if I have a small amount of money, I can get (say) at least a 2% return per year on those funds. But I shouldn’t think that the chance of my funds being appropriated (or otherwise lost) is as high as 2% per year. So the expected amount of good I do is greater by saving.

So if you think that hingeyness (as I’ve defined it) is about the same in 100 years as it is now, or greater, then there’s a strong case for investing for 100 years before spending the money.

(Caveat that once we consider larger amounts of money, diminishing returns for expenditure becomes an issue, and chance of appropriation increases.)
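
A rough sketch of that expected-value comparison, with illustrative numbers (the particular rates and horizon are assumptions, not figures from the post):

```python
def expected_multiplier(annual_return: float, annual_loss_risk: float, years: int) -> float:
    """Expected value of a saved unit of resources after `years`, relative to
    donating it now, assuming equally good opportunities at both times."""
    return ((1 + annual_return) * (1 - annual_loss_risk)) ** years

print(expected_multiplier(0.02, 0.01, 100))  # ~2.7: loss risk below the return rate, saving wins
print(expected_multiplier(0.02, 0.03, 100))  # ~0.34: loss risk above the return rate, donate now
```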

What’s your view on anthropics? Isn’t that relevant here?

I’ve been trying to make claims that aren’t sensitive to tricky issues in anthropic reasoning. The claim that, if there are n people, ordered in terms of some relation F (like ‘more important than’), the prior probability that you are the most F (‘most important’) person is 1/n doesn’t distinguish between anthropic principles, because I’ve already conditioned on the number of people in the world. So I think anthropic principles aren’t directly relevant for the argument I’ve made, though obviously they are relevant more generally.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-13T00:49:21.255Z · EA · GW

I don't think I agree with this, unless one is able to make a comparative claim about the importance (from a longtermist perspective) of these events relative to future events' importance - which is exactly what I'm questioning.

I do think that weighting earlier generations more heavily is correct, though; I don't feel that much turns on whether one construes this as prior choice or an update from one's prior.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-13T00:44:49.916Z · EA · GW

Given this, if one had a hyperprior over different possible Beta distributions, shouldn't 2000 centuries with no event occurring cause one to update quite hard against the (0.5, 0.5) or (1, 1) hyperparameters, and in favour of a prior that is massively skewed towards the per-century probability of a lock-in event being very low?

(And noting that, depending on exactly how the proposition is specified, I think we can be very confident that it hasn't happened yet - e.g. if the proposition under consideration was 'a values lock-in event occurs such that everyone after this point has the same values'.)
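
As a minimal sketch of the update being gestured at, assuming a standard Beta-Binomial model (a Beta(a, b) prior over the per-century probability of a lock-in event, updated on 2000 centuries with no event observed; the model choice is mine, for illustration):

```python
def posterior_mean(a: float, b: float, centuries: int = 2000, events: int = 0) -> float:
    """Posterior mean of the per-century event probability under a Beta(a, b) prior."""
    return (a + events) / (a + b + centuries)

for a, b in [(0.5, 0.5), (1.0, 1.0)]:
    print(f"Beta({a}, {b}) prior -> posterior mean ~ {posterior_mean(a, b):.5f} per century")
# Both posteriors end up concentrated on very low per-century probabilities
# (~0.00025 and ~0.0005 respectively).
```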

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-13T00:38:50.308Z · EA · GW

Hi Toby,

Thanks so much for this very clear response, it was a very satisfying read, and there’s a lot for me to chew on. And thanks for locating the point of disagreement — prior to this post, I would have guessed that the biggest difference between me and some others was on the weight placed on the arguments for the Time of Perils and Value Lock-In views, rather than on the choice of prior. But it seems that that’s not true, and that’s very helpful to know. If so, it suggests (advertisement to the Forum!) that further work on prior-setting in EA contexts is very high-value. 

I agree with you that under uncertainty over how to set the prior, because we’re clearly so distinctive in some particular ways (namely, that we’re so early on in civilisation, that the current population is so small, etc), my choice of prior will get washed out by models on which those distinctive features are important; I characterised these as outside-view arguments, but I’d understand if someone wanted to characterise that as prior-setting instead.

I also agree that there’s a strong case for making the prior over persons (or person-years) rather than centuries. In your discussion, you go via number of persons (or person-years) per century to the comparative importance of centuries. What I’d be inclined to do is just change the claim under consideration to: “I am among the (say) 100,000 most influential people ever”. This means we still take into account the fact that, though more populous centuries are more likely to be influential, they are also harder to influence in virtue of their larger population.  If we frame the core claim in terms of being among the most influential people, rather than being at the most influential time, the core claim seems even more striking to me. (E.g. a uniform prior over the first 100 billion people would give a prior of 1 in 1 million of being in the 100,000 most influential people ever. Though of course, there would also be an extra outside-view argument for moving from this prior, which is that not many people are trying to influence the long-run future.)

However, I don’t currently feel attracted to your way of setting up the prior.  In what follows I’ll just focus on the case of a values lock-in event, and for simplicity I’ll just use the standard Laplacean prior rather than your suggestion of a Jeffreys prior. 

In significant part my lack of attraction is because the claims — that (i) there’s a point in time where almost everything about the fate of the universe gets decided; (ii) that point is basically now; (iii) almost no-one sees this apart from us (where ‘us’ is a very small fraction of the world) — seem extraordinary to me, and I feel I need extraordinary evidence in order to have high credence in them. My prior-setting discussion was one way of cashing out why these seem extraordinary. If there’s some way of setting priors such that claims (i)-(iii) aren’t so extraordinary after all, I feel like a rabbit is being pulled out of a hat. 

Then I have some specific worries about the Laplacean approach (which I *think* would apply to the Jeffreys prior, too, but I'm yet to figure out what a Fisher information matrix is, so I don't totally back myself here).

But before I mention the worries, I'll note that it seems to me that you and I are currently talking about priors over different propositions. You seem to be considering the propositions, ‘there is a lock-in event this century’ or ‘there is an extinction event this century’; I’m considering the proposition ‘I am at the most influential time ever’ or ‘I am one of the most influential people ever.’ As is well-known, when it comes to using principle-of-indifference-esque reasoning, if you use that reasoning over a number of different propositions then you can end up with inconsistent probability assignments. So, at best, one should use such reasoning in a very restricted way. 

The reason I like thinking about my proposition (‘are we at the most important time?’ or ‘are we one of the most influential people ever?’) for the restricted principle of indifference, is that:

(i) I know the frequency of occurrence of ‘most influential person’ for each possible total population of civilization (past, present and future). Namely, it occurs once out of the total population. So I can look at each possible population size for the future, look at my credence in each possible population occurring, and in each case know the frequency of being the most influential person (or, more naturally, of being among the 100,000 most influential people).

(ii) it’s the most relevant proposition for the question of what I should do. (e.g. Perhaps it’s likely that there’s a lock-in event, but we can’t do anything about it and future people could, so we should save for a later date.)

Anyway, on to my worries about the Laplacean (and Jeffreys) prior.

First, the Laplacean prior seems to get the wrong answer for lots of similar predicates. Consider the claims: “I am the most beautiful person ever” or “I am the strongest person ever”, rather than “I am the most important person ever”. If we used the Laplacean prior in the way you suggest for these claims, the first person would assign 50% credence to being the strongest person ever, even if they knew that there were probably going to be billions of people to come. This doesn’t seem right to me.

Second, it also seems very sensitive to our choice of start date. If the proposition under question is, ‘there will be a lock-in event this century’, I’d get a very different prior depending on whether I chose to begin counting from: (i) the dawn of the information age; (ii) the beginning of the industrial revolution; (iii) the start of civilisation; (iv) the origin of homo sapiens; (v) the origin of the genus homo; (vi) the origin of mammals, etc. 

Of course, the uniform prior faces something similar, but I think it handles the issue gracefully. E.g. on priors, I should think it’s 1 in 5 million that I’m the funniest person in Scotland; 1 in 65 million that I’m the funniest person in Britain; and 1 in 7.5 billion that I’m the funniest person in the world. Similarly with whether I’m the most influential person in the post-industrial era, the post-agricultural era, etc.

Third, the Laplacean prior doesn’t add up to 1 across all people. For example, suppose you’re the first person and you know that there will be 3 people. Then, on the Laplacean prior, the total probability for being the most influential person ever is ½ + ½(⅓) + ½(⅔)(¼) = ¾.  But I know that someone has to be the most influential person ever. This suggests the Laplacean prior is the wrong prior choice for the proposition I’m considering, whereas the simple frequency approach gets it right.
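
A quick check of the normalisation point, taking the three per-person terms exactly as stated above (for a world known to contain exactly 3 people) and comparing them with the simple frequency prior:

```python
laplacean = [1/2, (1/2) * (1/3), (1/2) * (2/3) * (1/4)]
uniform = [1/3, 1/3, 1/3]

print(sum(laplacean))  # 0.75 - fails to normalise, though someone must be the most influential
print(sum(uniform))    # 1.0  - the simple frequency prior sums to one
```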

So even if one feels skeptical of the uniform prior, I think the Laplacean way of prior-setting isn't a better alternative. In general: I'm sympathetic to having a model where early people are more likely to be more influential, but a model which is uniform over orders of magnitude seems too extreme to me.


(As a final thought: Doesn’t this form of prior-setting also suffer from the problem of there being too many hypotheses?  E.g. consider the propositions:

A - There will be a value lock-in event this century
B - There will be a lock-in of hedonistic utilitarian values this century
C - There will be a lock-in of preference utilitarian values this century
D - There will be a lock-in of Kantian values this century
E - There will be a lock-in of fascist values this century

On the Laplacean approach, these would all get the same probability assignment - which seems inconsistent. And then, just by stacking priors over particular lock-in events, we could conclude that it’s overwhelmingly likely that there’s some lock-in event this century. I’ve put this comment in parentheses, though, as I feel *even less* confident about my worry here than about the other worries listed.)

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-05T04:02:10.863Z · EA · GW

"The way I'd think about it is that we should be uncertain about how justifiably confident people can be that they're at the HoH. If our current credence in HoH is low, then the chance that it might be justifiably much higher in the future should be the significant consideration. At least if we put aside simulation worries, I can imagine evidence which would lead me to have high confidence that I'm at the HoH."

"E.g., the prior is (say) 1/million this decade, but if the evidence suggests it is 1%, perhaps we should drop everything to work on it, if we won't expect our credence to be this high again for another millennium."

I think if those were one's credences, what you say makes sense. But it seems hard for me to imagine a (realistic) situation where I think that there's a 1% chance of HoH this decade, but I'm confident that the chance will be much, much lower than that for all of the next 99 decades.

For what it's worth, my intuition is that pursuing a mixed strategy is best; some people aiming for impact now, in case now is a hinge, and some people aiming for impact in many many years, at some future hinge moment.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-05T03:48:33.157Z · EA · GW

"So I would say both the population and pre-emption (by earlier stabilization) factors intensely favor earlier eras in per resource hingeyness, constrained by the era having any significant lock-in opportunities and the presence of longtermists."

I think this is a really important comment; I see I didn't put these considerations into the outside-view arguments, but I should have done, as they make for powerful arguments.

The factors you mention are analogous to the parameters that go into the Ramsey model for discounting: (i) a pure rate of time preference, which can account for risk of pre-emption; (ii) a term to account for there being more (and, presumably, richer) future agents and some sort of diminishing returns as a function of how many future agents (or total resources) there are. Then, given uncertainty about these parameters, in the long run the scenarios that dominate the EV calculation are those where there’s been no pre-emption and the future population is not that high; e.g. there's been some great societal catastrophe and we're rebuilding civilization from just a few million people. If we think the inverse relationship between population size and hingeyness is very strong, then maybe we should be saving for such a possible scenario; that's the hinge moment.
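
For reference, the standard Ramsey discounting rule that this analogy draws on is the textbook formula (stated here for orientation, not something from the original comment):

```latex
\rho \;=\; \delta \;+\; \eta\, g
```

where ρ is the discount rate, δ the pure rate of time preference (playing the role of pre-emption risk in (i)), η the elasticity of marginal utility (capturing diminishing returns), and g the growth rate of consumption or resources (capturing the more numerous, richer future agents in (ii)).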

"For the later scenarios here you're dealing with much larger populations. If the plausibility of important lock-in is similar for solar colonization and intergalactic colonization eras, but the population of the latter is billions of times greater, it doesn't seem to be at all an option that it could be the most HoH period on a per resource unit basis."

I agree that other things being equal a time with a smaller population (or: smaller total resources) seems likelier to be a more influential time.  But ‘doesn't seem to be at all an option’ seems overstated to me. 

Simple case: consider a world where there just aren’t options to influence the very long-run future. (Agents can make short-run perturbations but can’t affect long-run trajectories; some sort of historical determinism is true). Then the most influential time is just when we have the best knowledge of how to turn resources into short-run utility, which is presumably far in the future. 

Or, more importantly, a world where hingeyness is essentially 0 up until a certain point far in the future. If our ability to positively influence the very long-run future were no better than that of a dart-throwing chimp until we’ve got computers the size of solar systems, then the most influential times would also involve very high populations.

More generally, per-resource hingeyness increases with:

  • Availability of pivotal moments one can influence, and their pivotality 
  • Knowledge / understanding of how to positively influence the long-run future

And hingeyness decreases with:

  • Population size
  • Level of expenditure on long-term influence
  • Chance of being pre-empted already

If knowledge or availability of pivotal moments at a time is 0, then hingeyness at the time is 0, and lower populations can’t outweigh that.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-05T03:31:43.900Z · EA · GW

"I think this overstates the case. Diminishing returns to expenditures in a particular time favor a nonzero disbursement rate (e.g. with logarithmic returns to spending at a given time 10x HoH levels would drive a 10x expenditure for a given period)."

Sorry, I wasn’t meaning that we should be entirely punting to the future, and in case it’s not clear from my post, my actual all-things-considered view is that longtermist EAs should be endorsing a mixed strategy: some significant proportion of effort spent on near-term longtermist activities, and some proportion spent on long-term longtermist activities.

I do agree that, at the moment, EA is mainly investing (e.g. because of Open Phil, because of human capital, and because much actual expenditure is field-building-y, as you say). But it seems that at the moment that's primarily because of management constraints and the weirdness of borrowing-to-give (etc.), rather than a principled plan to spread giving out over some (possibly very long) time period. Certainly the vibe in the air is 'expenditure (of money or labour) now is super important, we should really be focusing on that'.

(I also don't think the diminishing-returns picture is entirely right: there are fixed costs and economies of scale when trying to do most things in the world, so I expect s-curves in general. If so, that would favour a lumpier disbursement schedule.)
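As a toy illustration of that last point (the logistic returns function and all the numbers below are made-up assumptions, just to show the direction of the effect): under s-curve returns, concentrating spending beats spreading it evenly, whereas under logarithmic returns the reverse holds.

```python
import math

def s_curve(x, k=1.0, x0=5.0):
    """Toy s-curve (logistic) returns to spending x, shifted so s_curve(0) == 0."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    return logistic(k * (x - x0)) - logistic(-k * x0)

def log_returns(x):
    """Toy diminishing (logarithmic) returns to spending x."""
    return math.log(1.0 + x)

budget, periods = 10.0, 10

for label, f in [("s-curve", s_curve), ("log", log_returns)]:
    lumpy = f(budget)                       # spend the whole budget in one period
    spread = periods * f(budget / periods)  # spread it evenly across all periods
    print(f"{label:7s}  lumpy={lumpy:.2f}  spread={spread:.2f}")
# s-curve: lumpy (~0.99) beats spread (~0.11); log: spread (~6.9) beats lumpy (~2.4).
```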

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-05T03:28:31.558Z · EA · GW
I would note that the creation of numerous simulations of HoH-type periods doesn't reduce the total impact of the actual HoH folk

Agreed that it might well be that, even though one has a very low credence in HoH, one should still act in the same way (e.g. because if one is not at HoH, one is a sim, and one's actions don't have much impact).

The sim-arg could still cause you to change your actions, though. It's somewhat plausible to me, for example, that the chance of being a sim if you're at the very most momentous time is 1000x higher than the chance of being a sim if you're at the 20th most hingey time, while the most hingey time is not 1000x more hingey than the 20th most hingey time. In that case, the hypothesis that you're at the 20th most hingey time has a greater relative importance than it had before.
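To spell that out with some made-up numbers (these are illustrative assumptions only, not figures from the post):

```python
# Illustrative assumptions: the most hingey time is 100x as hingey as the
# 20th most hingey time, but being a sim there is 1000x as likely.
hinge_most, hinge_20th = 100.0, 1.0
p_sim_20th = 0.000999
p_sim_most = 1000 * p_sim_20th                    # ~0.999

# Rough expected real-world impact of acting on each hypothesis:
# hingeyness times the probability that you're not in a simulation.
ev_most = hinge_most * (1 - p_sim_most)           # 100 * 0.001  ~ 0.1
ev_20th = hinge_20th * (1 - p_sim_20th)           # 1   * 0.999  ~ 1.0
print(ev_most, ev_20th)  # the 20th-most-hingey hypothesis now dominates
```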

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-05T03:27:35.223Z · EA · GW
I agree we are learning more about how to effectively exert resources to affect the future, but if your definition is concerned with the effect of a marginal increment of resources (rather than the total capacity of an era), then you need to wrestle with the issue of diminishing returns.

I agree with this, though if we're unsure about how many resources will be put towards longtermist causes in the future, then the expected value of saving will come to be dominated by the scenario where very few resources are devoted to it. (As happens in the Ramsey model for discounting if one includes uncertainty over future growth rates and the possibility of catastrophe.) This consideration gets stronger if one thinks the diminishing marginal returns curve is very steep.

E.g. perhaps in 150 years' time, EA and Open Phil and longtermist concern will be dust; in which case those who saved for the future (and ensured that there would be at least some sufficiently likeminded people to pass their resources onto) will have an outsized return. And perhaps returns diminish really steeply, so that what matters is guaranteeing that there are at least some longtermists around. If the outsized return in this scenario is large enough, then even a low probability of this scenario might be the dominant consideration.
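Here's a toy version of that scenario (the probabilities, resource levels and returns function are made up purely for illustration): with 1/x marginal returns, the unlikely low-resource scenario dominates the expected marginal value of saved resources.

```python
# Toy illustration: expected marginal value of an extra unit of longtermist
# resources in the future, assuming log total returns (so marginal value = 1/x).
scenarios = {
    # name: (probability, resources already devoted to longtermist causes)
    "longtermist concern flourishes": (0.90, 1_000_000.0),
    "longtermist concern nearly dies": (0.10, 10.0),
}

for name, (p, resources) in scenarios.items():
    marginal_value = 1.0 / resources  # derivative of log(resources)
    print(f"{name}: contribution to EV = {p * marginal_value:.7f}")
# The unlikely low-resource scenario contributes ~0.01 to the EV of saving;
# the likely high-resource scenario only ~0.0000009.
```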

Founding fields like AI safety or population ethics is much better on a per capita basis than expanding them by 1% after they have developed more.

Strongly agree, though by induction it seems we should think there will be more such fields in the future.

The longtermist of 1600 would indeed have mostly 'invested' in building a movement and eventually in things like financial assets when movement-building returns fell below financial returns, but they also should have made concrete interventions like causing the leveraged growth of institutions like science and the Enlightenment that looked to have a fair chance of contributing to HoH scenarios over the coming centuries, and those could have paid off.

You might think the counterfactual is unfair here, but I don't think it was epistemically accessible to someone in 1600 that contributing to science and the Enlightenment was a good way of influencing the long-run future.

This is analogous to the general point in financial markets that assets classes with systematically high returns only have them before those returns are widely agreed on to be valuable and accessible...
A world in which everyone has shared correct values and strong knowledge of how to improve things is one in which marginal longtermist resources are gilding the lily.

Though if we’re really clueless right now (perhaps not much better than the person in 1600) then perhaps that’s the best we can do.

And it would seem that the really high-value scenario is where (i) knowledge is very high but (ii) concern for the very long-run future is very low (though not nonexistent, allowing for resources to be passed on to those times).

In terms of the financial analogy, that would be like how someone with strange preferences, who gets extraordinary utility from eating bread and potatoes, gets a much higher return (when measured in utility gained) from a regular salary than other people would. 

And in general I'm more inclined to believe stories of us having extraordinary impact if that primarily results from a difference in what we care about compared with others, rather than from having greater insight.


I will say, though: the argument “we’re at an unusual period where longtermist (/impartial consequentialish) concern is very low but not nonexistent” as a reason for now being a particularly influential time seems pretty good to me, and wasn’t one that I included in my list of arguments in favour of HoH.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-05T03:18:23.386Z · EA · GW
To talk about what they would have been one needs to consider a counterfactual in which we anachronistically introduce at least some minimal version of longtermist altruism, and what one includes in that intervention will affect the result one extracts from the exercise.

I agree there's a tricky issue of how exactly one constructs the counterfactual. The definition I'm using is trying to get it as close as possible to a counterfactual we really face: how much to spend now vs how much to pass on to future altruists. I'd be interested to hear whether others would take very different approaches. It's possible that I'm trying to pack too much into the concept of 'most influential', or that this concept should be kept separate from the idea of moving resources around to different times.

I feel that involving the anachronistic insertion of a longtermist altruist into the past, if anything, makes my case harder to make, though. If I can't guarantee that the past person I'm giving resources to would even be a longtermist, that makes me less inclined to give them resources. And if I include the possibility that longtermism might be wrong and that the future person I pass resources onto will recognise this, that's (at least some) argument, to me, in favour of passing on resources. (Caveat: subjectivist meta-ethics, the possibility of future people's morality going wayward, etc.)

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-05T03:16:53.906Z · EA · GW
I would dispute this. Possibilities of AGI and global disaster were discussed by pioneers like Turing, von Neumann, Good, Minsky and others from the founding of the field of AI.

Thanks, I've updated on this since writing the post and think my original claim was at least too strong, and probably just wrong. I don't currently have a good sense of how likely I would have been, if I were living in the 1950s, to figure out AI as the thing, rather than focusing on something else that turned out not to be as important (e.g. the late-80s focus on nanotech by the Foresight Institute, a group of idealistic futurists, could be a relevant example).

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-05T03:13:26.225Z · EA · GW

Hi Carl,

Thanks so much for taking the time to write this excellent response, I really appreciate it, and you make a lot of great points.  I’ll divide up my reactions into different comments; hopefully that helps ease of reading. 

I'd like to flag that I would really like to see a more elegant term than 'hingeyness' become standard for referring to the ease of influence in different periods.

This is a good idea. Some options: influentialness; criticality; momentousness; importance; pivotality; significance. 

I’ve created a straw poll here to see as a first pass what the Forum thinks.

[Edit: Results:

  • Pivotality - 26% (17 votes)
  • Criticality - 22% (14 votes)
  • Hingeyness - 12% (8 votes)
  • Influentialness - 11% (7 votes)
  • Importance - 11% (7 votes)
  • Significance - 11% (7 votes)
  • Momentousness - 8% (5 votes)]

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-04T04:59:37.220Z · EA · GW

Thanks - I agree that this distinction is not as crisp as would be ideal. I'd see religion-spreading and movement-building as, in practice, almost always mixed strategies: in part one is giving resources to future people, and in part one is directly altering how the future goes.

But it's more like buck-passing than it is like direct work, so I think I should just not include the Axial age in the list of particularly influential times (given my definition of 'influential').

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-04T04:56:51.773Z · EA · GW

Huh, thanks for the great link! I hadn’t seen that before, and had been under the impression that though some people (e.g. Good, Turing) had suggested the intelligence explosion, no-one really worried about the risks. Looks like I was just wrong about that.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-04T04:56:20.135Z · EA · GW

Agreed, good point; I was thinking just of the case where you reduce extinction risk in one period but not in others. 

I'll note, though, that reducing extinction risk at all future times seems very hard to do. I can imagine that, if we're close to a values lock-in point, we could shift societal values such that they care about future extinction risk much more than they would otherwise have done. But if that's the pathway, then the Time of Perils view wouldn't provide an argument for HoH independent of the Value Lock-In view.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-04T04:55:57.309Z · EA · GW

Thanks, Pablo! Yeah, the reference was deliberate — I’m actually aiming to turn a revised version of this post into a book chapter in a Festschrift for Parfit. But I should have given the great man his due! And I didn’t know he’d made the ‘most important centuries’ claim in Reasons and Persons, that’s very helpful!

Comment by william_macaskill on Ask Me Anything! · 2019-08-30T17:30:57.064Z · EA · GW

I agree re value-drift and societal trajectory worries, and do think that work on AI is plausibly a good lever to positively affect them.

Comment by william_macaskill on Ask Me Anything! · 2019-08-30T17:27:38.994Z · EA · GW

One thing that moves me towards placing a lot of importance on culture and institutions: We've actually had the technology and knowledge to produce greater-than-human intelligence for thousands of years, via selective breeding programs. But it's never happened, because of taboos and incentives not working out.

Comment by william_macaskill on Ask Me Anything! · 2019-08-29T22:04:18.009Z · EA · GW

Population ethics; moral uncertainty.

I wonder if someone could go through Conceptually and make sure that all the Wikipedia entries on those topics are really good?

Comment by william_macaskill on Ask Me Anything! · 2019-08-29T22:03:58.067Z · EA · GW

I think cluelessness-ish worries. From the perspective of longtermism, for any particular action there are thousands of considerations/scenarios that point in the direction of the action being good, and thousands of considerations/scenarios that point in the direction of the action being bad. The standard response to that is that you should weigh all these and do what is in expectation best, according to your best-guess credences. But maybe we just don't have sufficiently fine-grained credences for this to work, and there are principled grounds for saying "I'm confident that this short-run good thing I do is good, and (given my not-completely-precise credences) I shouldn't think that the expected value of the more speculative stuff is either positive or negative."

Comment by william_macaskill on Ask Me Anything! · 2019-08-29T22:02:24.620Z · EA · GW

It depends on whom we point to as the experts, and I think there could be disagreement about that. If we're talking about, say, FHI folks, then I'm very clearly in the optimistic tail - others would put much higher probabilities on x-risk, on takeoff scenarios, and on our being superinfluential. But note that I think there's a strong selection effect with respect to who becomes an FHI person, so I don't simply peer-update to their views. I'd expect that, say, a panel of superforecasters, after being exposed to all the arguments, would be closer to my view than to the median FHI view. If I were wrong about that, I'd change my view. One relevant piece of evidence is that the algorithm of Metaculus (a community prediction site) puts the chance of 95%+ of people being dead by 2100 at 0.5%, which is in the same ballpark as me.

Comment by william_macaskill on Ask Me Anything! · 2019-08-29T22:00:50.764Z · EA · GW

Thanks! I’ve read and enjoyed a number of your blog posts, and often found myself in agreement. 

If you think that extinction risk this century is less than 1%, then in particular, you think that extinction risk from transformative AI is less than 1%. So, for this to be consistent, you have to believe either
a) that it's unlikely that transformative AI will be developed at all this century,
b) that transformative AI is unlikely to lead to extinction when it is developed, e.g. because it will very likely be aligned in at least a narrow sense. (I wrote up some arguments for this a while ago.)
Which of the two do you believe to what extent? For instance, if you put 10% on transformative AI this century – which is significantly more conservative than "median EA beliefs" – then you’d have to believe that the conditional probability of extinction is less than 10%. (I’m not saying I disagree – in fact, I believe something along these lines myself.)

See my comment to nonn. I want to avoid putting numbers on those beliefs so as not to anchor myself; but I find them both very likely - it's not that one is much more likely than the other. (Where 'transformative AI not developed this century' includes 'AI is not transformative', in the sense that it doesn't precipitate a new growth mode in the next century - this is certainly my mainline belief.)
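Just to spell out the arithmetic in the question above (a sketch, using the illustrative 10% figure):

```latex
% If P(extinction via AI this century) < 0.01 and
% P(extinction via AI) = P(TAI this century) x P(extinction | TAI), then
% with P(TAI this century) = 0.1 we need P(extinction | TAI) < 0.1.
P(\text{ext. via AI}) = P(\text{TAI}) \cdot P(\text{ext.}\mid\text{TAI}) < 0.01
```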


What do you think about the possibility of a growth mode change (i.e. much faster pace of economic growth and probably also social change, comparable to the industrial revolution) for reasons other than AI? I feel that this is somewhat neglected in EA – would you agree with that?

Yes, I’d agree with that. There’s a lot of debate about the causes of the industrial revolution. Very few commentators point to some technological breakthrough as the cause, so it's striking that people are inclined to point to a technological breakthrough in AI as the cause of the next growth mode transition. Instead, leading theories point to some resource overhang (‘colonies and coal’), or some innovation or change in institutions (more liberal laws and norms in England, or higher wages incentivising automation) or in culture. So perhaps there’s some novel governance system that could drive a higher growth mode, and that'll be the decisive thing.


I’d also be interested in more details on what these beliefs imply in terms of how we can improve the long-term future. I suppose you are now more sceptical about work on AI safety as the “default” long-termist intervention. But what is the alternative? Do you think we should focus on broad improvements to civilisation, such as better governance, working towards compromise and cooperation rather than conflict / war, or generally trying to make humanity more thoughtful and cautious about new technologies and the long-term future? These are uncontroversially good but not very neglected, and it seems hard to get a lot of leverage in this way. (Then again, maybe there is no way to get extraordinary leverage over the long-term future.)
Also, if we aren't at a particularly influential point in time regarding AI, then I think that expanding the moral circle, or otherwise advocating for "better" values, may be among the best things we can do. What are your thoughts on that?


I still think that working on AI is ultra-important — in one sense, whether there’s a 1% risk or a 20% risk doesn’t really matter; society is still extremely far from the optimum level of concern.  (Similarly: “Is the right carbon tax $50 or $200?” doesn’t really matter.)

For longtermist EAs more narrowly, it might matter insofar as I think it makes some other options more competitive than otherwise: especially the idea of long-term investment (whether financial or via movement-building); doing research on longtermist-relevant topics; and, like you say, perhaps pursuing broader x-risk reduction strategies like preventing war, improving governance, trying to improve incentives so that they align better with the long term, and so on.

Comment by william_macaskill on Ask Me Anything! · 2019-08-29T21:55:26.981Z · EA · GW

The general background worldview that motivates this credence is that predicting the future is very hard, and we have almost no evidence that we can do it well. (Caveat: I don't think we have great evidence that we can't do it, either.) When it comes to short-term forecasting, the best strategy is to use reference-class forecasting ('outside view' reasoning, often continuing whatever trend has occurred in the past), and make relatively small adjustments based on inside-view reasoning. In the absence of anything better, I think we should do the same for long-term forecasts too. (Zach Groff is working on a paper making this case in more depth.)

So when I look to predict the next hundred years, say, I think about how the past 100 years have gone (as well as giving consideration to how the last 1,000 and 10,000 years, etc., have gone). When you ask me about how AI will go, as a best guess I continue the centuries-long trend of automation of both physical and intellectual labour. In the particular context of AI, I continue the trend where, within a task or task-category, the jump from significantly sub-human to vastly-greater-than-human performance is rapid (on the order of years), but progress from one category of task to another (e.g. from chess to Go) goes rather slowly, as different tasks seem to differ from each other by orders of magnitude in how difficult they are to automate. So I expect progress in AI to be gradual.

Then I also expect future AI systems to be narrow rather than general. When I look at the history of tech progress, I almost always see the creation of specific, highly optimised and generally very narrow tools, and very rarely the creation of general-purpose systems like general-purpose factories. And in general, when general-purpose tools are developed, they are worse than narrow tools on any given dimension: a Swiss Army knife is a crappier knife, bottle opener, saw, etc. than any of those things individually. The current development of AI systems doesn't give me any reason to think that AI is different: systems have been very narrow to date, and when they've attempted to do things that are somewhat more general, like driving a car, progress has been slow and gradual, with major difficulties in dealing with unusual situations.

Finally, I expect the development of any new technology to be safe by default. As an intuition pump: suppose there was some new design of bomb and BAE Systems decided to build it. There were, however, some arguments that the new design was unstable, and that if designed badly the bomb would kill everyone in the company, including the designers, the CEO, the board, and all their families. These arguments had been made in the media, and the designers and the company were aware of them. What odds do you put on BAE Systems building the bomb wrong and blowing themselves up? I'd put them very low — certainly less than 1%, and probably less than 0.1%. That would be true even if BAE Systems were in a race with Lockheed Martin to be the first to market. People in general really want to avoid dying, so there's a huge incentive (a willingness-to-pay measured in the trillions of dollars for the USA alone) to ensure that AI doesn't kill everyone. And when I look at other technological developments, I see society being very risk-averse and almost never taking major risks: a combination of public opinion and regulation means that things go slow and safe; again, self-driving cars are an example.

For each of these views, I’m very happy to acknowledge that maybe AI is different. And, when we’re talking about what could be the most important event ever, the possibility of some major discontinuity is really worth guarding against. But discontinuity is not my mainline prediction of what will happen. 


(Later edit: I worry that the text above might have conveyed the idea that I'm just ignoring the Yudkowsky/Bostrom arguments, which isn't accurate. Instead, another factor in my change of view was placing less weight on the Y-B arguments because of: (i) finding the arguments that we'll get discontinuous progress in AI a lot less compelling than I used to (e.g. see here and here); (ii) trying to map the Yudkowsky/Bostrom arguments, which were made before the deep learning paradigm, onto actual progress in machine learning, and finding them hard to fit well. Going into this properly would require a lot more discussion though!)