Posts

Announcing the Future Fund's AI Worldview Prize 2022-09-23T16:28:35.127Z
What We Owe The Future is out today 2022-08-16T15:13:40.225Z
Future Fund June 2022 Update 2022-07-01T00:50:19.153Z
EA and the current funding situation 2022-05-10T02:26:06.446Z
Announcing What We Owe The Future 2022-03-30T19:37:50.554Z
The Future Fund’s Project Ideas Competition 2022-02-28T17:27:20.854Z
Announcing the Future Fund 2022-02-28T17:26:40.069Z
Are we living at the most influential time in history? 2019-09-03T04:55:31.501Z
Ask Me Anything! 2019-08-14T15:52:15.775Z
'Longtermism' 2019-07-25T21:27:11.568Z
Defining Effective Altruism 2019-07-19T10:49:54.253Z
Age-Weighted Voting 2019-07-12T15:21:31.538Z
A philosophical introduction to effective altruism 2019-07-10T13:40:19.228Z
Aid Scepticism and Effective Altruism 2019-07-03T11:34:22.630Z
Announcing the new Forethought Foundation for Global Priorities Research 2018-12-04T10:36:06.536Z
Projects I'd like to see 2017-06-12T16:19:52.178Z
Introducing CEA's Guiding Principles 2017-03-08T01:57:00.660Z
[CEA Update] Updates from January 2017 2017-02-13T20:56:21.121Z
Introducing the EA Funds 2017-02-09T00:15:29.301Z
CEA is Fundraising! (Winter 2016) 2016-12-06T16:42:36.985Z
[CEA Update] October 2016 2016-11-15T14:49:34.107Z
Setting Community Norms and Values: A response to the InIn Open Letter 2016-10-26T22:44:30.324Z
CEA Update: September 2016 2016-10-12T18:44:34.883Z
CEA Updates + August 2016 update 2016-10-12T18:41:43.964Z
Should you switch away from earning to give? Some considerations. 2016-08-25T22:37:19.691Z
Some Organisational Changes at the Centre for Effective Altruism 2016-07-23T04:29:02.144Z
Call for papers for a special journal issue on EA 2016-03-14T12:46:39.712Z
Assessing EA Outreach’s media coverage in 2014 2015-03-18T12:02:38.223Z
Announcing a forthcoming book on effective altruism 2014-03-16T13:00:35.000Z
The history of the term 'effective altruism' 2014-03-11T02:03:32.000Z
Where I'm giving and why: Will MacAskill 2013-12-30T23:00:54.000Z
What's the best domestic charity? 2013-12-10T19:16:42.000Z
Want to give feedback on a draft sample chapter for a book on effective altruism? 2013-09-22T04:00:15.000Z
How might we be wildly wrong? 2013-09-04T19:19:54.000Z
Money can buy you (a bit) of happiness 2013-07-29T04:00:59.000Z
On discount rates 2013-07-22T04:00:53.000Z
Notes on not dying 2013-07-15T04:00:05.000Z
Helping other altruists 2013-07-01T04:00:08.000Z
The rules of effective altruism. Rule #1: don’t die 2013-06-24T04:00:29.000Z
Vegetarianism, health, and promoting the right changes 2013-06-07T04:00:43.000Z
On the robustness of cost-effectiveness estimates 2013-05-24T04:00:47.000Z
Peter Singer's TED talk on effective altruism 2013-05-22T04:00:50.000Z
Getting inspired by cost-effective giving 2013-05-20T04:00:41.000Z
$1.25/day - What does that mean? 2013-05-17T04:00:25.000Z
An example of do-gooding done wrong 2013-05-15T04:00:16.000Z
What is effective altruism? 2013-05-13T04:00:31.000Z
Doing well by doing good: careers that benefit others also benefit you 2013-04-18T04:00:02.000Z
To save the world, don’t get a job at a charity; go work on Wall Street 2013-02-27T05:00:23.000Z
Some general concerns about GiveWell 2012-12-23T05:00:10.000Z
GiveWell's recommendation of GiveDirectly 2012-11-30T05:00:28.000Z

Comments

Comment by William_MacAskill on My take on What We Owe the Future · 2022-09-06T13:24:03.234Z · EA · GW

Hi Eli, thank you so much for writing this! I’m very overloaded at the moment, so I’m very sorry I’m not going to be able to engage fully with this. I just wanted to make the most important comment, though, which is a meta one: that I think this is an excellent example of constructive critical engagement — I’m glad that you’ve stated your disagreements so clearly, and I also appreciate that you reached out in advance to share a draft. 

Comment by William_MacAskill on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-30T08:12:26.020Z · EA · GW

Hi - thanks for writing this! A few things regarding your references to WWOTF:

The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172)

I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”,  “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.

this still leaves open the question as to whether happiness and happy lives can outweigh suffering and miserable lives, let alone extreme suffering and extremely bad lives.

It’s true that I don’t discuss views on which some goods/bads are lexically more important than others; I think such views have major problems, but I don’t talk about those problems in the book. (Briefly: If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion lives of bliss. The lexical view says you should do the former. This seems wrong, and I think doesn’t hold up under moral uncertainty, either. There are ways of avoiding the problem, but they run into other issues.)
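
To put that trade-off in rough expected-value terms (an illustrative formalization of the argument above, with made-up magnitudes):

\[
\mathbb{E}[\text{prevent the tiny risk}] = 10^{-36}\cdot S
\qquad\text{vs.}\qquad
\mathbb{E}[\text{create the blissful lives}] = 10^{12}\cdot B,
\]

where \(S\) is the (finite) disvalue of one suffering life and \(B\) the (finite) value of one blissful life. Any aggregative view says the right-hand option is better by an enormous margin; a lexical view, on which any amount of suffering outweighs any amount of happiness, must still rank the left-hand option first, however small the probability gets.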
 

these questions regarding tradeoffs and outweighing are not raised in MacAskill’s discussion of population ethics, despite their supreme practical significance

I talk about the asymmetry between goods and bads in chapter 9 on the value of the future in the section “The Case for Optimism”, and I actually argue that there is an asymmetry: I argue the very worst world is much more bad than the very best world is good. (A bit of philosophical pedantry partly explains why it’s in chapter 9, not 8: questions about happiness / suffering tradeoffs aren’t within the domain of population ethics, as they arise even in a fixed-population setting.)

In an earlier draft I talked at more length about relevant asymmetries (not just suffering vs happiness, but also objective goods vs objective bads, and risk-averse vs risk-seeking decision theories.) It got cut just because it was adding complexity to an already-complex chapter and didn’t change the bottom-line conclusion of that part of the discussion. The same is true for moral uncertainty - under reasonable uncertainty, you end up asymmetric on happiness vs suffering, objective goods vs objective bads, and you end up risk-averse.  Again, the thrust of the relevant discussion happens in the section “The Case for Optimism": "on a range of views in moral philosophy, we should weight one unit of pain more than one unit of pleasure... If this is correct, then in order to make the expected value of the future positive, the future not only needs to have more “goods” than “bads”; it needs to have considerably more goods than bads."

Of course,  there's only so much one can do in a single chapter of a general-audience book, and all of these issues warrant a lot more discussion than I was able to give!

Comment by William_MacAskill on What We Owe The Future is out today · 2022-08-30T08:00:30.093Z · EA · GW

It’s because we don’t get to control the price - that’s down to the publisher.

I’d love us to set up a non-profit publishing house or imprint, which would mean that we would have control over the price.

Comment by William_MacAskill on What We Owe The Future is out today · 2022-08-30T08:00:04.851Z · EA · GW

It would be a very different book if the audience had been EAs. There would have been a lot more on prioritisation (see response to Berger thread above), a lot more numbers and back-of-the-envelope calculations, a lot more on AI, a lot more deep philosophy arguments, and generally more of a willingness to engage in more speculative arguments. I’d have had more of the philosophy essay “In this chapter I argue that..” style, and I’d have put less effort into “bringing the ideas to life” via metaphors and case studies. Chapters 8 and 9, on population ethics and on the value of the future, are the chapters that are most similar to how I’d have written the book if it were written for EAs - but even so, they’d still have been pretty different. 

Comment by William_MacAskill on What We Owe The Future is out today · 2022-08-30T07:59:33.742Z · EA · GW

Thanks! 

Comment by William_MacAskill on What We Owe The Future is out today · 2022-08-30T07:59:15.222Z · EA · GW

Yes, we got extensive advice on infohazards from experts on this and other areas, including from people who have both domain expertise and thought a lot about how to communicate about key ideas publicly given info hazard concerns. We were careful not to mention anything that isn’t already in the public discourse.

Comment by William_MacAskill on What We Owe The Future is out today · 2022-08-30T07:58:28.054Z · EA · GW

To be clear - these are a part of my non-EA life, not my EA life!  I’m not sure if something similar would be a good idea to have as part of EA events - either way, I don’t think I can advise on that!

Comment by William_MacAskill on What We Owe The Future is out today · 2022-08-30T07:58:10.021Z · EA · GW

Some sorts of critical commentary are well worth engaging with (e.g. Kieran Setiya’s review of WWOTF); in other cases, where criticism is clearly misrepresentative or strawmanning, I think it’s often best not to engage.

Comment by William_MacAskill on What We Owe The Future is out today · 2022-08-30T07:57:37.532Z · EA · GW

I think it’s a combination of multiplicative factors. Very, very roughly:

  • Prescribed medication and supplements: 2x improvement
  • Understanding my own mind and adapting my life around that (including meditation, CBT, etc): 1.5x improvement 
  • Work and personal life improvements (not stressed about getting an academic job, doing rewarding work, having great friends and a great relationship): 2x improvement 

To illustrate quantitatively (with normal weekly wellbeing on a +10 to -10 scale) with pretty made-up numbers, it feels like an average week used to be like: 1 day: +4; 4 days: +1; 1 day: -1; 1 day: -6.

Now it feels like I’m much more stable, around +2 to +7. Negative days are pretty rare; removing them from my life makes a huge difference to my wellbeing.  
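
Working through those made-up numbers to make the size of the shift explicit (illustrative only; the “now” figure takes the midpoint of the +2 to +7 range, which is not a figure given above):

\[
\text{before: } \frac{1(+4) + 4(+1) + 1(-1) + 1(-6)}{7} = \frac{+1}{7} \approx +0.1 \text{ per day,}
\qquad
\text{now: roughly } +4.5 \text{ per day.}
\]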

I agree this isn’t the typical outcome for someone with depressive symptoms. I was lucky that I would continue to have high “self-efficacy” even when my mood was low, so I was able to put in effort to make my mood better. I’ve also been very lucky in other ways: I’ve been responsive to medication, and my personal and work life have both gone very well.

Comment by William_MacAskill on What We Owe The Future is out today · 2022-08-30T07:55:57.807Z · EA · GW

Huge question, which I’ll absolutely fail to do proper justice to in this reply! Very briefly, however:  

  • I think that AI itself (e.g. language models) will help a lot with AI safety.
  • In general, my perception of society is that it’s very risk-averse about new technologies, has very high safety standards, and governments are happy to slow down the introduction of new tech. 
  • I’m comparatively sceptical of ultra-fast takeoff scenarios, and of very near-term AGI (though I think both of these are possible, and that’s where much of the risk lies), which means that in combination with society’s risk-aversion, I expect a major endogenous societal response as we get closer to AGI. 
  • I haven’t been convinced of the arguments for thinking that AI alignment is extremely hard. I thought that Ben Garfinkel’s review of Joe Carlsmith’s report was good.

 That’s not to say that “it’s all fine”. But I’m certainly not on the “death with dignity” train.

Comment by William_MacAskill on What We Owe The Future is out today · 2022-08-30T07:54:28.396Z · EA · GW

Such a good question, and it’s something that I’ve really struggled with. 

Personally, I don’t see myself as a representative of anyone but myself (unless I’m explicitly talking about others’ ideas), and my identity as an academic makes me resistant to the “representative of EA” framing. I’m also worried about entrenching the idea that there is “an EA view” that one is able to represent, rather than a large collection of independent thinkers who agree on some things and disagree on others. 

But at the same time some people do see me as a representative of EA and longtermism, and I’m aware that they do, and will take what I say as representing EA and longtermism. Given the recent New Yorker and TIME profiles, and the surprising success of the book launch, that issue will probably only get stronger. 

So what should I do? Honestly, I don’t know, and I’d actually really value advice. So far I’ve been just feeling it out, and making decisions on a case-by-case basis, weighing both “saying what I think” and “representing EA / longtermism” as considerations.

Comment by William_MacAskill on What We Owe The Future is out today · 2022-08-30T07:53:25.455Z · EA · GW

I will!

It’s already coming out in Swedish, Dutch and Korean, and we're in discussion about a German translation. Given the success of the launch, I suspect we’ll get more interest in the coming months. 

The bottleneck tends not to be translators, but reputable publishers who want to publish it.

Comment by William_MacAskill on What We Owe The Future is out today · 2022-08-30T07:52:22.571Z · EA · GW

Thanks so much Alexander — it’s a good thread!

Highlighting one aspect of it: I agree that being generally silent on prioritization across recommended actions is a way in which WWOTF lacks EA-helpfulness that it could have had. This is just a matter of time and space constraints. For chapters 2-7, my main aim was to respond to someone who says, “You’re saying we can improve the long-term future?!? That’s crazy!”, where my response is “Agree it seems crazy, but actually we can improve the long-term future in lots of ways!”

I wasn’t aiming to respond to someone who says “Ok, I buy that we can improve the long-term future. But what’s top-priority?” That would take another few books to do (e.g. one book alone on the magnitude of AI x-risk), and would also be less “timeless”, as our priorities might well change over the coming years. 

On the “how much do AI and pandemics need longtermism” question - I respond to that line of thinking a bit here (also linked to in the OP).

Comment by William_MacAskill on "Long-Termism" vs. "Existential Risk" · 2022-08-16T15:10:26.338Z · EA · GW

Hey Scott - thanks for writing this, and sorry for being so slow to the party on this one!

I think you’ve raised an important question, and it’s certainly something that keeps me up at night. That said, I want to push back on the thrust of the post. Here are some responses and comments! :)

The main view I’m putting forward  in this comment is “we should promote a diversity of memes that we believe, see which ones catch on, and mould the ones that are catching on so that they are vibrant and compelling (in ways we endorse).” These memes include both “existential risk” and “longtermism”.


What is longtermism?

The quote of mine you give above comes from Spring 2020. Since then, I’ve distinguished between longtermism and strong longtermism.

My current preferred slogan definitions of each:

  • Longtermism is the view that we should do much more to protect the interests of future generations. (Alt: that protecting the interests of future generations should be a key moral priority of our time.)
  • Strong longtermism is the view that protecting the interests of future generations should be the key moral priority of our time. (That’s similar to the quote of mine you give.)

In WWOTF, I promote the weaker claim. In recent podcasts, I’ve described it something like the following (depending on how flowery I’m feeling at the time):

Longtermism is about taking seriously just how much is at stake when we look to humanity’s future. It’s about trying to figure out which challenges we face in our lifetime could be pivotal for our long-run trajectory. And it’s about ensuring that we act responsibly and carefully to navigate those challenges, steering that trajectory in a better direction, making the world better not just in the present, but also for our grandchildren, and for their grandchildren in turn.

I prefer to promote longtermism rather than strong longtermism. It’s a weaker claim, so I have a higher credence in it and feel much more robustly confident in it; at the same time, it captures almost all the value, because in the actual world longtermism and strong longtermism recommend the same actions most of the time, on the current margin.


Is existential risk a more compelling intro meme than longtermism?

My main take is: What meme is good for which people is highly dependent on the person and the context (e.g., the best framing to use in a back-and-forth conversation may be different from one in a viral tweet). This favours diversity; having a toolkit of memes that we can use depending on what’s best in context.

I think it’s very hard to reason about which memes to promote, and easy to get it wrong from the armchair, for a bunch of reasons:

  • It’s inherently unpredictable which memes do well.
  • It’s incredibly context-dependent. To figure this out, the main thing is just about gathering lots of (qualitative and quantitative) data from the demographic you’re interacting with. The memes that resonate most with Ezra Klein podcast listeners are very different from those that resonate most with Tyler Cowen podcast listeners, even though their listeners are very similar people compared to the wider world. And even with respect to one idea, subtly different framings can have radically different audience reactions. (cf. “We care about future generations” vs “We care about the unborn.”)
  • People vary a lot. Even within very similar demographics, some people can love one message while other people hate it.
  • “Curse of knowledge” - when you’re really deep down the rabbit hole in a set of ideas, it’s really hard to imagine what it’s like being first exposed to those ideas. 

Then, at least when we’re comparing (weak) longtermism with existential risk, it’s not obvious which resonates better in general. (If anything, it seems to me that (weak) longtermism does better.) A few reasons:

First, message testing from Rethink suggests that longtermism and existential risk have similarly-good reactions from the educated general public, and AI risk doesn’t do great. The three best-performing messages they tested were:

  • “The current pandemic has shown that unforeseen events can have a devastating effect. It is imperative that we prepare both for pandemics and other risks which could threaten humanity's long-term future.”
  • “In any year, the risk from any given threat might be small - but the odds that your children or grandchildren will face one of them is uncomfortably high.”
  • “It is important to ensure a good future not only for our children's children, but also the children of their children.”

So people actually respond pretty well to messages about unspecified, and not necessarily high-probability, threats to the (albeit nearer-term) future.

As terms to describe risk, “global catastrophic risk” and “long-term risk” did the best, coming out a fair amount better than “existential risk”.

They didn’t test a message about AI risk specifically.  The related thing was how much the government should prepare for different risks (pandemics, nuclear, etc), and AI came out worst among about 10 (but I don’t think that tells us very much).

Second, most media reception of WWOTF has been pretty positive so far. This is based mainly on early reviews (esp trade reviews), podcast and journalistic interviews, and the recent profiles (although the New Yorker profile was mixed). Though there definitely has been some pushback (especially on Twitter), I think it’s overall been dwarfed by positive articles. And the pushback I have gotten is on the Elon endorsement, the association between EA and billionaires, and on standard objections to utilitarianism — less so to the idea of longtermism itself.

Third, anecdotally at least, a lot of people just hate the idea of AI risk (cf Twitter), thinking of it as a tech bro issue, or doomsday cultism. This has been coming up in the twitter response to WWOTF, too, even though existential risk from AI takeover is only a small part of the book. And this is important, because I’d think that the median view among people working on x-risk (including me) is that the large majority of the risk comes from AI rather than bio or other sources. So “holy shit, x-risk” is mainly, “holy shit, AI risk”. 


Do neartermists and longtermists agree on what’s best to do?

Here I want to say: maybe. (I personally don’t think so, but YMMV.) But even if you do believe that, I think that’s a very fragile state of affairs, which could easily change as more money and attention flows into x-risk work, or if our evidence changes, and I don’t want to place a lot of weight on it.  (I do strongly believe that global catastrophic risk is enormously important even in the near term, and a sane world would be doing far, far better on it, even if everyone only cared about the next 20 years.)

More generally, I get nervous about any plan that isn’t about promoting what we fundamentally believe or care about (or a weaker version of what we fundamentally believe or care about, which is “on track” to the things we do fundamentally believe or care about).

What I mean by “promoting what we fundamentally believe or care about”:

  • Promoting goals rather than means. This means that (i) if the environment changes (e.g. some new transformative tech comes along, or the political environment changes dramatically, like war breaks out) or (ii) if our knowledge changes (e.g. about the time until transformative AIs, or about what actions to take), then we’ll take different means to pursue our goals. I think this is particularly important for something like AI, but also true more generally. 
  • Promoting the ideas that you believe most robustly - i.e. that you think you are least likely to change in the coming 10 years. Ideally these things aren’t highly conjunctive or relying on speculative premises. This makes it less likely that you will realise that you’ve been wasting your time or done active harm by promoting wrong ideas in ten years’ time. (Of course, this will vary from person to person. I think that (weak) longtermism is really robustly true and neglected, and I feel bullish about promoting it. For others, the thing that might feel really robustly true is “TAI is a BFD and we’re not thinking about it enough” - I suspect that many people feel they more robustly believe this than longtermism.) 

Examples of people promoting means rather than goals, and this going wrong:

  • “Eat less meat because it’s good for your health” -> people (potentially) eat less beef and more chicken.
  • “Stop nuclear power” (in the 70s) -> environmentalists hate nuclear power, even though it’s one of the best bits of clean tech we have. 

Examples of how this could go wrong by promoting “holy shit x-risk”:

  • We miss out on non-x-risk ways of promoting a good long-run future:
    • E.g. the risk that we solve the alignment problem but AI is used to lock in highly suboptimal values. (Personally, I think a large % of future expected value is lost in this way.)
  • We highlight the importance of AI to people who are not longtermist. They realise how transformatively good it could be for them and for the present generation (a digital immortality of bliss!) if AI is aligned, and they think the risk of misalignment is small compared to the benefits. They become AI-accelerationists (a common view among Silicon Valley types).
  • AI progress slows considerably in the next 10 years, and actually near-term x-risk doesn’t seem so high. Rather than doing whatever the next-best longtermist thing is, the people who came in via “holy shit x-risk” just do whatever instead, and the people who promoted the “holy shit x-risk” meme get a bad reputation.

So, overall my take is:

  • “Existential risk” and “longtermism” are both important ideas that deserve greater recognition in the world.
  • My inclination is to prefer promoting “longtermism” because that’s closer to what I fundamentally believe (in the sense I explain above), and it’s nonobvious to me which plays better PR-wise, and it’s probably highly context-dependent.
  • Let’s try promoting them both, and see how they each catch on.

Comment by William_MacAskill on To PA or not to PA? · 2022-04-15T15:38:36.619Z · EA · GW

I reasonably often get asked about the value of executive assistants and other support staff. My estimate is that me + executive assistant is about 110%-200% of the value of me alone. 

The range is so wide because I feel very unsure about increasing vs diminishing returns. If having an ExA is equivalent to doing (say) 20% more work in a week, does that increase the value of a week by more or less than 20%? My honest guess is that, for many sorts of work we’re doing, the “increasing returns” model is closer to the truth, because so many sorts of work have winner-takes-all or rich-get-richer effects. The most widely-read books or articles get read far more than slightly-worse books or articles; the public perception of an academic position at Oxford is much greater than a position at UCL, even though the difficulty of getting the former is not that much greater than the difficulty of getting the latter. 

(Of course, there are also diminishing returns, which makes figuring this out so hard. E.g. there are only so many podcasts one can go on, and the listenership drops off rapidly.)

I think people normally think of the value of ExAs as just saving you time: doing things like emails, scheduling, and purchasing. In my experience, this is only a small part of the value-add. The bigger value-add comes from: (i) doing things that you just didn’t have capacity to do, or helping you do things to a higher level of quality; (ii) qualitative benefits that aren’t just about saving or gaining time. On (ii): For me that’s (a) meaning that I know that important emails, tasks, etc, won’t get overlooked, which dramatically reduces my stress levels, decreases burnout risk, and means I can do more deep work rather than feeling I need to check my emails and other messages every hour; (b) helping me prioritise (especially advising on when to say no to things, and making it easier to say no to things). Depending on the person, they can also bring skills that I simply lack, like graphic design, facility with spreadsheets, or mathematical knowledge.

Some caveats:

  • It is notable to me, and an argument against my view, that some of the highest-performing people I know don’t use ExAs. I’m not quite sure what’s going on there. My guess is that if you’re a really super-productive person, the benefits I list above aren’t as great for you.
  • It’s definitely an investment. It’s a short-term cost (to hire the person, think about a new structure for your life and workflow, think about what can be delegated, think about information security and data privacy, etc) for a longer-term gain. 
  • You should in general be cautious about hiring, and that applies to this, too: once you’ve hired someone, you now have an ongoing responsibility to them and their wellbeing, you have to think about things like compensation, performance evaluation, feedback, and so on.

Comment by William_MacAskill on Announcing What The Future Owes Us · 2022-04-01T15:58:39.499Z · EA · GW

I love this, haha.

But, as with many things, J.S. Mill did this meme first!!! 

In the Houses of Parliament on April 17th, 1866, he gave a speech arguing that we should keep coal in the ground (!!). As part of that speech, he said:
 

I beg permission to press upon the House the duty of taking these things into serious consideration, in the name of that dutiful concern for posterity [...] There are many persons in the world, and there may possibly be some in this House, though I should be sorry to think so, who are not unwilling to ask themselves, in the words of the old jest, "Why should we sacrifice anything for posterity; what has posterity done for us?"

They think that posterity has done nothing for them: but that is a great mistake. Whatever has been done for mankind by the idea of posterity; whatever has been done for mankind by philanthropic concern for posterity, by a conscientious sense of duty to posterity [...] all this we owe to posterity, and all this it is our duty to the best of our limited ability to repay.

all great deeds [and] all [of] culture itself [...] all this is ours because those who preceded us have cared, and have taken thought, for posterity [...] Not owe anything to posterity, Sir! We owe to it Bacon, and Newton, and Locke, and Bentham; aye, and Shakespeare, and Milton, and Wordsworth.

Huge H/T to Tom Moynihan for sending this to me back in December. Interestingly, in the 1860s there seems to have been a bit of a wave of longtermist thought among the utilitarians, though their empirical views about the amount of available coal were way off.

Comment by William_MacAskill on Announcing What We Owe The Future · 2022-03-31T18:43:15.717Z · EA · GW

Yeah I thought about this a lot, but I strongly prefer audiobooks to be read by the author, and anecdotally other people do, too. I didn't read DGB (to save time), and regretted that decision.

Comment by William_MacAskill on Democratising Risk - or how EA deals with critics · 2021-12-29T21:37:48.902Z · EA · GW

Hi - Thanks so much for writing this. I'm on holiday at the moment so have only been able to quickly skim your post and paper. But, having got the gist, I just wanted to say:
(i) It really pains me to hear that you lost time and energy as a result of people discouraging you from publishing the paper, or that you had to worry over funding on the basis of this. I'm sorry you had to go through that. 
(ii) Personally, I'm  excited to fund or otherwise encourage engaged and in-depth "red team" critical work on either (a) the ideas of EA, longtermism or strong longtermism, or (b) what practical implications have been taken to follow from EA, longtermism, or strong longtermism.  If anyone reading this comment  would like funding (or other ways of making their life easier) to do (a) or (b)-type work, or if you know of people in that position, please let me know at will@effectivealtruism.org.  I'll try to consider any suggestions, or put the suggestions in front of others to consider, by the end of  January. 

Comment by William_MacAskill on Towards a Weaker Longtermism · 2021-08-21T08:42:32.423Z · EA · GW

There are definitely some people who are fanatical strong longtermists, but a lot of people who are made out to be such treat it as an important consideration but not one held with certainty or overwhelming dominance over all other moral frames  and considerations. In my experience one cause of this is that if you write about implications within a particular worldview people assume you place 100% weight on it, when the correlation is a lot less than 1. 

 

I agree with this, and the example of Astronomical Waste is particularly notable. (As I understand his views, Bostrom isn't even a consequentialist!). This is also true for me with respect to the CFSL paper, and to an even greater degree for Hilary: she really doesn't know whether she buys strong longtermism; her views are very sensitive to current facts about how much we can reduce extinction risk  with a given unit of resources.

The language-game of 'writing a philosophy article' is very different than 'stating your exact views on a topic' (the former is more about making a clear and forceful argument for a particular view, or particular implication of a view someone might have, and much less about conveying every nuance, piece of uncertainty, or in-practice constraints) and once philosophy articles get read more widely, that can cause confusion. Hilary and I didn't expect our paper to get read so widely - it's really targeted at academic philosophers. 

Hilary is on holiday, but I've  suggested we make some revisions to the language in the paper so that it's a bit clearer to people what's going on. This would mainly be changing  phrases like 'defend strong longtermism' to 'explore the case for strong longtermism', which I think more accurately represents what's actually going on in the paper.

Comment by William_MacAskill on Towards a Weaker Longtermism · 2021-08-21T08:26:29.033Z · EA · GW

I'm also not defending or promoting strong longtermism in my next book.  I defend (non-strong) longtermism, and the  definition I use is: "longtermism is the view that positively influencing the longterm future is among the key moral priorities of our time." I agree with Toby on the analogy to environmentalism.

(The definition I use of strong longtermism is that it's the view that positively influencing the longterm future is the moral priority of our time.)

Comment by William_MacAskill on Gordon Irlam: an effective altruist ahead of his time · 2020-12-18T13:48:13.364Z · EA · GW

I agree that Gordon deserves great praise and recognition! 

One clarification: My discussion of Zhdanov was based on Gordon's work: he volunteered for GWWC in the early days, and cross-posted about Zhdanov on the 80k blog. In DGB, I failed to  cite him, which was a major oversight on my part, and I feel really bad about that. (I've apologized to him about this.)  So that discussion shouldn't be seen as independent convergence. 

Comment by William_MacAskill on Thoughts on whether we're living at the most influential time in history · 2020-11-12T20:09:24.555Z · EA · GW

Thanks Greg  - I asked and it turned out I had one remaining day to make edits to the paper, so I've made some minor ones in a direction you'd like, though I'm sure they won't be sufficient to satisfy you. 

Going to have to get back on with other work at this point, but I think your  arguments are important, though the 'bait and switch' doesn't seem totally fair - e.g. the update towards living in a simulation only works when you appreciate the improbability of living on a single planet.

Comment by William_MacAskill on Thoughts on whether we're living at the most influential time in history · 2020-11-10T10:54:22.851Z · EA · GW

Thanks for this, Greg.

"But what is your posterior? Like Buck, I'm unclear whether your view is the central estimate should be (e.g.) 0.1% or 1 / 1 million."

I'm surprised this wasn't clear to you, which has made me think I've done a bad job of expressing myself.  

It's the former, and  for the reason of your explanation  (2): us being early, being on a single planet, being at such a high rate of economic growth, should collectively give us an enormous update. In the  blog post I describe what I call the outside-view arguments, including that we're very early on, and say: "My view is that, in the aggregate, these outside-view arguments should substantially update one from one’s prior towards HoH, but not all the way to significant credence in HoH.[3]
[3] Quantitatively: These considerations push me to put my posterior on HoH into something like the [1%, 0.1%] interval. But this credence interval feels very made-up and very unstable."


I'm going to think more about your claim that in the article I'm 'hiding the ball'. I say in the introduction that "there are some strong arguments for thinking that this century might be unusually influential",  discuss the arguments  that I think really should massively update us in section 5 of the article, and in that context I say "We have seen that there are some compelling arguments for thinking that the present time is unusually influential. In particular, we are growing very rapidly, and civilisation today is still small compared to its potential future size, so any given unit of resources is a comparatively large fraction of the whole. I believe these arguments give us reason to think that the most influential people may well live within the next few thousand years."   Then in the conclusion I say: "There are some good arguments for thinking that our time is very unusual, if we are at the start of a very long-lived civilisation: the fact that we are so early on, that we live on a single planet, and that we are at a period of rapid economic and technological progress, are all ways in which the current time is very distinctive, and therefore are reasons why we may be highly influential too." That seemed clear to me, but I should judge clarity by how  readers interpret what I've written. 

Comment by William_MacAskill on Thoughts on whether we're living at the most influential time in history · 2020-11-10T10:43:14.646Z · EA · GW

Actually, rereading my post I realize I had already made an edit similar to the one you suggest  (though not linking to the article which hadn't been finished) back in March 2020:

"[Later Edit (Mar 2020): The way I state the choice of prior in the text above was mistaken, and therefore caused some confusion. The way I should have stated the prior choice, to represent what I was thinking of, is as follows:

The prior probability of us living in the most influential century, conditional on Earth-originating civilization lasting for n centuries, is 1/n.

The unconditional prior probability over whether this is the most influential century would then depend on one's priors over how long Earth-originating civilization will last for. However, for the purpose of this discussion we can focus on just the claim that we are at the most influential century AND that we have an enormous future ahead of us. If the Value Lock-In or Time of Perils views are true, then we should assign a significant probability to that claim. (i.e. they are claiming that, if we act wisely this century, then this conjunctive claim is probably true.) So that's the claim we can focus our discussion on.

It's worth noting that my proposal follows from the Self-Sampling Assumption, which is roughly (as stated by Teru Thomas in 'Self-location and objective chance' (ms)): "A rational agent’s priors locate him uniformly at random within each possible world." I believe that SSA is widely held: the key question in the anthropic reasoning literature is whether it should be supplemented with the self-indication assumption (giving greater prior probability mass to worlds with large populations). But we don't need to debate SIA in this discussion, because we can simply assume some prior probability distribution over the size of the total population - the question of whether we're at the most influential time does not require us to get into debates over anthropics.]"
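
Spelling out that prior choice in symbols (the conditional form is exactly the statement above; the unconditional form just applies the law of total probability over one's credences about how long civilization lasts):

\[
P(\text{most influential century} \mid \text{civilization lasts } n \text{ centuries}) = \frac{1}{n},
\qquad
P(\text{most influential century}) = \sum_{n} P(\text{lasts } n \text{ centuries}) \cdot \frac{1}{n}.
\]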

Comment by William_MacAskill on Thoughts on whether we're living at the most influential time in history · 2020-11-09T17:13:44.443Z · EA · GW

Thanks, Greg.  I really wasn't meaning to come across as super confident in a particular posterior (rather than giving an indicative number for a central estimate), so I'm sorry if I did.


"It seems more reasonable to say 'our' prior is rather some mixed gestalt on considering the issue as a whole, and the concern about base-rates etc. should be seen as an argument for updating this downwards, rather than a bid to set the terms of the discussion."

I agree with this (though see the discussion with Lukas for some clarification about what we're talking about when we say 'priors', i.e. are we building the fact that we're early into our priors or not).

Comment by William_MacAskill on Thoughts on whether we're living at the most influential time in history · 2020-11-09T17:04:27.894Z · EA · GW

Richard’s response is about right. My prior with respect to influentialness is such that either: x-risk is almost surely zero; or we are almost surely not going to have a long future; or x-risk is higher now than it will be in the future but harder to prevent than it will be in the future; or in the future there will be non-x-risk-mediated ways of affecting similarly enormous amounts of value; or the idea that most of the value is in the future is false.

I do think we should update away from those priors, and I think that update is sufficient to make the case for longtermism. I agree that the location in time that we find ourselves in (what I call ‘outside-view arguments’ in my original post) is sufficient for a very large update.

Practically speaking, thinking through the surprisingness of being at such an influential time made me think: 

  • Maybe I was asymmetrically assessing evidence about how high x-risk is this century. I think that’s right; e.g. I now don’t think that x-risk from nuclear war is as high as 0.1% this century, and I think that longtermist EAs have sometimes overstated the case in favour.
  • If we think that there’s high existential risk from, say, war, we should (by default) think that such high risk will continue into the future. 
  • It’s more likely that we’re in a simulation

It also made me take more seriously the thoughts that in the future there might be non-extinction-risk mechanisms for producing comparably enormous amounts of (expected) value, and that maybe there’s some crucial consideration(s) that we’re currently missing such that our actions today are low-expected-value compared to actions in the future.

Comment by William_MacAskill on Thoughts on whether we're living at the most influential time in history · 2020-11-09T16:06:59.804Z · EA · GW

"Only using a single, simple function for something so complicated seems overconfident to me. And any mix of functions where one of them assigns decent probability to early people being the most influential is enough that it's not super unlikely that early people are the most influential."

I strongly agree with this. The fact that under a mix of  distributions, it becomes not super unlikely that early people are the most influential, is really important and was somewhat buried in the original comments-discussion. 

And then we're also very distinctive in other ways: being on one planet, being at such a high-growth period, etc. 

Comment by William_MacAskill on Thoughts on whether we're living at the most influential time in history · 2020-11-09T15:59:10.273Z · EA · GW

Thanks, I agree that this is  key. My thoughts: 

  • I agree that our earliness gives a dramatic update in favor of us being influential. I don't have a stable view on the magnitude of that. 
  • I'm not convinced that the negative exponential form of Toby's distribution is the right one, but I don't have any better suggestions 
  • Like Lukas, I think that Toby's distribution gives too much weight to early people, so the update I would make is less dramatic than Toby's
  • Seeing as Toby's prior is quite sensitive to choice of reference-class, I would want to choose the reference class of all observer-moments, where an observer is a conscious being. This means we're not as early as we would say if we used the distribution of Homo sapiens, or of hominids. I haven't thought about what exactly that means, though my intuition is that it means the update isn't nearly as big.    

So I guess the answer to your question is 'no': our earliness is an enormous update, but not as big as Toby would suggest.

Comment by William_MacAskill on Thoughts on whether we're living at the most influential time in history · 2020-11-09T15:51:55.513Z · EA · GW

"If we're doing things right, it shouldn't matter whether we're building earliness into our prior or updating on the basis of earliness."

Thanks, Lukas, I thought this was very clear and exactly right. 

"So now we've switched over to instead making a guess about P(X in E | X in H), i.e. the probability that one of the 1e10 most influential people also is one of the 1e11 earliest people, and dividing by 10. That doesn't seem much easier than making a guess about P(X in H | X in E), and it's not obvious whether our intuitions here would lead us to expect more or less influentialness."

That's interesting, thank you - this statement of the debate has helped clarify things for me.  It does seem to me that doing the update -  going via P(X in E | X in H) rather than directly trying to assess P(X in H | X in E)  - is helpful, but I'd understand the position of someone who wanted just to assess P(X in H | X in E) directly. 
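
In symbols, using the 1e10 and 1e11 figures from the quoted passage and writing N for the total number of people who will ever live, the relationship between the two quantities is just Bayes' rule under the uniform (self-sampling) prior:

\[
P(X \in H \mid X \in E)
= P(X \in E \mid X \in H)\,\frac{P(X \in H)}{P(X \in E)}
= P(X \in E \mid X \in H)\cdot\frac{10^{10}/N}{10^{11}/N}
= \frac{P(X \in E \mid X \in H)}{10},
\]

so the "divide by 10" comes from the ratio of the two reference-class sizes, and N cancels out.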

I think it's helpful to assess P(X in E | X in H) because it's not totally obvious how one should update on the basis of earliness. The arrow of causality and the possibility of lock-in over time definitely give reasons in favor of influential people being earlier. But there's still the big question of how great an update that should be. And the cumulative nature of knowledge and understanding gives reasons in favor of thinking that later people are more likely to be more influential.

This seems important to me because, for someone claiming that we should think that we're at the HoH, the update on the basis of earliness is doing much more work than updates on the basis of, say, familiar arguments about when AGI is coming and what will happen when it does.  To me at least, that's a striking fact and wouldn't have been obvious before I started thinking about these things.

Comment by William_MacAskill on Thoughts on whether we're living at the most influential time in history · 2020-11-09T15:24:42.889Z · EA · GW

This comment of mine in particular seems to have been downvoted. If anyone were willing, I'd be interested to understand why: is that because (i) the tone is off (seemed too combative?); (ii) the arguments themselves are weak; (iii) it wasn't clear what I'm saying; (iv) it wasn't engaging with Buck's argument; (v) other?

Comment by William_MacAskill on Thoughts on whether we're living at the most influential time in history · 2020-11-09T15:19:19.567Z · EA · GW

Yeah, I do think the priors-based argument given in the post  was  poorly stated, and therefore led to  unnecessary confusion. Your suggestion  is very reasonable, and I've now edited the post.

Comment by William_MacAskill on Thoughts on whether we're living at the most influential time in history · 2020-11-04T16:00:41.863Z · EA · GW

Comment (5/5)

Smaller comments 

  • I agree that one way you can avoid thinking we’re astronomically influential is by believing the future is short, such as by believing you’re in a simulation, and I discuss that in the blog post at some length. But, given that there are quite a number of ways in which we could fail to be at the most influential time (perhaps right now we can do comparatively little to influence the long-term, perhaps we’re too lacking in knowledge to pick the right interventions wisely, perhaps our values are misguided, perhaps longtermism is false, etc), it seems strange to put almost all of the weight on one of those ways, rather than give some weight to many different explanations. 
  • “It’s not clear why you’d think that the evidence for x-risk is strong enough to think we’re one-in-a-million, but not stronger than that.” This seems pretty strange as an argument to me. Being one-in-a-thousand is a thousand times less likely than being one-in-a-million, so of course if you think the evidence pushes you to thinking that you’re one-in-a-million, it needn’t push you all the way to thinking that you’re one-in-a-thousand. This seems important to me. Yes, you can give me arguments for thinking that we’re (in expectation at least) at an enormously influential time - as I say in the blog post and the comments, I endorse those arguments! I think we should update massively away from our prior, in particular on the basis of the current rate of economic growth. But for direct philanthropy to beat patient philanthropy, being at a hugely influential time isn’t enough. Even if this year is hugely influential, next year might be even more influential again; even if this century is hugely influential, next century might be more influential again. And if that’s true then - as far as the consideration of wanting to spend our philanthropy at the most influential times goes - we have a reason for saving rather than donating right now. 
  • You link to the idea that the Toba catastrophe was a bottleneck for human populations. Though I agree that we used to be more at-risk from natural catastrophes than we are today, more recent science has cast doubt on that particular hypothesis. From The Precipice: “the “Toba catastrophe hypothesis” was popularized by Ambrose (1998). Williams (2012) argues that imprecision in our current archeological, genetic and paleoclimatological techniques makes it difficult to establish or falsify the hypothesis. See Yost et al. (2018) for a critical review of the evidence. One key uncertainty is that genetic bottlenecks could be caused by founder effects related to population dispersal, as opposed to dramatic population declines.”
    • Ambrose, S. H. (1998). “Late Pleistocene Human Population Bottlenecks, Volcanic Winter, and Differentiation of Modern Humans.” Journal of Human Evolution, 34(6), 623–51
    • Williams, M. (2012). “Did the 73 ka Toba Super-Eruption have an Enduring Effect? Insights from Genetics, Prehistoric Archaeology, Pollen Analysis, Stable Isotope Geochemistry, Geomorphology, Ice Cores, and Climate Models.” Quaternary International, 269, 87–93.
    • Yost, C. L., Jackson, L. J., Stone, J. R., and Cohen, A. S. (2018). “Subdecadal Phytolith and Charcoal Records from Lake Malawi, East Africa, Imply Minimal Effects on Human Evolution from the ∼74 ka Toba Supereruption.” Journal of Human Evolution, 116, 75–94.
Comment by William_MacAskill on Thoughts on whether we're living at the most influential time in history · 2020-11-04T15:59:15.166Z · EA · GW

(Comment 4/5) 

The argument against patient philanthropy

“I sometimes hear the outside view argument used as an argument for patient philanthropy, which it in fact is not.”

I don’t think this works quite in the way you think it does.

It is true that, in a similar vein to the arguments I give against being at the most influential time (where ‘influential’ is a technical term, excluding investing opportunities), you can give an outside-view argument against now being the time at which you can do the most good tout court. As a matter of fact, I believe that’s true: we’re almost certainly not at the point in time, in all history, at which one can do the most good by investing a given unit of resources to donate at a later date. That time could plausibly be earlier than now, because you get greater investment returns, or plausibly later than now, because in the future we might have a better understanding of how to structure the right legal instruments, specify the constitution of one’s foundation, etc.

But this is not an argument against patient philanthropy compared to direct action. In order to think that patient philanthropy is the right approach, you do not need to make the claim that now is the time, out of all times, when patient philanthropy will do the most expected good. You just need the claim that, currently, patient philanthropy will do more good than direct philanthropy. This is a (much, much) weaker claim to make.

And, crucially, there’s an asymmetry between patient philanthropy and direct philanthropy. 

Suppose there are 70 time periods at which you could spend your philanthropic resources (every remaining year of your life, say), and that the scale of your philanthropy is small (so that diminishing returns can be ignored). Then, if the expected cost-effectiveness of the best opportunities varies substantially over time, there will be just one point in time at which your philanthropy will have the most impact, and you should try to max out your philanthropy at that time period, donating all your philanthropy at that time if you can. (Perhaps that isn’t quite possible because you are limited in how much you can take out debt against future income; but still, the number of times you will donate in your life will be small.) So, in 69 out of 70 time periods (or, even if you need to donate a few times, ~67 out of 70 time periods), you should be saving rather than donating. That’s why direct philanthropy needs to make the claim that now is the most, or at least one of the most, potentially-impactful times, out of the relevant time periods when one could donate, whereas patient philanthropy doesn’t.

Second, the inductive argument against now being the optimal time for patient philanthropy is much weaker than the inductive argument against now being the most influential time (in the technical sense of ‘influential). It’s not clear there is an inductive argument against now being the optimal time for patient philanthropy: there’s at least a plausible argument that, on average, every year the value of patient philanthropy decreases, because one loses one extra year of investment returns. Combined with the fact that one cannot affect the past (well, putting non-causal decision theories to the side ;) ), this gives an argument for thinking that now will be higher-impact for patient philanthropy than all future times.

Personally, I don’t think that argument quite works, because you can still mess up patient philanthropy, so maybe future people will do patient philanthropy better than we do. But it’s an argument that’s much more compelling in the case of patient philanthropy than it is for the influentialness of a time.

Comment by William_MacAskill on Thoughts on whether we're living at the most influential time in history · 2020-11-04T15:57:57.776Z · EA · GW

(Comment 3/5) 

Earliness

“Will’s resolution is to say that in fact, we shouldn’t expect early times in human history to be hingey, because that would violate his strong prior that any time in human history is equally likely to be hingey.”

I don’t see why you think I think this. (I also don’t know what “violating” a prior would mean.)

The situation is: I have a prior over how influential I’m likely to be. Then I wake up, find myself in the early 21st century, and make a whole bunch of updates. These include updates on the facts that: I’m on one planet, I’m at a period of unusually high economic growth and technological progress, I *seem* to be unusually early on and can’t be very confident that the future is short. So, as I say in the original post and the comments, I update (dramatically) on my estimate of my influentialness, on the basis of these considerations. But by how much? Is it a big enough update to conclude that I should be spending my philanthropy this year rather than next, or this century rather than next century? I say: no. And I haven’t seen a quantitative argument, yet, for thinking that the answer is ‘yes’, whereas the inductive argument seems to give a positive argument for thinking 'no'.

One reason for thinking that the update, on the basis of earliness, is not enough, is related to the inductive argument: that it would suggest that hunter-gatherers, or Medieval agriculturalists, could do even more direct good than we can. But that seems wrong. Imagine you can give an altruistic person at one of these times a bag of oats, or sell that bag today at market prices. Where would you do more good? The case in favour of earlier is if you think that speeding up economic growth / technological progress is so good that the greater impact you’d have at earlier times outweighs the seemingly better opportunities we have today. But I don’t think you believe that, and at least the standard EA view is that the benefits of speed-up are small compared to x-risk reduction or other proportional impacts on the value of the long-run future.

Comment by William_MacAskill on Thoughts on whether we're living at the most influential time in history · 2020-11-04T15:56:10.361Z · EA · GW

(Comment 2/5)

The outside-view argument (in response to your first argument)

In the blog post, I stated the priors-based argument quite poorly - I thought this bit wouldn’t be where the disagreement was, so I didn’t spend much time on it. How wrong I was about that! For the article version (link), I tidied it up.

The key thing is that the way I’m setting priors is as a function from populations to credences: for any property F, your prior should be such that, if there are n people in a population, the probability that you are among the m most-F people in that population is m/n.

This falls out of the self-sampling assumption, that a rational agent’s priors locate her uniformly at random within each possible world. If you reject this way of setting priors then, by modus tollens, you reject the self-sampling assumption. That’s pretty interesting if so! 

On this set-up of the argument (which is what was in my head, but which I hadn’t worked through), I don’t make any claims about how likely it is that we are part of a very long future. Only that, a priori, the probability that we’re *both* in a very large future *and* one of the most influential people ever is very low. For that reason, there aren’t any implications from that argument to claims about the magnitude of extinction risk this century. We could be comparatively un-influential in many ways: if extinction risk is high this century but continues to be high for very many centuries; if extinction risk is low this century and will be higher in coming centuries; if extinction risk is at any level and we can’t do anything about it, or we are not yet knowledgeable enough to choose actions wisely; or if longtermism is false. (Etc.)
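A toy version of that joint-probability point, with numbers that are entirely my own and purely for illustration:

```python
# Prior probability of the conjunction 'we are in a very large future AND
# among the most influential people ever', using the uniform (self-sampling) prior.
n_future_people = 1e16     # hypothetical size of a very large future
m_cutoff = 1e6             # hypothetical cut-off for 'among the most influential ever'
p_large_future = 0.1       # hypothetical prior that the future is this large

p_top_m_given_large = m_cutoff / n_future_people   # m/n = 1e-10
p_joint = p_large_future * p_top_m_given_large     # 1e-11

print(p_joint)
# Whatever the exact inputs, the conjunction gets a very low prior; the argument
# doesn't need any separate claim about the probability of a long future on its own.
```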

Separately, I still don’t see the case for building earliness into our priors, rather than updating on the basis of finding oneself seemingly-early. Building earliness into your prior means you’ve got to give up on the very-plausible-seeming self-sampling assumption; means you’ve got to treat the predicate ‘is most influential’ differently from other predicates; has technical challenges; and the case in favour seems to rely on a posteriori observations about how the world works, like those you give in your post.

Comment by William_MacAskill on Thoughts on whether we're living at the most influential time in history · 2020-11-04T15:51:24.090Z · EA · GW

(Comment 1/5)

Thanks so much for engaging with this, Buck! :)

I revised the argument of the blog post into a forthcoming article, available at my website (link). I’d encourage people to read that version rather than the blog post, if you’re only going to read one. The broad thrust is the same, but the presentation is better. 

I’ll discuss the improved form of the discussion about priors in another comment. Some other changes in the article version:

  • I frame the argument in terms of the most influential people, rather than the most influential times. It’s the more natural reference class, and is more action-relevant. 
  • I use the term ‘influential’ rather than ‘hingey’. It would be great if we could agree on terminology here; as Carl noted on my last post, ‘hingey’ could make the discussion seem unnecessarily silly.
  • I define ‘influentialness’ (aka ‘hingeyness’) in terms of ‘how much expected good you can do’, not just ‘how much expected good you can do from a longtermist perspective’. Again, that’s the more natural formulation, and, importantly, one way in which we could fail to be at the most influential time (in terms of expected good done by direct philanthropy) is if longtermism is false and, say, we only discover the arguments that demonstrate that in a few decades’ time. 
  • The paper includes a number of graphs, which I think helps make the case clearer.
  • I don’t discuss the simulation argument. (Though that's mainly for space and academic normalcy reasons - I think it's important, and discuss it in the blog post.)

Comment by William_MacAskill on How hot will it get? · 2020-04-24T10:37:36.810Z · EA · GW

Something I forgot to mention in my comments before: Peter Watson suggested to me that it's reasonably likely that estimates of climate sensitivity will be revised upwards for the next IPCC report, as the latest generation of models are running hotter. (E.g. https://www.carbonbrief.org/guest-post-why-results-from-the-next-generation-of-climate-models-matter, https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019GL085782 - "The range of ECS values across models has widened in CMIP6, particularly on the high end, and now includes nine models with values exceeding the CMIP5 maximum (Figure 1a). Specifically, the range has increased from 2.1–4.7 K in CMIP5 to 1.8–5.6 K in CMIP6.") This could drive up the probability mass above 6 degrees in your model by quite a bit, so it could be worth doing a sensitivity analysis on that.

Comment by William_MacAskill on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T15:35:31.459Z · EA · GW

How much do you worry that MIRI's default non-disclosure policy is going to hinder MIRI's ability to do good research, because it won't be able to get as much external criticism?

Comment by William_MacAskill on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T15:34:24.024Z · EA · GW

Suppose you find out that Buck-in-2040 thinks that the work you're currently doing is a big mistake (which should have been clear to you, now). What are your best guesses about what his reasons are?

Comment by William_MacAskill on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T15:33:03.058Z · EA · GW

What's the biggest misconception people have about current technical AI alignment work? What's the biggest misconception people have about MIRI?

Comment by William_MacAskill on Reality is often underpowered · 2019-10-12T10:20:32.770Z · EA · GW

Thanks Greg - I really enjoyed this post.

I don't think that this is what you're saying, but I think if someone drew the lesson from your post that, when reality is underpowered, there's no point in doing research into the question, that would be a mistake.

When I look at tiny-n sample sizes for important questions (e.g. "How have new ideas made major changes to the focus of academic economics?" or "Why have social movements collapsed in the past?"), I generally don't feel at all like I'm trying to get a p<0.05; it feels more like hypothesis generation. So when I find out that Kahneman and Tversky spent 5 years honing the article 'Prospect Theory' into a form that could be published in an economics journal, I think "wow, OK, maybe that's the sort of time investment that we should be thinking of". Or when I see social movements collapse because of in-fighting (e.g. the pre-Copenhagen UK climate movement), or romantic disputes between leaders (e.g. Objectivism), then - insofar as we just want to take all the easy wins to mitigate catastrophic risks to the EA community - I know that this risk is something to think about and focus on for EA.

For these sorts of areas, the right approach seems to be granular qualitative research - trying to really understand in depth what happened in some other circumstance, and then think through what lessons that entail for the circumstance you're interested in. I think that, as a matter of fact, EA does this quite a lot when relevant. (E.g. Grace on Szilard, or existing EA discussion of previous social movements). So I think this gives us extra reason to push against the idea that "EA-style analysis" = "quant-y RCT-esque analysis" rather than "whatever research methods are most appropriate to the field at hand". But even on qualitative research I think the "EA mindset" can be quite distinctive - certainly I think, for example, that a Bayesian-heavy approach to historical questions, often addressing counterfactual questions, and looking at those issues that are most interesting from an EA perspective (e.g. how modern-day values would be different if Christianity had never taken off), would be really quite different from almost all existing historical research.

Comment by William_MacAskill on Are we living at the most influential time in history? · 2019-09-13T19:56:39.368Z · EA · GW

Thanks! :)

Comment by William_MacAskill on Are we living at the most influential time in history? · 2019-09-13T19:51:21.656Z · EA · GW

Sorry - the 'or otherwise lost' qualifier was meant to be a catch-all for any way in which the investment could lose its value, including (bad) value drift.

I think there's a decent case for (some) EAs doing better at avoiding this than e.g. typical foundations:

  • If you have precise values (e.g. classical utilitarianism) then it's easier to transmit those values across time - you can write your values down clearly as part of the constitution of the foundation, and it's easier to find and identify younger people to take over the fund who also endorse those values. In contrast, for other foundations, the ultimate aims of the foundation are often not clear, and too dependent on a particular empirical situation (e.g. Benjamin Franklin's funds were 'to provide loans for apprentices to start their businesses' (!!)).
  • If you take a lot of time carefully choosing who your successors are (and those people take a lot of time choosing who their successors are), that helps too.

To reduce the risk of appropriation, one could also spread the funds across many different countries and across different people who share your values. (Again, this is easier if you endorse a set of values that are legible and non-idiosyncratic.)

It might still be true that the chance of the fund becoming valueless gets large over time (if, e.g. there's a 1% risk of it losing its value per year), but the size of the resources available also increases exponentially over time in those worlds where it doesn't lose its value.

There's also a caveat around the tricky question of when 'value drift' is a bad thing, rather than the future fund owners simply having a better understanding of the right thing to do than the founders did - which often seems to have been the case for long-lasting foundations.



Comment by William_MacAskill on Ask Me Anything! · 2019-09-13T01:14:45.576Z · EA · GW

I think you might be misunderstanding what I was referring to. An example of what I mean: Suppose Jane is deciding whether to work for DeepMind on the AI safety team. She’s unsure whether taking the job speeds up or slows down AI development; her credence that it speeds things up is imprecise, represented by the interval [0.4, 0.6]. She’s confident, let’s say, that speeding up AI development is bad. Because there’s some precisification of her credences on which taking the job is good, and some on which taking the job is bad, if she uses a Liberal decision rule (= it is permissible for you to perform any action that is permissible according to at least one of the credence functions in your set), it’s permissible for her to take the job or not take the job.

The issue is that, if you have imprecise credences and a Liberal decision rule, and are a longtermist, then almost all serious contenders for actions are permissible.

So the neartermist would need to have some way of saying: (i) we can carve out the definitely-good part of the action, which is better than not doing the action on all precisifications of the credence; and (ii) we can ignore the other parts of the action (e.g. the flow-through effects) that are good on some precisifications and bad on others. It seems hard to make that theoretically justified, but I think it matches how people actually think, so it at least has some common-sense motivation.

But you could do it if you could argue for a pseudodominance principle that says: "If there's some interval of time t_i over which action x does more expected good than action y on all precisifications of one's credence function, and there's no interval of time t_j at which action y does more expected good than action x on all precisifications of one's credence function, then you should choose x over y".
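Writing that principle slightly more formally (my notation, not from the comment), with \(\mathcal{C}\) the set of precisifications of one's credences and \(\mathrm{EV}_P(x, t)\) the expected good done by action x over interval t according to credence function P:

```latex
\text{Choose } x \text{ over } y \text{ if: } \quad
\exists\, t_i \;\forall P \in \mathcal{C}:\ \mathrm{EV}_P(x, t_i) > \mathrm{EV}_P(y, t_i)
\quad \text{and} \quad
\neg\,\exists\, t_j \;\forall P \in \mathcal{C}:\ \mathrm{EV}_P(y, t_j) > \mathrm{EV}_P(x, t_j).
```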


(In contrast, it seems you thought I was referring to AI vs some other putative great longtermist intervention. I agree that plausible longtermist rivals to AI and bio are thin on the ground.)

Comment by William_MacAskill on Are we living at the most influential time in history? · 2019-09-13T01:07:29.453Z · EA · GW

Thanks, William! 

Yeah, I think I messed up this bit. I should have used the harmonic mean rather than the arithmetic mean when averaging over possibilities for how many people will be in the future. Doing this brings the chance of being the most influential person ever close to the chance of being the most influential person in a small-population universe. But then we get the issue that being the most influential person ever in a small-population universe is much less important than being the most influential person in a big-population universe. And it’s only the latter that we care about.


So what I really should have said (in my too-glib argument) is: for simplicity, just assume a high-population future, since those are the action-relevant futures if you're a longtermist. Then take a uniform prior over all times (or all people) in that high-population future. So my claim is: “In the action-relevant worlds, the frequency of ‘most important time’ (or ‘most important person’) is extremely low, and so our prior should be too.”
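A toy illustration of the arithmetic-vs-harmonic point (the population sizes and the 50/50 split are my own, purely illustrative):

```python
# Chance of being the most influential person ever = E[1/N], i.e. one over the
# harmonic mean of the future population size N, not one over its arithmetic mean.
p_small, n_small = 0.5, 1e11   # hypothetical 'small-population' future
p_large, n_large = 0.5, 1e16   # hypothetical 'large-population' future

p_most_influential = p_small / n_small + p_large / n_large
print(p_most_influential)      # ~5e-12, dominated by the small-population world

print(1 / (p_small * n_small + p_large * n_large))   # ~2e-16: the (wrong) arithmetic-mean version
# The correct figure sits close to the small-population world's value, which is
# exactly the issue noted above: that world is the one longtermists care least about.
```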

Comment by William_MacAskill on Are we living at the most influential time in history? · 2019-09-13T01:05:44.562Z · EA · GW

Thanks for these links. I’m not sure if your comment was meant as a criticism of the argument, though? If so: I’m saying “the prior is low, and there is a healthy false positive rate, so don’t have a high posterior.” You’re pointing out that there’s a healthy false negative rate too — but that won’t cause me to have a high posterior?

And, if you think that every generation is increasing in influentialness, that’s a good argument for thinking that future generations will be more influential and we should therefore save.

Comment by William_MacAskill on Are we living at the most influential time in history? · 2019-09-13T01:02:40.773Z · EA · GW

There were a couple of recurring questions, so I’ve addressed them here.

What’s the point of this discussion — isn’t passing on resources to the future too hard to be worth considering? Won’t the money be stolen, or used by people with worse values?

In brief: Yes, losing what you’ve invested is a risk, but (at least for relatively small donors) it’s outweighed by investment returns. 

Longer: The concept of ‘influentialness of a time’ is the same as the cost-effectiveness (from a longtermist perspective) of the best opportunities accessible to longtermists at that time. Suppose I think that the best opportunities in, say, 100 years are as good as the best opportunities now. Then, if I have a small amount of money, I can get (say) at least a 2% return per year on those funds. But I shouldn’t think that the chance of my funds being appropriated (or otherwise lost) is as high as 2% per year. So the expected amount of good I do is greater if I save.

So if you think that hingeyness (as I’ve defined it) is about the same in 100 years as it is now, or greater, then there’s a strong case for investing for 100 years before spending the money.
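A minimal sketch of that expected-value comparison, with illustrative numbers of my own (2% returns, a 1% annual chance of losing the funds, a 100-year wait):

```python
# Risk-adjusted growth of a small donation saved for 100 years.
return_rate = 0.02   # assumed annual return (illustrative)
loss_rate = 0.01     # assumed annual chance of appropriation or other total loss
years = 100

expected_multiplier = ((1 + return_rate) * (1 - loss_rate)) ** years
print(expected_multiplier)   # ~2.65x today's resources, in expectation

# If the best opportunities in 100 years are about as cost-effective as today's,
# then on these numbers the expected good done by saving beats spending now.
```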

(Caveat that once we consider larger amounts of money, diminishing returns for expenditure becomes an issue, and chance of appropriation increases.)

What’s your view on anthropics? Isn’t that relevant here?

I’ve been trying to make claims that aren’t sensitive to tricky issues in anthropic reasoning. The claim that, if there are n people ordered in terms of some relation F (like ‘more important than’), the prior probability that you are the most-F (‘most important’) person is 1/n doesn’t distinguish between anthropic principles, because I’ve already conditioned on the number of people in the world. So I think anthropic principles aren’t directly relevant to the argument I’ve made, though obviously they are relevant more generally.

Comment by William_MacAskill on Are we living at the most influential time in history? · 2019-09-13T00:49:21.255Z · EA · GW

I don't think I agree with this, unless one is able to make a comparative claim about the importance (from a longtermist perspective) of these events relative to future events' importance - which is exactly what I'm questioning.

I do think that weighting earlier generations more heavily is correct, though; I don't feel that much turns on whether one construes this as prior choice or an update from one's prior.

Comment by William_MacAskill on Are we living at the most influential time in history? · 2019-09-13T00:44:49.916Z · EA · GW

Given this, if one had a hyperprior over different possible Beta distributions, shouldn't 2,000 centuries of no event occurring cause one to update quite hard against the (0.5, 0.5) or (1, 1) hyperparameters, and in favour of a prior that is massively skewed towards the per-century probability of a lock-in event being very low?

(And noting that, depending exactly on how the proposition is specified, I think we can be very confident that it hasn't happened yet. E.g. if the proposition under consideration was 'a values lock-in event occurs such that everyone after this point has the same values'.)
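For what it's worth, here is a sketch of that update under the standard conjugate (Beta-Bernoulli) treatment — my framing; only the figure of 2,000 event-free centuries comes from the discussion above:

```python
# Posterior over the per-century probability of a lock-in event, after observing
# 2,000 centuries with no such event, for two candidate Beta priors.
centuries_without_event = 2000

for a, b in [(0.5, 0.5), (1.0, 1.0)]:
    post_a, post_b = a, b + centuries_without_event   # conjugate update on 'failures'
    posterior_mean = post_a / (post_a + post_b)
    print(f"Beta({a}, {b}) prior -> posterior mean ~ {posterior_mean:.5f}")
# ~0.00025 and ~0.0005 respectively: either prior ends up concentrated on a very
# low per-century probability, in line with the update suggested above.
```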