Posts

Are we living at the most influential time in history? 2019-09-03T04:55:31.501Z · score: 169 (75 votes)
Ask Me Anything! 2019-08-14T15:52:15.775Z · score: 134 (82 votes)
'Longtermism' 2019-07-25T21:27:11.568Z · score: 91 (44 votes)
Defining Effective Altruism 2019-07-19T10:49:54.253Z · score: 85 (47 votes)
Age-Weighted Voting 2019-07-12T15:21:31.538Z · score: 60 (44 votes)
A philosophical introduction to effective altruism 2019-07-10T13:40:19.228Z · score: 56 (25 votes)
Aid Scepticism and Effective Altruism 2019-07-03T11:34:22.630Z · score: 70 (39 votes)
Announcing the new Forethought Foundation for Global Priorities Research 2018-12-04T10:36:06.536Z · score: 62 (38 votes)
Projects I'd like to see 2017-06-12T16:19:52.178Z · score: 32 (33 votes)
Introducing CEA's Guiding Principles 2017-03-08T01:57:00.660Z · score: 41 (45 votes)
[CEA Update] Updates from January 2017 2017-02-13T20:56:21.121Z · score: 9 (9 votes)
Introducing the EA Funds 2017-02-09T00:15:29.301Z · score: 46 (46 votes)
CEA is Fundraising! (Winter 2016) 2016-12-06T16:42:36.985Z · score: 9 (11 votes)
[CEA Update] October 2016 2016-11-15T14:49:34.107Z · score: 7 (9 votes)
Setting Community Norms and Values: A response to the InIn Open Letter 2016-10-26T22:44:30.324Z · score: 35 (38 votes)
CEA Update: September 2016 2016-10-12T18:44:34.883Z · score: 7 (11 votes)
CEA Updates + August 2016 update 2016-10-12T18:41:43.964Z · score: 7 (11 votes)
Should you switch away from earning to give? Some considerations. 2016-08-25T22:37:19.691Z · score: 14 (16 votes)
Some Organisational Changes at the Centre for Effective Altruism 2016-07-23T04:29:02.144Z · score: 31 (33 votes)
Call for papers for a special journal issue on EA 2016-03-14T12:46:39.712Z · score: 9 (11 votes)
Assessing EA Outreach’s media coverage in 2014 2015-03-18T12:02:38.223Z · score: 11 (11 votes)
Announcing a forthcoming book on effective altruism 2014-03-16T13:00:35.000Z · score: 1 (1 votes)
The history of the term 'effective altruism' 2014-03-11T02:03:32.000Z · score: 20 (16 votes)
Where I'm giving and why: Will MacAskill 2013-12-30T23:00:54.000Z · score: 1 (1 votes)
What's the best domestic charity? 2013-12-10T19:16:42.000Z · score: 1 (1 votes)
Want to give feedback on a draft sample chapter for a book on effective altruism? 2013-09-22T04:00:15.000Z · score: 0 (0 votes)
How might we be wildly wrong? 2013-09-04T19:19:54.000Z · score: 1 (1 votes)
Money can buy you (a bit) of happiness 2013-07-29T04:00:59.000Z · score: 0 (0 votes)
On discount rates 2013-07-22T04:00:53.000Z · score: 0 (0 votes)
Notes on not dying 2013-07-15T04:00:05.000Z · score: 1 (1 votes)
Helping other altruists 2013-07-01T04:00:08.000Z · score: 2 (2 votes)
The rules of effective altruism. Rule #1: don’t die 2013-06-24T04:00:29.000Z · score: 3 (2 votes)
Vegetarianism, health, and promoting the right changes 2013-06-07T04:00:43.000Z · score: 0 (0 votes)
On the robustness of cost-effectiveness estimates 2013-05-24T04:00:47.000Z · score: 1 (1 votes)
Peter Singer's TED talk on effective altruism 2013-05-22T04:00:50.000Z · score: 0 (0 votes)
Getting inspired by cost-effective giving 2013-05-20T04:00:41.000Z · score: 1 (1 votes)
$1.25/day - What does that mean? 2013-05-17T04:00:25.000Z · score: 0 (0 votes)
An example of do-gooding done wrong 2013-05-15T04:00:16.000Z · score: 3 (3 votes)
What is effective altruism? 2013-05-13T04:00:31.000Z · score: 9 (9 votes)
Doing well by doing good: careers that benefit others also benefit you 2013-04-18T04:00:02.000Z · score: 0 (0 votes)
To save the world, don’t get a job at a charity; go work on Wall Street 2013-02-27T05:00:23.000Z · score: 2 (2 votes)
Some general concerns about GiveWell 2012-12-23T05:00:10.000Z · score: 0 (2 votes)
GiveWell's recommendation of GiveDirectly 2012-11-30T05:00:28.000Z · score: 1 (1 votes)
Researching what we should 2012-11-12T05:00:37.000Z · score: 0 (0 votes)
The most important unsolved problems in ethics 2012-10-15T02:28:58.000Z · score: 6 (5 votes)
How to be a high impact philosopher, part II 2012-09-27T04:00:27.000Z · score: 0 (2 votes)
How to be a high impact philosopher 2012-05-08T04:00:25.000Z · score: 0 (2 votes)
Practical ethics given moral uncertainty 2012-01-31T05:00:01.000Z · score: 2 (2 votes)
Giving isn’t demanding* 2011-11-25T05:00:04.000Z · score: 0 (0 votes)

Comments

Comment by william_macaskill on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T15:35:31.459Z · score: 13 (6 votes) · EA · GW

How much do you worry that MIRI's default non-disclosure policy is going to hinder MIRI's ability to do good research, because it won't be able to get as much external criticism?

Comment by william_macaskill on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T15:34:24.024Z · score: 15 (5 votes) · EA · GW

Suppose you find out that Buck-in-2040 thinks that the work you're currently doing is a big mistake, and that this should have been clear to you now. What are your best guesses about what his reasons are?

Comment by william_macaskill on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T15:33:03.058Z · score: 16 (6 votes) · EA · GW

What's the biggest misconception people have about current technical AI alignment work? What's the biggest misconception people have about MIRI?

Comment by william_macaskill on Reality is often underpowered · 2019-10-12T10:20:32.770Z · score: 36 (16 votes) · EA · GW

Thanks Greg - I really enjoyed this post.

I don't think this is what you're saying, but if someone drew the lesson from your post that, when reality is underpowered, there's no point in doing research on the question, that would be a mistake.

When I look at tiny-n sample sizes for important questions (e.g.: "How have new ideas made major changes to the focus of academic economics?" or "Why have social movements collapsed in the past?"), I generally don't feel at all like I'm trying to get a p<0.05; it feels more like hypothesis generation. So when I find out that Kahneman and Tversky spent 5 years honing the article 'Prospect Theory' into a form that could be published in an economics journal, I think "wow, ok, maybe that's the sort of time investment that we should be thinking of". Or when I see social movements collapse because of in-fighting (e.g. the pre-Copenhagen UK climate movement), or romantic disputes between leaders (e.g. Objectivism), then - insofar as we just want to take all the easy wins to mitigate catastrophic risks to the EA community - I know that this risk is something to think about and focus on for EA.

For these sorts of areas, the right approach seems to be granular qualitative research - trying to really understand in depth what happened in some other circumstance, and then thinking through what lessons that entails for the circumstance you're interested in. I think that, as a matter of fact, EA does this quite a lot when relevant (e.g. Grace on Szilard, or existing EA discussion of previous social movements). So I think this gives us extra reason to push against the idea that "EA-style analysis" = "quant-y RCT-esque analysis" rather than "whatever research methods are most appropriate to the field at hand". But even for qualitative research I think the "EA mindset" can be quite distinctive - certainly I think, for example, that a Bayesian-heavy approach to historical questions, often addressing counterfactual questions, and looking at those issues that are most interesting from an EA perspective (e.g. how modern-day values would be different if Christianity had never taken off), would be really quite different from almost all existing historical research.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-13T19:56:39.368Z · score: 2 (1 votes) · EA · GW

Thanks! :)

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-13T19:51:21.656Z · score: 10 (5 votes) · EA · GW

Sorry - the 'or otherwise lost' qualifier was meant to be a catch-all for any way the investment could lose its value, including (bad) value drift.

I think there's a decent case for (some) EAs doing better at avoiding this than e.g. typical foundations:

  • If you have precise values (e.g. classical utilitarianism), then it's easier to transmit those values across time - you can write your values down clearly as part of the constitution of the foundation, and it's easier to find and identify younger people to take over the fund who also endorse those values. In contrast, for other foundations the ultimate aims are often not clear, and are too dependent on a particular empirical situation (e.g. Benjamin Franklin's funds were 'to provide loans for apprentices to start their businesses' (!!)).
  • If you take a lot of time carefully choosing your successors (and those people take a lot of time choosing theirs), value drift becomes less likely.

To reduce the risk of appropriation, one could also spread the funds across many different countries and among different people who share your values. (Again, easier if you endorse a set of values that are legible and non-idiosyncratic.)

It might still be true that the chance of the fund becoming valueless gets large over time (if, e.g. there's a 1% risk of it losing its value per year), but the size of the resources available also increases exponentially over time in those worlds where it doesn't lose its value.

Caveat: there are also tricky questions about when 'value drift' is a bad thing, rather than the future fund owners simply having a better understanding of the right thing to do than the founders did - which often seems to be true for long-lasting foundations.



Comment by william_macaskill on Ask Me Anything! · 2019-09-13T01:14:45.576Z · score: 6 (4 votes) · EA · GW

I think you might be misunderstanding what I was referring to. An example of what I mean: Suppose Jane is deciding whether to work for DeepMind on the AI safety team. She's unsure whether this speeds up or slows down AI development; her credence is imprecise, represented by the interval [0.4, 0.6]. She's confident, let's say, that speeding up AI development is bad. Because there's some precisification of her credences on which taking the job is good, and some on which taking the job is bad, a Liberal decision rule (= it is permissible for you to perform any action that is permissible according to at least one of the credence functions in your set) makes it permissible for her either to take the job or not to take it.

The issue is that, if you have imprecise credences and a Liberal decision rule, and are a longtermist, then almost all serious contenders for actions are permissible.

So the neartermist would need to have some way of saying: (i) we can carve out the definitely-good part of the action, which is better than not doing the action on all precisifications of the credence; and (ii) we can ignore the other parts of the action (e.g. the flow-through effects) that are good on some precisifications and bad on others. It seems hard to make that theoretically justified, but I think it matches how people actually think, so it at least has some common-sense motivation.

But you could do it if you could argue for a pseudodominance principle that says: "If there's some interval of time t_i over which action x does more expected good than action y on all precisifications of one's credence function, and there's no interval of time t_j at which action y does more expected good than action x on all precisifications of one's credence function, then you should choose x over y".
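A compact way of writing that principle down, where C is the set of precisifications (credence functions) and EV_P(a, t) is the expected good done by action a over interval t according to P; this notation is mine, introduced just for illustration:

```latex
% Pseudodominance: choose x over y whenever
\exists\, t_i \;\forall P \in \mathcal{C}:\ \mathrm{EV}_P(x, t_i) > \mathrm{EV}_P(y, t_i)
\quad \text{and} \quad
\neg\,\exists\, t_j \;\forall P \in \mathcal{C}:\ \mathrm{EV}_P(y, t_j) > \mathrm{EV}_P(x, t_j)
```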


(In contrast, it seems you thought I was referring to AI vs some other putative great longtermist intervention. I agree that plausible longtermist rivals to AI and bio are thin on the ground.)

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-13T01:07:29.453Z · score: 4 (3 votes) · EA · GW

Thanks, William! 

Yeah, I think I messed up this bit. I should have used the harmonic mean rather than the arithmetic mean when averaging over possibilities for how many people will be in the future. Doing this brings the chance of being the most influential person ever close to the chance of being the most influential person in a small-population universe. But then we get the issue that being the most influential person ever in a small-population universe is much less important than being the most influential person in a big-population universe. And it's only the latter that we care about.
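To spell out why the harmonic mean is the relevant average (my notation, not in the original comment): if the total number of people N could take values N_k with credence p_k, then a uniform prior over people gives

```latex
\Pr(\text{I am the most influential person ever})
= \sum_k p_k \frac{1}{N_k}
= \mathbb{E}\!\left[\frac{1}{N}\right]
= \frac{1}{H},
```

where H is the credence-weighted harmonic mean of the possible population sizes. Because E[1/N] is dominated by the small-N terms, the resulting probability sits close to 1/N for the small-population worlds.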


So what I really should have said (in my too-glib argument) is: for simplicity, just assume a high-population future, since those are the action-relevant futures if you're a longtermist. Then take a uniform prior over all times (or all people) in that high-population future. So my claim is: "In the action-relevant worlds, the frequency of 'most important time' (or 'most important person') is extremely low, and so our prior should be correspondingly low."

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-13T01:05:44.562Z · score: 2 (3 votes) · EA · GW

Thanks for these links. I’m not sure if your comment was meant to be a criticism of the argument, though? If so: I’m saying “prior is low, and there is a healthy false positive rate, so don’t have high posterior.” You’re pointing out that there’s a healthy false negative rate too — but that won’t cause me to have a high posterior?

And, if you think that every generation is increasing in influentialness, that’s a good argument for thinking that future generations will be more influential and we should therefore save.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-13T01:02:40.773Z · score: 16 (5 votes) · EA · GW

There were a couple of recurring questions, so I’ve addressed them here.

What’s the point of this discussion — isn’t passing on resources to the future too hard to be worth considering? Won’t the money be stolen, or used by people with worse values?

In brief: Yes, losing what you’ve invested is a risk, but (at least for relatively small donors) it’s outweighed by investment returns. 

Longer: The concept of 'influentialness of a time' is the same as the cost-effectiveness (from a longtermist perspective) of the best opportunities accessible to longtermists at that time. Suppose I think that the best opportunities in, say, 100 years are as good as the best opportunities now. Then, if I have a small amount of money, I can get (say) at least a 2% return per year on those funds. But I shouldn't think that the chance of my funds being appropriated (or otherwise lost) is as high as 2% per year. So the expected amount of good I do is greater if I save.
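A minimal sketch of that comparison, with r the annual return and q the annual chance of the funds being appropriated or otherwise lost (symbols mine, for illustration only):

```latex
\mathbb{E}[V_t] = V_0 (1+r)^t (1-q)^t ,
\qquad \text{which grows over time iff } (1+r)(1-q) > 1
\ \Leftrightarrow\ q < \tfrac{r}{1+r} \approx r \text{ for small rates.}
```

So with a 2% return and an annual loss probability below roughly 2%, expected resources grow, and if the opportunities later are at least as good as now, saving does more expected good.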

So if you think that hingeyness (as I’ve defined it) is about the same in 100 years as it is now, or greater, then there’s a strong case for investing for 100 years before spending the money.

(Caveat that once we consider larger amounts of money, diminishing returns to expenditure become an issue, and the chance of appropriation increases.)

What’s your view on anthropics? Isn’t that relevant here?

I’ve been trying to make claims that aren’t sensitive to tricky issues in anthropic reasoning. The claim that, if there are n people ordered by some relation F (like ‘more important than’), the prior probability that you are the most-F (‘most important’) person is 1/n doesn’t distinguish between anthropic principles, because I’ve already conditioned on the number of people in the world. So I think anthropic principles aren’t directly relevant for the argument I’ve made, though obviously they are relevant more generally.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-13T00:49:21.255Z · score: 4 (3 votes) · EA · GW

I don't think I agree with this, unless one is able to make a comparative claim about the importance (from a longtermist perspective) of these events relative to future events' importance - which is exactly what I'm questioning.

I do think that weighting earlier generations more heavily is correct, though; I don't feel that much turns on whether one construes this as prior choice or an update from one's prior.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-13T00:44:49.916Z · score: 3 (2 votes) · EA · GW

Given this, if one had a hyperprior over different possible Beta distributions, shouldn't 2000 centuries of no event occurring cause one to update quite hard against the (0.5, 0.5) or (1, 1) hyperparameters, and in favour of a prior that is massively skewed towards the per-century probability of a lock-in event being very low?

(And noting that, depending exactly on how the proposition is specified, I think we can be very confident that it hasn't happened yet. E.g. if the proposition under consideration was 'a values lock-in event occurs such that everyone after this point has the same values'.)

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-13T00:38:50.308Z · score: 57 (17 votes) · EA · GW

Hi Toby,

Thanks so much for this very clear response, it was a very satisfying read, and there’s a lot for me to chew on. And thanks for locating the point of disagreement — prior to this post, I would have guessed that the biggest difference between me and some others was on the weight placed on the arguments for the Time of Perils and Value Lock-In views, rather than on the choice of prior. But it seems that that’s not true, and that’s very helpful to know. If so, it suggests (advertisement to the Forum!) that further work on prior-setting in EA contexts is very high-value. 

I agree with you that under uncertainty over how to set the prior, because we’re clearly so distinctive in some particular ways (namely, that we’re so early on in civilisation, that the current population is so small, etc), my choice of prior will get washed out by models on which those distinctive features are important; I characterised these as outside-view arguments, but I’d understand if someone wanted to characterise that as prior-setting instead.

I also agree that there’s a strong case for making the prior over persons (or person-years) rather than centuries. In your discussion, you go via number of persons (or person-years) per century to the comparative importance of centuries. What I’d be inclined to do is just change the claim under consideration to: “I am among the (say) 100,000 most influential people ever”. This means we still take into account the fact that, though more populous centuries are more likely to be influential, they are also harder to influence in virtue of their larger population.  If we frame the core claim in terms of being among the most influential people, rather than being at the most influential time, the core claim seems even more striking to me. (E.g. a uniform prior over the first 100 billion people would give a prior of 1 in 1 million of being in the 100,000 most influential people ever. Though of course, there would also be an extra outside-view argument for moving from this prior, which is that not many people are trying to influence the long-run future.)
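Just making the arithmetic in that parenthesis explicit:

```latex
\Pr\big(\text{among the } 10^5 \text{ most influential people} \,\big|\, N = 10^{11} \text{ people}\big)
= \frac{10^5}{10^{11}} = 10^{-6}.
```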

However, I don’t currently feel attracted to your way of setting up the prior.  In what follows I’ll just focus on the case of a values lock-in event, and for simplicity I’ll just use the standard Laplacean prior rather than your suggestion of a Jeffreys prior. 

In significant part my lack of attraction is because the claims — that (i) there’s a point in time where almost everything about the fate of the universe gets decided; (ii) that point is basically now; (iii) almost no-one sees this apart from us (where ‘us’ is a very small fraction of the world) — seem extraordinary to me, and I feel I need extraordinary evidence in order to have high credence in them. My prior-setting discussion was one way of cashing out why these seem extraordinary. If there’s some way of setting priors such that claims (i)-(iii) aren’t so extraordinary after all, I feel like a rabbit is being pulled out of a hat. 

Then I have some specific worries about the Laplacean approach (which I *think* would apply to the Jeffreys prior too, but I'm yet to figure out what a Fisher information matrix is, so I don't totally back myself here).

But before I mention the worries, I'll note that it seems to me that you and I are currently talking about priors over different propositions. You seem to be considering the propositions, ‘there is a lock-in event this century’ or ‘there is an extinction event this century’; I’m considering the proposition ‘I am at the most influential time ever’ or ‘I am one of the most influential people ever.’ As is well-known, when it comes to using principle-of-indifference-esque reasoning, if you use that reasoning over a number of different propositions then you can end up with inconsistent probability assignments. So, at best, one should use such reasoning in a very restricted way. 

The reason I like applying the restricted principle of indifference to my proposition (‘are we at the most important time?’ or ‘are we among the most influential people ever?’) is that:

(i) I know the frequency of occurrence of ‘most influential person’ for each possible total population of civilisation (past, present and future): namely, it occurs once out of the total population. So I can look at each possible population size for the future, look at my credence in each possible population occurring, and in each case know the frequency of being the most influential person (or, more naturally, of being among the 100,000 most influential people).

(ii) it’s the most relevant proposition for the question of what I should do. (e.g. Perhaps it’s likely that there’s a lock-in event, but we can’t do anything about it and future people could, so we should save for a later date.)

Anyway, the worries about the Laplacean (and Jeffreys) prior.

First, the Laplacean prior seems to get the wrong answer for lots of similar predicates. Consider the claims “I am the most beautiful person ever” or “I am the strongest person ever”, rather than “I am the most important person ever”. If we used the Laplacean prior in the way you suggest for these claims, the first person would assign 50% credence to being the strongest person ever, even if they knew that there were probably going to be billions of people to come. This doesn’t seem right to me.

Second, it also seems very sensitive to our choice of start date. If the proposition under question is, ‘there will be a lock-in event this century’, I’d get a very different prior depending on whether I chose to begin counting from: (i) the dawn of the information age; (ii) the beginning of the industrial revolution; (iii) the start of civilisation; (iv) the origin of homo sapiens; (v) the origin of the genus homo; (vi) the origin of mammals, etc. 

Of course, the uniform prior faces a similar issue, but I think it handles it gracefully. E.g. on priors, I should think it’s 1 in 5 million likely that I’m the funniest person in Scotland; 1 in 65 million that I’m the funniest person in Britain; and 1 in 7.5 billion that I’m the funniest person in the world. Similarly with whether I’m the most influential person in the post-industrial era, the post-agricultural era, etc.

Third, the Laplacean prior doesn’t add up to 1 across all people. For example, suppose you’re the first person and you know that there will be 3 people. Then, on the Laplacean prior, the total probability of someone being the most influential person ever is ½ + ½(⅓) + ½(⅔)(¼) = ¾. But I know that someone has to be the most influential person ever. This suggests the Laplacean prior is the wrong prior choice for the proposition I’m considering, whereas the simple frequency approach gets it right.
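A short sketch of that calculation, assuming each person applies Laplace's rule of succession having observed only 'failures' so far (no most-influential person yet); the function and setup are mine, for illustration:

```python
from fractions import Fraction

def laplace_most_influential_probs(n_people):
    """For each of n_people in birth order, the probability that they are the
    'most influential person ever', if each applies Laplace's rule of
    succession after observing only earlier failures."""
    probs = []
    p_none_so_far = Fraction(1)      # P(no earlier person was the most influential)
    for i in range(n_people):
        p_i = Fraction(1, i + 2)     # Laplace: (0 successes + 1) / (i observations + 2)
        probs.append(p_none_so_far * p_i)
        p_none_so_far *= 1 - p_i
    return probs

probs = laplace_most_influential_probs(3)
print(probs)       # [Fraction(1, 2), Fraction(1, 6), Fraction(1, 12)]
print(sum(probs))  # 3/4 -- the probabilities fail to sum to 1
```

The missing 1/4 is the probability this rule implicitly assigns to no one ever being the most influential person, which is impossible if someone must be.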

So even if one feels skeptical of the uniform prior, I think the Laplacean way of prior-setting isn't a better alternative. In general: I'm sympathetic to having a model where early people are more likely to be more influential, but a model which is uniform over orders of magnitude seems too extreme to me.


(As a final thought: Doesn’t this form of prior-setting also suffer from the problem of there being too many hypotheses?  E.g. consider the propositions:

A - There will be a value lock-in event this century
B - There will be a lock-in of hedonistic utilitarian values this century
C - There will be a lock-in of preference utilitarian values this century
D - There will be a lock-in of Kantian values this century
E - There will be a lock-in of fascist values this century

On the Laplacean approach, these would all get the same probability assignment - which seems inconsistent. And then, just by stacking priors over particular lock-in events, we can make it overwhelmingly likely that there’s some lock-in event this century. I’ve put this comment in parentheses, though, as I feel *even less* confident about my worry here than about the other worries listed above.)

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-05T04:02:10.863Z · score: 5 (4 votes) · EA · GW

The way I'd think about it is that we should be uncertain about how justifiably confident people can be that they're at the HoH. If our current credence in HoH is low, then the chance that it might be justifiably much higher in the future should be the significant consideration. At least if we put aside simulation worries, I can imagine evidence which would lead me to have high confidence that I'm at the HoH.

E.g., the prior is (say) 1/million this decade, but if the evidence suggests it is 1%, perhaps we should drop everything to work on it, if we don't expect our credence to be this high again for another millennium.

I think if those were one's credences, what you say makes sense. But it seems hard for me to imagine a (realistic) situation where I think there's a 1% chance of HoH this decade, but I'm confident that the chance will be much, much lower than that for all of the next 99 decades.

For what it's worth, my intuition is that pursuing a mixed strategy is best; some people aiming for impact now, in case now is a hinge, and some people aiming for impact in many many years, at some future hinge moment.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-05T03:48:33.157Z · score: 25 (8 votes) · EA · GW
So I would say both the population and pre-emption (by earlier stabilization) factors intensely favor earlier eras in per resource hingeyness, constrained by the era having any significant lock-in opportunities and the presence of longtermists.

I think this is a really important comment; I see I didn't put these considerations into the outside-view arguments, but I should have done, as they make for powerful arguments.

The factors you mention are analogous to the parameters that go into the Ramsey model for discounting: (i) a pure rate of time preference, which can account for the risk of pre-emption; and (ii) a term to account for there being more (and, presumably, richer) future agents, with some sort of diminishing returns as a function of how many future agents (or total resources) there are. Then, given uncertainty about these parameters, in the long run the scenarios that dominate the EV calculation are those where there’s been no pre-emption and the future population is not that high - e.g. there’s been some great societal catastrophe and we’re rebuilding civilisation from just a few million people. If we think the inverse relationship between population size and hingeyness is very strong, then maybe we should be saving for such a possible scenario; that’s the hinge moment.
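For reference, the standard Ramsey discounting rule being gestured at is (standard notation, not from the comment above):

```latex
\rho = \delta + \eta g
```

where δ is the pure rate of time preference (here doing the work of a per-period risk of pre-emption), η the elasticity of marginal utility (capturing diminishing returns as agents or resources multiply), and g the growth rate of the relevant population or resources.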

For the later scenarios here you're dealing with much larger populations. If the plausibility of important lock-in is similar for solar colonization and intergalactic colonization eras, but the population of the latter is billions of times greater, it doesn't seem to be at all an option that it could be the most HoH period on a per resource unit basis.

I agree that other things being equal a time with a smaller population (or: smaller total resources) seems likelier to be a more influential time.  But ‘doesn't seem to be at all an option’ seems overstated to me. 

Simple case: consider a world where there just aren’t options to influence the very long-run future. (Agents can make short-run perturbations but can’t affect long-run trajectories; some sort of historical determinism is true). Then the most influential time is just when we have the best knowledge of how to turn resources into short-run utility, which is presumably far in the future. 

Or, more importantly, where hingeyness is essentially 0 up until a certain point far in the future. If our ability to positively influence the very long-run future were no better than a dart-throwing chimp’s until we’ve got computers the size of solar systems, then the most influential times would also involve very high populations.

More generally, per-resource hingeyness increases with:

  • Availability of pivotal moments one can influence, and their pivotality 
  • Knowledge / understanding of how to positively influence the long-run future

And hingeyness decreases with:

  • Population size
  • Level of expenditure on long-term influence
  • Chance of being pre-empted already

If knowledge or availability of pivotal moments at a time is 0, then hingeyness at the time is 0, and lower populations can’t outweigh that.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-05T03:31:43.900Z · score: 21 (13 votes) · EA · GW
I think this overstates the case. Diminishing returns to expenditures in a particular time favor a nonzero disbursement rate (e.g. with logarithmic returns to spending at a given time 10x HoH levels would drive a 10x expenditure for a given period)

Sorry, I wasn’t meaning that we should be entirely punting to the future; in case it’s not clear from my post, my actual all-things-considered view is that longtermist EAs should be endorsing a mixed strategy, with some significant proportion of effort spent on near-term longtermist activities and some proportion spent on long-term longtermist activities.

I do agree that, at the moment, EA is mainly investing (e.g. because of Open Phil and because of human capital and because much actual expenditure is field-building-y, as you say). But it seems like at the moment that’s primarily because of management constraints and weirdness of borrowing-to-give (etc), rather than a principled plan to spread giving out over some (possibly very long) time period. Certainly the vibe in the air is ‘expenditure (of money or labour) now is super important, we should really be focusing on that’. 

(I also don’t think that diminishing returns hold across the board: there are fixed costs and economies of scale when trying to do most things in the world, so I expect s-curves in general. If so, that would favour a lumpier disbursement schedule.)

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-05T03:28:31.558Z · score: 5 (4 votes) · EA · GW
I would note that the creation of numerous simulations of HoH-type periods doesn't reduce the total impact of the actual HoH folk

Agree that it might well be that even if one has a very low credence in HoH, one should still act in the same way (e.g. because if one is not at HoH, one is a sim, and one’s actions don’t have much impact).

The sim-arg could still cause you to change your actions, though. It’s somewhat plausible to me, for example, that the chance of being a sim if you’re at the very most momentous time is 1000x higher than the chance of being a sim if you’re at the 20th most hingey time, but the most hingey time is not 1000x more hingey than the 20th most hingey time. In which case the hypothesis that you’re at the 20th most hingey time has a greater relative importance than it had before.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-05T03:27:35.223Z · score: 17 (8 votes) · EA · GW
I agree we are learning more about how to effectively exert resources to affect the future, but if your definition is concerned with the effect of a marginal increment of resources (rather than the total capacity of an era), then you need to wrestle with the issue of diminishing returns.

I agree with this, though if we’re unsure about how many resources will be put towards longtermist causes in the future, then the expected value of saving will come to be dominated by the scenarios where very few resources are devoted to them. (As happens in the Ramsey model for discounting if one includes uncertainty over future growth rates and the possibility of catastrophe.) This consideration gets stronger if one thinks the diminishing marginal returns curve is very steep.

E.g. perhaps in 150 years’ time, EA and Open Phil and longtermist concern will be dust; in which case those who saved for the future (and ensured that there would be at least some sufficiently likeminded people to pass their resources onto) will have an outsized return. And perhaps returns diminish really steeply, so that what matters is guaranteeing that there are at least some longtermists around. If the outsized return in this scenario is large enough, then even a low probability of this scenario might be the dominant consideration.

Founding fields like AI safety or population ethics is much better on a per capita basis than expanding them by 1% after they have developed more.

Strongly agree, though by induction it seems we should think there will be more such fields in the future.

The longtermist of 1600 would indeed have mostly 'invested' in building a movement and eventually in things like financial assets when movement-building returns fell below financial returns, but they also should have made concrete interventions like causing the leveraged growth of institutions like science and the Enlightenment that looked to have a fair chance of contributing to HoH scenarios over the coming centuries, and those could have paid off.

You might think the counterfactual is unfair here, but I wouldn’t regard it as accessible to someone in 1600 to know that they could make contributions to science and the Enlightenment as a good way of influencing the long-run future. 

This is analogous to the general point in financial markets that assets classes with systematically high returns only have them before those returns are widely agreed on to be valuable and accessible...
A world in which everyone has shared correct values and strong knowledge of how to improve things is one in which marginal longtermist resources are gilding the lily.

Though if we’re really clueless right now (perhaps not much better than the person in 1600) then perhaps that’s the best we can do.

And it would seem that the really high-value scenario is where (i) knowledge is very high but (ii) concern for the very long-run future is very low (but not nonexistent, allowing for resources to be passed onto those times.) 

In terms of the financial analogy, that would be like how someone with strange preferences, who gets extraordinary utility from eating bread and potatoes, gets a much higher return (when measured in utility gained) from a regular salary than other people would. 

And in general I'm more inclined to believe stories of us having extraordinary impact if that primarily results from a difference in what we care about compared with others, rather than from having greater insight.


I will say, though: the argument “we’re at an unusual period where longtermist (/impartial consequentialish) concern is very low but not nonexistent” as a reason for now being a particularly influential time seems pretty good to me, and wasn’t one that I included in my list of arguments in favour of HoH.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-05T03:18:23.386Z · score: 14 (6 votes) · EA · GW
To talk about what they would have been one needs to consider a counterfactual in which we anachronistically introduce at least some minimal version of longtermist altruism, and what one includes in that intervention will affect the result one extracts from the exercise.

I agree there’s a tricky issue of how exactly one constructs the counterfactual. The definition I’m using is trying to get it as close as possible to a counterfactual we really face: how much to spend now vs how much to pass resources onto future altruists. I’d be interested if others thought of very different approaches. It’s possible that I’m trying to pack too much into the concept of ‘most influential’, or that this concept should be kept separate from the idea of moving resources around to different times.

I feel that involving the anachronistic insertion of a longtermist altruist into the past, if anything, makes my argument harder to make, though. If I can’t guarantee that the past person I’m giving resources to would even be a longtermist, that makes me less inclined to give them resources. And if I include the possibility that longtermism might be wrong and that the future-person that I pass resources onto will recognise this, that’s (at least some) argument to me in favour of passing on resources. (Caveat subjectivist meta-ethics, possibility of future people’s morality going wayward, etc.)

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-05T03:16:53.906Z · score: 7 (6 votes) · EA · GW
I would dispute this. Possibilities of AGI and global disaster were discussed by pioneers like Turing, von Neumann, Good, Minsky and others from the founding of the field of AI.

Thanks, I’ve updated on this since writing the post and think my original claim was at least too strong, and probably just wrong. I don’t currently have a good sense of, say, if I were living in the 1950s, how likely I would be to figure out AI as the thing, rather than focus on something else that turned out not to be as important (e.g. the focus on nanotech by the Foresight Institute (a group of idealistic futurists) in the late 80s could be a relevant example).

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-05T03:13:26.225Z · score: 22 (10 votes) · EA · GW

Hi Carl,

Thanks so much for taking the time to write this excellent response, I really appreciate it, and you make a lot of great points.  I’ll divide up my reactions into different comments; hopefully that helps ease of reading. 

I'd like to flag that I would really like to see a more elegant term than 'hingeyness' become standard for referring to the ease of influence in different periods.

This is a good idea. Some options: influentialness; criticality; momentousness; importance; pivotality; significance. 

I’ve created a straw poll here to see as a first pass what the Forum thinks.

[Edit: Results:
  • Pivotality - 26% (17 votes)
  • Criticality - 22% (14 votes)
  • Hingeyness - 12% (8 votes)
  • Influentialness - 11% (7 votes)
  • Importance - 11% (7 votes)
  • Significance - 11% (7 votes)
  • Momentousness - 8% (5 votes)]

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-04T04:59:37.220Z · score: 5 (4 votes) · EA · GW

Thanks - I agree that this distinction is not as crisp as would be ideal. I’d see religion-spreading, and movement-building, as in practice almost always a mixed strategy: in part one is giving resources to future people, and in part one is also directly altering how the future goes.

But it's more like buck-passing than it is like direct work, so I think I should just not include the Axial age in the list of particularly influential times (given my definition of 'influential').

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-04T04:56:51.773Z · score: 5 (3 votes) · EA · GW

Huh, thanks for the great link! I hadn’t seen that before, and had been under the impression that though some people (e.g. Good, Turing) had suggested the intelligence explosion, no-one really worried about the risks. Looks like I was just wrong about that.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-04T04:56:20.135Z · score: 9 (7 votes) · EA · GW

Agreed, good point; I was thinking just of the case where you reduce extinction risk in one period but not in others. 

I’ll note, though, that reducing extinction risk at all future times seems very hard to do. I can imagine, if we’re close to a values lock-in point, we could shift societal values such that they care about future extinction risk much more than they would otherwise have done. But if that's the pathway, then the Time of Perils view wouldn’t provide an argument for HoH independent of the Value Lock-In view.

Comment by william_macaskill on Are we living at the most influential time in history? · 2019-09-04T04:55:57.309Z · score: 16 (7 votes) · EA · GW

Thanks, Pablo! Yeah, the reference was deliberate — I’m actually aiming to turn a revised version of this post into a book chapter in a Festschrift for Parfit. But I should have given the great man his due! And I didn’t know he’d made the ‘most important centuries’ claim in Reasons and Persons, that’s very helpful!

Comment by william_macaskill on Ask Me Anything! · 2019-08-30T17:30:57.064Z · score: 7 (5 votes) · EA · GW

I agree re value-drift and societal trajectory worries, and do think that work on AI is plausibly a good lever to positively affect them.

Comment by william_macaskill on Ask Me Anything! · 2019-08-30T17:27:38.994Z · score: 7 (3 votes) · EA · GW

One thing that moves me towards placing a lot of importance on culture and institutions: We've actually had the technology and knowledge to produce greater-than-human intelligence for thousands of years, via selective breeding programs. But it's never happened, because of taboos and incentives not working out.

Comment by william_macaskill on Ask Me Anything! · 2019-08-29T22:04:18.009Z · score: 5 (3 votes) · EA · GW

Population ethics; moral uncertainty.

I wonder if someone could go through Conceptually and make sure that all the wikipedia entries on those topics are really good?

Comment by william_macaskill on Ask Me Anything! · 2019-08-29T22:03:58.067Z · score: 6 (4 votes) · EA · GW

I think cluelessness-ish worries. From the perspective of longtermism, for any particular action, there are thousands of considerations/ scenarios that point in the direction of the action being good, and thousands of considerations/ scenarios that point in the direction of the action being bad. The standard response to that is that you should weigh all these and do what is in expectation best, according to your best-guess credences. But maybe we just don’t have sufficiently fine-grained credences for this to work, and there’s some principled grounds for saying “I’m confident that this short-run good thing I do is good, and (given my not-completely-precise credences) I shouldn’t think that the expected value of the more speculative stuff is either positive or negative.”

Comment by william_macaskill on Ask Me Anything! · 2019-08-29T22:02:24.620Z · score: 20 (6 votes) · EA · GW

It depends on who we point to as the experts, which I think there could be disagreement about. If we’re talking about, say, FHI folks, then I’m very clearly in the optimistic tail - others would put much higher probabilities on x-risk, on takeoff scenarios, and on our chance of being superinfluential. But note I think there’s a strong selection effect with respect to who becomes an FHI person, so I don’t simply peer-update to their views. I’d expect that, say, a panel of superforecasters, after being exposed to all the arguments, would be closer to my view than to the median FHI view. If I were wrong about that, I’d change my view. One relevant piece of evidence is that the Metaculus (a community prediction site) algorithm puts the chance of 95%+ of people being dead by 2100 at 0.5%, which is in the same ballpark as me.

Comment by william_macaskill on Ask Me Anything! · 2019-08-29T22:00:50.764Z · score: 21 (6 votes) · EA · GW

Thanks! I’ve read and enjoyed a number of your blog posts, and often found myself in agreement. 

If you think that extinction risk this century is less than 1%, then in particular, you think that extinction risk from transformative AI is less than 1%. So, for this to be consistent, you have to believe either
a) that it's unlikely that transformative AI will be developed at all this century,
b) that transformative AI is unlikely to lead to extinction when it is developed, e.g. because it will very likely be aligned in at least a narrow sense. (I wrote up some arguments for this a while ago.)
Which of the two do you believe to what extent? For instance, if you put 10% on transformative AI this century – which is significantly more conservative than "median EA beliefs" – then you’d have to believe that the conditional probability of extinction is less than 10%. (I’m not saying I disagree – in fact, I believe something along these lines myself.)

See my comment to nonn. I want to avoid putting numbers on those beliefs to avoid anchoring myself; but I find them both very likely - it’s not that one is much more likely than the other. (Where ‘transformative AI not developed this century’ includes ‘AI is not transformative’ in the sense that it doesn’t precipitate a new growth mode in the next century - this is certainly my mainline belief.)


What do you think about the possibility of a growth mode change (i.e. much faster pace of economic growth and probably also social change, comparable to the industrial revolution) for reasons other than AI? I feel that this is somewhat neglected in EA – would you agree with that?

Yes, I’d agree with that. There’s a lot of debate about the causes of the industrial revolution. Very few commentators point to some technological breakthrough as the cause, so it's striking that people are inclined to point to a technological breakthrough in AI as the cause of the next growth mode transition. Instead, leading theories point to some resource overhang (‘colonies and coal’), or some innovation or change in institutions (more liberal laws and norms in England, or higher wages incentivising automation) or in culture. So perhaps there’s some novel governance system that could drive a higher growth mode, and that'll be the decisive thing.


I’d also be interested in more details on what these beliefs imply in terms of how we can improve the long-term future. I suppose you are now more sceptical about work on AI safety as the “default” long-termist intervention. But what is the alternative? Do you think we should focus on broad improvements to civilisation, such as better governance, working towards compromise and cooperation rather than conflict / war, or generally trying to make humanity more thoughtful and cautious about new technologies and the long-term future? These are uncontroversially good but not very neglected, and it seems hard to get a lot of leverage in this way. (Then again, maybe there is no way to get extraordinary leverage over the long-term future.)
Also, if we aren't at a particularly influential point in time regarding AI, then I think that expanding the moral circle, or otherwise advocating for "better" values, may be among the best things we can do. What are your thoughts on that?


I still think that working on AI is ultra-important — in one sense, whether there’s a 1% risk or a 20% risk doesn’t really matter; society is still extremely far from the optimum level of concern.  (Similarly: “Is the right carbon tax $50 or $200?” doesn’t really matter.)

For longtermist EAs more narrowly it might matter insofar as I think it makes some other options more competitive than otherwise: especially the idea of long-term investment (whether financial or via movement-building); doing research on longtermist-relevant topics; and, like you say, perhaps doing broader x-risk reduction strategies like preventing war, better governance, trying to improve incentives so that they align better with the long-term, and so on.

Comment by william_macaskill on Ask Me Anything! · 2019-08-29T21:55:26.981Z · score: 18 (11 votes) · EA · GW

The general background worldview that motivates this credence is that predicting the future is very hard, and we have almost no evidence that we can do it well. (Caveat: I don’t think we have great evidence that we can’t do it, either.) When it comes to short-term forecasting, the best strategy is to use reference-class forecasting (‘outside view’ reasoning; often continuing whatever trend has occurred in the past), and to make relatively small adjustments based on inside-view reasoning. In the absence of anything better, I think we should do the same for long-term forecasts too. (Zach Groff is working on a paper making this case in more depth.)

So when I look to predict the next hundred years, say, I think about how the past 100 years has gone (as well as giving consideration to how the last 1000 years and 10,000 years (etc) have gone).  When you ask me about how AI will go, as a best guess I continue the centuries-long trend of automation of both physical and intellectual labour; in the particular context of AI I continue the trend where within a task, or task-category, the jump from significantly sub-human to vastly-greater-than-human level performance is rapid (on the order of years), but progress from one category of task to another (e.g. from chess to Go) goes rather slowly, as different tasks seem to differ from each other by orders of magnitude in terms of how difficult they are to automate. So I expect progress in AI to be gradual. 

Then I also expect future AI systems to be narrow rather than general. When I look at the history of tech progress, I almost always see the creation of specific, highly optimised and generally very narrow tools, and very rarely the creation of general-purpose systems like general-purpose factories. And in general, when general-purpose tools are developed, they are worse than narrow tools on any given dimension: a Swiss army knife is a crappier knife, bottle opener, saw, etc. than any of those things individually. The current development of AI systems doesn’t give me any reason to think that AI is different: they’ve been very narrow to date; and when they’ve attempted to do things that are somewhat more general, like driving a car, progress has been slow and gradual, suffering from major difficulties in dealing with unusual situations.

Finally, I expect the development of any new technology to be safe by default. As an intuition pump: suppose there was some new design of bomb and BAE Systems decided to build it. There were, however, some arguments that the new design was unstable, and that if designed badly the bomb would kill everyone in the company, including the designers, the CEO, the board, and all their families. These arguments have been made in the media and the designers and the companies were aware of them. What odds do you put on BAE Systems building the bomb wrong and blowing themselves up? I’d put it very low — certainly less than 1%, and probably less than 0.1%. That would be true even if BAE Systems were in a race with Lockheed Martin to be the first to market. People in general really want to avoid dying, so there’s a huge incentive (a willingness-to-pay measured in the trillions of dollars for the USA alone) to ensure that AI doesn’t kill everyone. And when I look at other technological developments I see society being very risk averse and almost never taking major risks - a combination of public opinion and regulation means that things go slow and safe; again, self-driving cars are an example.

For each of these views, I’m very happy to acknowledge that maybe AI is different. And, when we’re talking about what could be the most important event ever, the possibility of some major discontinuity is really worth guarding against. But discontinuity is not my mainline prediction of what will happen. 


(Later edit: I worry that the text above might have conveyed the idea that I'm just ignoring the Yudkowsky/Bostrom arguments, which isn't accurate. Instead, another factor in my change of view was placing less weight on the Y-B arguments because of: (i) finding the arguments that we'll get discontinuous progress in AI a lot less compelling than I used to (e.g. see here and here); (ii) trying to map the Yudkowsky/Bostrom arguments, which were made before the deep learning paradigm, onto actual progress in machine learning, and finding them hard to fit well. Going into this properly would require a lot more discussion though!)

Comment by william_macaskill on Ask Me Anything! · 2019-08-20T15:40:09.292Z · score: 3 (2 votes) · EA · GW

Anon asks: “When you gave evidence to the UK government about the impacts of artificial intelligence, why didn't you talk about AI safety (beyond surveillance)?”

https://www.parliament.uk/ai-committee

I think you’re mistaking me for someone else!

Comment by william_macaskill on Ask Me Anything! · 2019-08-20T15:39:47.301Z · score: 14 (9 votes) · EA · GW

Anon asks: "1. Population ethics: what view do you put most credence in? What are the best objections to it?"

Total view: just add up total wellbeing. 

Best objection: the very repugnant conclusion. Take any population Pi with N people in unadulterated bliss, for any N. Then there is some number M such that a population Pj consisting of 10^100(N) people living in utter hell and M people with lives barely worth living is better than Pi.
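A minimal sketch of why such an M always exists on the total view, with illustrative per-person wellbeing levels b > 0 (bliss), -h < 0 (hell) and ε > 0 (barely worth living); these symbols are mine, not part of the original answer:

```latex
W(P_i) = N b, \qquad W(P_j) = -10^{100} N h + M \varepsilon,
\qquad \text{so} \qquad
W(P_j) > W(P_i) \ \Leftrightarrow\ M > \frac{N b + 10^{100} N h}{\varepsilon},
```

and some such M exists for any N, because ε > 0.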

"2. Population ethics: do you think questions about better/worse worlds are sensibly addressed from a "fully impartial" perspective? (I'm unsure what that would even mean... maybe... the perspective of all possible minds?). Or do you prefer to anchor reflection on population ethics in the values of currently existing minds (e.g. human values)?"

Yeah, I think we should try to answer this ‘from the point of view of the universe’. 

"3. Given your work on moral uncertainty, how do you think about claims associated with conservative world views? In particular, things like (a) the idea that revolutionary individual reasoning is rather error prone, and requires the refining discipline of tradition as a guide (b) the (tragic?) view that human values – or values of many possible minds – are unlikely to converge (c) strong concern for the preservation of existing sources of moral value (d) general distrust of rapid social change."

I endorse (d) if the social change is really major; I think the world is going pretty well overall, and the ways in which it’s going really badly (e.g. factory farming) don’t require revolutionary social change. I believe (b). No strong views on (a) or (c).

Comment by william_macaskill on Ask Me Anything! · 2019-08-20T15:38:22.856Z · score: 42 (22 votes) · EA · GW

Anon asks: "Do you think climate change is neglected within EA?"

I think there’s a weird vibe where EA can feel ‘anti’ climate change work, and I think that’s an issue. I think the etiology of that sentiment is (i) some people raising climate change work as a proposal to benefit the global poor, and I think it’s very fair to argue that bednets do better than the best climate change actions with respect to that specific goal; (ii) climate change gets a lot of media time, including some claims that aren’t scientifically grounded (e.g. that climate change will literally directly kill everyone on the planet), and some people (fairly) respond negatively to those claims. 

But climate change is a huge problem, and working on clean tech, nuclear power, carbon policy etc are great things to do. And I think the upsurge of concern about the rights of future generations that we’ve seen from the wider public over the last couple of decades is really awesome, and I think that longtermists could do more to harness that concern and show how concern for future generations generalises to other issues too. So I want to be like, ‘Yes! And….’ with respect to climate change.

Then is climate change neglected within EA? My guess is that on the funding side the standard argument of ‘if neartermist, fund global health; if longtermist, fund AI / bio / other’ is probably approximately right. (Though you might want to offset your lifetime GHG emissions on non-consequentialist grounds.) But I think that neglectedness considerations play out very differently for allocation of labour, and so I don’t think it’s clear what to think in the case of career choice. If, for example, someone were going into nuclear policy, or devising a clever way of making a carbon tax politically feasible, or working on smart intergovernmental policies like REDD+, I’d think that was really cool; whether it was their best option would depend on the person and their other options.  

Comment by william_macaskill on Ask Me Anything! · 2019-08-20T15:33:53.636Z · score: 51 (19 votes) · EA · GW

I guess simply getting the ball rolling on GWWC should probably win, but the thing I feel proudest of is probably DGB — I don’t think it’s perfect, but I think it came together well, and it’s something where I followed my gut even though others weren’t as convinced that writing a book was a good idea, and I’m glad I did. 

On mistakes: a huge number in the early days, of which poor communication with GiveWell was huge and really could have led to EA as a genuine unified community never forming; the controversial early 80k campaign around earning to give was myopic, too. More recently, I think I really messed up in 2016 with respect to coming on as CEA CEO. I think that if you’re going to be CEO you should be either in or out, where being ‘in’ means 100% committed for 5+ years. Whereas for me it was always planned as a transitional thing (and this was understood internally but I think not communicated properly externally), and when I started I had just begun a tutorial fellowship at Oxford, which other tutorial fellows normally describe as ‘their busiest ever year’, and was also still dealing with the follow-on PR from DGB, so it was like I already had one and a half other full-time jobs. And there was, in retrospect, an obvious alternative, which was to invest time in the strategy side of CEA, but beyond that to hold a proper CEO search. I think this mistake had a lot of knock-on negative effects that lasted quite a while, though we’re clearing them up now. I also think the mistake here stems from a more general issue of mine, which is being too impulsive and too keen to jump onto new projects. (I have worked on this a lot since then, though, and have gotten better.)

Comment by william_macaskill on Ask Me Anything! · 2019-08-20T15:32:08.326Z · score: 25 (17 votes) · EA · GW

Pretty hard to say, but the ‘hero worship’ comment (in the sense of ‘where opinions of certain people automatically get much more support instead of people thinking for themselves’) seems pretty accurate.

Insofar as this is a thing, it has a few bad effects: (i) more meme-y ideas get overrepresented relative to boring ideas; (ii) EA ideas don’t get stress-tested enough, or properly ‘voted’ on by crowds; (iii) there’s a problem of over-updating (“80k thinks everyone should earn to give!”; “80k thinks no-one should earn to give!” etc.), especially on messages (like career advice) that are by their nature very person- and context-relative.

Comment by william_macaskill on Ask Me Anything! · 2019-08-20T15:30:52.638Z · score: 51 (25 votes) · EA · GW

Yeah, I do think there’s an issue of too much deference, and of subsequent information cascades. It’s tough, because intellectual division of labour and deference is often great, as it means not everyone has to reinvent the wheel for themselves. But I do think in the current state of play there’s too much deference, especially on matters that involve a lot of big-picture worldview judgments, or rely on priors a lot. I feel that was true in my own case - about a year ago I switched from deferring to others on a number of important issues to assessing them myself, and changed my views on a number of things (see my answer to ‘what have you changed your mind about recently’).

I wish more researchers wrote up their views, even if in brief form, so that others could see how much diversity there is, and where, and so we avoid a bias where the more meme-y views get more representation than more boring views simply by being more likely to be passed along communication channels. (Maybe more AMAs could help with this!) I also feel we could do more to champion less well-known people with good arguments, especially if their views are in some ways counter to the EA mainstream. (Two people I’d highlight here are Phil Trammell and Ben Garfinkel.)

Comment by william_macaskill on Ask Me Anything! · 2019-08-20T15:27:22.250Z · score: 64 (37 votes) · EA · GW

Relative to the base rate of how wannabe social movements go, I’m very happy with how EA is going. In particular: it doesn’t spend much of its time on internal fighting; the different groups in EA feel pretty well-coordinated; it hasn’t had any massive PR crises; it’s done a huge amount in a comparatively small amount of time, especially with respect to moving money to great organisations; it’s in a state of what seems like steady, sustainable growth. There’s a lot still to work on, but things are going pretty well. 

What I could change historically: I wish we’d been a lot more thoughtful and proactive about EA’s culture in the early days. In a sense the ‘product’ of EA (as a community) is a particular culture and way of life. Then the culture and way of life we want is whatever will have the best long-run consequences. Ideally I’d want a culture where (i) 10% or so of people who interact with the EA community are like ‘oh wow these are my people, sign me up’; (ii) 90% of people are like ‘these are nice, pretty nerdy people; it’s just not for me’; and (iii) almost no-one is like, ‘wow, these people are jerks’. (On (ii) and (iii): the Quakers are the sort of group I think does well on this; the New Atheists are the sort of thing I want to avoid.) I feel we’re still pretty far from that ideal at the moment.

I think the ways in which the culture is currently less than ideal fall into two main categories (which are interrelated). I’m thinking about ‘culture’ as ‘ways in which a casual outsider might perceive EA’ - crucially, it doesn’t matter whether this is a ‘fair’ representation or not, and I’m bearing in mind that even occasional instances of bad culture can have an outsized impact on people’s perceptions. (So, to really hammer this home: I’m not saying that what follows is an accurate characterisation of the EA community in general. But I am saying that this is how some people experience the EA community, and I wish the number of people who experience it like this was 0.)

    • Coming across as unwelcoming or in-groupy. E.g. people having a consequentialist approach to interactions with other people (“What can I get from this other person? If nothing, move on.”); using a lot of jargon; simply not being friendly to new people; not taking an interest in people as people. 
    • Coming across as intellectually arrogant. E.g. giving a lot more weight to views and arguments from people inside the community than people outside the community; being dismissive of others’ value systems or worldviews, even when one’s own worldview is quite far from the mainstream.

And I think that can get in the way of the culture and perception we want of EA, which is something like: “These are the people who are just really serious about making a difference in the world, and are trying their damndest to figure out how best to do it.”

Comment by william_macaskill on Ask Me Anything! · 2019-08-19T18:48:11.640Z · score: 8 (3 votes) · EA · GW

"I think it would be helpful for philosophers to think about those problems specifically in the context of AI alignment."

That makes sense; agree there's lots of work to do there.

"Any chance you could discuss this issue with her and perhaps suggest adding working on technical AI safety as an option that EA-aligned philosophers or people with philosophy backgrounds should strongly consider?"

Have sent an email! :)

Comment by william_macaskill on Ask Me Anything! · 2019-08-19T18:09:04.513Z · score: 71 (38 votes) · EA · GW

Lots! Treat all of the following as ‘things Will casually said in conversation’ rather than ‘Will is dying on this hill’ (I'm worried about how messages travel and transmogrify, and I wouldn't be surprised if I changed lots of these views again in the near future!). But some things include:

  • I think existential risk this century is much lower than I used to think — I used to put total risk this century at something like 20%; now I’d put it at less than 1%. 
  • I find ‘takeoff’ scenarios from AI over the next century much less likely than I used to. (Fast takeoff in particular, but even the idea of any sort of ‘takeoff’, understood in terms of moving to a higher growth mode, rather than progress in AI just continuing existing two-century-long trends in automation.) I’m not sure what numbers I’d have put on this previously, but I’d now put medium and fast takeoff (e.g. that in the next century we have a doubling of global GDP in a 6 month period because of progress in AI) at less than 10%. 
  • In general, I think it’s much less likely that we’re at a super-influential time in history; my next blog post will be about this idea.
  • I’m much more worried about a great power war in my lifetime than I was a couple of years ago. (Because of thinking about the base rate of war, not because of recent events.)
  • I find (non-extinction) trajectory change more compelling as a way of influencing the long-run future than I used to. 
  • I’m much more sceptical about our current level of understanding of how to influence the long-run future than I was before, and think it’s more likely than I did before that EAs in 50 years will think that EAs of today were badly mistaken.
  • I’m more interested than I was in getting other people’s incentives right with respect to long-run outcomes, as compared to just trying to aim for good long-run outcomes directly. So for example, I’m more interested in institutional changes than I was, including intergovernmental institutions and design of world government, and space law. 
  • I’m much more sympathetic to the idea of giving later (and potentially much later) than I was before.

On the more philosophical end: 

  • I’m no longer convinced of naturalism as a metaphysical view, where by 'naturalism' I mean the view that everything that exists exists in space-time. (So now, e.g., I think that numbers and properties exist, and I no longer see what supports the idea that everything that exists must be spatio-temporal).
  • I haven’t really worked it through, but I probably have a pretty different take on the right theoretical approach to moral uncertainty than I used to have. (This would take a while to explain, and wouldn’t have major practical implications, but it’s different than the broad view I defend in the book and my PhD.)

Comment by william_macaskill on Ask Me Anything! · 2019-08-19T17:59:26.158Z · score: 82 (40 votes) · EA · GW

Honestly, the biggest benefit to my wellbeing was taking action about depression, including seeing a doctor, going on antidepressants, and generally treating it like a problem that needed to be solved. I really think I might not have done that, or might have done it much later, were it not for EA - EA made me think about things in an outcome-oriented way, and gave me an extra reason to ensure I was healthy and able to work well.

For others: I think that Scott Alexander's posts on anxiety and depression are really excellent and hard to beat in terms of advice. Other things I'd add: I'd generally recommend that your top goal should be ensuring that you're in a healthy state before worrying too much about how to go about helping others; if you're seriously unhappy or burnt out, fixing that first is almost certainly the best altruistic thing you can do. I also recommend maintaining and cultivating a non-EA life: having a multi-faceted identity means that if one aspect of your life isn't going so well, then you can take solace in other aspects.


Comment by william_macaskill on Ask Me Anything! · 2019-08-19T17:30:17.457Z · score: 7 (3 votes) · EA · GW

I’ve been on David Pakman; haven’t been invited onto Dave Rubin but I tend to do podcasts like those when I get the chance, unless I need to be in person.

Comment by william_macaskill on Ask Me Anything! · 2019-08-19T17:26:47.073Z · score: 24 (10 votes) · EA · GW

Hmm, that's a shame. I hereby promise to ask some questions to whoever does the next AMA!

Comment by william_macaskill on Ask Me Anything! · 2019-08-19T17:25:20.926Z · score: 30 (12 votes) · EA · GW

-

Comment by william_macaskill on Ask Me Anything! · 2019-08-19T17:21:41.858Z · score: 23 (12 votes) · EA · GW

I'm pro there being a diversity of worldviews and causes in EA - I'm not certain in longtermism, and think such diversity is a good thing even on longtermist grounds. I mention reasons in the 'steel manning arguments against EA's focus on longtermism' question. And I talked a little bit about this in my recent EAG London talk. Other considerations are helping to avoid groupthink (which I think is very important), positive externalities (a success in one area transfers to others) and the mundane benefit of economies of scale.

I do think that the traditional poverty/animals/x-risk breakdown feels a bit path-dependent though, and we could have more people pursuing cause areas outside of that. I think that your work fleshing out your worldview and figuring out what follows from it is the sort of thing I'd like to see more of.

Comment by william_macaskill on Ask Me Anything! · 2019-08-19T17:11:59.610Z · score: 21 (11 votes) · EA · GW

I think asking more personal questions in AMAs is a good idea! 

Favourite novel: I normally say Crime and Punishment by Dostoevsky, but it’s been a long time since I’ve read it, so I’m not sure I can still claim that. I just finished The Dark Forest by Liu Cixin and thought it was excellent.

Laugh: my partner is a very funny person. Last thing that made me laugh was our attempt at making cookies, but it’s hard to convey by text.

Comment by william_macaskill on Ask Me Anything! · 2019-08-19T17:10:53.099Z · score: 11 (9 votes) · EA · GW

I'm pretty terrified of chickens, so I'd go for the horses.

Comment by william_macaskill on Ask Me Anything! · 2019-08-19T17:09:57.066Z · score: 35 (20 votes) · EA · GW

No need to steelman - there are good arguments against this and it’s highly nonobvious what % of EA effort should be on longtermism, even from the perspective of longtermism.  Some arguments:

  • If longtermism is wrong (see another answer for more on this)
  • If getting a lot of short-run wins is important to have long-run influence
  • If longtermism is just too many inferential steps away from existing common sense, such that more people would ultimately get into longtermism if there were more focus on short-term wins
  • If now isn’t the right time for longtermism (because there isn’t enough to do) and instead it would be better if there were a push around longtermism at some time in the future 

I think all these considerations are significant, and are part of why I’m in favour of EA having a diversity of causes and worldviews. (Though not necessarily on the ‘three cause area’ breakdown which we currently have, which I think is a bit narrow).

Comment by william_macaskill on Ask Me Anything! · 2019-08-19T16:57:25.427Z · score: 62 (31 votes) · EA · GW

Because my life has been a string of lucky breaks, ex post I wouldn’t change anything. (If I’d gotten good advice at age 20, my life would have gone worse than it in fact has gone.) But assuming I don’t know how my life would turn out:

  • Actually think about stuff and look stuff up, including on big-picture questions, like 'what is the most important problem in the world?'
  • Take your career decision really seriously. Think of it as a research project, dedicate serious time to it. Have a timeline for your life-plans that’s much longer than your degree. Reach out to people you want to be like and try to talk with them for advice. 
  • It doesn’t matter whether the label ‘depressed’ applies to you or not; what matters is whether, e.g., taking this pill or seeing a counsellor would be beneficial. (And it would.)
  • You don’t need to be so scared - everyone else is just making it up as they go, too.

Then more concretely (again, this is assuming I don’t know how things actually turn out):

  • Switch degree from philosophy to maths with the aim afterwards of doing a PhD in economics. (At the time I had no idea what economics was about; I thought it was just for bankers.) But keep reading moral philosophy.  Accept that this will put you two years behind, but this isn’t a big deal. 

(I’m assuming that “Buy Apple stock” is not in the spirit of the question!)