Posts

Ending The War on Drugs - A New Cause For Effective Altruists? 2021-05-06T13:18:04.524Z
2020 Annual Review from the Happier Lives Institute 2021-04-26T13:25:51.249Z
The Comparability of Subjective Scales 2020-11-30T16:47:00.000Z
Life Satisfaction and its Discontents 2020-09-25T07:54:58.998Z
Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty 2020-08-03T16:17:32.230Z
Update from the Happier Lives Institute 2020-04-30T15:04:23.874Z
Understanding and evaluating EA's cause prioritisation methodology 2019-10-14T19:55:28.102Z
Announcing the launch of the Happier Lives Institute 2019-06-19T15:40:54.513Z
High-priority policy: towards a co-ordinated platform? 2019-01-14T17:05:02.413Z
Cause profile: mental health 2018-12-31T12:09:02.026Z
A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare 2018-10-25T15:48:03.377Z
Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was 2018-05-23T10:30:43.748Z
Could I have some more systemic change, please, sir? 2018-01-22T16:26:30.577Z
High Time For Drug Policy Reform. Part 4/4: Estimating Cost-Effectiveness vs Other Causes; What EA Should Do Next 2017-08-12T18:03:34.835Z
High Time For Drug Policy Reform. Part 3/4: Policy Suggestions, Tractability and Neglectedness 2017-08-11T15:17:40.007Z
High Time For Drug Policy Reform. Part 2/4: Six Ways It Could Do Good And Anticipating The Objections 2017-08-10T19:34:24.567Z
High Time For Drug Policy Reform. Part 1/4: Introduction and Cause Summary 2017-08-09T13:17:20.012Z
The marketing gap and a plea for moral inclusivity 2017-07-08T11:34:52.445Z
The Philanthropist’s Paradox 2017-06-24T10:23:58.519Z
Intuition Jousting: What It Is And Why It Should Stop 2017-03-30T11:25:30.479Z
The Unproven (And Unprovable) Case For Net Wild Animal Suffering. A Reply To Tomasik 2016-12-05T21:03:24.496Z
Are You Sure You Want To Donate To The Against Malaria Foundation? 2016-12-05T18:57:59.806Z
Is effective altruism overlooking human happiness and mental health? I argue it is. 2016-06-22T15:29:58.125Z

Comments

Comment by MichaelPlant on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-10T16:18:35.414Z · EA · GW

I was waiting for this! I thought there were going to be lots of "this would be bad for the EA brand" comments. As some evidence against this, and to my surprise, across all the places where I posted this or saw others post it (the EA Forum, Facebook, and Twitter), the post received very little pushback.

I was actually pretty disappointed by this, as it made me think the post hadn't reached many people who would disagree. On the plus side, it suggests this cause is not going to be objectionable amongst people who are sympathetic to EA ideas.

Re the second para, I wasn't claiming that a new organisation would need to exist. My concern was whether it was reasonable to think this is where (for someone) their money or time could do the most good. That doesn't imply they would need to start something.

Comment by MichaelPlant on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-10T16:04:18.010Z · EA · GW

Right, so I do agree that if you're going to move away from prohibition, you do need to consider how non-prohibition would be implemented in reality, rather than some fictitious ideal world, and then whether it really would be better in reality. The thing people tend to forget is that you can evolve regulation, so I'm optimistic problems like those mentioned here can eventually be overcome.

Also, to state the obvious, that something has some problems is not an all-things-considered reason against doing it.

Comment by MichaelPlant on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-10T16:00:39.367Z · EA · GW

What I think the three different replies to this comment indicate is that crudely asking "how many resources go to this thing?" is, in itself, neither necessary nor sufficient to deem something a high priority. We need a fuller story about the nature of the problem, its scale, potential solutions, obstacles, and the rest. I don't think anyone has tried to do that for this issue, which is why I'd like someone to dig into it.

This strikes me as an issue where it's not obviously high priority, but because it's not obvious, it is worth researching further to see if it is.

Comment by MichaelPlant on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-07T07:58:38.211Z · EA · GW

Yes, there is some overlap here, certainly.

OPP has, as I understand it, worked on drug decriminalisation, cannabis legalisation, and prison reform, all within the US. What we might call 'global drug legalisation' goes further with respect to drug policy reform (legal, regulated markets for all drugs, and global scope rather than just the US), but it also wouldn't cover non-drug-related prison reforms.

Comment by MichaelPlant on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-06T21:28:21.872Z · EA · GW

I'm partially sympathetic to this. However, I think EAs have got a bit hung up on 'neglectedness', to the extent it's got in the way of clear thinking: if lots of people are doing something, and you can make them do it slightly better, then working on non-neglected things is promising. Really, I think you need to judge the 'facts on the ground', see what you can do, and go from there. If there aren't ruthlessly impact-focused types working on a problem, that would be a good heuristic for some such people to get stuck in.

What was salient to me, compared to when I knew very little of the topic, is how much larger the expected value of drug legalisation now seems.

Comment by MichaelPlant on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-05-04T10:09:48.619Z · EA · GW

I think the least contentious argument is that 'an introduction' should introduce people to the ideas in the area, not just the ideas the introducer thinks are most plausible. E.g. a curriculum on political ideology wouldn't focus nearly exclusively on 'your favourite ideology'. A thoughtful educator would include arguments for and against their position and do their best to steelman. Even if your favourite ideology was communism and you were doing 'an intro to communism', you would still expect it not just to focus on your favourite strand of communism. Hence, I would have had more sympathy with the original incarnation if it had been billed as "an intro to longtermism".

But, further, there can be good reasons to do things for symbolic or coalitional reasons. To think otherwise implies a rather naive understanding of politics and human interaction. If you want people to support you - you can frame this in terms of moral trade, if you want - sometimes you also need to support and include them. The way I'd like EA to work is "this is what I believe matters most, but if you disagree because of A, B, C, then you should talk to my friend". This strikes me as coalitional moral trade that benefits all the actors individually (by their own lights). An alternative, and more or less what 80k had been proposing, is "this is what I believe, but I'm not going to tell you what the alternatives are or what you should do if you disagree". This isn't an engagement in moral trade.

I'm pretty worried about a scenario where the different parts of the EA world believe (rightly or wrongly) that others aren't engaging in moral trade and so decide to embark on 'moral trade wars' against each other instead.

Comment by MichaelPlant on 2020 Annual Review from the Happier Lives Institute · 2021-04-30T10:29:09.635Z · EA · GW

Hello!

I'm not really sure what Seligman means in the above quote, sorry. Perhaps it would make sense in a wider context.

Re PERMA, I'm not a fan of the concept, and it strikes me as unmotivated. It's something like a subjective list theory of well-being, where Seligman takes well-being to consist in a bunch of different items, each of them subjective in some way. However, I don't see the justification for why he's chosen those 5 items (positive emotions, engagement, relationships, meaning, accomplishments) rather than any others. It seems to me the most plausible re-interpretation of PERMA is that those 5 items are major contributors to happiness, and well-being consists only in happiness.

I'm glad you like our transparency! We hope it helps us improve our decision-making and better allows others to see how we think.

Re Layard's book, Richard asked me to read a draft and I gave him extensive comments, primarily on the philosophical aspects, which were mostly in the earlier chapters. I also attended a conference he put on to discuss the book.

Comment by MichaelPlant on 2020 Annual Review from the Happier Lives Institute · 2021-04-29T10:30:10.790Z · EA · GW

I'm not sure exactly what you mean by "objective well-being". Here are two options.

One thing you might have in mind is that well-being is constituted by something subjective, e.g. happiness or life satisfaction, but you then wonder how objective life circumstances (health, wealth, relationship status, etc.), positional concerns, and so on contribute to that subjective thing. In this case, health etc. are determinants of well-being, not well-being itself. This approach is pretty much exactly what the SWB literature does: you see how the right-hand side variables, many of which are objective, relate to the left-hand side subjective one. I'm not sure what the shortcomings of this approach are in general - if you think well-being is subjective, this is just the sort of analysis you would want to undertake.
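To illustrate this first reading, here is a minimal sketch of that kind of analysis; the dataset and variable names are hypothetical, chosen only to show the shape of the regression:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical survey data: a subjective measure on the left-hand side,
    # objective life circumstances on the right-hand side.
    df = pd.read_csv("survey.csv")  # assumed to contain the columns below
    model = smf.ols(
        "life_satisfaction ~ log_income + health_index + is_partnered",
        data=df,
    ).fit()
    print(model.summary())  # each coefficient estimates a determinant's contribution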

An alternative thing you might mean is that well-being is properly constituted (at least in part) by something objective. One might adopt an objective list theory of well-being:

All objective list theories claim there can be things which make a person’s life go better which are neither pleasurable to nor desired by them. Classic items for this list include success, friendship, knowledge, virtuous behaviour, and health. Such items are ‘objective’ in the sense of being concerned with facts beyond both a person’s conscious experience and/or their desires.

If one had this view, your question would be about how well-being, which is objective, relates to how people feel about their well-being. It's not clear what the purpose of this project would be: if you already know what well-being is, and you think it's something objective, why would you care how having well-being causes people to feel about their lives? So, I assume you mean the former!

Comment by MichaelPlant on 2020 Annual Review from the Happier Lives Institute · 2021-04-28T14:48:35.570Z · EA · GW

Thanks for your comment and for bringing this to our attention. One of the pleasures, but also pains, of SWB research is that it has simply enormous scope; basically everything impacts well-being one way or another. The result is that many potentially fruitful avenues of research are left unexplored.

I don't expect we'll be pursuing this specific line of inquiry, or headaches in general, within the next year or so. The only scenarios in which I would see that change would be if (1) a major donor appeared who would (only) fund us to look at headaches or (2) we already had a lot of donors following our recommendations - we don't have any such donors now, which is necessarily the case because we don't have any all-things-considered recommendations(!) - and our inside view was that headaches might be more effective than our hypothetical top pick and so worth investigating.

As a hot take on your particular suggestion, this is a very small study and I've heard lots of horror stories about dietary research, so this causes me only a (very) minor update, sorry!

Comment by MichaelPlant on 2020 Annual Review from the Happier Lives Institute · 2021-04-28T14:40:17.059Z · EA · GW

Hello Engelhardt,

Thanks for the comment! In response to your comments:

  1. To clarify, the WELLBY is something that has come out of the academic SWB community - bits of economics and psychology, mostly. It wasn't developed by us; only a handful of papers have used it so far, so we're among the first to apply it. I should add that, if you're already using measures of SWB, say, a 0-10 life satisfaction scale, it's not a big innovation to look at how much something changes that and then multiply that change by its duration, which is really all the WELLBY is (see the sketch after this list). (The more innovative bit is using SWB at all, rather than using WELLBYs given you're already using SWB.) So, it's easiest to think of us as using a relatively new, but existing, methodology and applying it to new problems - namely, (re)assessing the cost-effectiveness of things EAs already focus on.

That said, there are some theoretical and practical kinks to be worked out in using WELLBYs - e.g. on the ‘neutral point’, mentioned above. Our plan - which we are already engaged in - is to do the work we think is necessary to improve the WELLBY approach, then feed that back into SWB academia. More generally, it’s not unusual that a measurement tool gets developed and then refined.

  2. Ideally, we’d like to see SWB metrics used across the board, where feasible, and we are pushing to make this happen. Part of the issue with Q/DALYs is that they are measures of health. Even if you thought they were the ideal measures of health (or of the contribution of health to well-being), you run into an issue comparing health to non-health outcomes. A chief virtue of SWB metrics is that you can measure changes in any domain in one currency, namely their impact on SWB.

Having said this, Q/DALYs are quite ingrained in the medical world, and it’s an open question how valuable it is to push for changing them vs doing other things.

  3. I think the rules can be bent in search of a good name, and we're really just following what other SWB researchers call them. It has been suggested, notably by John Broome, that it should be the 'WALY', but that sounds a bit, well, silly (in British English, a ‘wally’ is a synonym for ‘fool’). Personally, I also like the SWELLBY, but that’s yet to catch on...
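To make the WELLBY arithmetic concrete, here is a minimal sketch; the numbers are hypothetical, chosen purely for illustration:

    # One WELLBY = a one-point change on a 0-10 life satisfaction (LS) scale,
    # sustained for one year.
    baseline_ls = 4.0       # hypothetical LS before an intervention
    post_ls = 4.8           # hypothetical LS after it
    duration_years = 5.0    # assumed duration of the improvement

    wellbys_per_person = (post_ls - baseline_ls) * duration_years  # 4.0 WELLBYs
    cost_per_person = 200.0  # hypothetical cost
    print(cost_per_person / wellbys_per_person)  # cost per WELLBY: 50.0
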
Comment by MichaelPlant on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-28T11:49:51.430Z · EA · GW

Hello Rob and Keiran,

I apologise if this is just rank incompetence/inattention on my part as a forum reader, but I actually can't find anything mentioning 1. or 2. in your comments on this thread, although I did see your note about 3. (I've done control-F for all the comments by "80000_Hours" and mentions of "Paul Christiano", "Ajeya Cotra", "Keiran", and "Rob". If I've missed them, and you provide a (digestible) hat, I will take a bite.)

In any case, the new structure seems pretty good to me - one series that deals with the ideas more or less in the abstract, another that gets into the object-level issues. I think that addresses my concerns but I don't know exactly what you're suggesting; I'd be interested to know exactly what the new list would be.

More generally, I'd be very happy to give you feedback on things (I'm not sure how to make this statement more precise, sorry). I would far prefer to be consulted in advance than to feel I had to moan about it on the forum after the fact - this would also avoid conveying the misleading impression that I don't think you do a lot of excellent work, which I do think. But obviously, it's up to you whose input, and how much of it, you solicit.

Comment by MichaelPlant on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-24T13:17:00.677Z · EA · GW

Thanks for somewhat engaging on this, but this response doesn't adequately address the main objection I, and others, have been making: your so-called 'introduction' will still only cover your preferred set of object-level problems.

To emphasise: if you're going to push your version of EA, call it 'EA', but ignore the perspectives of dedicated, sincere, thoughtful EAs just because you happen not to agree with them, that's (1) insufficiently epistemically modest, (2) uncooperative, and (3) going to (continue to) needlessly annoy a lot of people, myself included.

Comment by MichaelPlant on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-19T11:49:34.484Z · EA · GW

I suppose so. But if you don't think the article provides new reasons to care less about avoiding the Repugnant Conclusion, then it doesn't provide new reasons to focus on other moral problems more.

Comment by MichaelPlant on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-19T11:48:05.568Z · EA · GW

Thank you for your comments, Max and John. They inclined me to be quite a bit more favourable to the paper. I still have mixed feelings: while I respect the urge to move a stale conversation on, I don't think the authors provide new object-level reasons to do so. They do provide a raw (implicit?) appeal for others, as their peers, to update in their direction, but I'm sceptical that's what philosophy should involve.

Comment by MichaelPlant on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-18T12:49:22.488Z · EA · GW

When I first saw the paper, I thought "oh cool, how novel for philosophers to come together and say they agree on something, for once". But then, as I reflected on it a couple of days later, I thought the publication was odd. After all, there's not much in the way of argument, so the paper is really just a statement of opinion. As such, there is a problematic whiff of an appeal to authority and social pressure here: "oh, you think the repugnant conclusion is repugnant? But you shouldn't, because all these smart people disagree with you. Just get with the programme, okay?"

In general, I don't see how papers which say (little more than) "We agree with X" merit publication. What would be the point of a paper which said, e.g. "We, some utilitarian philosophers, do not think the usual objections to utilitarianism succeed because of the usual counter-objections"? We already know that philosophers believe a variety of things.

Comment by MichaelPlant on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-17T15:35:19.638Z · EA · GW

TL;DR. I'm very substantially in agreement with Brian's comment. I expand on those concerns, put them in stronger terms, then make a further point about how I'd like 80k to have more of a 'public service broadcasting' role. Because this is quite long, I thought it was better to have it as a new comment.

It strikes me as obviously inappropriate to describe the podcast series as "effective altruism: an introduction" when it focuses almost exclusively on a specific worldview - longtermism. The fact that this objection is acknowledged, and that a "10 problem areas" series is also planned, doesn't address it. In addition, and relatedly, it seems mistaken to produce and distribute such a narrow introduction to EA in the first place.

The point of EA is to work out how to do the most good, then do it. There are three target groups one might try to benefit - (1) (far) future lives, (2) near-term humans, (3) (near-term) animals. Given this, one cannot, in good faith, call something an 'introduction' when it focuses almost exclusively on object-level attempts to benefit just one group. At the very least, this does not seem to be in good faith when there is a substantial fraction of the EA community, and of people who try to live by EA principles, who do prioritise each of the three.

For people inside effective altruism who do not share 80k's worldview, stating that this is an introduction runs the serious risk of conveying to those people that they are not "real EAs", they are not welcome in the EA community, and their sincere and thoughtful labours and perspectives are unimportant. It does not seem adequately inclusive, welcoming, open-minded, and considerate - values EAs tend to endorse.

For people outside EA who are being introduced to the ideas for the first time, it genuinely fails to introduce them to the relevant possibilities for how they might do the most good, leaving them with a misleading impression of what EA is or can be. It would have been trivially easy to include the Bollard and Glennerster interviews - or something else to represent those who focus on animals or humans in the near term - and so indicate that those are credible altruistic paths and enthuse those who might take them.

By analogy, if someone taught an "introduction to political ideologies" course which glossed over conservatism and liberalism to focus primarily on (the merits of) socialism, you would assume they were either incompetent or pushing an agenda. Either way, if you hoped that they would cover all the material and do so in an even-handed manner, you would be disappointed.

Given this podcast series is not an introduction to effective altruism, it should not be called "effective altruism: an introduction". More apt might be “effective longtermism: an introduction” or “80k’s opinionated introduction to effective altruism” or “effective altruism: 80k’s perspective”. In all cases, there should be more generous signposting of what the other points of view are and where they could be found.

A good introduction to EA would, at the very least, include a wide range of steel-manned positions about how to do the most good that are held by sincere, thoughtful, individuals aspiring to do the most good. I struggle to see why someone would produce such a narrow introduction unless they thought those holding alternative views were errant and irrelevant fools.

I can imagine someone defending 80k by saying that this is their introduction to effective altruism and there’s nothing to stop someone else writing their own and sharing it (note RobBensinger does this below).

While this is technically true, I do not find it compelling for the following reason. In a cooperative altruistic community, you want to have a division, rather than a duplication, of labour, where people specialise in different tasks. 80k has become, in practice, the primary source of introductory materials to EA: it is the single biggest channel by which people are introduced to effective altruism, with 17% of EA survey respondents saying they first heard about EA through it; it produces much of the introductory content individuals read or listen to. 80k may not have a monopoly on telling people about EA, but it is something like the ‘market leader’.

The way I see it, given 80k’s dominant position, they should fulfil something like a public service broadcasting role for EA, where they strive to be impartial, inclusive, and informative (https://en.wikipedia.org/wiki/Public_broadcasting).

Why? Because they are much better placed to do it than anyone else! In terms any 80k reader will be familiar with, 80k should do this because it is their comparative advantage and they are not easily replaced. Their move to focusing on longtermism has left a gap. A new organisation, Probably Good, has recently stepped into this gap to provide more cause-neutral careers advice, but I see it as cause for regret that this had to happen.

While I think it would be a good idea if 80k had more of a public service broadcasting model, I don't expect this to happen, seeing as they've consciously moved away from it. It does, however, seem feasible for 80k to be a bit more inclusive - in this case, one very easy thing would be to expand the list from 10 to 12 items so that concerns for animals and near-term humans feature. It would be a huge help to non-longtermist EAs if 80k talked about them a bit (more), and it would be a small additional cost to 80k.

Comment by MichaelPlant on Confusion about implications of "Neutrality against Creating Happy Lives" · 2021-04-12T08:41:06.510Z · EA · GW

I want to focus on the following because it seems to be a problematic misunderstanding:

"1. Temporal position should not impact ethics (hence longtermism)"

This genuinely does seem to be a common view in EA, namely, that when someone exists doesn't (in itself) matter, and that, given impartiality with respect to time, longtermism follows. Longtermism is the view we should be particularly concerned with ensuring long-run outcomes go well.

The reason this understanding is problematic is that probably the two strongest objections to longtermism (in the sense that, if these objections hold, they rob longtermism of its practical force) have nothing to do with temporal position in itself. I won't say whether these objections are, all things considered, plausible; I'll merely set out what they are.

First, there is the epistemic objection to longtermism (sometimes called the 'tractability', 'washing-out', or 'cluelessness' objection): in short, we can't be confident enough about the impact our actions will have on the long-run future to make it the practical priority. See this for recent discussion and references: https://forum.effectivealtruism.org/posts/z2DkdXgPitqf98AvY/formalising-the-washing-out-hypothesis#comments. Note this has nothing to do with people having different value due to their position in time.

Second, there is the ethical objection that appeals to person-affecting views in population ethics, which have the implication that creating (happy) lives is neutral.* What's the justification for this implication? One justification could be 'presentism', the view that only presently existing people matter. This is a justification based on temporal position per se, but it is (I think) highly implausible.

An alternative justification, which does not rely on temporal position in itself, is 'necessitarianism', the view that the only people who matter are those who exist necessarily (i.e. in all outcomes under consideration). The motivation for this is that (1) outcomes can only be better or worse if they are better or worse for someone (the 'person-affecting restriction') and (2) existence is not comparable to non-existence for someone ('non-comparativism'). In short, it isn't better to create lives, because it's not better for the people who get created. (I am quite sympathetic to this view and think too many EAs dismiss it too quickly, often without understanding it.)

The further thought is that our actions change the specific individuals who get created (e.g. ask whether any particular individual alive today would exist if Napoleon had won Waterloo). The result is that our actions, which aim to benefit (far) future people, cause different people to exist. This isn't better for either the people who would have existed or the people who will actually exist. This is known as the 'non-identity problem'. Necessitarians might explain that, although we really want to help (far) future people, we simply can't. There is nothing, in practice, we can do to make their lives better. (Rough analogy: there is nothing, in practice, we can do to make trees' lives go better - only sentient entities can have well-being.)

Note, crucially, this has nothing to do with temporal position in itself either. It's the combination of only necessary lives mattering and our actions changing which people will exist. Temporal position is ethically relevant (i.e. instrumentally important), but not ethically significant (i.e. doesn't matter in itself).

*You can have symmetric person-affecting views (creating lives is neutral). You can also have asymmetric person-affecting views (creating happy lives is neutral, creating unhappy lives is bad). Asymmetric PAVs may, or may not, have concern for the long term, depending on what the future looks like and whether they think adding happy lives can compensate for adding unhappy lives. I don't want to get into this here as this is already long enough.

Comment by MichaelPlant on Announcing "Naming What We Can"! · 2021-04-05T09:04:52.929Z · EA · GW

Ha. I like this name.

While I'm writing, I'll mention I seriously proposed calling HLI the Bentham Institute for Global Happiness (BIGHAP), but it was put to an internal vote and I, tragically, lost. I am fairly confident not calling it BIGHAP will be my biggest deathbed regret.

Comment by MichaelPlant on Spears & Budolfson, 'Repugnant conclusions' · 2021-04-05T09:01:39.142Z · EA · GW

Pablo could you, or perhaps some other kind forum reader, provide a brief explanation of what they actually do? The abstract more-or-less says 'we solve a problem', but it's unclear exactly how they solve the problem - I have no intuitive purchase on what "more inclusive formalizations" means - so don't know whether it's a good use of time to read the paper.

Comment by MichaelPlant on Announcing "Naming What We Can"! · 2021-04-02T07:57:03.379Z · EA · GW

I'd like to know what the Happier Lives Institute should be called; we never liked the name anyway.

Comment by MichaelPlant on How much does performance differ between people? · 2021-04-01T08:23:16.030Z · EA · GW

Ah, this is great - evidence the selectors could tell the top 2% from the rest, but that the 2%-20% range was much of a muchness. Shame it doesn't give any more information on 'commercial success'.

Comment by MichaelPlant on Any EAs familiar with Partha Dasgupta's work? · 2021-03-31T14:17:15.863Z · EA · GW

I'm not sure how to assess what counts as 'core EA'! But I don't think the org bills itself as EA, or that the overwhelming majority of its staff self-identify as EAs (cf. the way the staff at, um, CEA probably do...)

Comment by MichaelPlant on Any EAs familiar with Partha Dasgupta's work? · 2021-03-31T12:58:26.244Z · EA · GW

Short answer: Yes. FWIW, Partha is the Chair of CSER (Centre for the Study of Existential Risk) which has, or has had, quite a few EA-sympathetic people in it. I have no idea how widely he is known across EA more broadly.

Comment by MichaelPlant on How much does performance differ between people? · 2021-03-29T12:45:30.167Z · EA · GW

Hello Ben.

I'm not trying to be obtuse; it wasn't super clear to me on a quick-ish skim. Maybe if I'd paid more attention I'd have clocked it.

Yup, I was too hasty on VCs. It seems like they are pretty confident they know what the top 5% are, but not that they can say anything more precise than that. (Although I wonder what evidence indicates they can reliably tell the top 5% from those below, rather than their just thinking they can.)

Comment by MichaelPlant on How much does performance differ between people? · 2021-03-26T15:00:13.130Z · EA · GW

I was thinking the emphasis on outputs might be the important part as those are more controllable than outcomes, and so the decision-relevant bit, even though we want to maximise impartial value (outcomes).

I can imagine someone thinking the following way: "we must find and fund the best scientists because they have such outsized outcomes, in terms of citations." But that might be naive if it's really just the top scientist who gets the citations and the work of all the good scientists has a more or less equal contribution to impartial value.

FWIW, it's not clear we're disagreeing!

Comment by MichaelPlant on How much does performance differ between people? · 2021-03-26T14:50:32.416Z · EA · GW

Okay, good! Yeah, I would be curious to see how much the analysis changed when distinguishing outputs from outcomes and, further, between different types of outputs.

Comment by MichaelPlant on How much does performance differ between people? · 2021-03-26T14:49:41.380Z · EA · GW

Yeah, I'd be interested to know if VCs were better than chance. Not quite sure how you would assess this, but probably someone's tried.

But here's where it seems relevant. If you want to pick the top 1% of people, as they provide so much of the value, but you can only pick the top 10%, then your efforts to pick are much less cost-effective and you would likely want to rethink how you did it.

Comment by MichaelPlant on How much does performance differ between people? · 2021-03-26T12:42:06.135Z · EA · GW

I was going to raise a similar comment to what others have said here. I hope this adds something.

I think we need to distinguish quality and quantity of 'output' from 'success' (the outcome of that output). I am deliberately not using 'performance', as it's unclear, in common language, which of the two it refers to. Various outputs are sometimes very reproducible - anyone can listen to a music track or read an academic paper. There are often huge rewards to being the best vs second best - e.g. winning in sports. And sometimes success generates further success (the 'Matthew effect') - more people want to work with you, etc. Hence, I don't find it at all weird to think that small differences in outputs, as measured on some cardinal scale, sometimes generate huge differences in outcomes.

I'm not sure exactly what follows from this. I'm a bit worried you're concentrating on the wrong metric - success - when it's outputs that are more important. Can you explain why you focus on outcomes?

Let's say you're thinking about funding research. How much does it matter to fund the best person? I mean, they will get most of the credit, but if you fund the less-than-best, that person's work is probably not much worse and ends up being used by the best person anyway. If the best person gets 1,000 more citations, should you be prepared to spend 1,000 more to fund their work? Not obviously.

I'm suspicious you can do a good job of predicting ex ante outcomes. After all, that's what VCs would want to do and they have enormous resources. Their strategy is basically to pick as many plausible winners as they can fund.

It might be interesting to investigate differences in quality and quantity of outputs separately. Intuitively, it seems the best people do produce lots more work than the good people, but it's less obvious the quality of the best people is much higher than of the good. I recognise all these terms are vague.

Comment by MichaelPlant on Formalising the "Washing Out Hypothesis" · 2021-03-25T12:47:52.224Z · EA · GW

Thanks very much for writing this. I'd started to wonder about the same idea, but this is a much better and clearer analysis than I could have done! A few questions as I try to get my head around this.

Could you say more about why the predictability trends towards zero? It's intuitive that it does, but I'm not sure I can explain that intuition. Something like: we should have a uniform prior over the actual value of the action at very distant periods of time, right? An alternative assumption would be that the action has a continuous stream of benefits in perpetuity; I'm not sure how reasonable that is. Or is it the inclusion of counterfactuals, i.e. that if you didn't do that good thing, someone else would be right behind you anyway?

Regarding 'attractor states', is the thought then that we shouldn't have a uniform prior regarding what happens to those in the long run?

I'm wondering if the same analysis can be applied to actions as to the 'business as usual' trajectory of the future, i.e. where we don't intervene. Many people seem to think it's clear that the future, if it happens, will be good, and that we shouldn't discount it to/towards zero.

Comment by MichaelPlant on Want to alleviate developing world poverty? Alleviate price risk.​ (2018) · 2021-03-23T09:22:47.938Z · EA · GW

I think this article and/or this excerpt of it, would be improved by an explanation of how derivatives work.

Comment by MichaelPlant on Against neutrality about creating happy lives · 2021-03-19T09:55:34.930Z · EA · GW

Yup. I suspect Bader's approach is ultimately ad hoc (I saw him present it at a conf and haven't been through the paper closely) but I do like it.

On the second bit, I think that's right with the A, A+ bit: the person-affector can see that letting the new people arrive and then redistributing to everyone is worse for the original people. So if you think that's what will happen, you should avoid it. Much the same can be said about the child.

Comment by MichaelPlant on Against neutrality about creating happy lives · 2021-03-18T19:23:52.609Z · EA · GW

Not sure I follow. Are you assuming anti-realism about metaethics or something? Even so, if your assessment of outcomes depends, at least in part, on how good/bad those outcomes are for people, the problem remains.

Comment by MichaelPlant on Against neutrality about creating happy lives · 2021-03-18T18:53:33.972Z · EA · GW

Glad we made some progress!

FWIW, there's a sense in which total utilitarianism is my 2nd favourite view: I like its symmetry and I think it has the right approach to aggregation. In so far as I am a totalist, I don't find the repugnant conclusion repugnant. I just have issues with comparativism and impersonal value.

It's not obvious to me totalism does 'swamp' if one appeals to moral uncertainty, but that's another promissory note.

Anyway, a useful discussion.

Comment by MichaelPlant on Against neutrality about creating happy lives · 2021-03-18T17:11:42.866Z · EA · GW

Hello Joe!

I enjoyed the McMahan/Parfit move of saying things can be 'good for' without being 'better for'. I think it's clever, but I don't buy it. It seems like a linguistic sleight of hand and I don't really understand how it works.

I agree we have preferences over existing, but, well, so what? The fact that I do or would have a preference does not automatically reveal what the axiological facts are. It's hard to know, even if we grant this, how it extends to not-yet-existing people. A presently non-existing possible person doesn't have any preferences, including about whether to exist. We might suppose that, if they could have preferences in their non-existent state, they would have a preference to exist, but this just seems arcane. What sort of hypothetical non-existent entity are we channelling here?

There's much the same to be said about being glad. I think I'm glad to be alive. But, again, so what? Who said my psychological attitudes generate or reveal axiological facts? Note, we can ask "I am glad, but am I justified in being glad?" and then we have to have the debates about comparativism etc. we've been having.

I understand that someone might think this point about understanding the betterness relation is somehow linguistic obscurantism, but it's not supposed to be. I think I understand how the 'better for' relationship works and, because of this, I don't see how comparativism works. If you say "existence is better for me than non-existence", I think I am entitled to ask "okay, and what do you mean by 'better for'?"

Re your last point, I'm not sure I understand your objective: you are trying to say something is intuitive when others say it isn't? But aren't our intuitions, well, intuitive, and it's just a psychological matter of fact whether we have them or not? I assume the neutrality intuition is intuitive for some and not others. It's a further question whether, on reflection, that intuition is plausible and that's the issue I was aiming to engage with.

Comment by MichaelPlant on Against neutrality about creating happy lives · 2021-03-18T16:45:56.885Z · EA · GW

Ah, well maybe we should just defer to Broome and Greaves and not engage in the object-level discussions at all! That would certainly save time... FWIW, it's pretty common in philosophy to say "Person X conceptualises problem P in such and such a way. What they miss out is such and such."

All views in pop ethics have bonkers results - something that is widely agreed by population ethicists. Your latest example is about the procreative asymmetry (creating happy lives is neutral, creating unhappy lives is bad). Quite a lot of people with person-affecting intuitions think there is a procreative asymmetry, so would agree with you, but it's proved quite hard to defend. Ralph Bader has a rather interesting and novel defence of it here: https://homeweb.unifr.ch/BaderR/Pub/Asymmetry (R. Bader).pdf. Another strategy is to say you have no reason not to create the miserable child, but you have reason to end its life once it starts existing; this doesn't help with scenarios where you can't end the life.

You may just write me off as a monster, but I quite like symmetries and I'm minded to accept a symmetrical person-affecting view (at least, I put quite a bit of credence in it). The line of thought is that existence and non-existence are not comparable. The challenge in defending an asymmetric person-affecting view is arguing why it's not good for someone to be created with a happy life, but why it is bad for them to be created with an unhappy life.

Comment by MichaelPlant on Against neutrality about creating happy lives · 2021-03-18T09:06:54.122Z · EA · GW

As I said to MichaelStJules, I'm inclined to say the possibility of three-choice cases rests on a confusion, for the reasons I already gave. The A, B1, B2 case should at least be structured differently, into a sequence of two choices: (1) create/don't create, (2) benefit/don't benefit. (1) is incomparable in value for someone; (2) is not. Should you create a child? Well, on necessitarianism, that depends solely on the effects this has on other, necessary people (and thus not on the child). Okay, once you've had/are going to have a child, should you torture it? Um, what do you think? If this is puzzling, recall we are trying to think in terms of personal value ('good for'). I don't think we can say anything is good/bad for an entity that doesn't exist necessarily (i.e. in all contexts at hand).

FWIW, I find people do tend to very easily dismiss the view, but usually without really understanding how it works! It's a bit like when people say "oh, utilitarianism allows murder? Clearly false. What's the next topic?"

Comment by MichaelPlant on Against neutrality about creating happy lives · 2021-03-18T08:43:18.377Z · EA · GW

Re your cases, you might want to treat those about abortion and death differently from those about creating lives. But then you might not. The cases like saving for education I've already discussed.

I might be inclined to say something stronger, such as that the 3-choice sets are not metaphysically possible, potentially with a caveat like 'at least from the perspective of the choosing agent'. I think the same thing about the accusation that person-affecting views involve intransitivity.

Comment by MichaelPlant on Against neutrality about creating happy lives · 2021-03-17T10:45:27.772Z · EA · GW

Ah, okay. I missed that the people in B1 and B2 were supposed to be the same - it's a so-called 'three-choice' case; 'two-choice' cases are where the only two options are that the person doesn't exist or exists with a certain welfare level. I'm inclined to think three-choice cases, even though they are relied on lots in the literature, are also metaphysically problematic, for reasons that I've not seen pointed out in the literature so far. I've sketched my answer below, but this is also a promissory note, sorry, even though it's ended up being rather long.

Roughly, the gist of my concern is this. A standard person-affecting route is to say the only persons who matter are those who exist necessarily (i.e. under all circumstances under consideration). This is based on the ideas, discussed above, that we just can't compare existence to non-existence for someone. To generate the three-choice case, what's needed is some action that (1) occurs to a future, necessarily-existing person, (2) benefits that person whilst retaining that they are a future, necessary person, and (3) leaves us with three outcomes to choose between. I don't see how (1) - (3) are jointly possible. Let's walk through that.

Why do we need (1)? Well, if they aren't a future person, but they are, instead, a necessarily existing present person, then we're in a choice between B1 and B2, not a choice between A, B1, and B2. Recall A is the outcome where the person doesn't exist. So we're down to two choices, not three.

Why do we need (2)? The type of 3-choice case that most often comes up in the literature - when people flesh out the details, rather than just stipulating that the case is possible - is one where we are talking about providing medical treatment to cure an as-yet-unborn child of some genetic condition. The usual claim is "look, obviously you should provide the treatment, and that will benefit that child without changing its identity." A usual observation made in these debates is that your genetics are a necessary condition for your identity: if you had had different genetics, you wouldn't have existed - consider that non-identical twins are different people. Let's consider the two options: the intervention causes a different person to exist, or it doesn't.

Suppose the former is true: the genetic intervention leads person C, rather than person B, to be created. Okay, so now the choice-set is really <A, B1, C1>, not <A, B1, B2>. This is the familiar non-identity problem case.

Suppose the latter is true: the genetic intervention doesn't change identity. Recall the person must, crucially, be a future, necessary person. But how can you change anyone's genetics prior to their existence whilst maintaining that the original person will necessarily exist(!)? This, I'm afraid, is metaphysically problematic or, to put it in ordinary British English, bonkers.

The three-case enthusiast might try again by suggesting something like the following: they are considering whether to invest money for their future nephew, to give to him when he turns 21. Now, we can imagine a case where your doing this sort of thing is identity-changing: you tell your sibling and their spouse, who haven't yet had the child, that you're going to do this. It causes them to conceive later and create a different child. Fine, but here we're back to <A, B1, C1>, as we're talking about stopping one child existing, creating a different one instead, and benefitting that second one.

But suppose, for some reason, it's not identity-changing. Maybe the child is already in utero, or it's a fertilised egg in a sperm bank and your sibling and their spouse are 100% going to have it, whatever you do, or something. Recall, the future person needs to exist necessarily for the three-option case to arise. Well, if there is no possibility of the child not existing, there is no outcome A anymore - at least, not as far as you are concerned; you now face a choice-set of <B1, B2> and can say normal things about why you should choose B2: it's better for that particular child, whose identity remains unchanged.

All told, I doubt the choice-set <A, B1, B2> is (metaphysically?) possible. This is important because its existence is taken as a strong objection to person-affecting views. I don't think the existence of choice sets like <A, B1, C1> - which is the ordinary non-identity problem - are nearly so problematic.

Comment by MichaelPlant on Against neutrality about creating happy lives · 2021-03-16T23:53:06.355Z · EA · GW

Right. Yeah, I don't share Hilary's intuitions and I wouldn't analyse the situation in this way. It's a somewhat subtle move, but I think about comparing pairs of outcomes by comparing how much better/worse they are for each person who exists in both, then adding up the individual differences (i.e. focusing on 'personal value'; to count 'impersonal value', you just aggregate the welfare in each outcome and then compare those totals). I'm inclined to say A, B1, and B2 are equally good - they are equally good for the necessary people (those who exist in all outcomes under consideration).

FWIW, I think discussants should agree that the personal value of A, B1, and B2 is the same (there are some extra complexities related to harm-minimisation views I won't get into here). And I think discussants should also agree that the impersonal value of the outcomes is B2 > B1 > A. There is, however, reasonable scope for disagreement about the final value (aka 'ultimate value', 'value simpliciter', etc.) of B2 vs B1 vs A, but that disagreement rests on whether one accepts the significance of impersonal and/or personal value. Neither I nor anyone else in this post (I think) has advanced any arguments about the significance of personal vs impersonal value. That's a separate debate. We've been talking about comparativism vs non-comparativism. A minimal sketch of the bookkeeping, with hypothetical welfare numbers just to show the two calculations coming apart:
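    # 'alice' exists in all three outcomes (a necessary person);
    # 'bob' exists only in B1 and B2. Welfare levels are hypothetical.
    A = {"alice": 8}
    B1 = {"alice": 8, "bob": 5}
    B2 = {"alice": 8, "bob": 7}

    def personal_value_diff(x, y):
        # Sum the welfare differences for people who exist in both outcomes.
        return sum(x[p] - y[p] for p in x.keys() & y.keys())

    def impersonal_value(x):
        # Aggregate the welfare in an outcome, whoever exists in it.
        return sum(x.values())

    print(personal_value_diff(B2, A))                  # 0: no better for the necessary person
    print(impersonal_value(B2) - impersonal_value(A))  # 7: impersonally, B2 > A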

Comment by MichaelPlant on Against neutrality about creating happy lives · 2021-03-16T20:51:25.943Z · EA · GW

Hello Jack,

Yes, I've heard Hilary's 80k podcast where she mentions her paper. It's not available on her website. If it's the same theme as in the slides you linked, then I don't think it responds to the claims above. Bader supposes 'better for' is a dyadic (two-place) relation between two lives. Hilary is responding to arguments that suppose 'better for' is a triadic (three-place) relation between two worlds and the person. I don't think I understand why one would want to formulate it the latter way. I'll take a look at Hilary's paper when it's available.

Re your last point: I'm not 100% sure what you're claiming in the other post because I found the diagrams hard to follow. You're stating a standard version of the non-identity problem, right? I don't think person-affecting views do face intransitivity, but that's a promissory note that, if I'm honest, I don't expect to get around to writing up until maybe 2022 at the earliest.

Comment by MichaelPlant on Against neutrality about creating happy lives · 2021-03-16T12:48:20.120Z · EA · GW

I enjoyed reading this, but you don't seriously engage with the point you're supposed to be arguing against so much as poetically tug your readers' intuitions in a particular direction. I think this has its place, but I thought I should provide the (dry) philosophical counterpoint nevertheless.

The essence of your post is to advocate for comparativism, the view that existence can be better for someone than non-existence. However, comparativism has problematic metaphysical commitments. I'm drawing heavily on unpublished work by Ralph Bader here.

The obvious (only?) way to understand the 'personal betterness relation' – being “better for” – is as a two-place relation that has lives (or 'time slices') as its 'relata' (the things being related). Hence, something can only be better for someone if they exist in both outcomes we're comparing.

The last paragraph was quite jargony, sorry. Here's a more intuitive way of bringing out the same problem. Suppose I say "Joe is to the left of". You might look at me blankly and say "okay, Joe is to the left of ... what, exactly?" You would then point out, quite correctly, that it doesn't make sense to say "Joe is to the left of" in the abstract. For the relationship of 'being to the left of' to obtain, there have to be two things, they need to have locations, and we need to establish positionality such that one thing is to the left of the other. We run into the same problem if we say "world one (where Joe exists) is better for Joe than world two (where he doesn't)". The 'better for Joe' relation doesn't hold unless Joe exists in both worlds. To be clear, I have no issue with saying "world one is impersonally better than world two" on the grounds the former contains more happiness. It just seems confused to say it's 'better for Joe'.

A more intuitive, but less analogous way to press this sort of complaint is if I say "blue is taller than green". Clearly, blue and green can't stand in the relationship of being taller than each other - neither has the property of height. It's not just that they are equally tall as each other: that would require them to have a property of height and for them to have the same quantity of it. Rather, neither have the property of height, hence we are not able to compare them with respect to their height. Note, having a height of zero is not the same as not having the property of having a height, much as there is a difference between not having a bank account and having a bank account with nothing in it.

The challenge for the comparativist is to explain which properties ground the personal betterness relationship. For a life to have evaluative properties - to be good/bad for the person - it has to have some non-evaluative properties, e.g. how happy/sad the person is. But a non-existent life does not have any non-evaluative properties to get the evaluative ones off the ground. There's no way to compare existence to non-existence for someone; it is an attempt to compare something with nothing. Hence it is not the case that existence is better than, worse than, or equally as good as non-existence for someone; rather, existence and non-existence are incomparable in value for someone.

As Bader (very dryly) puts it: "Comparativism is thus not viable since there cannot be a betterness relation without relata, nor can there be goodness without good-making features." This is a quote from this paper (https://homeweb.unifr.ch/BaderR/Pub/Person-affecting (R. Bader).pdf), where he mentions, but doesn't develop, the points I made above.

Comment by MichaelPlant on Doing Good Badly? - Michael Plant's thesis, Chapters 5,6 on Cause Prioritization · 2021-03-09T19:21:15.052Z · EA · GW

In brief, I'm sceptical there are good heuristics for assessing an entire problem. Ask yourself: what are they, and what is the justification for them? What we do have, rather, are intuitive views about how effective particular solutions to given problems are. So we should think more carefully about those.

If it helps, for context: I started writing my thesis in 2015. At that time, EAs (following, I think, Will's book and 80k's then analysis) seemed to think you could make enormous progress on what the priorities are by appealing to very vague and abstract heuristics like "the bigger the problem, the higher the EV". This all seemed, and seems, very suspicious to me. People don't do this so much anymore.

Comment by MichaelPlant on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-03-09T13:01:48.136Z · EA · GW

I don't have any comment to make about Torres or his motives (I think I was in a room with him once). However, as a more general point, I think it can still make sense to engage with someone's arguments, whatever their motivation, at least if there are other people who take them seriously. I also don't have a view on whether others in the longtermism/X-risk world do take Torres's concern seriously, it's not really my patch.

Comment by MichaelPlant on Doing Good Badly? - Michael Plant's thesis, Chapters 5,6 on Cause Prioritization · 2021-03-06T11:48:26.399Z · EA · GW

The point about 'dismissing too soon' comes from the realisation that one doesn't really evaluate the cost-effectiveness of directing resources to causes (entire problems); one only evaluates solutions. Someone who thought they were able to evaluate causes as a whole, and so hadn't really looked at what you might do, would be liable to discount problems too soon.

This is all fairly abstract, but I suppose I take something like a 'no shortcuts' view to cause prioritisation: you actually have to look hard at what you might do, rather than appealing to heuristics to do the work for you.

Comment by MichaelPlant on Doing Good Badly? - Michael Plant's thesis, Chapters 5,6 on Cause Prioritization · 2021-03-06T11:43:00.117Z · EA · GW

Yep, for exactly this reason, I'm glad someone kindly summarised and surfaced these dark corners of my thesis.

Comment by MichaelPlant on Doing Good Badly? - Michael Plant's thesis, Chapters 5,6 on Cause Prioritization · 2021-03-06T11:41:28.543Z · EA · GW

Hmm. It seems like the only way this differs from my account is that 'cause comparisons' are/should be comparisons of the top interventions, rather than of interventions generally. But the 'cause comparison' is still impossible without (implicitly) evaluating the specific things you can do.

Comment by MichaelPlant on AMA: Ian David Moss, strategy consultant to foundations and other institutions · 2021-03-04T12:03:06.558Z · EA · GW

Hello Ian. Could you say a bit what providing strategy and research looks like? I don't have an intuitive grasp on what sort of things that involves and I'd appreciate an example or two!

Comment by MichaelPlant on Why "cause area" as the unit of analysis? · 2021-01-26T13:16:10.247Z · EA · GW

FWIW, I think it helps to think of effective altruism along the following lines. This is more or less taken from chapters 5 and 6 of my PhD thesis, which got stuck into all this in tedious (and, in the end, rather futile) depth.

Who? As in, who are the beneficiary groups?

Options: people (in the near-term), animals (in the near-term), future sentient life

What? As in, what are the problems?

This gives you your cause areas, i.e. the problems you want to solve that directly benefit a particular group, e.g. poverty, factory farming, X-risks. 

Effective altruism is a practical project, ultimately concerned about what the best actions are. To solve a problem requires thinking, at least implicitly, about particular solutions to those problems, so I think it's basically a nonsense to try to compare "cause areas" without reference to specific things you can do, aka solutions. Hence, when we say we're comparing "cause areas" what we are really doing is assessing the best solution in each cause area "bucket" and evaluating their cost-effectiveness. The most important cause = the one with the very most cost-effective intervention.

How? As in, how can the problems best be solved?

Here, I think it helps to distinguish between interventions and barriers. Interventions are the things you do that ultimately solve the problem, e.g. cash transfers and bednets for helping those in poverty. You can then ask what the barriers are, i.e. the things that stop those interventions from being delivered. Is it because people don't know about them? Do people want them but can't afford them, etc.? A solution removes a particular barrier to a particular intervention, e.g. it just provides a bednet.

What's confusing is where to fit in things like "improving the rationality of decision-makers" and "growing the EA movement", which people sometimes call causes. I think of these as 'meta-causes' because they indirectly and diffusely work to remove barriers to many of the 'primary causes', e.g. poverty.

It's not clear we need answers to the 'why?', 'when?', and 'where?' queries. Like I say, if you want to waste an hour or two, I slog through these issues in my thesis. 

Comment by MichaelPlant on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T22:41:38.079Z · EA · GW

I think you're right to point out that we should be clear about exactly what's repugnant about the repugnant conclusion. However, Ralph Bader's answer (not sure I have a citation; I think it's in his book manuscript) is that what's objectionable about moving from world A (taken as the current world) to world Z is that creating all those extra lives isn't good for the new people, but it is bad for the current population, whose lives are made worse. I share this intuition. So I think you can cast the repugnant conclusion as being about population ethics.

FWIW, I share your intuition that, in a fixed population, one should just maximise the average. 

Comment by MichaelPlant on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T22:31:13.679Z · EA · GW

Strong upvote. I thought this was a great reply: not least because you finally came clean about your eyes, but because I think the debate in population ethics is currently too focused on outputs and unduly uninterested in the rationales for those outputs.