The Comparability of Subjective Scales 2020-11-30T16:47:00.000Z
Life Satisfaction and its Discontents 2020-09-25T07:54:58.998Z
Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty 2020-08-03T16:17:32.230Z
Update from the Happier Lives Institute 2020-04-30T15:04:23.874Z
Understanding and evaluating EA's cause prioritisation methodology 2019-10-14T19:55:28.102Z
Announcing the launch of the Happier Lives Institute 2019-06-19T15:40:54.513Z
High-priority policy: towards a co-ordinated platform? 2019-01-14T17:05:02.413Z
Cause profile: mental health 2018-12-31T12:09:02.026Z
A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare 2018-10-25T15:48:03.377Z
Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was 2018-05-23T10:30:43.748Z
Could I have some more systemic change, please, sir? 2018-01-22T16:26:30.577Z
High Time For Drug Policy Reform. Part 4/4: Estimating Cost-Effectiveness vs Other Causes; What EA Should Do Next 2017-08-12T18:03:34.835Z
High Time For Drug Policy Reform. Part 3/4: Policy Suggestions, Tractability and Neglectedess 2017-08-11T15:17:40.007Z
High Time For Drug Policy Reform. Part 2/4: Six Ways It Could Do Good And Anticipating The Objections 2017-08-10T19:34:24.567Z
High Time For Drug Policy Reform. Part 1/4: Introduction and Cause Summary 2017-08-09T13:17:20.012Z
The marketing gap and a plea for moral inclusivity 2017-07-08T11:34:52.445Z
The Philanthropist’s Paradox 2017-06-24T10:23:58.519Z
Intuition Jousting: What It Is And Why It Should Stop 2017-03-30T11:25:30.479Z
The Unproven (And Unprovable) Case For Net Wild Animal Suffering. A Reply To Tomasik 2016-12-05T21:03:24.496Z
Are You Sure You Want To Donate To The Against Malaria Foundation? 2016-12-05T18:57:59.806Z
Is effective altruism overlooking human happiness and mental health? I argue it is. 2016-06-22T15:29:58.125Z


Comment by michaelplant on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T22:41:38.079Z · EA · GW

I think you're right to point out that we should be clear about exactly what's repugnant about the repugnant conclusion. However, Ralph Bader's answer (not sure I have a citation; I think it's in his book manuscript) is that what's objectionable about moving from world A (taken as the current world) to world Z is that creating all those extra lives isn't good for the new people, but it is bad for the current population, whose lives are made worse. I share this intuition. So I think you can cast the repugnant conclusion as being about population ethics.

FWIW, I share your intuition that, in a fixed population, one should just maximise the average. 

Comment by michaelplant on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T22:31:13.679Z · EA · GW

Strong upvote. I thought this was a great reply: not least because you finally came clean about your eyes, but because I think the debate in population ethics is currently too focused on outputs and unduly uninterested in the rationales for those outputs.

Comment by michaelplant on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T17:56:58.242Z · EA · GW

Ah, I see. No, you've got it right. I'd somehow misread it, and the view works the way I had thought it was supposed to: non-existence counts as zero, so it can be compared to existence in terms of welfare levels.

Comment by michaelplant on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T16:16:50.913Z · EA · GW

Right. So, looking at how HMV was specified up top - parts II and III - people who exist in only one of two outcomes count for zero even if they have negative well-being in the world where they exist. That is how I interpreted the view as working in my comment.

One could specify a different view on which creating net-negative lives, even if they couldn't have had a higher level of welfare, is bad, rather than neutral.  This would need a fourth condition.

(My understanding is that people who like HMVs tend to think that creating uniquely existing negative lives is bad, rather than neutral, as that captures the procreative asymmetry.)

Comment by michaelplant on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T13:37:16.433Z · EA · GW

I found this post very thought-provoking (I want to write a paper in this area at some point) so might pop back with a couple more thoughts.

Arden, you said this decreased your confidence that person-affecting views can be made to work, but I'm not sure I understand your thinking here. 

To check, was this just because you thought the counterpart stuff was fishy, or because you thought it has radical implications? I'm assuming it's the former, because it wouldn't make sense to decrease one's confidence in a view on account of its more or less obvious implications: the gist of person-affecting views is that they give less weight to merely possible lives than impersonal views do. Also, please show me a view in population ethics without (according to someone) 'radical implications'!

(Nerdy aside I'm not going to attempt to put in plain English: FWIW, I also think counterpart relations are fishy. It seems you can have a de re or a de dicto person-affecting view (I think this is the same as the 'narrow' vs 'wide' distinction). On the former, what matters is the particular individuals who do or will exist (whatever we do). On the latter, what matters is the individuals who do or will exist, whomsoever they happen to be. Meacham's is of the latter camp. For a different view, which takes de dicto lives as what matters, see Bader (forthcoming).)

It seems to me that, if one is sympathetic to person-affecting views, it is because one finds these two theses plausible: 1) only personal value is morally significant - things can only be good or bad if they are good or bad for someone - and 2) non-comparativism, that is, that existence cannot be better or worse for someone than non-existence. But if one accepts (1) and (2), it's obvious that lives de re matter, but unclear why one would care about lives de dicto. What makes counterpart relations fishy is that they are unmotivated by what seem to be the key assumptions in the area.

Comment by michaelplant on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-05T12:34:50.191Z · EA · GW

Thanks a lot for writing this up! I confess I'd had a crack at Meacham's paper some time ago and couldn't really work out what was going on, so this is helpful. One comment.

I don't think the view implies what you say it implies in the Your Reaction part. We have only two choices and all those people who exist in one outcome (i.e. the future people) have their welfare ignored on this view - they couldn't have been better off. So we just focus on the current people - who do exist in both "bomb" and "not-bomb". Their lives go better in "not-bomb". Hence, the view says we shouldn't blow up the world, not - as you claim - that we should. Did I miss something?

Comment by michaelplant on Julia Galef and Angus Deaton: podcast discussion of RCT issues (excerpts) · 2021-01-04T23:49:17.671Z · EA · GW

It strikes me that Deaton has, in theory, got a point. To put a label on it, one should not do 'randomisation (or replication) without explanation'. Regarding Russell's chicken, the flaw in the chicken's assumption that it will get fed today is that it hasn't understood the structure of reality. Yet this does not show one should, in practice, give up on RCTs and replication, only that one should use them in combination with a thoughtful understanding of the world.

For Deaton's worry to have force, one would need to believe that because one context might be different from another, we should assume it is. Yet, saliently, that doesn't follow. There could be a fairly futile argument about on whom the burden of proof lies to show that one context of replication is relevantly like another, but it seems the dutiful next thing to do would be for advocates to argue why they think it is and for critics to argue why it isn't.

I am intrigued by his separate point that getting governments to be more receptive to their citizens is a valuable intervention - the point being that, in poor countries, the governments collect so little tax from those in poverty they feel little incentive to notice them.

Comment by michaelplant on Can I have impact if I’m average? · 2021-01-04T18:11:27.790Z · EA · GW

Thanks for bringing this up. I've been mulling on this for a while and might write something myself. A couple of thoughts.

If you discover you could be doing a lot more good than you currently are, you could have (at least) two reactions: disappointment that you haven't been doing more in the past and/or excitement that you could do better in the future. Both of these perspectives are valid and it seems you could focus on either one. 

For those who, like me, tend to find it quite easy to be disappointed with and hard on themselves, it might help to think: "well, the past has happened. There's nothing you can do about that now. So let's look to the future."

The title of this post made me think you were going to talk about something else, which is whether those who aren't in the top 1% of a given field (I suppose this most naturally applies in academia) have very little impact. I don't know if this is true - it's certainly the sort of thing people believe, but it might just be folk wisdom. 

It does strike me as true that the people at the top of a field have a disproportionate share of the impact. 

What does that imply you should do if you're not in the top 1% and want to do the most good? Well, maybe you should keep going in your field, but maybe you should switch. It depends on context.

A totally separate question is how you should feel if you aren't one of those people having a huge impact.

I take it I should be trying to do the most good I can do,  emphasis on the 'I'. I can't be anyone else, so it's irrelevant, in some important sense, whether or not others do more (or less). The right comparison is between how much you do in your actual life compared to the other lives you could have led. The important bit is that I am trying my best.  Nothing more can be asked because nothing more can be given. 

Comment by michaelplant on How modest should you be? · 2020-12-29T19:25:38.295Z · EA · GW

Three more quick thoughts.

First, how does listening to your peers solve the problem of overconfidence? Surely all your peers are, on average, as overconfident as you? Not saying you need to have an answer, more thinking out loud. 

Second, object-level reasons need to be in the story somewhere. What else are experts supposed to use to form their views - the opinions of existing experts? If experts can and must appeal to object-level reasons, it's then unsettling to say non-experts can make no use of them. 

Third, I agree those quotes are bananas. I've never really understood what continental philosophers take each other to be saying - it's all gloriously unclear to me. 

Comment by michaelplant on How modest should you be? · 2020-12-28T18:31:44.019Z · EA · GW

Thanks John, I really enjoyed this (as I do basically everything you write). Two comments. 

First, would this be a reasonable gloss on your position: "defer to the experts, except when you know what their reasoning is and can see where it's gone wrong"? FWIW, this gloss seems exactly the right response to epistemic humility, taking a principled middle line between "always defer" and "never defer".

Second, I know this is by-the-by to your central claim, but can you explain and/or give examples of where continental philosophers have done "poorly at object-level reasoning"? I am (obviously) very sympathetic to the conclusion, but you don't supply any reasons for it.

It seems quite difficult to argue that a whole class of people engages in poor reasoning, unless membership of that class necessitates accepting something that is clearly false (e.g. one might claim Holocaust deniers all engage in poor reasoning). But I can't think of anything that all continental philosophers subscribe to, in virtue of being continental philosophers, and hence I can't think of anything they all sign up to that clearly displays poor reasoning.

Comment by michaelplant on Wholehearted choices and "morality as taxes" · 2020-12-28T16:31:32.709Z · EA · GW

I like the thought experiment, but I think (unfortunately) the Singerian analogy is closer to reality.

In the "woodland commotion" case, you don't feel bad for not going to help because, well, how could you have known this weird situation was occurring? But it doesn't seem like the world is like that, where it's so non-obvious how we help that no one could blame us for not seeing it.

Indeed, even if the world were like that to us initially, the situation changes as soon as someone tells you what you can do to help.

To adjust your case, suppose you hear a commotion in the distance, but then someone next to you who has binoculars sees what's going on and says "hey, there's a man stuck over there, shall we go help?" Then the case becomes much like Singer's shallow pond, where you can easily help someone else at a cost to you and you know it. So all the concerns about demandingness resurface. But Singer, effective altruists, and many others in society are basically being the guy with binoculars ("hey, do you know how you can do good? Don't buy that latte, buy a bednet instead"), so once you've heard their pitch, you can hardly claim you had no idea how you could have helped.

Comment by michaelplant on [Feedback Request] The compound interest of saving lives · 2020-12-24T20:00:25.472Z · EA · GW

yeah, it's the natural way to think about it unless you're only concerned about the current population. 

Comment by michaelplant on [Feedback Request] The compound interest of saving lives · 2020-12-24T19:55:05.348Z · EA · GW

Hello Monica. I agree there would be different optima given different assumptions. The natural thing to do is to take the world as we, in fact, expect it to be - we're trying to do ethics in the real world. 

Hilary's paper focuses on where we are in relation to the optimum population assuming a 'business as usual' trajectory, i.e. one where we don't try to change what will happen. You need to settle your view on that to know whether you want to encourage or discourage extra people from being born. And, as Hilary quite rightly points out, this is not a straightforward question to answer.

Comment by michaelplant on [Feedback Request] The compound interest of saving lives · 2020-12-22T19:23:34.795Z · EA · GW

Yeah, those are good links. To add to that, a key issue is that the value of saving lives now, and the effects this has on the future, depends on the more general question of where the Earth is in relation to its optimum population trajectory. However, as discussed in Hilary Greaves, Optimum Population Size, it's not clear on any of a range of models whether there are too many or too few people now. Hilary discusses this assuming totalism, but the results are more general, as I discuss in chapter 2.7 of my PhD thesis. (This discussion isn't the main point of the chapter, which is really noting and exploring the tension between believing both that the Earth is overpopulated and that saving lives is good.)

Comment by michaelplant on Introducing Family Empowerment Media · 2020-12-16T10:52:42.010Z · EA · GW

I'm excited to see this getting off the ground and to hear how you do. I thought this write-up was very good. The only thing I was ready to quibble about - the fact that access is not generally such an issue - I see you've already covered.

Comment by michaelplant on 80k hrs #88 - Response to criticism · 2020-12-11T17:37:12.038Z · EA · GW

Oh, what I said wasn't a criticism, so much as a suggestion to how more people might get up to speed on what's under debate!

Comment by michaelplant on 80k hrs #88 - Response to criticism · 2020-12-11T12:23:56.744Z · EA · GW

Thanks for writing this. I haven't (yet) listened to the podcast, and that's perhaps why reading your post felt like joining in the middle of a discussion. Could I suggest that at the top of your post you very briefly say who you are and what your main claim is, just so these are clear? I take it the claim is that YouTube's recommendation engine does not (contrary to recent popular opinion) push people towards polarisation and conspiracy theories. If that is your main claim, I'd like you to say why YouTube doesn't have that feature and why people who claim it does are mistaken.

(FWIW, I'm an old forum hand and I've learnt you can't expect people to read papers you link to. If you want people to discuss them, you need to make your main claims in the post here itself.)

Comment by michaelplant on Introduction to the Philosophy of Well-Being · 2020-12-10T17:46:48.318Z · EA · GW

I think we may well be speaking past each other somewhat. In my example, I took it the toe stubbing was unpleasant, and I don't see any problem in saying the toe stubbing is unpleasant while I am simultaneously experiencing other things such that I feel pleasure overall.

The usual case people discuss here is "how can BDSM be pleasant if it involves pain?" and the answer is to distinguish between bodily pain in certain areas vs a cognitive feeling of pleasure overall resulting from feeling bodily pain.

Comment by michaelplant on Health and happiness research topics—Part 1: Background on QALYs and DALYs · 2020-12-09T12:41:24.513Z · EA · GW

Hello Derek. Thanks for this. 

I don't have major comments on this - you and I have discussed basically all of this before. I'll just set out a few minor clarificatory things. 

In philosophy, welfarism is the view that well-being is the only thing of intrinsic value. There's then a further discussion to be had about what the right theory of well-being is. You say you have three critiques  - welfarist, extra-welfarist, and wellbeing - but those labels are confusing because, on the face of it, the "welfarist" and "wellbeing" critiques should just be the same thing. 

One objection to health measures is that they are not a measure of intrinsic value. There are then two further versions of that: the welfarist version (HALYs don't measure well-being, which is the only thing of value) and the non-welfarist version (HALYs don't measure value, which consists in well-being + some other stuff). 

A further objection, which I don't think you explicitly state (but maybe you did - if so, sorry), is about distributions, which can be raised entirely separately from whatever you think value consists in. Classic answers here are utilitarianism (the value of an outcome is the unweighted sum of whatever is valuable), prioritarianism (the value of an outcome is the weighted sum, where more weight is given to the worse off), and egalitarianism (the value of an outcome is improved in some way if value is more evenly distributed).

What you call "the welfarist critique" seems to be a set of objections from a desire satisfaction theory of well-being. What you call the "extra-welfarist critique" is a combination of non-welfarist and distributional concerns. In your "wellbeing critique" you don't flag objective list objections to HALYs.

As a result, I'd reconceptualise what the issues are.

I agree issue 1 is health =/= value

What you call problem 2 I'd reframe as expectations =/= reality. Both the hedonism and desire satisfaction theories allow that people can make mistakes about what would increase their well-being. What you think will make you happy isn't what will make you happy, etc.

Problem 3 is possibly better described as an issue of inadequate scaling (you could press this concern even if you weren't a hedonist).

One problem you're missing from your list is a concern about distributions.

Re problem 4, you raise the issue that HALYs don't include spillovers. But then, neither do your alternatives. Hence, that's not really a problem for the question "what unit do we measure impact in?" so much as a further question of "how widely, in practice, do we count those impacts?" 

Problem 5 seems just to be a restatement of problem 1, rather than a separate concern, no?

Anyway, keep up the good work!

Comment by michaelplant on Introduction to the Philosophy of Well-Being · 2020-12-09T11:48:51.900Z · EA · GW

Sorry, I really don't follow your point in the first para. 

One thing to say is that experiences of suffering are pro tanto bad (bad 'as far as it goes'). So stubbing your toe is bad, but this may be accompanied by another sensation such that overall you feel good. But the toe stubbing is still pro tanto bad.

Anyway, like I said, none of this is directly relevant to the post itself!

Comment by michaelplant on Introduction to the Philosophy of Well-Being · 2020-12-08T14:24:32.191Z · EA · GW

Hello Akash, thanks for this!

One thing you could test, as an empirical matter, would be to ask people to break their life down into various domains (e.g. health, wealth, relationships, etc.), get them to score those, then have them assign weights to each domain, so as to create an overall score. This would be their satisfaction of global desires.

You could then compare this to their single judgement of life satisfaction.

I don't see why this would be particularly interesting though, and I can't think why the two scores would be different except due to user error. It's not at all clear what life satisfaction is supposed to be if not the aggregate of one's global desires. I discuss this further in my working paper, which is linked to in the blog post.
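To make the exercise above concrete, here's a minimal sketch with entirely hypothetical numbers (the domains, scores, and weights are mine, purely for illustration): score each domain, weight it, sum, and compare with the one-shot judgement.

```python
# Hypothetical respondent data: domain scores (0-10) and self-assigned
# weights (summing to 1), as in the exercise described above.
domains = {"health": 7, "wealth": 5, "relationships": 9}
weights = {"health": 0.5, "wealth": 0.2, "relationships": 0.3}

# Weighted aggregate: a proxy for the satisfaction of global desires.
aggregate = sum(weights[d] * domains[d] for d in domains)

# Compare with the respondent's single life-satisfaction judgement.
single_judgement = 7.0
discrepancy = aggregate - single_judgement  # expect ~0, barring user error
```

On the view in the comment above, any sizeable discrepancy would be surprising, since the single judgement is supposed just to be this aggregate.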

Comment by michaelplant on Introduction to the Philosophy of Well-Being · 2020-12-08T13:23:26.655Z · EA · GW

I'm not quite sure I understand what you mean. My experiences have no value unless there is another experiencer in the world? If I'm the last person on Earth and I stub my toe, I think that's bad because it's bad for me; that is, it reduces my well-being.

Also,  given your concerns, you'll need to define suffering in a way that is distinct from well-being. If I think suffering is just negative well-being - aka 'ill-being' - then your concerns about well-being apply to suffering too. 

Also also, if suffering isn't intrinsically bad, in what sense is it bad?

Finally, I note that all of these concerns are about the value of well-being in a moral theory, which is a distinct question from what this post tackles, which is just what the theories of well-being are. One could (implausibly) say well-being had no moral value (which is, I suppose, almost what impersonal views of value do say...).

Comment by michaelplant on AMA: Jason Crawford, The Roots of Progress · 2020-12-08T11:27:37.715Z · EA · GW

Hello. Thanks for engaging!

First, there are a few different versions of the Easterlin paradox. The most relevant one, for this discussion, is whether economic growth over the long-term (i.e. 10+ years for economists - longer than the business cycle) increases subjective well-being. This version of the paradox holds in quite a few developed nations (see linked paper). That leaves it open what we might find for developing nations.

Second, the only paper I know of that looks globally at SWB over time is De Neve et al. (2018). Those authors use affect data from the Gallup World Poll and find:

The level of (log) per capita GDP is not significantly related to the day-to-day emotional experience of individuals within countries over time. However, emotional well-being is significantly related to macroeconomic movements over the business cycle

This indicates we should not expect that further global growth will increase happiness. At least, there's a case to answer.

Third, the OWID point about flat rates of MH is interesting. I'd not seen that and I'll see if I can find out more. 

Fourth, you make this hypothetical point along the lines of "if SWB data told us this, we should disbelieve it", and then you sort of assume it does show us that. But it doesn't. If you look at the causes and correlates of SWB, they tell a pretty intuitive story, for the most part: higher SWB (measured as happiness or life satisfaction) is associated with greater health and wealth, being in a relationship, lower crime, lower suicide rates, less air pollution, etc. The only result that's puzzling is the Easterlin paradox. But if you think SWB measures get the 'wrong' result with Easterlin, that implies the measures aren't valid, e.g. life satisfaction measures don't actually measure life satisfaction. But then you need to explain how they get the 'right' answers basically everywhere else.

What's more, the Easterlin Paradox isn't that surprising when you try to explain it, e.g. the effect of income on SWB is mostly relative.

Comment by michaelplant on My mistakes on the path to impact · 2020-12-08T00:48:08.289Z · EA · GW

I’d also guess that for most people they should be pushing themselves to apply for more roles than they’d naturally be inclined to.

Fairly minor thing in a big comment, but I'm curious about whether this works if people do it. My own limited experience, and that of a few friends, is that we only got the jobs/roles we really wanted in the end. I wonder if this is because we lacked intrinsic motivation and were probably obviously terrible candidates for the things we were trying to make ourselves excited about. In my case, I tried to be a management consultant after I did my postgrad and only applied for PhDs because I bombed at that (and everything else I applied for).

Comment by michaelplant on The Comparability of Subjective Scales · 2020-12-05T12:04:35.035Z · EA · GW

Hello Jamie.  Thanks for your astute comment! The paper is quite long and I do cover all of this apart from your third bullet point.

We can't objectively measure subjective states and this seems to have led some people to think that you can't use any empirical evidence at all. But you're right that if you make some assumptions e.g. about vignettes, then if the data go one way that raises your confidence in there being/not being cardinality. This approach is just the basic "inference to the best explanation" used across the sciences (one might even say it's the fundamental method of science).

I discuss vignettes specifically in section 5.5. What you suggest has been done. Angelini et al. (2014) asked people their own life satisfaction, then showed them this (and another) vignette:

John is 63 years old. His wife died 2 years ago and he still spends a lot of time thinking about her. He has four children and ten grandchildren who visit him regularly. John can make ends meet but has no money for extras such as expensive gifts for his grandchildren. He has had to stop working recently due to heart problems. He gets tired easily. Otherwise, he has no serious health conditions. How satisfied with his life do you think John is?

And then asked people to rate how satisfied John is. The idea is that we can assume 'vignette equivalence' - everyone will agree how satisfied John is - and use that to make inferences about differential scale use and therefore adjust each individual's scores. The issue, as I say (p24), is that:

However, respondents do not seem to agree [how satisfied John is]. For instance, Angelini et al. (2014) find about 30% of Germans rate ‘John’ from the above vignette as satisfied or very satisfied, but 30% rate him dissatisfied or very dissatisfied. To assume that the respondents agree about John’s life satisfaction requires us to conclude that respondents must mean the same thing by “satisfied” as “dissatisfied”, which strains credulity seeing as one is positive and the other negative. Faced with a choice of vignette equivalence or semantic equivalence (that respondents attach the same meaning to words) the latter seems more plausible

The general point is that we need to think carefully about which assumptions we take as 'ground truths' when testing for cardinality. Vignette equivalence is, I think, not rock solid.

Re your third bullet point, I think it would be really hard to do it that way around - I can't see any way to use that to get a numerical interpretation from the answers, which is what's needed.
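For readers who want to see the mechanics of the adjustment discussed above, here's a minimal sketch with made-up numbers (the data and variable names are mine, not Angelini et al.'s): since everyone rates the same 'John', differences in vignette ratings are attributed to differential scale use and subtracted from each respondent's own score.

```python
# Made-up data: each respondent's own life satisfaction (0-10) and
# their rating of the shared vignette, 'John'.
own_scores      = [7, 4, 8, 6]
vignette_scores = [5, 4, 7, 6]

# Under vignette equivalence, John's 'true' satisfaction is the same for
# everyone, so each person's deviation from the mean vignette rating is
# treated as scale-use bias and removed from their own score.
mean_vignette = sum(vignette_scores) / len(vignette_scores)
adjusted = [own - (vig - mean_vignette)
            for own, vig in zip(own_scores, vignette_scores)]
```

The worry in the comment above is precisely that the first step fails: if respondents genuinely disagree about John, the deviations aren't all scale-use bias, and the adjustment corrects for the wrong thing.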

Comment by michaelplant on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T12:09:23.456Z · EA · GW

Are you aware of the research on the questionable, and perhaps non-existent, relationship between economic growth and measures of subjective well-being (e.g. life satisfaction and happiness) over the long run, aka the Easterlin Paradox? I assume you are if you work with OurWorldInData. If so, does this worry you about 'progress' as I think(?) you're understanding it? If not, why not?

I suppose I'm pretty sceptical that (further) technological progress will do that much to improve our quality of life. There's this related, not-so-well-known worry that rising rates of mental illness are because of, not despite, modern living: we now live in ways quite far from our environment of evolutionary adaptation. I recognise my scepticism here is counterintuitive, but I think it's the most plausible reading of the well-being data. I could say a bit more about this and plan to write up my thoughts some time.

I run the Happier Lives Institute and have been itching to talk to advocates of progress studies about this concern for some time. 

Comment by michaelplant on Brief book review 2020 · 2020-12-03T22:41:12.407Z · EA · GW

Did you read any books you would not recommend? Because that would be a useful thing to hear too. 

(Also, this list makes me feel bad about my lack of book reading...)

Comment by michaelplant on The Comparability of Subjective Scales · 2020-12-01T10:46:02.257Z · EA · GW

Ah, that's a nice point. I discuss this in section 5.5 of the paper. Quote:

The final condition is whether different individuals use the same endpoints at a time. There are two types of concern here.

The first is whether there are what Nozick (1974, 41) called ‘utility monsters’, individuals who can and do experience much greater magnitudes of happiness (or any other sort of subjective state), than others.

I won’t dwell on this as it seems unlikely there would be substantial differences in humans’ capacities for subjective experiences. Presumably there are evolutionary pressures for each species to have a range of sensitivity that is optimal for survival. To return to an example noted earlier, being immune to pain is an extremely problematic condition that would put someone at an evolutionary disadvantage. Further, even if there are differences, we would expect these to be randomly distributed, in which case they would wash out in large samples.

So to generate a serious worry that there's a problem at the level of group averages (which is the relevant level for most decision-making), you'd have to argue for and explain the existence of non-trivial differences between groups. It's tricky to think of real-life cases outside people who have genetic conditions. But this wouldn't motivate us to think that, say, members of two nations have different capacities.
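The washing-out claim can be illustrated with a toy simulation (entirely hypothetical numbers, my own construction): give everyone the same true average happiness but a random, individual 'stretch' in how they use the scale, and the group mean stays close to the truth in large samples.

```python
import random

random.seed(0)  # make the toy example reproducible

def group_mean_reported(n):
    """Mean reported happiness for n people whose true level averages 6/10
    but who each report through a random individual scale 'stretch'."""
    reports = []
    for _ in range(n):
        true_level = random.gauss(6.0, 1.0)   # true happiness, mean 6
        stretch = random.gauss(1.0, 0.1)      # idiosyncratic scale use, mean 1
        reports.append(true_level * stretch)
    return sum(reports) / n

# With random (mean-1) stretches, the large-sample group average is
# close to the true average of 6.0, even though individuals differ.
large_sample_mean = group_mean_reported(100_000)
```

If the stretches were instead systematically different between two groups, their group averages would diverge; that's the kind of non-trivial group difference the comment says one would need to argue for.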

Comment by michaelplant on The Comparability of Subjective Scales · 2020-11-30T16:31:44.784Z · EA · GW

Just to flag, this topic has been the subject of three recent forum posts in the last 6 months. This paper addresses the concerns raised there.

Milan Griffes asks whether SWB scales might shift over time (intertemporal cardinality), and Fin Moorhouse shared his dissertation on the same topic.

Aidan Goth, in a post commenting on a forum post by the Happier Lives Institute ("Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty"), wonders whether subjective scales are comparable across people (interpersonal cardinality).

In this paper, I argue the scales are likely to be cardinally comparable both over time and across people. This is a bold claim to make and, if true, a pretty important one, because it means we can basically interpret subjective data at face value, rather than worrying about having to make fancy adjustments based on e.g. the nationality of the respondents.

Comment by michaelplant on Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was · 2020-11-24T14:12:39.586Z · EA · GW

Yes, these are some of the many things I wish I'd known in advance of starting on the project! 

Comment by michaelplant on The effect of cash transfers on subjective well-being and mental health · 2020-11-23T21:46:04.219Z · EA · GW

Just on the different effect sizes from different methods, where do/would RCT methods fit in with the four discussed by Kaats?

FWIW, I agree that a meta-analysis of RCTs isn't a like-for-like comparison with a single RCT. That said, when (if?) we exhaust the existing SWB literature relevant to cost-effectiveness, we should present everything we find (which shouldn't be hard, as there's not much!).

Comment by michaelplant on Questions for Peter Singer's fireside chat in EAGxAPAC this weekend · 2020-11-20T19:05:48.994Z · EA · GW

Does he have a position on moral uncertainty and, if so, what does he take its implications to be?

Comment by michaelplant on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-19T11:16:03.203Z · EA · GW

"if it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it"

Comment by michaelplant on Research Summary: The Intensity of Valenced Experience across Species · 2020-11-13T12:40:24.918Z · EA · GW

Hello Jason. Thanks for doing all this work! I haven't kept up with all of it, so apologies if you've covered this elsewhere, but I had a nascent thought that links and challenges your two tentative conclusions.

Okay, so the idea is that valenced states - colloquially, pleasure and pain - provide "oomph" to get creatures to do things. That seems fine. But it's unclear what this tells us about the intensities of experiences. Imagine we have two creatures that are the same, except that A has 10x the valence intensity of B. Why should there be any difference in how the two of them behave and, thus, in their evolutionary fitness? Couldn't they just act in the same way? And supposing more oomph is better, how much oomph should we expect, given that there are, e.g., energy costs to producing sensations?

From the armchair, what matters for behaviour is the relative intensity of different things for a given creature: if the deer loves eating berries and doesn't fear pain enough to run away from wolves, it will get eaten. But that doesn't tell us about inter-creature cardinal intensities.

My thought is something like this. Creatures need a range of cardinal intensities large enough to allow them to choose between all the different behaviours they need to undertake to survive and reproduce. As a toy example, if you only have three levels of pleasure - 0, 1, and 2 - but you have very many different choices to make - eat, mate, run away, sleep, etc. - then that's not enough resolution to make decisions. An entity that needs to make more decisions needs a greater range of sensations.

This takes us back, crudely, to something like brain size as a proxy for the intensity of valenced states. And to the possibility that 'simple' creatures, i.e. those that don't have many decisions to make, don't feel very much. I'm not sure where that leaves us in practice.

Comment by michaelplant on Longtermism and animal advocacy · 2020-11-13T12:14:49.408Z · EA · GW

I don't yet have a strong view on how plausible it is that animal advocacy is a priority for longtermism. However, I think it's worth noting that, if it is, there are probably quite a few other sorts of projects that would qualify using exactly the same arguments. 

For instance, at the Happier Lives Institute, we spend a lot of time thinking about how best to measure well-being. There's an analogous argument that, if governments had better measures of well-being - e.g. better than GDP - and used them to make public policy decisions, that would have enormously valuable consequences over the long run. I won't do it here, but the arguments are sufficiently analogous that, in Tobias' post, you could replace "animal advocacy" with "well-being measurement", keep the rest of the text the same, and it would still make sense. So perhaps well-being measurement is a plausible longtermist priority too.

Other examples that might work, just off the top of my head: "democratic institutions", "peace building", "education".

It's not clear to me if the right way to update is (a) all these 'society change' interventions are plausible longtermist priorities or (b) none of them are. I lean toward (a), but I'm not very confident.

Comment by michaelplant on Life Satisfaction and its Discontents · 2020-10-21T21:33:52.539Z · EA · GW

That's a nice point. What life satisfaction views require, more specifically, is not just that the entity thinks about its life as a whole, but that it thinks about its life as a whole and makes a judgement about how its life is going overall. It's rather implausible that animals do the latter, which means they have no well-being on this theory.

Comment by michaelplant on Evidence on correlation between making less than parents and welfare/happiness? · 2020-10-15T20:46:05.989Z · EA · GW

The most recent worldwide study on income and subjective well-being is Jebb et al. (2018). FWIW, they find there are "satiation" points for the effect of income on SWB - measured as happiness, positive affect, and negative affect - nearly everywhere, but that the satiation point is often higher than $75k.

Comment by michaelplant on TIO: A mental health chatbot · 2020-10-14T19:18:07.140Z · EA · GW

Hello Sanjay, thanks both for writing this up and actually having a go at building something! We did discuss this a few months ago but I can't remember all the details of what we discussed.

First, is there a link to the bot so people can see it or use it? I can't see one.

Second, my main question for you - sorry if I asked this before - is: what is the retention for the app? When people ask me about mental health tech, my main worry is not whether it might work if people used it, but whether people actually want to use it, given the general rule that people try apps once or twice and then give up on them. If you build something people want to keep using and can provide that service cheaply, it would very likely be highly cost-effective.

I'm not sure it's that useful to create a cost-effectiveness model based on the hypothetical scenario where people use the chatbot: the real challenge is to get people to use it. It's a bit like me pitching a business to venture capitalists saying "if this works, it'll be the next Facebook", to which they would say "sure, now tell us why you think it will be the next Facebook".

Third, I notice your worst-case scenario is that the effect lasts 0.5 years, but I'd expect using a chatbot to only make me feel better for a few minutes or hours, so unless people are using it many times, I'd expect the impact to be slight. Quick maths: a 1-point increase on a 0-10 happiness scale for 1 day is about 0.003 happiness life-years.
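To make the quick maths explicit, here's a minimal sketch (the helper function and its name are mine, not from the original analysis):

```python
# Happiness life-years = (gain in 0-10 scale points) x (duration in years).
# So a 1-point gain for a single day is 1 * (1/365) ≈ 0.003.

def happiness_life_years(points: float, days: float) -> float:
    """A `points` gain on a 0-10 happiness scale, sustained for `days` days."""
    return points * days / 365

print(round(happiness_life_years(1, 1), 4))  # prints 0.0027
```

For comparison, a 1-point gain lasting the 0.5-year worst-case scenario would be `happiness_life_years(1, 182.5) = 0.5` happiness life-years, which shows how much the duration assumption drives the estimate.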

Comment by michaelplant on Comments on “Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty” · 2020-10-13T13:19:02.906Z · EA · GW

Okay, we're on the same page on all of this. :) A further specific empirical project would involve trying to understand population dynamics in the locations EAs are considering.

Comment by michaelplant on [Link] How understanding valence could help make future AIs safer · 2020-10-09T10:43:09.625Z · EA · GW

There are 10 reasons here, but isn't there just one key point: if we could explain to an AGI what happiness is, then we could get it to create more happiness (or, at least, not create more unhappiness)? I don't mean to sound like I'm dismissing this - this is an important and laudable goal - I'm wondering if I'm missing something.

Comment by michaelplant on If you like a post, tell the author! · 2020-10-08T09:36:50.476Z · EA · GW

In accordance with the post: I thought this was useful. As an old-time forum hack, I often have people tell me they feel too scared to post here because all you seem to get is people trying to destroy your ideas. It shouldn't be the case that the only people brave enough to post here are the types who score low in agreeableness (such as yours truly).

Comment by michaelplant on What actually is the argument for effective altruism? · 2020-10-07T11:45:20.374Z · EA · GW

If your goal is to do X, but you're not doing as much as you can of X, you are failing (with respect to X).

But your claim is more like "If your goal is to do X, you need to do Y, otherwise you will not do as much of X as you can". The Y here is "the project of effective altruism". Hence there needs to be an explanation of why you need to do Y to achieve X. If X and Y are the same thing, we have a tautology ("If you want to do X, but you do not-X, you won't do X").

In short, it seems necessary to say what is distinctive about the project of EA.

Analogy: say I want to be a really good mountain climber. Someone could say, oh, if you want to do that, you need to "train really hard, invest in high-quality gear, and get advice from pros". That would be helpful, specific advice about what the right means to achieve my end are. Someone who says "if you want to be good at mountain climbing, follow the best advice on how to be good at mountain climbing" hasn't yet told me anything I don't already know.

Comment by michaelplant on Sortition Model of Moral Uncertainty · 2020-10-07T10:58:24.183Z · EA · GW

Regarding stakes, I think the OP's point is that it's not obvious that being sensitive to stakes is a virtue of a theory, since it can lead to low-credence, high-stakes theories "swamping" the others, and that seems, in some sense, unfair. A bit like if your really pushy friend always decides where your group of friends goes for dinner, perhaps. :)

I'm not sure your point about money pumping works, at least as stated: you're talking about a scenario where you lose money over successive choices. But what we're interested in is moral value, and the sortition model will simply deny there's a fixed amount of money in the envelope each time one 'rolls' to see what one's moral view is. It's more like there's $10 in the envelope at stage 1, $100 at stage 2, $1 at stage 3, etc. What this brings out is the practical inconsistency of the view. But again, one might think that's a theoretical cost worth paying to avoid other theories' costs, e.g. fanaticism.

I rather like the sortition model - I don't know if I buy it, but it's at least interesting and one option we should have on the table - and I thank the OP for bringing it to my attention. I would flag that the "worldview diversification" model of moral uncertainty has a similar flavour, where you divide your resources into different 'buckets' depending on the credence you have in each bucket. See also the bargaining-theoretic model, which treats moral uncertainty as a problem of intra-personal moral trade. These two models also avoid fanaticism and leave one open to practical inconsistency.

Comment by michaelplant on Comments on “Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty” · 2020-10-05T16:01:04.359Z · EA · GW

On moral value as a linear function of well-being and comparability of SWB measures across different income settings

As you allude to, there are two issues here. If I think person A going from 0/10 to 1/10 life satisfaction has greater moral value than person B going from 9/10 to 10/10, that might be because (1) I think each has the same increase in well-being, but I want to give extra weight to the worse off. This is the prioritarian point you say you are not making.

The alternative, (2), is that I think A really has had a bigger increase in well-being than B, even though both have reported a 1-unit change in life satisfaction. (2) raises a concern about whether subjective scales are cardinally comparable. This isn’t a moral problem so much as a scientific one of measurement. Technically, the issue is whether numerical scores from subjective self-reports are cardinally comparable. I’ve got a working paper on this topic (not public apart from this link) where I delve into this and conclude subjective scales are likely cardinally comparable. The basic issue here, I think, is about how people use language when interpreting survey questions; not much seems to have been written about it. With regard to your point about “comparability of SWB measures across different income settings”, the document I linked to provides a rationale for why I suspect they are comparable.

Comment by michaelplant on Comments on “Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty” · 2020-10-05T15:58:52.762Z · EA · GW

On totalism and births averted per life saved

As you develop this methodology further, I think it’s important that you account for other moral views, most notably totalism. As you’re aware, totalism is a popular view (especially in EA) and, depending on how we ought to respond to moral uncertainty, we might think that totalism (or something similar) dominates our decision calculus when acting under moral uncertainty (Greaves and Ord 2017). I think it would be valuable to know what a similar totalist analysis yields.

I agree it’s important to see the value of our actions is sensitive to concerns about population ethics, especially in this case where it seems it could make such a difference. A few comments.

First, it’s worth noting that all views of population ethics will be somewhat sensitive to the issue of how saving lives affects total population size. This is because whether there are more or fewer people now arguably has an impact on the well-being of everyone else (present and future). Many people seem to think the Earth is overpopulated, in the sense that adding people now is overall worse. There are a few different ways of thinking about this, but one general practical implication is that the worse it is to add people (because you want a smaller population), the worse it will also be to save lives. See Greaves’ (2015) analysis and Plant (2019, chapter 2), which extends Greaves’ paper.

Second, I agree that if you’re thinking about how mortality rates affect fertility, this will be particularly important on totalism in this context, because totalism gives so much weight to creating new lives, although it will apply to other views of population ethics too.

Third, when trying to understand what the “lives saved:births averted” ratio is, what’s relevant is not just mortality or fertility rates by themselves, but the combination of them. If parents are trying to have a set number of children who survive to adulthood, then reducing mortality might not change the total number of future people much, because parents adjust fertility. I think this is a topic for further work, and I don’t claim expertise on the population dynamics in any particular context.

Comment by michaelplant on What actually is the argument for effective altruism? · 2020-09-28T14:56:18.592Z · EA · GW

Interesting write-up, thanks. However, I don't think that's quite the right claim. You said:

The claim: If you want to contribute to the common good, it’s a mistake not to pursue the project of effective altruism.

But this claim isn't true. If I only want to make a contribution to the common good, but I'm not at all fussed about doing more good rather than less (given whatever resources I'm deploying), then I don't have any reason to pursue the project of effective altruism, which you say is searching for the actions that do the most good.

A true alternative to the claim would be:

New claim: if you want to contribute to the common good as much as possible, it's a mistake not to pursue the project of effective altruism.

But this claim is effectively a tautology, seeing as effective altruism is defined as searching for the actions that do the most good. (I suppose someone who thought how to do the most good was just totally obvious would see no reason to pursue the project of EA).

Maybe the claim of EA should emphasise the non-obviousness of what doing the most good involves. Something like:

If you want to have the biggest positive impact with your resources, it's a mistake to just trust your instincts(/common sense?) about what to do rather than engage in the project of effective altruism: to thoroughly and carefully evaluate what does the most good.

This is an empirical claim, not a conceptual one, and its justification would seem to be the three main premises you give.

Comment by michaelplant on Factors other than ITN? · 2020-09-28T09:45:59.040Z · EA · GW

If I can be forgiven for tooting my own horn, I also wrote a forum post about the framework around the same time John posted his. EAs have often talked about "cause prioritisation" as being distinct from "intervention evaluation": the former is done in terms of ITN, the latter in terms of cost-effectiveness. I agree with Ben Todd's suggestion that the best way to understand ITN is as three factors that combine into a calculation of cost-effectiveness (aka "good done per dollar"). One result of this is that, I think, it's confused to treat "cause prioritisation" and "intervention evaluation" as two different things. I discuss some implications of this.

Comment by michaelplant on Life Satisfaction and its Discontents · 2020-09-28T09:34:09.388Z · EA · GW

Glad you raise this: I discuss the possibility of different species having different accounts of welfare in the paper, in section 5.2 on the "too few subjects" objection! The main weirdness of such a view is that it's vulnerable to spectrum arguments: it implies one of your ancestors had their well-being consist in (say) happiness and life satisfaction, while their slightly less cognitively developed parents had their well-being consist just in happiness.

Comment by michaelplant on Life Satisfaction and its Discontents · 2020-09-28T09:28:32.683Z · EA · GW
Is automaximization not an objection to desire theories as well?

As I state above, the first point in the paper is that life satisfaction theories seem to be a particular kind of desire theory, the global desire theory, in disguise. Hence, the two objections I raise are objections to both life satisfaction theories and global desire theories (which I claim are really just the same view). The two objections won't apply to non-global desire theories; as I say in the paper, that might be a reason for people who like desire theories to instead adopt a non-global version.

Or should we accept that we don't get to decide all of our desires or how easy it is to satisfy them?

It's clear we don't get to decide on many of our desires! We simply have urges to do all sorts of things. See the distinction in the paper between local and global desires.

Comment by michaelplant on Life Satisfaction and its Discontents · 2020-09-28T09:20:32.369Z · EA · GW

Just to flag: I've nearly finished another paper in which I explore whether measures of subjective states are cardinal and conclude they probably are (at least, on average). Stay tuned.

There are many parts to this topic, and I'm not sure whether you're denying (1) that subjective states are experienced in cardinal units, or (2) that they are experienced in cardinal units but that our measures are (for one reason or another) not cardinal. I think you mean the former. But we do think of affect as being experienced in cardinal units; otherwise we wouldn't say things like "this will hurt you as much as it hurts me". Asking people to state their preferences doesn't solve the problem: what we are inquiring about are the intensities of sensations, not what you would choose, so asking about the latter doesn't address the former.