Update from the Happier Lives Institute 2020-04-30T15:04:23.874Z · score: 79 (41 votes)
Understanding and evaluating EA's cause prioritisation methodology 2019-10-14T19:55:28.102Z · score: 37 (19 votes)
Announcing the launch of the Happier Lives Institute 2019-06-19T15:40:54.513Z · score: 123 (84 votes)
High-priority policy: towards a co-ordinated platform? 2019-01-14T17:05:02.413Z · score: 22 (9 votes)
Cause profile: mental health 2018-12-31T12:09:02.026Z · score: 99 (59 votes)
A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare 2018-10-25T15:48:03.377Z · score: 64 (44 votes)
Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was 2018-05-23T10:30:43.748Z · score: 65 (55 votes)
Could I have some more systemic change, please, sir? 2018-01-22T16:26:30.577Z · score: 24 (19 votes)
High Time For Drug Policy Reform. Part 4/4: Estimating Cost-Effectiveness vs Other Causes; What EA Should Do Next 2017-08-12T18:03:34.835Z · score: 8 (8 votes)
High Time For Drug Policy Reform. Part 3/4: Policy Suggestions, Tractability and Neglectedess 2017-08-11T15:17:40.007Z · score: 8 (8 votes)
High Time For Drug Policy Reform. Part 2/4: Six Ways It Could Do Good And Anticipating The Objections 2017-08-10T19:34:24.567Z · score: 11 (10 votes)
High Time For Drug Policy Reform. Part 1/4: Introduction and Cause Summary 2017-08-09T13:17:20.012Z · score: 20 (22 votes)
The marketing gap and a plea for moral inclusivity 2017-07-08T11:34:52.445Z · score: 18 (30 votes)
The Philanthropist’s Paradox 2017-06-24T10:23:58.519Z · score: 2 (8 votes)
Intuition Jousting: What It Is And Why It Should Stop 2017-03-30T11:25:30.479Z · score: 5 (11 votes)
The Unproven (And Unprovable) Case For Net Wild Animal Suffering. A Reply To Tomasik 2016-12-05T21:03:24.496Z · score: 15 (15 votes)
Are You Sure You Want To Donate To The Against Malaria Foundation? 2016-12-05T18:57:59.806Z · score: 28 (29 votes)
Is effective altruism overlooking human happiness and mental health? I argue it is. 2016-06-22T15:29:58.125Z · score: 28 (29 votes)


Comment by michaelplant on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T21:18:13.641Z · score: 2 (1 votes) · EA · GW

Thanks for the thoughtful reply!

To fill out the details of what you're getting at, I think you're saying: "the welfare level of an animal is X% of its capacity C. We're confident enough of both X and C for animal A in the given scenario that it's better to help animal A than animal B". That may be correct, but you're then assuming you can know the welfare levels because you know the percentage of the capacity. But I can make the same challenge again: why should we be confident we've got the percentage of the capacity right?

I agree we should, in general, use inference to the best explanation. I'm not sure we know how to do that when we don't have access to the relevant evidence (the private, subjective states) to draw inferences from. If it helps, try putting on the serious sceptic's hat and asking "okay, we might feel confident animal A is suffering more than animal B, and we do make these sorts of judgements all the time, but what justifies this confidence?". What I'd really like to understand (not necessarily from you - I've been thinking about this for a while!) is what chain of reasoning would go into that justification.

Comment by michaelplant on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T15:00:13.445Z · score: 11 (6 votes) · EA · GW

Thanks for writing this up - I thought this was a very philosophically high-quality forum post, both in terms of its clarity and familiarity with the literature, and have given it a strong upvote!

With that said, I think you've been too quick in responding to the first objection. An essential part of the project is to establish the capacities for welfare across species, but that's neither necessary nor sufficient to make comparisons - for that, we need to know the actual levels of well-being for different entities (or, at least, the differences in their well-being). But knowing about the levels seems very hard.

Let me quickly illustrate with some details. Suppose chicken well-being ranges from -2 to +2, but for cows it's -5 to +5. Suppose further that the average actual well-being levels of chickens and cows in agriculture are -1 and -0.5, respectively. Should we prevent one time-period of cow-existence or of chicken-existence? The answer is chicken-existence, all else equal, even though cows have the greater capacity.
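The arithmetic can be sketched directly (a toy illustration in Python, using only the hypothetical numbers above - nothing here is an empirical estimate):

```python
# Hypothetical welfare capacities (ranges) and average actual levels.
chicken = {"range": (-2, 2), "level": -1.0}
cow = {"range": (-5, 5), "level": -0.5}

# Preventing one time-period of existence averts the animal's (negative)
# welfare level, so what matters is the actual level, not the capacity.
suffering_averted = {
    name: -animal["level"]
    for name, animal in {"chicken": chicken, "cow": cow}.items()
}

best_target = max(suffering_averted, key=suffering_averted.get)
print(best_target)  # chicken: averts 1.0 units vs the cow's 0.5
```

The capacities appear nowhere in the decision - which is the point.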

Can you make decisions about what maximises well-being if you know the capacities but not the average levels? No. What you need to know are the levels. Okay, so can we determine what the levels, in fact, are? You say:

Of course, measuring the comparative suffering of different types of animals is not always easy. Nonetheless, it does appear that we can get at least a rough handle on which practices generally inflict the most pain, and several experts have produced explicit welfare ratings for various groups of farmed animals that seem to at least loosely converge

My worry is: what makes us think that we can even "get at least a rough handle"? You appeal to experts, but why should we suppose the experts have any idea? They could all agree with each other and still be wrong. (Arguably) silly comparison: suppose I tell you a survey of theological experts reported that approximately 1 to 100 angels can dance on the head of a pin. What should you conclude about how many angels can dance on a pin? Maybe nothing. What you would want to know is what evidence those experts had for their opinions.

I'm sceptical we can have evidence-based inter-species comparisons of (hedonic) welfare-levels at all.

Suppose hedonism is right and well-being consists in happiness. Happiness is a subjective state. Subjective states are, of necessity, not measurable by objective means. I might measure what I suppose are the objective correlates of subjective states, e.g. certain brain functionings, but how do I know what the relationship is between the objective correlates and the subjective intensities? We might rely on self-reports to determine that relationship. That seems fine. However, how do we extend that relationship to beings that can't give us self-reports? I'm not sure. We can make assumptions (about the general relationship between objective brain states and subjective intensities) but we can't check if we're right. Of course, we will still form opinions here, but it's unclear how one could acquire expertise at all. I hope I'm wrong about this, but I think this problem is pretty serious.

If well-being consists in objective goods, e.g. friendship or knowledge, it might be easier to measure those, although there will be much apparent arbitrariness involved in operationalising these concepts.

There will be issues with desire theories too either way, depending on whether one opts for a mental-state or non-mental-state version, but that's a further issue I don't want to get into here.

Comment by michaelplant on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T12:18:48.177Z · score: 13 (10 votes) · EA · GW

Ben, could you elaborate on how important you think representativeness is? I ask, because the gist of what you're saying is that it was bad the leaders' priorities were unrepresentative before, which is why it's good there is now more alignment. But this alignment has been achieved by the priorities of the community changing, rather than the other way around.

If one thought EA leaders should represent the current community's priorities, then the fact the current community's priorities had been changed - and changed, presumably, by the leaders - would seem to be a cause for remorse, not celebration.

As a further comment, if representativeness is a problem the simple way to solve this would be by inviting people to the leaders' forum to make it more representative. This seems easier than supposing current leaders should change their priorities (or their views on what they should be for the community).

Comment by michaelplant on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T12:06:07.280Z · score: 37 (14 votes) · EA · GW

I share Denise's worry.

My basic concern is that Ben is taking the fact that there is high representativeness now to be a good thing, while not seeming very worried about how this higher representativeness came about. It (as Denise points out) could well just be the result of people who aren't enthused by the current leaders' vision simply leaving. The alternative route, where the community changes its mind and follows the leaders, would be better.

Anecdotally, it seems like more of the first has happened (but I'd be happy to be proved wrong). Yet, if one thinks representativeness is good, achieving representativeness by having people who don't share your vision leave doesn't seem like a good result!

Comment by michaelplant on Reducing long-term risks from malevolent actors · 2020-05-07T10:03:11.938Z · score: 8 (4 votes) · EA · GW

Thanks for this write-up, I thought it was really interesting and not something I'd ever considered - kudos!

I'll now home in on the bit of this I think needs most attention. :)

It seems you think one of the essential things is developing and using manipulation-proof measures of malevolence. If you were very confident we couldn't do this, how much of an issue would that be? I raise this because it's not clear to me how such measures could be created or deployed. It seems you have (1) self-reports, (2) other-reports, (3) objective metrics, e.g. brain scans. If I were really sneaky, I would just lie or not take the test. If I were really sneaky, I would also be able to con others, at least for a long time - perhaps until I was in power. Regarding objective measures, there will be 'Minority Report'-style objections to actually using them in advance, even if they have high predictive power (which might be tricky, as it relies on collecting good data, which seems to require the consent of the malevolent).

The area where I see this sort of stuff working best is in large organisations, such as civil services, where the organisations have control over who gets promoted. I'm less optimistic this could work for the most important cases, political elections, where there is not a system that can enforce the use of such measures. But it's not clear to me how much of an innovation malevolence tests are over the normal feedback processes used in large organisations. Even if they could be introduced in politics somehow, it's unclear how much of an innovation this would be: the public already try to assess politicians for these negative traits.

It might be worth adding that the reason Myers-Briggs-style personality tests are, so I hear, more popular in large organisations than the (more predictive) "Big 5" personality test is that Myers-Briggs has no ostensibly negative dimensions. If you pass round a Big-5 test, people might score highly on neuroticism or low on openness and get annoyed. If this is the case, which seems likely, I find it hard to imagine that e.g. Google will insist that staff take a test they know will assess them for malevolence!

As a test for the plausibility of introducing and using malevolence tests, notice that we could already test for psychopathy but we don't. That suggests there are strong barriers to overcome.

Comment by michaelplant on Update from the Happier Lives Institute · 2020-05-05T18:21:54.792Z · score: 2 (1 votes) · EA · GW

Thanks very much for your support Sam, we are grateful for it! As we've discussed with you, we are also keen to see how thinking in terms of SWB illuminates the cause prioritisation analysis.

It's easier to see how it could do this in some areas than in others. As we're relying on self-report data, it's not obvious how we could use that to compare humans to non-humans (although one project is to think through whether this is really impossible). And comparisons of near-term to long-term interventions are plausibly not sensitive to one's measure of welfare anyway: the usual long-termist line is that long-term concerns 'swamp' near-term ones whichever way you look at it.

Comment by michaelplant on Update from the Happier Lives Institute · 2020-05-05T18:19:55.252Z · score: 5 (3 votes) · EA · GW

Thanks for this! Our position hasn’t changed much since the last post. We still plan to focus on mostly near-term (human) welfare maximisation, but we'd like to see if we can, in the next couple of years, do/say something useful about welfare maximisation in other areas (i.e. animals, the long-term). We haven't thought much about what this would be yet: we want to develop expertise in the area that seems most useful (by our lights) before thinking about expanding our focus.

Speaking personally, I take what is effectively a worldview diversification approach to moral uncertainty (this is a change), although my rationale is different (I plan to write this up at some point). This, combined with my person-affecting sympathies, means I want to put most, but not all, of my efforts into helping humans in the near term.

Comment by michaelplant on How can I apply person-affecting views to Effective Altruism? · 2020-04-30T17:34:20.111Z · score: 4 (2 votes) · EA · GW

Yes, agree you could save existing animals. I'd actually forgotten until you jogged my memory, but I talk about that briefly in my thesis (chapter 3.3, p92) and suppose saving animals from shelters might be more cost-effective than saving humans (given a PAV combined with deprivationism about the badness of death).

Comment by michaelplant on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T09:59:19.933Z · score: 3 (2 votes) · EA · GW

I think you might not have clocked the OP's comment that the morally relevant beings are just those that exist whatever we do, which would presumably rule out concern for lives in the far future.*

*Pedantry: there could actually be future aliens who exist whatever we do now. Suppose some aliens will turn up on Earth in 1 million years and we've had no interaction with them. They will be 'necessary' from our perspective and thus the type of person-affecting view stated would conclude such people matter.**

**Further pedantry: if our actions changed their children, which they presumably would, it would just be the first generation of extraterrestrial visitors who mattered morally on this view.

Comment by michaelplant on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T09:50:12.374Z · score: 19 (12 votes) · EA · GW

I'm struggling to think of much written on this topic - I'm a philosopher and reasonably sympathetic to person-affecting views (although I don't assign them my full credence), so I've been paying attention to this space. One non-obvious consideration is whether to take an asymmetric person-affecting view (extra happy lives have no value, extra unhappy lives have negative value) or a symmetric person-affecting view (extra lives have no value).

If the former, one is pushed towards some concern for the long term anyway, as Halstead argues here, because there will be lots of unhappy lives in the future that it would be good to prevent.

If the latter - which I think, after long reflection, is the more plausible version, even though it is prima facie more unintuitive - then that is practically sufficient, but not necessary, for concentrating on the near term, i.e. this generation of humans; animals won't, for the most part, exist whatever we choose to do. I say not necessary because one could, in principle, think all possible lives matter and still focus on near-term humans due to practical considerations.

But 'prioritise current humans' still leaves it wide open what you should do. The 'canonical' EA answer for how to help current humans is by working on global (physical) health and development. It's not clear to me that this is the right answer. If I can be forgiven for tooting my own horn, I've written a bit about this in this (now somewhat dated) post on mental health, the relevant section being "why might you - and why might you not - prioritise this area [i.e. mental health]".

Comment by michaelplant on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T09:29:27.896Z · score: 6 (3 votes) · EA · GW

Plausibly, foetuses will not be morally relevant on such a view, as they won't exist whatever we choose to do.

Comment by michaelplant on Coronavirus: how much is a life worth? · 2020-03-24T15:44:19.090Z · score: 2 (1 votes) · EA · GW

Yes, good point. I'm now inclined to think your and Paul F's analyses need to be combined in some way, though it's not immediately clear to me how.

He is indeed converting money into quality of health, not just quantity, my mistake.

Comment by michaelplant on Coronavirus: how much is a life worth? · 2020-03-24T15:36:21.613Z · score: 2 (1 votes) · EA · GW

An in-the-weeds methodological point: your analysis is arguably quite conservative because of where you place the 'neutral point' (the point equivalent to non-existence) on a 0-10 life satisfaction scale. You say:

Global life satisfaction averages 5.17/10 (as of 2018), making 4.5 years x 5.17/10 = 2.33 WALYs lost per death. An Australian National University model assumes 15 to 68 million pandemic deaths worldwide (in the first year), which would thus lose 35 to 158 million WALYs [...]
Between 2007 and 2011, global wellbeing (yellow line on chart) fell by nearly 0.2 life satisfaction points out of 10, then recovered. I will attribute all of this dip (blue area) to the financial crisis; and as it was mostly over a two-year period (2008–10), averaging 0.1/10 p.a., it totals about 0.2/10 = 0.02 WALYs lost per person worldwide, or 137 million WALYs overall; i.e. 0.9 to 3.9 times the impact of the deaths.

This counts 0/10 as equivalent to non-existence, i.e. it is not possible for respondents to say that their lives are worse than death.

It's unclear where to put the neutral point - an issue I flag in my DPhil thesis and which is noted in the Happier Lives Institute's Research Agenda. The other obvious place to put it is 5/10, on which the WALYs lost per death would be about 0.08 (4.5 years x 0.17/10 per year), rather than 2.33, and the well-being value of saving lives would be smaller than the well-being value of the economic loss.

As a point about sensitivity, then: the higher the neutral point is above 0/10, the more easily the conclusion you reach holds.
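To see the sensitivity concretely, here's a small sketch (using the figures from the comment; the intermediate neutral point is purely illustrative):

```python
# WALYs lost per death as a function of where the neutral point sits
# on the 0-10 life satisfaction scale.
YEARS_LOST_PER_DEATH = 4.5     # from the post
AVG_LIFE_SATISFACTION = 5.17   # global average, 2018

def walys_lost_per_death(neutral_point):
    # Well-being above neutrality per year, rescaled to a 0-1 range.
    return YEARS_LOST_PER_DEATH * (AVG_LIFE_SATISFACTION - neutral_point) / 10

# A neutral point of 0/10 recovers the post's 2.33 figure; at 5/10 the
# figure collapses to roughly 0.08, flipping the comparison with the
# well-being cost of the economic losses.
for neutral in (0, 2.5, 5):
    print(neutral, walys_lost_per_death(neutral))
```

The choice of neutral point thus does a surprising amount of work in the final comparison.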

Comment by michaelplant on Coronavirus: how much is a life worth? · 2020-03-24T13:53:50.777Z · score: 4 (2 votes) · EA · GW

Hello Ben. Thanks for writing this up and showing how the value of outcomes can be compared using surveyed well-being data.

I've been thinking on somewhat similar lines about how to check if the medicine might be worse than the disease.

Your analysis doesn't get as far as telling us if governments' policies are the right ones (note: this is not a criticism - I didn't take you to be trying to address this issue).

You observe that COVID has some negative economic consequences, and if we make such and such assumptions, those economic consequences are worse (in terms of well-being) than the immediate health consequences.

To work out if governments are choosing the right policies, we need a counterfactual comparison between (1) what would happen under one policy, e.g. no 'suppression' (shutting down of businesses, movement restrictions, etc.), and (2) what would happen under some other policy, e.g. suppression. Presumably COVID-19 will have some negative economic consequences either way; the question is whether strategies that save lives now at a cost to the economy are better overall than those that save fewer lives now but keep the economy stronger.

I think a neater way to get a handle on this action-guiding question is to recognise that a smaller economy means there is less money (public and private) available to fund life-saving health services later, and then to run the numbers making the comparison in terms of years of life (as opposed to comparing quality vs quantity of life, which is what your analysis does and which is less apples-to-apples). Paul Frijters, a health economics prof at LSE who works on well-being, has two articles looking at this.

I think he's overlooked a couple of things in his calculations, and I'm working up my own numbers (which may well end up telling the same story).

Comment by michaelplant on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-19T14:22:26.557Z · score: 5 (3 votes) · EA · GW

To what extent does GW base its recommendations on cost-effectiveness estimates?

Some parts of the GW website seem to argue (or caution) against using them.* However, if you're not using cost-effectiveness estimates, what criterion is being used instead?

For what it's worth, I think GW (and many others) should be trying to use cost-effectiveness estimates. One can distinguish implicit vs explicit estimates, 'naive' vs 'sophisticated' estimates, estimates of 'direct' effects vs total effects, so maybe GW objects to some of these but not others, and it would be helpful to know which ones.

*In an old (2011) blog post, Holden wrote

"we are arguing that focusing on directly estimating cost-effectiveness is not the best way to maximize cost-effectiveness"

and the cost-effectiveness part of GW website says

"We do not make charity recommendations solely on the basis of cost-effectiveness calculations "

Comment by michaelplant on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-19T14:05:01.683Z · score: 6 (4 votes) · EA · GW

Relatedly, what do you say to donors who wonder if their money would be better spent on (a) the long term or (b) animal welfare?

Comment by michaelplant on Against anti-natalism; or: why climate change should not be a significant factor in your decision to have children · 2020-02-28T13:16:36.021Z · score: 3 (2 votes) · EA · GW
More specifically, where else can I find (1) lists of the bazillion positive and negative externalities of an additional child and (2) some argument -- however weak -- that takes us beyond agnosticism on the question whether an additional child is overall a *net* positive or negative externality

Hello Dominic,

I do something along these lines in my DPhil thesis, in chapter 2. I'm pretty uncertain whether the Earth is under- or overpopulated, whatever one's views on population ethics.

Comment by michaelplant on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-14T12:33:30.496Z · score: 2 (10 votes) · EA · GW

I think this is likely too critical of this approach, given that this sort of thing already happens and works. Arguably, the mass-joining of Labour by Momentum is exactly 'entryism' of this sort. Such entryism was perhaps in bad faith, but conspicuously (a) this does seem to have changed the UK political landscape and (b) there haven't been serious attempts to stop it. I don't have a strong view on this, but it doesn't seem unreasonable for someone to claim "this happens anyway, it won't make things worse if we do it, we might as well do it too".

Comment by michaelplant on Is mindfulness good for you? · 2019-12-30T14:25:51.036Z · score: 7 (4 votes) · EA · GW

Hello John, thanks very much for doing this careful investigation. I was wondering: what makes you think there isn't also an overestimate of the effect sizes of CBT and antidepressants? I wondered whether the meta-analyses on those had controlled for such biases, but you didn't mention it.

Comment by michaelplant on Logarithmic Scales of Pleasure and Pain (@Effective Altruism NYC) · 2019-11-19T18:09:36.606Z · score: 3 (2 votes) · EA · GW

I haven't watched the talk, but I have just left a long comment on the original article, Logarithmic Scales of Pleasure and Pain.

Here's the TL;DR of my comment:

I don't think this post provides an argument that we should interpret pleasure/pain scales as logarithmic. What's more, whether or not this is true, it is not necessary for the post's practical claim - which is roughly that "the best/worst things are much better/worse than most people think".

Here's the link to my comment. I meant to write up my thoughts 3 months ago when the original article was posted, but never got around to it.

Comment by michaelplant on Logarithmic Scales of Pleasure and Pain: Rating, Ranking, and Comparing Peak Experiences Suggest the Existence of Long Tails for Bliss and Suffering · 2019-11-19T18:04:36.659Z · score: 4 (3 votes) · EA · GW

TL;DR I don't think this post provides an argument that we should interpret pleasure/pain scales as logarithmic. What's more, whether or not this is true, it is not necessary for the post's practical claim - which is roughly that "the best/worst things are much better/worse than most people think".

Thanks for writing this up; sorry not to have got around to it sooner.

I think there are two claims that need to be carefully distinguished.

(A) that the relationship between actual and reported pleasure(/pain) is not linear but instead follows some other relationship, e.g. a logarithmic function where a 1-unit increase in self-reported pleasure represents a ten-fold increase in actual pleasure.

(B) whether the best/worst experiences that some people have are many times more intense than other people (who haven't had those experiences) assume they are.

I point this out because you say

the best way to interpret pleasure and pain scales is by thinking of them as logarithmic compressions of what is truly a long-tail. The most intense pains are orders of magnitude more awful than mild pains (and symmetrically for pleasure). [...]
Since the bulk of suffering is concentrated in a small percentage of experiences, focusing our efforts on preventing cases of intense suffering likely dominates most utilitarian calculations.

The idea, I take it, is that if we thought the relationship between self-reported and actual pleasure(/pain) was linear, but it turns out it was logarithmic, then the best(/worst) experiences are much better(/worse) than we expected, because we'd have been using the wrong scale.

However, I don't think you've provided (any?) evidence that (A) is true (or that it's true but we thought it was false). What's more, (B) is actually quite plausible by itself and you can claim (B) is true without needing (A) to be true.

Let me unpack this a bit.

(A) is a claim about how people choose to use self-reported scales. The idea is that people have experiences of a certain intensity they can distinguish for themselves in cardinal units, e.g. you can tell (roughly) how many perceivable increments of pleasure one experience gives you vs the next. A further question is how people choose to report these intensities when people give them a scale, say a 0-10 scale.

This reporting could be linear, logarithmic, etc. Indeed, people could choose to report any way they want to. It seems most likely people use a linear reporting function, because that's the most helpful way to use language to convey how you feel to the person asking. I won't get stuck into this here, but I say more about it in my PhD thesis, chapter 4, section 4.

Hence, on your pleasure/pain scales, when you contrast 'intuitive' to 'long-tailed' scales, what I think you mean is that the intuitive scale is really 'reported' pleasure and the 'long-tailed' scale is 'actual' pleasure, i.e. your claim is that there is a logarithmic relationship between reported and actual pleasure. I note you don't provide evidence that people generally use scales this way. Regarding the stings scale, that just is a logarithmic scale by construction, where going from a 1 to a 2 on the scale represents a ten-fold increase in actual pain. That doesn't show we have to report pain using log scales, or that we do, just that the person who constructed that scale chose to build it that way. In fact, we can only use log pleasure/pain scales if we can somehow measure pleasure/pain on an arithmetic scale in the first place, and then convert from those numbers to a log scale - which requires that people are able to construct arithmetic pleasure/pain scales anyway.

(You might wonder if people can know, on an arithmetic scale, how much pleasure/pain they feel. However, if people really have no idea about this, then it follows they can't intelligibly report their pleasure/pain at all, whatever scale they are using.)
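To make the linear-vs-log point concrete, here's a toy sketch (the base-10 log and the maximum intensity are illustrative assumptions, not claims about how people actually report):

```python
import math

MAX_INTENSITY = 10 ** 10  # illustrative ceiling on an arithmetic intensity scale

def report_linear(intensity):
    # Linear reporting: the 0-10 report is proportional to intensity.
    return 10 * intensity / MAX_INTENSITY

def report_log(intensity):
    # Log reporting: each extra report point = 10x the intensity
    # (maps an intensity of 10**k to a report of k).
    return math.log10(intensity)

def intensity_from_log_report(report):
    # Inverting a log report lands you back on an arithmetic scale -
    # the comment's point: a log scale presupposes an arithmetic one.
    return 10 ** report

# Under the log reading, a 10/10 experience is 10x as intense as a 9/10 one.
ratio = intensity_from_log_report(10) / intensity_from_log_report(9)
print(ratio)  # 10.0
```

Both reporting functions take an arithmetic intensity as input, so neither removes the need to measure intensity cardinally in the first place.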

Regarding (B), note that claims such as "the worst stings are 1000x worse than the average person expects" can be true without it being the case that people have misunderstood how others tend to use pleasure/pain scales. For instance, I could alternatively claim that the relationship between reported and actual pleasure/pain is linear, but that people's predictions are just misinformed - e.g. torture is actually much worse than they thought. For comparison, if I claim "the heaviest building in the world weighs 1000x more than most people think it does", I don't need to say anything about the relationship between reports of perceived weight and actual weight.

Hence, if you want to claim "experiences X and Y are much better/worse than we thought", just claim that without getting into distracting stuff about reported vs actual scale use!

(P.S. The Fechner-Weber stuff is a red-herring: that's about the relationship between increases in an objective quantity and in subjective perceptions of increases in that quantity. That's different from talking about the relationship between a reported subjective quantity and the actually experienced subjective quantity. Plausibly the former relationship is logarithmic, but one shouldn't directly infer from that that the latter relationship is logarithmic too).

Comment by michaelplant on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-13T12:55:40.322Z · score: 9 (5 votes) · EA · GW

Thanks for writing this up - I found it helpful. I'm just trying to summarise this in my head and have some questions.

To get the claim that the best interventions are much better than the rest, don't you need to claim that interventions follow a (very) fat-tailed distribution, rather than just the claim that there are lots of interventions? If they were normally distributed, then (say) bednets would be a massive outlier in terms of effectiveness, right? Do you (or does someone else) have an argument that interventions should be heavy-tailed?

About predicting effectiveness, it seems your conclusion should be one of epistemic modesty about hard-to-quantify interventions, not that we should never think they are better. The thought seems to be that people are bad at predicting the effectiveness of interventions in general, but for the easy-to-quantify ones we can at least check our predictions and so correct our bias, whereas we cannot do this for the hard ones. The implication seems to be that we should discount the naive cost-effectiveness of systemic interventions to account for this bias. But 'sophisticated' estimates of cost-effectiveness for hard-to-quantify interventions might still turn out to be better than those for simple interventions. Hence it's a note of caution about estimation, not a claim that hard-to-quantify interventions are, in fact, (always or generally) less cost-effective.

Comment by michaelplant on The (un)reliability of moral judgments: A survey and systematic(ish) review · 2019-11-01T16:11:34.726Z · score: 3 (2 votes) · EA · GW

Okay, that makes more sense. You could, in principle, have a systematic review which unambiguously pointed to one conclusion; since yours doesn't, perhaps you should add something like what you've already said, i.e. that you're just trying to report the findings without drawing an overall conclusion (although I don't know why someone would avoid drawing an overall conclusion if they thought there was one). And again, it would be helpful to add that there doesn't seem to be a consensus on this point (and possibly that it 'falls between the gaps' of various disciplines).

Comment by michaelplant on The (un)reliability of moral judgments: A survey and systematic(ish) review · 2019-11-01T12:42:25.804Z · score: 8 (4 votes) · EA · GW

A couple of very general suggestions to aid the reader - I've only read the summary. Given the length of the post, could you add a line or two to your summary to say what conclusion you're arguing for? Reading the summary, I get what the topic is, but not what your take is. It would also be good if you could orientate the reader as to where this fits in the literature, e.g. what the consensus in the field is and whether you are agreeing with it.

Comment by michaelplant on Oddly, Britain has never been happier · 2019-10-23T18:08:42.096Z · score: 2 (1 votes) · EA · GW

I also thought the World Happiness Survey looked flat, but it has gone up, and 0.25/10 is not to be sniffed at.

The WHS has a much smaller sample size - around 1,000 people per year - whereas the Office for National Statistics asks around 300,000 people a year. ONS data also shows a rise of about 0.3/10 between 2011 and 2019.

Comment by michaelplant on The ITN framework, cost-effectiveness, and cause prioritisation · 2019-10-18T14:35:41.416Z · score: 3 (2 votes) · EA · GW

I should just flag I've put a post on this topic on the forum too, albeit one that doesn't directly reply to John but addressed many of the points raised in the OP and in the comments.

I will make a direct reply to John on one issue. He suggests we should:

  1. "We quantify importance to neglectedness ratios for different problems."

I don't think this is a useful heuristic, and I don't see why problems with a higher scale:neglectedness ratio should be higher priority. There are two issues. One is that problems with no resources going towards them will score infinitely highly on this schema. Another is that delineating one 'problem' from another is arbitrary anyway.

Let's illustrate what happens when we put these together. Suppose we're talking about the cause of reducing poverty and, suppose further, it happens to be the case that it's just as cost-effective to help one poor person as another. As a cause, quite a lot of money goes to poverty, so let's assume poverty scores badly (relative to our other causes) on this scale/neglectedness rating. I pick out person P, who is currently not receiving any aid, and declare that 'cause P' – helping person P – is entirely neglected. Cause P now has an infinite score on scale/neglectedness and suddenly looks very promising via this heuristic. This is perverse as, by stipulation, helping P is just as cost-effective as helping any other person in poverty.
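The perversity can be sketched numerically; all the figures below are invented for the example, not drawn from any real cost-effectiveness data:

```python
# Hypothetical illustration: ranking "causes" by a scale/neglectedness ratio.

def scale_neglect_ratio(scale, resources):
    """Importance-to-neglectedness heuristic: scale divided by current spending."""
    return scale / resources if resources > 0 else float("inf")

# Global poverty: huge scale, but lots of money already going towards it.
poverty = scale_neglect_ratio(scale=1_000_000, resources=100_000)  # 10.0

# "Cause P": helping one specific poor person who currently receives no aid.
cause_p = scale_neglect_ratio(scale=1, resources=0)                # inf

# The heuristic ranks cause P above poverty as a whole, even though helping P
# is (by stipulation) exactly as cost-effective as helping anyone else.
assert cause_p > poverty
```

Any cause gerrymandered to have zero current resources dominates the ranking, which is the arbitrary-delineation problem in miniature.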

Comment by michaelplant on The ITN framework, cost-effectiveness, and cause prioritisation · 2019-10-18T14:25:05.305Z · score: 3 (2 votes) · EA · GW
Take technical AI safety research as an example. I'd have trouble directly estimating "How much good would we do by spending $1000 in this area", or sanity checking the result. I'd also have trouble with "What % of this problem would we solve by spending another $100?"

Hmmm. I don't really see how this is any harder than, or different from, your proposed method, which is to figure out how much of the problem would be solved by increasing spend by 10%. In both cases you've got to do something like working out how much money it would take to 'solve' AI safety, and then play with that number.

Comment by michaelplant on Understanding and evaluating EA's cause prioritisation methodology · 2019-10-17T14:07:55.146Z · score: 4 (3 votes) · EA · GW

Hello Risto,

Thanks for this. That's a good question. I think it partially depends on whether you agree with the above analysis. If you think it's correct that, when we drill down into it, evaluating problems (aka 'causes') by S, N, and T is just equivalent to evaluating the cost-effectiveness of particular solutions (aka 'interventions') to those problems, then that settles the mystery of what the difference really is between 'cause prioritisation' and 'intervention evaluation' - in short, they are the same thing and we were confused if we thought otherwise. However, if someone thought there was a difference, it would be useful to hear what it is.

The further question, if cause prioritisation is just the business of assessing particular solutions to problems, is: what are the best ways to pick which particular solutions to assess first? Do we just pick them at random? Is there some systematic approach we can use instead? If so, what is it? Previously, we thought we had a two-step method: 1) do cause prioritisation, 2) do intervention evaluation. If they are the same, then we don't seem to have much of a method to use, which feels pretty dissatisfying.

FWIW, I feel inclined towards what I call the 'no shortcuts' approach to cause prioritisation: if you want to know how to do the most good, there isn't a 'quick and dirty' way to tell what the biggest problems are, as it were, from 30,000 ft. You've just got to get stuck in and (intuitively) estimate particular different things you could do. I'm not confident that we can really assess things 'at the problem level' without looking at solutions, or that we can appeal to e.g. scale or neglectedness by themselves and expect that to work. A problem can be large and neglected because it's intractable, so we can only make progress on cost-effectiveness by getting 'into the weeds' and evaluating particular things we can do.

Comment by michaelplant on Defending the Procreation Asymmetry with Conditional Interests · 2019-10-14T19:54:53.627Z · score: 4 (3 votes) · EA · GW

Thanks for putting this up here. One major and three minor comments.

First, and probably most importantly, I don't see how this line of reasoning gets you an asymmetry. If I understand it correctly, the idea is that people need to actually exist to have interests, so if people do or will exist, we can say existence will be good/bad for them. But that gets you to actualism, it seems, not an asymmetry. If X would have a bad life, were X to exist, I take it we shouldn't create X. But then why, if X would have a good life, were X to exist, do we not have reason to create X? You say you're 'agnostic' about whether those who would have good lives have an interest in existing, but I don't think you give a reason for this agnosticism, which would be the crucial thing to do.

Second, I didn't really understand the explication of Meacham's view - you said it 'solves' a cavalcade of issues on pop ethics but didn't spell out how it actually solves them. I'm also not sure if your view is different from Meacham's and, if so, how.

Third, it would be useful if you could spell out what you take (some of) the practical implications of your view to be.

Fourth, because you get stuck into the deep end quite quickly, I wonder if you should add a note that this is a relatively more 'advanced' forum post.

Comment by michaelplant on Does improving animal rights now improve the far future? · 2019-09-16T14:30:22.605Z · score: 3 (2 votes) · EA · GW
To illustrate, suppose we have two (finite or infinite) sequences representing the amount of suffering in our sphere of influence at each point in time, but we make earlier progress on moral circle expansion in one so the amount of suffering in our sphere of influence is reduced by 1 at each step in that sequence compared to the other;

Just to say I really liked this point, which I think applies equally to focusing on the correct account of value (as opposed to who the value-bearers are, which is your point).

Comment by michaelplant on Movement Collapse Scenarios · 2019-09-02T21:39:24.003Z · score: 31 (13 votes) · EA · GW

Is putting some non-trivial budget into cash prizes for arguments against what you do the only way to show you're self-critical? Your statement suggests you believe something like that. But that doesn't seem to be the only way to show you're self-critical. I can't think of any other organisation that has ever done that, so if it is the only way to show you're self-critical, that suggests no organisation (I've heard of) is self-critical, which seems false. I wonder if you're holding CEA to a peculiarly high standard; would you expect MIRI, 80k, the Gates Foundation, Google, etc. to do the same?

Comment by michaelplant on Uncertainty and sensitivity analyses of GiveWell's cost-effectiveness analyses · 2019-09-02T10:16:06.572Z · score: 28 (11 votes) · EA · GW

Despite your reservations, I think it would actually be very useful for you to input your best-guess inputs (and it's likely to be more useful for you to do it than an average EA, given you've thought about this more). My thinking is this. I'm not sure I entirely followed the argument, but I took the thrust of what you're saying to be "we should do uncertainty analysis (use Monte Carlo simulations instead of point estimates) as our cost-effectiveness estimates might be sensitive to it". But you haven't shown that GiveWell's estimates are sensitive to their reliance on point estimates (have you?), so you haven't (yet) demonstrated it's worth doing the uncertainty analysis you propose after all. :)

More generally, if someone says "here's a new, really complicated methodology we *could* use", I think it's incumbent on them to show that we *should* use it, given the extra effort involved.
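To make the contrast concrete, here is a minimal sketch of the two approaches being discussed; the model and every number in it are made up for illustration, and bear no relation to GiveWell's actual spreadsheets:

```python
import random

random.seed(0)

# Toy model: lives saved = budget / cost_per_life.
# Point-estimate version, using a single best guess:
cost_per_life_point = 4000                          # hypothetical, in dollars
point_estimate = 1_000_000 / cost_per_life_point    # 250 lives per $1m

# Monte Carlo version: treat the input as uncertain rather than fixed.
samples = []
for _ in range(10_000):
    # lognormal with median ~ e^8.3 ≈ $4,000, a purely illustrative choice
    cost_per_life = random.lognormvariate(mu=8.3, sigma=0.5)
    samples.append(1_000_000 / cost_per_life)

samples.sort()
median = samples[len(samples) // 2]
low = samples[int(0.05 * len(samples))]   # 5th percentile
high = samples[int(0.95 * len(samples))]  # 95th percentile
# If the 90% interval (low, high) is narrow around the point estimate, the
# extra machinery buys little; if it is wide, the uncertainty analysis mattered.
```

The point of the comment above is precisely that the second block is only worth building if someone first shows the interval is wide enough to change a decision.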

Comment by michaelplant on How to Make Billions of Dollars Reducing Loneliness · 2019-08-27T21:45:06.308Z · score: -1 (3 votes) · EA · GW

Well, how about starting "Tinder for sparerooms"?

Comment by michaelplant on Ask Me Anything! · 2019-08-19T14:55:47.849Z · score: 8 (6 votes) · EA · GW

I note your main project is writing a book on longtermism. Would you like to see the EA movement going in a direction where it focuses exclusively, or almost exclusively, on longtermist issues? If not, why not?

To explain the second question, it would seem answering 'no' to the first question would be in tension with advocating (strong) longtermism.

Comment by michaelplant on 'Longtermism' · 2019-08-02T16:01:01.453Z · score: 2 (1 votes) · EA · GW
shows a major problem

You mean, shows a major finding, no? :)

Comment by michaelplant on 'Longtermism' · 2019-08-02T16:00:01.979Z · score: 5 (3 votes) · EA · GW
suggesting a violation of transitivity

The (normal) person-affecting response here is to say that options 1 and 3 are incomparable in value to 2 - existence is neither better than, worse than, nor equally good as non-existence for someone. However, if Sam exists necessarily, then 2 isn't an option, so we say 3 is better than 1. Hence, no issues with transitivity.

Comment by michaelplant on 'Longtermism' · 2019-07-31T19:42:48.957Z · score: 2 (1 votes) · EA · GW
(ii) Society currently privileges those who live today above those who will live in the future; and
(iii) We should take action to rectify that, and help ensure the long-run future goes well.

Do you mean Necessitarians wouldn't accept (iii) above? Necessitarians will agree with (ii) but deny (iii). (Not sure if this is what you were referring to.)

I'm sympathetic to Necessitarianism, but I don't know how fringe it is. It strikes me as the most philosophically defensible population axiology that rejects longtermism, which leans me towards thinking the definition shouldn't fall foul of it. (I think Hilary's suggestion would fall foul of it, but yours would not.)

Comment by michaelplant on 'Longtermism' · 2019-07-27T17:40:44.187Z · score: 14 (4 votes) · EA · GW
An alternative minimal definition, suggested by Hilary Greaves (though the precise wording is my own), is that we could define longtermism as the view that the (intrinsic) value of an outcome is the same no matter what time it occurs.

Just to make a brief, technical (pedantic?) comment: I don't think this definition would give you what you want. (Strict) Necessitarianism holds that the only persons who matter are those who exist whatever we do. On such a view, the practical implication is, in effect, that only present people matter. The view is thus not longtermist on your chosen definition. However, Necessitarianism doesn't discount for time per se (the discounting is contingent on time) and hence is longtermist on the quoted definition.

Comment by michaelplant on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-17T10:58:23.803Z · score: 7 (4 votes) · EA · GW
One idea for evening things out is to offer prizes for the best arguments against established EA donation targets!

This is a great idea!

Comment by michaelplant on A philosophical introduction to effective altruism · 2019-07-14T16:34:37.496Z · score: 10 (5 votes) · EA · GW

Hello Will,

Thanks for linking the article, which I enjoyed. I'll jump straight to the three objections I have; I'll state them briefly and then explain each in greater detail. First, the Duty of Beneficence, as you state it, seems both ad hoc and under-demanding. Second, your first argument for Maximising Beneficence does not provide the support you claim it does. Third, insufficient explanation is offered for the relationship between, on the one hand, assessing problems by their scale, neglectedness, and tractability and, on the other, cost-effectiveness. I'm (obviously) an EA enthusiast, so I offer these in the spirit of trying to improve the arguments for EA.

1. The Duty of Beneficence

You state "Most middle or upper class people in rich countries have a duty to make helping others a significant part of their lives", but why do "most middle or upper class people in rich countries" have the duty, rather than everyone? Suppose I'm rich but in a poor country, or poor but in a rich country. Do I have no duty of beneficence? That seems implausible. What if I am rich and currently live in a rich country, but then move to a poor country - does my duty of beneficence disappear?

Let's call your version The Middle-Class-Rich-Country Duty of Beneficence (MCRC) and distinguish that from the General Duty of Beneficence: all individuals have a duty to make helping others a significant part of their lives. As you don't defend the stronger, General Duty, I assume you're taking the position that there is no General Duty, just MCRC.

I suppose you're adopting this as an argumentative strategy (rather than because you actually believe it), on the assumption that your readers would accept MCRC but not the General Duty: presumably many people think that, at the very least, those who are globally fortunate ought to help others. Your (clever) move is to point out that the reader is themself likely part of this elite.

One problem with this move is that MCRC is arbitrary. I do not think you provide a justification for this particular specification, nor do I think there is one. Second, as noted, it is grossly under-demanding towards those who are not both middle-class and in rich countries.

I worry the response to your argument from those who dislike effective altruism will be to (1) claim MCRC is implausible, (2) note you haven't provided an argument for the General Duty, and thus (3) assume there is no general duty. I wonder if the better strategy is just to argue there must be a general duty to benefit others if the costs to us are small (à la Singer in 'Famine, Affluence, and Morality'). You would then note the costs to the world's rich are clearly small relative to the benefit they would give to others.

2. Maximising beneficence

Your first of two arguments for maximising beneficence is an appeal to cases. You write:

Suppose that, as a volunteer doctor in a resource-starved hospital in a poor country, you can do one of two things with your last day of work before you return home. First, you could perform surgery on an elderly man with prostate cancer, thereby saving his life. Or you could treat two children for malaria, thereby saving both their lives. If you had a personal attachment to the cause of fighting prostate cancer, would that give you sufficient reason to save the life of the elderly man rather than the two children? Clearly not. The importance of saving two lives rather than one, and of saving people who have much more to gain from their treatment, clearly outweighs whatever reason a personal attachment might bring.

Immediately after you state:

Yet this is morally analogous to the decisions that we actually face when we try to use our resources to do good. The only way in which it is morally disanalogous is with respect to what’s at stake. (emphasis added)

One important way in which the case is morally disanalogous is that it is about a doctor performing their professional duties. We often think people, acting in their professional roles, have duties that do not apply to individuals acting in their private lives. To highlight this, I imagine many people have the intuition that if this same doctor were to run a marathon for charity, that doctor would be morally permitted to fundraise for whatever cause they wanted, not just whichever cause saves lives most cost-effectively (or, more broadly, does the most good). When running the marathon, the doctor is merely a private citizen and, qua private citizen, they can do good any way they want. One could accept the doctor must save the two rather than the one while denying individuals are generally required to maximise beneficence.

Potentially, using a case where someone, acting as a private citizen, can save either one life or two at the same cost - e.g. you can stop the trolley killing two people rather than one - would have the right intuitive pull whilst avoiding the objection that it is morally disanalogous.

I think there might be an objection to your second argument for maximising beneficence, but I haven't been able to formulate it yet.

3. Scale, neglectedness, and tractability

You raise the problem of comparing effectiveness across different problems:

This is a legitimate concern, and effective altruists have developed an alternative heuristic framework for prioritising among causes, including when the impact within some of those causes is difficult to measure. According to this framework, the following factors are indicative of which causes are highest-priority [scale, neglectedness, tractability]

My concern is that in this essay you don't offer the reader an explanation of why they should find it plausible that "the following three factors are indicative of which causes are highest-priority". Why, and exactly how, should I use scale, neglectedness and tractability to compare farmed animal welfare to global health and development, two causes effective altruists find high priority?

You do provide a citation to your own book, Doing Good Better, but (as you and I have discussed before) it's not very clear from that book either (a) why scale, neglectedness and tractability are indicative of which causes are highest priority or (b) how precisely those factors can be used by individuals to determine what the priorities are.

The most plausible precisification of the framework is the one offered by Owen Cotton-Barratt here, where the three factors are multiplied together to give marginal cost-effectiveness (you mention this version of the framework in your Palgrave article). I presume you didn't want to get into the nitty-gritty in this essay, but I imagine some readers will be left confused about how the framework you mention in this article does the work of comparing causes. My only suggestion here is that you could consider also citing Owen's and/or 80k's explanations of the framework.
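For readers who haven't seen Owen's version, the multiplication can be sketched as follows; the variable names and all the figures are invented for illustration, not taken from his article:

```python
# A sketch of Cotton-Barratt's factorisation: marginal cost-effectiveness is
# the product of three ratios whose intermediate units cancel:
#   (good / fraction solved) × (fraction solved / % resource increase)
#     × (% resource increase / extra dollar)  =  good per extra dollar.
# All numbers below are hypothetical.

scale = 1_000_000               # good done if the problem were fully solved
tractability = 0.01             # fraction solved per doubling of resources
neglectedness = 1 / 10_000_000  # doublings bought per extra dollar (1 / current spend)

marginal_ce = scale * tractability * neglectedness
# ≈ 0.001 units of good per extra dollar, under these invented inputs
```

The factorisation matters because it shows the three factors are not independent scores to be eyeballed separately: they only compare causes when multiplied through to a common unit.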

Comment by michaelplant on EA Forum Prize: Winners for May 2019 · 2019-07-13T20:14:45.192Z · score: 8 (5 votes) · EA · GW

As an active forum user, I would also be curious to hear about this.

Comment by michaelplant on Announcing the launch of the Happier Lives Institute · 2019-06-29T14:42:35.368Z · score: 3 (2 votes) · EA · GW
Thus, loosely speaking, I have some sense that agreeing with totalism in population ethics would "screen off" questions about the theory of well-being

Yes, this seems a sensible conclusion to me. I think we're basically in agreement: varying one's account of the good could lead to a new approach to prioritisation, but probably won't make a practical difference given totalism and some further plausible empirical assumptions.

That said, I suspect doing research into how to improve the quality of lives long-term would be valuable and is potentially worth funding (even from a totalist viewpoint, assuming you think we have or will hit diminishing returns to X-risk research eventually).

FWIW, my own guess is that explicitly defending or even mentioning a specific population ethical view would be net bad - because of the downsides you mention - for almost any audience other than EAs and academic philosophers. However, I anticipate my reaction being somewhat common among, say, readers of the EA Forum specifically.

Oh, I'm glad you agree - I don't really want to tangle with all this on the HLI website. I thought about giving more details on the EA Forum than were on the website itself, but that struck me as having the downside of looking sneaky, which was a reason against doing so.

Comment by michaelplant on Announcing the launch of the Happier Lives Institute · 2019-06-29T12:39:36.177Z · score: 9 (4 votes) · EA · GW

Hello Max,

Thanks for this thoughtful and observant comment. Let me say a few things in reply. You raised quite a few points and my replies aren't in a particular order.

I'm sympathetic to person-affecting views (on which creating people has no value) but still a bit unsure about this (I'm also unsure what the correct response to moral uncertainty is and hence uncertain about how to respond to this uncertainty). However, this view isn't shared across all of HLI's supporters and contributors, hence it isn't true to say there is an 'HLI view'. I don't plan to insist on one either.

And perhaps an organization such as HLI is more useful as a broad tent that unites 'near-term happiness maximizers' irrespective of their reasons for why they focus on the near term.

I expect HLI's primary audience to be those who have decided they want to focus on near-term human happiness maximisation. However, we want to leave open the possibility of working on improving the quality of lives of humans in the longer term, as well as non-humans in the nearer and longer term. If you're wondering why this might be of interest, note that one might hold a wide person-affecting view on which it's good to increase the well-being of whichever future lives exist (just as one might care about the well-being of one's future child, whichever child that turns out to be (i.e. de dicto rather than de re)). Or one could hold that creating lives can be good but still think it's worth working on the quality of future lives, rather than just the quantity (reducing extinction risks being a clear way to increase the quantity of lives). Some of these issues are discussed in section 6 of the mental health cause profile.

However, I'm struck by what seems to me a complete absence of such explicit population ethical reasoning in your launch post

Internally, we did discuss whether we should make this explicit or not. I was leaning towards doing so and saying that our fourth belief was something about prioritising making people happy rather than making happy people. In the end, we decided not to mention this. One reason is that, as noted above, it's not (yet) totally clear what HLI will focus on, hence we don't know which colours to nail to the mast, so to speak.

Another reason is that we assumed it would be confusing to many of our readers if we launched into an explanation of why we are making people happier as opposed to making happy people (or preventing the making of unhappy animals). We hope to attract the interest of non-EAs to our project; outside EA, we doubt many people will have these alternatives to making people happier in mind. Working on the principle that you shouldn't raise objections to your argument that your opponent wouldn't consider, it seemed of questionable use to bring up the topic. To illustrate, if I were explaining what HLI is working on to a stranger I met in the pub, I would say 'we're focused on finding the best ways to make people happier' rather than 'we're focused on near-term human happiness maximisation', even though the latter is more accurate, as it would cause less confusion.

More generally, it's unclear how much work HLI should put into defending a stance in population ethics vs assuming one and then seeing what follows if one applies new metrics for well-being. I lean towards the latter. Saliently, I don't recall GiveWell taking a stance on population ethics so much as assuming its donors already care about global health and development and want to give to the best things in that category.

Much of the above applies equally to discussing the value of saving lives. I'm sympathetic to (although, again, not certain about) Epicureanism, on which living longer has no value, but I'm not sure anyone else in HLI shares that view (I haven't asked around, actually). In section 5 of the mental health cause profile, I do a cost-effectiveness comparison of saving lives to improving lives using the 'standard' view of the badness of death, deprivationism (the badness of your death is the amount of well-being you would have had if you had lived, hence saving 2-year-olds is better than saving 20-year-olds, all other things equal). I imagine we'll set out how different views about the value of saving lives give you different priorities without committing, as an organisation, to a view, and leave readers to make up their own minds.

(Whereas, without such an explanation, I would be confused why someone would start their own organization "[a]ssessing which careers allow individuals to have the greatest counterfactual impact in terms of promoting happier lives.")

I don't see why this is confusing. Holding one's views on population ethics and the badness of death fixed, if one has a different view of what value is, or how it should be measured (or aggregated), that clearly opens up scope for a new approach to prioritisation. The motivation to set up HLI came from the fact that if we use self-reported subjective well-being scores as the measure of well-being, that does indicate potentially different priorities.

Thanks for your comments and engaging on this topic. If quite a few people flag similar concerns over time we may need to make a more explicit statement about such matters.

Comment by michaelplant on Announcing the launch of the Happier Lives Institute · 2019-06-27T21:56:32.205Z · score: 5 (3 votes) · EA · GW

Hello Nathan. I think HLI will probably focus on what we can do for others. There is already quite a lot of work by psychologists on what individuals can do for themselves - see e.g. The How of Happiness by Lyubomirsky and what is called 'positive psychology' more broadly. Hence, our comparative advantage and counterfactual impact will be in how best to altruistically promote happiness.

Sure though there are some kinds of misery you don't want to reduce

I think we should be maximising happiness over any organism's whole lifespan; hence, some sadness now and then may be good for maximising happiness over the whole life. It's an empirical question how much sadness is optimal for maximum lifetime happiness.

On the funeral point, I think you're capturing an intuition about what we ought to do rather than about what makes life go well for someone: you might think that not going to the funeral would make your life go better for you, but that you ought to go anyway. Hence, I don't think your point counts against happiness being what makes your life go well for you (leaving other considerations to the side).

Comment by michaelplant on Announcing the launch of the Happier Lives Institute · 2019-06-27T21:36:08.760Z · score: 3 (2 votes) · EA · GW
The objectivist might say that this is exactly the point, but the subjectivist could just respond that it doesn't matter as long as the individual is (more) satisfied.

Yes, the subjectivist could bite the bullet here. I doubt many(/any) subjectivists would deny this is a somewhat unpleasant bullet to bite.

Life satisfaction and preference satisfaction are different - the former refers to a judgement about one's life, the latter to one's preferences being satisfied in the sense that the world goes the way one wants it to. I think the example applies to both views. Suppose the grass-counter is satisfied with his life and things are going the way he wants them to go: it still doesn't seem that his life is going well. You're right that preference satisfactionists often appeal to 'laundered' preferences - you have to prefer what your rationally idealised self would prefer, or something - but it's hard and unsatisfying to spell out what this looks like. Further, it's unclear how that would help in this case: if anyone is a rational agent, presumably Harvard mathematicians like the grass-counter are. What's more, stipulating that preferences can/must be laundered is borderline inconsistent with subjectivism: if you tell me that some of my preferences don't count towards my well-being because they're 'irrational', you don't seem to be respecting the view that my well-being consists in whatever I say it does.

On the experience machine, this only helps preference satisfactionists, not life satisfactionists: I could plug you into the experience machine such that you judged yourself to be maximally satisfied with your life. If well-being just consists in judging one's life to be going well, it doesn't matter how you come to that judgement.

Comment by michaelplant on Announcing the launch of the Happier Lives Institute · 2019-06-27T21:22:01.234Z · score: 3 (2 votes) · EA · GW

I'm not sure what Kahneman believes. I don't think he's publicly stated well-being consists in life satisfaction rather than happiness (or anything else). I don't think his personal beliefs are significant for the (potential) view either way (unless one was making an appeal to authority).

Comment by michaelplant on Announcing the launch of the Happier Lives Institute · 2019-06-22T01:10:01.148Z · score: 3 (2 votes) · EA · GW

Hello James,

Thanks for these.

I remember we discussed (1) a while back, but I'm afraid I don't really remember the details anymore. To check, what exactly is the bias you have in mind - that people inflate their self-reported scores generally when they are being given treatment? Is there one or more studies you can point me to so I can read up on this, or is this a hypothetical concern?

I don't think I understand what you're getting at with (2): are you asking what we should infer if some intervention increases consumption but doesn't increase self-reported life satisfaction in one scenario but does in others? That sounds like a normal case of contradictory evidence. Let me know if I've missed something here.

What evidence currently exists around the external validity of the links between outcomes and ultimate impact (i.e. life satisfaction)?

I'm not sure what you mean by this. Are you asking what the evidence is on the causes and correlates of life satisfaction? Dolan et al. (2008) have a much-cited paper on this.

Comment by michaelplant on Announcing the launch of the Happier Lives Institute · 2019-06-22T00:45:56.045Z · score: 16 (11 votes) · EA · GW

RomeoStevens, thanks for this comment. I think you're getting at something interesting, but I confess I found this quite hard to follow. Do you think you could possibly restate it more simply (i.e. with less jargon)? For instance, I don't know how to make sense of:

There seems to be strong status quo bias and typical mind fallacy with regard to hedonic set point.

Comment by michaelplant on Announcing the launch of the Happier Lives Institute · 2019-06-21T15:07:36.322Z · score: 9 (4 votes) · EA · GW

Hello Aaron,

In the 'measuring happiness' bit of HLI's website we say

The ‘gold standard’ for measuring happiness is the experience sampling method (ESM), where participants are prompted to record their feelings and possibly their activities one or more times a day.[1] While this is an accurate record of how people feel, it is expensive to implement and intrusive for respondents. A more viable approach is the day reconstruction method (DRM) where respondents use a time-diary to record and rate their previous day. DRM produces comparable results to ESM, but is less burdensome to use (Kahneman et al. 2004).

Further, I don't think the fact that happiness is subjective or timing-dependent is problematic: what I think matters is how pleasant or unpleasant people feel throughout the moments of their life. (In fact, this is the view Kahneman argued for in his 1999 paper 'Objective happiness'.)

Comment by michaelplant on Announcing the launch of the Happier Lives Institute · 2019-06-21T14:26:16.504Z · score: 14 (13 votes) · EA · GW

Thanks for this. Let me make three replies.

First, HLI will primarily use life satisfaction scores to determine our recommendations. Hence, if you think life satisfaction does a reasonable job of capturing well-being, I expect you will still be interested in our outputs.

Second, it's not yet clear whether the priorities would differ if life satisfaction rather than happiness were used as the measure of benefit. Hence, philosophical differences may not lead to different priorities in this case.

Third, I've been somewhat bemused by Kahneman's apparent recent conversion to thinking that life satisfaction, rather than happiness, is what matters for well-being. I don't see why the descriptive claim that people, in fact, try to maximise their life satisfaction rather than their happiness should have any bearing on the evaluative claim about what well-being consists in. To get such a claim off the ground, you'd need something like a 'subjectivist' view of well-being, on which well-being consists in whatever people choose their well-being to consist in. Hedonism (well-being consists in happiness) is an 'objectivist' view, because it holds that your happiness is good for you whether you think it is or not. See Haybron for a brief discussion of this.

I don't find subjectivism about well-being plausible. Consider John Rawls' grass-counter case: imagine a brilliant Harvard mathematician, fully informed about the options available to her, who develops an overriding desire to count the blades of grass on the lawns. Suppose this person then does spend her time counting blades of grass and is miserable while doing so. On the subjectivist view, her life is going well for her. I think her life is going poorly for her because she is unhappy. I'm not sure there's much, if anything, more to say about this case: some will think the grass-counter's life is going well, some won't.