Posts

Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty 2020-08-03T16:17:32.230Z · score: 62 (28 votes)
Update from the Happier Lives Institute 2020-04-30T15:04:23.874Z · score: 80 (42 votes)
Understanding and evaluating EA's cause prioritisation methodology 2019-10-14T19:55:28.102Z · score: 37 (19 votes)
Announcing the launch of the Happier Lives Institute 2019-06-19T15:40:54.513Z · score: 123 (84 votes)
High-priority policy: towards a co-ordinated platform? 2019-01-14T17:05:02.413Z · score: 22 (9 votes)
Cause profile: mental health 2018-12-31T12:09:02.026Z · score: 103 (63 votes)
A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare 2018-10-25T15:48:03.377Z · score: 65 (45 votes)
Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was 2018-05-23T10:30:43.748Z · score: 65 (55 votes)
Could I have some more systemic change, please, sir? 2018-01-22T16:26:30.577Z · score: 24 (19 votes)
High Time For Drug Policy Reform. Part 4/4: Estimating Cost-Effectiveness vs Other Causes; What EA Should Do Next 2017-08-12T18:03:34.835Z · score: 8 (8 votes)
High Time For Drug Policy Reform. Part 3/4: Policy Suggestions, Tractability and Neglectedness 2017-08-11T15:17:40.007Z · score: 8 (8 votes)
High Time For Drug Policy Reform. Part 2/4: Six Ways It Could Do Good And Anticipating The Objections 2017-08-10T19:34:24.567Z · score: 13 (11 votes)
High Time For Drug Policy Reform. Part 1/4: Introduction and Cause Summary 2017-08-09T13:17:20.012Z · score: 20 (22 votes)
The marketing gap and a plea for moral inclusivity 2017-07-08T11:34:52.445Z · score: 20 (31 votes)
The Philanthropist’s Paradox 2017-06-24T10:23:58.519Z · score: 2 (8 votes)
Intuition Jousting: What It Is And Why It Should Stop 2017-03-30T11:25:30.479Z · score: 5 (11 votes)
The Unproven (And Unprovable) Case For Net Wild Animal Suffering. A Reply To Tomasik 2016-12-05T21:03:24.496Z · score: 15 (15 votes)
Are You Sure You Want To Donate To The Against Malaria Foundation? 2016-12-05T18:57:59.806Z · score: 30 (30 votes)
Is effective altruism overlooking human happiness and mental health? I argue it is. 2016-06-22T15:29:58.125Z · score: 28 (29 votes)

Comments

Comment by michaelplant on Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty · 2020-08-06T20:37:14.800Z · score: 3 (2 votes) · EA · GW

Glad you were impressed! Would welcome any suggestions on how to improve the analysis.

Thanks for clarifying. Yes, I understand that economists lean towards a desire satisfaction theory of well-being and development economists lean towards Sen-style objective list theories. We're in discussion with a development economist about whether and how to transform this into an article for a development econ journal, and there we expect to have to say a lot more to justify the approach. That didn't seem so necessary here: EAs tend to be quite sympathetic to hedonism and/or measuring well-being using SWB, and we've argued for that elsewhere, so we thought it more useful just to present the method.

Comment by michaelplant on Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty · 2020-08-06T16:53:50.663Z · score: 4 (3 votes) · EA · GW

Hello Jack, thanks for the comment. As you note, the document doesn’t attempt to address the issues you raised. We’re particularly interested to have people engage with the details of how we’ve done the analysis, although we recognise this will be far too much ‘in the weeds’ for most (even members of this august forum).

I’d like to reply to your comment though, seeing as you've made it. There are quite a few separate points you could be making and I’m not sure which you mean to press.

You wonder about the suitability of SWB scores in low-income settings and raise Sen’s adaptive preferences point.

One way to understand the adaptive preferences point is as an argument against hedonism: poor people are happy, but their lives aren’t going well, so happiness can’t be what matters. From this it would follow that SWB scores might not be a good measure of well-being anywhere, not just in low-income contexts. Two replies. First, I’m pretty sympathetic to hedonism: if people are happy, then I think their lives are going well. Considering adaptive preferences doesn’t pull me to revise that. Second, as an empirical aside, it’s not at all obvious that people do adapt to poverty: the IDInsight survey found the Kenyan villagers had life satisfaction of around 2/10. That’s much lower than average life satisfaction in Kenya of around 4.5. A quick gander at the worldwide distribution of life satisfaction scores (https://ourworldindata.org/happiness-and-life-satisfaction) tells you that poorer people are less satisfied than richer ones. The story might be interestingly different for measures of happiness (sometimes called ‘affect balance’).

Another way to understand the force of the adaptive preferences point is as being about what we owe one another. Here the idea is that we should help poor people even if doing so doesn’t improve their well-being (whatever well-being is) - the further thought being that it won’t improve their well-being because they’ve adapted. I don’t find this plausible. If I can provide resources for A or B, but helping A will have no impact on their well-being, whereas helping B will increase their well-being, I say we help B. (To pull out the intuition that the adaptive preferences point is really about normative commitments, note we might think it makes sense for people in unfavourable circumstances to change their views to increase their well-being, but that there’s something odious about not helping people because they’ve managed to adapt; it’s as if we’re punishing them for their ingenuity.)

A different concern one might have is that those in low-income contexts use scales very differently from those elsewhere: someone who says they are 4/10 but lives in poverty actually has a very different set of psychological states from someone who says they are 4/10 in the UK. If so, it would be a mistake to take these numbers at face value. The response to this problem is to have a theory of how and why people interpret subjective scales differently, so you can account for and adjust the scores: e.g. determine what the true SWB values are on a common scale. This is one of the most important issues not adequately addressed by current research. I’ve got a (long) paper on this I’ve nearly finished. The very short answer is that I think the answers are (cardinally) comparable, and this is because individuals try to answer subjective scales in the same way as everyone else in order to make themselves understood. On this basis, I think it’s reasonable to interpret SWB scores at face value.

Comment by michaelplant on EA reading list: population ethics, infinite ethics, anthropic ethics · 2020-08-03T16:33:23.760Z · score: 7 (4 votes) · EA · GW

I think population ethics and infinite ethics should be separated. They are different topics, although relevant to each other.

Comment by michaelplant on Utility Cascades · 2020-07-29T14:24:36.524Z · score: 11 (6 votes) · EA · GW

I enjoyed reading the paper but was unconvinced any serious problem was being raised (rather than merely a perception of a problem resulting from a misunderstanding).

Put very simply, the structure of the original case is that a person chooses option B instead of option A because new information makes option B look better in expectation. It then turns out that option A, despite having lower expected value, produced the outcome with higher value. But there's nothing mysterious about this: it happens all the time and provides no challenge to expected value theory or act utilitarianism. The fact that I would have won if I'd put all my money on number 16 at the roulette table does not mean I was mistaken not to do so.

Comment by michaelplant on Do research organisations make theory of change diagrams? Should they? · 2020-07-29T13:44:14.848Z · score: 15 (4 votes) · EA · GW

At HLI, we've found creating a Theory of Change (ToC) very useful. It was (at least for me) quite a painful process of making explicit various assumptions and uncertainties and then talking through them. I think if we hadn't done it explicitly we would (a) have made a less thoughtful plan and (b) have had different members of the team carrying around their own plans in their heads.

Going through a ToC process has also helped us to focus on meeting the needs of our target audiences. After developing our ToC, we sent out surveys to some of our key stakeholders to identify their concerns about subjective well-being measures and what new information would make them more likely to use them. Their responses provided the basis for our research agenda and the questions we have chosen to investigate this year.

We have a slightly more detailed version of our ToC diagram on our blog. Thanks for pointing out that it’s hard to find; we’ll think about putting it on a main page.

Comment by michaelplant on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-21T18:17:18.483Z · score: 2 (1 votes) · EA · GW

Hmm. Okay, that's fair; on re-reading I note the OP did discuss this at the start, but I'm still unconvinced. I think the context may make a difference. If you are speaking to a member of the public, I think my concern stands, because of how they will misinterpret the thoughtfulness of your prediction. If you are speaking to other predict-y types, I think this concern disappears, as they will interpret your statements the way you mean them. And if you're putting a set of predictions together into a calculation, not only is it useful to carry that precision through, but it's not as if your calculation will misinterpret you, so to speak.

Comment by michaelplant on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-20T09:51:55.755Z · score: 10 (7 votes) · EA · GW

I had a worry on similar lines that I was surprised not to see discussed.

I think the obvious objection to using additional precision is that this will falsely convey certainty and expertise to most folks (i.e. those outside the EA/rationalist bubble). If I say to a man in the pub either (A) "there's a 12.4% chance of famine in Sudan" or (B) "there's a 10% chance of famine in Sudan", I expect him to interpret me as an expert in (A) - how else could I be so precise? - even if I know nothing about Sudan and all I've read about discussing probabilities is this forum post. I might expect him to take my estimate more seriously than that of someone who knows about Sudan but not about conveying uncertainty.

(In philosophy of language jargon, the use of a non-rounded percentage carries a conversational implicature that you have enough information, by the standards of ordinary discourse, to be that precise.)

Comment by michaelplant on High stakes instrumentalism and billionaire philanthropy · 2020-07-20T09:33:01.156Z · score: 8 (5 votes) · EA · GW

I agree with this comment - thanks! A follow-up: can you say why political theorists accept high stakes instrumentalism (as opposed to just stating that they do)? It sounds like this is effectively a re-run of familiar debates between consequentialists and non-consequentialists (e.g. "can you kill one to save five? what about killing one to save a million?"), just wrapped in different language, so I'm wondering if something else is going on. I suppose I'm a bit surprised the view has no detractors - I imagine there are some (Kant?) who would hold the seemingly equivalent view that you can never kill one to save any number of others.

Comment by michaelplant on Problem areas beyond 80,000 Hours' current priorities · 2020-06-22T20:14:06.308Z · score: 25 (14 votes) · EA · GW

Thanks for this write-up. The list is quite substantial, which makes me think: do you have a list of problems you've considered, concluded are probably quite unpromising, and would therefore dissuade people from undertaking? I could imagine someone reading this and thinking "X and Y are on the list, so Z, which wasn't mentioned explicitly [but 80k would advise against], is also likely a good area".

Comment by michaelplant on Towards Donor Coordination Via Mechanism Design · 2020-06-22T09:23:02.998Z · score: 3 (2 votes) · EA · GW

Just a quick note. It would be helpful if, at the start, you explained who you think this post is for and/or its practical upshot. I skimmed through the first 30% and wasn't sure if this was a purely academic discussion or you were suggesting a way for donors to coordinate.

Comment by michaelplant on HLI’s Mental Health Programme Evaluation Project - Update on the First Round of Evaluation · 2020-06-19T10:08:22.120Z · score: 3 (2 votes) · EA · GW

A couple of quick replies.

First, all your comments on the weirdness of Western mental healthcare are probably better described as being about 'the weirdness of the US healthcare system' rather than anything to do with mental health specifically. Note they are mostly to do with insurance issues.

Second, I think one can always raise the question of whether it's better to (A) improve the best version of service/good X or (B) improve the distribution of existing versions of X. This also isn't specific to mental health: one might say to donors to AMF that they should instead be funding improvements in (say) health treatment in general or malaria treatment in particular. There's a saying I like, "the future is here, it just isn't very evenly distributed" - compare SpaceX launching rockets which can land themselves vs people not having clean drinking water. There seems to be very little we can say from the armchair about whether (A) or (B) is the more cost-effective option for a given X. I suspect that if there were a really strong 'pull' for goods/services to be provided, then we would already have 'solved' world poverty, which makes me think distribution is only weakly related to innovation.

Aside: I wonder if there is some concept of 'trickle-down' innovation at play, and whether it is relevantly analogous to 'trickle-down' economics.

Comment by michaelplant on HLI’s Mental Health Programme Evaluation Project - Update on the First Round of Evaluation · 2020-06-15T14:25:45.097Z · score: 2 (1 votes) · EA · GW

I'm not sure what you mean by going from 0 to 1 vs 1 to n. Can you elaborate? I take it you mean the challenge of going from no treatment to current best-practice treatment (in developing countries) vs improving best practice (in developed countries).

I don't have a cached answer on that question, but it's an interesting one. You'd need to make quite a few more assumptions to work through it, e.g. how much better MH treatment could be than the current best practice, how easy it would be to get it there, how fast this would spread, etc. If you'd thought through some of this, I'd be interested to hear it.

Comment by michaelplant on How to Measure Capacity for Welfare and Moral Status · 2020-06-15T09:24:55.751Z · score: 2 (1 votes) · EA · GW

Right. My thought is that we assume humans have the same capacity on average because, while there might be differences, we don't know which way they'll go, so they should 'wash out' as statistical noise. Pertinently, this same response doesn't work for animals because we really don't know what their maximum capacities are relative to ours.

FWIW, the analogue to my response here would be to say we can expect all chickens to have approximately the same capacity as each other, even if individual chickens differ. The claim isn't about humans per se, but about similarities borne out of genetics.

Comment by michaelplant on How to Measure Capacity for Welfare and Moral Status · 2020-06-09T11:10:29.701Z · score: 4 (2 votes) · EA · GW

Thanks for your response, but I don't think you're grasping the nettle of my objection. I agree with you that you and I both think we know something about the mental states of other adult humans and, further, human babies. I also think such assumptions are reasonable, if empirically unprovable. But that's not my point.

In short, my challenge is: articulate and defend the method you will use to determine how much more or less happy humans are than non-human animals in particular contexts - say, the average human vs the average factory-farmed chicken.

Here's what I think we can do with humans. We assume you and I have the same capacity for happiness. We assume we are able to learn about the experiences of others and communicate them via language, e.g. we've both stubbed our toes, but I haven't broken my leg, and when you say "breaking my leg is 10x worse" I can conclude that would be true for me too. Hence, when you say "I feel 2/10" or "I feel terrible" I might feel confident you mean the same things by those as I do.

What can we do with chickens? We really have no idea what chickens' capacities for happiness are - is it 1/10th, 1/100th, etc.? It doesn't seem at all reasonable to assume they are roughly the same as ours. The chicken cannot tell us how happy it is relative to its maximum, our maximum, or, indeed, tell us anything at all. Of course, we may have intuitions - what we might pejoratively call "tummy feelings" - about these things. Fine. But what method do we use to assess whether those intuitions are correct? The application of further intuitive reflection? Surely not. I cannot think of a justifiable empirical method to inform our priors. If you can explain why this project is not doomed, I would love to hear it! But I fear it is.

Comment by michaelplant on How to Measure Capacity for Welfare and Moral Status · 2020-06-05T09:46:27.995Z · score: 3 (2 votes) · EA · GW

Thanks for writing this up. It seems what you've done with the atomistic approach is state what, in principle, one would need to do, but not really wrestle with the difficulties and details of doing it. By analogy, it's a bit like you've said "if we want to get to space, we need to build a spaceship" but not said how to build a spaceship ("well, it would need to get into space, and carry people, ...").

I think it would help to spell out a particular issue. Suppose we think happiness, the intrinsic pleasurableness/displeasurableness of experiences, is one of the things that constitutes welfare. Okay, what proxy do we use for that? Happiness is a subjective experience, so no objective measure is possible. Of course, we have intuitions about the relative magnitudes of happiness in different animals, but what makes us think we're right, even approximately?

(I note I raised effectively the same concern on your previous post and you haven't (yet) replied to my latest comment. You linked me this paper, but it doesn't address my concern: the author surveys various "suffering calculators" but doesn't provide an account of how we would test whether some are more valid than others.)

Comment by michaelplant on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T21:18:13.641Z · score: 2 (1 votes) · EA · GW

Thanks for the thoughtful reply!

To fill out the details of what you're getting at, I think you're saying "the welfare level of an animal is X% of its capacity C. We're confident enough of both X and C in the given scenario for animal A that it's better to help animal A than animal B". That may be correct, but you're assuming that you can know the welfare levels because you know the percentage of the capacity. But then I can make the same claim again: why should we be confident we've got the percentage of the capacity right?

I agree we should, in general, use inference to the best explanation. I'm not sure we know how to do that when we don't have access to the relevant evidence (the private, subjective states) to draw inferences from. If it helps, try putting on the serious sceptic's hat and asking "okay, we might feel confident animal A is suffering more than animal B, and we do make these sorts of judgements all the time, but what justifies this confidence?". What I'd really like to understand (not necessarily from you - I've been thinking about this for a while!) is what chain of reasoning would go into that justification.

Comment by michaelplant on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T15:00:13.445Z · score: 12 (7 votes) · EA · GW

Thanks for writing this up - I thought this was a very philosophically high-quality forum post, both in terms of its clarity and familiarity with the literature, and have given it a strong upvote!

With that said, I think you've been too quick in responding to the first objection. An essential part of the project is to establish the capacities for welfare across species, but that's neither necessary nor sufficient to make comparisons - for that, we need to know about the actual levels of well-being for different entities (or, at least, the differences in their well-being). But knowing about the levels seems very hard.

Let me quickly illustrate with some details. Suppose chicken welfare has a range of +2 to -2 well-being levels, but for cows it's -5 to +5. Suppose further the average actual well-being levels of chickens and cows in agriculture are -1 and -0.5, respectively. Should we prevent one time-period of cow-existence or of chicken-existence? The answer is chicken-existence, all else equal, even though cows have a greater capacity.

Can you make decisions about what maximises well-being if you know what the capacities are but not the average levels? No. What you need to know are the levels. Okay, so can we determine what the levels, in fact, are? You say:

Of course, measuring the comparative suffering of different types of animals is not always easy. Nonetheless, it does appear that we can get at least a rough handle on which practices generally inflict the most pain, and several experts have produced explicit welfare ratings for various groups of farmed animals that seem to at least loosely converge

My worry is: what makes us think that we can even "get at least a rough handle"? You appeal to experts, but why should we suppose that the experts have any idea? They could all agree with each other and still be wrong. (Arguably) silly comparison: suppose I tell you a survey of theological experts reported that approximately 1 to 100 angels could dance on the head of a pin. What should you conclude about how many angels can dance on a pin? Maybe nothing. What you might want to know is what evidence those experts have to form their opinions.

I'm sceptical we can have evidence-based inter-species comparisons of (hedonic) welfare-levels at all.

Suppose hedonism is right and well-being consists in happiness. Happiness is a subjective state. Subjective states are, of necessity, not measurable by objective means. I might measure what I suppose are the objective correlates of subjective states, e.g. some brain functioning, but how do I know what the relationship is between the objective correlates and the subjective intensities? We might rely on self-reports to determine that relationship. That seems fine. However, how do we extend that relationship to beings that can't give us self-reports? I'm not sure. We can make assumptions (about the general relationship between objective brain states and subjective intensities) but we can't check if we're right or not. Of course, we will still form opinions here, but it's unclear how one could acquire expertise at all. I hope I'm wrong about this, but I think this problem is pretty serious.

If well-being consists in objective goods, e.g. friendship or knowledge, it might be easier to measure those, although there will be much apparent arbitrariness involved in operationalising these concepts.

There will be issues with desire theories too either way, depending on whether one opts for a mental-state or non-mental-state version, but that's a further issue I don't want to get into here.

Comment by michaelplant on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T12:18:48.177Z · score: 13 (10 votes) · EA · GW

Ben, could you elaborate on how important you think representativeness is? I ask, because the gist of what you're saying is that it was bad the leaders' priorities were unrepresentative before, which is why it's good there is now more alignment. But this alignment has been achieved by the priorities of the community changing, rather than the other way around.

If one thought EA leaders should represent the current community's priorities, then the fact the current community's priorities had been changed - and changed, presumably, by the leaders - would seem to be a cause for remorse, not celebration.

As a further comment, if representativeness is a problem, the simple way to solve it would be to invite more people to the leaders' forum to make it more representative. This seems easier than supposing current leaders should change their priorities (or their views on what the priorities should be for the community).

Comment by michaelplant on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T12:06:07.280Z · score: 43 (16 votes) · EA · GW

I share Denise's worry.

My basic concern is that Ben is taking the fact that there is high representativeness now to be a good thing while not seeming so worried about how this higher representativeness came about. This higher representativeness (as Denise points out) could well just be the result of people who aren't enthused by the current leaders' vision simply leaving. The alternative route, where the community changes its mind and follows the leaders, would be better.

Anecdotally, it seems like more of the first has happened (but I'd be happy to be proved wrong). Yet, if one thinks representativeness is good, achieving representativeness by having people who don't share your vision leave doesn't seem like a good result!

Comment by michaelplant on Reducing long-term risks from malevolent actors · 2020-05-07T10:03:11.938Z · score: 8 (4 votes) · EA · GW

Thanks for this write-up, I thought it was really interesting and not something I'd ever considered - kudos!

I'll now home in on the bit of this I think needs most attention. :)

It seems you think that one of the essential things is developing and using manipulation-proof measures of malevolence. If you were very confident we couldn't do this, how much of an issue would that be? I raise this because it's not clear to me how such measures could be created or deployed. It seems you have (1) self-reports, (2) other-reports, and (3) objective metrics, e.g. brain scans. If I were really sneaky, I would just lie or not take the test. Likewise, if I were really sneaky, I would be able to con others, at least for a long time - perhaps until I was in power. Regarding objective measures, there will be 'Minority Report'-style objections to actually using them in advance, even if they have high predictive power (which might be tricky, as it relies on collecting good data, which seems to require the consent of the malevolent).

The area where I see this sort of thing working best is in large organisations, such as civil services, where the organisation has control over who gets promoted. I'm less optimistic this could work for the most important cases, political elections, where there is no system that can enforce the use of such measures. Even where malevolence tests could be introduced, it's unclear how much of an innovation they would be: large organisations already have feedback processes, and the public already try to assess politicians for these negative traits.

It might be worth adding that the reason Myers-Briggs-style personality tests are, so I hear, more popular in large organisations than the (more predictive) "Big 5" personality test is that Myers-Briggs has no ostensibly negative dimensions. If you pass round a Big 5 test, people might score highly on neuroticism or low on openness and get annoyed. If this is the case, which seems likely, I find it hard to imagine that e.g. Google will insist that staff take a test they know will assess them for malevolence!

As a test for the plausibility of introducing and using malevolence tests, notice that we could already test for psychopathy but we don't. That suggests there are strong barriers to overcome.

Comment by michaelplant on Update from the Happier Lives Institute · 2020-05-05T18:21:54.792Z · score: 2 (1 votes) · EA · GW

Thanks very much for your support Sam, we are grateful for it! As we've discussed with you, we are also keen to see how thinking in terms of SWB illuminates the cause prioritisation analysis.

It's easier to see how it could do this in some areas than in others. As we're relying on self-report data, it's not obvious how we could use that to compare humans to non-humans (although one project is to think through whether this is really impossible). And as for comparing near-term to long-term interventions, these comparisons are plausibly not sensitive to one's measure of welfare anyway: the usual longtermist line is that long-term concerns 'swamp' near-term ones whichever way you look at it.

Comment by michaelplant on Update from the Happier Lives Institute · 2020-05-05T18:19:55.252Z · score: 5 (3 votes) · EA · GW

Thanks for this! Our position hasn’t changed much since the last post. We still plan to focus mostly on near-term (human) welfare maximisation, but we'd like to see if we can, in the next couple of years, do/say something useful about welfare maximisation in other areas (i.e. animals, the long term). We haven't thought much about what this would be yet: we want to develop expertise in the area that seems most useful (by our lights) before thinking about expanding our focus.

Speaking personally, I take what is effectively a worldview diversification approach to moral uncertainty (this is a change), although the rationale is different (I plan to write this up at some point). This, combined with my person-affecting sympathies, means I want to put most, but not all, of my efforts into helping humans in the near term.

Comment by michaelplant on How can I apply person-affecting views to Effective Altruism? · 2020-04-30T17:34:20.111Z · score: 4 (2 votes) · EA · GW

Yes, agree you could save existing animals. I'd actually forgotten until you jogged my memory, but I talk about that briefly in my thesis (chapter 3.3, p92) and suppose saving animals from shelters might be more cost-effective than saving humans (given a PAV combined with deprivationism about the badness of death).

Comment by michaelplant on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T09:59:19.933Z · score: 3 (2 votes) · EA · GW

I think you might not have clocked the OP's comment that the morally relevant beings are just those that exist whatever we do, which would presumably rule out concern for lives in the far future.*

*Pedantry: there could actually be future aliens who exist whatever we do now. Suppose some aliens will turn up on Earth in 1 million years and we've had no interaction with them. They will be 'necessary' from our perspective and thus the type of person-affecting view stated would conclude such people matter.**

**Further pedantry: if our actions changed their children, which they presumably would, it would just be the first generation of extraterrestrial visitors who mattered morally on this view.

Comment by michaelplant on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T09:50:12.374Z · score: 19 (12 votes) · EA · GW

I'm struggling to think of much written on this topic - I'm a philosopher and reasonably sympathetic to person-affecting views (although I don't assign them my full credence), so I've been paying attention to this space. One non-obvious consideration is whether to take an asymmetric person-affecting view (extra happy lives have no value, extra unhappy lives have negative value) or a symmetric person-affecting view (extra lives have no value).

If the former, one is pushed towards some concern for the long term anyway, as Halstead argues here, because there will be lots of unhappy lives in the future which it would be good to prevent from existing.

If the latter - which I think, after long reflection, is the more plausible version, even though it is prima facie more unintuitive - then that is practically sufficient, but not necessary, for concentrating on the near term, i.e. this generation of humans; animals won't, for the most part, exist whatever we choose to do. I say not necessary because one could, in principle, think all possible lives matter and still focus on near-term humans due to practical considerations.

But 'prioritise current humans' still leaves it wide open what you should do. The 'canonical' EA answer for how to help current humans is by working on global (physical) health and development. It's not clear to me that this is the right answer. If I can be forgiven for tooting my own horn, I've written a bit about this in this (now somewhat dated) post on mental health, the relevant section being "why might you - and why might you not - prioritise this area [i.e. mental health]".

Comment by michaelplant on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T09:29:27.896Z · score: 6 (3 votes) · EA · GW

Plausibly, foetuses will not be morally relevant on such a view, as they won't exist whatever we choose to do.

Comment by michaelplant on Coronavirus: how much is a life worth? · 2020-03-24T15:44:19.090Z · score: 2 (1 votes) · EA · GW

Yes, good point. I'm now inclined to think your and Paul F's analyses need to be combined in some way, though it's not immediately clear to me how.

He is indeed converting money into quality and quantity of health, not just quantity - my mistake.

Comment by michaelplant on Coronavirus: how much is a life worth? · 2020-03-24T15:36:21.613Z · score: 2 (1 votes) · EA · GW

An in-the-weeds methodological point: your analysis is arguably quite conservative because of where you place the 'neutral point' (equivalent to non-existence) on a 0-10 life satisfaction scale. You say:

Global life satisfaction averages 5.17/10 (as of 2018), making 4.5 years x 5.17/10 = 2.33 WALYs lost per death. An Australian National University model assumes 15 to 68 million pandemic deaths worldwide (in the first year), which would thus lose 35 to 158 million WALYs [...]
Between 2007 and 2011, global wellbeing (yellow line on chart) fell by nearly 0.2 life satisfaction points out of 10, then recovered. I will attribute all of this dip (blue area) to the financial crisis; and as it was mostly over a two-year period (2008–10), averaging 0.1/10 p.a., it totals about 0.2/10 = 0.02 WALYs lost per person worldwide, or 137 million WALYs overall; i.e. 0.9 to 3.9 times the impact of the deaths.

This counts 0/10 as equivalent to non-existence, i.e. it is not possible for respondents to say that their lives are worse than death.

It's unclear where to put the neutral point - an issue I flag in my DPhil thesis and which is noted in the Happier Lives Institute's Research Agenda. The other obvious place to put it is 5/10, on which the WALYs lost per death would be about 0.08 (4.5 years x 0.17/10), rather than 2.33, and the well-being value of saving lives would be smaller than the well-being value of the economic loss.

As a point about sensitivity, then: the further the neutral point is above 0/10, the smaller the well-being loss attributed to deaths, and so the easier it is for your conclusion to hold.
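To make the sensitivity concrete, here is a minimal illustrative sketch (mine, not the OP's), reusing the post's figures of 4.5 years lost per death, average life satisfaction of 5.17/10, and 15-68 million deaths, and varying only the neutral point:

```python
# Illustrative only: sensitivity of the WALY loss from deaths to the neutral point.
# Figures (4.5 years lost per death, 5.17/10 average life satisfaction,
# 15-68 million deaths) are taken from the post; only the neutral point varies.

years_lost_per_death = 4.5
average_life_satisfaction = 5.17  # on a 0-10 scale
deaths_low, deaths_high = 15e6, 68e6

for neutral_point in (0, 2, 5):
    walys_per_death = years_lost_per_death * (average_life_satisfaction - neutral_point) / 10
    print(f"neutral point {neutral_point}/10: {walys_per_death:.2f} WALYs per death, "
          f"{deaths_low * walys_per_death / 1e6:.0f}-{deaths_high * walys_per_death / 1e6:.0f} "
          f"million WALYs from deaths")
```

On the post's figures, a neutral point of 0/10 gives the quoted 35-158 million WALYs lost to deaths, whereas a neutral point of 5/10 gives only around 1-5 million, well below the ~137 million attributed to the economic dip.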

Comment by michaelplant on Coronavirus: how much is a life worth? · 2020-03-24T13:53:50.777Z · score: 4 (2 votes) · EA · GW

Hello Ben. Thanks for writing this up and showing how the value of outcomes can be compared using surveyed well-being data.

I've been thinking along somewhat similar lines about how to check whether the medicine might be worse than the disease.

Your analysis doesn't get as far as telling us if governments' policies are the right ones (note: this is not a criticism - I didn't take you to be trying to address this issue).

You observe that COVID has some negative economic consequences, and if we make such and such assumptions, those economic consequences are worse (in terms of well-being) than the immediate health consequences.

To work out if govts are choosing the right policies, we need a counterfactual comparison between (1) what would happen on the basis of one policy, e.g. no 'suppression' (shutting down of businesses, movement restrictions, etc.) and (2) some other policy, e.g. suppression. Presumably COVID19 will have some negative economic consequences; the question is whether various strategies that save lives now at a cost of the economy are better overall than those that save fewer lives now but keep the economy stronger.

I think a neater way to get a handle on this action-guiding question is to realise that a smaller economy means there is less (public and private) money available to fund life-saving health services later, and to run the numbers making a comparison in terms of years of life (as opposed to comparing quality vs quantity of life, which is what your analysis does and is less apples-to-apples). Paul Frijters, a health economics professor at LSE who works on well-being, has two articles looking at this.

http://clubtroppo.com.au/2020/03/18/has-the-coronavirus-panic-cost-us-at-least-10-million-lives-already/

http://clubtroppo.com.au/2020/03/21/the-corona-dilemma

I think he's overlooked a couple of things in his calculations and I'm working up my own numbers (which may well end up telling the same story).

Comment by michaelplant on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-19T14:22:26.557Z · score: 5 (3 votes) · EA · GW

To what extent does GW base its recommendations on cost-effectiveness estimates?

Some parts of the GW website seem to argue (or caution) against using them.* However, if you're not using cost-effectiveness estimates, what criterion is being used instead?

For what it's worth, I think GW (and many others) should be trying to use cost-effectiveness estimates. One can distinguish implicit vs explicit estimates, 'naive' vs 'sophisticated' estimates, and estimates of 'direct' effects vs total effects, so maybe GW objects to some of these but not others; it would be helpful to know which ones.

*In an old (2011) blog post, Holden wrote

"we are arguing that focusing on directly estimating cost-effectiveness is not the best way to maximize cost-effectiveness"

and the cost-effectiveness part of GW website says

"We do not make charity recommendations solely on the basis of cost-effectiveness calculations "

Comment by michaelplant on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-19T14:05:01.683Z · score: 6 (4 votes) · EA · GW

Relatedly, what do you say to donors who wonder if their money would be better spent on (a) the long term or (b) animal welfare?

Comment by michaelplant on Against anti-natalism; or: why climate change should not be a significant factor in your decision to have children · 2020-02-28T13:16:36.021Z · score: 3 (2 votes) · EA · GW
More specifically, where else can I find (1) lists of the bazillion positive and negative externalities of an additional child and (2) some argument -- however weak -- that takes us beyond agnosticism on the question whether an additional child is overall a *net* positive or negative externality

Hello Dominic,

I do some of this in my DPhil thesis, in chapter 2. I'm pretty uncertain whether the Earth is under- or overpopulated, whatever one's views on population ethics.

Comment by michaelplant on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-14T12:33:30.496Z · score: 2 (10 votes) · EA · GW

I think this is likely too critical of this approach, given that this sort of thing already happens and works. Arguably, the mass-joining of Labour by Momentum is exactly 'entryism' of this sort. Such entryism was perhaps in bad faith, but conspicuously (a) this does seem to have changed the UK political landscape and (b) there haven't been serious attempts to stop it. I don't have a strong view on this, but it doesn't seem unreasonable for someone to claim "this happens anyway, it won't make things worse if we do it, we might as well do it too".

Comment by michaelplant on Is mindfulness good for you? · 2019-12-30T14:25:51.036Z · score: 7 (4 votes) · EA · GW

Hello John, thanks very much for doing this careful investigation. I was wondering: what makes you think there isn't also an overestimate of the effect sizes for CBT and antidepressants? Perhaps the meta-analyses on those had controlled for such biases, but you didn't mention whether they had.

Comment by michaelplant on Logarithmic Scales of Pleasure and Pain (@Effective Altruism NYC) · 2019-11-19T18:09:36.606Z · score: 3 (2 votes) · EA · GW

I haven't watched the talk, but I have just left a long comment on the original article, Logarithmic Scales of Pleasure and Pain.

Here's the TL;DR of my comment:

I don't think this post provides an argument that we should interpret pleasure/pain scales as logarithmic. What's more, whether or not this is true is not necessary for the post's practical claim - which is roughly that "the best/worst things are much better/worse than most people think".

Here's the link to my comment. I meant to write up my thoughts 3 months ago when the original article was posted, but never got around to it.

Comment by michaelplant on Logarithmic Scales of Pleasure and Pain: Rating, Ranking, and Comparing Peak Experiences Suggest the Existence of Long Tails for Bliss and Suffering · 2019-11-19T18:04:36.659Z · score: 4 (3 votes) · EA · GW

TL;DR I don't think this post provides an argument that we should interpret pleasure/pain scales as logarithmic. What's more, whether or not this is true is not necessary for the post's practical claim - which is roughly that "the best/worst things are much better/worse than most people think".

Thanks for writing this up; sorry not to have got around to it sooner.

I think there are two claims that need to be carefully distinguished.

(A) that the relationship between actual and reported pleasure(/pain) is not linear but instead follows some other relationship, e.g. a logarithmic function where a 1-unit increase in self-reported pleasure represents a ten-fold increase in actual pleasure.

(B) whether the best/worst experiences that some people have are many times more intense than other people (who haven't had those experiences) assume they are.

I point this out because you say

the best way to interpret pleasure and pain scales is by thinking of them as logarithmic compressions of what is truly a long-tail. The most intense pains are orders of magnitude more awful than mild pains (and symmetrically for pleasure). [...]
Since the bulk of suffering is concentrated in a small percentage of experiences, focusing our efforts on preventing cases of intense suffering likely dominates most utilitarian calculations.

The idea, I take it, is that if we thought the relationship between self-reported and actual pleasure(/pain) was linear, but it turns out it was logarithmic, then the best(/worst) experiences are much better(/worse) than we expected they were, because we'd been using the wrong scale.

However, I don't think you've provided (any?) evidence that (A) is true (or that it's true but we thought it was false). What's more, (B) is actually quite plausible by itself and you can claim (B) is true without needing (A) to be true.

Let me unpack this a bit.

(A) is a claim about how people choose to use self-reported scales. The idea is that people have experiences of a certain intensity they can distinguish for themselves in cardinal units, e.g. you can tell (roughly) how many perceivable increments of pleasure one experience gives you vs the next. A further question is how people choose to report these intensities when people give them a scale, say a 0-10 scale.

This reporting could be linear, logarithmic, etc. Indeed, people could choose to report any way they want to. It seems most likely people use a linear reporting function because that's the helpful way to use language to convey how you feel to the person asking. I won't get stuck into this here, but I say more about it in my PhD thesis at chapter 4, section 4.

Hence, on your pleasure/pain scales, when you contrast 'intuitive' with 'long-tailed' scales, what I think you mean is that the intuitive scale is really 'reported' pleasure and the 'long-tailed' scale is 'actual' pleasure, i.e. your claim is that there is a logarithmic relationship between reported and actual pleasure. I note you don't provide evidence that people generally use scales this way. Regarding the stings scale, that just is a logarithmic scale by construction, where going from a 1 to a 2 on the scale represents a 10-times increase in actual pain. That doesn't show we have to report pleasure using log scales, or that we do, just that the person who constructed that scale chose to build it that way. In fact, we can only use log pleasure/pain scales if we can somehow measure pain/pleasure on an arithmetic scale in the first place, and then convert from those numbers to a log scale, which requires that people are able to construct arithmetic pleasure/pain scales anyway.

(You might wonder if people can know, on an arithmetic scale, how much pleasure/pain they feel. However, if people really have no idea about this, then it follows they can't intelligibly report their pleasure/pain at all, whatever scale they are using.)

Regarding (B), note that claims such as "the worst stings are 1000x worse than the average person expects" can be true without it needing to be the case that people have misunderstood how other people tend to use pleasure/pain scales. For instance, I could alternatively claim that the relationship between reported and actual pain is linear, but that people's predictions are just misinformed - e.g. torture is actually much worse than they thought. For comparison, if I claim "the heaviest building in the world weighs 1000x more than most people think it weighs", I don't need to say anything about the relationship between reports of perceived weight and actual weight.

Hence, if you want to claim "experiences X and Y are much better/worse than we thought", just claim that without getting into distracting stuff about reported vs actual scale use!

(P.S. The Fechner-Weber stuff is a red herring: that's about the relationship between increases in an objective quantity and subjective perceptions of increases in that quantity. That's different from the relationship between a reported subjective quantity and the actually experienced subjective quantity. Plausibly the former relationship is logarithmic, but one shouldn't directly infer from that that the latter relationship is logarithmic too.)

Comment by michaelplant on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-13T12:55:40.322Z · score: 9 (5 votes) · EA · GW

Thanks for writing this up - I found it helpful. I'm just trying to summarise this in my head and have some questions.

To get the claim that the best interventions are much better than the rest, don't you need to claim that interventions follow a (very) fat-tailed distribution, rather than the claim there are lots of interventions? If they were normally distributed, then (say) bednets would be a massive outlier in terms of effectiveness, right? Do you (or does someone else) have an argument that interventions should be heavy-tailed?

About predicting effectiveness, it seems your conclusion should be one of epistemic modesty about hard-to-quantify interventions, not that we should never think they are better. The thought seems to be that people are bad at predicting the effectiveness of interventions in general, but we can at least check the easy-to-quantify predictions to overcome our bias, whereas we cannot do this for the hard-to-quantify ones. The implication seems to be that we should discount the naive cost-effectiveness of systemic interventions to account for this bias. But 'sophisticated' estimates of cost-effectiveness for hard-to-quantify interventions might still turn out to be better than those for simple interventions. Hence it's a note of caution about estimation, not a claim that, in fact, hard-to-quantify interventions are (always or generally) less cost-effective.


Comment by michaelplant on The (un)reliability of moral judgments: A survey and systematic(ish) review · 2019-11-01T16:11:34.726Z · score: 3 (2 votes) · EA · GW

Okay, that makes more sense. You could have a systematic review which unambiguously pointed to one conclusion, so perhaps you should add something like what you've just said, i.e. that you're trying to report the findings without drawing an overall conclusion (although I don't know why someone would avoid drawing an overall conclusion if they thought there was one). And again, it would be helpful to add that there doesn't seem to be a consensus on this point (and possibly that it 'falls between the gaps' of various disciplines).

Comment by michaelplant on The (un)reliability of moral judgments: A survey and systematic(ish) review · 2019-11-01T12:42:25.804Z · score: 8 (4 votes) · EA · GW

A couple of very general suggestions to aid the reader - I've only read the summary. Given the length of the post, could you add a line or two to your summary to say what conclusion you're arguing for? Reading the summary, I get what the topic is, but not what your take is. It would also be good if you could orientate the reader as to where this fits in the literature, e.g. what the consensus in the field is and whether you are agreeing with it.

Comment by michaelplant on Oddly, Britain has never been happier · 2019-10-23T18:08:42.096Z · score: 2 (1 votes) · EA · GW

I also thought the World Happiness Survey data looked flat, but it has gone up. 0.25/10 is not to be sniffed at.

The WHS has a much smaller sample size - around 1,000 people per year - whereas the Office for National Statistics asks around 300,000 people a year. The ONS data also show a rise of about 0.3/10 between 2011 and 2019 (https://www.ons.gov.uk/peoplepopulationandcommunity/wellbeing/datasets/headlineestimatesofpersonalwellbeing).

Comment by michaelplant on The ITN framework, cost-effectiveness, and cause prioritisation · 2019-10-18T14:35:41.416Z · score: 3 (2 votes) · EA · GW

I should just flag that I've put a post on this topic on the forum too, albeit one that doesn't directly reply to John but addresses many of the points raised in the OP and in the comments.

I will make a direct reply to John on one issue. He suggests:

"1. We quantify importance to neglectedness ratios for different problems."

I don't think this is a useful heuristic: I don't see why problems with a higher scale:neglectedness ratio should be a higher priority. There are two issues with this. One is that problems with no resources going towards them will score infinitely highly on this schema. Another is that delineating one 'problem' from another is arbitrary anyway.

Let's illustrate what happens when we put these together. Suppose we’re talking about the cause of reducing poverty and, suppose further, it happens to be the case that it’s just as cost-effective to help one poor person as another. As a cause, quite a lot of money goes to poverty, so let’s assume poverty scores badly (relative to our other causes) on this scale/neglectedness rating. I pick out person P, who is currently not receiving any aid, and declare that ‘cause P’ – helping person P – is entirely neglected. Cause P now has an infinite score on scale/neglectedness and suddenly looks very promising via this heuristic. This is perverse as, by stipulation, helping P is just as cost-effective as helping any other person in poverty.
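A toy calculation (mine, with made-up numbers) shows how the ratio degenerates as the resources already devoted to a 'cause' shrink, even while cost-effectiveness is held fixed:

```python
# Toy numbers, for illustration only: the importance-to-neglectedness ratio
# explodes as existing spending on a "cause" goes to zero, even though helping
# any individual poor person is, by stipulation, equally cost-effective.

def importance_over_spending(importance: float, current_spend: float) -> float:
    return importance / current_spend if current_spend > 0 else float("inf")

# Poverty as a whole: large importance, but also large existing spending.
print(importance_over_spending(importance=1e9, current_spend=1.5e11))  # small ratio

# "Cause P": helping one particular person nobody is currently helping.
print(importance_over_spending(importance=1e3, current_spend=0.0))     # inf
```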

Comment by michaelplant on The ITN framework, cost-effectiveness, and cause prioritisation · 2019-10-18T14:25:05.305Z · score: 3 (2 votes) · EA · GW
Take technical AI safety research as an example. I'd have trouble directly estimating "How much good would we do by spending $1000 in this area", or sanity checking the result. I'd also have trouble with "What % of this problem would we solve by spending another $100?"

Hmmm. I don't really see how this is any harder than, or different from, your proposed method, which is to figure out how much of the problem would be solved by increasing spending by 10%. In both cases you've got to do something like work out how much money it would take to 'solve' AI safety, and then play with that number.

Comment by michaelplant on Understanding and evaluating EA's cause prioritisation methodology · 2019-10-17T14:07:55.146Z · score: 4 (3 votes) · EA · GW

Hello Risto,

Thanks for this. That's a good question. I think it partially depends on whether you agree with the above analysis. If you think it's correct that, when we drill down into it, evaluating problems (aka 'causes') by S, N, and T is just equivalent to evaluating the cost-effectiveness of particular solutions (aka 'interventions') to those problems, then that settles the mystery of what the difference really is between 'cause prioritisation' and 'intervention evaluation' - in short, they are the same thing and we were confused if we thought otherwise. However, if someone thought there was a difference, it would be useful to hear what it is.

The further question, if cause prioritisation is just the business of assessing particular solutions to problems, is: what are the best ways to go about picking which particular solutions to assess first? Do we just pick them at random? Is there some systematic approach we can use instead? If so, what is it? Previously, we thought we had a two-step method: 1) do cause prioritisation, 2) do intervention evaluation. If they are the same, then we don't seem to have much of a method to use, which feels pretty dissatisfying.

FWIW, I feel inclined towards what I call the 'no shortcuts' approach to cause prioritisation: if you want to know how to do the most good, there isn't a 'quick and dirty' way to tell, as it were from 30,000 ft, which problems are most promising. You've just got to get stuck in and (intuitively) estimate particular different things you could do. I'm not confident that we can really assess things 'at the problem level' without looking at solutions, or that we can appeal to e.g. scale or neglectedness by themselves and expect that to work very well. A problem can be large and neglected because it's intractable, so we can only make progress on cost-effectiveness by getting 'into the weeds', looking at particular things we can do, and evaluating them.

Comment by michaelplant on Defending the Procreation Asymmetry with Conditional Interests · 2019-10-14T19:54:53.627Z · score: 4 (3 votes) · EA · GW

Thanks for putting this up here. One major and three minor comments.

First, and probably most importantly, I don't see how this line of reasoning gets you an asymmetry. If I understand it correctly, the idea is that people need to actually exist to have interests, so if people do or will exist, we can say existence will be good/bad for them. But that gets you to actualism, it seems, not an asymmetry. If X would have a bad life, were X to exist, I take it we shouldn't create X. But then why, if X would have a good life, were X to exist, do we not have reason to create X? You say you're 'agnostic' about whether those who would have good lives have an interest in existing, but I don't think you give a reason for this agnosticism, which would be the crucial thing to do.

Second, I didn't really understand the explication of Meacham's view - you said it 'solves' a cavalcade of issues in population ethics but didn't spell out how it actually solves them. I'm also not sure if your view is different from Meacham's and, if so, how.

Third, it would be useful if you could spell out what you take (some of) the practical implications of your view to be.

Fourth, because you get stuck into the deep end quite quickly, I wonder if you should add a note that this is a relatively more 'advanced' forum post.

Comment by michaelplant on Does improving animal rights now improve the far future? · 2019-09-16T14:30:22.605Z · score: 3 (2 votes) · EA · GW
To illustrate, suppose we have two (finite or infinite) sequences representing the amount of suffering in our sphere of influence at each point in time, but we make earlier progress on moral circle expansion in one so the amount of suffering in our sphere of influence is reduced by 1 at each step in that sequence compared to the other;

Just to say I really liked this point, which I think applies equally to focusing on the correct account of value (as opposed to who the value-bearers are, which is this point).

Comment by michaelplant on Movement Collapse Scenarios · 2019-09-02T21:39:24.003Z · score: 32 (14 votes) · EA · GW

Is putting some non-trivial budget into cash prizes for arguments against what you do the only way to show you're self-critical? Your statement suggests you believe something like that. But that doesn't seem the only way to show you're self-critical. I can't think of any other organisation that has ever done that, so if it is the only way to show you're self-critical, that suggests no organisation (I've heard of) is self-critical, which seems false. I wonder if you're holding CEA to a peculiarly high standard; would you expect MIRI, 80k, the Gates Foundation, Google, etc. to do the same?

Comment by michaelplant on Uncertainty and sensitivity analyses of GiveWell's cost-effectiveness analyses · 2019-09-02T10:16:06.572Z · score: 28 (11 votes) · EA · GW

Despite your reservations, I think it would actually be very useful for you to input your best-guess inputs (and it's likely to be more useful for you to do it than an average EA, given you've thought about this more). My thinking is this. I'm not sure I entirely followed the argument, but I took it that the thrust of what you're saying is "we should do uncertainty analysis (use Monte Carlo simulations instead of point estimates) as our cost-effectiveness estimates might be sensitive to it". But you haven't shown that GiveWell's estimates are sensitive to a reliance on point estimates (have you?), so you haven't (yet) demonstrated that it's worth doing the uncertainty analysis you propose after all. :)

More generally, if someone says "here's a new, really complicated methodology we *could* use", I think it's incumbent on them to show that we *should* use it, given the extra effort involved.

Comment by michaelplant on How to Make Billions of Dollars Reducing Loneliness · 2019-08-27T21:45:06.308Z · score: -1 (3 votes) · EA · GW

Well, how about starting "Tinder for sparerooms"?

Comment by michaelplant on Ask Me Anything! · 2019-08-19T14:55:47.849Z · score: 8 (6 votes) · EA · GW

I note your main project is writing a book on longtermism. Would you like to see the EA movement going in a direction where it focuses exclusively, or almost exclusively, on longtermist issues? If not, why not?

To explain the second question, it would seem answering 'no' to the first question would be in tension with advocating (strong) longtermism.

Comment by michaelplant on 'Longtermism' · 2019-08-02T16:01:01.453Z · score: 2 (1 votes) · EA · GW
shows a major problem

You mean, shows a major finding, no? :)