Posts

Life Satisfaction and its Discontents 2020-09-25T07:54:58.998Z · score: 61 (29 votes)
Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty 2020-08-03T16:17:32.230Z · score: 79 (38 votes)
Update from the Happier Lives Institute 2020-04-30T15:04:23.874Z · score: 82 (44 votes)
Understanding and evaluating EA's cause prioritisation methodology 2019-10-14T19:55:28.102Z · score: 37 (19 votes)
Announcing the launch of the Happier Lives Institute 2019-06-19T15:40:54.513Z · score: 124 (85 votes)
High-priority policy: towards a co-ordinated platform? 2019-01-14T17:05:02.413Z · score: 23 (10 votes)
Cause profile: mental health 2018-12-31T12:09:02.026Z · score: 108 (66 votes)
A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare 2018-10-25T15:48:03.377Z · score: 65 (45 votes)
Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was 2018-05-23T10:30:43.748Z · score: 65 (55 votes)
Could I have some more systemic change, please, sir? 2018-01-22T16:26:30.577Z · score: 24 (19 votes)
High Time For Drug Policy Reform. Part 4/4: Estimating Cost-Effectiveness vs Other Causes; What EA Should Do Next 2017-08-12T18:03:34.835Z · score: 8 (8 votes)
High Time For Drug Policy Reform. Part 3/4: Policy Suggestions, Tractability and Neglectedess 2017-08-11T15:17:40.007Z · score: 8 (8 votes)
High Time For Drug Policy Reform. Part 2/4: Six Ways It Could Do Good And Anticipating The Objections 2017-08-10T19:34:24.567Z · score: 13 (11 votes)
High Time For Drug Policy Reform. Part 1/4: Introduction and Cause Summary 2017-08-09T13:17:20.012Z · score: 22 (23 votes)
The marketing gap and a plea for moral inclusivity 2017-07-08T11:34:52.445Z · score: 20 (31 votes)
The Philanthropist’s Paradox 2017-06-24T10:23:58.519Z · score: 2 (8 votes)
Intuition Jousting: What It Is And Why It Should Stop 2017-03-30T11:25:30.479Z · score: 5 (11 votes)
The Unproven (And Unprovable) Case For Net Wild Animal Suffering. A Reply To Tomasik 2016-12-05T21:03:24.496Z · score: 16 (16 votes)
Are You Sure You Want To Donate To The Against Malaria Foundation? 2016-12-05T18:57:59.806Z · score: 30 (30 votes)
Is effective altruism overlooking human happiness and mental health? I argue it is. 2016-06-22T15:29:58.125Z · score: 28 (29 votes)

Comments

Comment by michaelplant on Life Satisfaction and its Discontents · 2020-10-21T21:33:52.539Z · score: 4 (2 votes) · EA · GW

That's a nice point. What life satisfaction views require, more specifically, is not just that the entity thinks about its life as a whole, but that it thinks about its life as a whole and makes a judgement about how its life is going overall. It's rather implausible that animals do the latter, which means they have no well-being on this theory.

Comment by michaelplant on Evidence on correlation between making less than parents and welfare/happiness? · 2020-10-15T20:46:05.989Z · score: 3 (2 votes) · EA · GW

The most recent worldwide study on income and subjective well-being is Jebb et al. (2018). FWIW, they find there are "satiation" points for the effect of income on SWB, measured as happiness, positive affect, and negative affect, nearly everywhere, but that the satiation point is often higher than $75k.

Comment by michaelplant on TIO: A mental health chatbot · 2020-10-14T19:18:07.140Z · score: 9 (4 votes) · EA · GW

Hello Sanjay, thanks both for writing this up and actually having a go at building something! We did discuss this a few months ago but I can't remember all the details of what we discussed.

First, is there a link to the bot so people can see it or use it? I can't see one.

Second, my main question for you - sorry if I asked this before - is: what is the retention for the app? When people ask me about mental health tech, my main worry is not whether it might work if people used it, but whether people actually want to use it, given the general rule that people try apps once or twice and then give up on them. If you can build something people want to keep using and provide that service cheaply, it would very likely be highly cost-effective.

I'm not sure it's that useful to create a cost-effectiveness model based on the hypothetical scenario where people use the chatbot: the real challenge is getting people to use it. It's a bit like me pitching a business to venture capitalists by saying "if this works, it'll be the next Facebook", to which they would say "sure, now tell us why you think it will be the next Facebook".

Third, I notice your worst-case scenario is that the effect lasts 0.5 years, but I'd expect using a chatbot to only make me feel better for a few minutes or hours, so unless people use it many times, I'd expect the impact to be slight. Quick maths: a 1-point increase on a 0-10 happiness scale for 1 day is about 0.003 happiness life-years.
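To spell out that quick maths (a minimal sketch, taking a "happiness life-year" to be 1 scale-point sustained for a full year):

1 point × (1 day / 365 days per year) ≈ 0.0027 ≈ 0.003 happiness life-years

On these numbers, a user would need on the order of a hundred such days for the effect to add up to even a third of a happiness life-year.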

Comment by michaelplant on Comments on “Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty” · 2020-10-13T13:19:02.906Z · score: 3 (2 votes) · EA · GW

Okay, we're on the same page on all of this. :) A further specific empirical project would involve trying to understand population dynamics in the locations EAs are considering.

Comment by michaelplant on [Link] How understanding valence could help make future AIs safer · 2020-10-09T10:43:09.625Z · score: 3 (2 votes) · EA · GW

There are 10 reasons here, but isn't there just one key point: if we could explain to an AGI what happiness is, then we could get it to create more happiness (or, at least, not create more unhappiness)? I don't mean to sound like I'm dismissing this - this is an important and laudable goal - I'm wondering if I'm missing something.

Comment by michaelplant on If you like a post, tell the author! · 2020-10-08T09:36:50.476Z · score: 10 (7 votes) · EA · GW

In accordance with the post: I thought this was useful. As an old-time forum hack, I often have people say they feel too scared to post here because all you seem to get is people trying to destroy your ideas. It shouldn't be the case that the only people brave enough to post here are the types who score low in agreeableness (such as yours truly).

Comment by michaelplant on What actually is the argument for effective altruism? · 2020-10-07T11:45:20.374Z · score: 9 (3 votes) · EA · GW

If your goal is to do X, but you're not doing as much as you can of X, you are failing (with respect to X).

But your claim is more like "If your goal is to do X, you need to do Y, otherwise you will not do as much of X as you can". The Y here is "the project of effective altruism". Hence there needs to be an explanation of why you need to do Y to achieve X. If X and Y are the same thing, we have a tautology ("If you want to do X, but you do not-X, you won't do X").

In short, it seems necessary to say what is distinctive about the project of EA.

Analogy: say I want to be a really good mountain climber. Someone could say: oh, if you want to do that, you need to "train really hard, invest in high-quality gear, and get advice from pros". That would be helpful, specific advice about the right means to achieve my end. Someone who says "if you want to be good at mountain climbing, follow the best advice on how to be good at mountain climbing" hasn't yet told me anything I don't already know.

Comment by michaelplant on Sortition Model of Moral Uncertainty · 2020-10-07T10:58:24.183Z · score: 4 (3 votes) · EA · GW

Regarding stakes, I think the OP's point is that it's not obvious that being sensitive to stakes is a virtue of a theory, since it can lead to low-credence, high-stakes theories "swamping" the others, and that seems, in some sense, unfair. A bit like if your really pushy friend always decides where your group of friends goes for dinner, perhaps. :)

I'm not sure your point about money pumping works, at least as stated: you're talking about a scenario where you lose money over successive choices. But what we're interested in is moral value, and the sortition model will simply deny there's a fixed amount of money in the envelope each time one 'rolls' to see what one's moral view is. It's more like there's $10 in the envelope at stage 1, $100 at stage 2, $1 at stage 3, etc. What this brings out is the practical inconsistency of the view. But again, one might think that's a theoretical cost worth paying to avoid other theories' costs, e.g. fanaticism.

I rather like the sortition model - I don't know if I buy it, but it's at least interesting and one option we should have on the table - and I thank the OP for bringing it to my attention. I would flag that the "worldview diversification" model of moral uncertainty has a similar flavour: you divide your resources into different 'buckets' depending on the credence you have in each theory. See also the bargaining-theoretic model, which treats moral uncertainty as a problem of intra-personal moral trade. These two models also avoid fanaticism and leave one open to practical inconsistency.

Comment by michaelplant on Comments on “Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty” · 2020-10-05T16:01:04.359Z · score: 3 (2 votes) · EA · GW

On moral value as a linear function of well-being and comparability of SWB measures across different income settings

As you allude to, there are two issues here. If I think person A going from 0/10 to 1/10 life satisfaction has greater moral value than person B going from 9/10 to 10/10, that might be because (1) I think each has the same increase in well-being, but I want to give extra weight to the worse off. This is the prioritarian point you say you are not making.

The alternative, (2), is that I think A really has had a bigger increase in well-being than B, even though both have reported a 1-unit change in life satisfaction. (2) raises a concern about whether the subjective scales are cardinally comparable. This isn't a moral problem so much as a scientific one of measurement: technically, the issue is whether numerical scores from subjective self-reports are cardinally comparable. I've got a working paper on this topic (not public apart from this link) where I delve into this and conclude subjective scales are likely cardinally comparable. The basic issue here, I think, is how people use language when interpreting survey questions; not much seems to have been written about it. With regard to your point about "comparability of SWB measures across different income settings", the document I linked to provides a rationale for why I suspect they are comparable.

Comment by michaelplant on Comments on “Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty” · 2020-10-05T15:58:52.762Z · score: 3 (2 votes) · EA · GW

On totalism and births averted per life saved

As you develop this methodology further, I think it’s important that you account for other moral views, most notably totalism. As you’re aware, totalism is a popular view (especially in EA) and, depending on how we ought to respond to moral uncertainty, we might think that totalism (or something similar) dominates our decision calculus when acting under moral uncertainty (Greaves and Ord 2017). I think it would be valuable to know what a similar totalist analysis yields.

I agree it’s important to see the value of our actions is sensitive to concerns about population ethics, especially in this case where it seems it could make such a difference. A few comments.

First, it’s worth noting all views of population ethics will be somewhat sensitive to the issue of how saving lives affects total population size. This is because whether there are more or fewer people now has, arguably, an impact on the well-being of everyone else (present and future). Many people seem to think the Earth is overpopulated, in the sense that adding people now is overall worse. There are a few different ways of thinking about this but one general practical implication is that the worse it is to add people (because you want a smaller population) the worse it will also be to save lives. See Greaves (2015) analysis and Plant (2019, chapter 2) which is an extension of Greaves’ paper.

Second, I agree that if you’re thinking about how mortality rates affect fertility, this will be particularly important on totalism in this context, because totalism gives so much weight to creating new lives, although it will apply to other views of population ethics too.

Third, when trying to understand what the "lives saved:births averted" ratio is, what's relevant is not just mortality or fertility rates by themselves, but the combination of the two. If parents are trying to have a set number of children who survive to adulthood, then reducing mortality might not change the total number of future people much, because parents adjust their fertility. I think this is a topic for further work and I don't claim expertise on the population dynamics in any particular context.

Comment by michaelplant on What actually is the argument for effective altruism? · 2020-09-28T14:56:18.592Z · score: 17 (9 votes) · EA · GW

Interesting write-up, thanks. However, I don't think that's quite the right claim. You said:

The claim: If you want to contribute to the common good, it’s a mistake not to pursue the project of effective altruism.

But this claim isn't true. If I only want to make a contribution to the common good, but I'm not at all fussed about doing more good rather than less (given whatever resources I'm deploying), then I don't have any reason to pursue the project of effective altruism, which you say is searching for the actions that do the most good.

A true alternative to the claim would be:

New claim: if you want to contribute to the common good as much as possible, it's a mistake not to pursue the project of effective altruism.

But this claim is effectively a tautology, seeing as effective altruism is defined as searching for the actions that do the most good. (I suppose someone who thought how to do the most good was just totally obvious would see no reason to pursue the project of EA).

Maybe the claim of EA should emphasise the non-obviousness of what doing the most good involves. Something like:

If you want to have the biggest positive impact with your resources, it's a mistake to just trust your instincts (/common sense?) about what to do rather than engage in the project of effective altruism: thoroughly and carefully evaluating what does the most good.

This is an empirical claim, not a conceptual one, and its justification would seem to be the three main premises you give.

Comment by michaelplant on Factors other than ITN? · 2020-09-28T09:45:59.040Z · score: 7 (4 votes) · EA · GW

If I can be forgiven for tooting my own horn, I also wrote a forum post about the framework around the same time as John posted his. EAs have often talked about "cause prioritisation" as being distinct from "intervention evaluation": the former is done in terms of ITN, the latter in terms of cost-effectiveness. I agree with Ben Todd's suggestion that the best way to understand ITN is as three factors that combine into a calculation of cost-effectiveness (aka "good done per dollar"). One result of this is that I think it's confused to treat "cause prioritisation" and "intervention evaluation" as two different things. I discuss some implications of this in the post.

Comment by michaelplant on Life Satisfaction and its Discontents · 2020-09-28T09:34:09.388Z · score: 2 (1 votes) · EA · GW

Glad you raise this: I discuss the possibility of different species having different accounts of welfare in the paper, in section 5.2 on the "too few subjects" objection! The main weirdness of such a view is that it's vulnerable to spectrum arguments: it implies one of your ancestors had their well-being consist in (say) happiness and life satisfaction, while their slightly less cognitively developed parents had their well-being consist in happiness alone.

Comment by michaelplant on Life Satisfaction and its Discontents · 2020-09-28T09:28:32.683Z · score: 4 (2 votes) · EA · GW
Is automaximization not an objection to desire theories as well?

As I state above, the first point in the paper is that life satisfaction theories seem to be a particular kind of desire theory, the global desire theory, in disguise. Hence, the two objections I raise are objections to both life satisfaction and global desire theories (which I claim are really just the same view). The two objections won't apply to non-global desire theories; as I say in the paper, that might be a reason for people who like desire theories to adopt a non-global version instead.

Or should we accept that we don't get to decide all of our desires or how easy it is to satisfy them?

It's clear we don't get to decide on many of our desires! We simply have urges to do all sorts of things. See the distinction in the paper between local and global desires.

Comment by michaelplant on Life Satisfaction and its Discontents · 2020-09-28T09:20:32.369Z · score: 3 (2 votes) · EA · GW

Just to flag: I've nearly finished another paper where I explore whether measures of subjective states are cardinally comparable and conclude they probably are (at least, on average). Stay tuned.

There are many parts to this topic and I'm not sure whether you're denying (1) that subjective states are experienced in cardinal units or (2) that they are experienced in cardinal units but our measures of them are (for one reason or another) not cardinal. I think you mean the former. But we do think of affect as being experienced in cardinal units; otherwise we wouldn't say things like "this will hurt you as much as it hurts me". Asking people to state their preferences doesn't solve the problem: what we are inquiring about are the intensities of sensations, not what you would choose, so asking about the latter doesn't address the former.

Comment by michaelplant on Life Satisfaction and its Discontents · 2020-09-28T09:08:51.472Z · score: 3 (2 votes) · EA · GW

Some interesting points here, thanks!

[thinking that] well-being concerns on happiness or affect would lead us to conclude that happiness wireheading is a complete and final solution, and that's intuitively wrong for most people

Yes, I agree many people are against hedonism because of the (at least initially) counter-intuitive examples about wireheading and experience machines. As a purely sociological observation, I've been struck that social scientists I talk to are familiar with the objections to hedonism, but unfamiliar with those to desire theories and the objective list. Theorising doesn't penetrate too deeply into the social sciences. As you say:

Because psychologists are empiricists, they don't spend too much time worrying about whether affect, life satisfaction, or eudamonia are more important in a philosophical or ethical sense

I spend quite a lot of time talking to social scientists and it used to surprise me that they seem to think theorising is pointless ("you philosophers never agree on anything"). I now realise this is largely a selection effect: people who like empirical work more than theoretical work become social scientists instead of philosophers. That social scientists don't spend much time theorising is, I think, a bit of a problem. The impetus to write the paper came from the fact that social scientists have developed the notion that life satisfaction is what really matters, and have been running with it for some decades, without really stopping to think about what that view would imply.

Comment by michaelplant on Life Satisfaction and its Discontents · 2020-09-28T08:51:15.069Z · score: 3 (2 votes) · EA · GW
Does this have implications for preference utilitarianism?

Yes, very probably. There are many different types of preference/desire theories ('preference' and 'desire' are generally used interchangeably), depending on which sorts of desires count - I say a bit about this in the paper and provide some links to further reading. If, as I argue, life satisfaction theories of well-being are really a type of desire theory going by another name, then the concerns apply to those life satisfaction/desire theories. I note that my objections are to a particular class of desire theory, so someone attracted to desire theories in general might just switch to a different one (e.g. from a global to a summative desire theory).

Re Maslow and Rosenberg: whether well-being comes from those things depends on what you think well-being is, which is the substantive topic at hand. If the best theory of well-being is that it consists in life satisfaction, then whether hypothesised 'universal needs' are, in fact, determinants of well-being is a factual question - we need to go and ask people about their life satisfaction, collect some data, and crunch the results. Maybe, in fact, the proposed need for "identity" makes very little difference to life satisfaction. However, if one argues that well-being literally consists in the fulfilment of universal needs, e.g. that having your need for "identity" met is intrinsically good for you, then the claim that well-being "comes from" those things is true by definition.

This seems to me much better than a single hedonic scale or global desire rating,

It's not at all obvious to me that a pluralistic conception of well-being is theoretically preferable (that is, one where more than one thing is intrinsically good for us). As I mention right at the end of the paper, one awkward issue is how to combine different seemingly incommensurable goods - how does one trade off units of 'identity' vs 'affection' if one wants to have high well-being? Another challenge is providing a compelling story for why, whatever goods are chosen, it is those, and only those, that are good for us.

Comment by michaelplant on Comparing Utilities · 2020-09-15T18:52:42.674Z · score: 12 (4 votes) · EA · GW

Might be worth noting that utilities in this sense are preferences, which may or may not matter intrinsically. On preference/desire theories of well-being, your life goes better the more you get of what you want. But on, say, hedonist theories of well-being, your life goes better the more happiness you have (where happiness is often understood as a positive balance of pleasure over displeasure). Historically, 'utilities' in economics referred to happiness rather than preferences; this switched in the early 20th century with work by Pareto, Robbins, and others.

Comment by michaelplant on Hedging against deep and moral uncertainty · 2020-09-14T10:44:59.731Z · score: 4 (2 votes) · EA · GW

I thought these ideas were interesting, but it would be useful to have a less technical and/or more intuitive explanation.

Comment by michaelplant on Are there any other pro athlete aspiring EAs? · 2020-09-11T17:45:11.844Z · score: 33 (21 votes) · EA · GW

I think the issue with your comment was that someone said "I want to do some good, can anyone help me?" and your response reads as "oh, well, you and your type don't seem as smart or important as another group of people", which seemed needlessly rude to me. I say it was needless because, pace your follow-up comment, there was no strategic decision to make; it wasn't as if the decision was whether to fundraise from athletes or poker players, but just a request for assistance relevant to the former group.

Comment by michaelplant on The case of the missing cause prioritisation research · 2020-08-24T11:22:15.126Z · score: 13 (8 votes) · EA · GW

Thanks very much for writing this up, Sam. Two points from my perspective at the Happier Lives Institute, which you kindly mention and which is a new entrant to cause prioritisation work.

First, you say this on theories of change:

But for a new organisation to solely focus on doing the research that they believed would be most useful for improving the world it is unclear what the theory of change would be. Some options are:
Do research → build audience on quality of research → then influence audience
Do research + persuade other organisations to use your research → influence their audiences and money

I think this nails the difficulty for new cause prioritisation research (where 'new' means 'not being done by an existing EA organisation'). The existing organisations are the 'gatekeepers' for resources, but doing novel cause prioritisation work requires, of necessity, doing work those organisations themselves consider low-priority (otherwise they would do it themselves). This creates a tension: funders often want potential entrants to show they have 'buy-in' from existing orgs. But the more novel the project, the less 'buy-in' it will have, and so the less chance it has of getting off the ground. I confess I don't have a solution to this, other than that, if funders want to see new research, they need to be prepared to back it themselves.

Second, you say you'd like to see research on

unexplored areas that could be highly impactful such as access to painkillers or mental health

I'm pleased to say HLI is working on both those areas - see our April update.

Comment by michaelplant on Shifts in subjective well-being scales? · 2020-08-21T08:49:56.739Z · score: 4 (2 votes) · EA · GW

It's not clear to me what relationship one should expect between the cardinality (or not) of subjective scales and the relationship between ratings of overall SWB and ratings of sub-domains (and thus what one could infer about the one from results about the other).

As a separate point, I'm not sure how to make sense of the putative inconsistency Kaj notes. I haven't looked into the relationships between overall ratings and ratings of sub-domains; it's not something I've heard SWB researchers discuss much either. The most obvious explanations, in addition to those mentioned below, are to appeal to missing domains and/or different temporal foci (i.e. you think about sub-domains as they are now, but for your life as a whole you also think about the future).

Comment by michaelplant on Shifts in subjective well-being scales? · 2020-08-19T15:28:25.082Z · score: 28 (8 votes) · EA · GW

TL;DR Evidence suggests there aren't shifts in SWB scales over time. This topic isn't well understood. I've got a paper on this area in the works.

The question you're asking here - do individuals rescale, that is, alter what the end-points of their scales refer to? - is one component of a broader concern.

The broader question is whether subjective scales - those where individuals give numerical ratings of subjective phenomena - are cardinally comparable, that is, whether a one-point change on a given scale represents the same size of change in subjective experience for different people and at different times. For instance, if I say my happiness has gone from a 4 to a 5 out of 10, and you say your happiness has gone from a 3 to a 4, can we conclude we each had the same increase in happiness?

Given how fundamental the concern is - it applies to all subjective data, not just SWB data - I've been surprised to find the topic hasn't been looked into a great deal. Two leading SWB researchers, Stone and Krueger, said this in a 2018 review article:

one of the most important issues inadequately addressed by current [SWB] research is that of systematic differences in question interpretation and response styles between population groups. Is there conclusive evidence that this is a problem? And, if so, are there ways to adjust for it? Information is needed about which types of group comparisons are affected, about the magnitude of the problem, and about the psychological mechanisms underlying these systematic differences

I've been looking at the cardinality of subjective scales. I've got a working paper that I'm not quite ready to put online - this should only be another couple of months. The paper is an evolution of work in my DPhil thesis (p. 135), where I broke cardinal comparability into a number of components, reviewed the evidence for each, and concluded SWB data are probably best interpreted as cardinally comparable.

The topic is pretty complicated and addressing all of it would take too long here. I'll just provide a 'quick and dirty' answer to the specific concern you raise about rescaling (aka 'intertemporal cardinality'). Prati and Senik (2020) compare remembered SWB - how satisfied individuals recall being - with observed past SWB - how satisfied individuals said they were at the time. They use German panel data in which individuals were shown 9 different pictures of changes in life satisfaction over time (e.g. staying flat, going up, going up then going down, etc.) and asked to pick the one that best represented their own life.

There turns out to be an (I think) pretty amazing match between the patterns of observed past and remembered SWB. This is only possible if either (A) individuals use the same scale over time and have good memories, or (B) individuals change the scales they use and have bad memories. If individuals used the same scales but had bad memories, or used different scales but had good memories, there would be an inconsistency between the recalled and observed past patterns. Of the two options, (A) seems far more probable than (B). It's hard to believe individuals really can't remember how their lives have gone. Further, we might expect individuals to try not to rescale, precisely so that their answers are comparable over time.*

Hence, there doesn't seem to be rescaling at the population level. Further research into whether some individuals rescale, and what causes this to happen, would be good; I'm not aware of any.

*In fact, (B) requires quite specific and implausible patterns of memory failure. To illustrate, suppose your experienced satisfaction has been flat but, because your scale has been shrinking, your reported 0-10 level of satisfaction has been rising over time. To make your observed past satisfaction and your recalled satisfaction consistent, given this scale shrinkage, you would need to falsely recall that your satisfaction had increased. If you instead recalled accurately that it had been flat, or erroneously that it had decreased, there would be an inconsistency between observation and recall.

Comment by michaelplant on Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty · 2020-08-06T20:37:14.800Z · score: 3 (2 votes) · EA · GW

Glad you were impressed! Would welcome any suggestions on how to improve the analysis.

Thanks for clarifying. Yes, I understand that economists lean towards a desire satisfaction theory of well-being and development economists lean towards Sen-style objective list theories. We're in discussion with a development economist about whether and how to turn this into an article for a development economics journal, and there we expect to have to say a lot more to justify the approach. That didn't seem so necessary here: EAs tend to be quite sympathetic to hedonism and/or measuring well-being using SWB, and we've argued for that elsewhere, so we thought it more useful just to present the method.

Comment by michaelplant on Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty · 2020-08-06T16:53:50.663Z · score: 4 (3 votes) · EA · GW

Hello Jack, thanks for the comment. As you note, the document doesn't attempt to address the issues you raised. We're particularly keen for people to engage with the details of how we've done the analysis, although we recognise this will be far too 'in the weeds' for most (even members of this august forum).

I’d like to reply to your comment though, seeing as you've made it. There are quite a few separate points you could be making and I’m not sure which you mean to press.

You wonder about the suitability of SWB scores in low-income settings and raise Sen’s adaptive preferences point.

One way to understand the adaptive preferences point is as an argument against hedonism: poor people are happy, but their lives aren't going well, so happiness can't be what matters. From this it would follow that SWB scores might not be a good measure of well-being anywhere, not just in low-income contexts. Two replies. First, I'm pretty sympathetic to hedonism: if people are happy, then I think their lives are going well. Considering adaptive preferences doesn't pull me to revise that. Second, as an empirical aside, it's not at all obvious that people do adapt to poverty: the IDInsight survey found the Kenyan villagers had life satisfaction of around 2/10. That's much lower than average life satisfaction in Kenya of around 4.5. A quick gander at the worldwide distribution of life satisfaction scores (https://ourworldindata.org/happiness-and-life-satisfaction) tells you that poorer people are less satisfied than richer ones. The story might be interestingly different for measures of happiness (sometimes called 'affect balance').

Another way to understand the force of adaptive preferences is as being about what we owe one another. Here the idea is that we should help poor people even if doing so doesn't improve their well-being (whatever well-being is) - the further thought being that it won't improve their well-being because they've adapted. I don't find this plausible. If I can provide resources to A or B, but helping A will have no impact on their well-being, whereas B will have their well-being increased, I say we help B. (To pull out the intuition that adaptive preferences is really about normative commitments, note we might think it makes sense for people in unfavourable circumstances to change their views to increase their well-being, but that there's something odious about not helping people because they've managed to adapt; it's as if we're punishing them for their ingenuity.)

A different concern one might have is that those in low-income contexts use scales very differently from those elsewhere: someone who says they are 4/10 but lives in poverty actually has a very different set of psychological states from someone who says they are 4/10 in the UK. In that case, it would be a mistake to take these numbers at face value. The response to this problem is to have a theory of how and why people interpret subjective scales differently, so you can account for this and adjust the scores, i.e. determine what the true SWB values are on a common scale. This is one of the most important issues not adequately addressed by current research. I've got a (long) paper on this that I've nearly finished. The very short answer is that I think the answers are (cardinally) comparable, and this is because individuals try to answer subjective scales in the same way as everyone else in order to make themselves understood. On this basis, I think it's reasonable to interpret SWB scores at face value.

Comment by michaelplant on EA reading list: population ethics, infinite ethics, anthropic ethics · 2020-08-03T16:33:23.760Z · score: 8 (5 votes) · EA · GW

I think population ethics and infinite ethics should be separated. They are different topics, although relevant to each other.

Comment by michaelplant on Utility Cascades · 2020-07-29T14:24:36.524Z · score: 11 (6 votes) · EA · GW

I enjoyed reading the paper but was unconvinced any serious problem was being raised (rather than merely a perception of a problem resulting from a misunderstanding).

Put very simply, the structure of the original case is that a person chooses option B instead of option A because new information makes option B look better in expectation. It then turns out that option A, despite having lower expected value, produced the outcome with higher value. But there's nothing mysterious about this: it happens all the time and provides no challenge to expected value theory or act utilitarianism. The fact that I would have won if I'd put all my money on number 16 at the roulette table does not mean I was mistaken not to do so.

Comment by michaelplant on Do research organisations make theory of change diagrams? Should they? · 2020-07-29T13:44:14.848Z · score: 17 (5 votes) · EA · GW

At HLI, we've found creating a theory of change (ToC) very useful. It was (at least for me) quite a painful process of making explicit various assumptions and uncertainties and then talking through them. I think if we hadn't done it explicitly we would (a) have made a less thoughtful plan and (b) have different members of the team carrying around their own plans in their heads.

Going through a ToC process has also helped us to focus on meeting the needs of our target audiences. After developing our ToC, we sent out surveys to some of our key stakeholders to identify their concerns about subjective well-being measures and what new information would make them more likely to use them. Their responses provided the basis for our research agenda and the questions we have chosen to investigate this year.

We have a slightly more detailed version of our ToC diagram on our blog. Thanks for pointing out that it’s hard to find; we’ll think about putting it on a main page.

Comment by michaelplant on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-21T18:17:18.483Z · score: 3 (2 votes) · EA · GW

Hmm. Okay, that's fair; on re-reading I see the OP did discuss this at the start, but I'm still unconvinced. I think the context may make a difference. If you are speaking to a member of the public, my concern stands, because of how they will misinterpret the thoughtfulness of your prediction. If you are speaking to other predict-y types, the concern disappears, as they will interpret your statements the way you mean them. And if you're putting a set of predictions together into a calculation, not only is it useful to carry that precision through, but it's not as if your calculation will misinterpret you, so to speak.

Comment by michaelplant on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-20T09:51:55.755Z · score: 16 (9 votes) · EA · GW

I had a worry on similar lines that I was surprised not to see discussed.

I think the obvious objection to using additional precision is that it will falsely convey certainty and expertise to most folks (i.e. those outside the EA/rationalist bubble). If I say to a man in the pub either (A) "there's a 12.4% chance of famine in Sudan" or (B) "there's a 10% chance of famine in Sudan", I expect him to interpret me as an expert in case (A) - how else could I get so precise? - even if I know nothing about Sudan and all I've read about communicating probabilities is this forum post. I might expect him to take my estimate more seriously than that of someone who knows about Sudan but not about conveying uncertainty.

(In philosophy of language jargon, the use of a non-rounded percentage carries a conversational implicature that you have enough information, by the standards of ordinary discourse, to be that precise.)

Comment by michaelplant on High stakes instrumentalism and billionaire philanthropy · 2020-07-20T09:33:01.156Z · score: 8 (5 votes) · EA · GW

I agree with this comment - thanks! A follow-up: can you say why political theorists accept high-stakes instrumentalism (as opposed to merely stating that they do)? It sounds like this is effectively a re-run of familiar debates between consequentialists and non-consequentialists (e.g. "can you kill one to save five? what about killing one to save a million?"), just wrapped in different language, so I'm wondering if something else is going on. I suppose I'm a bit surprised the view has no detractors - I imagine there are some (Kant?) who would hold the seemingly equivalent view that you can never kill one to save any number of others.

Comment by michaelplant on Problem areas beyond 80,000 Hours' current priorities · 2020-06-22T20:14:06.308Z · score: 25 (14 votes) · EA · GW

Thanks for this write-up. The list is quite substantial, which makes me think: do you have a list of problems you've considered, concluded are probably quite unpromising, and would therefore dissuade people from undertaking? I could imagine someone reading this and thinking "X and Y are on the list, so Z, which wasn't mentioned explicitly [but 80k would advise against], is also likely a good area".

Comment by michaelplant on Towards Donor Coordination Via Mechanism Design · 2020-06-22T09:23:02.998Z · score: 3 (2 votes) · EA · GW

Just a quick note. It would be helpful if, at the start, you explained who you think this post is for and/or its practical upshot. I skimmed through the first 30% and wasn't sure if this was a purely academic discussion or you were suggesting a way for donors to coordinate.

Comment by michaelplant on HLI’s Mental Health Programme Evaluation Project - Update on the First Round of Evaluation · 2020-06-19T10:08:22.120Z · score: 3 (2 votes) · EA · GW

A couple of quick replies.

First, all your comments on the weirdness of Western mental healthcare are probably better described as 'the weirdness of the US healthcare system' rather than anything to do with mental health specifically. Note they are mostly to do with insurance issues.

Second, I think one can always raise the question of whether it's better to (A) improve the best version of a service/good X or (B) improve the distribution of existing versions of X. This also isn't specific to mental health: one might retort to donors to AMF that they should be funding improvements in (say) health treatment in general or malaria treatment in particular. There's a saying I like - "the future is here, it just isn't very evenly distributed" - think of SpaceX launching rockets that can land themselves while some people don't have clean drinking water. There seems to be very little we can say from the armchair about whether (A) or (B) is the more cost-effective option for a given X. I suspect that if there were a really strong 'pull' for goods/services to be provided, then we would already have 'solved' world poverty, which makes me think distribution is only weakly related to innovation.

Aside: I wonder if there is some concept of 'trickle-down' innovation at play, and whether it is relevantly analogous to 'trickle-down' economics.

Comment by michaelplant on HLI’s Mental Health Programme Evaluation Project - Update on the First Round of Evaluation · 2020-06-15T14:25:45.097Z · score: 2 (1 votes) · EA · GW

I'm not sure what you mean by going from 0 to 1 vs 1 to n. Can you elaborate? I take it you mean the challenge of going from no treatment to current best-practice treatment (in developing countries) vs improving best practice (in developed countries).

I don't have a cached answer on that question, but it's an interesting one. You'd need to make quite a few more assumptions to work through it, e.g. how much better MH treatment could be than the current best practice, how easy it would be to get it there, how fast this would spread, etc. If you'd thought through some of this, I'd be interested to hear it.

Comment by michaelplant on How to Measure Capacity for Welfare and Moral Status · 2020-06-15T09:24:55.751Z · score: 2 (1 votes) · EA · GW

Right. My thought is that we assume humans have the same capacity on average because, while there might be differences, we don't know which way they'll go, so they should 'wash out' as statistical noise. Pertinently, this same response doesn't work for animals, because we really don't know what their relative maximum capacities are.

FWIW, the analogue of my response here would be to say we can expect all chickens to have approximately the same capacity as each other, even if individual chickens differ. The claim isn't about humans per se, but about similarities borne out of genetics.

Comment by michaelplant on How to Measure Capacity for Welfare and Moral Status · 2020-06-09T11:10:29.701Z · score: 4 (2 votes) · EA · GW

Thanks for your response, but I don't think you're grasping the nettle of my objection. I agree with you that you and I both think we know something about the mental states of other adult humans and, further, human babies. I also think such assumptions are reasonable, if empirically unprovable. But that's not my point.

In short, my challenge is: articulate and defend the method you will use to determine how much more or less happy humans are than non-human animals in particular contexts - say, the average human vs the average factory-farmed chicken.

Here's what I think we can do with humans. We assume you and I have the same capacity for happiness. We assume we are able to learn about the experiences of others and communicate them via language, e.g. we've both stubbed our toes, but I haven't broken my leg, and when you say "breaking my leg is 10x worse" I can conclude that would be true for me too. Hence, when you say "I feel 2/10" or "I feel terrible" I might feel confident you mean the same things by those as I do.

What can we do with chickens? We really have no idea what chickens' capacities for happiness are - 1/10th of ours, 1/100th, etc.? It doesn't seem at all reasonable to assume they are roughly the same as ours. A chicken cannot tell us how happy it is relative to its maximum, or to our maximum, or, indeed, tell us anything at all. Of course, we may have intuitions - what we might pejoratively call "tummy feelings" - about these things. Fine. But what method do we use to assess whether those intuitions are correct? The application of further intuitive reflection? Surely not. I cannot think of a justifiable empirical method to inform our priors. If you can explain why this project is not doomed, I would love to know why! But I fear it is.

Comment by michaelplant on How to Measure Capacity for Welfare and Moral Status · 2020-06-05T09:46:27.995Z · score: 3 (2 votes) · EA · GW

Thanks for writing this up. It seems what you've done with the atomistic approach is state what, in principle, one would need to do, but not really wrestle with the difficulties and details of doing it. By analogy, it's a bit like saying "if we want to get to space, we need to build a spaceship" without saying how to build a spaceship ("well, it would need to get into space, and carry people, ...").

I think it would help to spell out a particular issue. Suppose we think happiness, the intrinsic pleasurableness/displeasurableness of experiences, is one of the things that constitutes welfare. Okay, what proxy do we use for that? Happiness is a subjective experience, so no objective measure is possible. Of course, we have intuitions about the relative magnitudes of happiness in different animals, but what makes us think we're right, even approximately?

(I note I raised effectively the same concern on your previous post and you haven't (yet) replied to my latest comment. You linked me to this paper, but it doesn't address my concern: the author surveys various "suffering calculators" but doesn't provide an account of how we would test whether some are more valid than others.)

Comment by michaelplant on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T21:18:13.641Z · score: 2 (1 votes) · EA · GW

Thanks for the thoughtful reply!

To fill out the details of what you're getting at, I think you're saying: "the welfare level of an animal is X% of its capacity C, and we're confident enough of both X and C in the given scenario for animal A that it's better to help animal A than animal B". That may be correct, but it assumes you can know the welfare levels because you know the percentage of the capacity. And then I can make the same challenge again: why should we be confident we've got the percentage of the capacity right?

I agree we should, in general, use inference to the best explanation. I'm not sure we know how to do that when we don't have access to the relevant evidence (the private, subjective states) from which to draw inferences. If it helps, try putting on the serious sceptic's hat and asking "okay, we might feel confident animal A is suffering more than animal B, and we do make these sorts of judgements all the time, but what justifies this confidence?". What I'd really like to understand (not necessarily from you - I've been thinking about this for a while!) is the chain of reasoning that would go into that justification.

Comment by michaelplant on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T15:00:13.445Z · score: 12 (7 votes) · EA · GW

Thanks for writing this up - I thought this was a very philosophically high-quality forum post, both in terms of its clarity and familiarity with the literature, and have given it a strong upvote!

With that said, I think you've been too quick in responding to the first objection. An essential part of the project is to establish the capacities for welfare across species, but that's neither necessary nor sufficient for making comparisons - for those, we need to know the actual levels of well-being of different entities (or, at least, the differences in their well-being). And knowing about the levels seems very hard.

Let me quickly illustrate with some details. Suppose chicken well-being has a range of +2 to -2, but for cows it's -5 to +5. Suppose further that the average actual well-being levels of chickens and cows in agriculture are -1 and -0.5, respectively. Should we prevent one time-period of cow-existence or of chicken-existence? The answer is chicken-existence, all else equal, even though cows have the greater capacity.
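To make the comparison explicit (a quick sketch using only the stipulated numbers, where preventing a period of existence removes that period's average well-being):

Preventing one period of chicken-existence: -(-1) = +1
Preventing one period of cow-existence: -(-0.5) = +0.5

So preventing the chicken-existence does twice as much good, despite chickens' narrower capacity range (+2 to -2 vs +5 to -5).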

Can you make decisions about what maximises well-being if you know the capacities but not the average levels? No. What you need to know are the levels. Okay, so can we determine what the levels, in fact, are? You say:

Of course, measuring the comparative suffering of different types of animals is not always easy. Nonetheless, it does appear that we can get at least a rough handle on which practices generally inflict the most pain, and several experts have produced explicit welfare ratings for various groups of farmed animals that seem to at least loosely converge

My worry is: what makes us think we can even "get at least a rough handle"? You appeal to experts, but why should we suppose the experts have any idea? They could all agree with each other and still be wrong. (Arguably) silly comparison: suppose I tell you a survey of theological experts reported that approximately 1 to 100 angels could dance on the head of a pin. What should you conclude about how many angels can dance on a pin? Maybe nothing. What you would want to know is what evidence those experts had for forming their opinions.

I'm sceptical we can have evidence-based inter-species comparisons of (hedonic) welfare-levels at all.

Suppose hedonism is right and well-being consists in happiness. Happiness is a subjective state. Subjective states are, of necessity, not measurable by objective means. I might measure what I suppose are the objective correlates of subjective states, e.g. certain kinds of brain functioning, but how do I know what the relationship is between the objective correlates and the subjective intensities? We might rely on self-reports to determine that relationship. That seems fine. However, how do we extend that relationship to beings that can't give us self-reports? I'm not sure. We can make assumptions (about the general relationship between objective brain states and subjective intensities) but we can't check whether we're right. Of course, we will still form opinions here, but it's unclear how one could acquire expertise at all. I hope I'm wrong about this, but I think this problem is pretty serious.

If well-being consists in objective goods, e.g. friendship or knowledge, it might be easier to measure those, although there will be much apparent arbitrariness involved in operationalising these concepts.

There will be issues with desire theories too, depending on whether one opts for a mental-state or non-mental-state version, but that's a further issue I don't want to get into here.

Comment by michaelplant on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T12:18:48.177Z · score: 13 (10 votes) · EA · GW

Ben, could you elaborate on how important you think representativeness is? I ask because the gist of what you're saying is that it was bad that the leaders' priorities were unrepresentative before, which is why it's good there is now more alignment. But this alignment has been achieved by the priorities of the community changing, rather than the other way around.

If one thought EA leaders should represent the current community's priorities, then the fact that the community's priorities have been changed - and changed, presumably, by the leaders - would seem to be a cause for remorse, not celebration.

As a further comment, if representativeness is a problem, the simple way to solve it would be to invite more people to the leaders' forum to make it more representative. This seems easier than supposing current leaders should change their priorities (or their views on what the community's priorities should be).

Comment by michaelplant on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T12:06:07.280Z · score: 43 (16 votes) · EA · GW

I share Denise's worry.

My basic concern is that Ben is taking the fact that there is high representativeness now to be a good thing while not seeming so worried about how this higher representativeness came about. It could (as Denise points out) well just be the result of people who aren't enthused by the current leaders' vision simply leaving. The alternative route, where the community changes its mind and follows the leaders, would be better.

Anecdotally, it seems like more of the first has happened (though I'd be happy to be proved wrong). Yet, if one thinks representativeness is good, achieving it by having the people who don't share your vision leave doesn't seem like a good result!

Comment by michaelplant on Reducing long-term risks from malevolent actors · 2020-05-07T10:03:11.938Z · score: 8 (4 votes) · EA · GW

Thanks for this write-up, I thought it was really interesting and not something I'd ever considered - kudos!

I'll now home in on the bit of this I think needs most attention. :)

It seems you think one of the essential things is developing and using manipulation-proof measures of malevolence. If you were very confident we couldn't do this, how much of an issue would that be? I raise this because it's not clear to me how such measures could be created or deployed. It seems you have (1) self-reports, (2) other-reports, and (3) objective metrics, e.g. brain scans. If I were really sneaky, I would just lie on a self-report or not take the test. Likewise, I might be able to con others, at least for a long time - perhaps until I was in power. Regarding objective measures, there will be 'Minority Report'-style objections to actually using them in advance, even if they have high predictive power (which might be tricky anyway, as it relies on collecting good data, which seems to require the consent of the malevolent).

The area where I see this sort of thing working best is in large organisations, such as civil services, where the organisation controls who gets promoted. I'm less optimistic it could work for the most important cases, political elections, where there is no system that can enforce the use of such measures. But it's not clear to me how much of an innovation malevolence tests would be over the normal feedback processes used in large organisations. Even if they could be introduced into politics somehow, it's unclear how much of an innovation that would be either: the public already try to assess politicians for these negative traits.

It might be worth adding that the reason Myers-Briggs-style personality tests are, so I hear, more popular in large organisations than the (more predictive) "Big 5" personality test is that Myers-Briggs has no ostensibly negative dimensions. If you pass round a Big 5 test, people might score high on neuroticism or low on openness and get annoyed. Given that, I find it hard to believe that e.g. Google would insist that staff take a test they know will assess them for malevolence!

As a test of the plausibility of introducing and using malevolence tests, notice that we could already test for psychopathy but we don't. That suggests there are strong barriers to overcome.

Comment by michaelplant on Update from the Happier Lives Institute · 2020-05-05T18:21:54.792Z · score: 2 (1 votes) · EA · GW

Thanks very much for your support Sam, we are grateful for it! As we've discussed with you, we are also keen to see how thinking in terms of SWB illuminates the cause prioritisation analysis.

It's easier to see how it could do this in some areas than in others. As we're relying on self-report data, it's not obvious how we could use that to compare humans to non-humans (although one project is to think through whether this really is impossible). And comparisons of near-term and long-term interventions are plausibly not sensitive to one's measure of welfare anyway: the usual longtermist line is that long-term concerns 'swamp' near-term ones whichever way you look at it.

Comment by michaelplant on Update from the Happier Lives Institute · 2020-05-05T18:19:55.252Z · score: 5 (3 votes) · EA · GW

Thanks for this! Our position hasn't changed much since the last post. We still plan to focus mostly on near-term (human) welfare maximisation, but we'd like to see if we can, in the next couple of years, do/say something useful about welfare maximisation in other areas (i.e. animals, the long term). We haven't thought much about what this would be yet: we want to develop expertise in the area that seems most useful (by our lights) before thinking about expanding our focus.

Speaking personally, I now take what is effectively a worldview diversification approach to moral uncertainty (this is a change), although my rationale for it is different (I plan to write this up at some point). This, combined with my person-affecting sympathies, means I want to put most, but not all, of my efforts into helping humans in the near term.

Comment by michaelplant on How can I apply person-affecting views to Effective Altruism? · 2020-04-30T17:34:20.111Z · score: 4 (2 votes) · EA · GW

Yes, I agree you could save existing animals. I'd actually forgotten until you jogged my memory, but I talk about that briefly in my thesis (chapter 3.3, p. 92) and suggest saving animals from shelters might be more cost-effective than saving humans (given a PAV combined with deprivationism about the badness of death).

Comment by michaelplant on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T09:59:19.933Z · score: 3 (2 votes) · EA · GW

I think you might not have clocked the OP's comment that the morally relevant beings are just those that exist whatever we do, which would presumably rule out concern for lives in the far future.*

*Pedantry: there could actually be future aliens who exist whatever we do now. Suppose some aliens will turn up on Earth in 1 million years and we've had no interaction with them. They will be 'necessary' from our perspective and thus the type of person-affecting view stated would conclude such people matter.**

**Further pedantry: if our actions changed which children they go on to have, as they presumably would, it would just be the first generation of extraterrestrial visitors who mattered morally on this view.

Comment by michaelplant on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T09:50:12.374Z · score: 19 (12 votes) · EA · GW

I'm struggling to think of much written on this topic - I'm a philosopher and reasonably sympathetic to person-affecting views (although I don't assign them my full credence), so I've been paying attention to this space. One non-obvious consideration is whether to take an asymmetric person-affecting view (extra happy lives have no value, extra unhappy lives have negative value) or a symmetric person-affecting view (extra lives have no value either way).

If the former, one is pushed towards some concern for the long term anyway, as Halstead argues here, because there will be lots of unhappy lives in the future that it would be good to prevent from existing.

If the latter - which I think, after long reflection, is the more plausible version, even though it is prima facie more unintuitive - then that is practically sufficient, but not necessary, for concentrating on the near term, i.e. this generation of humans; animals won't, for the most part, exist whatever we choose to do. I say not necessary because one could, in principle, think all possible lives matter and still focus on near-term humans due to practical considerations.

But 'prioritise current humans' still leaves it wide open what you should do. The 'canonical' EA answer for how to help current humans is to work on global (physical) health and development. It's not clear to me that this is the right answer. If I can be forgiven for tooting my own horn, I've written a bit about this in this (now somewhat dated) post on mental health; the relevant section is "why might you - and why might you not - prioritise this area [i.e. mental health]".

Comment by michaelplant on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T09:29:27.896Z · score: 6 (3 votes) · EA · GW

Plausibly, foetuses will not be morally relevant on such a view, as they won't exist whatever we choose to do.

Comment by michaelplant on Coronavirus: how much is a life worth? · 2020-03-24T15:44:19.090Z · score: 2 (1 votes) · EA · GW

Yes, good point. I'm now inclined to think your and Paul F's analyses need to be combined in some way; it's not immediately clear to me how.

He is indeed converting money into quality of health, not just quantity - my mistake.