Just picking up on the importance, neglectedness, tractability table, Hauke, can I ask you to explain what you meant by those three terms (or, at least, the last two) and how you see them as fitting together to give you an estimate of cost-effectiveness? I notice you did a Fermi estimate too, so can you say what the relationship is between I, N, T and the Fermi estimate? This isn't a critical question - I've been thinking about cause prioritisation a lot and I've realised it's not clear to me how people use these concepts in their decision-making. Hence, if you could say a bit more, that would be helpful.
Is tractability: cost-effectiveness, the resources required to solve the problem, subjectively-perceived easiness, or something else?
Is neglectedness: resources going towards the problem? If so, how directly targeted at the problem did you have in mind? Is it about counterfactual replaceability? Something else?
Is the idea that I, N and T somehow give you an intuitive cost-effectiveness estimate and then you build the Fermi estimate as an explicit follow-up?
Sorry if this seems pedantic and I'm not engaging with the spirit of your post. The research looks very thorough and I'm glad you did it. As a non-expert on the subject matter, I probably don't have much of substance to add to that.
I think that GWWC & GiveWell's earlier use of QALYs created a lot of path dependence, such that current EA prioritization remains influenced by the QALY framework even though no organization explicitly uses it at present.
I find this to be the most plausible explanation of what has happened. Your counterfactual story is rather helpful!
Peter Singer is not competitive with Usain Bolt when it comes to running.
He's faster than he looks...
But more seriously, now I understand your point, I think it's plausible psychedelics could beat AMF (assuming we count the value of AMF the standard way, looking just at the self-regarding effects of saving lives, i.e. the value to the saved person) and more research would be useful to think through this. I had a go at comparing AMF to drug policy reform for psychedelics nearly 2 years ago. I think my model is now out of date, but it's at least indicative. The main problem isn't the potential of psychedelics to be impactful, as that's clear - the idea is psychedelics could be much better treatments for mental health, which is huge in scale, and changing the law would improve treatments for huge numbers of people - but what the most counterfactual things (for EAs) to do are. It's not obvious what the best leverage points for money/time are and I haven't been able to justify the time to look (I'm trying to finish a PhD in philosophy and this is not a philosophy topic).
I'm unsure what you mean by 'not competitive with'. Aren't all causes competitive with each other in the sense that one unit of resources (i.e. money or time) you spend on one isn't one unit you can spend on another cause?
To address your point, I think the reason more EAs don't pay attention to psychedelics is a combination of EAs not thinking mental health is an important problem (something I've also written about) and because psychedelics are weird and unfamiliar. Regarding mental health's importance, I think EAs are increasingly interested in the longterm (this would also explain a relative lack of interest in poverty and animal welfare) or they are focused on poverty but don't believe mental health treatments are comparably cost-effective with anti-poverty ones. I think mental health treatments are comparably cost-effective - at least in the same ballpark although it's unclear which is better on current evidence - when we use self-reported happiness scores to judge effectiveness. You might then doubt we can sensibly measure happiness, which I argue we can in this forum post.
I agree this is plausible, but I think you would accept that this is conjecture and still quite a long way from what we want, which I assume is some sort of quantified, evidence-based, comparative analysis.
I think your short argument misses the point. The obstacle isn't the lack of such infrastructure - I imagine academics could use the existing tools if they asked politely or created their own - but the lack of demand for such infrastructure.
Thanks for writing this up. I agree that ESM is the theoretically ideal measure of happiness. I made a few comments on a recent post about QALYs vs ESM which I thought I should link to here.
A couple of other comments. First, I'd be happy to chat to you about this. Do get in contact.
Second, SWB measures are increasingly being taken seriously. See the global happiness policy report, the fact that 170,000 articles and books have been published on SWB in the last 15 years, and the graph below for a change over time. However, the focus is mainly on life satisfaction, rather than on ESM measures, and looks set to stay that way. The reason for this is a combination of (a) some SWB researchers, e.g. Helliwell, think life satisfaction, not happiness, is what matters, (b) it's easier and cheaper to collect data on life satisfaction, (c) as a result of (b), much more work has been done to establish what will increase life satisfaction, which is what is needed to guide policy and do cost-effectiveness analysis - see my happiness manifesto post and Origins of Happiness for more, and (d) as a result of the fact more work has been done with life satisfaction, there is now path dependence where it's easier to use life satisfaction because other researchers are using it or have done.
Third, have you had any take-up from researchers on this? If they said they aren't interested, did they give reasons?
Three thoughts. First, it's not really the case that EAs use QALYs/DALYs. GWWC and GiveWell used to use them, but GWWC no longer exists as an independent entity and GiveWell now use their own metric. 80k mostly focus on the far future and so QALYs/DALYs aren't of primary interest. Have I missed someone? I think Founders Pledge do use them. Not sure what goes on 'under the hood' for The Life You Can Save's recommendations.
Second, even if you wanted to use the experience sampling method (ESM) as your measure of wellbeing, you couldn't, because there isn't enough data on it. There are only two academic projects which have tried to collect data en masse - trackyourhappiness and mappiness. The former is now defunct (Killingsworth works for Microsoft now, I believe) and the latter isn't actively being used (I spoke to the creator, George MacKerron, a couple of months ago). I discuss this in a previous forum post. The best I think we can do, if we want to use subjective wellbeing (SWB) measures, is life satisfaction.
Third, I think ESM is the theoretically ideal measure of happiness and thus EA - indeed, everyone - should use it as the outcome measure of impact (I assume wellbeing consists in happiness). What follows is that ESM is superior to all other measures of wellbeing, including QALYs/DALYs, wealth, etc. I'm hoping to do some research using ESM at some point in the future if I can.
Though I am saying that 80,000 Hours' research can't offer a single, definite ranking of what is best for everyone to do, that doesn't mean that their research isn't very useful for people figuring out what it is best for them to do
Well, they do offer A list of the most urgent global problems. I'll grant this isn't a list of what it is best for everyone to do, but it is (plausibly, from their perspective) a list of what it is best for most people to do (or 'most EAs' or some nearby specification). Indeed, given 80k has a concept of 'personal fit', which is distinct from their rating of the problems, the natural reading of the list is that it provides a general, impersonal ranking of where (average?) individuals can do the most good.
I'm concerned you're defending a straw man - did anyone ever claim 80k's list was true for every single possible person? I don't think so and such a claim would be implausible.
This puzzled me slightly. One reason is that longtermism and person-affecting views are different categories; the former is a view about where, in practice, value lies and the latter is a view about where, in theory, value lies. You could be a totalist (all possible people matter), which is not a person-affecting view, but be a near-termist. I think a better set-up would have been: 'psychedelics look good whether you just value the near term or the long term'. I suppose that leaves out the 'medium-termists', but I don't know how many people there are who hold this view, whatever it is, inside or outside EA.
Also robust: interventions that increase the set of well-intentioned + capable people
The psychedelic experience also seems like a plausible lever on increasing capability (via reducing negative self-talk & other mental blocks) and improving intentions (via ego dissolution changing one's metaphysical assumptions)
I would like you to say more about this. It seems plausible to me that training rationality is orders of magnitude more impactful for the long run, so this is an objection to counter.
under a longtermist view, psychedelic interventions are plausibly in the same ballpark of effectiveness as x-risk interventions
I don't think you've shown this. It's more plausible to me that X-risk is a top-tier intervention and rationality and the 'mindset-changingness' of psychedelics are in the lower tiers. This would still make them potentially very interesting from a long-termist perspective - in the bucket of 'things to take seriously and possibly fund if X-risk has absorbed as many resources as it can'.
Just FYI, I wrote a mammoth series of articles on drug policy reform 18 months or so ago where I argued that psychedelics for mental health look very promising from the near-term perspective. In other words, I explicitly claim what you're claiming! I haven't had a chance to do more work on it since and I add the usual caveats about not necessarily agreeing with everything past-Michael wrote.
Also, just because psychedelics are promising as a category of intervention, it doesn't follow that setting up a retreat of this kind is the best way to go within that (sub)cause area. You'd need to argue for that too.
This post did not convince me that the business was created 'for EA reasons.'
I think this is uncharitable and I gave a small downvote as a result. Given those involved in this business are involved in the EA community and there is at least a plausible story to tell about why this is impactful, you're essentially accusing the OP of acting in bad faith when there isn't compelling reason to do so.
And contrary to Forum standards, it was written to persuade, not to inform
I reread this and didn't notice that it was written to persuade vs inform.
otherwise why would there be no studies listed that found no effect or a negative effect?
But I don't know any practising doctors in the EA community, so this is definitely the wrong place to advertise
Again, I think it's bad faith to assume the purpose is simply trying to make money from the participants of this forum. I think it's fine, good even, for people in the community to tell others what they are doing. Where else is one supposed to make these announcements?
So it doesn't seem to be that there's any insoluble tension between taking account of individual difference and communicating the same message to a broad audience
I don't think the tension is between those things. The tension is between saying 'our research is useful: it tells (group X of) people what it is best for them to do' and 'our research does not offer a definite ranking of what it is best for people to do (whether people in group X or otherwise)'. I don't think you can have it both ways.
While this isn't entirely personalized (it's based only on certain attributes that 80,000 Hours highlights), it's also far from a single, definitive list
Then it seems reasonable to interpret it as (an attempt at) a definitive list if you have those attributes.
I understand why the author is arguing that 80k doesn't offer a big list, but I think that argument undermines the claim that 80k is useful ("Hey, we're not telling anyone what to do!" "Really? I thought that was the point").
80,000 Hours’ research does not and cannot yield a “big list” of the best career paths, because no such thing exists. Instead, we should use 80,000 Hours content to map out our own personal lists and figure out how to do the top things on them.
These two sentences seem to be in a lot of tension. If giving advice about which careers do the most good were entirely personal, then it necessarily follows that you could make no general recommendations at all about which careers are better in terms of impact, and therefore 80k should stop what they are doing. However, if you can make general recommendations and thus say which careers have more impact than others, then there is a 'big list' after all.
We might disagree about who this is a 'big list' for - the average person, an omni-skilled graduate of a top university, the average reader of 80k's content - but however we fill that out, it's still possible to see it as a 'big list'.
I'm entirely with you that it doesn't make sense to feel bad if someone else can do more good than you. The aim is to do the most good you can do, not the most good someone else who isn't you can do. Despite recognising this on a conceptual level, I still find it hard to believe and often feel guilty (or shame or sadness) when I think of people whose 'altruistic successfulness' surpasses mine.
Hello Kris. Can you say what type of people you think should be spending their time doing this? I like the idea, but it seems like a lot of effort for someone who isn't already plugged into these networks and doesn't have a professional interest in the area.
I also think having David Clark speak at events is a scalable solution!
Thanks for writing this up. Great to see people testing things and then adjusting their plans in light of the results.
This is probably a relatively minor question, but it wasn't something you mentioned so I thought I'd ask: was transportation a problem in people getting to the advanced workshops? I can imagine that, if a student needed to be driven to the workshop, that would make it much harder to attend.
On the second, the obvious counterargument is that it applies just as well to e.g. murder; in the case where the person is killed, "there is no sensible comparison to be made" between their status and that in the case where they are alive
Person-affecting views are those which hold that not all possible people matter. Once you've decided who matters (the present, necessary or actual people), it's then a different question how you think about the badness of death for those that matter. You can say creating people isn't good/bad, but it's still bad if already existing people die early. FWIW, I also find Epicureanism about the badness of death rather plausible, i.e. I don't think we can compare the value for someone of living longer against that of dying. I recognise this makes me something of a 'moral hipster' but I think the arguments for it are pretty good, although I won't get into that here. As such, I think death, whether by murder or other means, isn't bad for someone. I think we tend to have the intuition that murder is wrong over and above what it deprives the deceased of, which is why we think it's just as wrong to murder someone with 1 month vs 10 years left to live. Hence I think you're getting at a deontological intuition, not one about value.
I find the stuff about posthumous harms and benefits very implausible. If Socrates wants us to say 'Socrates' and we do, does it really make his life go better?
I don't think my argument here is analogous to trying to beat the market. (i.e. I'm not arguing that AI research companies are currently undervalued.)
I have to disagree. I think your argument is exactly that AI companies are undervalued: investors haven't considered some factor - the growth potential of AI companies - and that's why they are such a good purchase relative to other stocks and shares.
Another thing I'd be interested in seeing would be the percentage changes in support for causes year-on-year as that would indicate what the internal dynamics of the movement are. I'm (at least) partly motivated to see this because mental health, which I've written quite a lot on, may be the smallest top priority cause, but this is also the first time it's snuck into the list.
Thanks for this. Were there any causes you considered adding beyond those stated? Those seem like the main causes EAs support, but it would be nice to include 'minor' ones to see what the community feeling is about those, e.g. wild animal suffering, education, social justice, immigration reform, etc.
Yes, if the chance of death each year is constant it turns out that remaining life expectancy is around 1/chance of death
Can you explain why this is the case? Sorry if this is obvious, but I'm not getting it and can't think offhand how to do the maths.
On population ethics, for totalists it then seems the dominating concern will be how valuable it is to have a population with longer lives, which puts the emphasis in a different place from the value of keeping particular individuals alive longer.
Can you explain in a bit more detail, and without complicated formalisation, why life expectancy after LEV is 1000? I note life expectancy is 1000 and the chance of death in 1 year is 1/1000. Is that a coincidence, or is life expectancy post-LEV just 1/annual chance of death?
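Trying to answer my own question: if the annual chance of death p is constant, lifetimes follow a geometric distribution, whose mean is exactly 1/p - so it doesn't look like a coincidence. A quick numerical check (my own sketch, with an illustrative p, not anything from the post):

```python
# If the chance of death each year is a constant p, the number of years lived
# follows a geometric distribution: P(die in year t) = p * (1 - p)**(t - 1).
# Its mean is 1/p, which is why life expectancy ~= 1/(annual chance of death).

def expected_lifetime(p, horizon=100_000):
    """Truncated sum of t * P(die in year t); converges to 1/p."""
    return sum(t * p * (1 - p) ** (t - 1) for t in range(1, horizon + 1))

p = 1 / 1000                        # illustrative: 1-in-1000 chance of dying each year
print(round(expected_lifetime(p)))  # 1000, i.e. 1/p
```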
I know you've said you're going to cover this later, but I want to flag how sensitive this is to population ethics. On totalism (the value of the outcome is the sum total of well-being of everyone who will ever live), it's good to create lives, so it's not necessarily a problem that there's a higher 'turnover' of lives, i.e. people die and other people replace them. Totalists will want to know how longevity affects the long run for everyone, not just those who get to live longer. By contrast, if you're a person-affecting deprivationist (there is no value in creating new lives, but for those lives that count, the badness of death is the amount of well-being they would have had had they lived), life extension looks super important!
Relevant to this, in the following article MacAskill provides this account of what EA is:
What Is Effective Altruism?
As defined by the leaders of the movement, effective altruism is the use of evidence and reason to work out how to benefit others as much as possible and the taking of action on that basis. So defined, effective altruism is a project rather than a set of normative commitments. It is both a research project - to figure out how to do the most good - and a practical project, of implementing the best guesses we have about how to do the most good. There are some defining characteristics of the effective altruist research project. The project is:
Maximizing. The point of the project is to try to do as much good as possible.
Science-aligned. The best means to figuring out how to do the most good is the scientific method, broadly construed to include reliance on both empirical observation and careful rigorous argument or theoretical models.
Tentatively welfarist. As a tentative hypothesis or a first approximation, goodness is about improving the welfare of individuals.
Impartial. Everyone’s welfare is to count equally.
Also, you've accidentally posted the same thing three times, if you hadn't noticed already.
Hello Matthew and thanks for your points. I don't think it counts as bias in favour of X if you chose to do X because you thought X was best!
On the first, I haven't looked, but I wouldn't consider that to be the right evidence. It seems pretty plausible people could be below hedonic/satisfaction neutrality and not want to kill themselves; I'd expect our evolutionary instinct is to keep living even in such circumstances - those who committed suicide easily would have had their genes removed from the pool.
On the second, I haven't, but I'd welcome someone doing that research.
On the third, I am familiar with that stuff and am in regular communication with the economists who write the big reports, e.g. the World Happiness Report. However, I tend to think that, given there are people working on the policy problem (where I don't have much to add), but there isn't really anyone thinking about the EA-type questions of what the best things are for individuals to do with their time and money, I do more by contributing to this latter issue.
On the 80k framework, if you have info on scale, tractability and neglectedness, there is no point calculating neglectedness
Are you using the two 'neglectedness' words differently? Why would you calculate X if you already knew X in general?
This being said, when we don't know much about cost-effectiveness, I still think neglectedness is a useful heuristic for cost-effectiveness. The fact that AI is 1000 times more neglected than climate change does seem like a very good reason that AI is a more promising cause to work on
I think that's right. One method is to use scale and/or neglectedness as (weak), independent heuristics for cost-effectiveness if you haven't or can't calculate cost-effectiveness. It's unclear how to use tractability as a heuristic without implicitly factoring in information about neglectedness or scale. Another (the other?) method, then, is to directly assess cost-effectiveness. Once you've done that, you've incorporated the ITN stuff and it would be double-counting to appeal to them again ("I know X is more cost-effective than Y, but Y is more neglected", etc.).
Thanks for all these great points (Derek sent these to me privately and I suggested it would be valuable for him to share them here for other interested parties). My brief replies, in order, to those comments that weren't just informative:
1. fair cop. I think I was lazily using those as I first compiled these numbers back in 2015 (at the start of my PhD).
2. agree it's unclear what these breakthrough drugs imply for EA
5. it makes sense to compare to GW because that's who our audience is. People who already think GW is irrelevant and focus on e.g. far future are unlikely to be interested in the analysis here.
6. yes, there are probably flaws in the SM analysis. I look forward to mine being made obsolete in due course. I note that my points on negative spillovers should cause us to downgrade the effectiveness of anti-poverty charities.
8. agree, but this applies to mental health interventions too: their effects could also be larger if we take spillovers into account, e.g. reduced strain on family members who care for them.
9. As I'm sympathetic to person-affecting views, I'm not too concerned about the long term anyway. Even if I were a long-termist, the problem with including indirect effects is that it tends to make the analysis incredibly 'hand-wavey' ("ah, saving lives speeds up growth, which is bad for climate change", etc.). I think it makes sense to calculate what can easily be calculated first. If you can't look anywhere else, at least look under the lamppost.
10. Probably correct. A better analysis would factor in how the LS of AMF recipients would change over their lives (presumably upwards, as societal conditions improve).
11. I agree LS is not the ideal thing. If we had affect scores, I would say we use those, but we don't! ("slaves to the data" etc)
12. I also agree moving to affect would make mental health score better than poverty. I left that out because I thought the analysis was complicated enough already.
Hello Sanjay. I didn't do this because I think the idea of comparing causes by assigning numerical scores to I, N and T is of illusory helpfulness and I wish we would all stop doing it(!). What we care about is knowing the expected value of the dollar you would donate (or, more complicatedly, the hour you would spend). I've produced some numbers by doing cost-effectiveness estimates of a charity you could donate to. Given that's what we ultimately want, it's unclear what the positive value is of representing things via the INT approach. I have a thesis chapter/EA forum post forthcoming on this topic, but I'll make a couple of points here.
First, note that on the 80k framework the INT literally is a cost-effectiveness calculation and not, as in Will's Doing Good Better, 3 independent heuristics which somehow combine to give a rough idea of cost-effectiveness. Indeed, it's more confusing to do expected value the way 80k suggests than how I did it, as their method requires redundant and arbitrary steps. 80k specify neglectedness as "% increase in resources/extra person or dollar". It is later defined as "How many people, or dollars, are currently being dedicated to solving the problem?" But deciding what counts as dollars being dedicated to "solving the problem" is arbitrary, hence there cannot be a precise answer to this question.
Further, if I wanted to put mental health in 80k's framework, note that in addition to establishing an arbitrary neglectedness score, I'd have to ascertain solvability - found by asking "If we doubled direct effort on this problem, by what fraction of the remaining problem would we expect to solve?" How would I do that? I'd have to work out the total size of the problem, then assess how much of it would be solved by some given intervention. To do that, I'd need to work out the cost-effectiveness of a mental health intervention. But I've already done that, so I can only calculate the tractability/solvability number once I already have the information that is ultimately of interest to me.
I don't see how it's an improvement over the formula cost-effectiveness = effect/cost to say cost-effectiveness = (effect / % of problem solved) × (% of problem solved / % increase in resources) × (% increase in resources / cost). As demonstrated, it's (at least sometimes) harder to calculate cost-effectiveness this latter way. If we really think scale is important to keep in mind, we could have a two-factor model: scale (value of solving the whole problem) and solvability* (% of problem solved/cost).
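To make the redundancy concrete: written as a product, the three 80k factors telescope back to plain effect/cost. A sketch with made-up numbers (none of these figures come from the post or from 80k):

```python
# Illustrative (made-up) numbers for one problem.
total_value  = 1_000_000   # value of solving the whole problem
pct_solved   = 2.0         # % of the problem the intervention would solve
pct_increase = 5.0         # % increase in total resources it represents
cost         = 50_000      # its cost in dollars

# Direct route: effect / cost.
effect = total_value * pct_solved / 100
direct = effect / cost

# 80k route: scale x solvability x neglectedness - the intermediate units cancel.
scale         = effect / pct_solved        # value per % of problem solved
solvability   = pct_solved / pct_increase  # % solved per % increase in resources
neglectedness = pct_increase / cost        # % increase in resources per dollar
via_int = scale * solvability * neglectedness

print(direct, round(via_int, 12))  # the same number either way: 0.4 0.4
```

Whatever intermediate quantities you pick, the "% of problem solved" and "% increase in resources" terms cancel, which is why the decomposition adds work without adding information.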
Second, I don't see what the point is of taking one ranking of scale/neglectedness/tractability for each of two causes and comparing those. What does it tell us that X is more neglected/tractable/larger than Y, if that is all we know about X and Y? By itself, it literally tells us nothing about the expected value of marginal resources to X vs Y. We only understand that once we've thought about how scale, neglectedness and tractability combine to give us cost-effectiveness. To bring this out, imagine you and I are having a conversation.
Sanjay: "mental health is more neglected than poverty".
Michael: "and? That doesn't tell me which one has higher expected value".
S: "hmm. Poverty is bigger".
M: "again? So what? That doesn't tell me which one has higher expected value either".
S: "Okay, well, poverty is more tractable than mental health".
M: "and? So what? In fact, what do you mean by 'tractable'? If you mean 'has higher expected value', then you're just saying poverty is better than mental health and I don't know how you factored in neglectedness and size when assessing tractability. If by tractability you mean 'if we doubled direct effort on this problem, by what fraction of the remaining problem would we expect to solve?', then I only know which cause you think has higher expected value when you give me precise scores for scale, neglectedness and tractability and tell me how you're combining those scores to give expected value."
S: Michael, why are you always so difficult? [curtain falls]
By analogy, if we want to know the speed of some object (speed = distance/time), knowing just the distance it has travelled, or just the time it took, gives us absolutely no insight into its speed. Do objects which travel further tend to travel faster? Always travel faster?
Third, I don't think it even makes sense to talk about comparing causes as opposed to comparing interventions. What we're really doing when we do cause prioritisation is saying "there are problems of types A, B and C. I'm going to find the best intervention I can that tackles each of A, B and C. Then I'm going to compare the best item I've found in each 'bucket'." Given we can't give money to poverty (the abstract noun), but we can give to interventions that reduce poverty, we should just think in terms of interventions instead of causes.
I may have misunderstood your first comment, but if I had estimated the effects for GiveDirectly, it would have been (on my best guess) less effective than the study showed. From the 2016 paper I inferred GD increased life satisfaction (LS) by 0.3/10 per person. In the Origins of Happiness, Clark et al. find a doubling of income increases LS by 0.12/10. IIRC (and I may not), the $750 transfer from GD is less than a doubling of household income. So the estimated effects would have been approx. 3 times smaller for GD.
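To show the back-of-the-envelope arithmetic: the baseline household income below is purely my own assumption for illustration (not a figure from the paper); only the 0.3 and 0.12 numbers are the ones discussed above.

```python
import math

# Back-of-the-envelope comparison of the two estimates.
ls_per_doubling = 0.12   # Clark et al.: LS gain (on 0-10) per doubling of income
gd_estimate     = 0.3    # effect inferred from the 2016 GiveDirectly paper
transfer        = 750    # GD transfer, in dollars
baseline_income = 900    # ASSUMED annual household income, for illustration only

# Fraction of a doubling the transfer represents, on a log scale.
doublings = math.log2((baseline_income + transfer) / baseline_income)
clark_estimate = ls_per_doubling * doublings  # ~0.10 LS points
ratio = gd_estimate / clark_estimate          # roughly 3x smaller via Clark et al.
print(round(ratio, 1))
```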
Regarding StrongMinds' treatment, Reay et al. (2012) have a 2-year study of how much of the benefits are retained for interpersonal group therapy (which is what StrongMinds delivers). I agree it is more appropriate to use this than the Wiles et al. (2016) model - which I interpret as a constant effect for 4 years and then nothing thereafter - as Wiles et al. is based on UK CBT, I think delivered individually. To account for this, in my spreadsheet, I do two estimates: one where I assume the treatment effect is constant and lasts only 4 years, another where 75% of the benefits are retained annually. This latter estimation method is taken from Halstead and Snowden's Founders Pledge report on mental health, where they also assess StrongMinds. It turns out the estimates give practically identical results so, in this case, the cost-effectiveness is not sensitive to how the duration of effect is modelled.
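The near-identical results aren't a coincidence: with 75% annual retention, the geometric series sums to 4× the initial effect, which is exactly the total of a constant effect lasting 4 years. A quick check with an illustrative initial effect:

```python
# Total benefit under the two duration models, for an illustrative effect E = 1.
E = 1.0

# Model 1: constant effect for 4 years, nothing thereafter.
constant_4yr = 4 * E

# Model 2: 75% of the benefit retained each year, indefinitely.
# Geometric series: E * (1 + 0.75 + 0.75**2 + ...) = E / (1 - 0.75) = 4 * E.
decay_75 = sum(E * 0.75 ** t for t in range(1000))

print(constant_4yr, round(decay_75, 9))  # both 4.0
```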
I agree with you that the best current mental health charity is probably far less cost-effective, relative to whatever the best possible intervention is, than the best current development or physical health charities, on the grounds that more effort has been put into the latter. (As you and I have discussed) I am optimistic about finding/developing even better ways to provide mental health treatments. I didn't stress this point on the grounds that the reader was probably more interested in current interventions than hypothetical interventions, but that could have been an error on my part.
First, it's unclear how many EAs are totalists or long-termists. I suppose this post is addressed at those who support global poverty and development, which is (from surveys) the majority of EAs. To support global poverty and development you could - this is not an exhaustive list - (a) be a person-affector or (b) be a totalist who is sceptical about the effectiveness of far-future stuff or (c) be a long-termist who thinks near-term interventions have strong long-term impacts, such that they are cost-competitive with X-risk.
Second, on why I'm sympathetic to person-affecting view, the short answer is because I find the following two concepts highly plausible.
First, the person-affecting restriction: an outcome can only be better or worse if it is better or worse for someone. (Parfit, Reasons and Persons attributes such a view to Narveson, explaining "On [Narveson's] view, it is not good that people exist because their lives contain happiness. Rather, happiness is good because it is good for people”)
Second, non-comparativism about existence: non-existence is neither better than, worse than, nor equally good as existence for someone. Why believe this? For the personal betterness relation to hold (i.e. for an outcome to be better for someone), the person needs to exist in both of those outcomes. If the person only exists in one outcome, there is no comparison to be made. By analogy, to say "X is taller than Y", X and Y need to have a height. If X or Y lacks the property of height, they cannot stand in the relationship of "being taller than". It's confused to say "the Eiffel Tower is taller than nothing". "Nothing" lacks a height (rather than has a height of zero), thus the Eiffel Tower's height is incomparable to the height of "nothing". If we're concerned with the personal betterness relationship, we are comparing two states of the person (i.e. the person needs to exist and have some good-, bad-, or neutral-making properties). A non-existent entity cannot stand in the personal betterness relationship with an existing person. There is no sensible comparison to be made; one cannot compare something with nothing.
Taken together, these two statements entail that creating new lives is incomparable in value to not creating them.
Yes, I had a few paragraphs on the potential indirect effects of treating mental health but decided to cut them at the last moment because (a) I wasn't sure how many people would be interested in them and (b) the whole analysis is just extremely handwavey.
It's possible that someone could think focusing on mental health/happiness now would have very long-run effects and be justified primarily by its impact on future people. But this also applies to bednets, economic development, etc., and it seems very hard to sensibly compare these things. My hunch is that someone taking this angle would do more good by trying to get governments to evaluate policies by their SWB impact, rather than by treating more people for depression through developing-world micro-interventions.
I want to note a tension in this article. It was about being welcoming by, roughly, not assuming everyone you speak to belongs to a certain group. However, while 'conservative' is a general term, the conservatives under discussion were clearly conservatives in the USA; in the UK, from where I write, there aren't many creationists, pro-lifers, or Trump supporters. As such, I would suggest that one way effective altruists can be welcoming is by not presuming everyone interested in effective altruism is a US citizen.
Found this post again after many months. Don't those who endorse the asymmetry tend to think neutrality is 'greedy', in the sense that if you add a mix of happy and unhappy lives such that future total welfare is positive, the outcome has zero value? Your approach is the 'non-greedy' one, where happy lives never contribute towards an outcome's value and unhappy lives always count against it. On the greedy approach, I think it follows that we have no reason to worry about the future unless it's negative. I think Bader supports something like the greedy version, though I'm somewhat unsure about this.
Very pleased to see this write-up and hear about the many valuable things Rethink Priorities is working on. I'll just comment on one part. Seeing as you said you wanted to look into mental health and metrics for well-being, I should mention previous and current work done in this area.
I was surprised to see person-affecting views weren't on your list of exceptions; then I saw they were in the uncertainties section. FWIW, taking Gregory Lewis' model at face value - I raised some concerns in a comment replying to that post - he concludes X-risk work costs $100,000 per life saved. If AMF is $3,500 per life saved, then X-risk is a relatively poor buy (although perhaps tempting as a sort of 'hedge'). That would only speak to your use of money: a person-affector could still conclude they'd do more good with a career focused on X-risk than elsewhere.
First, I want to say that I do not endorse TRIA. This post aimed to apply the SWB approach given what people's moral views seem to be, rather than to evaluate how good those views are. GiveWell staff and many EAs (implicitly) endorse TRIA, hence I discussed it.
FWIW, I don't think the concern that TRIA ignores equality really hits the mark. If you think what matters is interests, then you weight by the strength of those interests, and - adding some further theory - young children don't seem to have as strong an interest in survival as older humans do. I think there are deep problems with TRIA, but I don't think the concern about equality is one of them.
Indeed, many people are surprised that the relationship with inequality is complicated. I don't work on this, but my understanding is that it matters whether you see inequality in your society as a sign of unfairness and a broken system (Europe) or as a sign of opportunity to succeed (the developing world). I've heard researchers say they don't find such an effect of inequality in the US because people there really believe in the American Dream and thus don't mind it. As I say, I'm no expert on this, but I'd be keen for someone to look into it in more detail.
On your questions: 1) the effect will be due to social comparison. It's unclear whether secret cash transfers would be possible - people buy new roofs for their houses - and whether the benefit to recipients would be reduced if they can't 'show off'.
2) There is evidence on unemployment. In areas where unemployment is really high (20+%), individuals who are unemployed don't show such a reduction in life satisfaction - there's not such a social penalty if everyone else is unemployed too.
I'm pretty sceptical about basic income. I would rather use that money - which would be huge - to provide mental health treatment to everyone who needed it. People are atrocious at converting money into happiness.
Well, I confess I don't fully understand the paper, and another social scientist I've since spoken to had a different take on what the paper said altogether. I'll try to bring this up with a few more people.
Hello Larks. Glad you found it useful. On equality, it's going to turn on how you think equality should be understood. If you think we should give equal weight to the 'time-relative-interest-adjusted' value of people's lives, you might think it is correct to believe that saving a 25-year-old does more for that person than saving a 2-year-old does for that person.
FWIW, intuitions seem quite split between deprivationism and TRIA about deaths. What people find weird about deprivationism is that there is some sharp point at which someone starts to matter. Say someone begins to exist 90 days after conception. Then saving someone at 89 days would be morally unimportant, whereas saving them at 91 days would be hugely important. TRIA, by contrast, takes a more gradual approach.
On 1), I agree the correlation is only partial, which is why I said we should use the LS data cautiously, keeping in mind when the two measures come apart. I think it would be worth writing up where they diverge.
In the case of mental health vs poverty, I think moving from life satisfaction to affect measures would leave the priority ranking unchanged. Dolan and Metcalfe (2012) indicate mental health has a bigger impact on affect than on LS, and Kahneman and Deaton (2010) indicate income has a bigger impact on LS than on affect. Hence, even if we ignore the negative externalities of income transfers on LS, and given StrongMinds already seems 4x more cost-effective, switching to affect measures would only increase the comparative cost-effectiveness of mental health. I accept this is somewhat complicated and should be written up in greater depth.
It's also unfortunate that my claim here is hypothetical rather than based on actual affect measures of poverty alleviation vs mental health treatment. I'm currently talking to a couple of economists in the hope we can actually find this out!
2) You're quite right. I thought about getting into that but reckoned it was too complicated for an already long piece. I agree it would be worth thinking about how someone's LS would arc over the course of their whole life. Another worthy research question!