Posts

Can money buy happiness? A review of new data 2021-06-28T01:48:27.751Z
Ending The War on Drugs - A New Cause For Effective Altruists? 2021-05-06T13:18:04.524Z
2020 Annual Review from the Happier Lives Institute 2021-04-26T13:25:51.249Z
The Comparability of Subjective Scales 2020-11-30T16:47:00.000Z
Life Satisfaction and its Discontents 2020-09-25T07:54:58.998Z
Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty 2020-08-03T16:17:32.230Z
Update from the Happier Lives Institute 2020-04-30T15:04:23.874Z
Understanding and evaluating EA's cause prioritisation methodology 2019-10-14T19:55:28.102Z
Announcing the launch of the Happier Lives Institute 2019-06-19T15:40:54.513Z
High-priority policy: towards a co-ordinated platform? 2019-01-14T17:05:02.413Z
Cause profile: mental health 2018-12-31T12:09:02.026Z
A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare 2018-10-25T15:48:03.377Z
Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was 2018-05-23T10:30:43.748Z
Could I have some more systemic change, please, sir? 2018-01-22T16:26:30.577Z
High Time For Drug Policy Reform. Part 4/4: Estimating Cost-Effectiveness vs Other Causes; What EA Should Do Next 2017-08-12T18:03:34.835Z
High Time For Drug Policy Reform. Part 3/4: Policy Suggestions, Tractability and Neglectedness 2017-08-11T15:17:40.007Z
High Time For Drug Policy Reform. Part 2/4: Six Ways It Could Do Good And Anticipating The Objections 2017-08-10T19:34:24.567Z
High Time For Drug Policy Reform. Part 1/4: Introduction and Cause Summary 2017-08-09T13:17:20.012Z
The marketing gap and a plea for moral inclusivity 2017-07-08T11:34:52.445Z
The Philanthropist’s Paradox 2017-06-24T10:23:58.519Z
Intuition Jousting: What It Is And Why It Should Stop 2017-03-30T11:25:30.479Z
The Unproven (And Unprovable) Case For Net Wild Animal Suffering. A Reply To Tomasik 2016-12-05T21:03:24.496Z
Are You Sure You Want To Donate To The Against Malaria Foundation? 2016-12-05T18:57:59.806Z
Is effective altruism overlooking human happiness and mental health? I argue it is. 2016-06-22T15:29:58.125Z

Comments

Comment by MichaelPlant on An evaluation of Mind Ease, an anti-anxiety app · 2021-07-30T12:13:37.353Z · EA · GW

I'm really pleased to see this: I have been wondering how one would do an EA-minded evaluation of the cost-effectiveness of a start-up, running it head to head against things like AMF. I'm particularly pleased to see an analysis of a mental health product.*

I only have one comment. You say:

The promise of mobileHealth (mHealth) is that at scale apps often have ‘zero marginal cost’ per user (much less than $12.50) and so plausibly are very cost-effective

It doesn't seem quite right that tech products have zero marginal cost. Shouldn't one include the cost of acquiring (and supporting?) a user, e.g. through advertising? That cost would need to be lower than $12.50 per user, given your other assumptions. I have no idea what user acquisition costs are or whether $12.50 is high or low.
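
To put the worry in arithmetic (a minimal sketch; every figure except the $12.50 benchmark is invented):

```python
# Hypothetical figures: only the $12.50 benchmark comes from the post above.
BENCHMARK = 12.50  # cost per user at which the app matches the comparator

def marginal_cost_per_user(acquisition: float, support: float) -> float:
    """'Zero marginal cost' ignores that each extra user must be acquired
    (e.g. via advertising) and perhaps supported."""
    return acquisition + support

print(marginal_cost_per_user(8.00, 2.00) < BENCHMARK)   # True: assumptions still hold
print(marginal_cost_per_user(15.00, 2.00) < BENCHMARK)  # False: app no longer cheaper
```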

*(Semi-obligatory disclaimer: Peter Brietbart, MindEase's CEO, is the chair of the board of trustees for HLI, the organisation I run)

Comment by MichaelPlant on Can money buy happiness? A review of new data · 2021-06-28T10:52:31.459Z · EA · GW

Uhh... that shouldn't happen from just re-plotting the same data. In fact, how is it that in the original graph, there is an increase from $400,000 to $620,000, but in the new linear axis graph, there is a decrease?


So, there was a discrepancy between the data provided for the paper and the graph in the paper itself. The graph plotted above used the data provided.  I'm not sure what else to say without contacting the journal itself.

this seems to imply that rich people shouldn't get more money because it barely makes a difference, but this also applies to poor people as well, casting doubt on whether we should bother giving money away.

I don't follow this. The claim is that money makes less of a difference than one might expect, not that it makes no difference. Obviously, there are reasons for and against working at, say, Goldman Sachs besides the salary. It does follow that, if your receiving money makes less of a difference than you would expect, then your giving it to other people, and their receiving it, will also make a smaller-than-anticipated difference. But, of course, you could do something else with your money that could be more effective than giving it away as cash - bednets, deworming, therapy, etc.

Comment by MichaelPlant on US bill limiting patient philanthropy? · 2021-06-25T09:22:03.807Z · EA · GW

I also know almost nothing about US tax law. Call me a cynic, but it seems plausible that lots (nearly all?) of the people putting their money into foundations and not spending it are doing so for tax reasons, rather than because they have a sincere concern for the long-term future.

As a communications point, this does make me wonder whether longtermist philanthropists who hypothetically campaigned for such a 'loophole' to remain open would, by extension, be seen as unscrupulous tax dodgers.

Comment by MichaelPlant on Can "pride" be used as a subjective measure like "happiness"? · 2021-06-19T10:36:11.402Z · EA · GW

So, if you look at OECD (2013, Annex A), there are a few example questions about subjective well-being. The eudaimonic questions are sort of in your area (see p 251), e.g. "I lead a purposeful and meaningful life", and "I am confident and capable in the activities that are important to me".

You might also be interested in Kahneman's(?) distinctions between decision, remembered, and experienced utility. Sounds like your question taps into "how will I, on reflection, feel about this decision?" and you're sampling your intuitions about how you judge life.

Comment by MichaelPlant on [Podcast] Suggest a question for Jeffrey Sachs · 2021-06-15T15:33:00.227Z · EA · GW

He may well have been asked this before, but I'd want to know what, if anything, he thinks would be lost by replacing the SDGs - at least insofar as they apply to current humans - with a measure of happiness.

Also, if/how he thinks about intergenerational trade-offs.

Comment by MichaelPlant on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-07T22:41:53.174Z · EA · GW

Just a half-formed thought on how something could be "meta but not longtermist", because I thought that was a conceptually interesting issue to unpick.

I suppose one could distinguish between two meanings of "meta": (1) doing non-object-level work or (2) benefiting more than one value-bearer group, where the classic, not-quite-mutually-exclusive three options for value-bearer groups are (1) near-term humans, (2) animals, and (3) far-future lives.

If one is thinking the former way, something is meta to the degree it does non-object-level vs object-level work (I'm not going to define these), regardless of what domain it works towards. In this sense, 'meta' and (e.g.) 'longtermist' are independent: you could be one, the other, both, or neither. Hence, if you did non-object-level work that wasn't focused on the long term, you would be meta but not longtermist (although it might be more natural to say "meta and not longtermist", as there is no tension between them).

If one is thinking the latter way, one might say that an org is less "meta", and more "non-meta", the greater the fraction of its resources intentionally spent to benefit only one value-bearer group. Here "meta" and "non-meta" are mutually exclusive and a matter of degree. A "non-meta" org is one that spends, say, more than 50% of its resources aimed at one group. The thought is that, on this framework, Animal Advocacy Careers and 80k are not meta, whereas, say, GWWC is meta. Thinking this way, something is meta but not longtermist if it primarily focuses on non-longtermist stuff.

(In both cases, we will run into familiar issues about making precise what an agent 'focuses on' or 'intends'.)

Comment by MichaelPlant on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-06T14:39:47.890Z · EA · GW

In my view, being an enthusiastic longtermist is compatible with finding neartermist worldviews plausible and allocating some funding to them

Thanks for this reply, which I found reassuring. 

FWIW, I think this example is pretty unrealistic, as I don't think funding constraints will become relevant in this way. I also want to note that funding A violates some principles of donor coordination

Okay, this is interesting and helpful to know. I'm trying to put my finger on the source of what seems to be a perspectival difference, and I wonder if it relates to the extent to which fund managers should be trying to carry out donors' wishes vs allocating the money by their own lights of what's best (i.e. as if it were just their money). I think this is probably a matter of degree, but I lean towards the former, not least because of long-term concerns about reputation, integrity, and people simply taking their money elsewhere.

To explain how this could lead us to different conclusions, if I believed I had been entrusted with money to give to A but not B, then I should give to A, even if I personally thought B was better.

I suspect you would agree with this in principle: you wouldn't want an EA fund manager to recommend a grant clearly/wildly outside the scope of their fund even if they sincerely thought it was great, e.g. if the animal welfare fund recommended something that only benefitted humans, even if they thought it was more cost-effective than something animal-focused.

However, I imagine you would disagree that this is a problem in practice, because donors expect there to be some overlap between funds and, in any case, fund managers will not recommend things wildly outside their fund's remit. (I am not claiming this is a problem in practice; my concern is that it may become one, and I want to avoid that.)

I haven't thought lots about the topic, but all these concerns strike me as a reason to move towards a set of funds that are mutually exclusive and collectively exhaustive - this gives donors greater choice and minimises worries about permissible fund allocation. 

Comment by MichaelPlant on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-05T18:23:43.420Z · EA · GW

Hello Michelle. Thanks for replying, but I was hoping you would engage more with the substance of my question - your comment doesn't really give me any more information than I already had about what to expect.

Let me try again with a more specific case. Suppose you are choosing between projects A and B - perhaps they have each asked for $100k but you only have $100k left. Project A is only eligible for funding from EAIF - the other EA funds consider it outside their respective purviews. Project B is eligible for funding from one of the other EA funds, but so happens to have applied to EAIF. Suppose, further, you think B is more cost-effective at doing good.

What would you do? I can't think of any other information you would need.

FWIW, I think you must pick A. I think we can assume donors expect the funds not to overlap - otherwise, why even have different ones? - and that they don't want their money to go to another fund's area - otherwise, that's where they would have put it. Hence, picking B would be tantamount to a breach of trust.

(By the same token, if I give you £50, ask you to put it in the collection box for a guide dog charity, and you agree, I don't think you should send the money to AMF, even if you think AMF is better. If you decide you want to spend my money somewhere else from what we agreed to, you should tell me and offer to return the money.)

Comment by MichaelPlant on My current impressions on career choice for longtermists · 2021-06-05T16:31:43.619Z · EA · GW

Thanks for writing this up! I found the overall perspective very helpful, as well as lots of the specifics, particularly (1) what it means to be on track and (2) the emphasis on the importance of 'personal fit' for an aptitude (vs the view that there is a single best thing).

Two comments. First, I'm a bit surprised that you characterised this as being about career choice for longtermists. It seems that the first five aptitudes are just as relevant for non-longtermist do-gooding, although the last two - software engineering and information security - are more specific to longtermism. Hence, this could have been framed as your impressions on career choice for effective altruists, in which you would set out the first five aptitudes and say they apply broadly, then note the two that are particular to longtermism.

In the spirit of being a vocal customer, I would have preferred this framing. I am enthusiastic about effective altruism, but ambivalent about longtermism - I'm glad some people focus on it, but it's not what I prioritise - and found the narrower framing somewhat unwelcoming, as if non-longtermists aren't worth considering. (Cf. if you had said this was career advice for women even though gender was only pertinent to a few parts.)

Second, one aptitude that did seem conspicuous by its absence was for-profit entrepreneurship - the section on the "entrepreneur" aptitude only referred to setting up longtermist organisations. After all, the Open Philanthropy Project, along with much of the rest of the effective altruist world, only exists because people became very wealthy and then gave their money away. I'm wondering if you think it is sufficiently easy to persuade (prospectively) wealthy people of effective altruism(/longtermism) that becoming wealthy isn't something community members should focus on; I have some sympathy with this view, but note you didn't state it here. 

Comment by MichaelPlant on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T17:15:27.824Z · EA · GW

Yes, I read that and raised this issue privately with Jonas.

Comment by MichaelPlant on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T16:49:21.833Z · EA · GW

I recognise there is admin hassle. Although, as I note in my other comment, this becomes an issue if the EAIF in effect becomes a top-up for another fund.

Comment by MichaelPlant on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T16:28:34.599Z · EA · GW

Thanks for writing this reply and, more generally, for an excellent write-up and selection of projects!

I'd be grateful if you could address a potential, related concern, namely that the EAIF might end up as a sort of secondary LTFF, and that this would be to the detriment of non-longtermist applicants to the fund, as well as being, presumably, against the wishes of the EAIF's current donors. I note the introduction says:

we generally strive to maintain an overall balance between different worldviews according to the degree they seem plausible to the committee.

and also that Buck, Max, and yourself are enthusiastic longtermists - I am less sure about Ben, and Jonas is a temporary member. Putting these together, combined with what you say about funding projects which could/should have applied to the LTFF, it would seem to follow that you could (/should?) put the vast majority of the EAIF towards longtermist projects.

Is this what you plan to do? If not, why not? If yes, do you plan to inform the current donors?

I emphasise I don't see any signs of this in the current round, nor do I expect you to do this. I'm mostly asking so you can set my mind at rest, not least because the Happier Lives Institute (disclosure: I am its Director) has been funded by the EAIF and its forerunner, would likely apply again, and is primarily non-longtermist (although we plan to do some LT work - see the new research agenda).

If the EAIF radically changes direction, it would hugely affect us, as well as meaning more pluralistic/meta EA donors would lack an EA fund to donate to.

Comment by MichaelPlant on Working in Parliament: How to get a job & have an impact · 2021-05-24T16:33:38.045Z · EA · GW

Yep, the SpAds bit is key - if my employer hadn't got a special advisor, I might have been useful.

Comment by MichaelPlant on Working in Parliament: How to get a job & have an impact · 2021-05-24T15:11:38.410Z · EA · GW

Thanks for writing this. I too think more people should consider this.

I agree almost entirely with what you've written, and I'd just like to add a couple of comments drawing on my own perspective -  I worked as a Parliamentary Researcher for a year after finishing my undergraduate degree.

First, your impact really depends on how useful you are to the MP. Ideally, you want to work for an ambitious up-and-coming MP, who will be active and will rely on their staff. If you work for a government minister, they will focus on their policy brief, and you'll be mostly redundant: they will have civil service staff, and sometimes a special advisor (which is a political appointment paid for by the central party), who will have policy domain expertise that you don't. In my case, I  worked for an MP who unexpectedly became a government minister at the end of my first week. I ended up doing very little work for my whole year, let alone impactful work. If you work for an unambitious or a very experienced MP - perhaps someone who has been there 20 years - then they may not be that active, or have as much use for you, or both.

I recognise you may not have much choice in who you work for - the jobs are very competitive - but I would advise someone to think twice about taking the job if their only option is to work for an MP who won't use their labour. If I'd known how my year would turn out, I would have looked hard for something else.

Second, regarding risks, it is true that your impact depends on how your MP does - if they rise up the ranks, they can carry you with them, and you will get associated with their bit of the party. However, if your MP does something stupid, it doesn't seem to cause you reputational damage. I knew several staffers whose MPs got into trouble; people felt sorry for them rather than seeing them as tarnished by association.

Comment by MichaelPlant on AMA: Tim Ferriss, Michael Pollan, and Dr. Matthew W. Johnson on psychedelics research and philanthropy · 2021-05-24T10:16:32.153Z · EA · GW

Hello Aaron,

Re (a), that would be a sufficient justification, I agree: you suggest the option that is less cost-effective in the expectation that more people will do it, so its expected value is higher nonetheless. My point was that, if you have a fixed total of resources then, as an investor, the lower-risk, lower-ROI option can be better (due to diminishing marginal utility) but, as a donor, you just want to put the fixed total towards the thing with the higher ROI.

That said, this is possibly worse than creating some kind of psychedelics fund that can combine many small donations into grants of a size that make sense for universities to process

I am not aware of such a fund, but I have had a bit of discussion with Jonas Vollmer about setting up a new EA fund that could do this. This hypothetical 'human well-being fund' would be an alternative to the global health and development fund. While the latter would (continue to) basically back 'tried-and-tested' GiveWell recommendations (which are in global health and development), the former could, inter alia, engage in hits-based giving and take a wider view.

Comment by MichaelPlant on Problem area report: mental health · 2021-05-24T10:00:34.800Z · EA · GW

No problem. It's always a challenge that you want to put the attention-grabbing stuff at the top whilst knowing that you can't properly caveat it and many won't read anything else!

Comment by MichaelPlant on Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty · 2021-05-24T09:58:21.672Z · EA · GW

Hello SamiM,

It's an interesting position. I'm not sure if it's exactly the same, but it seems similar to desert-adjusted attitudinal hedonism (see here) where certain pleasures/pains don't count if they are(/aren't) accompanied by the right attitudes. I feel the intuitive pull behind it but, on reflection, I don't buy it.

One issue is going to be providing a non-question-begging account of what makes certain emotions, but not others, well-grounded. Does the groundedness relate to the emotion? If so, why some rather than others? Does it relate to the beliefs? If so, why is my pleasure only good for me if the beliefs that contribute to it are correct? That doesn't seem relevant at all. This isn't my area of expertise, but I'd be surprised if there were any really good way of doing this.

It strikes me that a more plausible way of accounting for the intuitions is that, as a pragmatic matter, we don't want to reward people for being 'bad' (in some, to be specified, sense) lest it give them incentives to keep doing it. It's an appeal to deterrence rather than retribution, c.f. locking up criminals to deter their activities rather than just to punish them for being bad people. On this understanding, you need to actually inform people of your decision-making, otherwise it will just seem, to them, an arbitrary punishment.

And I don't see how deterrence would work here. Would you, um, tell people that you would have given them lots of money, but now you won't because you've learnt this makes their neighbours jealous?

There are some other issues that spring to mind, but hopefully that suffices! 

Comment by MichaelPlant on Problem area report: mental health · 2021-05-21T13:07:22.996Z · EA · GW

that the fact about "per DALY lost, spending on HIV is 150x higher than spending on mental health" is not necessarily a sign of irrational priorities

I agree! At the start of section 4, on neglectedness - the one which later compares HIV to mental health spending - we make the same point (emphasis added):

In terms of national spending, in every country the proportion of the health budget spent on mental healthcare and research is disproportionate to the burden of mental disorders (see figure 4) (Patel et al., 2018). To be clear, this by itself does not show more should be spent on mental health, if the aim is to have the biggest impact using scarce resources: if interventions in other areas were more cost-effective, then this allocation would be justified. However, as we go on to argue, it seems likely mental health has been unduly neglected due to reasons such as stigma.

Somewhat of an aside: there is an ongoing confusion inside the EA community about the importance of 'scale', 'neglectedness' and 'tractability', something that's been discussed on this forum before - see e.g. this summary of two chapters from my PhD thesis. I recommend that people think of scale (the size of a problem) and neglectedness (the resources going to a problem) as background information that might later be relevant to the cost-effectiveness of working on a problem, but that doesn't by itself tell you anything about cost-effectiveness.

Comment by MichaelPlant on Problem area report: mental health · 2021-05-20T19:08:38.747Z · EA · GW

Thanks for raising this - comparing things is a cause very close to my heart!

First, the report wasn't trying to compare the importance of mental health as a cause area to other things, so I understand why you didn't find that comparison: it wasn't central.

Second, the report (p8) does compare the impact of depression and anxiety to various other health conditions, as well as to debt, unemployment, and divorce, in terms of 0-10 life satisfaction, a measure of subjective well-being (SWB) - the other main measure of SWB is happiness. We, as in HLI, are pretty enthusiastic about comparing different outcomes in terms of SWB rather than anything else, e.g. QALYs. The obvious issue, if you use QALYs, is that they are a measure of health: even if you thought they were an excellent measure of the impact of health on well-being, you would still need to compare health to non-health outcomes.

We mentioned SWB in a recent post about our 2020 annual review and in this post about using SWB to compare averting poverty to saving lives; I argued for it explicitly in this 2018 post, and it's raised in so many other places I'm starting to feel embarrassed about repeating myself, which is why it wasn't featured prominently here(!)

Third, the report also mentions (p26) that I've previously done a fairly basic analysis, including in my PhD, using SWB to compare a mental health charity (StrongMinds) to those recommended by GiveWell - on that analysis, mental health looks rather promising. Further, it notes (at p36) that HLI is now working on a more empirically sophisticated SWB analysis of the same type. We have some provisional results for this latter analysis and should be putting out those reports within a couple of months, at which point you are welcome to dive into that comparison!

Comment by MichaelPlant on AMA: Tim Ferriss, Michael Pollan, and Dr. Matthew W. Johnson on psychedelics research and philanthropy · 2021-05-20T13:11:58.855Z · EA · GW

Tim,

Thanks enormously for this very thorough write-up - shared despite your nervousness(!) - which was insightful, not just for your thinking about psychedelics, but also about non-profit and for-profit investing.

You said lots. I'm just going to focus on two things here.

1. (Dis)analogies between investing and donating

You drew the analogy that GiveWell-recommended charities - evidence-based 'micro-interventions' - are like index funds, whereas funding research is more like angel investing. I agree with you that the risk-return structure is similar, in the sense we think the former has lower variance and lower expected value and the latter has higher variance but also higher expected value. Crucially, 'value' here is being used ambiguously: for investing, we're interested in financial value; for philanthropy, in moral value. Because of this, the analogy isn't exact and it doesn't follow we should think about investing and philanthropy the same way.

From an investor's perspective, it does make sense to make both sorts of investments, but only because there are diminishing marginal returns to income on well-being. If there were no diminishing marginal returns to income on well-being, the best thing for your well-being would be whatever has the highest expected return on investing!

From the philanthropist's perspective, because there aren't diminishing marginal returns to value on, er, value - increasing happiness by 1 'unit' is just as good, no matter how much happiness there already is - we really should just do the things that have the highest expected value and ignore concerns about variance.
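
To make the contrast concrete, here is a minimal simulation with invented numbers (a 1.1x 'sure thing' standing in for the index fund/GiveWell option, and a 10% chance of a 20x payoff for the angel/research option):

```python
import numpy as np

rng = np.random.default_rng(0)
BANKROLL = 100_000   # hypothetical pot to allocate
SAFE = 1.1           # 'index fund' / GiveWell-style sure thing
MOON = 20.0          # 'angel investing' payoff, which lands...
P_HIT = 0.10         # ...only 10% of the time (expected 2x > 1.1x)

def outcomes(frac_moon, n=200_000):
    # Simulate final wealth for a given split between safe and moonshot.
    hit = rng.random(n) < P_HIT
    return BANKROLL * ((1 - frac_moon) * SAFE + frac_moon * np.where(hit, MOON, 0.0))

for f in [0.0, 0.05, 0.5, 1.0]:
    x = outcomes(f)
    # Investor: diminishing returns to money -> compare E[log wealth].
    # Donor: each unit of good counts the same -> compare E[money].
    print(f"moonshot share {f:.2f}: E[money]={x.mean():>9,.0f}, "
          f"E[log]={np.log(np.maximum(x, 1)).mean():.3f}")
```

Expected money is maximised by going all-in on the moonshot, whereas expected log-utility peaks at a small moonshot allocation - which is why the investor diversifies but the philanthropist, whose 'utility' is linear in impact, shouldn't.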

Hence, if you think funding some project in psychedelics really has higher expected (moral) value than anything else, including GiveWell's picks, it would be better (by your lights) to give to that, and recommend your listeners to do likewise. Put another way, note there's something odd about saying "yeah, I really do think A would have the most impact, and all that matters here is impact, but you and I should do B anyway."

Admittedly, you might have some concerns about 1. asking your listeners to follow your recommendations, rather than someone else's (which wouldn't be relevant to your own giving) and 2. it being psychologically motivating to have some low-risk wins, i.e. you think you will give donations with a higher total expected value if some are lower expected value 'sure-things'.

2. When is it worth doing detailed analyses of early-stage investments/philanthropy?

I'm not sure if we disagree here or not. In terms of a Value of Information approach, the less money you are putting towards something, and the less you expect to learn from investigating it - because e.g. you think there is no good evidence available, so you'd still be relying on your intuitions - the less valuable it is to do the investigation. For really big decisions, it can be worth doing this even if you're very confident, because you might be wrong.
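
As a toy version of that logic (a sketch; all the numbers are made up):

```python
# Rough expected net value of investigating before allocating money.
def value_of_investigation(budget, p_change_mind, gain_if_changed, cost):
    """budget: amount at stake; p_change_mind: chance the analysis flips the
    decision; gain_if_changed: proportional improvement in impact if it does."""
    return budget * p_change_mind * gain_if_changed - cost

print(value_of_investigation(10_000, 0.05, 0.2, 500))       # -400.0: small grant, skip it
print(value_of_investigation(5_000_000, 0.05, 0.2, 5_000))  # 45000.0: big decision, worth it
```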

I suspect we probably agree on this in general, but we might disagree on exactly where 'the bar' is, that is, where it makes sense to sit down to write out one's assumptions, put probabilities and values on things, and crunch some numbers. Broadly, I'm a fan of doing this: I find it helps clarify my thinking, plus if cost-effectiveness analysis doesn't agree with the intuitive judgement, that is a good spur to think about where the difference emerges. It's possible I'm suffering from bias here: quantifying hard-to-quantify stuff is what the conceptual tools of effective altruism (primarily philosophy and economics) allow one to do and I am familiar with. To the man with only a hammer, etc.

That said, I think one specific, valuable project would be sifting through the landscape of psychedelic funding opportunities. As you say, even some of the best projects are not getting funded, so it seems useful to think through exactly which those are and make the case for them so they get the money they need. This is a more or less apples-to-apples comparison and could be done quite qualitatively because it's things like "fund research into compound A for X or fund compound A for Y", so you can just compare X to Y. However, this is pretty hard to do without lots of inside knowledge of the players and projects, particularly as they change over time. HLI doesn't have this knowledge, so we'd need to partner with someone in the know.

The other obvious valuable project would be comparing (the best thing in) psychedelics to other things. This is the familiar-but-difficult quantitative analysis piece. Given the money at stake, it's worth doing even if one is pretty confident about the answer. Further, at least for EA-minded donors, it's crucial to see a good attempt to do this before switching where they put their money. Again, a key input is what the best-in-class psychedelics thing is.

I'm wondering if this is something you might be interested in collaborating on.  I'll send you a message on the EA forum privately to ask you about this.

P.S. Regarding the Founders Pledge comparison of Usona to StrongMinds, they say it's comparable on e.g. p. 69 of the psychedelics report. Sorry, I thought that was somewhere more obvious.

Comment by MichaelPlant on Should Chronic Pain be a cause area? · 2021-05-18T14:06:33.704Z · EA · GW

I note in your conclusion you suggest:

providing treatment to people who have conditions associated with immense pain (10/10 on the pain scale)

But I couldn't see, from the post, which conditions those were, or an explanation of why they merit particular attention. Could you possibly expand on this?

(I'm wondering if you're appealing to logarithmic pain scales but, even if pain scales were logarithmic (something I doubt and discuss in this working paper), it doesn't follow that those would be the best chronic pain conditions to treat: less intense pains could still be more cost-effective to treat.)

Comment by MichaelPlant on Should Chronic Pain be a cause area? · 2021-05-18T14:00:45.995Z · EA · GW

Great to see more people looking at this topic! I should flag that the Happier Lives Institute produced a report on pain last November which goes into quite a bit of depth, although we don't cover demographics, so that was interesting to learn about (disclaimer: I'm HLI's Director and contributed to the report). We didn't label this a report on 'chronic pain' but we looked at three causes of pain, all of which lead to long-term pain. These were: 

(1) Terminal conditions requiring access to opioids, e.g. cancer

(2) Headache disorders

(3) Low back pain

We investigated each of these separately as they seemed quite distinct regarding solutions and obstacles.  

For (1) and (2), there did seem to be good interventions you could use to treat pain, but we weren't sure what the most promising options for an EA-minded person would be, nor how these compared to other global priorities. For (3), we weren't really sure what could be done - low back pain doesn't seem well-understood - apart from suggesting basic research; however, again, we didn't get as far as figuring out what basic research would be best.

Comment by MichaelPlant on AMA: Tim Ferriss, Michael Pollan, and Dr. Matthew W. Johnson on psychedelics research and philanthropy · 2021-05-15T16:12:19.301Z · EA · GW

What's the current state of research into 'bad trips' and hallucinogen-persisting perception disorder?

There seems to be this weird disconnect between people saying "but what about bad trips?" and psychedelic researchers basically shrugging their shoulders and replying "we actually don't see these in clinical trials". Is one explanation that clinical trials screen out certain people, eg those susceptible to schizophrenia, who are most liable to react badly to psychedelics?

Comment by MichaelPlant on AMA: Tim Ferriss, Michael Pollan, and Dr. Matthew W. Johnson on psychedelics research and philanthropy · 2021-05-15T15:59:29.832Z · EA · GW

A more personal question: what reactions have you gotten from other people, such as your friends and family, when/if you've told them about your use of psychedelics? Was anyone shocked and appalled?

Comment by MichaelPlant on AMA: Tim Ferriss, Michael Pollan, and Dr. Matthew W. Johnson on psychedelics research and philanthropy · 2021-05-15T15:56:08.664Z · EA · GW

What do you think of drug policy reform more broadly, and where do you see your work on psychedelics fitting into that?

(Disclaimer/hopefully-acceptable-self-promotion: Peter Singer and I argued, in the New Statesman, in favour of full legalisation a couple of weeks ago. We didn't mention psychedelics specifically, but full legalisation would, amongst other things, make research into and the therapeutic use of psychedelics easier).

Comment by MichaelPlant on AMA: Tim Ferriss, Michael Pollan, and Dr. Matthew W. Johnson on psychedelics research and philanthropy · 2021-05-15T15:50:34.545Z · EA · GW

Within the field of psychedelics, where do you think additional action is most urgent, and why?

Comment by MichaelPlant on AMA: Tim Ferriss, Michael Pollan, and Dr. Matthew W. Johnson on psychedelics research and philanthropy · 2021-05-15T15:49:09.212Z · EA · GW

First of all, I'd like to say I've been excited about this topic for some time and have been following each of you, and your (excellent) work individually, so it's a very pleasant surprise to have you all here!

Question: what is your thinking on how cost-effective, from a donor perspective, additional resources are if put towards psychedelics compared to other problems, e.g. the GiveWell-style health and development interventions? 

Follow up: How valuable do you think additional detailed research on this would be (to you)? 

This is primarily for Tim, seeing as he's really putting his money where his mouth is!

Background: I run the Happier Lives Institute and I want us to take a look, in the near future, into funding psychedelics.* Psychedelics seem very promising, but it's unclear exactly how promising.

One generic issue is that it's hard to sensibly compare the cost-effectiveness of systemic interventions, e.g. psychedelics, to 'atomic' ones, e.g. handing out cash transfers to one person at a time, because you have to make so many assumptions about how funding one thing might impact an entire society. The best analysis currently is from Founders Pledge, who compared funding psychedelic research (specifically, Usona's research into psilocybin as a treatment for depression) to funding psychotherapy for mental health (specifically, StrongMinds, which treats women for depression in Africa). This is probably the most straightforward comparison, as it's in terms of depression in both cases, and it finds them about equally effective. However, the Founders Pledge analysis is arguably too sceptical of psychedelics because, for instance, it only considers the impact the research would have in the US, rather than the world.

A particular issue is that psychedelics now seems to be getting increasingly more attention, so one might wonder if all the best projects will get funded anyway, and donors seeking the biggest impact should go elsewhere. 

*Or, rather, another look - I wrote a series of posts on this forum and gave a talk on it in 2017 - but then dropped the topic because Founders Pledge picked it up.

Comment by MichaelPlant on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-10T16:18:35.414Z · EA · GW

I was waiting for this! I thought there were going to be lots of "this would be bad for the EA brand" comments. As some evidence against this, and to my surprise, across all the places where I posted this, or saw others post it (on the EA forum, Facebook, and Twitter), the post received very little pushback.

I was actually pretty disappointed by this, as it made me think the post hadn't reached many who would disagree. On the plus side, it suggests this cause is not going to be objectionable amongst people who are sympathetic to EA ideas.

Re the second para, I wasn't claiming that a new organisation would need to exist. My concern was whether it was reasonable to think this is where (for someone) their money or time could do the most good. That doesn't imply they would need to start something.

Comment by MichaelPlant on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-10T16:04:18.010Z · EA · GW

Right, so I do agree that if you're going to move away from prohibition, you need to consider how non-prohibition would be implemented in reality, rather than in some fictitious ideal world, and then whether it really would be better. The thing people tend to forget is that you can evolve regulation, so I'm optimistic problems like those mentioned here can eventually be overcome.

Also, to state the obvious, that something has some problems is not an all-things-considered reason against doing it.

Comment by MichaelPlant on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-10T16:00:39.367Z · EA · GW

What I think the three different replies to this comment indicate is that crudely asking "how many resources go to this thing?" is, in itself, neither necessary nor sufficient to deem something a high priority. We need a fuller story about the nature of the problem, its scale, potential solutions, obstacles, and the rest. I don't think anyone has tried to do that for this issue, which is why I'd like someone to dig into it.

This strikes me as an issue where it's not obviously high priority, but because it's not obvious, it is worth researching further to see if it is.

Comment by MichaelPlant on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-07T07:58:38.211Z · EA · GW

Yes, there is some overlap here, certainly.

OPP has, as I understand it, worked on drug decriminalisation, cannabis legalisation, and prison reform, all within the US. What we might call 'global drug legalisation' goes further with respect to drug policy reform (legal, regulated markets for all drugs, plus global rather than just US scope), but it also wouldn't cover non-drug-related prison reforms.

Comment by MichaelPlant on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-06T21:28:21.872Z · EA · GW

I'm partially sympathetic to this. However, I think EAs have got a bit hung up on 'neglectedness', to the extent that it's got in the way of clear thinking: if lots of people are doing something, and you can make them do it slightly better, then working on non-neglected things is promising. Really, I think you need to judge the 'facts on the ground', what you can do, and go from there. If there aren't ruthlessly impact-focused types working on a problem, that would be a good heuristic for some such people to get stuck in.

What was salient to me, compared to when I knew very little of the topic, is how much larger the expected value of drug legalisation now seems.

Comment by MichaelPlant on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-05-04T10:09:48.619Z · EA · GW

I think the least contentious argument is that 'an introduction' should introduce people to the ideas in the area, not just the ideas that the introducer thinks are most plausible. Eg a curriculum on political ideology wouldn't focus nearly exclusively on 'your favourite ideology'. A thoughtful educator would include arguments for and against their position and do their best to steelman. Even if your favourite ideology was communism and you were doing 'an intro to communism', you would still expect it not just to focus on your favourite strand of communism. Hence, I would have had more sympathy with the original incarnation if it had been billed as "an intro to longtermism".

But, further, there can be good reasons to do things for symbolic or coalitional reasons. To think otherwise implies a rather naive understanding of politics and human interaction. If you want people to support you - you can frame this in terms of moral trade, if you want - sometimes you also need to support and include them. The way I'd like EA to work is "this is what I believe matters most, but if you disagree because of A, B, C, then you should talk to my friend". This strikes me as coalitional moral trade that benefits all the actors individually (by their own lights). The alternative, more or less what 80k had been proposing, is "this is what I believe, but I'm not going to tell you what the alternatives are or what you should do if you disagree". That isn't an engagement in moral trade.

I'm pretty worried about a scenario where the different parts of the EA world believe (rightly or wrongly) that others aren't engaging in moral trade and so decide to embark on 'moral trade wars' against each other instead.

Comment by MichaelPlant on 2020 Annual Review from the Happier Lives Institute · 2021-04-30T10:29:09.635Z · EA · GW

Hello!

I'm not really sure what Seligman means in the above quote, sorry. Perhaps it would make sense in a wider context.

Re PERMA, I'm not a fan of the concept and it strikes me as unmotivated. It's something like a subjective list theory of well-being, where Seligman takes well-being to consist in a bunch of different items, each of them subjective in some way. However, I don't see the justification for why he's chosen those 5 items (positive emotions, engagement, relationships, meaning, accomplishments) rather than any others. It seems to me the most plausible re-interpretation of PERMA is that those 5 items are major contributors to happiness, and well-being consists only in happiness.

I'm glad you like our transparency! We hope it helps us improve our decision-making and better allows others to see how we think.

Re Layard's book, Richard asked me to read a draft and I gave him extensive comments, primarily on the philosophical aspects, which were mostly in the earlier chapters. I also attended a conference he put on to discuss the book.

Comment by MichaelPlant on 2020 Annual Review from the Happier Lives Institute · 2021-04-29T10:30:10.790Z · EA · GW

I'm not sure exactly what you mean by "objective well-being". Here are two options.

One thing you might have in mind is that well-being is constituted by something subjective, eg happiness or life satisfaction, but you then wonder how objective life circumstances (health, wealth, relationship status, etc), positional concerns, etc. contribute to that subjective thing. In this case, health, etc are determinants of well-being, not actually well-being itself. This approach is pretty much exactly what the SWB literature does: you see how the right-hand side variables, many of which are objective,  relate to the left-hand side subjective one. I'm not sure what the shortcomings of this approach are in general - if you think well-being is subjective, this is just the sort of analysis you would want to undertake. 

An alternative thing you might mean is that well-being is properly constituted (at least in part) by something objective. One might adopt an objective list theory of well-being:

All objective list theories claim there can be things which make a person’s life go better which are neither pleasurable to nor desired by them. Classic items for this list include success, friendship, knowledge, virtuous behaviour, and health. Such items are ‘objective’ in the sense of being concerned with facts beyond both a person’s conscious experience and/or their desires

If one had this view, your question would be about how well-being, which is objective, relates to how people feel about their well-being. It's not clear what the purpose of this project would be: if you already know what well-being is, and you think it's something objective, why would you care how having well-being causes people to feel about their lives? So, I assume you mean the former!

Comment by MichaelPlant on 2020 Annual Review from the Happier Lives Institute · 2021-04-28T14:48:35.570Z · EA · GW

Thanks for your comment and for bringing this to our attention. One of the pleasures, but also pains, of SWB research is that there is simply an enormous scope to it; basically everything impacts well-being one way or another. The result is that many potentially fruitful avenues of research are left unexplored.

I don't expect we'll be pursuing this specific line of inquiry, or headaches in general, within the next year or so. The only scenarios in which I would see that change would be if (1) a major donor appeared who would (only) fund us to look at headaches or (2) we already had a lot of donors following our recommendations - we don't have any such donors now, which is necessarily the case because we don't have any all-things-considered recommendations(!) - and our inside view was that headaches might be more effective than our hypothetical top pick and so worth investigating.

As a hot take on your particular suggestion, this is a very small study and I've heard lots of horror stories about dietary research, so this causes me only a (very) minor update, sorry!

Comment by MichaelPlant on 2020 Annual Review from the Happier Lives Institute · 2021-04-28T14:40:17.059Z · EA · GW

Hello Engelhardt,

Thanks for the comment! In response:

  1. To clarify, the WELLBY is something that has come out of the academic SWB community - bits of economics and psychology, mostly. It's not been developed by us, and there are only a handful of papers that have used it so far; hence we're among the first to be applying it. I should add that, if you're already using measures of SWB, say, a 0-10 life satisfaction scale, it's not a big innovation to look at how much something changes that and multiply that by duration, which is really all the WELLBY is (see the quick sketch at the end of this comment). (The more innovative bit is using SWB at all, rather than using WELLBYs given you're already using SWB.) So, it's easiest to think of us as using a relatively new, but existing, methodology and applying it to new problems - namely, (re)assessing the cost-effectiveness of things EAs already focus on.

That said, there are some theoretical and practical kinks to be worked out in using WELLBYs - e.g. on the ‘neutral point’, mentioned above. Our plan - which we are already engaged in - is to do the work we think is necessary to improve the WELLBY approach, then feed that back into SWB academia. More generally, it’s not unusual that a measurement tool gets developed and then refined.

  2. Ideally, we'd like to see SWB metrics used across the board, where feasible, and we are pushing to make this happen. Part of the issue with Q/DALYs is that they are measures of health: even if you thought they were the ideal measures of health (or, of the contribution of health to well-being), you run into an issue comparing health to non-health outcomes. A chief virtue of SWB metrics is that you can measure changes in any domain in one currency, namely their impact on SWB.

Having said this, Q/DALYs are quite ingrained in the medical world and it's an open question how valuable it is to push for change there vs do other things.

  3. I think the rules can be bent in search of a good name, and we're really just following what other SWB researchers call them. It has been suggested, notably by John Broome, that it should be the 'WALY', but that sounds a bit, well, silly (in British English, a 'wally' is a synonym for 'fool'). Personally, I also like the SWELLBY, but that's yet to catch on...
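
As promised in point 1, here is the WELLBY arithmetic as a minimal sketch (the intervention and figures are purely illustrative):

```python
def wellbys(ls_change: float, years: float, people: int = 1) -> float:
    """WELLBYs = change on a 0-10 life satisfaction scale x years it lasts x people affected."""
    return ls_change * years * people

# e.g. a hypothetical intervention raising life satisfaction by 0.5 points for 4 years:
print(wellbys(0.5, 4))       # 2.0 WELLBYs per person
print(wellbys(0.5, 4, 100))  # 200.0 WELLBYs across 100 people
```
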
Comment by MichaelPlant on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-28T11:49:51.430Z · EA · GW

Hello Rob and Keiran,

I apologise if this is just rank incompetence/inattention on my part as a forum reader, but I actually can't find anything mentioning 1. or 2. in your comments on this thread, although I did see your note about 3. (I've done control-F for all the comments by "80000_Hours" and mentions of "Paul Christiano", "Ajeya Cotra", "Keiran", and "Rob". If I've missed them, and you provide a (digestible) hat, I will take a bite.)

In any case, the new structure seems pretty good to me - one series that deals with the ideas more or less in the abstract, another that gets into the object-level issues. I think that addresses my concerns but I don't know exactly what you're suggesting; I'd be interested to know exactly what the new list would be.

More generally, I'd be very happy to give you feedback on things (I'm not sure how to make this statement more precise, sorry). I would far prefer to be consulted in advance than feel I had to moan about it on the forum after the fact - this would also avoid conveying the misleading impression that I don't think you do a lot of excellent work, which I do think. But obviously, it's up to you whose input you solicit and how much.

Comment by MichaelPlant on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-24T13:17:00.677Z · EA · GW

Thanks for somewhat engaging on this, but this response doesn't adequately address the main objection I, and others, have been making: your so-called 'introduction' will still only cover your preferred set of object-level problems.

To emphasise, if you're going to push your version of EA, call it 'EA', but ignore the perspectives of dedicated, sincere, thoughtful EAs just because you happen not to agree with them, that's (1) insufficiently epistemically modest, (2) uncooperative, and (3) going to (continue to) needlessly annoy a lot of people, myself included.

Comment by MichaelPlant on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-19T11:49:34.484Z · EA · GW

I suppose so. But if you don't think the article provides new reasons to care less about avoiding the Repugnant Conclusion, then it doesn't provide new reasons to focus on other moral problems more.

Comment by MichaelPlant on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-19T11:48:05.568Z · EA · GW

Thank you for your comments, Max and John. They inclined me to be quite a bit more favourable to the paper. I still have mixed feelings: while I respect the urge to move a stale conversation on, I don't think the authors provide new object-level reasons to do so. They do provide a raw (implicit?) appeal for others, as their peers, to update in their direction, but I'm sceptical that's what philosophy should involve.

Comment by MichaelPlant on Avoiding the Repugnant Conclusion is not necessary for population ethics: new many-author collaboration. · 2021-04-18T12:49:22.488Z · EA · GW

When I first saw the paper, I thought "oh cool, how novel for philosophers to come together and say they agree on something, for once". But then, as I reflected on it a couple of days later, I thought the publication was odd. After all, there's not much in the way of argument, so the paper is really just a statement of opinion. As such, there is a problematic whiff of an appeal to authority and social pressure here: "oh, you think the repugnant conclusion is repugnant? But you shouldn't, because all these smart people disagree with you. Just get with the programme, okay?"

In general, I don't see how papers which say (little more than) "We agree with X" merit publication. What would be the point of a paper which said, e.g., "We, some utilitarian philosophers, do not think the usual objections to utilitarianism succeed because of the usual counter-objections"? We already know that philosophers believe a variety of things.

Comment by MichaelPlant on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-17T15:35:19.638Z · EA · GW

TL;DR. I'm very substantially in agreement with Brian's comment. I expand on those concerns, put them in stronger terms, then make a further point about how I'd like 80k to have more of a 'public service broadcasting' role. Because this is quite long, I thought it was better to have it as a new comment.

It strikes me as obviously inappropriate to describe the podcast series as "effective altruism: an introduction" when it focuses almost exclusively on a specific worldview - longtermism. The fact that this objection is acknowledged, and that a "10 problem areas" series is also planned, doesn't address it. In addition, and relatedly, it seems mistaken to produce and distribute such a narrow introduction to EA in the first place.

The point of EA is to work out how to do the most good, then do it. There are three target groups one might try to benefit - (1) (far) future lives, (2) near-term humans, (3) (near-term) animals. Given this, one cannot, in good faith, call something an 'introduction' when it focuses almost exclusively on object-level attempts to benefit just one group. At the very least, this does not seem to be in good faith when there is a substantial fraction of the EA community, and people who try to live by EA principles, who do prioritise each of the three.

For people inside effective altruism who do not share 80k's worldview, stating that this is an introduction runs the serious risk of conveying that they are not "real EAs", that they are not welcome in the EA community, and that their sincere and thoughtful labours and perspectives are unimportant. It does not seem adequately inclusive, welcoming, open-minded, or considerate - values EAs tend to endorse.

For people outside EA who are being introduced to the ideas for the first time, it genuinely fails to introduce them to the relevant possibilities for how they might do the most good, leaving them with a misleading impression of what EA is or can be. It would have been trivially easy to include the Bollard and Glennerster interviews - or something else to represent those who focus on animals or on near-term humans - and so indicate that those are credible altruistic paths and enthuse those who might take them.

By analogy, if someone taught an "introduction to political ideologies" course which glossed over conservatism and liberalism to focus primarily on (the merits of) socialism, you would assume they were either incompetent or pushing an agenda. Either way, if you hoped that they would cover all the material and do so in an even-handed manner, you would be disappointed.

Given this podcast series is not an introduction to effective altruism, it should not be called "effective altruism: an introduction". More apt might be “effective longtermism: an introduction” or “80k’s opinionated introduction to effective altruism” or “effective altruism: 80k’s perspective”. In all cases, there should be more generous signposting of what the other points of view are and where they could be found.

A good introduction to EA would, at the very least, include a wide range of steel-manned positions about how to do the most good that are held by sincere, thoughtful individuals aspiring to do the most good. I struggle to see why someone would produce such a narrow introduction unless they thought those holding alternative views were arrant and irrelevant fools.

I can imagine someone defending 80k by saying that this is their introduction to effective altruism and there's nothing to stop someone else writing their own and sharing it (note that RobBensinger does this below).

While this is technically true, I do not find it compelling for the following reason. In a cooperative altruistic community, you want to have a division, rather than a duplication, of labour, where people specialise in different tasks. 80k has become, in practice, the primary source of introductory materials to EA: it is the single biggest channel by which people are introduced to effective altruism, with 17% of EA survey respondents saying they first heard about EA through it; it produces much of the introductory content individuals read or listen to. 80k may not have a monopoly on telling people about EA, but it is something like the ‘market leader’.

The way I see it, given 80k’s dominant position, they should fulfil something like a public service broadcasting role for EA, where they strive to be impartial, inclusive, and informative (https://en.wikipedia.org/wiki/Public_broadcasting).

Why? Because they are much better placed to do it than anyone else! In terms any 80k reader will be familiar with, 80k should do this because it is their comparative advantage and they are not easily replaced. Their move to focusing on longtermism has left a gap. A new organisation, Probably Good, has recently stepped into this gap to provide more cause-neutral careers advice, but I see it as a cause for regret that this had to happen.

While I think it would be a good idea if 80k had more of a public service broadcasting model, I don't expect this to happen, seeing as they've consciously moved away from it. It does, however, seem feasible for 80k to be a bit more inclusive - in this case, one very easy thing would be to expand the list from 10 to 12 items so that concerns for animals and near-term humans feature. It would be a huge help to non-longtermist EAs if 80k talked about them a bit (more), and it would be a small additional cost to 80k.

Comment by MichaelPlant on Confusion about implications of "Neutrality against Creating Happy Lives" · 2021-04-12T08:41:06.510Z · EA · GW

I want to focus on the following because it seems to be a problematic misunderstanding:

"1. Temporal position should not impact ethics (hence longtermism)"

This genuinely does seem to be a common view in EA, namely, that when someone exists doesn't (in itself) matter and that, given impartiality with respect to time, longtermism follows. Longtermism is the view that we should be particularly concerned with ensuring long-run outcomes go well.

The reason this understanding is problematic is that probably the two strongest objections to longtermism (in the sense that, if these objections hold, they rob longtermism of its practical force) have nothing to do with temporal position in itself. I won't say whether these objections are, all things considered, plausible; I'll merely set out what they are.

First, there is the epistemic objection to longtermism (sometimes called the 'tractability', 'washing-out', or 'cluelessness' objection): in short, that we can't be confident enough about the impact our actions will have on the long-run future to make it the practical priority. See this post for recent discussion and references: https://forum.effectivealtruism.org/posts/z2DkdXgPitqf98AvY/formalising-the-washing-out-hypothesis#comments. Note this has nothing to do with valuing people differently because of their position in time.
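
To see the shape of the washing-out worry, here is a toy sketch of my own (invented parameters, not the model from the linked post), in which an action's effect survives each subsequent period only with some probability p:

```python
# Toy model of the washing-out objection (illustrative assumptions only):
# an action's effect persists through each later period with probability p,
# and is otherwise scrambled (expected value zero thereafter).

def expected_longrun_effect(initial_effect: float, p: float, periods: int) -> float:
    """Expected effect `periods` steps out, given per-period survival probability p."""
    return initial_effect * p ** periods

for T in (10, 100, 1000):
    print(T, expected_longrun_effect(1.0, 0.99, T))
# Even with 99% persistence per period, the expected effect 1000 periods
# out is roughly 0.00004 of the original - it washes out geometrically.
```

If anything like this geometric decay holds, confident claims about far-future impact become very hard to sustain.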

Second, there is the ethical objection that appeals to person-affecting views in population ethics, which have the implication that creating (happy) lives is neutral.* What's the justification for this implication? One justification could be 'presentism', the view that only presently existing people matter. This is a justification based on temporal position per se, but it is (I think) highly implausible.

An alternative justification, which does not rely on temporal position in itself, is 'necessitarianism', the view that the only people who matter are those who exist necessarily (i.e. in all outcomes under consideration). The motivation for this is that (1) outcomes can only be better or worse if they are better or worse for someone (the 'person-affecting restriction') and (2) existence is not comparable to non-existence for someone ('non-comparativism'). In short, it isn't better to create lives, because it's not better for the people who get created. (I am quite sympathetic to this view and think too many EAs dismiss it too quickly, often without understanding it.)

The further thought is that our actions change which specific individuals get created (e.g. ask whether any particular individual alive today would have existed had Napoleon won at Waterloo). The result is that our actions, which aim to benefit (far) future people, cause different people to exist. This isn't better for either the people who would have existed or the people who will actually exist. This is known as the 'non-identity problem'. Necessitarians might explain that, although we really want to help (far) future people, we simply can't: there is nothing, in practice, we can do to make their lives better. (Rough analogy: there is nothing, in practice, we can do to make trees' lives go better - only sentient entities can have well-being.)

Note, crucially, this has nothing to do with temporal position in itself either. It's the combination of only necessary lives mattering and our actions changing which people will exist. Temporal position is ethically relevant (i.e. instrumentally important), but not ethically significant (i.e. doesn't matter in itself).

*You can have symmetric person-affecting views (creating lives is neutral). You can also have asymmetric person-affecting views (creating happy lives is neutral, creating unhappy lives is bad). Asymmetric PAVs may, or may not, have concern for the long term, depending on what the future looks like and whether they think adding happy lives can compensate for adding unhappy lives. I don't want to get into this here as this is already long enough.

Comment by MichaelPlant on Announcing "Naming What We Can"! · 2021-04-05T09:04:52.929Z · EA · GW

Ha. I like this name.

While I'm writing, I'll mention I seriously proposed calling HLI the Bentham Institute for Global Happiness (BIGHAP), but it was put to an internal vote and I, tragically, lost. I am fairly confident not calling it BIGHAP will be my biggest deathbed regret.

Comment by MichaelPlant on Spears & Budolfson, 'Repugnant conclusions' · 2021-04-05T09:01:39.142Z · EA · GW

Pablo, could you, or perhaps some other kind forum reader, provide a brief explanation of what they actually do? The abstract more-or-less says 'we solve a problem', but it's unclear exactly how they solve it - I have no intuitive purchase on what "more inclusive formalizations" means - so I don't know whether it's a good use of time to read the paper.

Comment by MichaelPlant on Announcing "Naming What We Can"! · 2021-04-02T07:57:03.379Z · EA · GW

I'd like to know what the Happier Lives Institute should be; we never liked the name anyway.

Comment by MichaelPlant on How much does performance differ between people? · 2021-04-01T08:23:16.030Z · EA · GW

Ah, this is great: evidence that the selectors could tell the top 2% from the rest, but that the 2%-20% range was much of a muchness. It's a shame it doesn't give any more information on 'commercial success'.

Comment by MichaelPlant on Any EAs familiar with Partha Dasgupta's work? · 2021-03-31T14:17:15.863Z · EA · GW

I'm not sure how to assess what counts as 'core EA'! But I don't think the org bills itself as EA, or that the overwhelming majority of its staff self-identify as EAs (cf. the way the staff at, um, CEA probably do...)