Posts

Donating money, buying happiness: new meta-analyses comparing the cost-effectiveness of cash transfers and psychotherapy in terms of subjective well-being 2021-10-25T15:38:11.299Z
Can money buy happiness? A review of new data 2021-06-28T01:48:27.751Z
Ending The War on Drugs - A New Cause For Effective Altruists? 2021-05-06T13:18:04.524Z
2020 Annual Review from the Happier Lives Institute 2021-04-26T13:25:51.249Z
The Comparability of Subjective Scales 2020-11-30T16:47:00.000Z
Life Satisfaction and its Discontents 2020-09-25T07:54:58.998Z
Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty 2020-08-03T16:17:32.230Z
Update from the Happier Lives Institute 2020-04-30T15:04:23.874Z
Understanding and evaluating EA's cause prioritisation methodology 2019-10-14T19:55:28.102Z
Announcing the launch of the Happier Lives Institute 2019-06-19T15:40:54.513Z
High-priority policy: towards a co-ordinated platform? 2019-01-14T17:05:02.413Z
Cause profile: mental health 2018-12-31T12:09:02.026Z
A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare 2018-10-25T15:48:03.377Z
Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was 2018-05-23T10:30:43.748Z
Could I have some more systemic change, please, sir? 2018-01-22T16:26:30.577Z
High Time For Drug Policy Reform. Part 4/4: Estimating Cost-Effectiveness vs Other Causes; What EA Should Do Next 2017-08-12T18:03:34.835Z
High Time For Drug Policy Reform. Part 3/4: Policy Suggestions, Tractability and Neglectedness 2017-08-11T15:17:40.007Z
High Time For Drug Policy Reform. Part 2/4: Six Ways It Could Do Good And Anticipating The Objections 2017-08-10T19:34:24.567Z
High Time For Drug Policy Reform. Part 1/4: Introduction and Cause Summary 2017-08-09T13:17:20.012Z
The marketing gap and a plea for moral inclusivity 2017-07-08T11:34:52.445Z
The Philanthropist’s Paradox 2017-06-24T10:23:58.519Z
Intuition Jousting: What It Is And Why It Should Stop 2017-03-30T11:25:30.479Z
The Unproven (And Unprovable) Case For Net Wild Animal Suffering. A Reply To Tomasik 2016-12-05T21:03:24.496Z
Are You Sure You Want To Donate To The Against Malaria Foundation? 2016-12-05T18:57:59.806Z
Is effective altruism overlooking human happiness and mental health? I argue it is. 2016-06-22T15:29:58.125Z

Comments

Comment by MichaelPlant on A Red-Team Against the Impact of Small Donations · 2021-11-26T12:44:55.799Z · EA · GW

A lot's been said. Is this a fair summary: small donors can do a lot of good (and earning to give can be much higher impact than other altruistic activities, like local community volunteering), but as the amount of 'EA-dedicated' money goes up, small donors are less impactful and more people should consider careers which are directly impactful?

Comment by MichaelPlant on Minimal-trust investigations · 2021-11-24T12:44:04.148Z · EA · GW

I have to say, I rather like putting a name to this concept. I know this wasn't the upshot of the article, but on reading it, it immediately struck me that it would be a good idea for the effective altruist community to engage in some minimal-trust investigations of each other's analyses and frame them as such.

I'm worried about there being too much deference and actually not very much criticism of the received wisdom. Part of the issue is that to criticise the views of smart, thoughtful, well-intentioned people in leadership positions might imply either that you don't trust them (which is rude) or that you're not smart and well-informed enough to 'get it'; there are also the normal fears associated with criticising those with greater power.

These issues are somewhat addressed by saying "look, I have a lot of respect for X and assume they are right about lots of things, but I wanted to get to the bottom of this issue myself and not take anything they said for granted. So I did a 'minimal-trust investigation'. Here's what I found..."

Comment by MichaelPlant on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-24T10:51:29.322Z · EA · GW

Yeah, it does not seem like a good outcome if people donate, say, 10% of their salary, then come to EA events and get the feeling that people look down their noses at them, as if to say "that's it? You don't have an 'EA' job?"

Comment by MichaelPlant on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-23T22:22:00.293Z · EA · GW

[restating and elaborating on what I said on twitter]

Thanks very much for this update, Ben. The "EA has loads of money" meme has unfortunately led people to assume (incorrectly) that everything 'within EA' was fully funded. This made it harder to fundraise, particularly for small orgs, like mine, that do need new donors, because prospective donors assumed their donations weren't needed.

Of course, the meme had no impact on organisations that are already fully funded - which is more or less only those being funded by Open Philanthropy.

Comment by MichaelPlant on Ngo and Yudkowsky on AI capability gains · 2021-11-19T20:25:49.948Z · EA · GW

Ok, that helps - a little! - but it's still not quite a TL;DR. :)

Comment by MichaelPlant on Sleep: effective ways to improve it · 2021-11-19T11:36:25.841Z · EA · GW

Hello Ben!

Good to know this was based on existing literature. In most cases, it helps to show the reader you know that literature, to outline what it is, and then to say what your new contribution is. Like I say, you missed a few of the obvious things, which is unfortunate. A piece on "what works for X" should, I say, include the things that work for X, then perhaps go on to flag which of these are likely to be a surprise, rather than assuming on the reader's behalf what they will already know. If you are going to write a piece on "what works for X but might surprise you", you should at least clearly flag that, and then point to something such as "standard guidance on X".

Re the strength of interventions being "40%", that still seems a confused way of presenting the information. 40% of what? Of a maximum score? A maximum score of what? Of cost-effectiveness? Well, why not just present the effectiveness numbers and divide them by the costs, then?

I agree that this sort of thing can have a lower level of rigour, but I stand by my concern that the method you use is so puzzling it's of questionable use at all. You gathered quite a bit of relevant info, but I think you presented it in a less-than-ideal way. Here, simpler would have been better: I'd have preferred a post that just said "here's a list of evidence-based ways to improve sleep" and then listed them with a brief discussion of each. That seems the way to go unless you have the data and time to do a quantitative (cost-)effectiveness analysis.

Glad you think we (at HLI) do good work. Like I say, feel free to reach out if you want to chat about research methods etc.! You can get me at michael@happierlivesinstitute.org

Comment by MichaelPlant on Ngo and Yudkowsky on AI capability gains · 2021-11-19T11:16:36.451Z · EA · GW

I might be interested in this, but I'd be really helped by a TL;DR or similar providing some context on what's being discussed.

Comment by MichaelPlant on Sleep: effective ways to improve it · 2021-11-18T10:48:12.084Z · EA · GW

While I am as much a fan of wellbeing research as the next fellow - indeed, probably a much bigger fan - I have to say I found the methodology and conclusions of this research rather confusing.

If I were approaching this topic, I would have (1) done a review of the existing literature to find out what people thought was effective and what the possible interventions were, then (2) tried to assess the options in terms of (a) a comparable metric of effectiveness and (b) cost, so readers could think about what would do the most for them at the least effort.

As it is, this research seems to have missed many of the standard pieces of advice, like avoiding alcohol, not napping after 3pm, not having a large meal before bed, and having a sleep routine. The author doesn't mention having looked at the existing literature, but does note that other EAs have mentioned sleep. I don't mean to single out the OP, but I do want to deride the myopic and self-referential tendency among effective altruists in general to overlook work done outside effective altruism. Lots of good work has been done 'out there' and we ignore it at our peril.

What I found least satisfying about this research was how this (partial) list of interventions was assessed. As far as I can see, the 'weighted-factor model' involved assigning unexplained subjective numbers to various seemingly arbitrarily chosen properties, then assigning a seemingly arbitrary weight to each factor to aggregate them.* I am reminded of the "garbage in, garbage out" concept in computer science, where nonsense inputs produce nonsense outputs. As a reader, I have no idea how to interpret the rankings or numbers - what does it mean that melatonin gets "5.95/10" or that CBT-I gets "5.78"? - or how much to update off them. The results are basically uninterpretable.

I would strongly recommend that the OP heavily revise their methods and the presentation of their research for any further work. The main thing would be to present the results of the interventions in a standardised metric, e.g. total sleep time, or standard deviations of something, so readers can make the comparison themselves, then comment on cost and, if necessary, research quality. I am happy to provide advice if that's helpful.

 

*I recognise the weighted-factor model is something Charity Entrepreneurship uses. I have raised with them several times that, for the reasons given, I find this approach hard to follow or justify, and thus of questionable use.

Comment by MichaelPlant on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-27T16:32:25.966Z · EA · GW

It seems that people object most to statues of those who became rich, famous, or powerful from doing something objectionable. For instance, you get rich from slavery, become a philanthropist, then get a statue for being a philanthropist - cf. Edward Colston in the UK. People mind less if you just so happen to have done something objectionable that isn't the reason for your prominence.

Therefore, the story would have much more oomph if it were about a factory farming magnate who, say, set up a university with the proceeds, instead of just a guy who ate meat. You could have had quite a lot of fun teasing out those parallels.

Comment by MichaelPlant on Donating money, buying happiness: new meta-analyses comparing the cost-effectiveness of cash transfers and psychotherapy in terms of subjective well-being · 2021-10-27T16:00:37.377Z · EA · GW

Last question: what's HLI's current funding situation? (Current funding, room for funding in different growth scenarios)

Our funding situation is, um, "actively seeking new donors"! We haven't yet filled our budget for 2022. 

Our gap up to the end of 2022 on our lean budget is £120k; that's the minimum we need to 'keep the lights on'.

On our growth budget, the gap to the end of 2022 is probably £300k; I'm not sure we could efficiently scale up much faster than that. (But if someone insisted on giving me more than that, I would have a good go!)

Comment by MichaelPlant on Low-Hanging (Monetary) Fruit for Wealthy EAs · 2021-10-20T09:41:05.063Z · EA · GW

Ordinary wealthy people don't care as much about getting more money because they already have a lot of it. So we should expect to be able to find overlooked methods for rich people to get richer

I'm not sure what you mean by 'ordinary' wealthy people (vs 'altruistic' wealthy people?), but I'd be pretty surprised if there were overlooked methods. In my experience, (ordinary) wealthy types spend lots of time talking to other wealthy types and swapping notes on how best to do the most with their money. Because they have more money, it can make sense for them to hire people to help save it, e.g. tax accountants. In short, I reject the premise that most wealthy people are just not trying to make themselves wealthier, and that therefore there are $20 bills on the sidewalk for wealthy people who really care about helping others.

I'm not convinced by the examples either. I'm assuming the case with Sam Bankman-Fried is that his business didn't require as much external investment, rather than that (with no offence to him) he has remarkable negotiating skills basically all other entrepreneurs lack.

Comment by MichaelPlant on Prioritization Research for Advancing Wisdom and Intelligence · 2021-10-19T10:38:56.496Z · EA · GW

The implicit framing of this post was that, if individuals just got smarter, everything would work out much better - which is true to some extent. But I'm concerned this perspective is overlooking something important, namely that it's very often clear what should be done for the common good, yet society doesn't organise itself to do those things because many individuals don't want to - for discussion, see the recent 80k podcast on institutional economics and corruption. So I'd like to see a bit more emphasis on collective decision-making vs just individuals getting smarter.

Comment by MichaelPlant on How valueable are external reviews? · 2021-10-18T13:08:54.493Z · EA · GW

Thanks for clarifying! I wonder if it would be even better if the review was done by people outside the EA community. Maybe the sympathy of belonging to the same social group and shared, distinctive assumptions (assuming they exist), make people less likely to spot errors? This is pretty speculative, but wouldn't surprise me

I can't immediately remember where I've seen this discussed before, but a concern I've heard raised is that it's quite hard to find people who (1) know enough about what you're doing to evaluate your work but (2) are not already in the EA world.

I see, interesting! This might be a silly idea, but what do you think about setting up a competition where there is a cash-prize of a few thousand dollars for the person who spots an important mistake? If you manage to attract the attention of a lot of phd students in the relevant area, you might really get a lot of competent people trying hard to find your mistakes. 

Hmm. Well, I think you'd have to be quite a big and well-funded organisation to do that. It would take a lot of management time to set up and run a competition, one which wouldn't obviously be that useful (in terms of the value of information, such a competition is more valuable the worse you think your research is). I can see organisations quite reasonably thinking this wouldn't be a good staff priority vs other things. I'd be interested to know if this has happened elsewhere and how impactful it has been.

>> Maybe that would be weird for some people. I would be surprised though if the majority of people wouldn't interpret a positive expert review as a signal that your research is trustworthy (even if its not actually a signal because you chose and paid that expert). 

That's right. People who were suspicious of your research would be unlikely to have much confidence in the assessment of someone you paid.

Comment by MichaelPlant on How valueable are external reviews? · 2021-10-15T17:38:42.638Z · EA · GW

I think my argument here holds for any other similar organisation. 
 

Gotcha

does it count as an independent, in-depth, expert review?

I mean, how long is a piece of string? :) The way I did my reviewing was to check the major assumptions and calculations and see if those made sense. But where a report, say, took information from academic studies, I wouldn't necessarily delve into those or see if they had been interpreted correctly. 

Re making things public, that's a bit trickier than it sounds. Usually I'd leave a bunch of comments in a google doc as I went, which wouldn't be that easy for a reader to follow. You could ask someone to write a prose evaluation - basically like an academic journal review report - but that's quite a lot more effort and not something I've been asked to do.

At HLI, we have asked external academics to do that for us for a couple of pieces of work, and we recognise it's quite a big ask vs just leaving gdoc comments. The people we asked were gracious enough to do it, but they were basically doing us a favour and it's not something we could keep doing (at least with those individuals). I guess one could make them public - we've offered to share ours with donors, but none have asked to see them - but there's something a bit weird about it: it's like you're sending the message "you shouldn't take our word for it, but there's this academic who we've chosen and paid to evaluate us - take their word for it".

Comment by MichaelPlant on How valueable are external reviews? · 2021-10-14T23:09:23.280Z · EA · GW

I'm slightly confused by the framing here. You only mention Founders Pledge, which, to me, implies you think Founders Pledge don't get external reviews but other EA orgs do.

This doesn't seem right, because Founders Pledge do ask others for reviews: they've asked me/my team at HLI to review several of their reports (StrongMinds, Action for Happiness, psychedelics), which we've been happy to do, although we didn't necessarily get into the weeds. I assume they do this for their other reports, and this is what I expect other EA orgs do too.

Comment by MichaelPlant on How valueable are external reviews? · 2021-10-14T23:07:15.267Z · EA · GW

I'm not sure why you're focusing on Founders Pledge in particular. Are you claiming that they are uniquely under-evaluated, relative to other EA organisations that publicly or privately provide donation advice?

I ask because this doesn't seem true to me. First, I'm struggling to think of cases where EA orgs have publicly hired independent experts to double-check their analyses - I have a vague memory that GiveWell has had this done, but that's it.

Second, Founders Pledge have asked me/my team at HLI to review several of their reports (StrongMinds, Action for Happiness, psychedelics); I assume this is what they, and other EA orgs, do in general. Admittedly, researchers in different teams reviewing each other's work is usually not newsworthy enough to announce, so you wouldn't always know it was happening.

Comment by MichaelPlant on Presenting: 2021 Incubated Charities (Charity Entrepreneurship) · 2021-10-07T15:04:40.419Z · EA · GW

Very well done to the incubatees! I wish you the best of luck. Two questions.

For Training For Good, did you consider teaching professional skills, eg management, to those in EA orgs? I ask rather self-interestedly and because that was conspicuous by its absence.

For CAPS, could you explain what the cost-effectiveness analysis was that led to that benefit:cost ratio? I couldn't immediately see anything explaining that on the website; sorry if I missed it!

Comment by MichaelPlant on We’re discontinuing the standout charity designation · 2021-10-07T14:55:10.017Z · EA · GW

I'll post Catherine's reply and then raise a couple of issues:
 

Thanks for your question. You’re right that we model GiveDirectly as the least cost-effective top charity on our list, and we prioritize directing funds to other top charities (e.g. through the Maximum Impact Fund). GiveDirectly is the benchmark against which we compare the cost-effectiveness of other opportunities we might fund.

As we write in the post above, standout charities were defined as those that “support programs that may be extremely cost-effective and are evidence-backed” but “we do not feel as confident in the impact of these organizations as we do in our top charities.”

Our level of confidence, rather than their estimated cost-effectiveness, is the key difference between our recommendation of GiveDirectly and standout charities.

We consider the evidence of GiveDirectly’s impact to be exceptionally strong. We’re not sure that our standout charities were less cost-effective than GiveDirectly (in fact, as we wrote, some may be extremely cost-effective), but we felt less confident in making that assessment, based on the more limited evidence in support of their impact, as well as our more limited engagement with them.

 

I don't see a justification here for keeping GiveDirectly in the list. Okay, there are charities GiveWell is 'confident' in, and those that they aren't, and GiveDirectly, like the other top picks, is in the first category. But this still raises the question of why to recommend GiveDirectly at all. Indeed, it's arguably more puzzling: if you think there's basically no chance A is better than B, why advocate for A? At least if you think A might be better than B, then you might defend recommending A on the grounds there's a chance, that is, if someone believes X, Y, Z they might sensibly believe it's better.

The other thing that puzzles me about this response is its seemingly non-standard approach to expected value reasoning. Suppose you can do G, which has a 100% chance of doing one 'unit' of good, or H, which has a 50% chance of doing 3 'units' of good. I say you should pick H because, in expectation, it's better, even though you're not sure it will be better. 

Where might having less evidence fit into this?

One approach to dealing with different levels of evidence is to discount the 'naive' expected value of the intervention, that is, the one you get from taking the evidence at face value. Why and by how much should you discount your 'naive' estimate? Well, you reduce it to what you expect you would conclude its actual expected value was if you had better information. For instance, suppose one intervention has RCTs with much smaller samples, and you know that effect sizes tend to go down when interventions use larger samples (they are harder to implement at scale, etc.). Hence, you're justified in discounting it for that reason and to that extent. Once you've done this, you have the 'sophisticated' expected values. Then you do the thing with the higher 'sophisticated' expected value.
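To make that concrete with a toy calculation (the discount factor here is invented purely for illustration): suppose H's naive expected value is 0.5 × 3 = 1.5 units, and you expect that better evidence would shrink the measured effect by a third. Then

$$\text{EV}_{\text{sophisticated}} = d \times \text{EV}_{\text{naive}} = \tfrac{2}{3} \times 1.5 = 1,$$

which would put H on a par with G rather than ahead of it.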

Hence, I don't see why lower ('naive') cost-effectiveness should stop someone from recommending something.

Comment by MichaelPlant on We’re discontinuing the standout charity designation · 2021-10-07T13:01:38.562Z · EA · GW

This line of reasoning seems sensible to me. However, it does raise the following question: will GiveWell also stop recommending GiveDirectly, given that, by your own cost-effectiveness numbers, it's 10-20x less cost-effective than basically all your other recommendations? And, if not, why not?

I can understand the importance of having some variety of options to recommend to donors, which necessitates recommending some things that are worse than others, but 10x worse seems to leave quite a lot of value on the table. Hence, I'd be curious to hear the rationale.

Comment by MichaelPlant on Has Life Gotten Better? · 2021-10-07T09:54:16.368Z · EA · GW

Thanks for this! I don't see anything here that disagrees with my claim. I said it can't literally be true, which is how lots of people treat it. Going from no income to $400/year also involves an infinity of doublings.

A better claim might be: "given you have enough income to subsist, doubling your income causes a fixed increase in happiness." Fine, but note that's not literally the claim "doubling your income causes a fixed increase in happiness." My hope is that showing the logarithmic model isn't literally true pushes us to come up with a more realistic model of the relationship between happiness and income.

Comment by MichaelPlant on Has Life Gotten Better? · 2021-10-06T10:13:02.297Z · EA · GW

Namely: very rough estimates suggest that we are now 100x-1000x richer than in the past, and our lives are in the range [good-ok], but generally not pure bliss or anything close to it. If we extend reasonable estimations for the effect of material circumstances on wellbeing (i.e. doubling of wealth increases satisfaction by 1 point on a 10 point scale), we should then expect past humans to have been miserable.

I don't think we should expect past humans to have been miserable. One of the key findings in the happiness literature is the so-called Easterlin Paradox: (1) richer people are happier than poorer people at a given time, but (2) in aggregate, happiness doesn't increase over time as incomes rise. This is usually explained by some combination of adaptation and social comparison effects.

It's also worth noting that the claim "each doubling of income increases happiness by a fixed amount (eg 1 point on a 10-point scale)" can't literally be true. If it were, anyone with any income would have maximum happiness, because there are an infinity of doublings between zero income and, well, any level of income. Research does, however, find this result holds basically true across the range of incomes that people actually have.
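To spell out the arithmetic (a standard way of writing the logarithmic model, not something taken from the research under discussion): with income y and constants a and b,

$$h(y) = a + b\log_2 y \;\Rightarrow\; h(2y) - h(y) = b \quad\text{and}\quad \lim_{y \to 0^+} h(y) = -\infty.$$

Each doubling adds the fixed amount b, but infinitely many doublings separate zero income from any positive income, which is why the claim cannot hold literally across the whole income range.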

Comment by MichaelPlant on Has Life Gotten Better? · 2021-10-05T09:41:18.926Z · EA · GW

This seems a potentially valuable exercise for sharpening our understanding of what the future might look like. A couple of comments.

You don't say what you mean by 'better'. What do you mean, exactly? Sorry if you said this elsewhere. Without that criterion specified, it's hard to interrogate the analysis. I'm inclined to understand 'better' as 'happier', that is, with an improved balance of pleasure over pain, and imagine you probably mean something like this too.

Assuming we're thinking in terms of happiness, I'll flag now what I hope the future posts contain - I note you're just giving your answers here, and not your reasoning.

One thing is an understanding of the role happiness plays in evolution, that is, as a mechanism rewarding or punishing us for the things that help or hinder our survival and reproduction. So, one would want to say why, given the sort of creatures we are, we're better/worse off living our lives in one sort of societal configuration than another.

The other piece is to focus particularly on time use - how people actually spend their lives - rather than just imagining their lives in snapshots. Psychological research shows that we make predictable mistakes when engaging in 'affective forecasting'. One such bias is the 'focusing illusion', where we let our judgments be driven by the stuff that's easy to imagine. Another is 'duration neglect', where our judgments of how good/bad things are pay very little attention to the passage of time.

Taking this together, how would we go about answering your question? In short, you'd want to know how good/bad the average, ordinary day in someone's life is. Hunter-gatherers seemed to have things pretty good: I can't immediately remember where I read this - maybe Sapiens - but I understand hunter-gatherers didn't spend very much time working, i.e. looking for food, and did spend lots of time socialising. In the agricultural and industrial ages, people spent many more hours working, and the work was less fun - tilling fields and working looms vs gathering berries and hunting. It's not so obvious to me that modernity is better than all that came before: as a result of technology, we increasingly live desk-bound, socially isolated lives.

I might add I haven't thought lots about this historical comparison piece, so this is not a 'cold take'. 

Comment by MichaelPlant on Independent impressions · 2021-09-29T10:48:24.473Z · EA · GW

I found the OP helpful and thought it would have been improved by a more detailed discussion of how and why to integrate other people's views. If you update when you shouldn't - e.g. when you think you understand someone's reasons but are confident they're overlooking something - then we get information cascades/groupthink scenarios. By contrast, it seems far more sensible to defer to others if you have to make a decision but don't have the time/ability/resources to get to the bottom of why you disagree. If my doctor tells me to take some medicine for some minor ailment, it doesn't seem worth me even trying to check if their reasoning is sound.

Comment by MichaelPlant on A Primer on the Symmetry Theory of Valence · 2021-09-07T21:45:42.304Z · EA · GW

I read this post and the comments that have followed it with great interest.  

I have two major, and one minor, worries about QRI's research agenda which I hope you can clarify. First, I am not sure exactly which question you are trying to answer. Second, it's not clear to me why you think this project is (especially) important. Third, I can't understand what STV is about because there is so much (undefined) technical jargon.

1.  Which question is QRI trying to answer?

You open by saying:

We know suffering when we feel it — but what is it? What would a satisfying answer for this even look like?

This makes me think you want to identify what suffering is, that is, what it consists in. But you then immediately raise Buddhist and Aristotelian theories of what causes suffering - a wholly different issue. FWIW, I don't see anything deeply problematic in identifying what suffering, and related terms, refer to. Valence just refers to how good/bad you feel (the intrinsic pleasurableness/displeasurableness of your experience); happiness is feeling overall good; suffering is feeling overall bad. I don't find anything dissatisfying about these. Valence refers to something subjective. That's a definition in terms of something subjective. What else could one want?

It seems you want to do two things: (1) somehow identify which brainstates are associated with valence and (2) represent subjective experiences in terms of something mathematical, i.e. something non-subjective. Neither of these questions is identical to establishing either what suffering is, or what causes it.  Hence, when you say:

QRI thinks not having a good answer to the question of suffering is a core bottleneck

I'm afraid I don't know which question you have in mind. Could you please specify? 

2. Why does that all matter?

It's unclear to me why you think solving either problem - (1) or (2) - is (especially) valuable. There is some fairly vague stuff about neurotech, but this seems pretty hand-wavey. It's rather bold for you to claim 

 there are trillion-dollar bills on the sidewalk, waiting to be picked up if we just actually try

and I think you owe the reader a bit more to bite into, in terms of a theory of change.  

You might offer some answer about the importance of being able to measure what impacts well-being here but - and I hope old-time forum hands will forgive me as I mount a familiar hobby-horse - economics and psychology seem to be doing a reasonable job of this simply by surveying people, e.g. asking them how happy they are (0-10). Such work can and does proceed without a theory of exactly what is happening inside the 'black box' of the brain; it can be used, right now, to help us determine what our priorities are - and if I can be permitted to toot my own horn from astride the hobby-horse, I should add that this just is what my organisation, the Happier Lives Institute, is working on. If I were to insist on waiting for real-time brain scanning data to learn whether, say, cash transfers are more cost-effective than psychotherapy at increasing happiness, I would be waiting some time.

3. Too much (undefined) jargon

Here is a list of terms or phrases that seem very important for understanding STV where I have very little idea exactly what you mean:

  • Neurophysiological models of suffering try to dig into the computational utility and underlying biology of suffering
  • symmetry
  • harmony
  • dissonance
  • resonance as a proxy for characteristic activity
  • Consonance Dissonance Noise Signature
  • self-organizing systems
  • Neural Annealing
  • full neuroimaging stack
  • precise physical formalism for consciousness
  • STV gives us a rich set of threads to follow for clear neurofeedback targets, which should allow for much more effective closed-loop systems, and I am personally extraordinarily excited about the creation of technologies that allow people to “update toward wholesome”,

Finally, and perhaps most importantly, I'm really not sure what it could even mean to represent consciousness/valence as a mathematical shape.

If this is the 'primer', I am certainly not ready for the advanced course(!). 

Comment by MichaelPlant on The psychology of population ethics · 2021-09-02T11:32:17.596Z · EA · GW

Thanks for this answer! It was really helpful. I hadn't spotted that the 'empty world' really was empty in the experiment; not sure how I missed that.

Comment by MichaelPlant on The psychology of population ethics · 2021-09-02T09:34:46.278Z · EA · GW

Well, all disagreements in philosophy ultimately come down to intuitions, not just those in population ethics! The question I was pressing is what, if anything, the authors think we should infer from data about intuitions. One might think you should update toward people's intuitions, but that's not obvious to me, not least when (1) in aggregate, people's answers are inconsistent and (2) this isn't something they've thought about.

Comment by MichaelPlant on The psychology of population ethics · 2021-08-31T09:34:23.548Z · EA · GW

I found this paper really interesting - so, thanks!

Two questions and a comment

First question: in broad terms, what do you think moral philosophers should infer from psychological studies of this type in general, and from this one in particular? One perspective would be for moral philosophers to update their views towards those of the population - the "500 million Elvis fans can't be wrong" approach.

This is tempting, except that the views of the average person appear inconsistent (ie they weigh suffering more but also think creating neutral lives is good) and implausible, by the lights of views amongst philosophers (eg those surveyed believe adding unhappy lives can be good where it increases average happiness). Even if the views were coherent and plausible (eg those surveyed congregated on a single, consistent view) it would still seem open to philosophers to discount the views of non-experts who hadn't really familiarised themselves enough with the literature and so did not constitute epistemic peers.

Second question: for the adding-people experiment, how confident should we be that those surveyed were thinking solely about the value of adding the new person, as it relates to that person themselves, and not instead about the effects adding a life has on other people? In skimming the paper, I couldn't see anything about how you had tested that participants were answering the right question.

I ask because, when I speak to people about the value of adding new lives, it is incredibly hard to get them to think just about the value relating to the created individuals, and not to that person's parents, society, etc. Yet, to find out their views on population ethics, people need to realise they should be thinking only about the effects regarding the created individual themself. Of course, I might say that adding a happy life is very good, but just because I am thinking it is good for the parents, etc.; conversely, I could answer that adding unhappy lives is bad because they are a drain on others. If I did this, I wouldn't have answered the question you wanted me to. As such, it's not clear to me your experiment has really tested what you said it has.

A comment: in the experiment about adding lives, you describe the populations as 'empty' and 'full'. This is confusing, as in the paper 'empty' actually means 1 million people, not an actually empty world; 'full' means 10 billion (which is questionably 'full' anyway). I think you should flag this more clearly and/or use different terms - 'small' and 'large' might be better. I can imagine people having different intuitions if there are genuinely no people existing at the time, and also if the world seems more genuinely full, e.g. with 100 billion people.

Comment by MichaelPlant on Questions for Howie on mental health for the 80k podcast · 2021-08-27T11:01:10.041Z · EA · GW

When people say "EAs should do X", it's usually wise to reflect on whether that is really the case - are there skills or mindsets that members of the EA community are bringing to X?

The case I would like to see made here is why EA orgs would benefit from getting mental health services from some EA provider rather than the existing ones available. Could you elaborate on why you think this is the case? I'm not sure why you think current mental health services, e.g. regular therapists, are unapproachable, or how having an 'EA' service would get around this. I don't buy the access point, at least not for EA orgs: access is a question of funding, and that's something EA orgs plausibly have. Demand for a service leads to more of it being supplied (of course, there are elasticities). If I buy more groceries, it's not like someone else goes hungry; it's more like more groceries get produced.

No, this isn't what I'm thinking about. I don't understand what you're saying here.

I assume you didn't mean it this way, but I found the tone of this comment rather brusque and dismissive. Please be mindful of that in discussions, particularly those on the EA Forum.

I'm not sure how else to explain my point. One approach to MH is to talk to each individual about what they can do. Another approach, the organisational psychology one, is to think about how to change office culture and working practices. Sort of bottom-up vs top-down.

Given my original comment, I think it's appropriate to give a broad view of the potential forms the intervention can take and what can be achieved by a strong founding team. 

These services can take forms that don't currently exist. I think it's very feasible to find multiple useful programs or approaches that could be implemented.

I'd be interested to hear you expand on what you mean here!

Comment by MichaelPlant on Questions for Howie on mental health for the 80k podcast · 2021-08-25T10:44:04.250Z · EA · GW

While I am sympathetic to the idea of doing lots of well-being stuff, it's not obvious why this needs a new EA-specified org.

To restate, I take it the thought is that improving the mental health of EAs could be a plausible priority because of the productivity gains to those people, which allow them to do more good - saliently, the main benefit isn't supposed to come from the welfare gains to the treated people themselves.

Seeing as people can buy mental health treatment for themselves, and orgs can pay for it for their staff, I suppose the intervention you have in mind is to improve the mental health of the organisation as a whole - that is, change the system, rather than keep the system fixed but help the people in it. This is a classic organisational psychology piece, and I'm sure there are consultants EA orgs could hire to help them with this. Despite being a huge happiness nerd, I'm actually not familiar with the world of happiness/mental health organisational consultancies. One I do know of is Friday Pulse, but I'm sure they aren't the only people who try to do this sort of thing.

Given such things exist, it's not obvious why self-described effective altruists should prioritise setting up more things of this type. 

Comment by MichaelPlant on New Articles on Utilitarianism.net: Population Ethics and Theories of Well-Being · 2021-08-22T15:40:27.041Z · EA · GW

Regarding the well-being section, you say:

The differences between these theories are of primarily theoretical interest; they overlap sufficiently in practice that the practical implications of utilitarianism are unlikely to depend upon which of these turns out to be the correct view.

But you don't substantiate or explain this. As a helpful suggestion, you could add a line later on pointing out that, if the different theories will agree, in practice, on which things make life go well vs badly, they are likely to agree about what sort of practical actions are good vs bad. However, different theories of well-being may well disagree on what the priorities are amongst actions, and one would need to get further into the details to investigate this. 

Comment by MichaelPlant on New Articles on Utilitarianism.net: Population Ethics and Theories of Well-Being · 2021-08-22T15:23:59.740Z · EA · GW

To bang a drum: while I appreciate the effort to communicate utilitarianism to a wider world, the bit on population ethics seemed, for my tastes, too much of an opinionated 'Trojan horse' to lead the reader to the author's (or authors') practical priorities. As I've moaned elsewhere on this Forum, I like Introductions to be introductions, not plugs.

Comment by MichaelPlant on [PR FAQ] Sharing readership data with Forum authors · 2021-08-13T22:22:42.483Z · EA · GW

Yeah, I'd like this. For stuff that doesn't get comments, it would be really interesting to know whether people read it or not.

Comment by MichaelPlant on Post on maximizing EV by diversifying w/ declining marginal returns and uncertainty · 2021-08-13T08:14:38.198Z · EA · GW

I'm not sure if you're disagreeing with my toy examples, or elaborating on the details - I think the latter.

Comment by MichaelPlant on Post on maximizing EV by diversifying w/ declining marginal returns and uncertainty · 2021-08-13T08:12:48.902Z · EA · GW

Right. You'd have a fuzzy line to represent the confidence interval of ex post value, but you would still have a precise line that represented the expected value.

Comment by MichaelPlant on How are resources in EA allocated across issues? · 2021-08-11T17:28:31.821Z · EA · GW

Thanks for this! Some minor points.

I'm puzzled by what's going on in the category "Other near-term work (near-term climate change, mental health)". The two causes in parentheses are quite different, and I have no idea what other topics fall into this category. Also, this has 12% of the people but <1% of the money: how did that happen? What are those 12% of people doing?

Also, shouldn't "global health" really be "global health and development"? If it's just "global health", that leaves out the economic stuff, e.g. GiveDirectly. Further, global health should probably either include mental health or be specified as "global physical health".

Comment by MichaelPlant on Post on maximizing EV by diversifying w/ declining marginal returns and uncertainty · 2021-08-11T14:06:55.989Z · EA · GW

I was thinking about this recently too, and vaguely remember it being discussed somewhere and would appreciate a link myself.

To answer the question, here's a rationale for diversification that's illustrated in the picture below that I just whipped up. 

Imagine you have two causes where you believe their cost-effectiveness trajectories cross at some point. Cause A does more good per unit of resources than cause B at the start, but hits diminishing marginal returns faster than B. Suppose you have enough resources to get past the crossover point. What do you do? Well, you fund A up to that point, then switch to B. Hey presto, you're doing the most good by diversifying.

This scenario seems somewhat plausible in reality. Notice it's a justification for diversification that doesn't rely on appeals to uncertainty, either epistemic or moral. Adding empirical uncertainty doesn't change the picture: empirical uncertainty basically means you should draw fuzzy lines instead of precise ones, and it'll be less clear when you hit the crossover.
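Here's a minimal numerical sketch of that logic (the marginal cost-effectiveness curves and all the numbers are invented purely for illustration; nothing here models real causes):

```python
# Greedy allocation across two causes with diminishing marginal returns.
# Cause A starts more cost-effective but diminishes faster than cause B.

def marginal_a(x):
    """Good done by the next dollar, given x dollars already spent on A."""
    return 10.0 / (1.0 + x)        # starts high, diminishes quickly

def marginal_b(x):
    """Good done by the next dollar, given x dollars already spent on B."""
    return 4.0 / (1.0 + 0.1 * x)   # starts lower, diminishes slowly

budget, step = 20.0, 0.01
spend_a = spend_b = good = 0.0

# Send each marginal dollar wherever it currently does more good.
for _ in range(round(budget / step)):
    if marginal_a(spend_a) >= marginal_b(spend_b):
        good += marginal_a(spend_a) * step
        spend_a += step
    else:
        good += marginal_b(spend_b) * step
        spend_b += step

print(f"Spent on A: {spend_a:.2f}, on B: {spend_b:.2f}, total good: {good:.2f}")
# A is funded exclusively up to the crossover; past it, further dollars are
# split so the two marginal returns stay equal - i.e. you diversify.
```

The upshot is the same as in the picture: with a big enough budget, the good-maximising allocation funds more than one cause, with no appeal to uncertainty needed.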

What's confusing for me about the worldview diversification post is that it seems to run together two justifications for, in practice, diversifying (i.e. supporting more than one thing) that are very different in nature.

One justification for diversification is based on this view about 'crossovers' illustrated above: basically, Open Phil has so much money, they can fund stuff in one area to the point of crossover, then start funding something else. Here, you diversify because you can compare different causes in common units and you so happen to hit crossovers. Call this "single worldview diversification" (SWD).

The other seems to rely on the idea that there are different "worldviews" (some combination of beliefs about morality and the facts) which are, in some important way, incommensurable: you can't put things into the same units. You might think Utilitarianism and Kantianism are incommensurable in this way: they just don't talk in the same ethical terms. Apples 'n' oranges. In the EA case, one might think the "worldviews" needed to e.g. compare the near term to the long term are, in some relevant sense, incommensurable - I won't try to explain that here, but may have a stab at it in another post. Here, you might think you can't (sensibly) compare different causes in common units. What should you do? Well, maybe you give each of them some of your total resources, rather than giving it all to one. How much do you give each? This is a bit fishy, but one might do it on the basis of how likely you think each cause is really the best (leaving aside the awkward fact you've already said you don't think you can compare them). So if you're totally unsure, each gets 50%. Call this "multiple worldview diversification" (MWD).*

Spot the difference: the first justification for diversification comes because you can compare causes, the second because you can't. I'm not sure if anyone has pointed this out before. 

 

*I think MWD is best understood as an approach to dealing with moral and/or empirical uncertainty. Depending on the type of uncertainty at hand, there are extant responses to the problem that I won't go into here. One quick example: for moral uncertainty, you might opt for 'my favourite theory' and give everything to the theory in which you have most credence; see Bykvist (2017) for a good summary article on moral uncertainty.

Comment by MichaelPlant on The Cryonics reductio against pure time preference: a rhetorical low-hanging fruit - or "Do we discount the future only because we won't live in it?" · 2021-08-04T15:23:52.141Z · EA · GW

You might think it's reasonable to discount based on psychological similarity: something is less valuable to your later self the less like you that person is. Cf. the Time-Relative Interest Account of the badness of death (e.g. Holtug 2011). This wouldn't justify a pure time preference, but it would justify a contingent time preference: in reality, you value stuff less the further in the future it happens, not because of time per se, but because of reduced psychological connectedness, which just so happens to diminish over time.

I point this out to show that someone could accept your reductio but get much the same practical result by other means.

Of course, someone who took this view would agree that some harm of size S that befalls you just before you enter the cryo chamber would be just as bad as one that befalls you as soon as you get out.  

Comment by MichaelPlant on An evaluation of Mind Ease, an anti-anxiety app · 2021-07-30T12:13:37.353Z · EA · GW

I'm really pleased to see this: I have been wondering how one would do an EA-minded evaluation of the cost-effectiveness of a start-up, one that runs it head-to-head with things like AMF. I'm particularly pleased to see an analysis of a mental health product.*

I only have one comment. You say:

The promise of mobileHealth (mHealth) is that at scale apps often have ‘zero marginal cost’ per user (much less than $12.50) and so plausibly are very cost-effective

It doesn't seem quite right that tech products have zero marginal cost. Shouldn't one include the cost of acquiring (and supporting?) a user, e.g. through advertising? This cost would need to be lower than $12.50 per user, given your other assumptions. I have no idea what user acquisition costs are and whether $12.50 is high or low.

*(Semi-obligatory disclaimer: Peter Brietbart, MindEase's CEO, is the chair of the board of trustees for HLI, the organisation I run)

Comment by MichaelPlant on Can money buy happiness? A review of new data · 2021-06-28T10:52:31.459Z · EA · GW

Uhh... that shouldn't happen from just re-plotting the same data. In fact, how is it that in the original graph, there is an increase from $400,000 to $620,000, but in the new linear axis graph, there is a decrease?


So, there was a discrepancy between the data provided for the paper and the graph in the paper itself. The graph plotted above used the data provided.  I'm not sure what else to say without contacting the journal itself.

this seems to imply that rich people shouldn't get more money because it barely makes a difference, but this also applies to poor people as well, casting doubt on whether we should bother giving money away.

I don't follow this. The claim is that money makes less of a difference than one might expect, not that it makes no difference. Obviously, there are reasons for and against working at, say, Goldman Sachs besides the salary. It does follow that, if you receiving money makes less of a difference than you would expect, then you giving it to other people, and them receiving it, will also make a smaller-than-anticipated difference. But, of course, you could do something else with your money that could be more effective than giving it away as cash - bednets, deworming, therapy, etc.

Comment by MichaelPlant on US bill limiting patient philanthropy? · 2021-06-25T09:22:03.807Z · EA · GW

I also know almost nothing about US tax law. Call me a cynic but it seems plausible that lots (nearly all?) of the people putting their money into foundations and not spending it are doing so for tax reasons, rather than because they have a sincere concern for the longterm future.

As a communications point, this does make me wonder whether longtermist philanthropists who hypothetically campaigned for such a 'loophole' to remain open would, by extension, be seen as unscrupulous tax dodgers.

Comment by MichaelPlant on Can "pride" be used as a subjective measure like "happiness"? · 2021-06-19T10:36:11.402Z · EA · GW

So, if you look at OECD (2013, Annex A), there are a few example questions about subjective well-being. The eudaimonic questions are sort of in your area (see p. 251), e.g. "I lead a purposeful and meaningful life" and "I am confident and capable in the activities that are important to me".

You might also be interested in Kahneman's(?) distinctions between decision, remembered, and experienced utility. Sounds like your question taps into "how will I, on reflection, feel about this decision?" and you're sampling your intuitions about how you judge life.

Comment by MichaelPlant on [Podcast] Suggest a question for Jeffrey Sachs · 2021-06-15T15:33:00.227Z · EA · GW

He may well have been asked this before, but I'd want to know what, if anything, he thinks would be lost by replacing the SDGs - at least insofar as they apply to current humans - with a measure of happiness.

Also, if/how he thinks about intergenerational trade-offs.

Comment by MichaelPlant on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-07T22:41:53.174Z · EA · GW

Just a half-formed thought on how something could be "meta but not longtermist", because I thought that was a conceptually interesting issue to unpick.

I suppose one could distinguish between two senses of "meta": (1) doing non-object-level work, or (2) benefiting more than one value-bearer group, where the classic, not-quite-mutually-exclusive three options for value-bearer groups are (1) near-term humans, (2) animals, and (3) far-future lives.

If one is thinking the former way, something is meta to the degree it does non-object-level vs object-level work (I'm not going to define these), regardless of what domain it works towards. In this sense, 'meta' and (e.g.) 'longtermist' are independent: you could be one, the other, both, or neither. Hence, if you did non-object-level work that wasn't focused on the long term, you would be meta but not longtermist (although it might be more natural to say "meta and not longtermist", as there is no tension between them).

If one is thinking the latter way, one might say that an org is less "meta", and more "non-meta", the greater the fraction of its resources intentionally spent to benefit just one value-bearer group. Here "meta" and "non-meta" are mutually exclusive and a matter of degree. A "non-meta" org is one that spends, say, more than 50% of its resources aimed at one group. The upshot is that, on this framework, Animal Advocacy Careers and 80k are not meta, whereas, say, GWWC is meta. Thinking this way, something is meta but not longtermist if it primarily focuses on non-longtermist stuff.

(In both cases, we will run into familiar issues about making precise what an agent 'focuses on' or 'intends'.)

Comment by MichaelPlant on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-06T14:39:47.890Z · EA · GW

In my view, being an enthusiastic longtermist is compatible with finding neartermist worldviews plausible and allocating some funding to them

Thanks for this reply, which I found reassuring. 

FWIW, I think this example is pretty unrealistic, as I don't think funding constraints will become relevant in this way. I also want to note that funding A violates some principles of donor coordination

Okay, this is interesting and helpful to know. I'm trying to put my finger on the source of what seems to be a perspectival difference, and I wonder if it relates to the extent to which fund managers should be trying to instantiate donors' wishes vs allocating the money by their own lights of what's best (i.e. as if it were just their money). I think this is probably a matter of degree, but I lean towards the former, not least because of long-term concerns about reputation, integrity, and people simply taking their money elsewhere.

To explain how this could lead us to different conclusions, if I believed I had been entrusted with money to give to A but not B, then I should give to A, even if I personally thought B was better.

I suspect you would agree with this in principle: you wouldn't want an EA fund manager to recommend a grant clearly/wildly outside the scope of their fund even if they sincerely thought it was great, e.g. the animal welfare fund recommended something that only benefitted humans even if they thought it was more cost-effective than something animal-focused.

However, I imagine you would disagree that this is a problem in practice, because donors expect there to be some overlap between funds and, in any case, fund managers will not recommend things wildly outside their fund's remit. (I am not claiming this is a problem in practice; my concern is that it may become one, and I want to avoid that.)

I haven't thought lots about the topic, but all these concerns strike me as a reason to move towards a set of funds that are mutually exclusive and collectively exhaustive - this gives donors greater choice and minimises worries about permissible fund allocation. 

Comment by MichaelPlant on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-05T18:23:43.420Z · EA · GW

Hello Michelle. Thanks for replying, but I was hoping you would engage more with the substance of my question - your comment doesn't really give me any more information than I already had about what to expect.

Let me try again with a more specific case. Suppose you are choosing between projects A and B - perhaps they have each asked for $100k but you only have $100k left. Project A is only eligible for funding from EAIF - the other EA funds consider it outside their respective purviews. Project B is eligible for funding from one of the other EA funds, but so happens to have applied to EAIF. Suppose, further, you think B is more cost-effective at doing good.

What would you do? I can't think of any other information you would need.

FWIW, I think you must pick A. I think we can assume donors expect the funds not to be overlapping - otherwise, why even have different ones? - and that they don't want their money to go to another fund's area - otherwise, that's where they would have put it. Hence, picking B would be tantamount to a breach of trust.

(By the same token, if I give you £50, ask you to put it in the collection box for a guide dog charity, and you agree, I don't think you should send the money to AMF, even if you think AMF is better. If you decide you want to spend my money somewhere else from what we agreed to, you should tell me and offer to return the money.)

Comment by MichaelPlant on My current impressions on career choice for longtermists · 2021-06-05T16:31:43.619Z · EA · GW

Thanks for writing this up! I found the overall perspective very helpful, as well as lots of the specifics, particularly (1) what it means to be on track and (2) the emphasis on the importance of 'personal fit' for an aptitude (vs the view there being a single best thing).

Two comments. First, I'm a bit surprised that you characterised this as being about career choice for longtermists.  It seems that the first five aptitudes are just as relevant for non-longtermist do-gooding, although the last two - software engineering and information security - are more specific to longtermism. Hence, this could have been framed as your impressions on career choice for effective altruists, in which you would set out the first five aptitudes and say they applied broadly, then noted the two more which are particular to longtermism. 

In the spirit of being a vocal customer, I would have preferred this framing. I am enthusiastic about effective altruism, but ambivalent about longtermism - I'm glad some people focus on it, but it's not what I prioritise - and found the narrower framing somewhat unwelcoming, as if non-longtermists aren't worth considering. (Cf if you had said this was career advice for women even though gender was only pertinent to a few parts.)

Second, one aptitude that did seem conspicuous by its absence was for-profit entrepreneurship - the section on the "entrepreneur" aptitude only referred to setting up longtermist organisations. After all, the Open Philanthropy Project, along with much of the rest of the effective altruist world, only exists because people became very wealthy and then gave their money away. I'm wondering if you think it is sufficiently easy to persuade (prospectively) wealthy people of effective altruism(/longtermism) that becoming wealthy isn't something community members should focus on; I have some sympathy with this view, but note you didn't state it here. 

Comment by MichaelPlant on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T17:15:27.824Z · EA · GW

Yes, I read that and raised this issue privately with Jonas.

Comment by MichaelPlant on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T16:49:21.833Z · EA · GW

I recognise there is admin hassle. Although, as I note in my other comment, this becomes an issue if the EAIF in effect becomes a top-up for another fund.

Comment by MichaelPlant on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T16:28:34.599Z · EA · GW

Thanks for writing this reply and, more generally, for an excellent write-up and selection of projects!

I'd be grateful if you could address a potential, related concern, namely that the EAIF might end up as a sort of secondary LTFF, and that this would be to the detriment of non-longtermist applicants to the fund, as well as being, presumably, against the wishes of the EAIF's current donors. I note the introduction says:

we generally strive to maintain an overall balance between different worldviews according to the degree they seem plausible to the committee.

and also that Buck, Max, and yourself are enthusiastic longtermists - I am less sure about Ben, and Jonas is a temporary member. Putting these together, combined with what you say about funding projects which could/should have applied to the LTFF, it would seem to follow that you could (/should?) put the vast majority of the EAIF towards longtermist projects.

Is this what you plan to do? If not, why not? If  yes,  do you plan to inform the current donors?

I emphasise I don't see any signs of this in the current round, nor do I expect you to do this. I'm mostly asking so you can set my mind at rest, not least because the Happier Lives Institute (disclosure: I am its Director) has been funded by the EAIF and its forerunner, would likely apply again, and is primarily non-longtermist (although we plan to do some LT work - see the new research agenda).

If the EAIF radically changes direction, it would hugely affect us, as well as meaning more pluralistic/meta EA donors would lack an EA fund to donate to.

Comment by MichaelPlant on Working in Parliament: How to get a job & have an impact · 2021-05-24T16:33:38.045Z · EA · GW

Yep, the SpAds bit is key - if my employer hadn't got a special advisor, I might have been useful.