Posts

A change to the grantmaking schedule of the Effective Altruism Global Health & Development Fund 2019-06-26T22:05:32.979Z · score: 34 (13 votes)
The Giving What We Can Pledge: self-determination vs. self-binding 2017-01-26T10:42:58.434Z · score: 12 (12 votes)
New version of effectivealtruism.org 2016-08-05T17:58:15.854Z · score: 8 (10 votes)
Should charity prioritise the worst off? 2016-06-02T16:09:54.261Z · score: 2 (2 votes)
Why effective altruism used to be like evidence-based medicine. But isn’t anymore 2015-08-12T15:13:34.927Z · score: 11 (15 votes)

Comments

Comment by jamessnowden on Announcing the launch of the Happier Lives Institute · 2019-06-22T19:03:55.809Z · score: 10 (3 votes) · EA · GW

On (1)

>people inflate their self-reports scores generally when they are being given treatment?

Yup, that's what I meant.

>Is there one or more studies you can point me to so I can read up on this, or is this a hypothetical concern?

I'm afraid I don't know this literature on blinding very well but a couple of pointers:

(i) StrongMinds notes "social desirability bias" as a major limitation of their Phase Two impact evaluation and suggests collecting objective measures to supplement their analysis:

"Develop the means to negate this bias, either by determining a corrective percentage factor to apply or using some other innovative means, such as utilizing saliva cortisol stress testing. By testing the stress levels of depressed participants (proxy for depression), StrongMinds could theoretically determine whether they are being truthful when they indicate in their PHQ-9 responses that they are not depressed." https://strongminds.org/wp-content/uploads/2013/07/StrongMinds-Phase-Two-Impact-Evaluation-Report-July-2015-FINAL.pdf

(ii) GiveWell's discussion of the difference between blinded and non-blinded trials on water quality interventions when outcomes were self-reported [I work for GiveWell but didn't have any role in that work and everything I post on this forum is in a personal capacity unless otherwise noted]

https://blog.givewell.org/2016/05/03/reservations-water-quality-interventions/

On (2)

It may be best to chat about this in person, but I'll try to put it another way.

Say a single RCT of a cash transfer program in a particular region of Kenya found it doubled people's consumption for a year but had no apparent effect on life satisfaction. What should we believe about the likely effect of a future cash transfer program on life satisfaction? (Take it as an assumption for the moment that the wider evidence suggests increases in consumption generally lead to increases in life satisfaction.)

Possibility 1: there's something about cash transfer programs which means they don't increase life satisfaction as much as other ways of increasing consumption.

Possibility 2: this result was a fluke of context; there was something about that region at that time which meant increases in consumption didn't translate to increases in reported life satisfaction, and we wouldn't expect that to be true elsewhere (given the wider evidence base).

If Possibility 2 is true, then it would be more accurate to predict the effect of a future cash transfer program on life satisfaction by using the RCT's measured effect of cash on consumption, and then extrapolating from the wider evidence base to the likely effect on life satisfaction. If Possibility 1 is true, then we should simply take the measured effect of the RCT on life satisfaction as our prediction.

One way of distinguishing between Possibility 1 and Possibility 2 would be to look at the inter-study variance in the effects of similar programs on life satisfaction. If there's high variance, that should update us towards Possibility 2. If there's low variance, that should update us towards Possibility 1.
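To make the heuristic concrete, here is a minimal sketch of that decision rule; the function, its inputs, and the variance threshold are purely illustrative assumptions rather than anything taken from the studies discussed:

```python
# Illustrative sketch of the "use inter-study variance to choose a prediction strategy" heuristic.
from statistics import pvariance

def predict_ls_effect(rct_ls_effect, rct_consumption_effect,
                      ls_per_unit_consumption, ls_effects_similar_programs,
                      variance_threshold=0.05):
    """Predict the life-satisfaction (LS) effect of a future, similar program.

    rct_ls_effect: LS effect measured directly in the single RCT
    rct_consumption_effect: consumption effect measured in the same RCT
    ls_per_unit_consumption: average LS gain per unit of consumption, from the wider evidence base
    ls_effects_similar_programs: LS effects of comparable programs in other studies
    variance_threshold: hypothetical cut-off for "high" inter-study variance
    """
    if pvariance(ls_effects_similar_programs) > variance_threshold:
        # High variance: the null LS result looks like a fluke of context (Possibility 2),
        # so extrapolate via the consumption effect and the wider evidence base.
        return rct_consumption_effect * ls_per_unit_consumption
    # Low variance: the LS result probably generalises (Possibility 1),
    # so take the directly measured RCT effect.
    return rct_ls_effect
```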

I haven't seen this problem discussed before (although I haven't looked very hard). It seems interesting and important to me.

Comment by jamessnowden on Announcing the launch of the Happier Lives Institute · 2019-06-21T19:28:10.489Z · score: 12 (5 votes) · EA · GW

Excited to see your work progressing, Michael!

I thought it might be useful to highlight a couple of questions I personally find interesting and didn't see on your research agenda. I don't think these are the most important questions, but I haven't seen them discussed before and they seem relevant to your work.

Writing this quickly so sorry if any of it's unclear. Not necessarily expecting an answer in the short term; just wanted to flag the questions.

(1) How should self-reporting bias affect our best guess of the effect size of therapy-based interventions on life satisfaction (proxied through e.g. depression diagnostics)?

My understanding is that at least some of the effect size for antidepressants is due to placebo (although I understand there's a big debate over how much).

If we assume that (i) at least some of this placebo effect is due to self-reporting bias (rather than a "real" placebo effect that genuinely makes people happier), and (ii) it's impossible to properly blind therapeutic interventions, how should this affect our best guess of the effect size of therapy relative to what's reported in various meta-analyses? Are observer-rating scales a good way to overcome this problem?

(2) How much do external validity concerns matter for directly comparing interventions on the basis of effect on life satisfaction?

If my model is: [intervention] -> increased consumption -> increased life satisfaction.

And let's say I believe the first step has high external validity but the second step has very low external validity.

That would imply that directly measuring the effect of [intervention] on life satisfaction would have very low external validity.

It might also imply that a better heuristic for predicting the effect of future similar interventions on life satisfaction would be:

(i) Directly measure the effect of [intervention] on consumption

(ii) Use the average effect of increased consumption on life satisfaction from previous research to estimate the ultimate effect on life satisfaction.

In other words: when the link between certain outcomes and ultimate impact differs between settings in a way that's ex ante unpredictable, it may be better to proxy future impact of similar interventions through extrapolation of outcomes, rather than directly measuring impact.
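As a toy illustration of steps (i) and (ii), with made-up numbers:

```python
# Hypothetical figures, purely to illustrate the two-step extrapolation.
measured_consumption_effect = 0.30  # (i) effect of [intervention] on ln(consumption), measured directly
ls_per_unit_consumption = 0.50      # (ii) average LS gain per unit of ln(consumption), from previous research

predicted_ls_effect = measured_consumption_effect * ls_per_unit_consumption
print(predicted_ls_effect)          # 0.15: the extrapolated effect on life satisfaction
```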

What evidence currently exists around the external validity of the links between outcomes and ultimate impact (i.e. life satisfaction)?

Comment by jamessnowden on Oxford Prioritisation Project: Version 0 · 2017-03-11T14:56:24.448Z · score: 4 (4 votes) · EA · GW

I would deprioritise looking at BasicNeeds (in favour of StrongMinds). They use a franchised model and aren't able to provide financials for all their franchisees. This makes it very difficult to estimate cost-effectiveness for the organisation as a whole.

The GWWC research page is out of date (it was written before StrongMinds' internal RCT was released) and I would now recommend StrongMinds above BasicNeeds on the basis of greater transparency and focus on cost-effectiveness.

Comment by jamessnowden on Some Thoughts on Public Discourse · 2017-02-23T21:34:17.996Z · score: 12 (12 votes) · EA · GW

Thanks Holden. This seems reasonable.

A high impact foundation recently (and helpfully) sent me their grant writeups, which are a treasure trove of useful information. I asked them if I could post them here and was (perhaps naively) surprised that they declined.

They made many of the same points as you re: the limited usefulness of broad feedback, potential reputation damage, and (given their small staff size) cost of responding. Instead, they share their writeups with a select group of likeminded foundations.

I still think it would be much better if they made their writeups public, but almost entirely because it would be useful for the reader.

It's a shame that the expectation of responding to criticism can disincentivise communication in the first place.

(Views my own, not my employer's)

Comment by jamessnowden on The Giving What We Can Pledge: self-determination vs. self-binding · 2017-02-03T14:39:58.546Z · score: 1 (1 votes) · EA · GW

I agree this seems relevant.

One slight complication is that donors to GWWC might expect a small proportion of people to renege on the pledge.

Comment by jamessnowden on Estimating the Value of Mobile Money · 2016-12-23T11:10:51.592Z · score: 1 (1 votes) · EA · GW

> It seems like you're assuming that the GiveDirectly money would have gone only to the M-Pesa-access side of the (natural) experiment, but they categorized areas based on whether they had M-Pesa access in 2008-2010, not 2012-2014 when access was much higher.

Ah yes - that kind of invalidates what I was trying to do here.

> I didn't notice that GiveWell had an estimate for this, and checking now I still don't see it. Where's this estimate from?

It came from the old GiveWell cost-effectiveness analysis Excel workbook (2015), "Medians" tab, cell V14. Actually, looking at the new one, the equivalent figure seems to be 0.26%, so you're right! (Although this is the present value of total increases in current and future consumption.)

Comment by jamessnowden on Estimating the Value of Mobile Money · 2016-12-21T16:16:03.951Z · score: 2 (2 votes) · EA · GW

Thanks for this Jeff - a very informative post.

The study doesn't appear to control for cash transfers received through access to M-Pesa. I was thinking about how much of the 0.012 increase in ln(consumption) was due to GiveDirectly cash transfers.

Back of the envelope:

  • M-Pesa access raises ln(consumption) by 0.012 for 45% of population (c.20m people).
  • 0.012 * 20m = 234,000 unit increases in ln(consumption)

  • GiveDirectly gave c.$9.5m in cash transfers between 2012-14 to people with access to M-Pesa. [1]

  • GiveWell estimates that each $ to GiveDirectly raises ln(consumption) by 0.0049
  • 9.5m * 0.0049 = 46,000 unit increases in ln(consumption)

So GiveDirectly accounted for (very roughly) a fifth of the 0.012 increase in ln(consumption) due to M-Pesa.

[1] This is an overestimate, as it assumes all transfers went to Kenya and none to Uganda.

(Done in haste - may have got my sums / methodology wrong)
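For reference, here's the same back-of-the-envelope arithmetic in code. With the rounded inputs above, the first product comes out nearer 240,000 than the 234,000 quoted (presumably the underlying figures were less rounded), but the "roughly a fifth" conclusion is unchanged:

```python
# Reproduction of the back-of-the-envelope steps above, using the rounded figures quoted.
mpesa_ln_consumption_gain = 0.012   # increase in ln(consumption) from M-Pesa access
people_with_access = 20_000_000     # c.45% of the population
total_mpesa_units = mpesa_ln_consumption_gain * people_with_access  # ~240,000 (comment quotes ~234,000)

givedirectly_transfers = 9_500_000  # USD transferred 2012-14 to people with M-Pesa access [1]
ln_consumption_per_dollar = 0.0049  # GiveWell estimate of ln(consumption) gain per $ to GiveDirectly
total_gd_units = givedirectly_transfers * ln_consumption_per_dollar  # ~46,550

print(round(total_gd_units / total_mpesa_units, 2))  # ~0.19, i.e. very roughly a fifth
```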

Comment by jamessnowden on Is not giving to X-risk or far future orgs for reasons of risk aversion selfish? · 2016-09-19T08:29:02.181Z · score: 0 (0 votes) · EA · GW

I agree. Although some forms of personal insurance are also rational, e.g. health insurance in the US, because the downside of not having it is so bad. But don't insure your toaster.

Comment by jamessnowden on Is not giving to X-risk or far future orgs for reasons of risk aversion selfish? · 2016-09-19T08:27:13.138Z · score: 0 (0 votes) · EA · GW

I agree that diminishing marginal utility (dmu) over crop yields is perfectly rational. I mean a slightly different thing: risk aversion over utilities, which is why people fail the Allais paradox. Rational choice theory is dominated by expected utility theory (exceptions: Buchak, McClennen), which suggests risk aversion over utilities is irrational. Risk aversion over utilities seems pertinent here because most moral views don't have diminishing marginal utility over people's lives.

Comment by jamessnowden on Is not giving to X-risk or far future orgs for reasons of risk aversion selfish? · 2016-09-16T17:19:02.362Z · score: 2 (2 votes) · EA · GW

In normative decision theory, risk aversion means a very specific thing. It means using a different aggregating function from expected utility maximisation to combine the value of disjunctive states.

Rather than multiplying the realised utility in each state by the probability of that state occurring, these models apply a non-linear weighting to each of the states which depends on the global properties of the lottery, not just what happens in that state.
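One standard way of writing the contrast down is the rank-dependent / risk-weighted form (used here purely as an illustration; the point doesn't depend on this particular model):

```latex
% Outcomes ordered worst to best, x_1 \le \dots \le x_n, with probabilities p_1, \dots, p_n.
% Expected utility weights each state's utility by its probability:
\[ \mathrm{EU}(L) = \sum_{i=1}^{n} p_i \, u(x_i) \]
% A risk-weighted (rank-dependent) aggregation instead weights each improvement
% u(x_i) - u(x_{i-1}) by a risk function r applied to the probability of doing at least that well:
\[ \mathrm{REU}(L) = u(x_1) + \sum_{i=2}^{n} r\Big(\sum_{j=i}^{n} p_j\Big)\,\big[u(x_i) - u(x_{i-1})\big] \]
% r is non-decreasing with r(0) = 0 and r(1) = 1. Taking r(p) = p recovers EU; a convex r
% (e.g. r(p) = p^2) down-weights gains that arrive only with low probability, i.e. risk
% aversion over utilities. The weight on each state depends on the whole lottery, not just
% on that state, which is why such models violate independence / the sure-thing principle.
```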

Most philosophers and economists agree risk aversion over utilities is irrational because it violates the independence axiom / sure-thing principle which is one of the foundations of objective / subjective expected utility theory.

One way a person could rationally have seemingly risk averse preferences is by placing a higher value on the first bit of good they do than on the second bit of good they do, perhaps because doing some good makes you feel better. This would technically be selfish.

But I'm pretty sure this isn't what most people who justify donating to global poverty out of risk aversion actually mean. They generally mean something like "we should place a lot of weight on evidence because we aren't actually very good at abstract reasoning". This would mean their subjective probability that an x-risk intervention is effective is very low. So it's not technically risk aversion. It's just having a different subjective probability. This may be an epistemic failure. But there's nothing selfish about it.

I wrote a paper on this a while back in the context of risk aversion justifying donating to multiple charities. This is a shameless plug. https://docs.google.com/document/d/1CHAjFzTRJZ054KanYj5thWuYPdp8b3WJJb8Z4fIaaR0/edit#heading=h.gjdgxs

Comment by jamessnowden on New version of effectivealtruism.org · 2016-08-11T15:04:45.287Z · score: 1 (1 votes) · EA · GW

We wanted to differentiate the website slightly from the eaglobal site while maintaining brand coherence, so we went for a slightly different shade of blue which feels a bit 'calmer'.

Not wedded to it though and may change back. Which do you prefer?

Comment by jamessnowden on New version of effectivealtruism.org · 2016-08-11T15:00:15.300Z · score: 0 (0 votes) · EA · GW

Thanks Michael - fixed now

Comment by jamessnowden on New version of effectivealtruism.org · 2016-08-11T14:52:57.827Z · score: 0 (0 votes) · EA · GW

Thanks Ian - agreed it doesn't look fantastic at the moment. We embedded it on the website at the last moment and it screwed with the formatting. We'll be working to improve how it looks over the next couple of weeks.

Comment by jamessnowden on Philanthropy Advisory Fellowship: Mental Health in Sub-Saharan Africa · 2016-07-25T14:11:16.553Z · score: 0 (0 votes) · EA · GW

Thanks Austen. This is really helpful feedback.

  1. Yes I agree. This is important but very hard to quantify. Of course the causal relationship goes both ways (poor physical health <-> poor mental health) but it's probable that mental health disorders have worse downstream effects than most physical health problems (economic productivity, stigma, impact on carers, physical health). We tried to capture these qualitatively at the beginning of the report but could have been clearer that they weren't included in the cost-effectiveness calculations.

  2. Thanks - this is really interesting. The $1000 figure came from here: http://dcp-3.org/sites/default/files/resources/15.%20Self%20Harm%20Pesticide%20Ban.pdf but that excludes morbidity. I'll check out the Eddleston paper.

  3. This is exciting

  4. Agreed, kind of. Room for more funding is a tricky one. In the long term, the treatment gap is so high that there's a LOT of room to scale. But we've also included StrongMinds' forecast expenditure based on current plans, as it may be relevant to their short-term ability to productively use more funding. In any case, the conclusion is the same: the organisation can absorb more funding in the short term, and in the long term there's huge room to scale.

  5. Should have been clearer. Fit with key themes was evaluated as: [Evidence generating] AND [Preventative child health OR Task-shifting model]

We'll be updating this before sharing it more widely. Would be great to chat more about pesticide bans if you're available?

Comment by jamessnowden on Philanthropy Advisory Fellowship: Mental Health in Sub-Saharan Africa · 2016-07-22T13:00:59.431Z · score: 2 (2 votes) · EA · GW

Eric - this is so great! Coincidentally, CEA has also been working on a very similar report which was completed last week. It's here: https://drive.google.com/open?id=0B551Ijx9v_RoZWlUUFVTYWZ6aTVCUDRDLTViVHVyQVpPWVNn

I've shot you an email. We should definitely discuss our conclusions.

Comment by jamessnowden on Is effective altruism overlooking human happiness and mental health? I argue it is. · 2016-07-02T11:39:13.116Z · score: 1 (1 votes) · EA · GW

1) Ah yes - thanks for pointing that out. It probably has limited external validity for the StrongMinds model though (which is psychosocial treatment alone for most patients, delivered by community health workers, with only the most serious cases referred to clinics for medication). The numbers come from the Chisholm (2015) WHO-CHOICE model. http://www.bmj.com/content/344/bmj.e609

2) Analysis is here https://docs.google.com/spreadsheets/d/1-lCC1zQHVZlJS8f9OfqhzcZTetHMxuMkW7nT75QDGhk/edit#gid=960072536

[This is quick and dirty but gives a rough indication of cost-effectiveness. The most uncertain assumption is the long-term impact of interpersonal group therapy on treated individuals 1-10 years down the line.]

3) On the 'bednets' sheet you can see that the output measure is cost per under-5 child death averted. DALYs are then back-calculated from this to get c.$100 [not in sheet]. Something like $3,500 / 50 years of life for each death averted = c.$70/DALY. Because they're only looking at deaths, it's YLL not YLD. I haven't seen a quantitative estimate of the total morbidity burden of malaria. One important consequence of surviving severe (cerebral) malaria is a much higher chance of getting epilepsy later in life http://www.ncbi.nlm.nih.gov/pubmed/25631856 although I suspect there are many others. Child health is really important!

Also - could you specify what you mean by mental health being 10-18 times worse than we think? Does this mean: (a) the DALY weighting of severe depression is 0.65 but should actually be 6.5 (so 6.5x worse than death, which seems implausible), or (b) life with severe depression is worth 0.35 of healthy life but should actually be 0.035 (so 1 year of healthy life is worth c.30 years of life with severe depression; maybe, but this seems like a lot)?
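To spell out the arithmetic behind point (3) and the two readings above (the figures are the rough ones quoted, used only for illustration):

```python
# Rough arithmetic behind point (3) and the two readings of "10-18 times worse".
cost_per_death_averted = 3_500   # c.$3,500 per under-5 death averted (rough figure above)
years_of_life_per_death = 50     # approximate years of life lost per under-5 death
print(cost_per_death_averted / years_of_life_per_death)  # 70.0 -> c.$70/DALY, all YLL

# Reading (a): the DALY weighting itself is ~10x higher (0.65 -> 6.5, i.e. "6.5x worse than death").
# Reading (b): a year with severe depression is worth ~10x less (0.35 -> 0.035 of a healthy year):
print(round(1 / 0.035))          # ~29 -> one healthy year worth c.30 years with severe depression
```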

Comment by jamessnowden on Is effective altruism overlooking human happiness and mental health? I argue it is. · 2016-06-30T10:00:22.248Z · score: 0 (0 votes) · EA · GW

Just to add to this. Acute schizophrenia is one of the worst health conditions on GBD13 DALY weightings (c.0.8). Severe depression is also one of the worst (c.0.65).

See http://www.thelancet.com/action/showFullTableImage?tableId=tbl2&pii=S2214109X15000698

So Michael - I agree it's very possible that mental health disorders are underweighted by DALY weightings because of the focusing illusion. But they are actually weighted quite highly at the moment. 10 years with severe depression is worth approximately 3.5 years of healthy life.

Comment by jamessnowden on Is effective altruism overlooking human happiness and mental health? I argue it is. · 2016-06-30T09:44:02.639Z · score: 0 (0 votes) · EA · GW

Hi Michael! As I said before, congrats on an interesting paper.

A few points on this comment:

1) DCP3 didn't have any cost-effectiveness figures for the StrongMinds intervention (interpersonal group therapy). Is the $1,000/DALY figure you mention related to primary care advice on alcohol use?

2) I'm currently writing a piece on mental health for a HNW donor and tried to model the cost-effectiveness of StrongMinds. I got c.$650/DALY, reducing to $400/DALY as the intervention scales. The biggest uncertainty in this estimate is the long-term effect of psychosocial treatment, as hardly any evidence exists. (I will post the calcs later - they're on another computer.)

3) GiveWell's estimate ignores YLD and is only based on under-5 child mortality, so it's entirely YLL. You can find the calculations here: http://www.givewell.org/international/technical/criteria/cost-effectiveness/cost-effectiveness-models

Comment by jamessnowden on Why effective altruism used to be like evidence-based medicine. But isn’t anymore · 2015-09-30T11:22:38.736Z · score: 0 (0 votes) · EA · GW

Thank you all for some great responses and apologies for my VERY late reply. This post was intended to 'test an idea/provoke a response' and there's some really good discussion here.

Comment by jamessnowden on Why effective altruism used to be like evidence-based medicine. But isn’t anymore · 2015-08-17T09:23:43.735Z · score: 1 (1 votes) · EA · GW

Bernadette,

Thank you for your very informative response. I must admit that my knowledge of EBM is much more limited than yours and is primarily Wikipedia-based.

The lines which particularly led me to believe that EBM favoured formal approaches rather than doctors' intuitions were:

"Although all medicine based on science has some degree of empirical support, EBM goes further, classifying evidence by its epistemologic strength and requiring that only the strongest types (coming from meta-analyses, systematic reviews, and randomized controlled trials) can yield strong recommendations; weaker types (such as from case-control studies) can yield only weak recommendations"

"Whether applied to medical education, decisions about individuals, guidelines and policies applied to populations, or administration of health services in general, evidence-based medicine advocates that to the greatest extent possible, decisions and policies should be based on evidence, not just the beliefs of practitioners, experts, or administrators."

Criticism of EBM: "Research tends to focus on populations, but individual persons can vary substantially from population norms, meaning that extrapolation of lessons learned may founder. Thus EBM applies to groups of people, but this should not preclude clinicians from using their personal experience in deciding how to treat each patient."

Perhaps the disagreement comes from my unintentional implication that the two camps were diametrically opposed to each other.

I agree that they are "both fundamentally important when you act in the real world" and that evidence-based giving / evidence-based medicine are not the last word on the matter and need to be supplemented by reason. At the same time, though, I think there is an important distinction between maximising expected utility and being averse to ambiguity.

For example, to the best of my knowledge, the tradeoff between donating to SCI ($1.23 per treatment) and Deworm the World Initiative ($0.50 per treatment) is that DWI has demonstrated higher cost-effectiveness but with a wider confidence interval (less of a track record). Interestingly, this actually sounds similar to your EGDT example. I therefore donate to SCI because I prefer to be confident in the effect. I think this distinction also applies to X-risk vs. development.

Comment by jamessnowden on Why effective altruism used to be like evidence-based medicine. But isn’t anymore · 2015-08-12T18:41:23.657Z · score: 2 (2 votes) · EA · GW

Thanks both for thoughtful replies and links.

I agree that it may be counterproductive to divide people who are answering the same questions into different camps and, on re-reading, that is how my post may come across. My more limited intention was to provide a (crude) framework through which we might be able to understand the disagreement.

I guess I had always interpreted EA (perhaps falsely) as making a stronger claim than 'we should be more reasonable when deciding how to do good'. In particular, I feel there used to be more of a focus on 'hard' rather than 'soft' evidence. This helps explain why EA used to advocate charitable giving over advocacy work / systemic change, for which hard evidence is necessarily more limited. It seems EA is now a broader church, and this is probably for the better, but in departing from a preference for hard evidence/RCTs it has lost its claim to being like evidence-based medicine.

The strength of this evolution is that EA seems to have absorbed thoughtful critiques such as Acemoglu's http://bostonreview.net/forum/logic-effective-altruism/daron-acemoglu-response-effective-altruism although I imagine it must have been quite annoying to be told that "if X offers some prospect of doing good, then EAs will do it" when we weren't at the time. Perhaps EA is growing so broad that the only real opponents it has left are anti-rationalists like John Gray (although the more opponents he has, the better).