GiveWell's Top Charities Are Increasingly Hard to Beat

post by Peter_Hurford · 2019-07-10T00:34:52.510Z · score: 40 (20 votes) · EA · GW · 8 comments

This is a link post for

Open Philanthropy reviews its “near-termist human-centric” grantmaking (i.e., criminal justice reform, immigration policy, land use reform, macroeconomic stabilization policy, and scientific research) and finds that many of these grants carry a substantial risk of failing to exceed the cost-effectiveness of GiveWell's top charities.

It appears that over the past several years, the estimated cost-effectiveness of GiveWell's top charities (as a class) has increased more than expected, whereas the “near-termist human-centric OpenPhil grants” (as a class) have so far produced fewer hits at a similar or better level of cost-effectiveness than expected. There are also notes comparing the robustness of these estimates, as well as additional considerations for why non-GiveWell near-termist human-centric grantmaking is valuable.

OpenPhil says they're "planning to write more at a later date about the cost-effectiveness of [their] 'long-termist' and animal-inclusive grantmaking and the implications for our future resource allocation," which I'm especially excited to see next.


Comments sorted by top scores.

comment by Milan_Griffes · 2019-07-10T01:36:27.938Z · score: 4 (2 votes) · EA · GW

Aren't corporate cage-free campaigns [EA · GW] a really big hit?

comment by Peter_Hurford · 2019-07-10T02:35:17.699Z · score: 12 (8 votes) · EA · GW

Corporate cage-free campaigns aren't considered among the “near-termist human-centric OpenPhil grants”... they're instead in a separate "animal-inclusive" granting bucket that will be evaluated later.

comment by Milan_Griffes · 2019-07-10T14:37:27.230Z · score: 2 (1 votes) · EA · GW

Oh, right! (I skimmed past the "human-centric" part.)

comment by Evan_Gaensbauer · 2019-07-11T06:54:18.402Z · score: 1 (5 votes) · EA · GW

Do you know if these take into account criticisms of GiveWell's methodology for estimating the effectiveness of their recommended charities?

comment by Peter_Hurford · 2019-07-11T14:39:09.556Z · score: 11 (3 votes) · EA · GW

Can you elaborate more about what you mean?

comment by Evan_Gaensbauer · 2019-07-14T18:29:45.415Z · score: 1 (1 votes) · EA · GW

This is a recent criticism of GiveWell that I didn't see responded to or accounted for in any clear way in the linked post. I haven't read the whole thing closely yet, but no section appears to address the considerations raised in that post. If sound, these criticisms, once incorporated into the analysis, might make GiveWell's top-recommended charities look more 'beatable'. I was wondering if I was missing something in the post, and whether Open Phil's analysis accounts for that possibility.

comment by David_Moss · 2019-07-14T19:21:11.808Z · score: 14 (6 votes) · EA · GW

I'm not sure this is well-described as a "criticism of Givewell's methodology for estimating the effectiveness of their recommended charities." The problem seems to apply to cost-effectiveness estimates more broadly, and the author explicitly says, "Due to my familiarity with GiveWell, I mention it in a lot of examples. I don’t think the issues I raise in this post should be more concerning for GiveWell than other organizations associated with the EA movement." As such, I don't think these criticisms would make GiveWell's recommendations look more 'beatable.' Indeed, one might even think that it's partly because of considerations like those cited in the article you link that GiveWell's top charities remain hard to beat, while other areas, which prima facie seemed extremely promising, have turned out to be not so promising.