Drowning children are rare 2019-05-28T17:33:39.308Z
Should Effective Altruism be at war with North Korea? 2019-05-05T01:44:47.210Z
Cash transfers are not necessarily wealth transfers 2017-12-01T23:35:09.534Z
Effective altruism is self-recommending 2017-04-23T06:11:20.903Z
GiveWell and the problem of partial funding 2017-02-14T10:29:57.250Z
Matching-donation fundraisers can be harmfully dishonest 2016-11-12T03:30:32.349Z


Comment by BenHoffman on COVID-19 Risk Assessment App Idea for Vetting and Discussion · 2020-03-04T16:26:29.396Z · EA · GW

I know of one related effort:

Comment by BenHoffman on Are there good EA projects for helping with COVID-19? · 2020-03-04T16:24:11.672Z · EA · GW

This project seems relevant: an app to track COVID-19. Especially given the lack of testing in e.g. the US (anecdotal evidence from my own social circle suggests it's already more prevalent than official statistics indicate), simple data-gathering seems valuable.

Comment by BenHoffman on Drowning children are rare · 2019-05-28T19:46:15.658Z · EA · GW

This isn’t a coherent rationalization for reasons covered in tedious detail in the longer series.

Comment by BenHoffman on Drowning children are rare · 2019-05-28T19:44:37.266Z · EA · GW

The series is long and boring precisely because it tried to address pretty much every claim like that at once. In this case GiveWell’s on record as not wanting their cost per life saved numbers to be held to the standard of “literally true” (one side of that disjunction) so I don’t see the point in going through that whole argument again.

Comment by BenHoffman on Should Effective Altruism be at war with North Korea? · 2019-05-06T04:33:22.034Z · EA · GW
Your perception that the EA community profits from the perception of utilitarianism is the opposite of the reality; utilitarianism is more likely to have a negative perception in popular and academic culture, and we have put nontrivial effort into arguing that EA is safe and obligatory for non-utilitarians. You're also ignoring the widely acknowledged academic literature on how axiology can differ from decision methods; sequence and cluster thinking are the latter.

I've talked with a few people who seemed under the impression that the EA orgs making recommendations were performing some sort of quantitative optimization to maximize some sort of goodness metric, and who used those recommendations on that basis because they themselves accepted some form of normative utilitarianism.

Comment by BenHoffman on Should Effective Altruism be at war with North Korea? · 2019-05-06T04:31:52.323Z · EA · GW
Academia has influence on policymakers when it can help them achieve their goals, but that doesn't mean it always has influence. There is a huge difference between the practical guidance given by regular IR scholars and groups such as FHI, and ivory tower moral philosophy which just tells people what moral beliefs they should have. The latter has no direct effect on government business, and probably very little indirect effect.
The QALY paradigm does not come from utilitarianism. It originated in economics and healthcare literature, to meet the goals of policymakers and funders who already had utilitarian-ish goals.

I agree with Keynes on this, you disagree, and neither of us has really offered much in the way of an argument or evidence, you've just asserted a contrary position.

Comment by BenHoffman on Should Effective Altruism be at war with North Korea? · 2019-05-06T04:22:13.458Z · EA · GW
The idea of making a compromise by coming up with a different version of utilitarianism is absurd. First, the vast majority of the human race does not care about moral theories; this is something that rarely makes a big dent in popular culture, let alone the world of policymakers and strategic power. Second, it makes no sense to try to compromise with people by solving every moral issue under the sun when instead you could pursue the much less Sisyphean task of merely compromising on those things that actually matter for the dispute at hand. Finally, it's not clear whether any of the disputes with North Korea can actually be cruxed to disagreements of moral theory.
The idea that compromising with North Korea is somehow neglected or unknown in the international relations and diplomacy communities is false. Compromise is ubiquitously recognized as an option in such discourse. And there are widely recognized barriers to it, which don't vanish just because you rephrase it in the language of utilitarianism and AGI.

So, no one should try this, it would be crazy to try, and besides we don't know whether it's possible because we haven't tried, and also competent people who know what they're doing are working on it already so we shouldn't reinvent the wheel? It doesn't seem like you tried to understand the argument before trying to criticize it, it seems like you're just throwing up a bunch of contradictory objections.

Comment by BenHoffman on Should Effective Altruism be at war with North Korea? · 2019-05-06T04:20:06.312Z · EA · GW
The most obvious way for EAs to fix the deterrence problem surrounding North Korea is to contribute to the mainstream discourse and efforts which already aim to improve the situation on the peninsula. While it's possible for alternative or backchannel efforts to be positive, they are far from being the "obvious" choice.
Backchannel diplomacy may be forbidden by the Logan Act, though it has not really been enforced in a long time.
The EA community currently lacks expertise and wisdom in international relations and diplomacy, and therefore does not currently have the ability to reliably improve these things on its own.

All these seem like straightforward objections to supporting things like GiveWell or the global development EA Fund (vs joining or supporting establishment aid orgs or states which have more competence in meddling in less powerful countries' internal affairs).

Comment by BenHoffman on [Link] Totalitarian ethical systems · 2019-05-06T04:06:59.606Z · EA · GW
Second, as long as your actions impact everything, a totalizing metric might be useful.

Wait, is your argument seriously "no one does this so it's a strawman, and also it makes total sense to do for many practical purposes"? What's really going on here?

Comment by BenHoffman on [Link] Totalitarian ethical systems · 2019-05-06T04:04:47.037Z · EA · GW
actual totalitarian governments have existed and they have not used such a metric (AFAIK).

Linear programming was invented in the Soviet Union to centrally plan production with a single computational optimization.
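Kantorovich-style planning of this kind can be sketched with a toy example (the products, resources, and numbers below are invented for illustration, not historical data): pick production levels that maximize a single value metric subject to resource constraints. For a two-variable problem the optimum sits at a vertex of the feasible region, so we can solve it by enumerating constraint intersections rather than using a library solver.

```python
from itertools import combinations

# Toy central-planning LP (illustrative numbers):
# maximize 3*steel + 2*grain
# subject to: 2*steel + 1*grain <= 100  (labor-hours)
#             1*steel + 3*grain <= 90   (machine-hours)
#             steel, grain >= 0

# Each constraint (a, b, c) means a*x + b*y <= c; the last two encode non-negativity.
constraints = [(2, 1, 100), (1, 3, 90), (-1, 0, 0), (0, -1, 0)]
value = lambda x, y: 3 * x + 2 * y

def intersect(c1, c2):
    """Solve the 2x2 system a1*x + b1*y = r1, a2*x + b2*y = r2."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel constraints: no vertex
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(x, y):
    return all(a * x + b * y <= c + 1e-9 for a, b, c in constraints)

# A bounded, feasible LP attains its optimum at a vertex, i.e. at the
# intersection of two active constraints.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) and feasible(*p)]
best = max(vertices, key=lambda p: value(*p))
print(best, value(*best))  # → (42.0, 16.0) 158.0
```

The same structure scales to thousands of goods and constraints, which is exactly why a planner might hope to run an economy as "a single computational optimization" over one totalizing metric.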

Comment by BenHoffman on [Link] Totalitarian ethical systems · 2019-05-06T04:01:52.383Z · EA · GW
The idea that EAs use a single metric measuring all global welfare in cause prioritization is incorrect, and raises questions about this guy's familiarity with reports from sources like GiveWell, ACE, and amateur stuff that gets posted around here.

Some claim to, others don't.

I worked at GiveWell / Open Philanthropy Project for a year, and wrote up some of those reports. GiveWell explicitly does not score all recommendations on a unified metric (I linked to the "Sequence vs Cluster Thinking" post, which makes this quite clear). But at the time, there were four paintings on the wall of the GiveWell office illustrating the four core GiveWell values, and one was titled "Utilitarianism," which is distinguished from other moral philosophies (and in particular from the broader class "consequentialism") by the claim that you should use a single totalizing metric to assess right action.

Comment by BenHoffman on Leverage Research: reviewing the basic facts · 2018-08-05T01:14:25.092Z · EA · GW

"Compared to a Ponzi scheme" seems like a pretty unfortunate compression of what I actually wrote. Better would be to say that I claimed that a large share of ventures, including a large subset of EA, and the US government, have substantial structural similarities to Ponzi schemes.

Maybe my criticism would have been better received if I'd left out the part that seems to be hard for people to understand; but then it would have been different and less important criticism.

Comment by BenHoffman on Effective altruism is self-recommending · 2018-07-25T12:52:42.431Z · EA · GW

retry the original case with double jeopardy

This sort of framing leads to publication bias. We want double jeopardy! This isn't a criminal trial, where the coercive power of a massive state is being pitted against an individual's limited ability to defend themselves. This is an intervention people are spending loads of money on, and it's entirely appropriate to continue checking whether the intervention works as well as we thought.

Comment by BenHoffman on Effective altruism is self-recommending · 2018-07-25T12:50:37.385Z · EA · GW

As I understand the linked page, it's mostly about retroactive rather than prospective observational studies, and usually for individual rather than population-level interventions. A plan to initiate mass bednet distribution on a national scale is pretty substantially different from that, and doesn't suffer from the same kind of confounding.

Of course it's mathematically possible that the data is so noisy relative to the effect size of the supposedly most cost-effective global health intervention out there, that we shouldn't expect the impact of the intervention to show up. But, I haven't seen evidence that anyone at GiveWell actually did the relevant calculation to check whether this was the case for bednet distributions.

Comment by BenHoffman on Effective altruism is self-recommending · 2018-03-29T03:34:34.970Z · EA · GW

If they did the followups and malaria rates held stable or increased, you would not then believe that the bednets do not work; if it takes randomized trials to justify spending on bednets, it cannot then take only surveys to justify not spending on bed nets, as the causal question is identical.

It's hard for me to believe that the effect of bednets is large enough to show an effect in RCTs, but not large enough to show up more often than not as a result of mass distribution of bednets. If absence of this evidence really isn't strong evidence of no effect, it should be possible to show it with specific numbers and not just handwaving about noise. And I'd expect that to be mentioned in the top-level summary on bed net interventions, not buried in a supplemental page.
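The "specific numbers" demanded above amount to a power calculation: given an assumed effect size and an assumed level of noise in malaria measurements, how often should the effect of mass bednet distribution show up? A minimal sketch (all numbers invented for illustration, not GiveWell's or AMF's data) using a one-sided z-test:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def detection_power(effect, noise_sd, n_regions, z_crit=1.645):
    """Power of a one-sided z-test (5% level by default) to detect a drop
    of `effect` in average malaria incidence across n_regions, where each
    region's measurement has standard deviation `noise_sd`."""
    se = noise_sd / math.sqrt(n_regions)
    return normal_cdf(effect / se - z_crit)

# Invented numbers: a 20-percentage-point drop measured against 30-point
# year-to-year noise per region. With a handful of regions the effect
# usually won't "show up"; with many, it almost always should.
print(round(detection_power(0.20, 0.30, 4), 2))   # → 0.38
print(round(detection_power(0.20, 0.30, 40), 2))  # → 0.99
```

If a calculation like this (with real effect sizes and real measurement noise) gave low power, that would vindicate the "too noisy to see" claim; if it gave high power, absence of the effect in follow-up data would indeed be evidence of absence.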

Comment by BenHoffman on Cash transfers are not necessarily wealth transfers · 2017-12-05T18:31:01.500Z · EA · GW

One simple example:

More generally, things like the profusion of makework designed to facially resemble teaching, instead of optimizing for outcomes.

Comment by BenHoffman on Cash transfers are not necessarily wealth transfers · 2017-12-02T21:58:43.720Z · EA · GW

We should also expect this to mean that countries such as Australia and China, which heavily weight a national exam system when advancing students at crucial stages, will have less corrupt educational systems than countries like the US, which weight locally assessed factors like grades heavily.

(Of course, there can be massive downsides to standardization as well.)

Comment by BenHoffman on Cash transfers are not necessarily wealth transfers · 2017-12-02T21:56:36.767Z · EA · GW

I think the thing to do is try to avoid thinking of "bureaucracy" as a homogeneous quantity, and instead attend to the details of institutions involved. Of course, as a foreigner with respect to every country but one's own, this is going to be difficult to evaluate when giving abroad. This is one of the many reasons why giving effectively on a global scale is hard, and why it's so important to have information feedback of the kind GiveDirectly is working on. Long-term follow-up seems really important too, and even then there's going to be some substantial justified uncertainty.

Comment by BenHoffman on Cash transfers are not necessarily wealth transfers · 2017-12-02T21:53:42.271Z · EA · GW

There's an implied heuristic that if someone makes an investment that gives them an income stream worth $X, net of costs, then the real wealth of their society increases by at least $X. On this basis, you might assume that if you give a poor person cash, and they use it to buy education, which increases the present value of their children's earnings by $X, then you've thereby added $X of real wealth to their country.

I am saying that we should doubt the premise at least somewhat.

Comment by BenHoffman on Cash transfers are not necessarily wealth transfers · 2017-12-02T21:50:20.942Z · EA · GW

For some balance, see Kelsey Piper's comments here - it looks like empirically, the picture we get from GiveDirectly is encouraging.

Comment by BenHoffman on In defence of epistemic modesty · 2017-11-09T00:18:59.199Z · EA · GW

To support a claim that this applies in "virtually all" cases, I'd want to see more engagement with pragmatic problems applying modesty, including:

  • Identifying experts is far from free epistemically.
  • Epistemic majoritarianism in practice assumes that no one else is an epistemic majoritarian. Your first guess should be that nearly everyone else is iff you are, in which case you should expect information cascades due to the occasional overconfident person. If other people are not majoritarians because they're too stupid to notice the considerations in its favor, then it seems a bit silly to defer to them. On the other hand, if they're not majoritarians because they're smarter than you are... well, you mention this, but this objection seems to me to be obviously fatal, and the only thing left is to explain why the wisdom of the majority disagrees with the epistemically modest.
  • The vast majority of information available about other people's opinions does not differentiate clearly between their impressions and their beliefs after adjusting for their knowledge about others' beliefs.
  • People lie to maintain socially desirable opinions.
  • Control over others' opinions is a valuable social commodity, and apparent expertise gives one some control.

In particular, the last two factors (different sorts of dishonesty) are much bigger deals if most uninformed people copy the opinions of apparently informed people instead of saying "I have no idea".

Overall, I agree that when you have a verified-independent, verified-honest opinion from a peer, one should weight it equally to one's own, and defer to one's verified epistemic superiors - but this has little to do with real life, in which we rarely have that opportunity!

Comment by BenHoffman on [deleted post] 2017-05-21T23:29:59.139Z

Our prior strongly punishes MIRI. While the mean of its evidence distribution is 2,053,690,000 HEWALYs/$10,000, the posterior mean is only 180.8 HEWALYs/$10,000. If we set the prior scale parameter to larger than about 1.09, the posterior estimate for MIRI is greater than 1038 HEWALYs/$10,000, thus beating 80,000 Hours.

This suggests that it might be good in the long run to have a process that learns what prior is appropriate, e.g. by going back and seeing what prior would have best predicted previous years' impact.
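The shrinkage in the quoted numbers can be sketched in miniature (a hedged illustration with invented parameters, not the model actually used for those estimates): if both the prior and the evidence distribution are lognormal, the update is conjugate in log-space, so a huge but wildly uncertain evidence mean gets pulled almost entirely back toward the prior, and widening the prior scale parameter raises the posterior mean, which is exactly the sensitivity described above.

```python
import math

def lognormal_posterior_mean(prior_log_median, prior_log_sd,
                             evidence_log_median, evidence_log_sd):
    """Posterior mean when a lognormal prior meets lognormal evidence.

    In log-space this is a normal-normal conjugate update: the posterior
    log-mean is the precision-weighted average of the two log-medians.
    """
    w_prior = 1 / prior_log_sd ** 2
    w_evid = 1 / evidence_log_sd ** 2
    mu_post = (w_prior * prior_log_median
               + w_evid * evidence_log_median) / (w_prior + w_evid)
    var_post = 1 / (w_prior + w_evid)
    return math.exp(mu_post + var_post / 2)  # mean of a lognormal

# Invented numbers: evidence says ~2e9 units per $10k but with enormous
# uncertainty (log-sd 10); the prior is centered at 1 with log-sd 2 or 4.
evidence = 2.05e9
narrow = lognormal_posterior_mean(0.0, 2.0, math.log(evidence), 10.0)
wide = lognormal_posterior_mean(0.0, 4.0, math.log(evidence), 10.0)

assert narrow < 1e6 < evidence  # heavy shrinkage toward the prior
assert wide > narrow            # a wider prior punishes the estimate less
```

"Learning what prior is appropriate" would then amount to fitting prior_log_sd against how well past years' posterior estimates predicted realized impact.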

Comment by BenHoffman on [deleted post] 2017-05-21T23:26:35.988Z

Regrettably, we were not able to choose shortlisted organisations as planned. My original intention was that we would choose organisations in a systematic, principled way, shortlisting those which had highest expected impact given our evidence by the time of the shortlist deadline. This proved too difficult, however, so we resorted to choosing the shortlist based on a mixture of our hunches about expected impact and the intellectual value of finding out more about an organisation and comparing it to the others.


Later, we realised that understanding the impact of the Good Food Institute was too difficult, so we replaced it with Animal Charity Evaluators on our shortlist. Animal Charity Evaluators finds and advocates for highly effective opportunities to improve the lives of animals.

If quantitative models were used for these decisions I'd be interested in seeing them.

Comment by BenHoffman on Fact checking comparison between trachoma surgeries and guide dogs · 2017-05-13T02:22:40.563Z · EA · GW

On the ableism point, my best guess is that the right response is to figure out the substance of the criticism. If we disagree, we should admit that openly, and forgo the support of people who do not in fact agree with us. If we agree, then we should account for the criticism and adjust both our beliefs and statements. Directly optimizing on avoiding adverse perceptions seems like it would lead to a distorted picture of what we are about.

Comment by BenHoffman on Fact checking comparison between trachoma surgeries and guide dogs · 2017-05-13T02:18:48.371Z · EA · GW

If I try to steelman the argument, it comes out something like:

Some people, when they hear about the guide dog - trachoma surgery contrast, will take the point to be that ameliorating a disability is intrinsically less valuable than preventing or curing an impairment. (In other words, that helping people live fulfilling lives while blind is necessarily a less worthy cause than "fixing" them.) Since this is not in fact the intended point, a comparison of more directly comparable interventions would be preferable, if available.

Comment by BenHoffman on Effective altruism is self-recommending · 2017-05-08T18:33:38.057Z · EA · GW

I imagine this has been stressful for all sides, and I do very much appreciate you continuing to engage anyway! I'm looking forward to seeing what happens in the future.

Comment by BenHoffman on A mental health resource for EA community · 2017-05-08T16:13:54.967Z · EA · GW

Thanks for writing this! It's really helpful to have the basics of what the medical community knows.

I've been trying to figure out how to help in ways that respect neurodiversity. Psychosis and mania, like other mental conditions, aren't just the result of some exogenous force - they're the brain doing too little or too much of some particular things it was already doing.

So someone going through a psychotic episode might at times have delusions that seem to their friends to be genuinely poetic, insightful, and important, and this impression might be right. And yet, they're still having trouble tracking what's real and what's just a thought they had, worse at caring for themselves, and really need to eat and get a good night's sleep and friends to help them remember to do this.

Comment by BenHoffman on Effective altruism is self-recommending · 2017-05-08T15:59:55.820Z · EA · GW


I think that in a writeup for the two funds Nick is managing, CEA has done a fine job making it clear what's going on. The launch post here on the Forum was also very clear.

My worry is that this isn't at all what someone attracted by EA's public image would be expecting, since so much of the material is about experimental validation and audit.

I think that there's an opportunity here to figure out how to effectively pitch far-future stuff directly, instead of grafting it onto existing global-poverty messaging. There's a potential pitch centered around: "Future people are morally relevant, neglected, and extremely numerous. Saving the world isn't just a high-minded phrase - here are some specific ways you could steer the course of the future a lot." A lot of Nick Bostrom's early public writing is like this, and a lot of people were persuaded by this sort of thing to try to do something about x-risk. I think there's a lot of potential value in figuring out how to bring more of those sorts of people together, and - when there are promising things in that domain to fund - help them coordinate to fund those things.

In the meantime, it does make sense to offer a fund oriented around the far future, since many EAs do share those preferences. I'm one of them, and think that Nick's first grant was a promising one. It just seems off to me to aggressively market it as an obvious, natural thing for someone who's just been through the GWWC or CEA intro material to put money into. I suspect that many of them would have valid objections that are being rhetorically steamrollered, and a strategy of explicit persuasion has a better chance of actually encountering those objections, and maybe learning from them.

I recognize that I'm recommending a substantial strategy change, and it would be entirely appropriate for CEA to take a while to think about it.

Comment by BenHoffman on Effective altruism is self-recommending · 2017-05-08T15:45:42.664Z · EA · GW

I don't see why Holden also couldn't have a supportive role where his feedback and different perspectives can help Open AI correct for aspects they've overlooked.

I agree this can be the case, and that in the optimistic scenario this is a large part of OpenAI's motivation.

Comment by BenHoffman on Effective altruism is self-recommending · 2017-05-08T15:41:15.172Z · EA · GW

Thanks! On a first read, this seems pretty clear and much more like the sort of thing I'd hope to see in introductory material.

Comment by BenHoffman on Effective altruism is self-recommending · 2017-05-06T20:28:31.092Z · EA · GW

There was a recent post by 80,000 hours (which annoyingly I now can't find) describing how their founders' approaches to doing good have evolved and updated over the years. Is that something you'd like to see more of?

Yes! More clear descriptions of how people have changed their mind would be great. I think it's especially important to be able to identify which things we'd hoped would go well but didn't pan out - and then go back and make sure we're not still implicitly pitching that hope.

Comment by BenHoffman on Where should anti-paternalists donate? · 2017-05-05T20:45:53.626Z · EA · GW

But there are also factors pushing the other way - e.g. biases about spending on personal health, positive externalities etc - that counterbalance a presumption against paternalism.

It's not obvious to me that the "near" bias about one's own health is generically worse than our "far" bias about what to do about the health of people far away. For instance, we might have a bias towards action that's not shared by, e.g., the children who feel sick after their worm chemo, or who get bitten by mosquitoes through their supposedly mosquito-proof bednets. (I'm not sure how bad either of these problems is relative to the benefits, and that's the problem - we really don't know. I'll note that Living Goods does sell some deworming pills, so at least some people in poor countries think it's in their interest to take them.)

It's also not obvious that positive externalities are generically more likely with paternalistic interventions. For instance, in a recent Reddit AMA, GiveDirectly basic income recipients reported that there was much less social conflict in their community once people started receiving basic income - they started imposing fewer costs on each other once they were more secure in meeting their basic needs.

It does seem to me like each of these considerations - if it points in the right direction for any given comparison - could contribute to overcoming the paternalism objection.

Comment by BenHoffman on Where should anti-paternalists donate? · 2017-05-05T20:33:59.724Z · EA · GW

It sounds like we might be coming close to agreement. The main thing I think is important here is taking seriously the notion that paternalism is evidence about the other things we care about, and thus an important instrumental proxy goal, not just something we have intrinsic preferences about. More generally, the thing I'm pushing back against is treating every moral consideration as though it were purely an intrinsic value to be weighed against other intrinsic values.

I see people with a broadly utilitarian outlook doing this a lot, perhaps because people from other moral perspectives don't have a lot of practice grounding their moral intuitions in a way that is persuasive to utilitarians. Autonomy in particular is something where we need to distinguish purely intrinsic considerations (e.g. factory farmed animals are unhappy because they have little physical autonomy) from instrumental pragmatic considerations (e.g. interventions that give poor people more autonomy preserve information by letting them use local knowledge that we do not have, while paternalistic interventions overwrite local information).

Thus, we should think about requiring higher impact for paternalistic interventions as building in a margin for error, not just outweighing the anti-paternalism intuition. If a paternalistic intervention has strong evidence of a large benefit, it makes sense to describe it as overcoming the paternalism objection, but not rebutting it - we should still be skeptical relative to a nonpaternalistic intervention with the same evidence; it's just that sometimes we should intervene anyway.

Comment by BenHoffman on Where should anti-paternalists donate? · 2017-05-05T15:42:53.828Z · EA · GW

You're assuming the premise here a bit - that the data collected don't leave out important negative outcomes. In the particular cases you mentioned (tobacco taxes, mandatory seatbelt legislation, smallpox eradication, ORT, micronutrient fortification) my sense is that in most cases the benefits have been very strong, strong enough to outweigh a skeptical prior on paternalist interventions. But that doesn't show that we shouldn't have the skeptical prior in the first place. Seeing Like A State shows some failures; we should think of those too.

Comment by BenHoffman on Where should anti-paternalists donate? · 2017-05-05T09:08:45.919Z · EA · GW

Consider paternalism as a proxy for model error rather than an intrinsic dispreference. We should wonder whether the things we do to people are more likely to cause hidden harm, or to lack their supposed benefits, than the things they do for themselves.

Deworming is an especially stark example. The mass drug administration program is to go to schools and force all the children, whether sick or healthy, to swallow giant poisonous pills that give them bellyaches, because we hope killing the worms in this way buys big life outcome improvements. GiveWell estimates the effect at about 1.5% of what studies say, but EV is still high. This could involve a lot of unnecessary harm too via unnecessary treatments.

By contrast, the less paternalistic Living Goods (a recent GiveWell "standout charity") sells deworming pills (at or near cost) so we should expect better targeting to kids sick with worms, and repeat business is more likely if the pills seem helpful.

I wrote a bit about this here:

Comment by BenHoffman on Effective altruism is self-recommending · 2017-05-01T23:56:50.513Z · EA · GW

EffectiveAltruism.org's Introduction to Effective Altruism allocates most of its words to what's effectively an explanation of global poverty EA: a focus on empirical validation, explicit measurement and quantification, and power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal charity EA or x-risk EA.

Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.

If you click "Donate Effectively," you end up on the EA Funds site, which presents the four Fund categories as generic products you might want to allocate a portfolio between. Two of the four products are in effect just letting Nick Beckstead do what he thinks is sensible with the money, which as I've said above is a good idea but a very large leap from the anti-Playpump pitch. "Trust friendly, sensible-seeming agents and empower them to do what they think is sensible" is a very, very different method than "check everything because it's easy to spend money on nice-sounding things of no value."

The GWWC site and Facebook page have a similar dynamic. I mentioned in this post that the page What We Can Achieve mainly references global poverty (though I've been advised that this is an old page pending an update). The GWWC Facebook page seems like it's mostly global poverty stuff, and some promotion of other CEA brands.

It's very plausible to me that in-person EA groups often don't have this problem because individuals don't feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.

Comment by BenHoffman on [deleted post] 2017-05-01T01:11:05.252Z

SlateStarScratchpad claims (with more engagement here) that the literature mainly shows that parents who like hitting their kids or beat them severely do poorly, and that if you control for things like heredity or harsh beatings it’s not obvious that mild corporal punishment is more harmful than other common punishments.

My best guess is that children are very commonly abused (and not just by parents - also by schools), but I don't think the line between physical and nonphysical punishments is all that helpful for understanding the true extent of this.

Comment by BenHoffman on Effective altruism is self-recommending · 2017-04-28T20:22:28.631Z · EA · GW

I think 2016 EAG was more balanced. But I don't think the problem in 2015 was apparent lack of balance per se. It might have been difficult for the EAG organizers to sincerely match the conference programming to promotional EA messaging, since their true preferences were consistent with the extent to which things like AI risk were centered.

The problem is that to the extent to which EA works to maintain a smooth, homogeneous, uncontroversial, technocratic public image, it doesn't match the heterogeneous emphases, methods, and preferences of actual core EAs and EA organizations. This is necessarily going to require some amount of insincerity or disconnect between initial marketing and reality, and represents a substantial cost to that marketing strategy.

Comment by BenHoffman on Effective altruism is self-recommending · 2017-04-28T02:48:27.243Z · EA · GW

The featured event was the AI risk thing. My recollection is that there was nothing else scheduled at that time so everyone could go to it. That doesn't mean there wasn't lots of other content (there was), nor do I think centering AI risk was necessarily a bad thing, but I stand by my description.

Comment by BenHoffman on Effective altruism is self-recommending · 2017-04-27T06:01:47.188Z · EA · GW

I also originally saw the reply attributed to a different comment on mobile.

Comment by BenHoffman on Update on Effective Altruism Funds · 2017-04-27T01:24:10.144Z · EA · GW

I would guess that $300k simply isn't worth Elie's time to distribute in small grants, given the enormous funds available via GoodVentures and even GiveWell direct and directed donations.

This is consistent with the optionality story in the beta launch post:

If the EA Funds raises little money, they can spend little additional time allocating the EA Funds’ money but still utilize their deep subject-matter expertise in making the allocation. This reduces the chance that the EA Funds causes fund managers to use their time ineffectively and it means that the lower bound of the quality of the donations is likely to be high enough to justify donations even without knowing the eventual size of the fund.

However, I do think this suggests that - to the extent to which GiveWell is already a known and trusted institution - for global poverty in particular it's more important to get the fund manager with the most unique relevant expertise than a fund manager with the most expertise.

Comment by BenHoffman on Update on Effective Altruism Funds · 2017-04-27T01:21:08.867Z · EA · GW

On the other hand, it does seem worthwhile to funnel money through different intermediaries sometimes if only to independently confirm that the obvious things are obvious, and we probably don't want to advocate contrarianism for contrarianism's sake. If Elie had given the money elsewhere, that would have been strong evidence that the other thing was valuable and underfunded relative to GW top charities (and also worrying evidence about GiveWell's ability to implement its founders' values). Since he didn't, that's at least weak evidence that AMF is the best global poverty funding opportunity we know about.

Overall I think it's good that Elie didn't feel the need to justify his participation by doing a bunch of makework. That said, channeling the money through Elie probably still gives a false impression of additional optimizing power, but I think that should have been our strong prior anyhow.

Comment by BenHoffman on Update on Effective Altruism Funds · 2017-04-27T01:13:59.881Z · EA · GW

Or to simply say "for global poverty, we can't do better than GiveWell so we recommend you just give them the money".

Comment by BenHoffman on Update on Effective Altruism Funds · 2017-04-27T01:09:16.661Z · EA · GW

I also dislike that you emphasize that some people "expressed confusion at your endorsement of EA Funds". Some people may have felt that way, but your choice of wording both downplays the seriousness of some people's disagreements with EA Funds and implies that critics need to figure out something that others have already settled (which itself socially implies that they're less competent than those who aren't confused).

I definitely perceived the strong, exclusive endorsement and promotion EA Funds got as a direct contradiction of what I'd been told earlier, privately and publicly - that this was an MVP experiment to gauge interest and feasibility, to be reevaluated after three months. If I'm confused, I'm confused about how this wasn't just a lie. My initial response was "HOW IS THIS OK???" (verbatim quote). I'm willing to be persuaded, of course. But, barring an actual resolution of the issue, simply describing this as confusion is a pretty substantial understatement.

ETA: I'm happy with the update to the OP and don't think I have any unresolved complaint on this particular wording issue.

Comment by BenHoffman on Introducing the EA Funds · 2017-04-27T00:58:05.044Z · EA · GW

Tell me about Nick's track record? I like Nick and I approve of his granting so far, but "strong track record" isn't at all how I'd describe the case for giving him unrestricted funds to grant; it seems entirely speculative, based on shared values and judgment. If Nick has a verified track record of grants turning out well, I'd love to see it, and it should probably be in the promotional material for EA Funds.

Comment by BenHoffman on Update on Effective Altruism Funds · 2017-04-27T00:54:08.816Z · EA · GW

Will's post introducing the EA funds is the 4th most upvoted post of all time on this forum.

Generally I upvote a post because I am glad that the post has been posted in this venue, not because I am happy about the facts being reported. Your comment has reminded me to upvote Will's post, because I'm glad he posted it (and likewise Tara's) - thanks!

Comment by BenHoffman on Effective altruism is self-recommending · 2017-04-25T04:55:56.505Z · EA · GW

Yep! I think it's fine for them to exist in principle, but the aggressive marketing of them is problematic. I've seen attempts to correct specific problems that are pointed out e.g. exaggerated claims, but there are so many things pointing in the same direction that it really seems like a mindset problem.

I tried to write more directly about the mindset problem here:

Comment by BenHoffman on Effective altruism is self-recommending · 2017-04-24T14:29:54.389Z · EA · GW

If someone thinks concentrated decision-making is better, they should be overtly making the case for concentrated decision-making. When I talk with EA leaders about this they generally do not try to sell me on concentrated decision-making; they just note that everyone seems eager to trust them, so they may as well try to put that resource to good use. Often they say they'd be happy if alternatives emerged.

Comment by BenHoffman on Effective altruism is self-recommending · 2017-04-24T14:26:42.831Z · EA · GW

It also seems to me that the time to complain about this sort of process is while the results are still plausibly good. If we wait until things are clearly bad, it will be too late to recover the relevant social trust. This approach does involve some complaining about bad governance that is being used to good ends, but the better the ends, the more compatible they should be with good governance.

Comment by BenHoffman on Effective altruism is self-recommending · 2017-04-24T14:23:41.110Z · EA · GW

I think sufficient evidence hasn't been presented, in large part because the argument has been tacit rather than overt.