Comment by benhoffman on Drowning children are rare · 2019-05-28T19:46:15.658Z · score: -32 (14 votes) · EA · GW

This isn’t a coherent rationalization for reasons covered in tedious detail in the longer series.

Comment by benhoffman on Drowning children are rare · 2019-05-28T19:44:37.266Z · score: 5 (5 votes) · EA · GW

The series is long and boring precisely because it tried to address pretty much every claim like that at once. In this case GiveWell's on record as not wanting their cost-per-life-saved numbers to be held to the standard of "literally true" (one side of that disjunction), so I don't see the point in going through that whole argument again.

Drowning children are rare

2019-05-28T17:33:39.308Z · score: -2 (30 votes)
Comment by benhoffman on Should Effective Altruism be at war with North Korea? · 2019-05-06T04:33:22.034Z · score: 1 (1 votes) · EA · GW
Your perception that the EA community profits from the perception of utilitarianism is the opposite of the reality; utilitarianism is more likely to have a negative perception in popular and academic culture, and we have put nontrivial effort into arguing that EA is safe and obligatory for non-utilitarians. You're also ignoring the widely acknowledged academic literature on how axiology can differ from decision methods; sequence and cluster thinking are the latter.

I've talked with a few people who seemed under the impression that the EA orgs making recommendations were performing some sort of quantitative optimization to maximize some sort of goodness metric, and who used those recommendations on that basis, because they themselves accepted some form of normative utilitarianism.

Comment by benhoffman on Should Effective Altruism be at war with North Korea? · 2019-05-06T04:31:52.323Z · score: 0 (2 votes) · EA · GW
Academia has influence on policymakers when it can help them achieve their goals, that doesn't mean it always has influence. There is a huge difference between the practical guidance given by regular IR scholars and groups such as FHI, and ivory tower moral philosophy which just tells people what moral beliefs they should have. The latter has no direct effect on government business, and probably very little indirect effect.
The QALY paradigm does not come from utilitarianism. It originated in economics and healthcare literature, to meet the goals of policymakers and funders who already had utilitarian-ish goals.

I agree with Keynes on this, you disagree, and neither of us has really offered much in the way of an argument or evidence; you've just asserted a contrary position.

Comment by benhoffman on Should Effective Altruism be at war with North Korea? · 2019-05-06T04:22:13.458Z · score: -1 (3 votes) · EA · GW
The idea of making a compromise by coming up with a different version of utilitarianism is absurd. First, the vast majority of the human race does not care about moral theories, this is something that rarely makes a big dent in popular culture let alone the world of policymakers and strategic power. Second, it makes no sense to try to compromise with people by solving every moral issue under the sun when instead you could pursue the much less Sisyphean task of merely compromising on those things that actually matter for the dispute at hand. Finally, it's not clear if any of the disputes with North Korea can actually be cruxed to disagreements of moral theory.
The idea that compromising with North Korea is somehow neglected or unknown in the international relations and diplomacy communities is false. Compromise is ubiquitously recognized as an option in such discourse. And there are widely recognized barriers to it, which don't vanish just because you rephrase it in the language of utilitarianism and AGI.

So, no one should try this, it would be crazy to try, and besides we don't know whether it's possible because we haven't tried, and also competent people who know what they're doing are working on it already so we shouldn't reinvent the wheel? It doesn't seem like you tried to understand the argument before trying to criticize it, it seems like you're just throwing up a bunch of contradictory objections.

Comment by benhoffman on Should Effective Altruism be at war with North Korea? · 2019-05-06T04:20:06.312Z · score: 1 (1 votes) · EA · GW
The most obvious way for EAs to fix the deterrence problem surrounding North Korea is to contribute to the mainstream discourse and efforts which already aim to improve the situation on the peninsula. While it's possible for alternative or backchannel efforts to be positive, they are far from being the "obvious" choice.
Backchannel diplomacy may be forbidden by the Logan Act, though it has not really been enforced in a long time.
The EA community currently lacks expertise and wisdom in international relations and diplomacy, and therefore does not currently have the ability to reliably improve these things on its own.

All these seem like straightforward objections to supporting things like GiveWell or the global development EA Fund (vs joining or supporting establishment aid orgs or states which have more competence in meddling in less powerful countries' internal affairs).

Comment by benhoffman on [Link] Totalitarian ethical systems · 2019-05-06T04:06:59.606Z · score: -1 (3 votes) · EA · GW
Second, as long as your actions impact everything, a totalizing metric might be useful.

Wait, is your argument seriously "no one does this so it's a strawman, and also it makes total sense to do for many practical purposes"? What's really going on here?

Comment by benhoffman on [Link] Totalitarian ethical systems · 2019-05-06T04:04:47.037Z · score: -1 (3 votes) · EA · GW
actual totalitarian governments have existed and they have not used such a metric (AFAIK).

Linear programming was invented in the Soviet Union to centrally plan production with a single computational optimization.
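
As a toy illustration (hypothetical numbers, and modern scipy rather than anything Kantorovich had available), here is what "a single computational optimization" for production planning looks like:

```python
from scipy.optimize import linprog

# Toy central-planning LP, illustrative numbers only: choose outputs of
# two goods to maximize plan value subject to labor and steel limits.
# linprog minimizes, so the objective is negated.
value = [-3.0, -5.0]                # plan value per unit of goods A and B
resources = [[1.0, 2.0],            # labor-hours needed per unit
             [4.0, 3.0]]            # steel needed per unit
limits = [100.0, 240.0]             # total labor and steel available

plan = linprog(c=value, A_ub=resources, b_ub=limits)
print(plan.x)  # [36. 32.]: the whole production plan from one optimization
```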

Comment by benhoffman on [Link] Totalitarian ethical systems · 2019-05-06T04:01:52.383Z · score: -2 (4 votes) · EA · GW
The idea that EAs use a single metric measuring all global welfare in cause prioritization is incorrect, and raises questions about this guy's familiarity with reports from sources like Givewell, ACE, and amateur stuff that gets posted around here.

Some claim to, others don't.

I worked at GiveWell / Open Philanthropy Project for a year. I wrote up some of those reports. It's explicitly not scoring all recommendations on a unified metric - I linked to the "Sequence vs Cluster Thinking" post, which makes this quite clear. But at the time, there were four paintings on the wall of the GiveWell office illustrating the four core GiveWell values, and one was titled "Utilitarianism," which is distinguished from other moral philosophies (and in particular from the broader class "consequentialism") by the claim that you should use a single totalizing metric to assess right action.

Should Effective Altruism be at war with North Korea?

2019-05-05T01:44:47.210Z · score: -9 (10 votes)
Comment by benhoffman on Leverage Research: reviewing the basic facts · 2018-08-05T01:14:25.092Z · score: 4 (6 votes) · EA · GW

"Compared to a Ponzi scheme" seems like a pretty unfortunate compression of what I actually wrote. Better would be to say that I claimed that a large share of ventures, including a large subset of EA, and the US government, have substantial structural similarities to Ponzi schemes.

Maybe my criticism would have been better received if I'd left out the part that seems to be hard for people to understand; but then it would have been different and less important criticism.

Comment by benhoffman on Effective altruism is self-recommending · 2018-07-25T12:52:42.431Z · score: 0 (2 votes) · EA · GW

retry the original case with double jeopardy

This sort of framing leads to publication bias. We want double jeopardy! This isn't a criminal trial, where the coercive power of a massive state is being pitted against an individual's limited ability to defend themselves. This is an intervention people are spending loads of money on, and it's entirely appropriate to continue checking whether the intervention works as well as we thought.

Comment by benhoffman on Effective altruism is self-recommending · 2018-07-25T12:50:37.385Z · score: 1 (1 votes) · EA · GW

As I understand the linked page, it's mostly about retroactive rather than prospective observational studies, and usually for individual rather than population-level interventions. A plan to initiate mass bednet distribution on a national scale is pretty substantially different from that, and doesn't suffer from the same kind of confounding.

Of course it's mathematically possible that the data is so noisy relative to the effect size of the supposedly most cost-effective global health intervention out there, that we shouldn't expect the impact of the intervention to show up. But, I haven't seen evidence that anyone at GiveWell actually did the relevant calculation to check whether this was the case for bednet distributions.

Comment by benhoffman on Effective altruism is self-recommending · 2018-03-29T03:34:34.970Z · score: 1 (1 votes) · EA · GW

If they did the followups and malaria rates held stable or increased, you would not then believe that the bednets do not work; if it takes randomized trials to justify spending on bednets, it cannot then take only surveys to justify not spending on bed nets, as the causal question is identical.

It's hard for me to believe that the effect of bednets is large enough to show up in RCTs, but not large enough to show up more often than not in the results of mass bednet distributions. If absence of this evidence really isn't strong evidence of no effect, it should be possible to show that with specific numbers, not just handwaving about noise. And I'd expect that to be mentioned in the top-level summary on bednet interventions, not buried in a supplemental page.

Comment by benhoffman on Cash transfers are not necessarily wealth transfers · 2017-12-05T18:31:01.500Z · score: 1 (1 votes) · EA · GW

One simple example: https://en.wikipedia.org/wiki/Grade_inflation

More generally, things like the profusion of makework designed to facially resemble teaching, instead of optimizing for outcomes.

Comment by benhoffman on Cash transfers are not necessarily wealth transfers · 2017-12-02T21:58:43.720Z · score: 2 (2 votes) · EA · GW

We should also expect this to mean that countries such as Australia and China, which heavily weight a national exam system when advancing students at crucial stages, will have less corrupt educational systems than countries like the US, which weight locally assessed factors such as grades heavily.

(Of course, there can be massive downsides to standardization as well.)

Comment by benhoffman on Cash transfers are not necessarily wealth transfers · 2017-12-02T21:56:36.767Z · score: 0 (0 votes) · EA · GW

I think the thing to do is try to avoid thinking of "bureaucracy" as a homogeneous quantity, and instead attend to the details of institutions involved. Of course, as a foreigner with respect to every country but one's own, this is going to be difficult to evaluate when giving abroad. This is one of the many reasons why giving effectively on a global scale is hard, and why it's so important to have information feedback of the kind GiveDirectly is working on. Long-term follow-up seems really important too, and even then there's going to be some substantial justified uncertainty.

Comment by benhoffman on Cash transfers are not necessarily wealth transfers · 2017-12-02T21:53:42.271Z · score: 1 (1 votes) · EA · GW

There's an implied heuristic that if someone makes an investment that gives them an income stream worth $X, net of costs, then the real wealth of their society increases by at least $X. On this basis, you might assume that if you give a poor person cash, and they use it to buy education, which increases the present value of their children's earnings by $X, then you've thereby added $X of real wealth to their country.

I am saying that we should doubt the premise at least somewhat.
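
A minimal sketch of that heuristic, with made-up numbers, to make the premise explicit:

```python
def present_value(annual_gain, years, discount_rate):
    """Discounted value of a level income stream; the heuristic treats
    this as the amount of real wealth added to the society."""
    return sum(annual_gain / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# Made-up numbers: schooling raises a child's earnings by $200/year for
# 30 years; at a 5% discount rate the heuristic credits the cash
# transfer with ~$3,074 of new societal wealth. That conclusion is only
# as good as the premise above.
print(round(present_value(200, 30, 0.05)))  # 3074
```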

Comment by benhoffman on Cash transfers are not necessarily wealth transfers · 2017-12-02T21:50:20.942Z · score: 7 (7 votes) · EA · GW

For some balance, see Kelsey Piper's comments here - it looks like empirically, the picture we get from GiveDirectly is encouraging.

Cash transfers are not necessarily wealth transfers

2017-12-01T23:35:09.534Z · score: 12 (21 votes)
Comment by benhoffman on In defence of epistemic modesty · 2017-11-09T00:18:59.199Z · score: 2 (2 votes) · EA · GW

To support a claim that this applies in "virtually all" cases, I'd want to see more engagement with pragmatic problems applying modesty, including:

  • Identifying experts is far from free epistemically.
  • Epistemic majoritarianism in practice assumes that no one else is an epistemic majoritarian. Your first guess should be that nearly everyone else is a majoritarian iff you are, in which case you should expect information cascades due to the occasional overconfident person. If other people are not majoritarians because they're too stupid to notice the considerations for it, then it seems a bit silly to defer to them. On the other hand, if they're not majoritarians because they're smarter than you are... well, you mention this, but this objection seems to me obviously fatal, and the only thing left is to explain why the wisdom of the majority disagrees with the epistemically modest.
  • The vast majority of information available about other people's opinions does not differentiate clearly between their impressions and their beliefs after adjusting for their knowledge about others' beliefs.
  • People lie to maintain socially desirable opinions.
  • Control over others' opinions is a valuable social commodity, and apparent expertise gives one some control.

In particular, the last two factors (different sorts of dishonesty) are much bigger deals if most uninformed people copy the opinions of apparently informed people instead of saying "I have no idea".

Overall, I agree that when one has a verified-independent, verified-honest opinion from a peer, one should weight it equally with one's own, and defer to one's verified epistemic superiors - but this has little to do with real life, in which we rarely have that opportunity!

Comment by benhoffman on Expected value estimates we (cautiously) took literally - Oxford Prioritisation Project · 2017-05-21T23:29:59.139Z · score: 4 (4 votes) · EA · GW

Our prior strongly punishes MIRI. While the mean of its evidence distribution is 2,053,690,000 HEWALYs/$10,000, the posterior mean is only 180.8 HEWALYs/$10,000. If we set the prior scale parameter to larger than about 1.09, the posterior estimate for MIRI is greater than 1038 HEWALYs/$10,000, thus beating 80,000 Hours.

This suggests that it might be good in the long run to have a process that learns what prior is appropriate, e.g. by going back and seeing what prior would have best predicted previous years' impact.
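
As a toy illustration of that sensitivity (a conjugate normal-normal update on the log scale with made-up inputs, not the Project's actual model):

```python
def posterior_median(prior_mu, prior_scale, ev_mu, ev_scale):
    """Normal-normal update on log10(HEWALYs/$10,000); returns the
    posterior median on the raw scale. A stand-in for the Project's
    model, just to show the shape of the prior sensitivity."""
    precision = 1 / prior_scale**2 + 1 / ev_scale**2
    post_mu = (prior_mu / prior_scale**2 + ev_mu / ev_scale**2) / precision
    return 10 ** post_mu

# Made-up inputs: a skeptical prior centered at 10 HEWALYs/$10,000 and a
# very optimistic but very uncertain evidence distribution (~2e9).
for scale in (0.75, 1.09, 1.5):
    print(scale, round(posterior_median(1.0, scale, 9.3, 3.0), 1))
# 0.75 -> 30.8, 1.09 -> 92.9, 1.5 -> 457.1: widening the prior scale
# swings the answer by over an order of magnitude, which is why a
# process for learning the prior could matter so much.
```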

Comment by benhoffman on Four quantitative models, aggregation, and final decision - Oxford Prioritisation Project · 2017-05-21T23:26:35.988Z · score: 2 (2 votes) · EA · GW

Regrettably, we were not able to choose shortlisted organisations as planned. My original intention was that we would choose organisations in a systematic, principled way, shortlisting those which had highest expected impact given our evidence by the time of the shortlist deadline. This proved too difficult, however, so we resorted to choosing the shortlist based on a mixture of our hunches about expected impact and the intellectual value of finding out more about an organisation and comparing it to the others.

[...]

Later, we realised that understanding the impact of the Good Food Institute was too difficult, so we replaced it with Animal Charity Evaluators on our shortlist. Animal Charity Evaluators finds and advocates for highly effective opportunities to improve the lives of animals.

If quantitative models were used for these decisions I'd be interested in seeing them.

Comment by benhoffman on Fact checking comparison between trachoma surgeries and guide dogs · 2017-05-13T02:22:40.563Z · score: 1 (1 votes) · EA · GW

On the ableism point, my best guess is that the right response is to figure out the substance of the criticism. If we disagree, we should admit that openly, and forgo the support of people who do not in fact agree with us. If we agree, then we should account for the criticism and adjust both our beliefs and statements. Directly optimizing on avoiding adverse perceptions seems like it would lead to a distorted picture of what we are about.

Comment by benhoffman on Fact checking comparison between trachoma surgeries and guide dogs · 2017-05-13T02:18:48.371Z · score: 1 (3 votes) · EA · GW

If I try to steelman the argument, it comes out something like:

Some people, when they hear about the guide dog vs. trachoma surgery contrast, will take the point to be that ameliorating a disability is intrinsically less valuable than preventing or curing an impairment. (In other words, that helping people live fulfilling lives while blind is necessarily a less worthy cause than "fixing" them.) Since this is not in fact the intended point, a comparison of more directly comparable interventions would be preferable, if available.

Comment by benhoffman on Effective altruism is self-recommending · 2017-05-08T18:33:38.057Z · score: 2 (2 votes) · EA · GW

I imagine this has been stressful for all sides, and I do very much appreciate you continuing to engage anyway! I'm looking forward to seeing what happens in the future.

Comment by benhoffman on A mental health resource for EA community · 2017-05-08T16:13:54.967Z · score: 1 (1 votes) · EA · GW

Thanks for writing this! It's really helpful to have the basics of what the medical community knows.

I've been trying to figure out how to help in ways that respect neurodiversity. Psychosis and mania, like other mental conditions, aren't just the result of some exogenous force - they're the brain doing too little or too much of some particular things it was already doing.

So someone going through a psychotic episode might at times have delusions that seem to their friends to be genuinely poetic, insightful, and important, and this impression might be right. And yet, they're still having trouble tracking what's real and what's just a thought they had, they're worse at caring for themselves, and they really need to eat, get a good night's sleep, and have friends to help them remember to do this.

Comment by benhoffman on Effective altruism is self-recommending · 2017-05-08T15:59:55.820Z · score: 5 (5 votes) · EA · GW

Kerry,

I think that in a writeup for the two funds Nick is managing, CEA has done a fine job making it clear what's going on. The launch post here on the Forum was also very clear.

My worry is that this isn't at all what someone attracted by EA's public image would be expecting, since so much of the material is about experimental validation and audit.

I think that there's an opportunity here to figure out how to effectively pitch far-future stuff directly, instead of grafting it onto existing global-poverty messaging. There's a potential pitch centered around: "Future people are morally relevant, neglected, and extremely numerous. Saving the world isn't just a high-minded phrase - here are some specific ways you could steer the course of the future a lot." A lot of Nick Bostrom's early public writing is like this, and a lot of people were persuaded by this sort of thing to try to do something about x-risk. I think there's a lot of potential value in figuring out how to bring more of those sorts of people together, and - when there are promising things in that domain to fund - help them coordinate to fund those things.

In the meantime, it does make sense to offer a fund oriented around the far future, since many EAs do share those preferences. I'm one of them, and think that Nick's first grant was a promising one. It just seems off to me to aggressively market it as an obvious, natural thing for someone who's just been through the GWWC or CEA intro material to put money into. I suspect that many of them would have valid objections that are being rhetorically steamrollered, and a strategy of explicit persuasion has a better chance of actually encountering those objections, and maybe learning from them.

I recognize that I'm recommending a substantial strategy change, and it would be entirely appropriate for CEA to take a while to think about it.

Comment by benhoffman on Effective altruism is self-recommending · 2017-05-08T15:45:42.664Z · score: 1 (1 votes) · EA · GW

I don't see why Holden also couldn't have a supportive role where his feedback and different perspectives can help Open AI correct for aspects they've overlooked.

I agree this can be the case, and that in the optimistic scenario this is a large part of OpenAI's motivation.

Comment by benhoffman on Effective altruism is self-recommending · 2017-05-08T15:41:15.172Z · score: 0 (0 votes) · EA · GW

Thanks! On a first read, this seems pretty clear and much more like the sort of thing I'd hope to see in introductory material.

Comment by benhoffman on Effective altruism is self-recommending · 2017-05-06T20:28:31.092Z · score: 1 (1 votes) · EA · GW

There was a recent post by 80,000 hours (which annoyingly I now can't find) describing how their founders' approaches to doing good have evolved and updated over the years. Is that something you'd like to see more of?

Yes! More clear descriptions of how people have changed their mind would be great. I think it's especially important to be able to identify which things we'd hoped would go well but didn't pan out - and then go back and make sure we're not still implicitly pitching that hope.

Comment by benhoffman on Where should anti-paternalists donate? · 2017-05-05T20:45:53.626Z · score: 1 (1 votes) · EA · GW

But there are also factors pushing the other way - e.g. biases about spending on personal health, positive externalities etc - that counterbalance a presumption against paternalism.

It's not obvious to me that the "near" bias about one's own health is generically worse than our "far" bias about what to do about the health of people far away. For instance, we might have a bias towards action that's not shared by, e.g., the children who feel sick after their worm chemo, or who get bitten by mosquitoes through their supposedly mosquito-proof bednets. (I'm not sure how bad either of these problems is relative to the benefits, and that's the problem - we really don't know. I'll note that Living Goods does sell some deworming pills, so at least some people in poor countries think it's in their interest to take them.)

It's also not obvious that positive externalities are generically more likely with paternalistic interventions. For instance, in a recent Reddit AMA, GiveDirectly basic income recipients reported that there was much less social conflict in their community once people started receiving basic income - they started imposing fewer costs on each other once they were more secure in meeting their basic needs.

It does seem to me like each of these considerations - if it points in the right direction for any given comparison - could contribute to overcoming the paternalism objection.

Comment by benhoffman on Where should anti-paternalists donate? · 2017-05-05T20:33:59.724Z · score: 1 (1 votes) · EA · GW

It sounds like we might be coming close to agreement. The main thing I think is important here is taking seriously the notion that paternalism is evidence about the other things we care about, and thus an important instrumental proxy goal, not just something we have intrinsic preferences about. More generally, the thing I'm pushing back against is treating every moral consideration as though it were purely an intrinsic value to be weighed against other intrinsic values.

I see people with a broadly utilitarian outlook doing this a lot, perhaps because people from other moral perspectives don't have a lot of practice grounding their moral intuitions in a way that is persuasive to utilitarians. Autonomy in particular is something where we need to distinguish purely intrinsic considerations (e.g. factory farmed animals are unhappy because they have little physical autonomy) from instrumental pragmatic considerations (e.g. interventions that give poor people more autonomy preserve information by letting them use local knowledge that we do not have, while paternalistic interventions overwrite local information).

Thus, we should think about requiring higher impact for paternalistic interventions as building in a margin for error, not just outweighing the anti-paternalism intuition. If a paternalistic intervention has strong evidence of a large benefit, it makes sense to describe it as overcoming the paternalism objection, but not rebutting it - we should still be more skeptical of it than of a nonpaternalistic intervention with the same evidence; it's just that sometimes we should intervene anyway.

Comment by benhoffman on Where should anti-paternalists donate? · 2017-05-05T15:42:53.828Z · score: 1 (1 votes) · EA · GW

You're assuming the premise here a bit - that the data collected don't leave out important negative outcomes. In the particular cases you mentioned (tobacco taxes, mandatory seatbelt legislation, smallpox eradication, ORT, micronutrient fortification) my sense is that in most cases the benefits have been very strong, strong enough to outweigh a skeptical prior on paternalist interventions. But that doesn't show that we shouldn't have the skeptical prior in the first place. Seeing Like A State shows some failures; we should think of those too.

Comment by benhoffman on Where should anti-paternalists donate? · 2017-05-05T09:08:45.919Z · score: 5 (5 votes) · EA · GW

Consider paternalism as a proxy for model error rather than an intrinsic dispreference. We should wonder whether the things we do to people are more likely to cause hidden harm, or to lack their supposed benefits, than the things they do for themselves.

Deworming is an especially stark example. The mass drug administration program consists of going to schools and forcing all the children, whether sick or healthy, to swallow giant poisonous pills that give them bellyaches, because we hope that killing the worms this way buys big improvements in life outcomes. GiveWell estimates the effect at about 1.5% of what the studies say, but the EV is still high. This could also involve a lot of unnecessary harm via unnecessary treatments.
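
The arithmetic behind "EV is still high" looks something like this (all numbers are made up for illustration; they are not GiveWell's):

```python
# Hypothetical figures: the original studies imply $1,000 of lifetime
# income gain per treated child, GiveWell-style discounting takes that
# to 1.5% of face value, and a dose costs $0.50.
study_effect = 1000.0
replicability_discount = 0.015
cost_per_dose = 0.50

benefit_per_dollar = study_effect * replicability_discount / cost_per_dose
print(benefit_per_dollar)  # 30.0: a 98.5% discount can still leave high EV
```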

By contrast, the less paternalistic Living Goods (a recent GiveWell "standout charity") sells deworming pills at or near cost, so we should expect better targeting of kids who are actually sick with worms, and repeat business is more likely if the pills seem helpful.

I wrote a bit about this here: http://benjaminrosshoffman.com/effective-altruism-not-no-brainer/

Comment by benhoffman on Effective altruism is self-recommending · 2017-05-01T23:56:50.513Z · score: 5 (5 votes) · EA · GW

EffectiveAltruism.org's Introduction to Effective Altruism allocates most of its words to what's effectively an explanation of global-poverty EA: a focus on empirical validation, explicit measurement and quantification, and power inequality between the developed and developing world. The Playpump example figures prominently. This would make no sense if I were trying to persuade someone to support animal-charity EA or x-risk EA.

Other EA focus areas that imply very different methods are mentioned, but not in a way that makes it clear how EAs ended up there.

If you click "Donate Effectively," you end up on the EA Funds site, which presents the four Fund categories as generic products you might want to allocate a portfolio between. Two of the four products are in effect just letting Nick Beckstead do what he thinks is sensible with the money, which as I've said above is a good idea but a very large leap from the anti-Playpump pitch. "Trust friendly, sensible-seeming agents and empower them to do what they think is sensible" is a very, very different method than "check everything because it's easy to spend money on nice-sounding things of no value."

The GWWC site and Facebook page have a similar dynamic. I mentioned in this post that the page What We Can Achieve mainly references global poverty (though I've been advised that this is an old page pending an update). The GWWC Facebook page seems like it's mostly global poverty stuff, and some promotion of other CEA brands.

It's very plausible to me that in-person EA groups often don't have this problem because individuals don't feel a moral obligation to give the most generically effective pitch for EA, but instead just talk about what they personally care about and find interesting.

Comment by BenHoffman on [deleted post] 2017-05-01T01:11:05.252Z

SlateStarScratchpad claims (with more engagement here) that the literature mainly shows that parents who like hitting their kids or beat them severely do poorly, and that if you control for things like heredity or harsh beatings it’s not obvious that mild corporal punishment is more harmful than other common punishments.

My best guess is that children are very commonly abused (and not just by parents - also by schools), but I don't think the line between physical and nonphysical punishments is all that helpful for understanding the true extent of this.

Comment by benhoffman on Effective altruism is self-recommending · 2017-04-28T20:22:28.631Z · score: 2 (2 votes) · EA · GW

I think 2016 EAG was more balanced. But I don't think the problem in 2015 was apparent lack of balance per se. It might have been difficult for the EAG organizers to sincerely match the conference programming to promotional EA messaging, since their true preferences were consistent with the extent to which things like AI risk were centered.

The problem is that to the extent to which EA works to maintain a smooth, homogeneous, uncontroversial, technocratic public image, it doesn't match the heterogeneous emphases, methods, and preferences of actual core EAs and EA organizations. This is necessarily going to require some amount of insincerity or disconnect between initial marketing and reality, and represents a substantial cost to that marketing strategy.

Comment by benhoffman on Effective altruism is self-recommending · 2017-04-28T02:48:27.243Z · score: 1 (1 votes) · EA · GW

The featured event was the AI risk thing. My recollection is that there was nothing else scheduled at that time so everyone could go to it. That doesn't mean there wasn't lots of other content (there was), nor do I think centering AI risk was necessarily a bad thing, but I stand by my description.

Comment by benhoffman on Effective altruism is self-recommending · 2017-04-27T06:01:47.188Z · score: 0 (0 votes) · EA · GW

I also originally saw the reply attributed to a different comment on mobile.

Comment by benhoffman on Update on Effective Altruism Funds · 2017-04-27T01:24:10.144Z · score: 4 (4 votes) · EA · GW

I would guess that $300k simply isn't worth Elie's time to distribute in small grants, given the enormous funds available via Good Ventures and even GiveWell's direct and directed donations.

This is consistent with the optionality story in the beta launch post:

If the EA Funds raises little money, they can spend little additional time allocating the EA Funds’ money but still utilize their deep subject-matter expertise in making the allocation. This reduces the chance that the EA Funds causes fund managers to use their time ineffectively and it means that the lower bound of the quality of the donations is likely to be high enough to justify donations even without knowing the eventual size of the fund.

However, I do think this suggests that - to the extent to which GiveWell is already a known and trusted institution - for global poverty in particular it's more important to pick a fund manager whose relevant expertise is distinct from GiveWell's than the fund manager with the most expertise overall.

Comment by benhoffman on Update on Effective Altruism Funds · 2017-04-27T01:21:08.867Z · score: 2 (2 votes) · EA · GW

On the other hand, it does seem worthwhile to funnel money through different intermediaries sometimes if only to independently confirm that the obvious things are obvious, and we probably don't want to advocate contrarianism for contrarianism's sake. If Elie had given the money elsewhere, that would have been strong evidence that the other thing was valuable and underfunded relative to GW top charities (and also worrying evidence about GiveWell's ability to implement its founders' values). Since he didn't, that's at least weak evidence that AMF is the best global poverty funding opportunity we know about.

Overall I think it's good that Elie didn't feel the need to justify his participation by doing a bunch of makework. This is still evidence that channeling donations through Elie probably gives a false impression of additional optimizing power, but I think that should have been our strong prior anyhow.

Comment by benhoffman on Update on Effective Altruism Funds · 2017-04-27T01:13:59.881Z · score: 1 (1 votes) · EA · GW

Or to simply say "for global poverty, we can't do better than GiveWell so we recommend you just give them the money".

Comment by benhoffman on Update on Effective Altruism Funds · 2017-04-27T01:09:16.661Z · score: 3 (3 votes) · EA · GW

I also dislike that you emphasize that some people "expressed confusion at your endorsement of EA Funds". Some people may have felt that way, but your choice of wording both downplays the seriousness of some people's disagreements with EA Funds, while also implying that critics are in need of figuring something out that others have already settled (which itself socially implies they're less competent than others who aren't confused).

I definitely perceived the sort of strong exclusive endorsement and pushing EA Funds got as a direct contradiction of what I'd been told earlier, privately and publicly - that this was an MVP experiment to gauge interest and feasibility, to be reevaluated after three months. If I'm confused, I'm confused about how this wasn't just a lie. My initial response was "HOW IS THIS OK???" (verbatim quote). I'm willing to be persuaded, of course. But, barring an actual resolution of the issue, simply describing this as confusion is a pretty substantial understatement.

ETA: I'm happy with the update to the OP and don't think I have any unresolved complaint on this particular wording issue.

Comment by benhoffman on Introducing the EA Funds · 2017-04-27T00:58:05.044Z · score: 2 (2 votes) · EA · GW

Tell me about Nick's track record? I like Nick and I approve of his granting so far but "strong track record" isn't at all how I'd describe the case for giving him unrestricted funds to grant; it seems entirely speculative based on shared values and judgment. If Nick has a verified track record of grants turning out well, I'd love to see it, and it should probably be in the promotional material for EA Funds.

Comment by benhoffman on Update on Effective Altruism Funds · 2017-04-27T00:54:08.816Z · score: 3 (3 votes) · EA · GW

Will's post introducing the EA funds is the 4th most upvoted post of all time on this forum.

Generally I upvote a post because I am glad that the post has been posted in this venue, not because I am happy about the facts being reported. Your comment has reminded me to upvote Will's post, because I'm glad he posted it (and likewise Tara's) - thanks!

Comment by benhoffman on Effective altruism is self-recommending · 2017-04-25T04:55:56.505Z · score: 5 (5 votes) · EA · GW

Yep! I think it's fine for them to exist in principle, but the aggressive marketing of them is problematic. I've seen attempts to correct specific problems as they're pointed out, e.g. exaggerated claims, but there are so many things pointing in the same direction that it really seems like a mindset problem.

I tried to write more directly about the mindset problem here:

http://benjaminrosshoffman.com/humility-argument-honesty/

http://effective-altruism.com/ea/13w/matchingdonation_fundraisers_can_be_harmfully/

http://benjaminrosshoffman.com/against-responsibility/

Comment by benhoffman on Effective altruism is self-recommending · 2017-04-24T14:29:54.389Z · score: 10 (12 votes) · EA · GW

If someone thinks concentrated decisionmaking is better, they should be overtly making the case for concentrated decisionmaking. When I talk with EA leaders about this they generally do not try to sell me on concentrated decisionmaking, they just note that everyone seems eager to trust them so they may as well try to put that resource to good use. Often they say they'd be happy if alternatives emerged.

Comment by benhoffman on Effective altruism is self-recommending · 2017-04-24T14:26:42.831Z · score: 17 (17 votes) · EA · GW

It also seems to me that the time to complain about this sort of process is while the results are still plausibly good. If we wait for things to be clearly bad, it'll be too late to recover the relevant social trust. This way involves some amount of complaining about bad governance used to good ends, but the better the ends, the more compatible they should be with good governance.

Comment by benhoffman on Effective altruism is self-recommending · 2017-04-24T14:23:41.110Z · score: 4 (4 votes) · EA · GW

I think sufficient evidence hasn't been presented, in large part because the argument has been tacit rather than overt.

Comment by benhoffman on Effective altruism is self-recommending · 2017-04-24T04:06:11.256Z · score: 10 (10 votes) · EA · GW

On (1) I agree that GiveWell's done a huge public service by making many parts of decisionmaking process public, letting us track down what their sources are, etc. But making it really easy for an outsider to audit GiveWell's work, while an admirable behavior, does not imply that GiveWell has done a satisfactory audit of its own work. It seems to me like a lot of people are inferring the latter from the former, and I hope by now it's clear what reasons there are to be skeptical of this.

On (3), here's why I'm worried about increasing overt reliance on the argument from "believe me":

The difference between making a direct argument for X, and arguing for "trust me" and then doing X, is that in the direct case, you're making it easy for people to evaluate your assumptions about X and disagree with you on the object level. In the "trust me" case, you're making it about who you are rather than what is to be done. I can seriously consider someone's arguments without trusting them so much that I'd like to give them my money with no strings attached.

"Most effective way to donate" is vanishingly unlikely to be generically true for all donors, and the aggressive pitching of these funds turns the supposed test of whether there's underlying demand for EA Funds into a test of whether people believe CEA's assurances that EA Funds is the right way to give.

Effective altruism is self-recommending

2017-04-23T06:11:20.903Z · score: 30 (41 votes)
Comment by benhoffman on How accurately does anyone know the global distribution of income? · 2017-04-17T19:36:25.241Z · score: 0 (0 votes) · EA · GW

The nature of the correction, I think, is that I underestimated how much individual caution there was in coming up with the original numbers. I was suggesting some amount of individual motivated cognition in generating the stitched-together dataset in the first place, and that's what I think I was wrong about.

I still think that:

(1) The stitching-together represents a big problem and not a minor one. This is because it's basically impossible to "sanity check" charts like this without introducing some selection bias. Each step away from the original source compounds this problem. Hugging the source data as tightly as you can and keeping track of the methodology is really the only way to fight this. Otherwise, even if there is no individual intent to mislead, we end up passing information through a long series of biased filters, and thus mainly flattering our preconceptions.

I can see the appeal of introducing individual human gatekeepers into the picture, but that comes with a pretty bad bottlenecking problem, and substitutes the bias of a single individual for the bias of the system. Having experts is great, but the point of sharing a chart is to give other people access to the underlying information in a way that's intuitive to interpret. Robin Hanson's post on academic vs amateur methods puts the case for this pretty clearly:

A key tradeoff in our methods is between ease and directness on the one hand, and robustness and rigor on the other. [...] When you need to make an immediate decision fast, direct easy methods look great. But when many varied people want to share an analysis process over a longer time period, more robust rigorous methods start to look better. Direct easy methods tend to be more uncertain and context dependent, and so don't aggregate as well. Distant others find it harder to understand your claims and reasoning, and to judge their reliability. So distant others tend more to redo such analysis themselves rather than building on your analysis. [...]

You might think their added freedom would result in amateurs contributing proportionally more to intellectual progress, but in fact they contribute less. Yes, amateurs can and do make more initial progress when new topics arise suddenly far from topics where established expert institutions have specialized. But then over time amateurs blow their lead by focusing less and relying on easier more direct methods. They rely more on informal conversation as analysis method, they prefer personal connections over open competitions in choosing people, and they rely more on a perceived consensus among a smaller group of fellow enthusiasts. As a result, their contributions just don’t appeal as widely or as long.

GiveWell is a great example of an organization that keeps track of sources so that people who are interested can figure out how they got their numbers.

(2) It's weird and a little sketchy that there's not a discontinuity around 80%. This could easily be attributable to Milanovic rather than CEA, but I still think it's a problem that that wasn't caught, or - if there turns out to be a good explanation - documented.

(3) It's entirely appropriate for CEA's CEO (the one who used this chart at the start of the controversy you're responding to by adding helpful information) to be held to a much higher standard than some amateur or part-time EA advocate who got excited about the implications of the chart. For this reason, while I think you're right that it's hard to prevent amateurs from introducing large errors and substantial bias by oversimplifying things, that doesn't seem all that relevant to the case that started this.

GiveWell and the problem of partial funding

2017-02-14T10:29:57.250Z · score: 11 (14 votes)

Matching-donation fundraisers can be harmfully dishonest

2016-11-12T03:30:32.349Z · score: 8 (8 votes)