EAs write about where they give
post by Julia_Wise · 2016-12-09 · score: 18 (18 votes) · 4 comments
Contributions from Blake Borgeson, Eva Vivalt, Ben Kuhn, Alexander Gordon-Brown and Denise Melchin, and Elizabeth Van Nostrand.
Several organizations like GiveWell, ACE, and CEA do posts on where their staff are making personal donations. This is a sampling of donation decisions, and the reasons behind them, from some EAs who don’t work at any of those organizations.
I’ve selected a range of people whose opinions I thought would be interesting. Please feel free to add your own decisions or considerations in the comments!
Blake Borgeson
This will be my first full year of giving after digging deeply into effective altruism and existential risk starting in 2015, so this mix will certainly shift in the coming years as I learn more. I'm also making a number of decisions in these last weeks of the year, so there are still large error bars around these numbers. But my current best guess at my end-of-2016 allocation is something like:
20% Additional EA and x-risk orgs
15% Experimental funds
From my conversations and reading so far, I believe that countering existential risks is the largest opportunity for donors right now to have a positive impact on humanity, and that within existential risks, the largest and most neglected risks we currently face come from advanced artificial intelligence. MIRI and CFAR are both aimed at mitigating AI risk, are very much funding-constrained, are pursuing strategies that seem valuable, and have been instrumental (along with currently better-funded organizations like FHI) in bringing AI risk from a fringe cause to one with significant traction among effective altruists and others with aligned values. MIRI also remains one of the main organizations working directly on technical solutions to the AI control problem, by which I mean both reliably learning good values and reliably aligning AI with those values.
With my donations to additional EA and x-risk organizations, I aim to support groups pursuing other types of work likely to help address AI x-risk, including awareness-building, strategic research, political research or activity, and other ways of attacking the problem. I also intend to support work on other existential risks through this bucket.
Finally, my experimental funds bucket represents my attempts to test ways of impacting the problem that don't fall within existing organizations. This includes considering new organizations, different forms of encouraging and supporting valuable work on the relevant problems, and buying information: either in cases where it's hard to access data helpful in evaluating an opportunity until you're already a supporter, or in cases where inexpensive experiments can shed light on the usefulness of an unconventional approach. I anticipate that this 15% will yield opportunities that take up an increasing portion of my overall donations.
Eva Vivalt
I donate to organizations conducting research and meta-research. I am optimistic that better options exist than the ones we are currently considering, and I think research can help uncover them.
Within research, I think it is almost always better to fund organizations or contribute to grant schemes than to give to individual researchers directly. Researchers often don't have the right incentives to make their work as useful as possible to others, while organizations can impose conditions on how the money will be used and selectively fund only the most promising projects.
Each year, one of the organizations I give to is AidGrade, the meta-research non-profit I founded. If I didn't think it was worthwhile, I would not have started it, so this is putting my money where my mouth is. Currently, we are working on leveraging machine learning for meta-analysis, making it easier to conduct meta-analyses and keep them up to date, and we are also researching how to encourage policymakers to use the evidence generated by impact evaluations when making policy decisions. Policymakers can have a much larger impact than any of us do individually, so helping them make better decisions could be very valuable.
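For readers unfamiliar with meta-analysis, a minimal sketch may help. This is purely illustrative and not AidGrade's actual pipeline: the core calculation a meta-analysis performs is pooling effect estimates from several studies, weighting each by the inverse of its variance so that more precise studies count for more. The study numbers below are hypothetical.

```python
def fixed_effect_pool(effects, std_errors):
    """Fixed-effect (inverse-variance weighted) pooled estimate.

    Returns the pooled effect size and its standard error.
    """
    weights = [1.0 / se ** 2 for se in std_errors]   # precision weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical impact evaluations of the same intervention:
effects = [0.20, 0.35, 0.10]     # estimated effect sizes
std_errors = [0.10, 0.15, 0.05]  # their standard errors

pooled, se = fixed_effect_pool(effects, std_errors)
```

The third (most precise) study pulls the pooled estimate toward its own value, which is exactly the behavior inverse-variance weighting is meant to produce; keeping a meta-analysis "up to date" amounts to re-running this pooling as new evaluations are published.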
Ben Kuhn
I haven't decided where I'm giving yet this year, but here's my general process:
The bulk of my impact comes from direct work (engineering at Wave) rather than donating, and I'd rather spend the marginal hour of EA time optimizing that work than optimizing where I donate. I also don't think I have much comparative advantage in making donation allocation decisions. Because of this, I'd rather delegate the actual allocation decision to someone I trust: it's much easier for me to assess whether a person or group generally makes good decisions and shares my values than to compare the merits of specific interventions. In the past, I've given the majority of my donations to GiveWell (unrestricted) for this reason: since I worked there briefly and know a lot of the decision-makers, I have a high degree of trust in their values and decisions.
Alexander Gordon-Brown and Denise Melchin
Provisional* donation split for April 2016 - April 2017:
GiveWell recommendation, 25%: In the past we've split our donations approximately 50% to GiveWell-recommended charities and 50% to other promising opportunities, mostly in EA movement building. The rationale for the latter is straightforward: we have believed, and still believe, these to be the best opportunities out there in terms of raw expected impact per dollar. The rationale for the former is a bit more complicated and flows from a mix of interlinked considerations:
1. We want at least part of our donations to be tangibly and robustly helping some of the worst-off people in the world.
2. We don't want to solely be doing things which amount (or could be seen to amount) to paying salaries of 'inner circle' EAs who we know well.
3. GiveWell charities are easier to talk about and arguably allow us to send a less ambiguous signal to outsiders.
This year we've decided to drop the percentage going directly to GiveWell-recommended charities to 25%: in short we think this still meets the spirit of the above thoughts while allowing us significantly more practical freedom to focus on higher-leverage opportunities.
Charity Science, 10%: This is for Charity Science to allocate between their projects as they see fit, but it was prompted by the launch of Charity Science: Health, which recently received a $200,000 experimental grant from GiveWell, though that news came out after this donation was committed. We've supported the Charity Science team financially in getting to this point and are excited to be helping this get off the ground; the learning/exploration value of this donation seems much more obvious than for other things we are considering.
CEA, 20%: In the past, we've donated to GWWC for its outreach efforts. CEA has since merged GWWC with some other sections and reorganised. In the outreach division we've seen some evidence of a renewed focus on actually getting people to take the pledge, which seems positive. A lot of community criticism of GWWC's claimed multipliers seems to boil down to 'it can't possibly be that high' rather than a serious attempt to build a plausible alternative model that would yield a multiplier lower than one. The former reaction is understandable, but our experience is that the latter is difficult to do.
Unallocated/TBC, 20%: From a room-for-more-funding perspective, I feel significantly less confident about our donations than I have previously. In particular, Good Ventures (via OPP) and GiveWell are increasingly funding things that we also fund; see the Charity Science: Health grant above. In addition, we think we've left ourselves too little flexibility in the past and have occasionally had to turn down plausibly strong funding opportunities for lack of immediate funds. A particular mention here should go to REG, which might be one of the best EA movement-building charities but, as of the most recent information we have, is primarily talent-constrained.
GiveWell operations, 5%: In general, we would like the norm to be that people who use GiveWell's service to commit substantial donations also donate to GiveWell itself, compensating them for providing those services. Until recently this consideration has been outweighed by feeling that GiveWell was notably less funding constrained than our other options. With the changing funding landscape this is no longer the case, so we're making up for lost time. This donation will amount to approximately 5% of all the money we've donated to GiveWell recommendations over the past few years.
*We're about to get significant updates from 80k, CEA, GiveWell and many other potential targets. Accordingly we can't be too certain about the split above, but it is our current best guess, and absent major news we expect the final totals to be at least somewhat close to it.
Elizabeth Van Nostrand
When it comes to casual friends, or my friends' parents, I tend to recommend GiveWell or GiveWell top charities, because it’s an incremental improvement over what they are currently doing and introduces the idea of evaluating effectiveness rather than operating costs or who has the nicest poster. When it comes to people who are prepared to invest a lot of time in choosing where to donate I almost never recommend GiveWell (or their top charities), because GiveWell has access to a lot of money from Good Ventures and the more casual donors. If you have the time, you should look for a giving opportunity they haven’t noticed.
For me, that opportunity is Tostan, which runs three-year educational programs in rural West Africa covering a variety of topics. This is a complex intervention that doesn't lend itself to simple RCTs the way e.g. GiveDirectly does. One of my biggest concerns about effective altruism has always been the streetlight effect: that we'll focus on what's easy to measure rather than what's important. Tostan fights that, both by expanding the streetlight (they are two years into a Gates Foundation-funded program to quantify the effects of their interventions) and by creating feedback loops that improve their interventions without quantitative data. This is necessary technology if we're going to solve more complicated problems.
So I believe my money will be well spent at an object level. I also believe Tostan is developing tools useful to future interventions (and is already trying to spread its educational technology to other NGOs). Lastly, I believe Tostan is at a decision point in how quantitative and evidence-driven it wants to be, and donations from EA supporters will help push it in the right direction. For much, much more on this, see the longer post on my blog.
[Disclosure: I have several close friends who work or have worked at GiveWell. Tostan’s director of philanthropy, Suzanne Bowles, has been in contact with me to support my efforts to promote Tostan and for me to give her advice on marketing to the EA community, and we have made a few professional introductions for each other.]