What are the most common objections to “multiplier” organizations that raise funds for other effective charities?

post by Jon_Behar · 2020-12-08T14:10:04.926Z · EA · GW · 3 comments

This is a question post.

Numerous EA organizations use a “multiplier” model in which they try to leverage each dollar they spend on their own operations by fundraising multiple dollars for other effective charities. My strong impression is that the number of donors who give to effective charities doing direct work is much larger than the number of donors who give to organizations that fundraise for effective charities doing direct work. I would like to understand why this is the case.

Below, I’ve listed some of the most common objections to the multiplier model I’ve heard in the EA community and in my own experience pitching The Life You Can Save (where I work) and other multiplier organizations. I’ve put each of these objections as its own comment; please upvote any that apply to you. If you have a substantively different objection to the multiplier model, please add your own comment.

Answers

answer by HStencil · 2020-12-08T15:21:43.623Z · EA(p) · GW(p)

I don’t think my reasoning falls neatly into any one of the categories you listed, so I’ll post it as its own comment. I don’t give to “multiplier” charities mainly because I think a huge percentage of the good that they do probably comes from running great websites, and the fixed costs necessary to get these websites built and online have already been paid. While I believe that initial investment probably had a large multiplier, I’m far less convinced that subsequent expenditures by these organizations (other than maintaining their websites) will have such a large multiplier (and big donors would happily step in—or the multiplier charities would tell us—if maintenance costs could not be met).

Furthermore, in the exceptional cases when subsequent expenditures would likely have large multipliers, my sense is that usually, those expenditures require atypically substantial amounts of funding, without which the investments in question cannot happen. I am not a large donor, and it just isn’t clear to me that if I give a few thousand dollars to a multiplier charity instead of to, say, the GiveWell Maximum Impact Fund, that few thousand dollars will enable anything particularly high-impact to occur that otherwise wouldn’t have. By my mental model—which may be mistaken—for each additional dollar I give to GiveWell’s Maximum Impact Fund, my impact rises by some smooth function that probably isn’t far off linear. In contrast, I think that the value of additional dollars given to a multiplier charity probably follows some kind of a step function. I understand that my donations might increase the probability of the multiplier organization being able to “go up a step” sooner, but I suspect that if the step were truly likely to have an extraordinarily high charitable return, large donors, like foundations or ultra-high net worth individuals, would fund it no matter what, and the fact that I’d chipped in a few thousand on the margin wouldn’t change their calculus on that one bit. I’m just not the limiting factor here.
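The contrast between the two mental models can be sketched with a toy calculation. All numbers here are made up for illustration (the step value and the probability of a small gift crossing a funding threshold are assumptions, not estimates about any real charity):

```python
# Toy comparison of two models of a small donor's marginal impact.
# All figures are illustrative assumptions, not real charity data.

def linear_marginal_value(value_per_dollar, donation):
    """Smooth model: impact scales roughly linearly with dollars donated."""
    return value_per_dollar * donation

def step_marginal_value(step_value, p_gift_crosses_threshold):
    """Step model: the gift only matters if it pushes the org past a
    funding threshold that larger donors would not otherwise fill."""
    return step_value * p_gift_crosses_threshold

donation = 3_000  # "a few thousand dollars"

# Direct charity: assume ~$1 of value per $1 donated, smoothly.
direct = linear_marginal_value(1.0, donation)

# Multiplier charity: assume a large step, but a tiny chance that this
# particular gift is the one that crosses the threshold.
stepwise = step_marginal_value(500_000, 0.001)

print(direct, stepwise)  # 3000.0 vs 500.0 under these assumptions
```

Under these (deliberately pessimistic) step-model assumptions, the smooth option wins; the disagreement below is largely about how realistic those assumptions are.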

Finally, multiplier charities seem like sort of obvious breeding grounds for conflicts of interest in the community, and I’m quite wary about that because 1) I think the community has had a poor track record on managing conflicts of interest historically (though this has unquestionably improved), and 2) there is effectively no oversight of multiplier charities. They don’t have to go through anywhere close to the level of scrutiny or provide nearly as much transparency as GiveWell’s top charities, so I’m much more reluctant to take many of their claims to impact at face value.

Ultimately, I feel that my giving to multiplier charities would be troublingly analogous to the fact that around a quarter of foreign aid by OECD countries never leaves the donor country because it gets spent on consultants, auditors, and evaluators domestically. There is obviously a plausible case that the consulting, auditing, and evaluation in question increases the value of the foreign aid so much that it pays for itself, but doesn’t it seem more likely that these firms get retained for bad reasons (they lobby governments, have friends in high places, employ voters, tell unrepresentative horror stories about the misuse of aid, etc.) than for good reasons? I don’t mean to implicate multiplier charities in an unsavory comparison... except that the unsavory comparison really is a meaningful part of why I don’t give to them. I just have no idea how I could tell with confidence that the way they would use my marginal dollars would actually beat other opportunities.

comment by Jon_Behar · 2020-12-08T21:13:27.674Z · EA(p) · GW(p)

Thank you! This was exactly the sort of thoughtful explanation I was hoping for.

For what it’s worth, in my experience at TLYCS it takes a lot more than just a website to move money. When I look at the things that seem to have driven our growth over the years, a lot of it is simply having the capacity to do basic things like communicate more with donors. And the relative steadiness of TLYCS’s multiplier (between 9x and 13x from 2016 to 2019) as expenses more than tripled suggests that there’s not a huge difference between the marginal multiplier and the average multiplier (and that if anything, the marginal multiplier might be higher).

I do think your “step function” argument is getting at something interesting (though I’d say you’re overestimating the availability and willingness of large donors to fund these transformative initiatives). There have definitely been discrete steps up in TLYCS’s history, most recently last year when we had a major launch of the updated book, overhauled our website, and more than doubled both money moved and expenses. The investments paid off: this year expenses will be down slightly and money moved will be up a lot, so the multiplier will break out of its recent range. 

As you note, multiplier charities don’t get much scrutiny. Part of the motivation for this post is trying to figure out whether adding more scrutiny could be a good investment. After all, if additional vetting could make donors feel confident that multiplier organizations were offering a legitimate 2x multiplier (let alone 10x or more), that would be a huge source of leverage for the EA community.

Replies from: MarisaJurczyk, HStencil
comment by MarisaJurczyk · 2020-12-14T02:30:09.124Z · EA(p) · GW(p)

For what it’s worth, in my experience at TLYCS it takes a lot more than just a website to move money.

+1 to this. RC Forward wrote a bit about this in our year-in-review [EA · GW] in 2019. 

comment by HStencil · 2020-12-08T23:21:47.076Z · EA(p) · GW(p)

I'm glad to hear you found my reasoning useful, and I appreciate your explanation of where you think it may go astray. I'm a fairly marginal actor in the grand scheme of the EA community and don't feel I am anywhere close to having a clear view on whether the returns to adding further vetting or oversight structures would outweigh the costs. Naïvely, it seems to me that some kinds of organizational transparency are pretty cheap. However, it occurs to me that even though I've spent a fair bit of time on the TLYCS website over the past several years and gave to your COVID-19 response fund back in the spring, I honestly have no recollection of the extent of your transparency in the status quo. In a similar vein, to put it more flippantly than you deserve, I don't think most people I know in the community (myself included) really understand what you do. I was even unaware of how high your estimated multiplier is (if you had asked me to guess prior to your comment, there's no way I would've gone higher than 4x), and now, I am quite curious about how you're estimating that and what you think is driving such a high return. I expect this is probably my fault for not seriously investigating "multiplier charities" when deciding where to give and instead presuming that they likely aren't a good fit for small donors like me for the reasons I explained. However, I also think I am exactly the persuadable small donor you would want to be reaching with whatever outreach or marketing you're doing, so maybe there's room for improvement on your part there, as well.

For what it's worth, if you were going to invest in adding some kind of vetting or oversight structure, here are a few questions that—inspired by your comment—I would most want it to answer before making a determination about whether to give to TLYCS:

1. Why have TLYCS's expenses tripled since 2016? Other than the website overhaul and the book launch, what have you been spending on? Are you aiming to engage in similar (financial) growth again in the near term? If not, would you be if you had more support from small donors?

2. What do you mean by "communicate with more donors?" What does that involve? How costly is it on a per-donor basis? How scalable is it?

3. When you spend more money (beyond your basic operating expenses: salaries, office space if you have it, etc.), and that spending seems to be associated with an increase in donor interest in your recommended charities, what do you think generally explains that relationship, and how do you determine that such an increase in donor interest was counterfactually caused by the increase in spending?

4. More generally, and this may be an extremely dumb question/something you have explained at length elsewhere, how do you arrive at your "money moved" estimates, and how do you ensure that they are counterfactually valid?

5. Do you personally believe that TLYCS will hit diminishing marginal returns on investments in growing its base of donors to its recommended charities sometime in the near or intermediate term?

You obviously do not have to answer these questions here or at all. I wrote them out only to provide a sense of what information I feel I am missing.

Replies from: Jon_Behar
comment by Jon_Behar · 2020-12-09T15:45:08.975Z · EA(p) · GW(p)

I don't think most people I know in the community (myself included) really understand what you do. 


I agree there’s a lack of understanding of our work, and hope this discussion helps clarify some things. And we haven’t done a great job of reaching out to the community to explain our work. One difficulty in operating a multiplier charity is that it can be tough to promote your own organization, since your whole purpose is to promote other charities.

I was even unaware of how high your estimated multiplier is (if you had asked me to guess prior to your comment, there's no way I would've gone higher than 4x).

FWIW, I think most (maybe all) multiplier organizations report multipliers well above 4x.


1. Why have TLYCS's expenses tripled since 2016? Other than the website overhaul and the book launch, what have you been spending on? Are you aiming to engage in similar (financial) growth again in the near term? If not, would you be if you had more support from small donors?

Most of the increase was due to the book: expenses were around $300k in 2016 and 2017, about $450k in 2018, and a bit under $1m in 2019 as we ramped up for the book project. The increase in 2018 was due to adding a bit of headcount (by far our largest expense) and rationalizing some very low salaries that had been in place at the outset.

Going forward, we’d very much like to be able to grow our operational budget and would do so if we had more confidence in our ability to raise the necessary funds. Off the top of my head (definitely not an official organizational response) I’d say something like 30% annual growth would be manageable.  


2. What do you mean by "communicate with more donors?" What does that involve? How costly is it on a per-donor basis? How scalable is it?

I meant this very broadly: it covers a lot of things, and the cost of those activities likely varies a lot. Over the past few years we’ve done things like: building out a CRM system to manage our donors and leads, personally emailing and/or calling more donors to thank them and build a relationship, having more one-on-one conversations with large donors/prospects, holding more donor/fundraising events, and adding customization to our newsletter/email communications (so that, for example, donors and non-donors receive different newsletters). The common thread is that this all involves work, and you need to pay someone to do that work.

I think there is enormous room to scale this stuff. 


3. When you spend more money (beyond your basic operating expenses: salaries, office space if you have it, etc.), and that spending seems to be associated with an increase in donor interest in your recommended charities, what do you think generally explains that relationship, and how do you determine that such an increase in donor interest was counterfactually caused by the increase in spending?

Salaries account for the vast majority of our budget (meaning increased spending typically means increased headcount). We try to assess whether our strategy and execution are working, and the details depend on the project. Sometimes we add expenses that don’t immediately impact donations, like hiring an accountant. We didn’t try to model out that ROI; we just knew we had grown beyond the point where it was feasible to operate with a volunteer accountant. When we overhauled the website, we were able to look at a lot of quantitative metrics (conversion rates, engagement rates, etc.) and see that they improved a lot right after the change. When we launched TLYCS Australia, we didn’t have to watch the donations that came in for very long to know it was going to be a big success.

FYI our 2018 annual report has a good discussion of how we pivoted our strategy then assessed that change.


4. More generally, and this may be an extremely dumb question/something you have explained at length elsewhere, how do you arrive at your "money moved" estimates, and how do you ensure that they are counterfactually valid?

Thanks for asking; this is something most people probably don't know.

It’s pretty simple: we count money that’s been donated to TLYCS to be regranted to our recommended charities, plus money donated directly to those charities (and reported back to us) where the donor indicates that we influenced the gift.

We’ve discussed counterfactuals in detail in the appendix to our 2017 annual report. There are definitely a lot of considerations, but I generally think counterfactual concerns are becoming less of an issue over time (TLYCS’s role in producing the new book really mitigates concerns that we’re measuring Peter Singer’s impact rather than the organization’s.) 

One place we have made a counterfactual adjustment (which I think speaks to our attempts to be reasonable in our metrics) is with a specific family that has donated several million dollars over the past 5 years. We think it’s very likely those gifts would have been made without our involvement, so we’ve only counted 5% of their value in our numbers. FWIW, the charities involved told us they thought we should take full credit.

I don’t really know what TLYCS’s exact multiplier would be if you had perfect information and could account for all the counterfactuals. But I’m highly confident it’s well above the threshold of providing significant leverage. In 2020, even if our true impact is only 10% of our reported money moved figure (which I believe is conservative), we’d still provide >50% leverage. There’s a very large margin of safety (which you wouldn't really have if you had a mental model that our multiplier was 4x or less per your comment above).   
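The margin-of-safety arithmetic here can be made concrete. The dollar figures below are hypothetical placeholders chosen only to make the percentages work out as described, not TLYCS's actual 2020 numbers:

```python
# Hypothetical figures to illustrate the "margin of safety" argument.
# These are NOT TLYCS's actual financials.
expenses = 1_000_000
reported_money_moved = 15_000_000  # i.e. a ~15x reported multiplier
discount = 0.10  # suppose only 10% of reported money moved is truly counterfactual

true_money_moved = reported_money_moved * discount
leverage = (true_money_moved - expenses) / expenses

print(f"{leverage:.0%}")  # 50% extra impact per operating dollar
```

The same arithmetic shows why a prior of "4x or less" removes the cushion: a 4x reported multiplier discounted by 90% would imply losing money relative to giving directly.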


5. Do you personally believe that TLYCS will hit diminishing marginal returns on investments in growing its base of donors to its recommended charities sometime in the near or intermediate term?

Personally, I think TLYCS is just getting started. The new book is a powerful asset, and by getting free copies (and excerpts, video summaries, etc.) into many people’s hands, I’m confident that our money moved will grow significantly over the long run. We know the first book influenced a ton of people (including Cari Tuna); now we have a book that will have a much wider reach and has an organization behind it.

I know our multiplier will go up in 2020, but after that I’m not really sure. We focus more on “Net Impact”, which is our Money Moved minus our expenses, than on our multiplier, which is the ratio of those numbers.

I think Peter Hurford originally suggested the Net Impact metric back in the day, and it makes a lot of sense to us. We’d much rather spend $1 billion to move $5 billion than spend $1,000 to move $10,000. So potentially there could be diminishing marginal returns (i.e. a falling multiplier), but I don’t think that’s necessarily a problem if you’re trying to build an organization that does as much good as possible instead of one that’s as efficient as possible.
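The two metrics come apart in exactly the way the comparison above suggests. A quick sketch using the hypothetical figures already given in the comment:

```python
# Net Impact vs. multiplier, using the two hypothetical scenarios
# from the comment above ($1B to move $5B vs. $1k to move $10k).

def net_impact(money_moved, expenses):
    """Total good done net of operating costs."""
    return money_moved - expenses

def multiplier(money_moved, expenses):
    """Money moved per operating dollar."""
    return money_moved / expenses

big = (5_000_000_000, 1_000_000_000)   # spend $1B to move $5B
small = (10_000, 1_000)                # spend $1k to move $10k

print(multiplier(*big), multiplier(*small))  # 5.0 vs 10.0
print(net_impact(*big), net_impact(*small))  # 4000000000 vs 9000
```

The smaller operation looks twice as efficient by multiplier, but the larger one does hundreds of thousands of times more net good, which is the point of preferring Net Impact.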

Replies from: lukefreeman, HStencil
comment by lukefreeman · 2020-12-10T03:03:52.849Z · EA(p) · GW(p)

For what it's worth – Giving What We Can also noticed a bump in pledges that came from The Life You Can Save book relaunch (with people specifying that is how they found out). There's often spillover like this that isn't directly tracked by the organisation doing the multiplying.

Replies from: Jon_Behar
comment by Jon_Behar · 2020-12-10T16:43:10.735Z · EA(p) · GW(p)

Thanks for this data point Luke! It’s a good reminder that counterfactuals work both ways for multiplier orgs. Sometimes we count money that would have been donated counterfactually (overestimating our impact), but sometimes there are donations we don’t count that wouldn’t have happened if we didn’t exist (underestimating our impact).

Also worth noting that sometimes the spillover effect is in an area that isn't the multiplier org's main focus. For instance, I'd also expect the book relaunch to help 80K, which gets a nice discussion, but that's not anything TLYCS will capture in its metrics.

comment by HStencil · 2020-12-09T19:01:22.519Z · EA(p) · GW(p)

This is all fantastic information to have — thank you so much for explaining it! I'm really glad to have improved my understanding of this.

Replies from: Jon_Behar
comment by Jon_Behar · 2020-12-09T20:07:33.060Z · EA(p) · GW(p)

Glad it helped! Thanks for the great questions, I'm sure you're not the only one who had them!

comment by MichaelStJules · 2020-12-09T05:39:19.396Z · EA(p) · GW(p)

By my mental model—which may be mistaken—for each additional dollar I give to GiveWell’s Maximum Impact Fund, my impact rises by some smooth function that probably isn’t far off linear.

I think this depends on the particulars of the charities. Your donations might only impact them through whether or not they go to an extra region, which might happen only at funding thresholds. Many of their impacts are also random, e.g. most bednets don't save lives, and most deworming pills are used on children without worms.

What you seem to be describing is risk aversion, but GiveWell's cost-effectiveness estimates assume risk neutrality, too, so this might have implications for how to prioritize between them. I'd guess GiveDirectly would look relatively more attractive than otherwise.

Replies from: HStencil
comment by HStencil · 2020-12-09T06:51:48.943Z · EA(p) · GW(p)

I really don’t think my argument is about risk aversion at all. I think it’s about risk-neutral expected (counterfactual) value. The fact that it is extraordinarily difficult to imagine my donations to a multiplier charity having any counterfactual impact informs my belief about the likely probability of my donations to such an organization having a counterfactual impact, which is an input in my expected value calculus. You’re right that under some circumstances, a risk-neutral expected value calculus will favor small donors donating to “step-functional” charities that can’t scale their operations smoothly per marginal dollar donated, but my argument was that in the specific case of multiplier charities, the odds of a small-dollar donation being counterfactually responsible for moving the organization up an impact step are vanishingly small (or at least that this is the most reasonable thing for small-dollar donors without inside information to believe). The fact that impact in this context is step-functional is a part of the explanation for the argument, not the conclusion of the argument.

With respect to the question of “relative step-functionality,” though, it’s also not clear to me why, compared to a multiplier charity, one would think that giving to GiveWell’s Maximum Impact Fund would be any more step-functional on the margin. It seems odd to suggest that being counterfactually responsible for an operational expansion into a new region is among the most plausible ways that a small-dollar gift to the AMF, for instance, has an impact. Clearly, such a gift allows the AMF to distribute more nets where they are currently operating, even if no such expansion into a new region is presently on the table. Moreover, I find this particularly confusing in the case of the Maximum Impact Fund, which allocates grants to target specific funding gaps, often corresponding to very well-defined organizational initiatives (e.g. expansions into new regions), the individual cost-effectiveness of which GiveWell has modeled. It’s obviously true that regardless of whether one gives to a multiplier charity or to the Maximum Impact Fund, there is some chance that one’s donations either A) languish unused in a bank account, or B) counterfactually cause something hugely impactful to happen, but given that in the case of GiveWell, we know the end recipients have a specific, highly cost-effective use already planned out for this particular chunk of money (and if they have extra, they can just put it toward… more nets), whereas in the multiplier charity case, we don’t have any reason to believe they could use these specific funds at all (not to mention productively), doesn’t it seem like the balance of expected values here favors going with GiveWell?

Finally, while it is obviously true that most nets don’t save lives, I fail to see how that bears on the question at hand. We both agree that this is reflected in GiveWell’s cost-effectiveness analysis, which we (presumably) both agree that we have strong reason to trust. We have no such independent cost-effectiveness analysis of any multiplier charity. And the fact that most nets don’t save lives certainly isn’t a reason why the impact of donations to the AMF would not rise by some smooth function of dollars donated. The only premise on which that argument depends is that if they don’t have anything else good to do with my money (which presumably, they do, having earned a grant from the Maximum Impact Fund), they can always just buy more nets. Given the current scale of global net distribution relative to the total malaria burden, it seems wildly unlikely that a much larger percentage of those nets would fail to save lives than was the case during previous net distributions.

Replies from: MichaelStJules
comment by MichaelStJules · 2020-12-09T15:56:39.481Z · EA(p) · GW(p)

I really don’t think my argument is about risk aversion at all. I think it’s about risk-neutral expected (counterfactual) value. The fact that it is extraordinarily difficult to imagine my donations to a multiplier charity having any counterfactual impact informs my belief about the likely probability of my donations to such an organization having a counterfactual impact, which is an input in my expected value calculus.

If, as Jon suggests, the average impact scales well (even if historically not smoothly), then even though most small donations make little difference on their own, a donation that does push the organization past a threshold makes a huge difference, enough to compensate for all the ones that don't. So the expected value can look good even if you can't confirm whether your particular donation will cross a threshold. It's similar to this argument for veg*nism.
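This threshold argument can be checked with a toy simulation. The step size, step value, and the assumption that the org's position within the current step is uniformly unknown are all illustrative:

```python
# Toy simulation: with a step-shaped impact function and an unknown
# position within the step, a small donation's expected value matches
# the average multiplier. All parameters are illustrative assumptions.
import random

random.seed(0)

STEP_COST = 100_000    # dollars needed to fund one discrete "step"
STEP_VALUE = 1_000_000 # impact unlocked when the step is funded (10x on average)
DONATION = 1_000

def expected_value_per_trial(trials=100_000):
    """Average impact of one DONATION-sized gift, with the org's position
    within the step drawn uniformly at random each trial."""
    total = 0.0
    for _ in range(trials):
        position = random.uniform(0, STEP_COST)
        if position + DONATION >= STEP_COST:  # this gift crosses the threshold
            total += STEP_VALUE
    return total / trials

ev = expected_value_per_trial()
print(ev / DONATION)  # close to 10, the average multiplier
```

The gift crosses the threshold with probability DONATION/STEP_COST = 1%, so its expected value is 1% of STEP_VALUE, i.e. about 10x the donation. The disagreement below is over whether that uniform-uncertainty assumption holds for a small donor who expects large donors to fill the steps.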

It seems odd to suggest that being counterfactually responsible for an operational expansion into a new region is among the most plausible ways that a small-dollar gift to the AMF, for instance, has an impact. Clearly, such a gift allows the AMF to distribute more nets where they are currently operating, even if no such expansion into a new region is presently on the table.

Have you confirmed this about AMF? In the case of GiveDirectly, they give to whole villages at a time, so maybe the beneficiaries in the "marginal" village will get more or less, but I imagine there's a cutoff where they won't bother and just wait for more donations instead. Similarly, the school-based deworming charities might wait until they have enough to deworm another whole school. Of course, these villages and schools might be really small, so it might not matter too much unless you're making very small donations.

Replies from: HStencil
comment by HStencil · 2020-12-09T18:56:54.514Z · EA(p) · GW(p)

Yes, that argument for veg*anism is a big part of why I’m a vegetarian, but it does not on its own entail that one should prefer giving to multiplier charities rather than to the GiveWell Maximum Impact Fund. That depends on the empirical question of how the relative expected values weigh out. My argument is that there are sound reasons to believe that in the multiplier charity case specifically, the best-guess expected values do not favor giving to multiplier charities. “Your donation to a multiplier charity might have a big positive impact if it pushes them up an impact step,” doesn’t really respond to these reasons. Obviously, I agree that my donation might have a really big positive impact. I am just skeptical that we have sufficient reason to believe that, at the end of the day, the expected value is higher. I think the main reasons why I, at least prior to conversing with Jon, was strongly inclined to think the expected value calculus favored GiveWell were:

  1. TLYCS’s multiplier likely doesn’t exceed 4x. [I have updated against this view on account of Jon’s comments.]
  2. There is a much higher likelihood that TLYCS sees diminishing marginal returns on “charitable investments” than organizations directly fighting, say, the global malaria burden. [I have updated somewhat against this view on account of Jon’s comments.]
  3. If a particularly promising opportunity for TLYCS to go up an impact step were to present itself, it most likely would get filled by a large donor, who would fully fund the opportunity irrespective of my donation. (In the case of the AMF—to continue with our earlier example—I imagine most in the donor community assume that such opportunities get funded with grants from GiveWell’s Maximum Impact Fund; it has proven to be an effective coordination mechanism in that sense.)
  4. There are good reasons to be at least somewhat suspicious of the impact estimates that multiplier charities put out about themselves, particularly given how little scrutiny or oversight exists of their activities. There’s even a reasonable argument, I think, that such organizations, in the status quo, face strong incentives (due to potential conflicts of interest) to optimize for achieving aims unrelated to having a positive impact. For instance, I think Peter Singer’s work likely is highly effective at persuading people to give more to effective charities, but imagine for a moment that TLYCS were to discover that, in fact, owing to the many controversies surrounding Singer, his association with the movement on net turned people away. Based on Jon’s remarks during this forum discussion, he seems like a great person, but I don’t think we have any general reason to believe that TLYCS would respond to that discovery in a positive-impact-maximizing way. Singer is such a large part of the organization that it seems plausible to me that he would be able—if he wished—to push it to continue to raise his profile, as it does today, even if doing so were likely net negative for the EA project. Furthermore, in reality, if something like this were to occur, it would probably happen through a slow trickle of individually inconclusive pieces of evidence, not through a single decisive revelation, so subconscious bias in interpreting that evidence inside of TLYCS could lead to this sort of suboptimal outcome even without anyone doing anything they believed might be harmful. Obviously, this is a deliberately somewhat outlandish hypothetical, but hopefully, it gets the point across.

Regarding your final point, I basically agree with your reasoning here. I have not confirmed my mental model with the AMF, and it’s fair to say I should. However, I also think that 1) you’re right that beneficiaries in “marginal” villages may get more (or there may be more of them) on account of my donations, and 2) deworming is so cheap (as are mosquito nets) that my donations to deworming charities probably do cover entire schools, etc.

answer by Jon_Behar · 2020-12-08T14:10:31.142Z · EA(p) · GW(p)

I don’t believe the multipliers that fundraising organizations report (e.g. because they don’t appropriately adjust for money that would have been donated counterfactually, rely on aggressive assumptions, or ignore the opportunity cost of having people working at the multiplier organization).

comment by MichaelStJules · 2020-12-09T17:32:49.534Z · EA(p) · GW(p)

There's also the question of whether the multiplier charities are just replacing some of the fundraising that the beneficiary charities would do in their absence, and whether or not they're better at it.

Could the beneficiary charities coordinate to fund the multiplier charity? Why don't they? Is it because they think their own fundraising is better, or that their regular donors wouldn't like that, or something else?

Replies from: Jon_Behar
comment by Jon_Behar · 2020-12-09T18:23:34.370Z · EA(p) · GW(p)

I think it’s preferable to have people give through intermediaries, so that the message is “give to organizations the experts think are best” vs. having every charity try to argue for its own impact and having donors try to make sense of it all.

That message gets undermined if the recommendations aren’t independent, which is a serious problem with having the recommended charities fund the multiplier org.

Replies from: MichaelStJules
comment by MichaelStJules · 2020-12-09T20:31:05.450Z · EA(p) · GW(p)

That's fair, but if the fundraising org (e.g. TLYCS) was independent of a charity evaluator (e.g. GiveWell) and took all of its recommendations from them, then this seems like it would be okay. I know TLYCS supports more than just GiveWell-recommended charities, though.

Replies from: Jon_Behar
comment by Jon_Behar · 2020-12-09T22:44:24.609Z · EA(p) · GW(p)

Yeah, that makes sense.

comment by MichaelStJules · 2020-12-09T05:43:42.057Z · EA(p) · GW(p)

I think the multiplier estimates I've seen are usually of the average multiplier, not the marginal multiplier, but what we care about is the marginal impact. See also HStencil's answer [EA(p) · GW(p)] and the discussion that follows, though.

answer by HaukeHillebrandt · 2020-12-08T18:18:00.940Z · EA(p) · GW(p)

There's a good talk on this - Considerations for fundraising in Effective Altruism. [EA · GW]

comment by Jon_Behar · 2020-12-08T21:49:00.246Z · EA(p) · GW(p)

Interesting talk, thanks for sharing!

Stefan’s framework (which I largely agree with) would argue that the potential funding base for multiplier organizations is quite small due to their complexity, and is probably limited to the EA community (or a subset thereof). So I’m trying to do a little market research to learn more about what that audience thinks about multiplier organizations. 

The talk also argues for focusing on the largest donors, which in EA usually means Open Phil. But that’s less of an option for multiplier organizations as Open Phil’s EA “program does not fund organizations focused primarily on raising money for effective charities or organizations primarily focused on animal welfare or global poverty (though organizations in these categories might qualify for support under another focus area, e.g. farm animal welfare).”

Replies from: MichaelStJules
comment by MichaelStJules · 2020-12-09T20:48:11.125Z · EA(p) · GW(p)

The talk also argues for focusing on the largest donors, which in EA usually means Open Phil. But that’s less of an option for multiplier organizations as Open Phil’s EA “program does not fund organizations focused primarily on raising money for effective charities or organizations primarily focused on animal welfare or global poverty (though organizations in these categories might qualify for support under another focus area, e.g. farm animal welfare).”

You wouldn't necessarily approach large donors to fund TLYCS itself (although you could), you could approach them to directly fund the charities TLYCS supports. I think that's what Stefan had in mind.

Also, they could fund TLYCS through their global health and poverty program instead. They've funded One for the World. The EA Infrastructure Fund has also funded TLYCS among many other multiplier orgs.

Replies from: Jon_Behar
comment by Jon_Behar · 2020-12-09T22:36:15.505Z · EA(p) · GW(p)

Also, they [Open Phil] could fund TLYCS through their global health and poverty program instead. They've funded One for the World. The EA Infrastructure Fund has also funded TLYCS among many other multiplier orgs.

To get funded, One for the World had to change the recommendations to use only GiveWell's research. That was also a precondition of any discussion with GiveWell about funding for TLYCS, which was not a strategic compromise we were willing to make. 

As you say, the EA Infrastructure Fund has funded a lot of multiplier orgs. But aside from Founders Pledge, the grants have been pretty small: at the organizational level, I think they're all under $30k over the almost 4 years the fund has been operating. That's definitely helpful, but not really a sustainable funding source for an organization.

answer by capybaralet · 2020-12-09T04:32:23.269Z · EA(p) · GW(p)

I'm skeptical of multiplier organizations' relative effectiveness because the EA community doesn't seem that excited about them.

(P.S.: This is actually probably my #1 reason, as someone who hasn't spent much time thinking about where people should donate.  I suspect a lot of people are wary of seeming too enthusiastic because they don't want EA to look like a pyramid scheme.)

comment by MichaelStJules · 2020-12-09T21:17:47.729Z · EA(p) · GW(p)

Open Phil and the CEA Global Health and Development Fund have each made a grant to One for the World before, Open Phil has made grants to Founders Pledge, and the EA Infrastructure Fund has made grants to TLYCS, One for the World, RC Forward, Raising for Effective Giving, Founders Pledge, a tax deductible status project run by Effective Altruism Netherlands, Generation Pledge, http://gieffektivt.no/, Lucius Caviola and Joshua Greene (givingmultiplier.org), EA Giving Tuesday and Effektiv Spenden.

Of the 4 EA funds, the EA Infrastructure Fund has paid out the least to date, though, and it looks like they all started paying out in 2017.

Replies from: Jon_Behar, Jon_Behar, capybaralet
comment by Jon_Behar · 2020-12-09T22:43:51.198Z · EA(p) · GW(p)

My sense is that most individual donors aren't excited about multiplier orgs because they find them complicated, don't have time to dig into the leverage numbers to really understand them, and therefore don't trust those numbers. And I think that's a pretty reasonable strategy for most individuals. But it does seem telling that funders that have the resources to do more vetting have supported such a wide range of multiplier orgs.  

comment by Jon_Behar · 2020-12-14T17:51:59.100Z · EA(p) · GW(p)

These sophisticated donors’ backing of such a wide range of multiplier orgs supports the idea that there could be a lot of leverage out there to be had. If that’s true, it also has some interesting implications for the “it’s hard to get a job at an EA org” discussion that’s been going on for a while, most recently here [EA · GW].

Here's a simplified thought experiment. Let’s say you invested $1 million in the orgs listed above, allocated proportionally to their current size (not all that far off from what the Infrastructure Fund has actually done, but we’ll use stylized numbers to keep the math simple). Salaries are typically the biggest expense for multiplier orgs, so let’s say $800k flows through to hiring new people. Assume $100k/year per new person, and that’s 8 new hires. If 75% of those jobs go to people in the EA community, that’s 6 EAs getting the sorts of jobs that are immensely desirable and immensely scarce.
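
The arithmetic in this thought experiment can be sketched directly; all of the inputs below (the 80% salary share, $100k cost per hire, 75% EA hiring rate) are the stylized assumptions from the paragraph above, not real data:

```python
# Stylized thought-experiment numbers (assumptions, not actual org data).
investment = 1_000_000       # hypothetical donation spread across multiplier orgs
salary_share = 0.80          # assume ~80% of spending goes to salaries
cost_per_hire = 100_000      # assumed all-in annual cost per new person
ea_hire_rate = 0.75          # assumed share of new roles filled by EAs

salary_budget = investment * salary_share          # $800k to hiring
new_hires = int(salary_budget // cost_per_hire)    # 8 new roles
ea_hires = int(new_hires * ea_hire_rate)           # 6 roles going to EAs

print(salary_budget, new_hires, ea_hires)  # 800000.0 8 6
```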

If the multiplier model really works, $1 million will be a small fraction of what’s needed to build a flourishing ecosystem of multiplier orgs with models spanning research (e.g. GiveWell, ACE), fundraising for targeted causes (e.g. TLYCS, OFTW), fundraising for targeted donors (e.g. Founders Pledge and REG), and country-level organizations that provide tax-deductible giving (e.g. RC Forward, EA Netherlands). If you built that ecosystem, you’d quickly create dozens of new roles. So if the multiplier model works at scale, you’d move a ton of incremental money while also making real headway on the scarcity of EA jobs. (To be clear, I don’t think we should fund multiplier orgs so EAs will be able to get the jobs they want; I’m just saying that would be a nice added benefit if the multiplier model works, and another reason to investigate whether it does.)

comment by capybaralet · 2020-12-11T00:07:55.535Z · EA(p) · GW(p)

Do you disagree that the EA community at large seems less excited about multiplier orgs vs. more direct orgs?  

Replies from: MichaelStJules
comment by MichaelStJules · 2020-12-11T20:29:30.370Z · EA(p) · GW(p)

No, I agree with that.

comment by Jon_Behar · 2020-12-09T15:46:02.426Z · EA(p) · GW(p)

Thanks for sharing your thinking on this! Hopefully this exercise will shed some light on whether that lack of excitement is warranted, or whether it could represent an untapped opportunity.

answer by Jon_Behar · 2020-12-08T14:12:08.932Z · EA(p) · GW(p)

I think multiplier organizations have provided leverage in the past, but think that going forward the marginal multiplier will be lower than the average multiplier

answer by Jon_Behar · 2020-12-08T14:11:09.780Z · EA(p) · GW(p)

Multiplier organizations typically raise funds for a lot of different charities, and I only care about money that’s raised for the charity with the highest absolute impact

answer by Jon_Behar · 2020-12-08T14:12:26.656Z · EA(p) · GW(p)

I’m generally skeptical of the multiplier model because it seems too good to be true

answer by Jon_Behar · 2020-12-08T14:11:29.401Z · EA(p) · GW(p)

There aren’t multiplier organizations available in the cause areas I care about

answer by Monica · 2020-12-09T19:15:03.546Z · EA(p) · GW(p)

I have very strong opinions on which organization in the cause area I care about is doing the most effective work, and I don't think the relevant evaluating organization is any better equipped to opine on it than I am. I sometimes compare evaluating charities to doing a Fermi estimate. If a Fermi problem involves enough steps, or if your intermediate steps are sufficiently far off, and you have some idea of the general magnitude of the target you're estimating, it becomes better to guess the end-estimate directly rather than rely on intermediate guesses to guide you. It seems to me that even though evaluators are often very thorough and transparent, they end up making a ton of assumptions on top of high-error estimates (because one has to in order to make any progress, not because they don't do a great job given what they're working with). I am not at all suggesting I could do a better job of rigorously estimating which organization is better, but I don't think that's the right approach given the complexity of some of these problems. In other words, I'm far more convinced by the direct-work charity's arguments and track record that it is doing the most effective work than I am by the evaluator charity's claim that it should be trusted to make that evaluation.

comment by Monica · 2020-12-09T19:48:51.677Z · EA(p) · GW(p)

As I'm reading more, I'm realizing that multiplier organizations are not as much about evaluation as my post makes them out to be, so I will just say that I agree with the point in the post about wanting to give to the single most effective charity rather than a mix, and not really understanding the value added by an intermediary organization.

Replies from: MichaelStJules, Jon_Behar
comment by MichaelStJules · 2020-12-09T20:53:31.597Z · EA(p) · GW(p)

The multiplier for the single organization you want to support within the mix could still be > 1. From this:

According to their own calculations, for every $1 spent

  • Raising For Effective Giving (REG) raised $8 to various effective charities ($3.21 for MIRI, $1.37 for AMF, $0.81 for animal charities) (2015 report)
  • The Life You Can Save (TLYCS) raised $2.27 to AMF and $3.17 to their other top charities (2015 report)
  • Giving What We Can (GWWC) raised $6 to their top charities, estimated $104 if future donations were included[1] (2009-2014 report)

These are old numbers, though.

Replies from: Jon_Behar
comment by Jon_Behar · 2020-12-09T21:44:45.698Z · EA(p) · GW(p)

I think this is a really important point.

To give you some updated numbers, in 2019 TLYCS raised over $6 for AMF for every dollar we spent on operations plus another $7 for other recommended charities. If you look only at GiveWell recommended charities, our multiplier was 10X.

As I mentioned to HStencil [EA(p) · GW(p)], if these multiplier numbers are remotely accurate, there’s a huge margin of safety. You could believe that donations to any charity other than AMF are totally worthless AND that TLYCS overestimated donations to AMF by 3x, and you still would have doubled your impact by giving to TLYCS. And our multiplier is going to be even higher this year. (I’m talking about TLYCS because that’s what I’m familiar with, but I also recall seeing strong multipliers from e.g. RC Forward and REG.)
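
The margin-of-safety claim reduces to simple arithmetic. The sketch below uses the 2019 figures quoted above, and treats the 3x overestimate and the zero value assigned to non-AMF donations as deliberately pessimistic assumptions:

```python
# 2019 TLYCS figures quoted above, per $1 of operating spend.
amf_raised = 6.0        # dollars raised for AMF per dollar spent
other_raised = 7.0      # dollars raised for other recommended charities

# Worst-case assumptions from the comment: non-AMF money counts for nothing,
# and the reported AMF figure is overstated by a factor of 3.
pessimistic_multiplier = amf_raised / 3 + other_raised * 0

print(pessimistic_multiplier)  # 2.0
```

Even under those assumptions, each $1 given to the multiplier org would still move $2 to AMF, i.e. double the impact of giving that dollar directly.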

comment by Jon_Behar · 2020-12-09T21:12:45.522Z · EA(p) · GW(p)

Really appreciate that you spelled out your thinking so clearly- thank you! 

I think intermediaries make more sense for more casual donors, unlike people like yourself who are putting lots of thought into where to give.

answer by Jon_Behar · 2020-12-08T14:11:49.967Z · EA(p) · GW(p)

I think multiplier organizations are significantly riskier than organizations doing direct work

answer by Jon_Behar · 2020-12-08T14:10:47.480Z · EA(p) · GW(p)

I feel an emotional “warm glow” when I give to charities that do direct work, but not when I give to multiplier organizations

3 comments

Comments sorted by top scores.

comment by MichaelStJules · 2020-12-09T04:16:04.493Z · EA(p) · GW(p)

Some discussion here: https://strivingforpositiveimpact.wordpress.com/2018/04/25/should-you-donate-to-a-fund-raising-meta-charity/

Replies from: Jon_Behar
comment by Jon_Behar · 2020-12-09T15:48:35.421Z · EA(p) · GW(p)

Interesting discussion! One of the things I find striking is how much stronger the case for multiplier organizations is now than it was a couple of years ago when that post was written. There are now a bunch of organizations with significantly higher multipliers than the ones discussed in the post, and they also have an established track record of delivering those multipliers.

comment by Jon_Behar · 2021-03-09T19:23:15.564Z · EA(p) · GW(p)

Thanks to everyone who voted and commented! It was helpful to learn more about how EAs think about multiplier orgs, and I hope it was helpful to hear my perspective from inside one of those orgs. 

Here are my biggest takeaways from the discussion, apologies that it took me so long to post this:

  • Somebody is likely wrong: either multiplier orgs are pursuing strategies that don’t work, or those strategies do work and donors are missing out on leverage. If EAs can learn more about which statement is true, there’s a real opportunity for improvement. In theory there could be a middle ground where multiplier orgs are pursuing strategies that work at their current scale and marginal multipliers are close to one, but it seems unlikely that all/most multiplier orgs fall into that category.
  • If you look at the history of multiplier orgs in EA, I think it’s clear that at least for some of them the model has worked. REG seems like a pretty clearcut case study of an organization that has spent very little money to move a lot of money to highly effective charities from donors who almost certainly never would have heard of those charities without REG.
  • By a large margin, people’s biggest objection to multiplier orgs is that they don’t trust the multipliers those organizations are reporting. People mentioned a variety of concerns [EA(p) · GW(p)] about the reported numbers, including that they aren’t counterfactually adjusted and that they think the marginal return on new donations is likely to be lower than the historical average return. And this may create a reinforcing dynamic, where EAs aren’t excited about multiplier orgs because they get the sense that other EAs aren’t excited about them [EA(p) · GW(p)].
  • There are very high barriers to entry to getting a better understanding of the multiplier space (in other words, I think this area is quite vetting constrained). There are a lot of different organizations, lots of different methodological considerations, and inconsistent reporting across multiplier orgs (e.g. money moved numbers may or may not be counterfactually adjusted). It would be hard enough for one person to learn enough to make an informed decision; it’s completely unreasonable to expect everyone to learn this much on their own. (Note: it's not clear to me that comparing multiplier orgs is harder than, say, comparing movement building orgs; my sense is that both areas are vetting constrained.)
  • The most sophisticated potential funders for meta orgs (Open Phil and the EA Infrastructure Fund) have actually supported about a dozen different multiplier organizations [EA(p) · GW(p)]. However, it’s unlikely [EA(p) · GW(p)] that these two funders alone can support a thriving ecosystem of multiplier orgs, given that Open Phil only considers a narrow subset of multiplier orgs and that the Infrastructure Fund’s funding capacity is quite low relative to what a thriving ecosystem would need.
  • The multiplier ecosystem is diverse enough that there may well be an organization out there that addresses your biggest concerns about the space. For example, if you’re worried that people might not keep the GWWC pledge, Founders Pledge’s legally binding pledge could be interesting. Conversely, if you’re worried that FP’s founders will give to ineffective charities, GWWC could be a good option. REG is extremely efficient but is relatively hard to scale, while TLYCS has a more scalable model but is willing to sacrifice efficiency in pursuit of that scale (i.e. TLYCS is more likely to pursue a strategy we think will cost $10 million and move $100 million [10x leverage] than a strategy that would cost $1k and move $100k [100x leverage]). I’m not trying to argue here that one model is superior to the other; my point is that the universe of multiplier orgs is broad enough to appeal to a lot of different types of donors.
  • I get the sense many EAs don’t realize just how terrible a “minimum viable product” is, and how significantly an MVP can be improved with dedicated work. In the context of multiplier organizations, I think there’s a misperception that a basic website with limited maintenance is good enough [EA(p) · GW(p)], and that this misperception drives the concerns people have about marginal impact being lower than average impact. The empirical evidence is at odds with the preference for MVP models, based on the experiences of orgs like RC Fwd [EA(p) · GW(p)], GWWC [EA · GW], and TLYCS. TLYCS is a pretty clear example because we essentially used the website + volunteer model from ~2009 through mid-2013, before switching to paid full time staff. When we made the switch, web traffic (which had been flat for 4 years) started increasing and continued to do so, and money moved has steadily grown ever since (and is probably >25x higher than it was in the MVP years). 

 

Outcomes I’d like to see going forward:

I’d love to see someone write up an overview of the multiplier space, similar to Larks’ annual AI Alignment Literature Review and Charity Comparison [EA · GW]. Consolidating information would make it much easier for donors to engage with the space. Something as simple as a list of organizations with a few sentences about their work, their multiplier data, and links to more info would go a long way. (Ideally this would be done by someone who doesn’t work at a multiplier org; I’ll post this as a volunteer project on EA Work Club.)

I’d hope that overview would encourage more EAs to dip their toes in the water by making a small donation to one or more multiplier orgs and/or subscribing to their mailing lists (I just did this to put some skin in the game). This is less about the actual money and more about making it more likely that you’ll stay informed about their work going forward. The more you do that, the better you’ll be able to make your own informed decision about whether their model is working.

A final note… While I’d love to see more people donating to multiplier orgs, I’d hate to see donors naively donating to the organization with the highest multiplier or otherwise incentivizing multiplier orgs to prioritize maximizing their short term multiplier. Ideally, both donors and organizations will prioritize strategies that maximize long run impact, and prioritize the magnitude of that long run impact (money moved – expenses) rather than the efficiency of that impact (money moved / expenses). For donors, I’d recommend asking 1) “do I believe in the strategy?” and 2) “do I believe the team can execute the strategy?”