Let’s Fund: annual review / fundraising / hiring / AMA

post by HaukeHillebrandt · 2019-12-31T14:54:35.968Z · EA · GW · 13 comments

Contents

  Executive Summary
  What does Let’s Fund do?
  2019: Big picture activities and achievements
  Mission: Short-term metrics
  Vision: Long-term goals
  2019 Budgets and fundraising
  Key focus for 2020
  Fundraising Target for 2020
  Hiring
  Acknowledgments

Executive Summary

Let’s Fund has crowdfunded over $300,000 in total for the high-risk, high-leverage projects of our grantees.

We have also published several tens of thousands of words of high-quality research and analysis. For instance, Vox’s coverage of our climate policy research went viral, was retweeted by Bill Gates and many other policy wonks, and was featured in the American Institute of Physics newsletter.

We have done this on a $40,000 budget, suggesting a net present value of roughly $238,000 (at a 5% discount rate).

In 2020, we’re seeking $100,000 to cover the cost of our operations and grow this project. Our stretch / aggressive growth goal is $200,000.

If you want to support Let’s Fund’s operations and make a medium-sized donation, then the best thing for us would be to donate via our Stripe page—unfortunately these donations are not tax deductible. If you would like to make a slightly larger tax-deductible donation, please get in touch. If you want to donate less than $100, then the best thing would be to support our campaigns.

Ask us anything in the comments (but note that we’re on holiday from the 4th till the 9th of January and so there will be a delay in responding).

Edit 1/1/2020: We've updated some of the figures to include end-of-the-year donations and corrected a small error in the notes column of the spreadsheet (the total money raised did not come from just two donors).

What does Let’s Fund do?

Lets-Fund.org researches policy solutions to important problems and crowdfunds for the most effective ones.

We’re a “think-and-do tank”:

  1. We conduct in-depth research to get people to engage deeply with important problems and with the effectiveness of different solutions (e.g. the replication crisis, climate change, impact investing, and global catastrophic risks).
  2. Then, we encourage people to actually do something, e.g. by donating or making a grant to rigorously vetted, high-risk, high-reward charitable projects via crowdfunding campaigns.

2019: Big picture activities and achievements

In early 2019, we first got funding for Let’s Fund and Hauke, the CEO and research lead, started full-time work on this project. In mid-2019, Sahil contracted for Let’s Fund part-time for a few months to work on business operations and has since volunteered. Henry volunteered part-time for the project. Overall, roughly one year of full-time-equivalent (FTE) work has gone into the project since our launch 14 months ago.[1] Our CVs are at Lets-Fund.org/About.

Note that some of the money has been firmly committed but has not moved to the charities yet (however, there are a few softer commitments that have not been included).

  1. Better Science campaign: For our first crowdfunding campaign, we’ve raised ~$77k for an academic to change how researchers across scientific disciplines do research. In our grant report (Lets-Fund.org/Better-Science), we also discussed differential technological development,[2] and the Long-Term Future Fund endorsed this project with a grant. Our grantee has already hired assistants to push his advocacy forward, and the Registered Reports format has now been implemented at 200+ journals (including EA-relevant areas such as development economics[3] and global catastrophic biological risks[4]).
  2. Climate policy campaign: We’ve raised ~$225k for a think tank to work on climate policy. Our research (Lets-Fund.org/Clean-Energy) was covered on Vox,[5] and the article went viral, with 10k+ positive engagements on social media from politicians, academics, policy wonks, and billionaires. For instance, Bill Gates tweeted “If you read one article this week about climate change, check out this one”,[6] and the article was featured in the American Institute of Physics newsletter. The article also mentioned and reflected positively on EA (uncontroversial, high-quality analysis on a mainstream topic).
  3. Fundraising ratio: In total, we have raised ~$300k. We’ve spent ~$40k ($10k from the EA Meta Fund grant and ~$30k from private EA donors). This suggests a fundraising ratio of ~8x; the net value is ~$265k (net present value at a 5% discount rate: ~$238k; more below).
  4. Impact Investing: Our report on impact investing (Lets-Fund.org/Impact-Investing), was covered on Vox and was well received. For instance, a philanthropic advisor we know recently advised a client who wanted to impact invest $1.3m, but after introducing him to some conclusions of our report, he decided on the spot to put $650k into (classical) "investing to give" and $650k into immediate effective giving. We have heard that related work on the Mission Hedging investment strategy, which we popularized within the EA community, has been of interest to foundations in the EA space.
  5. Other research:
     - “Global development interventions are generally more effective than climate change interventions” (cited by 7 pingbacks on the EA Forum)
     - “Global Catastrophic Risks from Corporations”
     - “Economic growth and the case against evidence-based development” with John Halstead (forthcoming)

Mission: Short-term metrics

  1. Money raised. Ultimately, we want to optimize for and measure counterfactual, quality-adjusted money moved. In other words: how good are the projects we crowdfund for compared to OPP/EA Funds grants (e.g. 75% as good)? Where would donations go counterfactually (i.e. are we reaching donors who would otherwise not give effectively)? Crowdfunding, rather than merely operating as a think tank, keeps our work focused, applied, and grounded.
  2. Research quality. How good is our research, and how much do people engage with it? Metrics: time spent on site, feedback from (EA) researchers. Also: though small donors might not donate much, their donations are a revealed preference and thus a metric for “high-fidelity EA meme spreading” (parting with hard-earned cash is a good proxy for deep engagement with our ideas).

Vision: Long-term goals

  1. We want to normalize donating to high-risk, high-leverage, long-termist EA science and policy over more traditional charity. We also want to increase diversity of thought, discussion, and more active engagement around which projects are most effective.
  2. We want to be a philanthropy and prioritization think tank that affects philanthropic funding and thinking about prioritization generally. Long-term, we want to work with high-net-worth individuals (HNWs) to optimize their giving. We’ve recently started reaching out and pitching our grantees to HNWs and foundations.

2019 Budgets and fundraising

The Google spreadsheet referenced above can be found here.

The big picture is that we have raised more than $300k on a $40k budget. This suggests a fundraising ratio (or benefit-cost ratio) of ~8x, and a net present value (discounted benefits minus costs) of ~$238k.

Simcikas lists ways in which cost-effectiveness estimates can be wrong, and our estimate here is wrong in several of those and other ways. We have not quantified and included: the value of our volunteers’ time, the investment of some of our own money into the project, the counterfactual impact of altruistic employees, the high counterfactual value of money donated to our campaigns by EAs, and the costs of others’ work (see acknowledgments below). Thus, this analysis should be taken with a grain of salt. Having said that, we believe this simplified cost-effectiveness analysis is roughly accurate and comparable to other cost-effectiveness analyses.

We extend this cost-effectiveness analysis to make it comparable to giving money to GiveDirectly.

To do so, we make the following assumptions:

  1. If the average effectiveness of Let’s Fund’s campaigns is equal to the Long-Term Future Fund’s effectiveness (because one of our campaigns has received a grant from the Long-Term Future Fund), and...
  2. If the Long-Term Future Fund is ~33x more effective than giving to the Global Development Fund, as suggested by a survey of “EA leaders”, and...
  3. If the Global Development Fund is ~10x more effective than GiveDirectly, because it mostly pays out to GiveWell-recommended charities, which are usually ~10x more effective than cash...

...then the quality-adjusted GiveDirectly-equivalent net present value might be as high as $60 million. In other words, the project is roughly as good as donating $60m to GiveDirectly. This sounds grandiose and is likely wrong in many ways, but in general we believe the counterintuitive idea that raising a smaller amount for (riskier) very effective projects beats raising a larger amount for less effective projects is not wrong. As such, we believe this result is not off by more than an order of magnitude, and that the expected value of Let’s Fund’s activities through its grantees is worth at least on the order of millions of dollars to GiveDirectly.
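One illustrative way to reproduce a figure of this magnitude is to multiply the net present value through the chain of assumptions above, together with a quality adjustment. This is a sketch only; the 75% quality adjustment is the example figure from the short-term metrics section, not an output of our analysis:

```python
# Back-of-the-envelope sketch of the GiveDirectly-equivalent conversion.
# The 0.75 quality adjustment is illustrative, not a measured figure.
npv_usd = 238_000                  # net present value of money moved, at 5% discount
quality_adjustment = 0.75          # campaigns assumed 75% as good as LTFF grants
ltff_vs_gd_fund = 33               # assumption 2: LTFF vs. Global Development Fund
gd_fund_vs_givedirectly = 10       # assumption 3: GiveWell charities vs. cash

equivalent = npv_usd * quality_adjustment * ltff_vs_gd_fund * gd_fund_vs_givedirectly
print(f"${equivalent / 1e6:.0f}M GiveDirectly-equivalent")  # prints "$59M GiveDirectly-equivalent"
```

Dropping the quality adjustment entirely moves the result to ~$79M, i.e. the estimate stays within the same order of magnitude, consistent with the caveat above.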

Of course, given the level of wealth inequality, big donors donate much more than small donors. However, we feel there is additional value in small donations through crowdfunding because it creates deep engagement with and discussion of prioritization.

Moreover, small donors collectively donating to policy research has more democratic legitimacy than donations by ultra-high-net-worth individuals (cf. some politicians not accepting large donations[7] vs. some billionaires funding think tanks[8]).

We see our fundraising as an easily measurable and robust backbone of our activities, one that keeps our research focused on concrete real-world impacts.

We believe there is perhaps even more value in our research itself, but it’s less quantifiable. For instance, the ideas in the long research report for the “Better Science” campaign might have influenced the Long-Term Future Fund, because one of the fund managers said our report “played a major role in my assessment of the grant”.[9] The ideas in the long research report for the “Clean Energy” campaign might have influenced climate policy, because the Vox summary of the research went viral and some people in policy told us that they read the whole research report and it influenced their thinking.

Key focus for 2020

In Q1, we aim to do more research to start another crowdfunding campaign. The exact topic is TBD, but it might be economic growth in low-income countries, reducing risks from war, AI governance, or other global catastrophic risk (GCR) reduction.

If fundraising is unsuccessful, we might discontinue actively growing Let’s Fund in April (but continue running it with a skeleton crew on a volunteer/part-time basis).

If fundraising is successful, then the broad plan is to find a full-time co-founder and rent office space to professionalize and grow the project.

  1. Do more and higher-quality research with the aim of finding long-termist funding opportunities.
  2. Raise more money for current and future projects by:
     - setting up partnerships with foundations, HNWs, or other fundraising organizations that reuse our research (such as Effective Giving Germany);
     - experimenting with scalable outreach to small donors through content marketing or ads (e.g. offering carbon offsetting through our climate change campaign).
  3. Professionalize operations, e.g. improve the website’s UX and set up a non-profit (we’re currently running Let’s Fund as the Center for Applied Utilitarianism, a UK limited company).

Fundraising Target for 2020

Staff compensation: For 2020, we need $100,000 for two full-time annual salaries: for Hauke and for another full-time co-founder whom we would try to hire.

Overhead cost: $20,000 (mostly office rent in central London)

In total, this adds up to $120,000. We have already raised $20,000 from the Survival and Flourishing Fund. Thus, we are seeking an additional $100,000.

Stretch goal: We would take up to $200,000 to have more planning security and runway, pay efficiency wages, and grow more aggressively by hiring additional support staff and spending money on AdWords.

How to donate: If you want to support Let’s Fund’s operations and make a medium-sized donation, then the best thing for us would be to donate via our Stripe page—unfortunately these donations are not tax deductible. If you would like to make a slightly larger tax-deductible donation, please get in touch. If you want to donate less than $100, then the best thing would be to support our campaigns.

Hiring

Interested in working for Let’s Fund? Please fill in this form.

Acknowledgments

Special thanks to the following for helping Let’s Fund in various ways:

[1] "Announcing: "Lets-Fund.org: High-Impact Crowdfunding campaigns ...." 25 Oct. 2018, https://forum.effectivealtruism.org/posts/SPc9CXaLdEJGiDZy5/announcing-lets-fund-org-high-impact-crowdfunding-campaigns. Accessed 31 Dec. 2019.

[2] "Differential progress - Effective Altruism Concepts." https://concepts.effectivealtruism.org/concepts/differential-progress/. Accessed 31 Dec. 2019.

[3] "Registered Reports: Piloting a Pre-Results Review Process at ...." 9 Mar. 2018, https://blogs.worldbank.org/impactevaluations/registered-reports-piloting-pre-results-review-process-journal-development-economics. Accessed 31 Dec. 2019.

[4] "Opening Influenza Research - The Center for Open Science." https://cos.io/our-services/research/flulab/. Accessed 31 Dec. 2019.

[5] "The climate change policy with the most potential is the ... - Vox." 20 Sep. 2019, https://www.vox.com/energy-and-environment/2019/7/11/20688611/climate-change-research-development-innovation. Accessed 31 Dec. 2019.

[6] "Bill Gates on Twitter: "If you read one article about climate ...." 26 Jul. 2019, https://twitter.com/billgates/status/1154787966256058368. Accessed 31 Dec. 2019.

[7] https://www.washingtonpost.com/politics/2019/09/30/are-sanders-warren-grassroots-funded/. Accessed 31 Dec. 2019.

[8] "Billionaires Channel Millions to Think Tanks - Forbes." 4 Feb. 2012, https://www.forbes.com/sites/lauriebennett/2012/02/04/billionaires-channel-millions-to-think-tanks/. Accessed 31 Dec. 2019.

[9] "Payout Report: Long-Term Future Fund - Effective Altruism ...." 30 Aug. 2019, https://app.effectivealtruism.org/funds/far-future/payouts/4UBI3Q0TBGbWcIZWCh4EQV. Accessed 31 Dec. 2019.

13 comments

Comments sorted by top scores.

comment by Matt_Lerner (mattlerner) · 2019-12-31T19:59:42.315Z · EA(p) · GW(p)

Thanks for the writeup!

If the recent Bill Gates documentary on Netflix is to be believed, then Gates first became seriously aware of the problem of diarrhea in the developing world thanks to a 1998 column by Nicholas Kristof. It's hard to assess the counterfactual here (would Gates have encountered the issue in a different context? Would he have taken the steps he ultimately did after reading the Kristof piece?) but it seems plausible that Kristof's article constitutes a cost-effective intervention in its own right (if a not particularly targeted one).

I bring this up because I'm intrigued by the viral coverage of your clean energy research. It's not possible to quantify the impact of an article like this in any realistic way, but perhaps we can agree that a plausible distribution of beliefs about its value is close to strictly positive.

Future Perfect being what it is, it's obviously the case that Vox constitutes an unusually receptive channel for EA-adjacent research. But I'm curious whether you consider the wide propagation of your research in the news media a "risky and very effective" project, and whether your research products have been intentionally structured toward this end. If you have some takeaways from your big success so far, it could be very helpful to post them here: widely taken-up tweaks that make research propagate more effectively through the media are marginal improvements with potentially very high value.

comment by HaukeHillebrandt · 2020-01-02T18:58:15.471Z · EA(p) · GW(p)

Excellent comment!

If the recent Bill Gates documentary on Netflix is to be believed, then Gates first became seriously aware of the problem of diarrhea in the developing world thanks to a 1998 column by Nicholas Kristof. It's hard to assess the counterfactual here (would Gates have encountered the issue in a different context? Would he have taken the steps he ultimately did after reading the Kristof piece?) but it seems plausible that Kristof's article constitutes a cost-effective intervention in its own right (if a not particularly targeted one).

To clarify, the win here was not influencing Gates, because he is already very much on board with the clean energy innovation agenda (though if he really read the article, it might have ever so slightly shifted his views towards the importance of government vs. private R&D, which I feel he doesn't focus on enough).

Rather, the win was that he has 50m followers on Twitter and is considered a public intellectual and authority on climate change (he is publishing a book on climate change in 2020).

But yes, your general point is good, because, for instance, Reid Hoffman and others retweeted the Gates tweet; so perhaps there is a very small chance of a "Kristof effect".

I'm curious if you consider the wide propagation of your research in the news media a "risky and very effective" project, and if your research products have been intentionally structured toward this end.

Yes, the research is intentionally structured for wide-ish dissemination. This manifests in several ways:

I feel there are relatively few and only modest downside risks to this project. For instance, there's little information hazard [EA · GW]; however, there are some moral hazards. See the "Risks, reservations, drawbacks" section of the report, where we write:

"Overall, what these quotes have in common are concerns about the moral hazard of spreading a meme like 'breakthrough technology by itself will solve climate change'. Authors repeatedly caution that additional policies—especially carbon taxes—are needed. We think these concerns are warranted, but do not believe that this suggests that clean energy R&D should not be increased. We believe there is consensus amongst even these climate policy scholars that R&D levels must be increased substantially. Crucially, while the moral hazard aspect of clean energy R&D increases might drag out emission reduction, another aspect pushes more strongly in the other direction and makes carbon taxes more likely. For instance, one economic model suggests that "if a carbon tax imposes a dollar of cost on the economy, induced innovation will end up reducing that cost to around 70 cents".[82] Given that political acceptability is mainly a function of cost, making clean energy cheaper might make carbon taxes more likely."

The crowdfunding aspect of the campaign means that the campaign and its topic are at least a little bit optimized for being more readily understood by the wider public. This means that there are other, harder-to-explain topics that are perhaps more neglected and thus might be more effective. For instance, in the report, in the section "Climate change is relatively non-neglected", we write:

"Climate change is a high-profile topic that many people work on. It is funded by both governments and big private foundations. Thus, even though clean energy innovation in particular has been relatively underfunded within the climate policy space, it is conceivable that in the future ITIF might receive grants for their clean energy innovation program from other funders, which lowers the counterfactual impact of donating to this project. In other words, comparatively, climate change is not very neglected. For instance, the risks and expected losses of pandemics are of a similar magnitude to those of climate change, yet the area is more neglected by other funders."

Also, the page is intentionally structured hierarchically, going from less to more in-depth, with summaries at the top; for people who really want to read all the details, there's heavily footnoted analysis further down the page.

If you have some takeaways from your big success so far, it could be very helpful to post them here- widely taken-up tweaks to make research propagate more effectively through the media are marginal improvements with potentially very high value.

Generally, with questions about success there's of course a lot of survivorship bias, and a lot of it was perhaps just luck. Similarly, the first step to making research propagate widely is that you need to spend a lot of time and effort researching and editing until the research is not only really good but also very readable, which requires a lot of resources/privilege.

So perhaps take the following with a grain of salt; your mileage will vary.

If you want your research to be covered by the media, you of course need a good pitch, and you need to get in touch with a lot of relevant journalists (it's a numbers game). You can have your research peer-reviewed and say that so-and-so has reviewed it and says it's really good/interesting/novel research.

Then, to push the coverage of your research, you can find influencers for whom your content is highly relevant. I used social proof: I had a few select academics and policy wonks I was connected to retweet/endorse the article, because it was very much in their field of expertise, even if they didn't have very many followers. Then I used this to contact relevant influencers who had tweeted about climate change in the past and had also retweeted Vox articles (aligned political leaning). You can tell them that so-and-so has already retweeted it, as social proof, and ask if they could perhaps also retweet it because it's relevant to their audience.

Then there's a technique called power mapping that I used, where you get in touch with people who are connected to even more influential people. You're connected to many people through only a few degrees of separation (the small-world phenomenon), so you perhaps know someone who knows someone who knows an "influencer". You can see, for instance, whom Obama follows on Twitter, and then try to get those people to get, say, Obama to retweet the coverage of your research (because it's on a reputable site such as Vox).

Sorry if this was a bit rambly, but I hope you get the general idea.

comment by Henry_Stanley · 2019-12-31T16:59:46.944Z · EA(p) · GW(p)

I work with Hauke part-time on Let's Fund. We'd be happy to take any questions you might have!

comment by Khorton · 2020-01-02T17:45:08.095Z · EA(p) · GW(p)

How confident in your analysis and conclusion do you have to be in order to publish a recommendation? For example, do you believe "better wrong than vague"? Do you try to caveat to show your degree of confidence? How easy would it be to find a demonstrably incorrect statement or paragraph in your work?

comment by HaukeHillebrandt · 2020-01-02T19:48:26.870Z · EA(p) · GW(p)

Really interesting questions - thank you!

How confident in your analysis and conclusion do you have to be in order to publish a recommendation? For example, do you believe "better wrong than vague"?

I'm very confident in the conclusions of the research for campaigns and the bar for publication is substantially higher than for what I post on the EA forum. I usually also ask many people to review my research for campaigns (see acknowledgment sections in the reports).

On the EA Forum, I sometimes don’t excessively hedge my claims, for clarity’s sake. And I sometimes use the epistemic-status disclaimers you're referring to (“better wrong than vague”, “say wrong things [LW · GW]”, “Big, if true [? · GW]”, “Strong stances”).

Do you try to caveat to show your degree of confidence?

Yes, I use sensitivity analysis and careful language throughout.

For instance, in my cost-effectiveness analysis I caveat:

"Below we present a very rough, simple, back-of-the-envelope cost-effectiveness analysis (“Fermi estimate”). This model is crude and should not be taken literally. Rather than leaving our assumptions unarticulated and fuzzy, we think it is better to be wrong than vague, by stating assumptions explicitly so that they can be questioned and falsified (as the common aphorisms in statistics go: “Truth will sooner come out of error than from confusion” and “All models are wrong, but some are useful”). It also helps us think through relevant considerations and formalizes our intuitions. If you disagree with any of the inputs to our model, then you can create a copy of our spreadsheet and plug in your own parameters."

I also use the word "might" about 90 times in the Clean Energy campaign.

But there are some statements even in the report where I'm intentionally wrong for clarity's sake. For instance, when I write:

"The focus of advanced economies like EU countries to prioritize reducing their own domestic emissions is a natural impulse ('clean up your own backyard first'). But 75% of all emissions will come from emerging economies such as China and India by 2040. Only if advanced economies' climate policies reduce emissions in all countries, will we prevent dangerous climate change. We call this the cool rule: only if all countries reduce their emissions will the planet stay cool."

The bolded sentence is clearly wrong on some level, because we can perhaps use geoengineering to cool the planet or maybe emerging economies such as China will solve the issue. However, I feel this is less important to emphasize because it's somewhat unlikely and writing all that out would distract from the central message. By making strong statements such as "Only if" you're making your writing and central claims really clear so that they can be more easily falsified. But some people might disagree and like to hedge more.

How easy would it be to find a demonstrably incorrect statement or paragraph in your work?

I'm quite careful, I think, but given the length of the report I cannot rule out that there are errors somewhere. But I'd be somewhat surprised if they were easy to find. So I'll pay a bug bounty of $20 for any statement that is demonstrably incorrect.

However, I'm very confident in the central claims, because I try to triangulate with multiple lines of evidence, so that the conclusions do not depend on a single piece of data (https://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking/).

comment by Khorton · 2020-01-02T20:05:00.860Z · EA(p) · GW(p)

Thanks very much Hauke, really interesting! I'll keep an eye out for any bugs in future work ;)

comment by Khorton · 2020-01-02T17:29:34.298Z · EA(p) · GW(p)

You talk about 'Net Present Value' here, but not in a way I'm familiar with. Normally if I see NPV calculations for moving money from one person to another, I get an NPV of zero. I only add net benefits to society that go beyond simple redistribution. Can you explain what you've done here?

comment by HaukeHillebrandt · 2020-01-02T20:54:22.339Z · EA(p) · GW(p)

Yes, absolutely.

One year ago, I got $40k from the EA community; this is the cost to society, because I spent this money on Let's Fund operations.

Then, on average roughly one year later, I crowdfunded $300k for my campaigns (these are the benefits).

The benefits minus the costs are also called the net value, which is simply $260k. But if you discount these cash flows at 5% with the formula:

=NPV(0.05,(40*-1),300)

then you get a net present value of ~$234k.

Does that make sense?
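For readers who prefer code to spreadsheet formulas, here is a minimal Python sketch of the same calculation (figures in thousands of dollars; like the spreadsheet NPV function, the first cash flow is discounted by one full period):

```python
def npv(rate, cashflows):
    """Net present value, matching the spreadsheet NPV convention:
    cash flow t (0-indexed) is discounted by (1 + rate) ** (t + 1)."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cashflows))

# -40 = operating costs in year 1, 300 = money crowdfunded in year 2 (thousands of USD)
print(round(npv(0.05, [-40, 300])))  # prints 234
```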

comment by Khorton · 2020-01-02T22:10:55.644Z · EA(p) · GW(p)

Thanks Hauke. I spoke to an economist friend who explained you're using the formula for a business, while I'm thinking of the one for government. In government, we'd consider the money you crowdfunded a cost to society as well. (I'd argue mine is more appropriate for a fundraising charity, but at least I understand the difference now!)

comment by HaukeHillebrandt · 2020-01-10T08:01:58.934Z · EA(p) · GW(p)
In government, we'd consider the money you crowdfunded a cost to society as well.
I'd argue mine is more appropriate for a fundraising charity, but at least I understand the difference now!

Yes, I agree that for all non-profits or public-benefit companies, a net present value analysis from a societal perspective would be optimal and is what we ultimately care about. However, I feel my analysis is a good approximation: of course the money I crowdfund is a cost to society, given the opportunity costs, but the implicit assumption here is that the donors' money would counterfactually be spent on conspicuous consumption or perhaps ineffective charities. If that is the case, then because the value of increasing consumption [EA · GW] in advanced economies is comparatively small, and the cost-effectiveness analysis is relatively insensitive to whether we count this, the business-style analysis is a good approximation of the societal value, and it's OK to leave this out for simplicity's sake.

Fundraising charities routinely use "fundraising ratios", which are benefit-cost ratios and similar to net values, so this seems to be standard practice.

In the future, I might look more into the true counterfactual societal net value and see whether some of the money donated would have gone to similarly effective charities in the future.

I think EAF did a good job at this where they estimated the money they raised that would not have been donated otherwise [EA · GW].


comment by alexherwix · 2020-01-04T11:09:08.490Z · EA(p) · GW(p)

Maybe I am extending Khorton's point, but in addition to this simple calculation it might be interesting to consider the marginal counterfactual impact of your operations. I imagine that most of the $300k raised would have been raised for other longtermist causes, like the EA Long-Term Future Fund or similar donation opportunities.

Do you have some reasonable evidence for actually having "grown the pie" and added to the overall donation volume?

Otherwise, your marginal impact would be the expected value difference relative to other donation opportunities like EA Funds, which I expect to be somewhat close to zero (e.g., you make the analogy to EA Funds yourself in the post).

comment by HaukeHillebrandt · 2020-01-10T08:13:10.006Z · EA(p) · GW(p)

Yes, excellent question.

This is a really hard analysis to do, because it's very hard to assess what the money would have been spent on counterfactually; see my comment above to Khorton [EA(p) · GW(p)].

My subjective impression is that the $75k for the Better Science campaign was heavily skewed towards EA donors and would have gone to EA causes anyway. However, assuming returns to research, this might still have improved the quality of donations within the EA community, which counterintuitively can sometimes be more effective than growing the pie.

However, the $200k raised for the climate change campaign was heavily skewed towards non-EA donors, and perhaps the counterfactual here was less effective charities or even conspicuous consumption.

comment by Henry_Stanley · 2020-01-06T18:24:24.844Z · EA(p) · GW(p)

I imagine that most of the $300k raised would have been raised for other longtermist causes

Certainly this is true of some of the money raised, but much of it came through us getting exposure to the broader public (read: non-EAs) on Vox; it's not likely that those funds were otherwise destined for longtermist causes.

I'll come back to you with a more detailed breakdown of donors by source.