Previously, I was a co-founder and co-executive director at the London-based Center on Long-Term Risk, a research group and grantmaker focused on preventing s-risks from AI.
My background is in medicine (BMed) and economics (MSc). See my LinkedIn.
Unless explicitly stated otherwise, opinions are my own, not my employer's. (I think this is generally how everyone uses the EA Forum; others who don't have such a disclaimer likely think about it similarly.)
Though in that case, is the upshot that I should donate to EA Funds, or that I should tell EA Funds to refer weird grant applicants to me?
If you're a <$500k/y donor, donate to EA Funds; otherwise tell EA Funds to refer weird grant applications to you (especially if you're neartermist – I don't think we're currently constrained by longtermist/meta donors who are open to weird ideas).
Regarding Charter Cities, I don't think EA Funds would be worried about funding them. However, I haven't yet encountered human-centric (as opposed to animal-inclusive) neartermist (as opposed to longtermist) large private donors who are open to weird ideas, and fund managers haven't been particularly excited about charter cities.
To me, it feels like I (and other grantmakers) have been saying this over and over again (on the Forum, on Facebook, in Dank EA Memes, etc.), and yet people keep believing it's hard to fund weird things. I'm confused by this.
Also, "weird things that make sense" does kind of screen off a bunch of ideas which make sense to potential applicants, but not to fund managers.
Sure, but that argument applies to individual donors in the same way. (You might say that having more diverse decision-makers helps, but I'm pretty skeptical and think this will instead just lower the bar for funding.)
Markets are made efficient by really smart people with deep expertise. Many EAs fit that description, and have historically achieved such returns doing trades/investments with a solid argument and without taking crazy risks.
Examples include: crypto arbitrage opportunities like these (without exposure to crypto markets), the Covid short, early crypto investments (high-risk, but returns were often >100x, implying very favorable risk-adjusted returns), prediction markets, meat alternatives.
Overall, most EA funders outperformed the market over the last 10 years, and they typically had pretty good arguments for their trades.
But I get your skepticism and also find it hard to believe (and would also be skeptical of such claims without further justification).
Also note that returns will get a lot lower once more capital is allocated in this way. It's easy to make such returns on $100 million, but really hard at a much larger scale.
Strong upvote, I think the "GiveDirectly of longtermism" is investing* the money and deploying it to CEPI-like (but more impactful) opportunities later on.
* Donors should invest it in ways that return ≥15% annually (and plausibly 30-100% on smaller amounts, with current crypto arbitrage opportunities). If you don't know how to do this yourself, funging with a large EA donor may achieve this.
I want to mildly push back on the "fund weird things" idea. I'm not aware of EA Funds grants having been rejected due to being weird. I think EA Funds is excited about funding weird things that make sense, and we find it easy to refer them to private donors. It's possible that there are good weird ideas that never cross our desk, but that's again an informational reason rather than weirdness.
Edit: The above applies primarily to longtermism and meta. If you're a large (>$500k/y) neartermist donor who is interested in funding weird things, please reach out to us (though note that we have had few to none weird grant ideas in these areas).
I've been skeptical of much of the IIDM work I've seen to date. By contrast, from a quick skim, this piece seemed pretty good to me because it has more detailed models of how IIDM may or may not be useful, and is opinionated in a few non-obvious but correct-seeming ways. I liked this a lot – thanks for publishing!
Like, if anyone feels like handing out prizes for good content, I'd recommend that this piece receive a $10k prize (though perhaps I'd want to read it in full before fully recommending).
With that sentence, I only meant to suggest that I wouldn't want CEA to become more risk-averse due to this post (or similar future posts). I didn't mean to implicitly discourage thoughtful critiques like this one. Sorry if my comment read that way! I also agree with you that CEA should avoid repeating any mistakes that were made.
I think it's great that CEA increased the event size on short notice. It's hard to anticipate everything in advance for complex projects like this one, and I think it's very cool that when CEA realized the potential mistake, it fixed the issue and expanded capacity in time.
I'd much rather have a CEA that gets important things broadly right and acts swiftly to fix any issues in time, than a CEA that overall gets less done due to risk aversion resulting from pushback from posts like this one*, or one that stubbornly sticks to early commitments rather than flexibly adjusting its plans.
I also feel like the decision not to worry too much about Covid seems correct given the most up-to-date risk estimates, similar to how conference organizers usually don't worry too much about the risk of flu/norovirus outbreaks.
(Edit - disclosure: From a legal perspective, I am employed by CEA, but my project (EA Funds) operates independently (meaning I don't report to CEA staff), and I wasn't involved in any decisions related to EA Global.)
* Edit: I don't mean to discourage thoughtful critiques like this post. I just don't want CEA to become more risk-averse because of them.
(Maybe I feel somewhat skeptical about 'move slowly with high quality' ever being a good choice – it seems to me that the quality/speed tradeoff is often overstated, and there's actually not that much of a tradeoff.)
Once something is up on the internet, it's up forever. Taking it down post-facto doesn't actually undo the damage.
I think this isn't actually correct – I think it depends a lot on the type of content, how likely it is to get mirrored, the data format, etc. E.g. the old Leverage Research website is basically unavailable now (except for the front page I think), despite being text (which gets mirrored a lot more).
You only need one person to sue you for things to go quite badly wrong.
Whether it actually goes 'badly wrong' depends on the type of lawsuit, the severity of the violation, the PR effects, etc. It's probably good to err on the side of not violating any laws, and worth looking into it a bit before doing it.
Interesting. Could you say more about why you believe that there are clusters of traits that go well together?
The main example that comes to my mind is that people have different personalities and preferences, so if your team clusters around a certain set of personality traits and preferences, that implies that some specific organizational design choices work better than others.
But I'd feel more reluctant to say things like "move fast and break things works well with hiring quickly"; I find it hard to see any obvious hills (i.e., local optima) along the variables you mentioned.
I would have said something more like: Which strategy is best will depend on the specifics of what you're trying to do (market, product, goals).
I'd guess that the labor should be valued at significantly more than $100k per person-year. Your calculation suggests that 64% of EA resources spent are funding and 36% are labor, but given that we're talent-constrained, I would guess that the labor should be valued at something closer to $400k/y, suggesting a split of 31%/69% between funding and talent, respectively. (Or put differently, I'd guess >20 people pursuing direct work could make >$10 million per year if they tried earning to give, and they're presumably working on things more valuable than that, so the total should be a lot higher than $200 million.)
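To make the arithmetic explicit, here's a quick sketch. The person-year count (and therefore the implied funding total) is a hypothetical placeholder; only the $100k/$400k valuations and the 64%/36% split come from the discussion above:

```python
# Rough arithmetic behind the funding-vs-labor split.
# The person-year count is illustrative, not a real estimate.
person_years = 2_000                     # hypothetical amount of direct-work labor

# At $100k/person-year, a 64%/36% funding/labor split implies this much funding:
labor_low = person_years * 100_000       # labor valued at $100k/y
funding = labor_low * (0.64 / 0.36)      # funding consistent with a 64/36 split

# Revaluing the same labor at $400k/person-year flips the split:
labor_high = person_years * 400_000      # labor valued at $400k/y
funding_share = funding / (funding + labor_high)
labor_share = 1 - funding_share
print(f"funding {funding_share:.0%} / labor {labor_share:.0%}")  # ≈ 31% / 69%
```

Note that the implied split is independent of the headcount chosen: it depends only on the original 64/36 ratio and the 4x revaluation of labor.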
Using those figures, the overallocation to global poverty looks less severe, we're over- rather than underallocating to meta, and the other areas look roughly similar (e.g., there still is a large gap in AI).
Regarding the overallocation to meta, one caveat is that the question was multi-select, and many people who picked that might only do a relatively small amount of meta work, so perhaps we're allocating the appropriate amount.
Regarding a YC incubator model, I think the main issue is just that people rarely generate sufficiently well-targeted and ambitious startup ideas. I really don't think we need another dozen donation apps or fundraising orgs, but that's what people often come up with. I think we'd want something that does more to help people develop better ideas. (Perhaps that's what you had in mind as well.)
FWIW, as someone who previously warned about risk of accidental harm, I personally mostly agree with this comment. I think what I care about more is "option value to shut projects down if they turn out to be harmful" than preventing damage in the first place (with the exception of projects that have very large negative effects from the very beginning).
I commented on a draft of this post. I haven't re-read it in full, so I don't know to what degree my comments were incorporated. Based on a quick glance it seems they weren't, so I thought I'd copy the main comments I left on that draft. My main point is that I think inserting regional groups into the funding landscape would likely worsen rather than improve the funding situation. I still think regional groups seem promising for other reasons.
Some of my comments (copy-paste, quickly written):
[Regarding applying for funding:] At a high level, my guess would be that this solution would increase overhead and friction in distributing money, rather than reducing it. I think setting up lots of regional grantmakers is a lot of work
That said, I think regional groups can be very useful and valuable for other reasons. I just don't really think they should do grantmaking.
I'm worried about different regional groups providing service of inconsistent quality, and/or applying inconsistent criteria in distributing money
I think we should think of ways to address the psychological issue of people being afraid, rather than building a lot of structure around this
I think [the EAIF would] have a pretty easy time setting up more scalable systems [once there is a much larger number of groups]
E.g. we could set up more standardized, faster processes for grant applications that fit certain categories that can be quickly reviewed by less senior people. The bottleneck for setting up such a system is having a sufficient number of applications for it to be worth doing
You also need to build the infrastructure for making the payments themselves efficiently, doing the financial accounting, running an entity, tax reporting, etc. – (…)
I think people routinely underestimate the time cost of running a legal entity with a lot of activity. I wish people would generally try really hard to eliminate any unnecessary operational busywork. Instead, we should focus relentlessly on the EA content and promising people, and use very pragmatic fast solutions for handling admin things
Keep using your hands, acknowledge that it may be (partly) psychosomatic, and don't worry too much about it. A friend told me they saw a surgeon for RSI, and the surgeon recommended continuing to use the hands normally and not worrying too much; that helped in their case.
Reducing phone usage; not using the phone in bed while lying down; not playing games on my phone.
In 80K's The Precipice mailing experiment, 15% of recipients reported reading the book in full after a month, and a further ~7% of people reported reading at least half.
I'm also aware of some anecdotal cases where books seemed pretty good - e.g., I know of a very promising person who got highly involved with longtermism within a few months primarily based on reading The Precipice.
The South Korea case study is pretty damning, though. I wonder if things would look better if there had been a small number of promising people who helped onboard newly interested ones (or whether that was already the case and it didn't work despite that).
I'd be pretty interested in engagement hours based on email clicks, if you have that data. I care less about open rates and more about whether someone goes on to read through key ideas pages for several hours based on that.
All that said, the high open rates you mentioned have updated me somewhat towards mailing lists being more valuable than I previously thought.
To me it sounds like you're underestimating the value of handing out books: I think books are great because you can get someone to engage with EA ideas for ~10 hours, without it taking up any of your precious time.
As you said, I think books can be combined with mailing lists. (If there was a tradeoff, I would estimate they're similarly good: You can either get a ~20% probability of getting someone to engage for ~10h via a book, or a ~5%(? most people don't read newsletters) probability of getting someone to engage for ~40h via a mailing list. And while I'd rather have one person engage deeply than many people engage shallowly, I think the first few engagement hours tend to be more valuable (less overdetermined) than the ones that follow later.)
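For what it's worth, the rough numbers above come out identical in expected engagement hours — a quick check (all inputs are the guesses from this comment, not measured data):

```python
# Back-of-the-envelope expected engagement hours for books vs. mailing lists.
# All probabilities and hour counts are the rough guesses from the comment above.
p_book, hours_book = 0.20, 10     # ~20% chance a book gets ~10h of engagement
p_list, hours_list = 0.05, 40     # ~5% chance a mailing list gets ~40h

ev_book = p_book * hours_book     # expected engagement hours via a book
ev_list = p_list * hours_list     # expected engagement hours via a mailing list
print(ev_book, ev_list)           # both come out to ~2 expected hours
```

Since the expected hours tie, the comparison then hinges on the point about the first few engagement hours being less overdetermined, which favors the book.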
One potential takeaway could be that we may want to set up the financial products we'd like to use for hedging ourselves – e.g., by setting up prediction markets for the quantity of oil consumption. (Perhaps FTX would be up for it, though it won't be easy to get liquidity.)
I'm surprised this comment was downvoted so much. It doesn't seem very nuanced, but there's obviously a lot going wrong with modern capitalism. While free markets have historically been a key driver of the decline of global poverty (see e.g. this and this), I don't think it's wrong to say that longtermists should be thinking about large-scale economic transition (though it should most likely still involve free markets).
A friend (edit: Ruairi Donnelly) raised the following point, which rings true to me:
If you mention EA in a conversation with people who don't know about it yet, it often derails the conversation in unfruitful ways, such as discussing the person's favorite pet theory/project for changing the world, or discussing whether it's possible to be truly altruistic. It seems 'effective altruism' causes people to ask the wrong questions.
In contrast, concepts like 'consequentialism', 'utilitarianism', 'global priorities', or 'longtermism' seem to lead to more fruitful conversations, and the complexity feels more baked into the framing.
I generally agree with most of what you said, including the 3%. I'm mostly writing for that target audience, which I think is probably at least a partial mistake, and seems worth improving.
I'm also thinking that there seem to be quite a few exceptions. E.g., the Zurich ballot initiative I was involved in had contributors from a very broad range of backgrounds. I've also seen people from less privileged backgrounds make excellent contributions in operations-related roles, in fundraising, or by welcoming newcomers to the community. I'm sure I'm missing many further examples. I think these paths are harder to find than priority paths, but they exist, and often seem pretty impactful to me.
I'm overall unsure how much to emphasize donations. It does seem the most robust option for the greatest number of people. But if direct work is often even more impactful, perhaps it's still worth emphasizing that more; it often seems more impactful to have 10 extra people do direct work than 100 people donate 10%. Of course, ideally we'd find a way to speak to all of them.
I strongly agree with the premise of this post and really like the analysis, but feel unhappy with the strong focus on physical products. I think we should instead think about a broader set of scalable ways to usefully spend money, including but not limited to physical products. E.g. scholarships aren't a physical product, but large scholarship programs could plausibly scale to >$100 million.
(Perhaps this has been said already; I haven't bothered reading all the comments.)
Yeah, in my model, I just assumed lower returns for simplicity. I don't think this is a crazy assumption – e.g., even if the AI portfolio has higher risk, you might keep your Sharpe ratio constant by reducing your equity exposure. Modelling an increase in risk would have been a bit more complicated, and would have resulted in a similar bottom line.
I don't really understand your model, but if it's correct, presumably the optimal exposure to the AI portfolio would be at least slightly greater than zero. (Though perhaps clearly lower than 100%.)
I think deciding between capital allocators is a great use of the donor lottery, even as a Plan A. You might say something like: "I would probably give to the Long-Term Future Fund, but I'm not totally sure whether they're better than the EA Infrastructure Fund or Longview or something I might come up with myself. So I'll participate in the donor lottery so if I win, I can take more time to read their reports and see which of them seems best." I think this would be a great decision.
I'd be pretty unhappy if such a donor then felt forced to instead do their own grantmaking despite not having a comparative advantage for doing so (possibly underperforming Open Phil's last dollar), or didn't participate in the donor lottery in the first place. I think the above use case is one of the most central ones that I hope to address.
I tentatively agree that further diversification of funding sources might be good, but I don't think the donor lottery is the right tool for that.
I would take a close look at who the grantmakers are and whether their reasoning seems good to you. Because there is significant fungibility and many of these funding pools have broad scopes, I personally expect the competence of the grantmakers to matter at least as much as the specific missions of the funds.
I don't think it's quite as clear that the LTFF is better than the EA Infrastructure Fund; I agree with your argument but think this could be counterbalanced by the EA Infrastructure Fund's greater focus on talent recruitment, or other factors.
I don't know to what degree it is hard for Longview to get fully unrestricted funding, but if that's hard for Longview, giving it unrestricted funding may be a great idea. They may run across promising opportunities that aren't palatable to their donors, and handing them over to EA Funds or Open Philanthropy may not be straightforward in some cases.
(Disclosure: I run EA Funds, which hosts the LTFF and EA Infrastructure Fund. Opinions my own, as always.)
It's worth pointing out that these questions apply specifically to global health and development, but could be very different in other cause areas.
I don't think question 1 provides evidence that money will do more good in the future. It might even suggest the opposite: As you point out, malaria prevention and deworming might run out of room for more funding, and to me this seems more likely than the discovery of a more cost-effective option that is also highly scalable (beyond >$30 million per year).
I took your spreadsheet and made a quick estimate for an AI mission hedging portfolio. You can access it here.
The model assumes:
AI companies return 20% annually over the next 10 years in a short-timelines world, but less than the global market portfolio in a long-timelines world,
AI companies have equal or lower expected returns than the global market portfolio (otherwise we're just making a bet on AI),
money is 10x more useful in a short-timelines world than in a long-timelines world.
In the model, the extra utility from the AI portfolio is equivalent to an extra 2% annual return.
My guess is that this is less than the extra returns one might expect if one believes the market doesn't price in short AI timelines sufficiently, but it makes the case for investing in an AI portfolio more robust.
Caveat: I did this quickly. I haven't thought very carefully about the choice of parameters, haven't done sensitivity analyses, etc.
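To illustrate just the mechanics of utility-weighting returns across timeline scenarios, here's a minimal sketch. All parameter values (the probability of short timelines, the long-timelines return, the market return) are placeholder guesses of mine, not the spreadsheet's, and the sketch ignores risk adjustment entirely, so the numbers won't match the model's ~2% figure:

```python
# Minimal sketch of utility-weighted portfolio comparison across two
# AI-timelines scenarios. All parameters are illustrative guesses.
p_short = 0.3                  # guessed probability of the short-timelines world
u_short, u_long = 10.0, 1.0    # money is 10x more useful given short timelines
years = 10

r_ai = {"short": 0.20, "long": 0.05}   # AI portfolio: 20%/y short, less long
r_mkt = {"short": 0.07, "long": 0.07}  # market portfolio: same in both worlds

def utility_weighted_growth(returns):
    """Utility-weighted expected final wealth per $1 invested."""
    w_short = p_short * u_short
    w_long = (1 - p_short) * u_long
    wealth_short = (1 + returns["short"]) ** years
    wealth_long = (1 + returns["long"]) ** years
    return (w_short * wealth_short + w_long * wealth_long) / (w_short + w_long)

def certainty_equivalent_return(returns):
    """Constant annual return yielding the same utility-weighted final wealth."""
    return utility_weighted_growth(returns) ** (1 / years) - 1

print(f"AI portfolio:     {certainty_equivalent_return(r_ai):.1%}")
print(f"Market portfolio: {certainty_equivalent_return(r_mkt):.1%}")
```

Because this toy version has no risk penalty, the AI portfolio's certainty-equivalent return comes out implausibly high; the point is only to show how the 10x utility weight inflates the value of returns concentrated in the short-timelines world.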
I am very excited to announce that we have appointed Max Daniel as the chairperson at the EA Infrastructure Fund. We have been impressed with the high quality of his grant evaluations, public communications, and proactive thinking on the EAIF's future strategy. I look forward to having Max in this new role!
> Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?
Yes, I basically think of this as an almost complete waste of time and money from a longtermist perspective (and probably neartermist perspectives too).
Just wanted to flag briefly that I personally disagree with this:
I think that fundraising projects can be mildly helpful from a longtermist perspective if they are unusually good at directing the money really well (i.e., match or beat Open Phil's last dollar), and are truly increasing overall resources*. I think that there's a high chance that more financial resources won't be helpful at all, but some small chance that they will be, so the EV is still weakly positive.
I think that fundraising projects can be moderately helpful from a neartermist perspective if they are truly increasing overall resources*.
* Some models/calculations that I've seen don't do a great job of modelling the overall ROI from fundraising. They need to take into account not just the financial cost but also the talent cost of the project (which should often be valued at rates vastly higher than are common in the private sector), the counterfactual donations / Shapley value (the fundraising organization often doesn't deserve 100% of the credit for the money raised – some of the credit goes to the donor!), and a ~10-15% annual discount rate (this is the return I expect for smart, low-risk financial investments).
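As a toy illustration of those three corrections (every input number below is made up for the example; the one-year discount application is also a simplification):

```python
# Illustrative adjustment of a naive fundraising ROI, applying the three
# corrections from the footnote. All inputs are hypothetical.
raised = 1_000_000               # $ raised per year (hypothetical)
financial_cost = 100_000         # $ spent on the project (hypothetical)
staff_years = 2                  # hypothetical team size
talent_cost_per_year = 400_000   # staff time valued well above market rates
credit_share = 0.5               # don't take 100% of credit; some goes to donors
discount = 0.12                  # ~10-15% annual discount rate

naive_roi = raised / financial_cost   # the calculation to avoid

# Apply counterfactual credit and one year of discounting to the benefit,
# and add the (highly valued) talent cost to the cost side:
adjusted_benefit = raised * credit_share / (1 + discount)
total_cost = financial_cost + staff_years * talent_cost_per_year
adjusted_roi = adjusted_benefit / total_cost

print(f"naive ROI: {naive_roi:.1f}x, adjusted ROI: {adjusted_roi:.2f}x")
```

With these placeholder numbers, a naive 10x ROI shrinks to roughly 0.5x, which is the footnote's point: the corrections can flip the sign of the verdict.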
I still somewhat share Buck's overall sentiment: I think fundraising runs the risk of being a bit of a distraction. I personally regret co-running a fundraising organization and writing a thesis about donation behavior. I'd rather have spent my time learning about AI policy (or, if I was a neartermist, I might say e.g. charter cities, growth diagnostics in development economics, NTD eradication programs, or factory farming in developing countries). I would love it if EAs generally spent less time worrying about money and more about recruiting talent, improving the trajectory of the community, and solving the problems on the object level.
Overall, I want to continue funding good fundraising organizations.
I wouldn't rule it out, but typically we might say something like: We are interested in principle, but would like to wait for another 6-12 months to see how your project/career/organization develops in the meantime before committing the funding (unless there's a convincing reason for why you need the funding immediately).
I'm excited that there's now more work happening on Effective Institutions / IIDM!
Some questions and constructive criticism that's hopefully useful:
The aim was to gauge the diversity of perspectives in the EA community on what “counts” as IIDM. This helps us understand what the community thinks is important and has the most potential for impact. We hope that the results will shape the rest of our work as a working group and provide a helpful starting point for others as well.
It seems that you're starting out with the assumption that IIDM is a useful category/area, and that figuring out its scope is helpful for determining what's most impactful. Was there a particular reason for taking the intermediate step via the scope/definition of IIDM? I personally would be curious to learn which kinds of activities people find most promising in this area, and why. In comparison, the scope question might just track a 'verbal dispute' rather than opinions on ground truths. (Edit: Looks like EdoArad pointed out something similar above.)
Relatedly, the survey gives a picture of what some people interested in IIDM believe about some high-level abstract categories. I wonder if the survey also gave you any insight into the types of activities that people think we should work on. E.g., what specific things do people have in mind when they talk about "Institutional design / governance", and why exactly do they think it's important? Does their reasoning hold up on closer inspection? I personally would feel very excited to see more object-level discussion of that kind. Perhaps a small number of people who have thought about IIDM carefully and systematically could share their object-level arguments on which approaches seem the most promising to them.