Posts

You can now apply to EA Funds anytime! (LTFF & EAIF only) 2021-06-17T20:06:11.610Z
EA Infrastructure Fund: Ask us anything! 2021-06-03T01:06:19.360Z
EA Infrastructure Fund: May 2021 grant recommendations 2021-06-03T01:01:01.202Z
[Coronavirus] Is it a good idea to meet people indoors if everyone's rapid antigen test came back negative? 2021-03-24T15:45:58.043Z
Some quick notes on "effective altruism" 2021-03-24T15:30:48.240Z
EA Funds has appointed new fund managers 2021-03-23T10:55:29.535Z
EA Funds is more flexible than you might think 2021-03-05T09:35:11.737Z
Apply to EA Funds now 2021-02-13T13:36:10.840Z
Giving What We Can & EA Funds now operate independently of CEA 2020-12-22T03:47:48.140Z
How to best address Repetitive Strain Injury (RSI)? 2020-11-19T09:15:27.271Z
Why you should give to a donor lottery this Giving Season 2020-11-17T12:40:02.134Z
Apply to EA Funds now 2020-09-15T19:23:38.668Z
The EA Meta Fund is now the EA Infrastructure Fund 2020-08-20T12:46:31.556Z
EAF/FRI are now the Center on Long-Term Risk (CLR) 2020-03-06T16:40:10.190Z
EAF’s ballot initiative doubled Zurich’s development aid 2020-01-13T11:32:35.397Z
Effective Altruism Foundation: Plans for 2020 2019-12-23T11:51:56.315Z
Effective Altruism Foundation: Plans for 2019 2018-12-04T16:41:45.603Z
Effective Altruism Foundation update: Plans for 2018 and room for more funding 2017-12-15T15:09:17.168Z
Fundraiser: Political initiative raising an expected USD 30 million for effective charities 2016-09-13T11:25:17.151Z
Political initiative: Fundamental rights for primates 2016-08-04T19:35:28.201Z

Comments

Comment by Jonas Vollmer on A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good · 2021-07-27T12:54:10.540Z · EA · GW

Yeah, in my model, I just assumed lower returns for simplicity. I don't think this is a crazy assumption – e.g., even if the AI portfolio has higher risk, you might keep your Sharpe ratio constant by reducing your equity exposure. Modelling an increase in risk would have been a bit more complicated, and would have resulted in a similar bottom line.

I don't really understand your model, but if it's correct, presumably the optimal exposure to the AI portfolio would be at least slightly greater than zero. (Though perhaps clearly lower than 100%.)

Comment by Jonas Vollmer on What would you do if you had half a million dollars? · 2021-07-22T11:19:19.420Z · EA · GW

I think deciding between capital allocators is a great use of the donor lottery, even as a Plan A. You might say something like: "I would probably give to the Long-Term Future Fund, but I'm not totally sure whether they're better than the EA Infrastructure Fund or Longview or something I might come up with myself. So I'll participate in the donor lottery so that, if I win, I can take more time to read their reports and see which of them seems best." I think this would be a great decision.

I'd be pretty unhappy if such a donor then felt forced to instead do their own grantmaking despite not having a comparative advantage for doing so (possibly underperforming Open Phil's last dollar), or didn't participate in the donor lottery in the first place. The above use case is one of the most central ones I hope to address.

I tentatively agree that further diversification of funding sources might be good, but I don't think the donor lottery is the right tool for that.

Comment by Jonas Vollmer on What would you do if you had half a million dollars? · 2021-07-22T11:11:30.165Z · EA · GW

I really liked this comment. Three additions:

  • I would take a close look at who the grantmakers are and whether their reasoning seems good to you. Because there is significant fungibility and many of these funding pools have broad scopes, I personally expect the competence of the grantmakers to matter at least as much as the specific missions of the funds.
  • I don't think it's quite as clear that the LTFF is better than the EA Infrastructure Fund; I agree with your argument but think this could be counterbalanced by the EA Infrastructure Fund's greater focus on talent recruitment, or other factors.
  • I don't know to what degree it is hard for Longview to get fully unrestricted funding, but if it is, giving them unrestricted funding may be a great idea. They may run across promising opportunities that aren't palatable to their donors, and handing those over to EA Funds or Open Philanthropy may not be straightforward in some cases.

(Disclosure: I run EA Funds, which hosts the LTFF and EA Infrastructure Fund. Opinions my own, as always.) 

Comment by Jonas Vollmer on Metaculus Questions Suggest Money Will Do More Good in the Future · 2021-07-22T10:36:53.413Z · EA · GW

It's worth pointing out that these questions apply specifically to global health and development; the answers could be very different in other cause areas.

I don't think question 1 provides evidence that money will do more good in the future. It might even suggest the opposite: As you point out, malaria prevention and deworming might run out of room for more funding, and to me this seems more likely than the discovery of a more cost-effective option that is also highly scalable (beyond $30 million per year).

Comment by Jonas Vollmer on A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good · 2021-07-09T09:49:53.939Z · EA · GW

I took your spreadsheet and made a quick estimate for an AI mission hedging portfolio. You can access it here.

The model assumes:

  • AI companies return 20% annually over the next 10 years in a short-timelines world, but less than the global market portfolio in a long-timelines world,
  • AI companies have equal or lower expected returns than the global market portfolio (otherwise we're just making a bet on AI),
  • money is 10x more useful in a short-timelines world than in a long-timelines world,
  • logarithmic utility.

In the model, the extra utility from the AI portfolio is equivalent to an extra 2% annual return. 

My guess is that this is less than the extra returns one might expect if one believes the market doesn't price in short AI timelines sufficiently, but it makes the case for investing in an AI portfolio more robust.

Caveat: I did this quickly. I haven't thought very carefully about the choice of parameters, haven't done sensitivity analyses, etc.
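Since the spreadsheet itself isn't linked here, the following is a minimal sketch of the kind of certainty-equivalent calculation the model describes. All parameters below (probability of short timelines, market return, the AI portfolio's return in each world) are my own illustrative assumptions, not the spreadsheet's inputs, and the resulting premium is very sensitive to them – so don't expect it to reproduce the 2% figure.

```python
import math

# Illustrative parameters (assumptions, not the spreadsheet's actual inputs)
years = 10
p_short = 0.5          # probability of the short-timelines world
r_market = 0.05        # annual market return, same in both worlds
r_ai_short = 0.20      # AI portfolio's annual return if timelines are short
# Pin the AI portfolio's expected return to the market's, per the second
# bullet above (otherwise we'd just be making a bet on AI):
r_ai_long = (r_market - p_short * r_ai_short) / (1 - p_short)
w_short, w_long = 10.0, 1.0  # money is 10x more useful if timelines are short

def expected_utility(r_short, r_long):
    """Probability- and usefulness-weighted log utility of final wealth."""
    return (p_short * w_short * math.log((1 + r_short) ** years)
            + (1 - p_short) * w_long * math.log((1 + r_long) ** years))

u_ai = expected_utility(r_ai_short, r_ai_long)
# Closed-form solve (possible because utility is logarithmic) for the uniform
# extra annual return x that makes the market portfolio match u_ai:
total_weight = p_short * w_short + (1 - p_short) * w_long
x = math.exp(u_ai / (total_weight * years)) - (1 + r_market)
print(f"AI return in the long-timelines world: {r_ai_long:.1%}")
print(f"Certainty-equivalent extra annual return: {x:.1%}")
```

With these made-up inputs the premium comes out much larger than 2%; lowering the probability of short timelines or shrinking the return gap between worlds brings it down.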

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-07-06T17:03:28.724Z · EA · GW

Update: Max Daniel is now the EA Infrastructure Fund's chairperson. See here.

Comment by Jonas Vollmer on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-07-06T17:03:12.812Z · EA · GW

Update: Max Daniel is now the EA Infrastructure Fund's chairperson. See here.

Comment by Jonas Vollmer on EA Infrastructure Fund: May 2021 grant recommendations · 2021-07-06T17:02:07.016Z · EA · GW

Update: Max Daniel is now the EA Infrastructure Fund's chairperson. See here.

Comment by Jonas Vollmer on EA Funds has appointed new fund managers · 2021-07-06T16:59:44.499Z · EA · GW

I am very excited to announce that we have appointed Max Daniel as the chairperson at the EA Infrastructure Fund. We have been impressed with the high quality of his grant evaluations, public communications, and proactive thinking on the EAIF's future strategy. I look forward to having Max in this new role!

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-07-06T09:05:49.327Z · EA · GW

I'm also in favor of EA Funds doing generous back payments for successful projects. In general, I feel interested in setting up prize programs at EA Funds (though it's not a top priority).

One issue is that it's harder to demonstrate to regulators that back payments serve a charitable purpose. However, I'm confident that we can find workarounds for that.

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-07-05T11:45:28.768Z · EA · GW

> Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?

> Yes, I basically think of this as an almost complete waste of time and money from a longtermist perspective (and probably neartermist perspectives too).

Just wanted to flag briefly that I personally disagree with this:

  • I think that fundraising projects can be mildly helpful from a longtermist perspective if they are unusually good at directing the money really well (i.e., match or beat Open Phil's last dollar), and are truly increasing overall resources*. I think that there's a high chance that more financial resources won't be helpful at all, but some small chance that they will be, so the EV is still weakly positive.
  • I think that fundraising projects can be moderately helpful from a neartermist perspective if they are truly increasing overall resources*.

* Some models/calculations that I've seen don't do a great job of modelling the overall ROI from fundraising. They need to take into account not just the financial cost but also the talent cost of the project (which should often be valued at rates vastly higher than are common in the private sector), the counterfactual donations / Shapley value (the fundraising organization often doesn't deserve 100% of the credit for the money raised – some of the credit goes to the donor!), and a ~10-15% annual discount rate (this is the return I expect for smart, low-risk financial investments).
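To make this concrete, here is a minimal sketch of the kind of fuller ROI calculation the footnote describes; every number below is a made-up illustration, not a figure from any real organization.

```python
# Illustrative fundraising-ROI sketch (all inputs are made-up assumptions)
money_raised = 1_000_000     # gross donations attributed to the org this year
credit_share = 0.5           # Shapley-style split: the org doesn't get 100% credit
financial_cost = 200_000     # the org's own budget
staff_hours = 4_000
talent_value_per_hour = 300  # EA talent often valued far above market rates
discount_rate = 0.12         # ~10-15%/y expected from low-risk investments instead

talent_cost = staff_hours * talent_value_per_hour
# Assume the donations arrive a year after the costs are incurred,
# so discount them by one year:
net_value = (money_raised * credit_share / (1 + discount_rate)
             - financial_cost - talent_cost)
print(f"Net value created: ${net_value:,.0f}")  # negative with these inputs
```

The point is just that a headline "dollars raised per dollar spent" figure can flip sign once credit-sharing, talent costs, and discounting are included.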

I still somewhat share Buck's overall sentiment: I think fundraising runs the risk of being a bit of a distraction. I personally regret co-running a fundraising organization and writing a thesis paper about donation behavior. I'd rather have spent my time learning about AI policy (or, if I were a neartermist, perhaps charter cities, growth diagnostics in development economics, NTD eradication programs, or factory farming in developing countries). I would love it if EAs generally spent less time worrying about money and more time recruiting talent, improving the trajectory of the community, and solving problems on the object level.

Overall, I want to continue funding good fundraising organizations.

Comment by Jonas Vollmer on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-07-01T19:52:11.271Z · EA · GW

(Also agree with Max. Long lead times in academia definitely qualify as a "convincing reason" in my view.)

Comment by Jonas Vollmer on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-06-29T13:55:07.570Z · EA · GW

I wouldn't rule it out, but typically we might say something like: We are interested in principle, but would like to wait for another 6-12 months to see how your project/career/organization develops in the meantime before committing the funding (unless there's a convincing reason for why you need the funding immediately).

Comment by Jonas Vollmer on Refining improving institutional decision-making as a cause area: results from a scoping survey · 2021-06-28T15:15:25.304Z · EA · GW

I'm excited that there's now more work happening on Effective Institutions / IIDM!

Some questions and constructive criticism that's hopefully useful:

> The aim was to gauge the diversity of perspectives in the EA community on what “counts” as IIDM. This helps us understand what the community thinks is important and has the most potential for impact. We hope that the results will shape the rest of our work as a working group and provide a helpful starting point for others as well.

It seems that you're starting out with the assumption that IIDM is a useful category/area, and that figuring out its scope helps determine what's most impactful. Was there a particular reason for taking the intermediate step via the scope/definition of IIDM? I personally would be curious to learn which kinds of activities people find most promising in this area, and why. In comparison, the scope question might just track a 'verbal dispute' rather than opinions on ground truths. (Edit: Looks like EdoArad pointed out something similar above.)

Relatedly, the survey gives a picture of what some people interested in IIDM believe about some high-level abstract categories. I wonder if the survey also gave you any insight into the types of activities that people think we should work on. E.g., what specific things do people have in mind when they talk about "Institutional design / governance", and why exactly do they think it's important? Does their reasoning hold up on closer inspection? I personally would feel very excited to see more object-level discussion of that kind. Perhaps a small number of people who have thought about IIDM carefully and systematically could share their object-level arguments on which approaches seem the most promising to them.

Comment by Jonas Vollmer on Shallow evaluations of longtermist organizations · 2021-06-26T16:17:44.807Z · EA · GW

I actually think it would be cool to have more posts that explicitly discuss which organizations people should go work at (and what might make it a good personal fit for them).

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-06-24T10:11:27.136Z · EA · GW

If you have to pay fairly (i.e., if you pay one employee $200k/y, you have to pay everyone else with a similar skill level a similar amount), the marginal cost of an employee who earns $200k/y can be >$1m/y. That may still be worth it, but less clearly so.
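A toy illustration of that arithmetic, with made-up numbers:

```python
# Made-up numbers: hiring one person at $200k forces raises for existing
# staff at a similar skill level (if you have to pay fairly).
new_hire_salary = 200_000
current_salary = 120_000   # what similarly skilled colleagues earn now
similar_colleagues = 12
raises = similar_colleagues * (new_hire_salary - current_salary)
marginal_cost = new_hire_salary + raises
print(f"Marginal cost of the hire: ${marginal_cost:,}/y")  # $1,160,000/y
```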

FWIW, I also don't really share the experience that labor supply is elastic above $100k/y, at least when taking into account whether staff have a good attitude, fit into the culture of the organization, etc. I'd be keen to hear more about that.

Comment by Jonas Vollmer on 2018-2019 Long Term Future Fund Grantees: How did they do? · 2021-06-21T08:07:51.802Z · EA · GW

I'd be pretty excited about financially incentivizing people to do more such evaluations. I'm not sure how to set the incentives optimally, though – I really want to avoid any incentives that make it more likely that people say what we want to hear (or that lead others to think that this is what happened, even when it didn't), but I also care a lot about such evaluations being high-quality and having sufficient depth, so I don't want to hand out money for just any kind of evaluation.

Perhaps one way is to pay $2,000 for any evaluation or review that receives >120 Karma on the EA Forum (periodically adjusted for Karma inflation), regardless of what it finds? Of course, this is somewhat gameable, but perhaps it's good enough.

Comment by Jonas Vollmer on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-06-19T09:28:41.356Z · EA · GW

Yeah, I plan to keep sending the form around in the coming months. Using the EA Forum question feature is a great idea, too. Thank you!

Comment by Jonas Vollmer on 2018-2019 Long Term Future Fund Grantees: How did they do? · 2021-06-18T16:38:24.632Z · EA · GW

Thanks a lot for doing this evaluation! I haven't read it in full yet, but I would like to encourage more people to review and critique EA Funds. As the EA Funds ED, I really appreciate it if others take the time to engage with our work and help us improve.

Comment by Jonas Vollmer on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-06-18T16:34:27.596Z · EA · GW

"It's hard to find great grants" seems different than "It's hard to find grants we really like".

I would expect that most grantmakers (including ones with different perspectives) would agree with this and would find it hard to spend money in useful ways (e.g., I suspect that Nuño might say something similar if he were running the LTFF, though I'm not sure). So while I think your framing is overall slightly more accurate, I feel like it's okay to phrase it the way I did.

> that they're skeptical of funding independent researchers

I don't think this characterization is accurate. We fund a lot of independent researchers, and often think that's a great use of money – in fact, I believe the LTFF's highest-rated grant ever was an independent research grant. I think the LTFF managers are saying something more like "doing independent research is really hard (psychologically and intellectually)", and we want to avoid funding people for independent research when they might do much better in an organization.

> Similarly, a poll of Fast Grants recipients found that almost 80% would make major changes to their research program if funders relaxed constraints on what their grants could be used for, suggesting that the preferences of grantmakers can diverge wildly from the preferences of researchers applying for grants.

The context of that (NIH grants) seems very different; I don't think this supports the thesis "EA Funds grantmakers have different preferences from EA Funds grantseekers".

I haven't read Nuño's post yet (just discovered it now through your comment).

Comment by Jonas Vollmer on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-06-18T08:00:17.079Z · EA · GW

In a typical case, it takes a week to complete due diligence, and up to 31 days for the money to be paid out (because we currently do the payouts in monthly batches). So from decision to "money in the bank account" it takes 1–6 weeks, typically 3.5 weeks. I think the country shouldn't matter too much for this. Because most grantees care more about having a definite decision than the money actually arriving in their bank account, this waiting time seemed fine to us (though we're also looking into ways to cut it short).

That said, if the grantseeker indicates that they need the money urgently, and they submit due diligence promptly, the payout can be expedited and should take just a few days.

Comment by Jonas Vollmer on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-06-17T22:03:58.823Z · EA · GW

Thanks, we really hope it will help people like the ones you mentioned!

Comment by Jonas Vollmer on What should CEEALAR be called? · 2021-06-15T17:48:42.426Z · EA · GW

I like Athena, or Athena Centre!

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-06-14T10:00:35.325Z · EA · GW

Here's another comment that goes into this a bit.

Comment by Jonas Vollmer on Forget replaceability? (for ~community projects) · 2021-06-14T09:42:10.565Z · EA · GW

In my mind, a significant benefit of impact certificates is that they can feel motivating:

The huge uncertainty about the long-run effects of our actions is a common struggle of community builders and longtermists. Earning to give or working on near-term issues (e.g., corporate farm animal welfare campaigns, or AMF donations) tends to come with a much stronger sense of moral urgency, tighter feedback loops, and a much clearer sense of accomplishment if you actually managed to do something important: 1 million hens are spared from battery cages in country X! You saved 10 lives from malaria in developing countries! 

In comparison, the best a longtermist can ever wish for is "these five people – who I think have good judgment – said something vaguely positive at an event, so I'm probably doing okay". Even though I buy the empirical and philosophical arguments for EA meta work or longtermism, I personally often find the dearth of costly signals of success somewhat demotivating, and I think others feel similarly.

I think impact certificates could fix this issue to some degree, as they do provide such a costly signal.

I also wonder if a lot of people aren't quitting ETG and doing direct work because they don't realize just how valuable others would find their work if they did. If they're currently earning $500k per year and donating half that amount, it feels like a downgrade to do direct work for $80k per year and some vague, fuzzy notion of longtermist impact. But if I could offer them a compensation package that includes impact certificates that plausibly have an EV of millions of dollars, the impact difference would become more palpable.

Edit: The "Implicit impact markets without infrastructure" equivalent of this would be to tell people what amount of donations you'd be willing to forego in return for their work. But this is not a costly signal, and thus less credible, and (in my view) also less motivating. Also, people across the EA community are operating with wildly different numbers (e.g., I've read that Rohin trades his time for money at a ~10x lower rate than I do, and I think he's probably adding a lot more value than I am, so collectively we have to be wrong by >10x), and in my view, it's hard to make them consistent without markets, or at least a lot more dialogue about these tradeoffs.

Comment by Jonas Vollmer on Forget replaceability? (for ~community projects) · 2021-06-14T09:31:24.716Z · EA · GW

> Certificates of impact are the best known proposal for this, although they aren't strictly necessary.

I don't understand the difference between certificates of impact and altruistic equity – they seem like pretty much the same thing to me. Is the main difference that certificates of impact are broader, whereas altruistic equity refers to certificates of impact of organizations (rather than individuals, etc.)? Or is the idea that certificates of impact would also come with a market to trade them, whereas altruistic equity wouldn't? Either way, I don't find it useful to make this distinction. But probably I'm just misunderstanding.

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-06-14T09:19:39.942Z · EA · GW

  • Having an application form that asks some more detailed questions (e.g., path to impact of the project, CV/resume, names of the people involved with the organization applying, confidential information)
  • Having a primary investigator for each grant (who gets additional input from 1-3 other fund managers), rather than having everyone review all grants
  • Using score voting with a threshold (rather than ordering grants by expected impact, then spending however much money we have)
  • Explicitly considering giving applicants more money than they applied for
  • Offering feedback to applicants under certain conditions (if we feel like we have particularly useful thoughts to share with them, or they received an unusually high score in our internal voting)
  • Asking for references in the first stage of the application form, but without requiring applicants to clear them ahead of time (so it's low-effort for them, but we already know who the references would be)
  • Having an automatically generated Google Doc for each application that contains all the information related to a particular grant (original application, evaluation, internal discussion, references, applicant emails, etc.)
  • Writing in-depth payout reports to build trust and help improve community epistemics; writing shorter, lower-effort payout reports once that's done and we want to save time

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-06-14T08:46:51.627Z · EA · GW

I include the opportunity cost of the broader community (e.g., the project hires people from the community who'd otherwise be doing more impactful work), but not the opportunity cost of providing the funding. (This is what I meant to express with "someone giving funding to them", though I think it wasn't quite clear.)

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-06-13T14:47:42.048Z · EA · GW

This isn't what you asked, but out of all the applications that we receive (excluding desk rejections), 5-20% seem ex ante net-negative to me, in the sense that I expect someone giving funding to them to make the world worse. In general, worries about accidental harm do not play a major role in my decisions not to fund projects, and I don't think we're very risk-averse. Instead, a lot of rejections happen because I don't believe the project will have a major positive impact.

Comment by Jonas Vollmer on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-06T20:34:08.577Z · EA · GW

A further point is donor coordination / moral trade / fair-share giving. Treating it as a tax (as Larks suggests) could often amount to defecting in an iterated prisoner's dilemma between donors who care about different causes. E.g., if the EAIF funded only one org, which raised $0.90 for MIRI, $0.90 for AMF, and $0.90 for GFI for every dollar spent, this approach would lead to it not getting funded, even though co-funding with donors who care about other cause areas would be a substantially better approach.
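A quick sketch of the arithmetic (the equal three-way cost split is my assumption for illustration):

```python
# Toy numbers from above: one org raises $0.90 each for MIRI, AMF, and GFI
# per $1.00 spent.
raised_per_cause = 0.90
n_causes = 3

# A donor who treats the other causes' money as a "tax" sees only their own
# cause's share per dollar they give:
solo_return = raised_per_cause   # $0.90 back per $1.00 -> looks like a loss

# Three donors (one per cause) splitting the org's costs equally each pay
# $0.33 per dollar of org spending and still get $0.90 for their own cause:
coop_cost = 1.0 / n_causes
print(f"Solo accounting: pay $1.00, get ${solo_return:.2f} back")
print(f"Cooperative:     pay ${coop_cost:.2f}, get ${raised_per_cause:.2f} back")
```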

You might respond that there's no easy way to verify whether others are cooperating. I might respond that you can verify how much money the fund gets in total and can ask EA Funds about the funding sources. (Also, I think that acausal cooperation works in practice, though perhaps the number of donors who think about it in this way is too small for it to work here.)

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-06-06T20:17:53.317Z · EA · GW

Here's a toy model:

  • A production function roughly along the lines of utility = funding ^ 0.2 * talent ^ 0.6 (this has diminishing returns to funding*talent, but the returns diminish slowly)
  • A default assumption that longtermism will eventually end up with $30-$300B in funding, let's assume $100B

Increasing the funding from $100B to $200B would then increase utility by 15%.
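A quick check of the 15% figure (talent is held fixed, so it cancels out of the ratio):

```python
# Toy production function from above: utility = funding**0.2 * talent**0.6
funding_before, funding_after = 100e9, 200e9  # $100B -> $200B
utility_ratio = (funding_after / funding_before) ** 0.2  # talent cancels
print(f"Utility increase: {utility_ratio - 1:.0%}")  # ~15%
```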

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-06-06T18:09:45.492Z · EA · GW

Thanks, this is useful!

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-06-06T18:09:14.485Z · EA · GW

I don't think anyone has made any mistakes so far, but they would (in my view) be making a mistake if they didn't allocate more funding this year.

Edit:

> you've said elsewhere already indicate you think smaller donors are indeed often making a mistake by not allocating more funding to EAIF and LTFF

Hmm, why do you think this? I don't remember having said that.

Comment by Jonas Vollmer on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-06T18:02:10.269Z · EA · GW

> the extent to which fund managers should be trying to instantiate donor's wishes vs fund managers allocating the money by their own lights of what's best (i.e. as if it were just their money). I think this is probably a matter of degree, but I lean towards the former

This is a longer discussion, but I lean towards the latter, both because I think this will often lead to better decisions, and because many donors I've talked to actually want the fund managers to spend the money that way (the EA Funds pitch is "defer to experts" and donors want to go all in on that, with only minimal scope constraints).

> To explain how this could lead us to different conclusions, if I believed I had been entrusted with money to give to A but not B, then I should give to A, even if I personally thought B was better.
>
> I suspect you would agree with this in principle: you wouldn't want an EA fund manager to recommend a grant clearly/wildly outside the scope of their fund even if they sincerely thought it was great, e.g. the animal welfare fund recommended something that only benefitted humans even if they thought it was more cost-effective than something animal-focused.

Yeah, I agree that all grants should be broadly in scope – thanks for clarifying.

> I haven't thought lots about the topic, but all these concerns strike me as a reason to move towards a set of funds that are mutually exclusive and collectively exhaustive - this gives donors greater choice and minimises worries about permissible fund allocation.

Fund scope definitions are always a bit fuzzy, many grants don't fit into a particular bucket very neatly, and there are lots of edge cases. So while I'm sympathetic to the idea in principle, I think it would be really hard to do in practice. See Max's comment.

Comment by Jonas Vollmer on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-06T11:27:50.728Z · EA · GW

I think we will probably do two types of post-hoc evaluations:

  1. Evaluations driven by our key uncertainties, aiming to improve our own decision-making in the ways that seem most relevant to us, without publishing the results (as they would be quite explicit about which grantees were successful in our view)
  2. Publicly communicating our track record to donors, especially aiming to find and communicate the biggest successes to date

#1 is somewhat high on my priority list (may happen later this year), whereas #2 is further down (probably won't happen this year, or if it does, it would be a very quick version). The key bottleneck for both of these is hiring more people who can help our team carry out these evaluations.

Comment by Jonas Vollmer on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-06T11:22:43.768Z · EA · GW

> high quality and convincing in whatever conclusions it has

This.

Comment by Jonas Vollmer on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-06T11:18:34.622Z · EA · GW

Yeah, the latter is what I meant to say, thanks for clarifying.

Comment by Jonas Vollmer on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-06T11:13:24.227Z · EA · GW

FWIW, it's not just admin hassle but also mental attention for the fund chairs that's IMO much better spent on improving their decisions. I think there are large returns from fund managers focusing fully on whether a grant is a good use of money or on how to make the grantees even more successful. I therefore think the costs of having to take into account (likely heterogeneous) donor preferences when evaluating specific grants are quite high, and so as long as a majority of assessed grants seems to be somewhat "in scope", it's overall better if fund managers can keep their heads free from scope concerns and other 'meta' issues.

I believe that we can do the most good by attracting donors who endorse the above. I'm aware this means that donors with different preferences may want to give elsewhere.

(Made some edits to the above comment to make it less disagreeable.)

Comment by Jonas Vollmer on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-06T11:03:26.449Z · EA · GW

Yeah, I agree with Dicentra. Basically I'm fine if donors don't donate to the EA Funds for these reasons; I think it's not worth bothering (time cost is small, but benefit even smaller). 

There's also a whole host of other issues; Max Daniel is planning to post a comment reply to Larks' above comment that mentions those as well. Basically it's not really possible to clearly define the scope in a mutually exclusive way.

Comment by Jonas Vollmer on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-06T11:01:03.925Z · EA · GW

> Buck, Max, and yourself are enthusiastic longtermists (…) it would seem to follow you could (/should?) put the vast majority of the EAIF towards longtermist projects

In my view, being an enthusiastic longtermist is compatible with finding neartermist worldviews plausible and allocating some funding to them. See, e.g., Ajeya Cotra on the 80,000 Hours podcast. I personally feel excited to fund high-quality projects that develop or promote EA principles, whether they're longtermist or not. (And Michelle suggested this as well.) For the EAIF, I would evaluate a project like HLI based on whether it seems like it overall furthers the EA project (i.e., makes EA thinking more sophisticated, leads to more people making important decisions according to EA principles, etc.).

> Let me try again with a more specific case. Suppose you are choosing between projects A and B - perhaps they have each asked for $100k but you only have $100k left. Project A is only eligible for funding from EAIF - the other EA funds consider it outside their respective purviews. Project B is eligible for funding from one of the other EA funds, but so happens to have applied to EAIF. Suppose, further, you think B is more cost-effective at doing good.

FWIW, I think this example is pretty unrealistic, as I don't think funding constraints will become relevant in this way. I also want to note that funding A violates some principles of donor coordination. In practice, I would probably recommend a split between A and B (recommending my 'fair share' to B, and the rest to A); I would probably coordinate this explicitly with the other funds. I would probably also try to refer both A and B to other funders to ensure both get fully funded.

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-06-06T09:51:07.008Z · EA · GW

When I said that the EAIF and LTFF have room for more funding, I didn't mean to say "EA research is funding-constrained" but "I think some of the abundant EA research funding should be allocated here." 

Saying "this particular pot has room for more funding" can be fully consistent with the overall ecosystem being saturated with funding.

> Do you think increasing available funding wouldn't help with any EA stuff

I think it definitely helps a lot with neartermist interventions. I also think it still makes a substantial* difference in longtermism, including research – but the difference you can make through direct work is plausibly vastly greater (>10x greater).

* Substantial in the sense "if you calculate the expected impact, it'll be huge", not "substantial relative to the EA community's total impact."

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-06-06T09:26:41.463Z · EA · GW

Making up some random numbers:

  • The donors to the fund – 8%
  • The grantmakers – 10%
  • The rest of the EAIF infrastructure (e.g., Jonas running everything, the CEA ops team handling logistics) – 7%
  • The grantee – 75%

This is for a typical grant where someone applies to the fund with a reasonably promising project on their own and the EAIF gives them some quick advice and feedback. For a case of strong active grantmaking, I might say something more like 8% / 30% / 12% / 50%.

This is based on the reasoning that we're quite constrained by promising applications and have a lot of funding available.

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-06-06T09:19:54.252Z · EA · GW

> I notice that the listed grants seems substantially below $1000/hour (…)
>
> Is this because you aren't getting those senior people applying? Or are there other constraints?

The main reason is that people are willing to work for a substantially lower amount than they could make when earning to give. E.g., someone who might be able to make $5 million per year in quant trading or tech entrepreneurship might decide to ask for a salary of $80k/y when working at an EA organization. It would seem really weird for that person to ask for a $5 million/year salary, especially given that they'd most likely want to donate most of it anyway.

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-06-04T16:01:11.568Z · EA · GW

(Just wanted to say that I agree with Michelle.)

Comment by Jonas Vollmer on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T15:52:53.978Z · EA · GW

A big part of the reason was simply that CLTR and Jakob Lohmar happened to apply to the EAIF, not the LTFF. Referring grants takes time (not a lot, but I don't think doing such referrals is a particularly good use of time if the grants are in scope for both funds). This is partly explained in the introduction of the grant report.

Comment by Jonas Vollmer on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T15:48:54.642Z · EA · GW

> Though I still think it would probably make sense for Fund A to refer an application to Fund B if the project seems more centrally in-scope for Fund B, and let Fund B evaluate it first.

In theory, I agree. In practice, this shuffling around of grants costs some time (both in terms of fund manager work time, and in terms of calendar time grantseekers spend waiting for a decision), and I prefer spending that time making a larger number of good grants rather than on minor allocation improvements.

Comment by Jonas Vollmer on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T15:43:03.143Z · EA · GW

> without an explanation of why they are being funded from the Infrastructure Fund

In the introduction, we wrote the following. Perhaps you missed it? (Or perhaps you were interested in a per-grant explanation, or the explanation seemed insufficient to you?)

> Some of the grants are oriented primarily towards causes that are typically prioritized from a ‘non-longtermist’ perspective; others primarily toward causes that are typically prioritized for longtermist reasons. The EAIF makes grants towards longtermist projects if a) the grantseeker decided to apply to the EAIF (rather than the Long-Term Future Fund), b) the intervention is at a meta level or aims to build infrastructure in some sense, or c) the work spans multiple causes (whether the case for them is longtermist or not). We generally strive to maintain an overall balance between different worldviews according to the degree they seem plausible to the committee.

Comment by Jonas Vollmer on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-04T15:39:34.777Z · EA · GW

My guess is 1-3 experts.

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-06-04T15:34:50.997Z · EA · GW

My take on this (others at the EAIF may disagree and may convince me otherwise):

I think EA Funds should be spending less time on detailed reports, as they're not read by that many people. A main benefit is that people improve their thinking by reading them (seeing very concrete practical decisions and how they were reached seems helpful for developing one's judgment), but there are many such reports at this point, so writing further ones doesn't add that much – readers can simply go back to past reports and read those instead. I think EA Funds should produce such detailed reports every 1-2 years (especially when new fund managers come on board, so interested donors can get a sense of their thinking), and otherwise focus more on active grantmaking.

In addition, I think it would make sense for us to publish reports on whichever topic seems most important to us to communicate about – perhaps an intervention report, perhaps an important but underappreciated consideration, or a cause area. I think this should probably happen on an ad-hoc basis.

Comment by Jonas Vollmer on EA Infrastructure Fund: Ask us anything! · 2021-06-04T15:20:36.529Z · EA · GW

Some further things pushing me towards lowering my bar:

  • It seems to me that it has proven pretty hard to convert money into EA movement growth and infrastructure improvements. This means that when we do encounter such an opportunity, we should most likely take it, even if it seems expensive or unlikely to succeed.
  • EA has a really large amount of money available (literally billions). Some EAs doing direct work could literally earn >$1,000 per hour if they pursued earning to give, but it's generally agreed that direct work seems more impactful for them. Our common intuitions for spending money don't hold anymore – e.g., a discussion about how to spend $100,000 should probably receive roughly as much time and attention as a discussion about how to spend 2.5 weeks (100 hours) of senior staff time. This means that I don't want to think very long about whether to make a grant. Instead, I want to spend more time thinking about how to help ensure that the project will actually be successful.
  • In cases where a grant might be too weird for a broad range of donors, we can always refer them to a private funder. So I try to think about whether something should be funded or not, and ignore the donor perception issue. At a later point, I can still ask myself 'should this be funded by the EAIF or a large aligned donor?'

Some further things increasing my bar:

  • If we routinely fund mediocre work, there's little real incentive for grantseekers to strive to produce truly outstanding work.