The Long-Term Future Fund has room for more funding, right now

post by abergal · 2021-03-29T01:46:21.779Z · EA · GW · 23 comments

comment by Jonas Vollmer · 2021-04-29T17:58:41.518Z · EA(p) · GW(p)

The EA Infrastructure Fund also has $1.2–$4 million in room for more funding

  • The EA Infrastructure Fund will grant out around $1.2 million during this funding round. Extrapolating from that number and expecting some further growth, I'd guess it'll want to spend around $3–$6 million this year.
  • The fund has $1.8 million available currently, leaving a $1.2–$4.2 million funding gap. I expect that donors will by default give around $1.4 million by November, likely leaving a funding shortfall of $0–$2.8 million (with a median guess of $1 million).
  • Similar to the Long-Term Future Fund, the EA Infrastructure Fund has greatly increased its grantmaking capacity: It received 79 applications as part of the current round (up 2x compared to last round), desk-rejected 21 of them, and decided to fund 25 (up 3x) out of the remaining 58, with a total grant volume of $1.2 million (up 3x), while (in my view) maintaining and perhaps increasing the quality of the grants.
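The gap and shortfall figures in the bullets above follow from simple arithmetic on the quoted round estimates; a quick sketch (figures in thousands of USD, taken directly from the comment, not exact accounting):

```python
# Rough check of the EA Infrastructure Fund figures quoted above.
# All figures in thousands of USD, using the comment's round estimates.
desired_spend_2021 = (3000, 6000)   # guessed grantmaking this year ($3-6M)
available_now = 1800                # funds currently on hand
expected_donations = 1400           # default donations expected by November

# Funding gap before further donations: desired spend minus funds on hand.
gap = tuple(d - available_now for d in desired_spend_2021)
print(gap)  # (1200, 4200), i.e. the quoted $1.2-$4.2M gap

# Shortfall after expected donations, floored at zero at the low end.
shortfall = tuple(max(0, g - expected_donations) for g in gap)
print(shortfall)  # (0, 2800), i.e. the quoted $0-$2.8M shortfall
```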
comment by jackmalde · 2021-03-29T14:46:57.093Z · EA(p) · GW(p)

We’ve recently changed parts of the fund’s infrastructure and composition, and it’s possible that these changes have caused us to unintentionally lower our standards for funding. My personal sense is that this isn’t the case.

Can you say more about why the changes might have led to lower standards for funding? It sounds like you think there are at least some plausible reasons why this might be the case.

Can you also say more about why you actually don't think the standards have fallen despite these possible reasons?

Replies from: abergal
comment by abergal · 2021-03-29T15:33:06.126Z · EA(p) · GW(p)

This round, we switched from a system where we had all the grant discussion in a single spreadsheet to one where we discuss each grant in a separate Google doc, linked from a single spreadsheet. One fund manager has commented that they feel less on top of this grant round than before as a result. (We're going to rethink this system again for next grant round.) We also changed the fund composition a bunch: Helen and Matt left, I became chair, and three new guest managers joined. A priori, this could cause a shift in standards, though I have no particular reason to think it would shift them downward.

I personally don't think the standards have fallen because I've been keeping close track of all the grants and feel like I have a good model of the old fund team (and in some cases, have asked them directly for advice). I think the old team would have made similar decisions to the ones we're making on this set of applications. It's possible there would have been a few differences, but not enough to explain a big change in spending.

Replies from: jackmalde
comment by jackmalde · 2021-03-29T16:07:45.286Z · EA(p) · GW(p)

Thanks! I'm actually not surprised that the quality of grant applications might be increasing e.g. due to people learning more about what makes for a good grant.

I have a follow-on question. Do you think that the increase in the size of the grant requests is justified? Is this because people are being more ambitious in what they want to do?

Replies from: abergal, Jonas Vollmer
comment by abergal · 2021-03-29T16:29:36.648Z · EA(p) · GW(p)

(To be clear, I think it's mostly just that we have more applications, and less that the mean application is significantly better than before.)

In several cases, increased grant requests reflect larger projects or requests for funding over longer time periods. We've also definitely had a marked increase in the average individual salary request per year; setting aside whether this is justified, this runs into a bunch of thorny issues around secondary effects that we've been discussing this round. I think we're likely to prioritize having a more standardized policy for individual salaries by next grant round.

comment by Jonas Vollmer · 2021-03-29T16:29:10.069Z · EA(p) · GW(p)

We also did a lot more promotion and encouraged more people to submit promising applications, and this plausibly also caused more people to apply – so it may be faster-than-organic growth.

comment by SomeonesBurnerAcount · 2021-04-10T22:17:43.926Z · EA(p) · GW(p)

Why does Open Philanthropy not start granting out $1M/yr to the Long-Term Future Fund? Hopefully these opportunities would beat Open Phil's last dollar!

Replies from: abergal
comment by abergal · 2021-04-10T23:22:57.633Z · EA(p) · GW(p)

I think we probably will seek out funding from larger institutional funders if our funding gap persists. We actually just applied for a ~$1M grant from the Survival and Flourishing Fund.

comment by xccf · 2021-03-30T03:07:54.023Z · EA(p) · GW(p)

We received 129 applications this round, desk rejected 33 of them, and are evaluating the remaining 96. Looking at our preliminary evaluations, I’d guess we’ll fund 20 - 30 of these.

I keep hearing that there is "plenty of money for AI safety" and things like that. But by the reversal test, don't these numbers imply you think that most LTFF applicants could do more good earning to give? (Assuming they can make at least the hourly wage they requested on their application in the private sector.)

If they request a grant with a wage of $X/hr, and you reject their proposal, that implies you think the value of the work they are doing is less than $X per hour (since you are unwilling to purchase it at that price), so they would be better off spending a marginal hour earning $X for the fund instead of putting a marginal hour into direct work.

Your post talks about "room for more funding" in reference to your previous standards for funding, but I think this might be a better way to think about it: if you would be sad to see an applicant give up on direct work and switch to earning to give after the LTFF rejects them, then you still have room for more funding. (This applies to the ~80% of applicants who are getting rejected here: what message should they take away from a rejection? I realize that "give up on direct work" is probably an incredibly demoralizing message for those 80%; I'm just not sure why the above argument is incorrect.)
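The ~80% figure follows from the numbers quoted at the top of the comment; a quick sketch of the arithmetic (using the quoted preliminary estimates):

```python
# Sanity check of the ~80% rejection rate from the quoted numbers.
applications = 129
desk_rejected = 33
funded_low, funded_high = 20, 30    # preliminary guess from the post

# Applications that survived desk rejection and are being evaluated.
assert applications - desk_rejected == 96

# Overall rejection rate, depending on how many of the 129 get funded.
rate_if_30_funded = (applications - funded_high) / applications
rate_if_20_funded = (applications - funded_low) / applications
print(round(rate_if_30_funded, 2), round(rate_if_20_funded, 2))  # 0.77 0.84
```

So roughly 77–84% of applicants end up rejected, consistent with the "~80%" above.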

Replies from: abergal
comment by abergal · 2021-03-30T05:29:13.686Z · EA(p) · GW(p)

I think many applicants we reject could apply with different proposals that I'd be more excited to fund; rejecting an application doesn't mean I think there's no good direct work the applicant could do.

I would guess some people would be better off earning to give, but I don't know that I could say which ones just from looking at one application they've sent us.

Replies from: xccf
comment by xccf · 2021-03-30T06:27:27.777Z · EA(p) · GW(p)

I see. That suggests you think the LTFF would have much more room for funding with some not-super-large changes to your processes, such as encouraging applicants to submit multiple project proposals, or doing calls with applicants to talk about other projects they could do, or modifications to their original proposal which would make it more appealing to you.

Replies from: abergal
comment by abergal · 2021-03-30T07:30:03.829Z · EA(p) · GW(p)

Sadly, I think those changes would in fact be fairly large and would take up a lot of fund manager time. I think small modifications to original proposals wouldn't be enough, and it would require suggesting new projects or assessing applicants holistically and seeing if a career change made sense.

In my mind, this relates to ways in which mentorship is a bottleneck in longtermist work right now: there are probably lots of people who could be doing useful direct work, but they would require resources and direction that we as a community don't have the capacity for. I don't think the LTFF is well-placed to provide this kind of mentorship, though we do offer to give people one-off feedback on their applications.

Replies from: xccf
comment by xccf · 2021-03-30T09:13:55.595Z · EA(p) · GW(p)

there are probably lots of people who could be doing useful direct work, but they would require resources and direction that we as a community don't have the capacity for.

I imagine this could be one of the highest-leverage places to apply additional resources and direction though. People who are applying for funding for independent projects are people who desire to operate autonomously and execute on their own vision. So I imagine they'd require much less direction than marginal employees at an EA organization, for instance.

I also think there's an epistemic humility angle here. It's very likely that the longtermist movement as it currently exists is missing important perspectives. To some degree, as a funder, you are diffing your perspective against that of applicants and rejecting applicants whose projects make sense according to their perspective but not yours. It seems easy for this to result in the longtermist movement developing more homogeneous perspectives over time, as people Goodhart on whatever metrics are related to getting funding and career advancement. I'm actually not convinced that direction is a good thing! I personally would be more inclined to fund anyone who meets a particular talent bar. That also makes your job easier, because you can focus on just the person and worry less about their project.

we do offer to give people one-off feedback on their applications.

Huh. I understood your rejection email says the fund was unable to provide further feedback due to high volume of applications.

Replies from: abergal
comment by abergal · 2021-04-06T22:41:57.448Z · EA(p) · GW(p)

I imagine this could be one of the highest-leverage places to apply additional resources and direction though. People who are applying for funding for independent projects are people who desire to operate autonomously and execute on their own vision. So I imagine they'd require much less direction than marginal employees at an EA organization, for instance.

I don't have a strong take on whether people rejected from the LTFF are the best use of mentorship resources. I think many employees at EA organizations are also selected for being self-directed. I know of cases where mentorship made a big difference to both existing employees and independent LTFF applicants.

I personally would be more inclined to fund anyone who meets a particular talent bar. That also makes your job easier because you can focus on just the person/people and worry less about their project.

We do weigh individual talent heavily when deciding what to fund, i.e., sometimes we will fund someone to do work we're less excited about because we're interested in supporting the applicant's career. I'm not in favor of funding exclusively based on talent, because I think a lot of the impact of our grants is in how they affect the surrounding field, and low-quality work dilutes the quality of those fields and attracts other low-quality work.

Huh. I understood your rejection email says the fund was unable to provide further feedback due to high volume of applications.

Whoops, yeah: we were previously overwhelmed with requests for feedback, so we now only offer feedback on a subset of applications where fund managers are actively interested in providing it.

Replies from: xccf
comment by xccf · 2021-04-08T11:28:45.672Z · EA(p) · GW(p)

I'm not in favor of funding exclusively based on talent, because I think a lot of the impact of our grants is in how they affect the surrounding field, and low-quality work dilutes the quality of those fields and attracts other low-quality work.

Let's compare the situation of the Long-Term Future Fund evaluating the quality of a grant proposal to that of the academic community evaluating the quality of a published paper. Compared to the LTFF evaluating a grant proposal, the academic community evaluating a published paper has big advantages:

  • The work is being evaluated retrospectively instead of prospectively (i.e. it actually exists; it is not just a hypothetical project).
  • The academic community has more time and more eyeballs.
  • The academic community has people who are very senior in their field, while your team is relatively junior; plus, "longtermism" is a huge area that's really hard to be an expert in all of.

Even so, the academic community doesn't seem very good at their task. "Sleeping beauty" papers, whose quality is only recognized long after publication, seem common. Breakthroughs are denounced by scientists, or simply underappreciated, at first (often 'correctly' due to being less fleshed out than existing theories). This paper contains a list of 34 examples of Nobel Prize-winning work being rejected by peer review. "Science advances one funeral at a time", they say.

Problems compound when the question of first-order quality is replaced by the question of what others will consider to be high quality. You're funding researchers to do work that you consider to be work that others will consider to be good, based on relatively superficial assessments due to time limitations, it sounds like.

Seems like a recipe for herd behavior. But breakthroughs come from mavericks. This funding strategy could have a negative effect by stifling innovation (filtering out contrarian thinking and contrarian researchers from the field).

Keep longtermism weird?

(I'm also a little skeptical of your "low-quality work dilutes the quality of those fields and attracts other low-quality work" fear: since a high citation count is often treated as an ipso facto measure of quality in academia, it would seem that if work attracts additional related work, it is probably not low quality. I think the most likely fate of low-quality work is to be forgotten. If people are too credulous of work that is actually low quality, it's unclear to me why the fund managers would be immune to this, and having more contrarians seems like the best solution to me. The general approach of "fund many perspectives and let them determine what constitutes quality through discussion" has the advantage of offloading work from the LTFF team.)

Replies from: abergal
comment by abergal · 2021-04-08T18:59:44.033Z · EA(p) · GW(p)

I'm also a little skeptical of your "low-quality work dilutes the quality of those fields and attracts other low-quality work" fear--since high citation count is often thought of as an ipso facto measure of quality in academia, it would seem that if work attracts additional related work, it is probably not low quality.

The difference here is that most academic fields are pretty well-established, whereas AI safety, longtermism, and the longtermist subparts of most academic fields are very new. The mechanism for attracting low-quality work I'm imagining is that smart people look at existing work and think "these people seem amateurish, and I'm not interested in engaging with them". Luke Muehlhauser's report on case studies in early field growth gives the case of cryonics, which "failed to grow [...] is not part of normal medical practice, it is regarded with great skepticism by the mainstream scientific community, and it has not been graced with much funding or scientific attention." I doubt most low-quality work we could fund would cripple the surrounding fields this way, but I do think it would have an effect on the kind of people who were interested in doing longtermist work.

I will also say that I think somewhat different perspectives do get funded through the LTFF, partially because we've intentionally selected fund managers with different views, and we weigh it strongly if one fund manager is really excited about something. We've made many grants that didn't cross the funding bar for one or more fund managers.

Replies from: xccf
comment by xccf · 2021-04-15T09:47:44.492Z · EA(p) · GW(p)

Sure. I guess I don't have a lot of faith in your team's ability to do this, since you/people you are funding are already saying things that seem amateurish to me. But I'm not sure that is a big deal.

comment by David_Moss · 2021-03-29T12:01:15.471Z · EA(p) · GW(p)

I wonder whether this alters the calculus for whether to give to donor lotteries (as opposed to EA Funds)? 

Four months ago, it seemed like donating to the donor lottery was being recommended as a kind of default (unless the donor had a particularly cool and unusual idea for where to donate). I speculated that it might be better for a lot of donors to just donate to the Funds, resulting in the money being allocated by the fund managers rather than whoever won the lottery[^1]. It seemed at the time that the response was fairly sanguine about the possibility that individual donors (e.g. lottery winners) might make better allocations than the fund managers.

If we thought that the EA Funds are quite well-funded relative to the potential projects available to fund, we might be more inclined to think this is true (since the lottery winner can, in theory, seek out more promising opportunities). If, however, EA Funds are relatively under-funded, and can't fund many promising opportunities available to them, then it might seem better to just encourage people to donate to the funds by default (unless, perhaps, they are particularly confident that they or others could beat the fund managers with more time to reflect).

One might argue that it would be better for people to donate to the lottery even when the Funds are very underfunded, because whoever the winner is can make a judicious decision (potentially advised by the Fund managers) about whether they should just donate to the Funds or not. As I noted at the time, I'm a little worried that lottery winners might be biased against just donating their winnings back to the Fund. And, more generally, one might wonder why the lottery winner would be expected to make a better decision about that question than the fund managers themselves. There may also be other practical advantages to donating directly to the funds when they are under-funded: grants might be made more quickly via direct donations than via the lottery winner conducting their own investigations and possibly choosing to donate to the funds, and funding decisions might be made more reliably if the funds have a predictable amount of money coming in, rather than a large pool of money that might go to them, might be donated to projects they would recommend, or might be donated elsewhere. Of course, I don't know whether any of those practical details hold.

 

[^1]: Though to be clear, I also speculated that it could be better for people to make individual donation decisions, rather than donating to the lottery, if this led to more investigation, experimentation, and knowledge generation from a larger number of more engaged individuals.

Replies from: Jonas Vollmer
comment by Jonas Vollmer · 2021-03-29T16:28:07.000Z · EA(p) · GW(p)

I wonder whether this alters the calculus for whether to give to donor lotteries (as opposed to EA Funds)? 

I personally think it doesn't change it much. As you previously mentioned, there's a risk of donors being biased against giving to funds and instead wanting to do their "own thing"; I hope that donor lottery winners will be able to overcome that.

It seemed at the time that the response was fairly sanguine about the possibility that individual donors (e.g. lottery winners) might make better allocations than the fund managers.

It's worth noting that I only believe this under the assumption that the individual donors know about some specific opportunities that the fund managers are unaware of, or perhaps have significant worldview differences with the fund managers.

Replies from: jackmalde
comment by jackmalde · 2021-03-29T17:27:01.657Z · EA(p) · GW(p)

It's worth noting that I only believe this under the assumption that the individual donors know about some specific opportunities that the fund managers are unaware of, or perhaps have significant worldview differences with the fund managers.

The Long-Term Future Fund can only give to people who apply for funding, though (right?), whereas someone who wins a donor lottery can give literally anywhere. This seems like another reason why a donor lottery winner might give better?

Replies from: Jonas Vollmer
comment by Jonas Vollmer · 2021-03-30T13:50:30.705Z · EA(p) · GW(p)

I don't really think there's a difference between the two: 

  • The LTFF can encourage anyone to apply. Several of the grants of the current round are a result of proactive outreach to specific individuals. (This still involves filling in the application form, but that's just because it's slightly lower-effort than exchanging the same information via email.) 
  • A donor lottery winner can only grant to individuals who submit due diligence materials to CEA, which also involves filling in some forms.
Replies from: jackmalde
comment by jackmalde · 2021-03-30T14:00:09.195Z · EA(p) · GW(p)

OK, thanks, that makes sense.