Long-Term Future Fund: December 2021 grant recommendations 2022-08-18T20:50:36.759Z
[AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships 2022-07-29T18:38:53.322Z
Long-Term Future Fund: July 2021 grant recommendations 2022-01-18T08:49:22.433Z
Public reports are now optional for EA Funds grantees 2021-11-13T00:37:23.281Z
Open Philanthropy is seeking proposals for outreach projects 2021-07-16T20:34:52.023Z
Long-Term Future Fund: May 2021 grant recommendations 2021-05-27T06:44:15.953Z
The Long-Term Future Fund has room for more funding, right now 2021-03-29T01:46:21.779Z
abergal's Shortform 2020-12-27T04:26:57.739Z
Movement building and investing to give later 2020-07-15T22:46:46.813Z
How to change minds 2020-06-11T10:15:34.721Z


Comment by abergal on [AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships · 2022-08-30T22:48:34.793Z · EA · GW

We’re currently planning on keeping it open at least for the next month, and we’ll provide at least a month of warning if we close it down.

Comment by abergal on [AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships · 2022-08-30T22:47:55.008Z · EA · GW

Sorry about the delay on this answer. I do think it’s important that organizers genuinely care about the objectives of their group (which I think can be different from being altruistic, especially for non-effective altruism groups). I think you’re right that that’s worth listing in the must-have criteria, and I’ve added it now.

I assume the main reason this criterion wouldn’t be met is if someone wanted to do organizing work just for the money, which I think we should be trying hard to select against.

Comment by abergal on Some concerns about policy work funding and the Long Term Future Fund · 2022-08-15T07:59:44.708Z · EA · GW

“even if the upside of them working out could really be quite valuable” is the part I disagree with most in your comment. (Again, speaking just for myself), I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside; my overall calculus was something like “this doesn’t seem like it has big upside (because the policy asks don’t seem all that good), and also has some downside (because of person/project-specific factors)”. It would be nice if we did quantified risk analysis for all of our grant applications, but ultimately we have limited time, and I think it makes sense to focus attention on cases where it does seem like the upside is unusually high.

On potential risk factors:

  • I agree that (1) and (2) above are very unlikely for most grants (and are correlated with being unusually successful at getting things implemented).
  • I feel less in agreement about (3)-- my sense is that people who want to interact with policymakers will often succeed at taking up the attention of someone in the space, and the people interacting with them form impressions of them based on those interactions, whether or not they make progress on pushing that policy through.
  • I think (4) indeed isn’t specific to the policy space, but is a real downside that I’ve observed affecting other EA projects– I don’t expect the main factor to be that there’s only one channel for interacting with policymakers, but rather that other long-term-focused actors will perceive the space to be taken, or will feel some sense of obligation to work with existing projects / awkwardness around not doing so.

Caveating a lot of the above: as I said before, my views on specific grants have been informed heavily by others I’ve consulted, rather than coming purely from some inside view.

Comment by abergal on Some concerns about policy work funding and the Long Term Future Fund · 2022-08-12T23:25:04.298Z · EA · GW

FWIW, I think this kind of questioning is fairly Habryka-specific and not really standard for our policy applicants; I think in many cases I wouldn’t expect that it would lead to productive discussions (and in fact could be counterproductive, in that it might put off potential allies who we might want to work with later).

I make the calls on who is the primary evaluator for which grants; as Habryka said, I think he is probably most skeptical of policy work among people on the LTFF, and hasn’t been the primary evaluator for almost any (maybe none?) of the policy-related grants we’ve had. In your case, I thought it was unusually likely that a discussion between you and Habryka would be productive and helpful for my evaluation of the grant (though I was interested primarily in different but related questions, not “whether policy work as a whole is competitive with other grants”), because I generally expect people more embedded in the community (and in the case above, you (Sam) in particular, which I really appreciate) to be more open to pretty frank discussions about the effectiveness of particular plans, lines of work, etc.

Comment by abergal on Some concerns about policy work funding and the Long Term Future Fund · 2022-08-12T20:23:18.975Z · EA · GW

Rebecca Kagan is currently working as a fund manager for us (sorry for the not-up-to-date webpage).

Comment by abergal on Some concerns about policy work funding and the Long Term Future Fund · 2022-08-12T20:15:21.040Z · EA · GW

Hey, Sam – first, thanks for taking the time to write this post, and running it by us. I’m a big fan of public criticism, and I think people are often extra-wary of criticizing funders publicly, relative to other actors in the space.

Some clarifications on what we have and haven’t funded:

  • I want to make a distinction between “grants that work on policy research” and “grants that interact with policymakers”.
    • I think our bar for projects that involve the latter is much higher than for projects that are just doing the former.
  • I think we regularly fund “grants that work on policy research” – e.g., we’ve funded the Centre for Governance of AI, and regularly fund individuals who are doing PhDs or otherwise working on AI governance research.
  • I think we’ve funded a very small number of grants that involve interactions with policymakers – I can think of three such grants in the last year, two of which were for new projects. (In one case, the grantee has requested that we not report the grant publicly).

Responding to the rest of the post:

  • I think it’s roughly correct that I have a pretty high bar for funding projects that interact with policymakers, and I endorse this policy. (I don’t want to speak for the Long-Term Future Fund as a whole, because it acts more like a collection of fund managers than a single entity, but I suspect many others on the fund also have a high bar, and that my opinion in particular has had a big influence on our past decisions.)
  • Some other things in your post that I think are roughly true:
    • Previous experience in policy has been an important factor in my evaluations of these grants, and all else equal I think I am much more likely to fund applicants who are more senior (though I think the “20 years experience” bar is too high).
    • There have been cases where we haven’t funded projects (more broadly than in policy) because an individual has given us information about or impressions of them that led us to think the project would be riskier or less impactful than we initially believed, and we haven’t shared the identity or information with the applicant to preserve the privacy of the individual.
    • We have a higher bar for funding organizations than other projects, because they are more likely to stick around even if we decide they’re not worth funding in the future.
  • When evaluating the more borderline grants in this space, I often ask and rely heavily on the advice of others working in the policy space, weighted by how much I trust their judgment. I think this is basically a reasonable algorithm to follow, given that (a) they have a lot of context that I don’t, and (b) I think the downside risks of poorly-executed policy projects have spillover effects to other policy projects, which means that others in policy are genuine stakeholders in these decisions.
    • That being said, I think there’s a surprising amount of disagreement in what projects others in policy think are good, so I think the particular choice of advisors here makes a big difference.
  • I do think projects interacting with policymakers have substantial room for downside, including:
    • Pushing policies that are harmful
    • Making key issues partisan
    • Creating an impression (among policymakers or the broader world) that people who care about the long-term future are off-putting, unrealistic, incompetent, or otherwise undesirable to work with
    • “Taking up the space” such that future actors who want to make long-term future-focused asks are encouraged or expected to work through or coordinate with the existing project
  • I suspect we also differ in our views of the upsides of some of this work– a lot of the projects we’ve rejected have wanted to do AI-focused policy work, and I tend to think that we don’t have very good concrete asks for policymakers in this space.

Comment by abergal on [AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships · 2022-08-07T01:51:47.973Z · EA · GW

Here are answers to some other common questions about the University Organizer Fellowship that I received in office hours:
If I apply and get rejected, is there a “freezing period” where I can’t apply again?

We don’t have an official freezing period, but I think we generally won’t spend time reevaluating someone within 3 months of when they last applied, unless they give some indication on the application that something significant has changed in that time.

If you’re considering applying, I really encourage you not to wait– I think for the vast majority of people considering applying, it won’t make a difference whether you apply now or a month from now.

Should I have prior experience doing group organizing or running EA projects before applying?

No – I care primarily about the criteria outlined here. Prior experience can be a plus, but it’s definitely not necessary, and it’s generally not the main factor in deciding whether or not to fund someone.

Comment by abergal on [AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships · 2022-08-07T01:50:09.966Z · EA · GW

I’m not sure that I agree with the premise of the question – I don’t think EA is trying all that hard to build a mainstream following (and I’m not sure that it should).

Comment by abergal on [AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships · 2022-08-07T01:49:49.546Z · EA · GW

Interpreting this as “who is responsible for evaluating whether the Century Fellowship is a good use of time and money”, the answer is: someone on our team will probably try and do a review of how the program is going after it’s been running for a while longer; we will probably share that evaluation with Holden, co-CEO of Open Phil, as well as possibly other advisors and relevant stakeholders. Holden approves longtermist Open Phil grants and broadly thinks about which grants are/aren’t the best uses of money.

Comment by abergal on [AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships · 2022-08-07T01:48:54.409Z · EA · GW

Each application has a primary evaluator who is on our team (current evaluators: me, Bastian Stern, Eli Rose, Kasey Shibayama, and Claire Zabel). We also generally consult / rely heavily on assessments from references or advisors, e.g. other staff at Open Phil or organizations who we work closely with, especially for applicants hoping to do work in domains we have less expertise in.

Comment by abergal on [AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships · 2022-08-07T01:47:15.059Z · EA · GW

When we were originally thinking about the fellowship, one of the cases for impact was making community building a more viable career (hence the emphasis in this post), but it’s definitely intended more broadly for people working on the long-term future. I’m pretty unsure how the fellowship will shake out in terms of community organizers vs researchers vs entrepreneurs long-term – we’ve funded a mix so far (including several people who I’m not sure how to categorize / are still unsure about what they want to do).

Comment by abergal on [AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships · 2022-08-07T01:46:27.056Z · EA · GW

(The cop-out answer is “I would like the truth-seeking organizers to be more ambitious, and the ambitious organizers to be more truth-seeking”.)

If I had to choose one, I think I’d go with truth-seeking. It doesn’t feel very close to me, especially among existing university group effective altruism-related organizers (maybe Claire disagrees), largely because I think there’s already been a big recent push towards ambition there, so I think people are generally already thinking pretty ambitiously.

I feel differently about, e.g., rationality local group organizers; I wish they would be more ambitious.

Comment by abergal on [AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships · 2022-08-07T01:43:58.901Z · EA · GW


  1. “Full-time-equivalent” is intended to mean “if you were working full-time, this is how much funding you would receive”. The fellowship is intended for people working significantly less than full-time, and most of our grants have been for 15 hours per week of organizer time or less. I definitely don’t expect undergraduates to be organizing for 40 hours per week.

    I think our page doesn’t make this clear enough early on, thanks for flagging it– I’ll make some changes to try and make this clearer.
  2. I think anyone who’s doing student organizing for more than 5 hours per semester should strongly consider applying. I’m sympathetic to people feeling weird about this, but want to emphasize that I think people should consider applying even if they would have volunteered to do the same activities, for two reasons:
    1. I think giving people funding generally causes them to do higher-quality work.
    2. I think receiving funding as an organizer makes it clearer to others that we value this work and that you don’t have to make huge sacrifices to do it, which makes it more likely that other people consider student organizing work.
  3. We’re up for funding any number of organizers per group– in the case you described, I would encourage all the organizers to apply. (We also let group leaders ask for funding for organizers working less than 10 hours per week in their own applications. If two of the organizers were working 10 hours per week or less, it might be faster for one organizer to just include them on their application.)


  1. (Let me know if I’m answering your question here, it’s possible I’ve misunderstood it.)

    I think it’s ultimately up to the person on what they want to do– I think the fellowship will generally allow more freedom than funding for a specific project, come with more benefits (see our program page), and would probably pay a higher rate in terms of personal compensation than many other funding opportunities would. It also has a much higher bar for funding than I would generally apply when funding specific projects.

    In the application form, we ask people if they would be interested in receiving a separate grant for their project or plans if they weren’t offered the Century Fellowship– we’ve funded many applicants who were below the bar for the fellowship itself that way. So if someone’s interested in both, I think it makes sense to just apply to the Century Fellowship, and we can also consider them for alternative funding.

For both programs, we don’t have an explicit referral system, but we do take into account what references have to say about the applicant (if the applicant provides references).

Comment by abergal on [AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships · 2022-08-04T11:22:28.700Z · EA · GW

Hi Minh– sorry for the confusion! That footer was actually from an older version of the page that referenced eligible locations for the Centre for Effective Altruism’s city and national community building grant program; I’ve now deleted it.

I encourage organizers from any university to apply, including those in Singapore.

Comment by abergal on Will EAIF and LTFF publish its reports of recent grants? · 2022-04-23T19:48:31.642Z · EA · GW

I think the LTFF will publish a payout report for grants through ~December in the next few weeks. As you suggest, we've been delayed because the number of grants we're making has increased substantially, so we're pretty limited on grantmaker capacity right now (and writing the reports takes a somewhat substantial amount of time).

I like IanDavidMoss's suggestion of having a simpler list rather than delaying (and maybe we could publish more detailed justifications later)-- I'll strongly consider doing that for the payout report after this one.

Comment by abergal on Long-Term Future Fund: July 2021 grant recommendations · 2022-01-19T06:10:48.252Z · EA · GW

Confusingly, the report called "May 2021" was for grants we made through March and early April of 2021, so this report includes most of April, May, June, and July.

I think we're going to standardize now so that reports refer to the months they cover, rather than the month they're released.

Comment by abergal on Public reports are now optional for EA Funds grantees · 2021-12-07T07:01:41.768Z · EA · GW

I like this idea; I'll think about it and discuss with others. I think I want grantees to be able to preserve as much privacy as they want (including not being listed in even really broad pseudo-anonymous classifications), but I'm guessing most would be happy to opt-in to something like this.

(We've done anonymous grant reports before but I think they were still more detailed than people would like.)

Comment by abergal on Public reports are now optional for EA Funds grantees · 2021-12-02T23:26:34.569Z · EA · GW

We got feedback from several people that they weren't applying to the funds because they didn't want to have a public report.  There are lots of reasons that I sympathize with for not wanting a public report, especially as an individual (e.g. you're worried about it affecting future job prospects, you're asking for money for mental health support and don't want that to be widely known, etc.). My vision (at least for the Long-Term Future Fund) is to become a good default funding source for individuals and new organizations, and I think that vision is compromised if some people don't want to apply for publicity reasons.

Broadly, I think the benefits to funding more people outweigh the costs to transparency.

Comment by abergal on Why AI alignment could be hard with modern deep learning · 2021-09-27T18:04:43.573Z · EA · GW

Another potential reason for optimism is that we'll be able to use observations from early on in the training runs of systems (before models are very smart) to affect the pool of Saints / Sycophants / Schemers we end up with. I.e., we are effectively "raising" the adults we hire, so it could be that we're able to detect if 8-year-olds are likely to become Sycophants / Schemers as adults and discontinue or modify their training accordingly.

Comment by abergal on Open Philanthropy is seeking proposals for outreach projects · 2021-08-09T22:01:31.695Z · EA · GW

Sorry this was unclear! From the post:

> There is no deadline to apply; rather, we will leave this form open indefinitely until we decide that this program isn’t worth running, or that we’ve funded enough work in this space. If that happens, we will update this post noting that we plan to close the form at least a month ahead of time.

I will bold this so it's more clear.

Comment by abergal on Open Philanthropy is seeking proposals for outreach projects · 2021-07-20T00:06:29.205Z · EA · GW

Changed, thanks for the suggestion!

Comment by abergal on Open Philanthropy is seeking proposals for outreach projects · 2021-07-19T23:58:53.502Z · EA · GW

There's no set maximum; we expect to be limited by the number of applications that seem sufficiently promising, not the cost.

Comment by abergal on Taboo "Outside View" · 2021-07-01T01:39:04.689Z · EA · GW

Yeah, FWIW I haven't found any recent claims about insect comparisons particularly rigorous.

Comment by abergal on Long-Term Future Fund: May 2021 grant recommendations · 2021-06-01T17:20:42.217Z · EA · GW

Nope, sorry. :) I live to disappoint.

Comment by abergal on HIPR: A new EA-aligned policy newsletter · 2021-05-13T05:54:09.335Z · EA · GW

FWIW I had a similar initial reaction to Sophia, though reading more carefully I totally agree that it's more reasonable to interpret your comment as a reaction to the newsletter rather than to the proposal. I'd maybe add an edit to your high-level comment just to make sure people don't get confused?

Comment by abergal on Ben Garfinkel's Shortform · 2021-05-03T22:52:53.581Z · EA · GW

Really appreciate the clarifications! I think I was interpreting "humanity loses control of the future" in a weirdly temporally narrow sense that makes it all about outcomes, i.e. where "humanity" refers to present-day humans, rather than humans at any given time period.  I totally agree that future humans may have less freedom to choose the outcome in a way that's not a consequence of alignment issues.

I also agree value drift hasn't historically driven long-run social change, though I kind of do think it will going forward, as humanity has more power to shape its environment at will.

Comment by abergal on Ben Garfinkel's Shortform · 2021-05-03T01:36:10.246Z · EA · GW

And Paul Christiano agrees with me. Truly, time makes fools of us all.

Comment by abergal on Ben Garfinkel's Shortform · 2021-05-03T01:21:33.363Z · EA · GW

Wow, I just learned that Robin Hanson has written about this, because obviously, and he agrees with you.

Comment by abergal on Ben Garfinkel's Shortform · 2021-05-02T23:20:55.706Z · EA · GW

Do you have the intuition that absent further technological development, human values would drift arbitrarily far? It's not clear to me that they would-- in that sense, I do feel like we're "losing control" in that even non-extinction AI is enabling a new set of possibilities that modern-day humans would endorse much less than the decisions of future humans otherwise. (It does also feel like we're missing the opportunity to "take control" and enable a new set of possibilities that we would endorse much more.)

Relatedly, it doesn't feel to me like the values of humans 150,000 years ago and humans now and even ems in Age of Em are all that different on some more absolute scale.

Comment by abergal on The Long-Term Future Fund has room for more funding, right now · 2021-04-10T23:22:57.633Z · EA · GW

I think we probably will seek out funding from larger institutional funders if our funding gap persists. We actually just applied for a ~$1M grant from the Survival and Flourishing Fund.

Comment by abergal on Ben Garfinkel's Shortform · 2021-04-09T23:20:41.741Z · EA · GW

I agree with the thrust of the conclusion, though I worry that focusing on task decomposition this way elides the fact that the descriptions of the O*NET tasks already assume your unit of labor is fairly general. Reading many of these, I actually feel pretty unsure about the level of generality or common-sense reasoning required for an AI to straightforwardly replace that part of a human's job. Presumably there's some restructure that would still squeeze a lot of economic value out of narrow AIs that could basically do these things, but that restructure isn't captured looking at the list of present-day O*NET tasks.

Comment by abergal on The Long-Term Future Fund has room for more funding, right now · 2021-04-08T18:59:44.033Z · EA · GW

> I'm also a little skeptical of your "low-quality work dilutes the quality of those fields and attracts other low-quality work" fear--since high citation count is often thought of as an ipso facto measure of quality in academia, it would seem that if work attracts additional related work, it is probably not low quality.

The difference here is that most academic fields are pretty well-established, whereas AI safety, longtermism, and longtermist subparts of most academic fields are very new. The mechanism for attracting low-quality work I'm imagining is that smart people look at existing work and think "these people seem amateurish, and I'm not interested in engaging with them". Luke Muehlhauser's report on case studies in early field growth gives the case of cryonics, which "failed to grow [...] is not part of normal medical practice, it is regarded with great skepticism by the mainstream scientific community, and it has not been graced with much funding or scientific attention." I doubt most low-quality work we could fund would cripple the surrounding fields this way, but I do think it would have an effect on the kind of people who were interested in doing longtermist work.

I will also say that I think somewhat different perspectives do get funded through the LTFF, partially because we've intentionally selected fund managers with different views, and we weigh it strongly if one fund manager is really excited about something. We've made many grants that didn't cross the funding bar for one or more fund managers.

Comment by abergal on EA Debate Championship & Lecture Series · 2021-04-08T05:53:04.041Z · EA · GW

I was confused about the situation with debate, so I talked to Evan Hubinger about his experiences. That conversation was completely wild; I'm guessing people in this thread might be interested in hearing it. I still don't know exactly what to make of what happened there, but I think there are some genuine and non-obvious insights relevant to public discourse and optimization processes (maybe less to the specifics of debate outreach). The whole thing's also pretty funny.

I recorded the conversation; don't want to share publicly but feel free to DM me for access.

Comment by abergal on The Long-Term Future Fund has room for more funding, right now · 2021-04-06T22:41:57.448Z · EA · GW

> I imagine this could be one of the highest-leverage places to apply additional resources and direction though. People who are applying for funding for independent projects are people who desire to operate autonomously and execute on their own vision. So I imagine they'd require much less direction than marginal employees at an EA organization, for instance.

I don't have a strong take on whether people rejected from the LTFF are the best use of mentorship resources. I think many employees at EA organizations are also selected for being self-directed. I know of cases where mentorship made a big difference to both existing employees and independent LTFF applicants.

> I personally would be more inclined to fund anyone who meets a particular talent bar. That also makes your job easier because you can focus on just the person/people and worry less about their project.

We do weigh individual talent heavily when deciding what to fund, i.e., sometimes we will fund someone to do work we're less excited about because we're interested in supporting the applicant's career. I'm not in favor of funding exclusively based on talent, because I think a lot of the impact of our grants is in how they affect the surrounding field, and low-quality work dilutes the quality of those fields and attracts other low-quality work.

> Huh. I understood your rejection email says the fund was unable to provide further feedback due to high volume of applications.

Whoops, yeah-- we were previously overwhelmed with requests for feedback, so we now only offer feedback on a subset of applications where fund managers are actively interested in providing it.

Comment by abergal on The Long-Term Future Fund has room for more funding, right now · 2021-03-30T07:30:03.829Z · EA · GW

Sadly, I think those changes would in fact be fairly large and would take up a lot of fund manager time. I think small modifications to original proposals wouldn't be enough, and it would require suggesting new projects or assessing applicants holistically and seeing if a career change made sense.

In my mind, this relates to ways in which mentorship is a bottleneck in longtermist work right now--  there are probably lots of people who could be doing useful direct work, but they would require resources and direction that we as a community don't have the capacity for. I don't think the LTFF is well-placed to provide this kind of mentorship, though we do offer to give people one-off feedback on their applications.

Comment by abergal on The Long-Term Future Fund has room for more funding, right now · 2021-03-30T05:29:13.686Z · EA · GW

I think many applicants who we reject could apply with different proposals that I'd be more excited to fund-- rejecting an application doesn't mean I think there's no good direct work the applicant could do.

I would guess some people would be better off earning to give, but I don't know that I could say which ones just from looking at one application they've sent us.

Comment by abergal on The Long-Term Future Fund has room for more funding, right now · 2021-03-29T16:29:36.648Z · EA · GW

(To be clear, I think it's mostly just that we have more applications, and less that the mean application is significantly better than before.)

In several cases increased grant requests reflect larger projects or requests for funding for longer time periods. We've also definitely had a marked increase in the average individual salary request per year-- setting aside whether this is justified, this runs into a bunch of thorny issues around secondary effects that we've been discussing this round. I think we're likely to prioritize having a more standardized policy for individual salaries by next grant round.

Comment by abergal on The Long-Term Future Fund has room for more funding, right now · 2021-03-29T15:33:06.126Z · EA · GW

This round, we switched from a system where we had all the grant discussion in a single spreadsheet to one where we discuss each grant in a separate Google doc, linked from a single spreadsheet. One fund manager has commented that they feel less on-top of this grant round than before as a result. (We're going to rethink this system again for next grant round.) We also changed the fund composition a bunch-- Helen and Matt left, I became chair, and three new guest managers joined. A priori, this could cause a shift in standards, though I have no particular reason to think it would shift them downward.

I personally don't think the standards have fallen because I've been keeping close track of all the grants and feel like I have a good model of the old fund team (and in some cases, have asked them directly for advice). I think the old team would have made similar decisions to the ones we're making on this set of applications. It's possible there would have been a few differences, but not enough to explain a big change in spending.

Comment by abergal on EA Funds has appointed new fund managers · 2021-03-23T19:56:32.211Z · EA · GW

Fund managers can now opt to be compensated as contractors, at a rate of $40 / hour.

Comment by abergal on EA Funds is more flexible than you might think · 2021-03-10T02:51:54.272Z · EA · GW

There's no strict 'minimum number'-- sometimes the grant is clearly above or below our bar and we don't consult anyone, and sometimes we're really uncertain or in disagreement, and we end up consulting lots of people (I think some grants have had 5+).

I will also say that each fund is somewhat intentionally composed of fund managers with somewhat varying viewpoints who trust different sets of experts, and the voting structure is such that if any individual fund manager is really excited about an application, it generally gets funded. As a result, I think in practice, there's more diversity in what gets funded than you might expect from a single grantmaking body, and there's less risk that you won't get funded just because a particular person dislikes you.

Comment by abergal on Long-Term Future Fund: Ask Us Anything! · 2021-03-01T08:31:10.771Z · EA · GW

I can't respond for Adam, but just wanted to say that I personally agree with you, which is one of the reasons I'm currently excited about funding independent work.

Comment by abergal on Long-Term Future Fund: Ask Us Anything! · 2021-02-20T01:17:09.332Z · EA · GW

Hey! I definitely don't expect people starting AI safety research to have a track record doing AI safety work-- in fact, I think some of our most valuable grants are paying for smart people to transition into AI safety from other fields. I don't know the details of your situation, but in general I don't think "former physics student starting AI safety work" fits into the category of "project would be good if executed exceptionally well". In that case, I think most of the value would come from supporting the transition of someone who could potentially be really good, rather than from the object-level work itself.

In the case of other technical Ph.D.s, I generally check whether their work is impressive in the context of their field, whether their academic credentials are impressive, and what their references have to say. I also place a lot of weight on whether their proposal makes sense and shows an understanding of the topic, and on my own impressions of the person after talking to them.

I do want to emphasize that "paying a smart person to test their fit for AI safety" is a really good use of money from my perspective-- if the person turns out to be good, I've in some sense paid for a whole lifetime of high-quality AI safety research. So I think my bar is not as high as it is when evaluating grant proposals for object-level work from people I already know.

Comment by abergal on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-04T00:11:53.063Z · EA · GW

Sherry et al. have a more exhaustive working paper about algorithmic progress in a wide variety of fields.

Comment by abergal on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-29T08:56:57.918Z · EA · GW

Also a big fan of your report. :)

Historically, what has caused the subjectively biggest-feeling updates to your timelines views? (e.g. arguments, things you learned while writing the report, events in the world).

Comment by abergal on What does it mean to become an expert in AI Hardware? · 2021-01-10T11:51:51.529Z · EA · GW

Do you think price performance for certain applications could be one of the better ones to use on its own? Or is it perhaps better practice to keep an index of some number of trends?


I think price performance, measured in something like "operations / $", is by far the most important metric, caveating that by itself it doesn't differentiate between one-time costs of design and purchase and ongoing costs to run hardware, and it doesn't account for limitations in memory, networking, and software for parallelization that constrain performance as the number of chips is scaled up.
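
As a rough illustration of why the sale price alone can mislead, here's a minimal sketch of "operations / $" computed on a total-cost-of-ownership basis. All numbers and the function name are hypothetical, chosen just to make the arithmetic concrete:

```python
# Hypothetical sketch: "operations / $" under total cost of ownership,
# amortizing the one-time purchase price over the hardware lifetime and
# adding ongoing power and data-center overhead costs. All numbers made up.

def ops_per_dollar_tco(peak_ops_per_sec, sale_price, lifetime_years,
                       watts, dollars_per_kwh, overhead_frac):
    """Sustained operations per dollar over the hardware's lifetime."""
    seconds = lifetime_years * 365 * 24 * 3600
    total_ops = peak_ops_per_sec * seconds
    energy_kwh = (watts / 1000) * lifetime_years * 365 * 24
    power_cost = energy_kwh * dollars_per_kwh
    # overhead_frac lumps cooling, networking, facility costs, etc.
    total_cost = sale_price + power_cost * (1 + overhead_frac)
    return total_ops / total_cost

# Sale-price-only estimate vs. TCO estimate for the same (made-up) chip:
naive = 1e14 * 3 * 365 * 24 * 3600 / 10_000
tco = ops_per_dollar_tco(1e14, 10_000, 3, 400, 0.10, 0.5)
print(tco < naive)  # → True: TCO always lowers ops/$ vs. sale price alone
```

The gap between the two figures grows with power draw, electricity price, and overheads, which is part of why producer-side price performance (e.g. for Google) can look quite different from consumer-side numbers.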

Are there any specific speculative hardware areas you think may be neglected? I mentioned photonics and quantum computing in the post because these are the only ones I've spent more than an hour thinking about. I vaguely plan to read up on the other technologies in the IRDS, but if there are some that might be worth looking more into than others (or some that didn't make their list at all!) that would help focus this plan significantly.

There has been a lot of recent work in optical chips / photonics, so I've been following them closely-- I have my own notes on publicly available info here. I think quantum computing is likely further from viability but good to pay attention to. I also think it's worth understanding the likelihood and implications of 3D CMOS chips, just because at least IRDS predictions suggest that might be the way forward in the next decade (I think these are much less speculative than the two above). I haven't looked into this as much as I'd like, though-- I actually also have on my todo list to read through the IRDS list and identify the things that are most likely and have the highest upside. Maybe we can compare notes. :)

I would naively think this would be another point in favor of working at start-ups compared to more established companies. My impression is that start-ups have to spend more time thinking carefully about what their market is in order to attract funding (and the small size means technical people are more involved with this thinking). Does that seem reasonable?

I suspect in most roles in either a start-up or a large company you'll be quite focused on the tech and not very focused on the market or the cost model-- I don't think this strongly favors working for a start-up.

Comment by abergal on What does it mean to become an expert in AI Hardware? · 2021-01-09T10:04:48.899Z · EA · GW

Hi-- great post! I was pointed to this because I've been working on a variety of hardware-related projects at FHI and AI Impacts, including generating better hardware forecasts. (I wrote a lot here, but would also be excited to talk to you directly and have even more to say-- I contacted you through Facebook.)

 At first glance, it seemed to me that the existence of Ajeya’s report demonstrates that the EA community already has enough people with sufficient knowledge and access to expert opinion that, on the margin, adding one expert in hardware to the EA community wouldn’t improve these forecasts much.

I think this isn't true.

For one, I think while the forecasts in that report are the best publicly available thing we have, there's significant room to do better, e.g.

  • The forecasts rely on data for the sale price of hardware along with their reported FLOPS performance. But the sale price is only one component of the costs to run hardware and doesn't include power, data center costs, storage, networking, etc. Arguably, we also care about the price performance for large hardware producers (e.g. Google) more than hardware consumers, and the sale price won't necessarily be reflective of that since it includes a significant mark-up over the cost of manufacture.
  • The forecasts don't consider existing forecasts from e.g. the IRDS that you mention, which are actually very pessimistic about the scaling of energy costs for CMOS chips over the next 15 years. (Of course, this doesn't preclude better scaling through switching to other technology).
  • If I recall correctly, the report partially justifies its estimate by guessing that even if chip design improvements bottom out, improvements in manufacturing cost and chip lifetime might still create a relatively steady rate of progress. I think this requires some assumptions about the cost model that may not be true, though I haven't done enough investigation yet to be sure.

(This isn't to disparage the report -- I think it's an awesome report and the current estimate is a great starting point, and Ajeya very explicitly flags these as the forecasts most likely to be knowably mistaken.)

As a side note, I think EAs tend to misuse and misunderstand Moore's Law in general. As you say, Moore's Law says that the number of transistors on a chip doubles every two years. This has remained true historically, but is only dubiously correlated with 'price performance Moore's Law'-- a doubling of price performance every two years. As I note above, I think the data publicly collected on price performance is poor, partially because the 'price' and 'performance' of hardware is trickier to define than it looks. But e.g. this recent paper estimates that the price performance of at least universal processors has slowed considerably in recent years (the paper estimates 8% improvement in performance-per-dollar annually from 2008 - 2013, see section 4.3.2 'Current state of performance-per-dollar of universal processors'). Even if price performance Moore's Law ever held true, it's really not clear that it holds now.
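
To make the gap concrete: an annual improvement rate r implies a doubling time of ln 2 / ln(1 + r). A quick sketch comparing the classic two-year doubling against the paper's ~8% annual figure (the function name is mine, and the 8% figure is only for universal processors over 2008-2013):

```python
import math

def doubling_time_years(annual_rate):
    """Years for a quantity growing at `annual_rate` per year to double."""
    return math.log(2) / math.log(1 + annual_rate)

# A doubling every ~2 years corresponds to sqrt(2) - 1 ~= 41% annual growth.
moore_rate = 2 ** 0.5 - 1
print(round(doubling_time_years(moore_rate), 1))  # → 2.0

# The ~8% annual performance-per-dollar improvement estimated for
# universal processors (2008-2013) implies a ~9-year doubling time.
print(round(doubling_time_years(0.08), 1))  # → 9.0
```

So even if the headline transistor-count law keeps holding, an 8%/year price-performance trend is a very different world from a 41%/year one.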

For two, I think it's not the case that we have access to enough people with sufficient knowledge and expert opinion. I've been really interested in talking to hardware experts, and I think I would selfishly benefit substantially from experts who had thought more about "the big picture" or more speculative hardware possibilities (most people I talk to have domain expertise in something very specific and near-term). I've also found it difficult to get a lot of people's time, and would selfishly benefit from having access to more hardware experts that were explicitly longtermist-aligned and excited to give me more of it. :) Basically, I'd be very in favor of having more people in industry available as advisors, as you suggest.

You also touch on this some, but I will say that I do think now is actually a particularly impactful time to influence policy at the company level (in addition to in government, which seems to be implementing a slew of new semiconductor legislation and seems increasingly interested in regulating hardware companies). A recent report estimates that ASICs are poised to take over 50% of the hardware market in the coming years, and most ASIC companies now are small start-ups-- I think there's a case that influencing the policy and ethics of these small companies is much more tractable than for their larger counterparts, and it would be worth someone thinking carefully about how to do that. Working as an early employee seems like a good potential way.

Lastly, I will say that I think there might be valuable work to be done at the intersection of hardware and economics-- for an example, see again this paper. I think things like understanding models of hardware costs, the overall hardware market, cloud computing, etc. are not well-encapsulated by the kind of understanding technical experts tend to have, and are valuable for the longtermist community to have access to. (This is also some of what I've been working on locally.)

Comment by abergal on abergal's Shortform · 2020-12-27T04:26:58.649Z · EA · GW

I get the sense that a lot of longtermists are libertarian or libertarian-leaning (I could be wrong) and I don't really understand why. Overall the question of where to apply market structures vs. centralized planning seems pretty unclear to me.

  • More centralization seems better from an x-risk perspective in that it avoids really perverse profit-seeking incentives that companies might have to do unsafe things. (Though several centralized nation-states likely have similar problems on the global level, and maybe companies will have more cosmopolitan values in some important way.)
  • Optimizing for increasing GDP doesn't seem very trajectory changing from a longtermist perspective.  On the other hand, making moral progress seems like it could be trajectory changing, and centralized economies with strong safety nets seem like they could in theory be better at promoting human compassion than societies that are very individualistic.
  • Sometimes people say that historical evidence points to centralized economies leading to dystopias. This seems true, but I'm not sure that there's enough evidence to suggest that it's implausible to have some kind of more centralized economy that's better than a market structure.

Comment by abergal on 2020 AI Alignment Literature Review and Charity Comparison · 2020-12-22T06:34:48.504Z · EA · GW

AI Impacts now has a 2020 review page so it's easier to tell what we've done this year-- this should be more complete / representative than the posts listed above. (I appreciate how annoying the continuously updating wiki model is.)

Comment by abergal on My upcoming CEEALAR stay · 2020-12-18T05:08:13.553Z · EA · GW

I was speaking about AI safety! To clarify, I meant that investments in formal verification work could in part be used to develop those less primitive proof assistants.

Comment by abergal on Ask Rethink Priorities Anything (AMA) · 2020-12-16T18:57:34.316Z · EA · GW

I found this response insightful and feel like it echoes mistakes I've made as well; really appreciate you writing it.