Open Philanthropy is seeking proposals for outreach projects 2021-07-16T20:34:52.023Z
Long-Term Future Fund: May 2021 grant recommendations 2021-05-27T06:44:15.953Z
The Long-Term Future Fund has room for more funding, right now 2021-03-29T01:46:21.779Z
abergal's Shortform 2020-12-27T04:26:57.739Z
Movement building and investing to give later 2020-07-15T22:46:46.813Z
How to change minds 2020-06-11T10:15:34.721Z


Comment by abergal on Why AI alignment could be hard with modern deep learning · 2021-09-27T18:04:43.573Z · EA · GW

Another potential reason for optimism is that we'll be able to use observations from early on in the training runs of systems (before models are very smart) to affect the pool of Saints / Sycophants / Schemers we end up with. I.e., we are effectively "raising" the adults we hire, so it could be that we're able to detect if 8-year-olds are likely to become Sycophants / Schemers as adults and discontinue or modify their training accordingly.

Comment by abergal on Open Philanthropy is seeking proposals for outreach projects · 2021-08-09T22:01:31.695Z · EA · GW

Sorry this was unclear! From the post:

There is no deadline to apply; rather, we will leave this form open indefinitely until we decide that this program isn’t worth running, or that we’ve funded enough work in this space. If that happens, we will update this post noting that we plan to close the form at least a month ahead of time.

I will bold this so it's clearer.

Comment by abergal on Open Philanthropy is seeking proposals for outreach projects · 2021-07-20T00:06:29.205Z · EA · GW

Changed, thanks for the suggestion!

Comment by abergal on Open Philanthropy is seeking proposals for outreach projects · 2021-07-19T23:58:53.502Z · EA · GW

There's no set maximum; we expect to be limited by the number of applications that seem sufficiently promising, not the cost.

Comment by abergal on Taboo "Outside View" · 2021-07-01T01:39:04.689Z · EA · GW

Yeah, FWIW I haven't found any recent claims about insect comparisons particularly rigorous.

Comment by abergal on Long-Term Future Fund: May 2021 grant recommendations · 2021-06-01T17:20:42.217Z · EA · GW

Nope, sorry. :) I live to disappoint.

Comment by abergal on HIPR: A new EA-aligned policy newsletter · 2021-05-13T05:54:09.335Z · EA · GW

FWIW I had a similar initial reaction to Sophia, though reading more carefully I totally agree that it's more reasonable to interpret your comment as a reaction to the newsletter rather than to the proposal. I'd maybe add an edit to your high-level comment just to make sure people don't get confused?

Comment by abergal on Ben Garfinkel's Shortform · 2021-05-03T22:52:53.581Z · EA · GW

Really appreciate the clarifications! I think I was interpreting "humanity loses control of the future" in a weirdly temporally narrow sense that makes it all about outcomes, i.e. where "humanity" refers to present-day humans, rather than humans at any given time period.  I totally agree that future humans may have less freedom to choose the outcome in a way that's not a consequence of alignment issues.

I also agree value drift hasn't historically driven long-run social change, though I kind of do think it will going forward, as humanity has more power to shape its environment at will.

Comment by abergal on Ben Garfinkel's Shortform · 2021-05-03T01:36:10.246Z · EA · GW

And Paul Christiano agrees with me. Truly, time makes fools of us all.

Comment by abergal on Ben Garfinkel's Shortform · 2021-05-03T01:21:33.363Z · EA · GW

Wow, I just learned that Robin Hanson has written about this, because obviously, and he agrees with you.

Comment by abergal on Ben Garfinkel's Shortform · 2021-05-02T23:20:55.706Z · EA · GW

Do you have the intuition that absent further technological development, human values would drift arbitrarily far? It's not clear to me that they would-- in that sense, I do feel like we're "losing control" in that even non-extinction AI enables a new set of possibilities that modern-day humans would endorse much less than the choices future humans would otherwise make. (It does also feel like we're missing the opportunity to "take control" and enable a new set of possibilities that we would endorse much more.)

Relatedly, it doesn't feel to me like the values of humans 150,000 years ago and humans now and even ems in Age of Em are all that different on some more absolute scale.

Comment by abergal on The Long-Term Future Fund has room for more funding, right now · 2021-04-10T23:22:57.633Z · EA · GW

I think we probably will seek out funding from larger institutional funders if our funding gap persists. We actually just applied for a ~$1M grant from the Survival and Flourishing Fund.

Comment by abergal on Ben Garfinkel's Shortform · 2021-04-09T23:20:41.741Z · EA · GW

I agree with the thrust of the conclusion, though I worry that focusing on task decomposition this way elides the fact that the descriptions of the O*NET tasks already assume your unit of labor is fairly general. Reading many of these, I actually feel pretty unsure about the level of generality or common-sense reasoning required for an AI to straightforwardly replace that part of a human's job. Presumably there's some restructure that would still squeeze a lot of economic value out of narrow AIs that could basically do these things, but that restructure isn't captured looking at the list of present-day O*NET tasks.

Comment by abergal on The Long-Term Future Fund has room for more funding, right now · 2021-04-08T18:59:44.033Z · EA · GW

I'm also a little skeptical of your "low-quality work dilutes the quality of those fields and attracts other low-quality work" fear--since high citation count is often thought of as an ipso facto measure of quality in academia, it would seem that if work attracts additional related work, it is probably not low quality.

The difference here is that most academic fields are pretty well-established, whereas AI safety, longtermism, and longtermist subparts of most academic fields are very new. The mechanism for attracting low-quality work I'm imagining is that smart people look at existing work and think "these people seem amateurish, and I'm not interested in engaging with them". Luke Muehlhauser's report on case studies in early field growth gives the case of cryonics, which "failed to grow [...]  is not part of normal medical practice, it is regarded with great skepticism by the mainstream scientific community, and it has not been graced with much funding or scientific attention." I doubt most low-quality work we could fund would cripple the surrounding fields this way, but I do think it would have an effect on the kind of people who were interested in doing longtermist work.

I will also say that I think somewhat different perspectives do get funded through the LTFF, partially because we've intentionally selected fund managers with different views, and we weigh it strongly if one fund manager is really excited about something. We've made many grants that didn't cross the funding bar for one or more fund managers.

Comment by abergal on EA Debate Championship & Lecture Series · 2021-04-08T05:53:04.041Z · EA · GW

I was confused about the situation with debate, so I talked to Evan Hubinger about his experiences. That conversation was completely wild; I'm guessing people in this thread might be interested in hearing it. I still don't know exactly what to make of what happened there, but I think there are some genuine and non-obvious insights relevant to public discourse and optimization processes (maybe less to the specifics of debate outreach). The whole thing's also pretty funny.

I recorded the conversation; don't want to share publicly but feel free to DM me for access.

Comment by abergal on The Long-Term Future Fund has room for more funding, right now · 2021-04-06T22:41:57.448Z · EA · GW

I imagine this could be one of the highest-leverage places to apply additional resources and direction though. People who are applying for funding for independent projects are people who desire to operate autonomously and execute on their own vision. So I imagine they'd require much less direction than marginal employees at an EA organization, for instance.

I don't have a strong take on whether people rejected from the LTFF are the best use of mentorship resources. I think many employees at EA organizations are also selected for being self-directed. I know of cases where mentorship made a big difference to both existing employees and independent LTFF applicants.

I personally would be more inclined to fund anyone who meets a particular talent bar. That also makes your job easier because you can focus on just the person/people and worry less about their project.

We do weigh individual talent heavily when deciding what to fund, i.e., sometimes we will fund someone to do work we're less excited about because we're interested in supporting the applicant's career. I'm not in favor of funding exclusively based on talent, because I think a lot of the impact of our grants is in how they affect the surrounding field, and low-quality work dilutes the quality of those fields and attracts other low-quality work.

Huh. I understood your rejection email says the fund was unable to provide further feedback due to high volume of applications.

Whoops, yeah-- we were previously overwhelmed with requests for feedback, so we now only offer feedback on a subset of applications where fund managers are actively interested in providing it.

Comment by abergal on The Long-Term Future Fund has room for more funding, right now · 2021-03-30T07:30:03.829Z · EA · GW

Sadly, I think those changes would in fact be fairly large and would take up a lot of fund manager time. I think small modifications to original proposals wouldn't be enough, and it would require suggesting new projects or assessing applicants holistically and seeing if a career change made sense.

In my mind, this relates to ways in which mentorship is a bottleneck in longtermist work right now--  there are probably lots of people who could be doing useful direct work, but they would require resources and direction that we as a community don't have the capacity for. I don't think the LTFF is well-placed to provide this kind of mentorship, though we do offer to give people one-off feedback on their applications.

Comment by abergal on The Long-Term Future Fund has room for more funding, right now · 2021-03-30T05:29:13.686Z · EA · GW

I think many applicants who we reject could apply with different proposals that I'd be more excited to fund-- rejecting an application doesn't mean I think there's no good direct work the applicant could do.

I would guess some people would be better off earning to give, but I don't know that I could say which ones just from looking at one application they've sent us.

Comment by abergal on The Long-Term Future Fund has room for more funding, right now · 2021-03-29T16:29:36.648Z · EA · GW

(To be clear, I think it's mostly just that we have more applications, and less that the mean application is significantly better than before.)

In several cases increased grant requests reflect larger projects or requests for funding for longer time periods. We've also definitely had a marked increase in the average individual salary request per year-- setting aside whether this is justified, this runs into a bunch of thorny issues around secondary effects that we've been discussing this round. I think we're likely to prioritize having a more standardized policy for individual salaries by next grant round.

Comment by abergal on The Long-Term Future Fund has room for more funding, right now · 2021-03-29T15:33:06.126Z · EA · GW

This round, we switched from a system where we had all the grant discussion in a single spreadsheet to one where we discuss each grant in a separate Google doc, linked from a single spreadsheet. One fund manager has commented that they feel less on-top of this grant round than before as a result. (We're going to rethink this system again for next grant round.) We also changed the fund composition a bunch-- Helen and Matt left, I became chair, and three new guest managers joined. A priori, this could cause a shift in standards, though I have no particular reason to think it would shift them downward.

I personally don't think the standards have fallen because I've been keeping close track of all the grants and feel like I have a good model of the old fund team (and in some cases, have asked them directly for advice). I think the old team would have made similar decisions to the ones we're making on this set of applications. It's possible there would have been a few differences, but not enough to explain a big change in spending.

Comment by abergal on EA Funds has appointed new fund managers · 2021-03-23T19:56:32.211Z · EA · GW

Fund managers can now opt to be compensated as contractors, at a rate of $40 / hour.

Comment by abergal on EA Funds is more flexible than you might think · 2021-03-10T02:51:54.272Z · EA · GW

There's no strict 'minimum number'-- sometimes the grant is clearly above or below our bar and we don't consult anyone, and sometimes we're really uncertain or in disagreement, and we end up consulting lots of people (I think some grants have had 5+).

I will also say that each fund is somewhat intentionally composed of fund managers with somewhat varying viewpoints who trust different sets of experts, and the voting structure is such that if any individual fund manager is really excited about an application, it generally gets funded. As a result, I think in practice, there's more diversity in what gets funded than you might expect from a single grantmaking body, and there's less risk that you won't get funded just because a particular person dislikes you.

Comment by abergal on Long-Term Future Fund: Ask Us Anything! · 2021-03-01T08:31:10.771Z · EA · GW

I can't respond for Adam, but just wanted to say that I personally agree with you, which is one of the reasons I'm currently excited about funding independent work.

Comment by abergal on Long-Term Future Fund: Ask Us Anything! · 2021-02-20T01:17:09.332Z · EA · GW

Hey! I definitely don't expect people starting AI safety research to have a track record doing AI safety work-- in fact, I think some of our most valuable grants are paying for smart people to transition into AI safety from other fields. I don't know the details of your situation, but in general I don't think "former physics student starting AI safety work" fits into the category of "project would be good if executed exceptionally well". In that case, I think most of the value would come from supporting the transition of someone who could potentially be really good, rather than from the object-level work itself.

In the case of other technical Ph.D.s, I generally check whether their work is impressive in the context of their field, whether their academic credentials are impressive, and what their references have to say. I also place a lot of weight on whether their proposal makes sense and shows an understanding of the topic, and on my own impressions of the person after talking to them.

I do want to emphasize that "paying a smart person to test their fit for AI safety" is a really good use of money from my perspective-- if the person turns out to be good, I've in some sense paid for a whole lifetime of high-quality AI safety research. So I think my bar is not as high as it is when evaluating grant proposals for object-level work from people I already know.

Comment by abergal on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-04T00:11:53.063Z · EA · GW

Sherry et al. have a more exhaustive working paper about algorithmic progress in a wide variety of fields.

Comment by abergal on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-29T08:56:57.918Z · EA · GW

Also a big fan of your report. :)

Historically, what has caused the subjectively biggest-feeling updates to your timelines views? (E.g., arguments, things you learned while writing the report, events in the world.)

Comment by abergal on What does it mean to become an expert in AI Hardware? · 2021-01-10T11:51:51.529Z · EA · GW

Do you think price performance for certain applications could be one of the better ones to use on its own? Or is it perhaps better practice to keep an index of some number of trends?


I think price performance, measured in something like "operations / $", is by far the most important metric, with the caveat that by itself it doesn't differentiate between the one-time costs of design and purchase and the ongoing costs of running hardware, and it doesn't account for limitations in memory, networking, and software for parallelization that constrain performance as the number of chips is scaled up.
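To make that caveat concrete, here's a toy sketch (every number here is hypothetical, not from any real chip) of how ongoing power costs alone shift a lifetime operations-per-dollar figure relative to one based only on the sale price:

```python
# Toy illustration (all numbers hypothetical): lifetime "operations per dollar"
# with and without ongoing power costs. Sale-price-only figures overstate
# price performance because they ignore energy, data center overhead, etc.

def lifetime_ops_per_dollar(peak_ops_per_sec, sale_price, watts,
                            dollars_per_kwh, lifetime_years,
                            include_power=True):
    hours = lifetime_years * 365 * 24
    total_ops = peak_ops_per_sec * hours * 3600
    cost = sale_price
    if include_power:
        # Ongoing electricity cost over the chip's lifetime.
        cost += (watts / 1000) * hours * dollars_per_kwh
    return total_ops / cost

# Hypothetical chip: 1e14 ops/s, $10,000 sale price, 300 W, $0.10/kWh, 4 years.
naive = lifetime_ops_per_dollar(1e14, 10_000, 300, 0.10, 4, include_power=False)
with_power = lifetime_ops_per_dollar(1e14, 10_000, 300, 0.10, 4)
print(f"power costs alone cut ops/$ by {1 - with_power / naive:.0%}")
```

Data center construction, networking, and utilization would widen the gap further; the point is only that sale-price data systematically flatters price performance.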

Are there any specific speculative hardware areas you think may be neglected? I mentioned photonics and quantum computing in the post because these are the only ones I've spent more than an hour thinking about. I vaguely plan to read up on the other technologies in the IRDS, but if there are some that might be worth looking more into than others (or some that didn't make their list at all!) that would help focus this plan significantly.

There has been a lot of recent work in optical chips / photonics, so I've been following them closely-- I have my own notes on publicly available info here. I think quantum computing is likely further from viability but good to pay attention to. I also think it's worth understanding the likelihood and implications of 3D CMOS chips, just because at least IRDS predictions suggest that might be the way forward in the next decade (I think these are much less speculative than the two above). I haven't looked into this as much as I'd like, though-- I actually also have on my todo list to read through the IRDS list and identify the things that are most likely and have the highest upside. Maybe we can compare notes. :)

I would naively think this would be another point in favor of working at start-ups compared to more established companies. My impression is that start-ups have to spend more time thinking carefully about what their market is in order to attract funding (and their small size means technical people are more involved with this thinking). Does that seem reasonable?

I suspect in most roles in either a start-up or a large company you'll be quite focused on the tech and not very focused on the market or the cost model-- I don't think this strongly favors working for a start-up.

Comment by abergal on What does it mean to become an expert in AI Hardware? · 2021-01-09T10:04:48.899Z · EA · GW

Hi-- great post! I was pointed to this because I've been working on a variety of hardware-related projects at FHI and AI Impacts, including generating better hardware forecasts. (I wrote a lot here, but would also be excited to talk to you directly and have even more to say-- I contacted you through Facebook.)

 At first glance, it seemed to me that the existence of Ajeya’s report demonstrates that the EA community already has enough people with sufficient knowledge and access to expert opinion that, on the margin, adding one expert in hardware to the EA community wouldn’t improve these forecasts much.

I think this isn't true.

For one, I think while the forecasts in that report are the best publicly available thing we have, there's significant room to do better, e.g.

  • The forecasts rely on data for the sale price of hardware along with their reported FLOPS performance. But the sale price is only one component of the cost to run hardware and doesn't include power, data center costs, storage, networking, etc. Arguably, we also care about the price performance for large hardware producers (e.g. Google) more than hardware consumers, and the sale price won't necessarily be reflective of that, since it includes a significant mark-up over the cost of manufacture.
  • The forecasts don't consider existing forecasts from e.g. the IRDS that you mention, which are actually very pessimistic about the scaling of energy costs for CMOS chips over the next 15 years. (Of course, this doesn't preclude better scaling through switching to other technology).
  • If I recall correctly, the report partially justifies its estimate by guessing that even if chip design improvements bottom out, improvements in manufacturing cost and chip lifetime might still create a relatively steady rate of progress. I think this requires some assumptions about the cost model that may not be true, though I haven't done enough investigation yet to be sure.

(This isn't to disparage the report -- I think it's an awesome report and the current estimate is a great starting point, and Ajeya very explicitly disclaims that these are the forecasts most likely to be knowably mistaken.)

As a side note, I think EAs tend to misuse and misunderstand Moore's Law in general. As you say, Moore's Law says that the number of transistors on a chip doubles every two years. This has remained true historically, but is only dubiously correlated with 'price performance Moore's Law'-- a doubling of price performance every two years. As I note above, I think the data publicly collected on price performance is poor, partially because the 'price' and 'performance' of hardware is trickier to define than it looks. But e.g. this recent paper estimates that the price performance of at least universal processors has slowed considerably in recent years (the paper estimates 8% improvement in performance-per-dollar annually from 2008 - 2013, see section 4.3.2 'Current state of performance-per-dollar of universal processors'). Even if price performance Moore's Law ever held true, it's really not clear that it holds now.
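To put rough numbers on the gap between the two versions of the law (a quick arithmetic sketch; the 8%/year figure is the paper's estimate for universal processors, the rest is just arithmetic):

```python
import math

# "Price-performance Moore's Law" -- doubling every two years -- implies an
# annual improvement of 2**(1/2) - 1, i.e. roughly 41% per year.
moores_law_annual = 2 ** (1 / 2) - 1

# An 8%/year improvement in performance-per-dollar instead implies a doubling
# time of log(2) / log(1.08), i.e. roughly 9 years.
doubling_time_at_8pct = math.log(2) / math.log(1.08)

print(f"{moores_law_annual:.0%}/year vs. a doubling every "
      f"{doubling_time_at_8pct:.1f} years at 8%/year")
```

So even taking the paper's estimate with a grain of salt, the recent trend for universal processors is nowhere near a two-year doubling.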

For two, I think it's not the case that we have access to enough people with sufficient knowledge and expert opinion. I've been really interested in talking to hardware experts, and I think I would selfishly benefit substantially from experts who had thought more about "the big picture" or more speculative hardware possibilities (most people I talk to have domain expertise in something very specific and near-term). I've also found it difficult to get a lot of people's time, and would selfishly benefit from having access more hardware experts that were explicitly longtermist-aligned and excited to give me more of it. :) Basically, I'd be very in favor of having more people in industry available as advisors, as you suggest.

You also touch on this some, but I will say that I do think now is actually a particularly impactful time to influence policy at the company level (in addition to in government, which is implementing a slew of new semiconductor legislation and seems increasingly interested in regulating hardware companies). A recent report estimates that ASICs are poised to take over 50% of the hardware market in the coming years, and most ASIC companies now are small start-ups-- I think there's a case that influencing the policy and ethics of these small companies is much more tractable than it is for their larger counterparts, and it would be worth someone thinking carefully about how to do that. Working as an early employee seems like a good potential way.

Lastly, I will say that I think there might be valuable work to be done at the intersection of hardware and economics-- for an example, see again this paper. I think things like understanding models of hardware costs, the overall hardware market, cloud computing, etc. are not well-encapsulated by the kind of understanding technical experts tend to have and is valuable for the longtermist community to have access to. (This is also some of what I've been working on locally.)

Comment by abergal on abergal's Shortform · 2020-12-27T04:26:58.649Z · EA · GW

I get the sense that a lot of longtermists are libertarian or libertarian-leaning (I could be wrong) and I don't really understand why. Overall the question of where to apply market structures vs. centralized planning seems pretty unclear to me.

  • More centralization seems better from an x-risk perspective in that it avoids really perverse profit-seeking incentives that companies might have to do unsafe things. (Though several centralized nation-states likely have similar problems on the global level, and maybe companies will have more cosmopolitan values in some important way.)
  • Optimizing for increasing GDP doesn't seem very trajectory changing from a longtermist perspective.  On the other hand, making moral progress seems like it could be trajectory changing, and centralized economies with strong safety nets seem like they could in theory be better at promoting human compassion than societies that are very individualistic.
  • Sometimes people say that historical evidence points to centralized economies leading to dystopias. This seems true, but I'm not sure that there's enough evidence to suggest that it's implausible to have some kind of more centralized economy that's better than a market structure.
Comment by abergal on 2020 AI Alignment Literature Review and Charity Comparison · 2020-12-22T06:34:48.504Z · EA · GW

AI Impacts now has a 2020 review page so it's easier to tell what we've done this year-- this should be more complete / representative than the posts listed above. (I appreciate how annoying the continuously updating wiki model is.)

Comment by abergal on My upcoming CEEALAR stay · 2020-12-18T05:08:13.553Z · EA · GW

I was speaking about AI safety! To clarify, I meant that investments in formal verification work could in part be used to develop those less primitive proof assistants.

Comment by abergal on Ask Rethink Priorities Anything (AMA) · 2020-12-16T18:57:34.316Z · EA · GW

I found this response  insightful and feel like it echoes mistakes I've made as well; really appreciate you writing it.

Comment by abergal on richard_ngo's Shortform · 2020-12-16T18:51:04.138Z · EA · GW

Thank you for writing this post-- I have the same intuition as you about this being very misleading and found this thread really helpful.

Comment by abergal on Some promising career ideas beyond 80,000 Hours' priority paths · 2020-12-15T00:07:55.792Z · EA · GW

Chiming in on this very late. (I worked on formal verification research using proof assistants for a sizable part of undergrad.)

- Given the stakes, it seems like it could be important to verify 1. formally after the math proofs step. Math proofs are erroneous a non-trivial fraction of the time.

- While I agree that proof assistants right now are much slower than doing math proofs yourself, verification is a pretty immature field. I can imagine them becoming a lot better such that they do actually become better to use than doing math proofs yourself, and don't think this would be the worst thing to invest in.

- I'm somewhat unsure about the extent to which we'll be able to cleanly decompose 1. and 2. in the systems we design, though I haven't thought about it much.

- A lot of the formal verification work on proof assistants seems to me like it's also work that could apply to verifying learned specifications? E.g. I'm imagining that this process would be automated, and the automation used could look a lot like the parts of proof assistants that automate proofs.

Comment by abergal on Long-Term Future Fund: Ask Us Anything! · 2020-12-10T22:10:22.407Z · EA · GW

In comparison with nonprofits it’s much more difficult. My read is that we sort of expect the nonprofits to never die, which means we need to be *very very* sure about them before setting them up.  But if this is the case it would be obviously severely limiting.

To clarify, I don’t think that most projects will be actively harmful-- in particular, the “projects that result in a team covering a space worse than the next person who would have come along” case seems fairly rare to me, and would mostly apply to people who’d want to do certain movement-facing work or engage with policymakers. From a purely hits-based perspective, I think there’s still a dearth of projects that have a non-trivial chance of being successful, and this is much more limiting than projects being not as good as the next project to come along.

The obvious solution to this would be to have bigger orgs with more possibility.  

I agree with this. Maybe another thing that could help would be to have safety nets such that EAs who overall do good work could start and wind down projects without being worried about sustaining their livelihood or the livelihood of their employees? Though this could also create some pretty bad incentives.

 Some ideas I’ve had:

Thanks for these, I haven’t thought about this much in depth and think these are overall very good ideas that I would be excited to fund. In particular:

 - Experiment with advertising campaigns that could be clearly scaled up.  Some of them seem linearly useful up to millions of dollars.

I agree with this; I think there’s a big opportunity to do better and more targeted marketing in a way that could scale. I’ve discussed this with people and would be interested in funding someone who wanted to do this thoughtfully.

 -  Add additional resources to make existing researchers more effective.

 - Pay for virtual assistants and all other things that could speed researchers up.

 - Add additional resources to make nonprofits more effective, easily.

Also super agree with this. I think an unfortunate component here is that many altruistic people are irrationally frugal, including me-- I personally feel somewhat weird about asking for money to have a marginally more ergonomic desk set-up or an assistant, but I generally endorse people doing this and would be happy to fund them (or other projects making researchers more effective).

 - Focus heavily on funding non-EA projects that are still really beneficial. This could mean an emphasis on funding new nonprofits that do nothing but rank and do strategy for more funding.

I think historically, people have found it pretty hard to outsource things like this to non-EAs, though I agree with this in theory.


One total guess at an overarching theme for why we haven’t done some of these things already is that people implicitly model longtermist movement growth on the growth of academic fields, which grow via slowly accruing prestige and tractable work to do over time, rather than modeling them as a tech company the way you describe. I think there could be good reasons for this-- in particular, putting ourselves in the reference class of an academic field might attract the kind of people who want to be academics, which are generally the kinds of people we want-- people who are very smart and highly-motivated by the work itself rather than other perks of the job. For what it’s worth, though, my guess is that the academic model is suboptimal, and we should indeed move to a more tech-company like model on many dimensions.

Comment by abergal on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-10T01:33:05.892Z · EA · GW

(These are more additional considerations, not intended to be counterarguments given that your post itself was mostly pointing at additional considerations.)


  • The longtermism community can enjoy above-average growth for only a finite window of time. (It can at most control all resources, after which its growth equals average growth.)
  • Thus, spending money on growing the longtermism community now rather than later merely moves a transient window of additional resource growth to an earlier point in time.
  • We should be indifferent about the timing of benefits, so this effect doesn't matter. On the other hand, by waiting one year we can earn positive expected returns by (e.g.) investing into the stock market.
  • To sum up, giving later rather than now has two effects: (1) moving a fixed window of additional growth around in time and (2) leaving us more time to earn positive financial returns. The first effect is neutral, the second is positive. Thus, overall, giving later is always better.

First, given that longtermists are generally concerned with trajectory changes, controlling all resources seems like we've largely 'won', and having more financial returns on top of this seems fairly negligible by comparison. In many cases I'd gladly trade absolute financial returns for controlling a greater fraction of the world's resources sooner.

Second, the target you need to hit is arguably pretty narrow. The objection only applies conclusively to things that basically create cause-agnostic, transferable resources that are allocated at least as well as if allocated by your future self. If resources are tied to a particular cause area, are not transferable, or are more poorly allocated, they count less.

Speaking to movement-building as an alternative to financial investment in particular:

It feels to me like quality-adjusted longtermists are more readily transferable and cause-agnostic than money on short time-scales, in the sense that they can either earn money or do direct work, and at least right now, we seem to be having trouble effectively turning money into direct work. 

It's definitely a lot less clear whether there's a compounding effect for longtermists and how readily they can be transferred into the longer-term future. For what it's worth, I'd guess there is such a compounding effect, and they can be transferred, especially given historical evidence of transfer of values between generations. Whether this is true / consistent / competitive with stock market returns is definitely debatable and a matter of 'messy empirics', though.

Comment by abergal on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-09T19:43:27.543Z · EA · GW

Thanks for writing this all up, Owen, I bet I'll link to this post many times in the future. I'll also selfishly link my previous post on movement-building as a longtermist investment.

Comment by abergal on Long-Term Future Fund: Ask Us Anything! · 2020-12-07T19:03:02.729Z · EA · GW

I think definitely more or equally likely. :) Please apply!

Comment by abergal on Long-Term Future Fund: Ask Us Anything! · 2020-12-07T18:30:38.457Z · EA · GW

Some things I think could actively cause harm:

  • Projects that accelerate technological development of risky technologies without corresponding greater speedup of safety technologies
  • Projects that result in a team covering a space or taking on some coordination role that is worse than the next person who could have come along
  • Projects that engage with policymakers in an uncareful way, making them less willing to engage with longtermism in the future, or causing them to make bad decisions that are hard to reverse
  • Movement-building projects that give a bad first impression of longtermists
  • Projects that risk attracting a lot of controversy or bad press
  • Projects with ‘poisoning the well’ effects where if it’s executed poorly the first time, someone trying it again will have a harder time-- e.g., if a large-scale project doing EA outreach to high schoolers went poorly, I think a subsequent project would have a much harder time getting buy-in from parents.

More broadly, I think, as Adam notes above, that the movement grows as a function of its initial composition. I think that even if the LTFF had infinite money, this pushes against funding every project where we expect the EV of the object-level work to be positive-- if we want the community to attract people who do high-quality work, we should fund primarily high-quality work. Since the LTFF does not have infinite money, I don’t think this has much of an effect on my funding decisions, but I’d have to think about it more explicitly if we end up with much more money than our current funding bar requires. (There are also other obvious reasons not to fund all positive-EV things, e.g. if we expected to be able to use the money better in the future.)

I think it would be good to have scalable interventions for impact. A few thoughts on this:

  • At the org-level, there’s a bottleneck in mentorship and organizational capacity, and loosening it would allow us to take on more inexperienced people. I don’t know of a good way to fix this other than funding really good people to create orgs and become mentors. I think existing orgs are very aware of this bottleneck and working on it, so I’m optimistic that this will get much better over time.
  • Personally, I’m interested in experimenting with trying to execute specific high-value projects by actively advertising them and not providing significant mentorship (provided there aren’t negative externalities to the project not being executed well). I’m currently discussing this with the fund.
  • Overall, I think we will always be somewhat bottlenecked by having really competent people who want to work on longtermist projects, and I would be excited for people to think of scalable interventions for this in particular. I don’t have any great ideas here off the top of my head.
Comment by abergal on Long-Term Future Fund: Ask Us Anything! · 2020-12-07T03:50:29.787Z · EA · GW

I think this century is likely to be extremely influential, but there's likely important direct work to do at many points in this century.  Both patient philanthropy projects we funded have relevance to that timescale-- I'd like to know about how best to allocate longtermist resources between direct work, investment, and movement-building over the coming years, and I'm interested in how philanthropic institutions might change.

I also think it's worth spending some resources thinking about scenarios where this century isn't extremely influential.

Comment by abergal on Long-Term Future Fund: Ask Us Anything! · 2020-12-06T23:10:58.079Z · EA · GW

Edit: I really like Adam's answer

There are a lot of things I’m uncertain about, but I should say that I expect most research aimed at resolving these uncertainties not to provide strong enough evidence to change my funding decisions (though some research definitely could!). I do think weaker evidence could change my decisions if we had a larger number of high-quality applications to choose from. On the current margin, I’d be more excited about research aimed at identifying new interventions that could be promising.

Here's a small sample of the things that feel particularly relevant to grants I've considered recently. I'm not sure if I would say these are the most crucial:

  • What sources of existential risk are plausible?

    • If I thought that AI capabilities were perfectly entangled with their ability to learn human preferences, I would be unlikely to fund AI alignment work.
    • If I thought institutional incentives were such that people wouldn’t create AI systems that could be existentially threatening without taking maximal precautions, I would be unlikely to fund AI risk work at all.
    • If I thought our lightcone was overwhelmingly likely to be settled by another intelligent species similar to us, I would be unlikely to fund existential risk mitigation outside of AI.
  • What kind of movement-building work is effective?

    • Adam writes above how he thinks movement-building work that sacrifices quality for quantity is unlikely to be good. I agree with him, but I could be wrong about that. If I changed my mind here, I’d be more likely to fund a larger number of movement-building projects.
    • It seems possible to me that work that’s explicitly labeled as ‘movement-building’ is generally not as effective for movement-building as high-quality direct work, and could even be net-negative. If I decided this was true, I’d be less likely to fund movement-building projects at all.
  • What strands of AI safety work are likely to be useful?

    • I currently take a fairly unopinionated approach to funding AI safety work-- I feel willing to fund anything that I think a sufficiently large subset of smart researchers would think is promising. I can imagine becoming more opinionated here, and being less likely to fund certain kinds of work.
    • If I believed that it was certain that very advanced AI systems were coming soon and would look like large neural networks, I would be unlikely to fund speculative work focused on alternate paths to AGI.
    • If I believed that AI systems were overwhelmingly unlikely to look like large neural networks, this would have some effect on my funding decisions, but I’d have to think more about the value of near-term work from an AI safety field-building perspective.
Comment by abergal on Long-Term Future Fund: Ask Us Anything! · 2020-12-06T22:44:58.226Z · EA · GW

I’d overall like to see more work that has a solid longtermist justification but isn't as close to existing longtermist work. It seems like the LTFF might be well-placed to encourage this, since we provide funding outside of established orgs. This round, we received many applications from people who weren’t very engaged with the existing longtermist community. While these didn’t end up meeting our bar, some of the projects were fairly novel and good enough to make me excited about funding people like this in general.

There are also lots of particular less-established directions where I’d personally be interested in seeing more work, e.g.:

  • Work on structured transparency tools for detecting risks from rogue actors
  • Work on information security’s effect on AI development
  • Work on the offense-defense balance in a world with many advanced AI systems
  • Work on the likelihood and moral value of extraterrestrial life
  • Work on increasing institutional competence, particularly around existential risk mitigation
  • Work on effectively spreading longtermist values outside of traditional movement-building

These are largely a reflection of what I happen to have been thinking about recently and definitely not my fully-endorsed answer to this question-- I’d like to spend time talking to others and coming to more stable conclusions about specific work the LTFF should encourage more of.

Comment by abergal on Long-Term Future Fund: Ask Us Anything! · 2020-12-05T22:04:24.655Z · EA · GW

Like Adam, I’ll focus on things that someone reading this might be interested in supporting or applying for. I want to emphasize that this is my personal take, not representing the whole fund, and I would be sad if this response stopped anyone from applying -- there’s a lot of healthy disagreement within the fund, and we fund lots of things where at least one person thinks it’s below our bar. I also think a well-justified application could definitely change my mind.

  1. Improving science or technology, unless there’s a strong case that the improvement would differentially benefit existential risk mitigation (or some other aspect of our long-term trajectory). As Ben Todd explains here, I think this is unlikely to be as highly-leveraged for improving the long-term future as trajectory-changing efforts. I don’t think there’s a strong case that generally speeding up economic growth is an effective existential risk intervention.
  2. Climate change mitigation. From the evidence I’ve seen, I think climate change is unlikely to be either directly existentially threatening or a particularly highly-leveraged existential risk factor. (It’s also not very neglected.) But I could be excited about funding research work that changed my mind about this.
  3. Most self-improvement / community-member-improvement type work, e.g. “I want to create materials to help longtermists think better about their personal problems.” I’m not universally unexcited about funding this, and there are people who I think do good work like this, but my overall prior is that proposals here won’t be very good.

    I am also unexcited about the things Adam wrote.
Comment by abergal on Long-Term Future Fund: Ask Us Anything! · 2020-12-05T20:54:46.706Z · EA · GW

Just want to say I agree with both Habryka's comments and Adam's take that part of what the LTFF is doing is bridging the gap while orgs scale up (and increase in number) and don't have the capacity to absorb talent.

Comment by abergal on Long-Term Future Fund: Ask Us Anything! · 2020-12-05T16:16:37.165Z · EA · GW

Filtering out obvious misfits, I think the majority reason is that I don't think the project proposal will be sufficiently valuable for the long-term future, even if executed well. The minority reason is that there isn't strong enough evidence that the project will be executed well.

Sorry if this is an unsatisfying answer-- I think our applications are different enough that it’s hard to think of common reasons for rejection that are more granular. Also, often the bottom line is "this seems like it could be good, but isn't as good as other things we want to fund". Here are some more concrete kinds of reasons that I think have come up at least more than once:

  • Project seems good for the medium-term future, but not for the long-term future
  • Applicant wants to learn the answer to X, but X doesn't seem like an important question to me
  • Applicant wants to learn about X via doing Y, but I think Y is not a promising approach for learning about X
  • Applicant proposes a solution to some problem, but I think the real bottleneck in the problem lies elsewhere
  • Applicant wants to write something for a particular audience, but I don’t think that writing will be received well by that audience
  • Project would be good if executed exceptionally well, but applicant doesn't have a track record in this area, and there are no references that I trust to be calibrated to vouch for their ability
  • Applicant wants to do research on some topic, but their previous research on similar topics doesn't seem very good
  • Applicant wants money to do movement-building, but several people have reported negative interactions with them
Comment by abergal on Long-Term Future Fund: Ask Us Anything! · 2020-12-04T18:27:34.488Z · EA · GW

Really good question! 

We currently have ~$315K in the fund balance.* My personal median guess is that we could use $2M over the next year while maintaining this year's bar for funding. This would be:

  • $1.7M more than our current balance
  • $500K more per year than we’ve spent in previous years
  • $800K more than the total amount of donations received in 2020 so far
  • $400K more than a naive guess for what the total amount of donations received will be in all of 2020. (That is, if we wanted a year of donations to pay for a year of funding, we would need $400K more in donations next year than what we got this year.)
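
The deltas above are simple arithmetic on the figures quoted in this comment; a quick sketch to check them (the 2020 donation figures are implied by the stated deltas rather than given directly):

```python
# All figures in USD, taken from the rough estimates quoted above.
balance = 315_000             # current fund balance
target = 2_000_000            # median guess for spending over the next year
spent_per_year = 1_500_000    # ~$1.5M given out in each of the last two years
donations_so_far = 1_200_000  # implied: target is "$800K more than donations received in 2020 so far"
naive_2020_total = 1_600_000  # implied: target is "$400K more than a naive full-year guess"

print(f"more than balance:          ${target - balance:,}")          # ~$1.7M
print(f"more per year than spent:   ${target - spent_per_year:,}")   # $500K
print(f"more than donations so far: ${target - donations_so_far:,}") # $800K
print(f"more than naive 2020 total: ${target - naive_2020_total:,}") # $400K
```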

Reasoning below:

Generally, we fund anything above a certain bar, without accounting explicitly for the amount of money we have. According to this policy, for the last two years, the fund has given out ~$1.5M per year, or ~$500K per grant round, and has not accumulated a significant buffer. 

This round had an unusually large number of high-quality applicants. We spent $500K, but we pushed two large grant decisions to our next payout round, and several of our applicants happened to receive money from another source just before we communicated our funding decision. This makes me think that if this increase in high-quality applicants persists, it would be reasonable to have $600K - $700K per grant round, for a total of ~$2M over the next year.

My personal guess is that the increase in high-quality applications will persist, and I’m somewhat hopeful that we will get even more high-quality applications, via a combination of outreach and potentially some active grantmaking.  This makes me think that $2M over the next year would be reasonable without going below the ROI of the last marginal dollar of the grants we made this year, though I’m not certain. (Of the two other fund managers who have made quantitative guesses on this so far, one fund manager also had $2M as their median guess, while another thought slightly above $1.5M was more likely.)

I also think there’s a reasonable case for having slightly more than our median guess available in the fund. This would both act as a buffer in case we end up with more grants above our current bar than expected, and would let us proactively encourage potential grantees to apply for funding without being worried that we’ll run out of money.

If we got much more money than applications that meet our current bar, we would let donors know. I think we would also consider lowering our bar for funding, though this would only happen after checking in with the largest donors.

* This is less than the amount displayed in our fund page, which is still being updated with our latest payouts.

Comment by abergal on EA Forum Prize: Winners for August 2020 · 2020-11-06T01:40:53.590Z · EA · GW

Random thought: I think it would be kind of cool if there were EA forum prizes for people publicly changing their minds in response to comments/ feedback.

Comment by abergal on New report on how much computational power it takes to match the human brain (Open Philanthropy) · 2020-09-20T01:46:41.012Z · EA · GW

Planned summary for the Alignment Newsletter:

In this blog post, Joseph Carlsmith gives a summary of his longer report estimating the number of floating point operations per second (FLOP/s) which would be sufficient to perform any cognitive task that the human brain can perform. He considers four different methods of estimation.

Using the mechanistic method, he estimates the FLOP/s required to model the brain’s low-level mechanisms at a level of detail adequate to replicate human task-performance. He does this by estimating that ~1e13 - 1e17 FLOP/s is enough to replicate what he calls “standard neuron signaling” — neurons signaling to each other using electrical impulses (at chemical synapses) — and learning in the brain, and arguing that including the brain’s other signaling processes would not meaningfully increase these numbers. He also suggests that various considerations point weakly to the adequacy of smaller budgets.

Using the functional method, he identifies a portion of the brain whose function we can approximate with computers, and then scales up to FLOP/s estimates for the entire brain. One way to do this is by scaling up models of the human retina: Hans Moravec's estimates for the FLOP/s of the human retina imply 1e12 - 1e15 FLOP/s for the entire brain, while recent deep neural networks that predict retina cell firing patterns imply 1e16 - 1e20 FLOP/s.

Another way to use the functional method is to assume that current image classification networks with known FLOP/s requirements do some fraction of the computation of the human visual cortex, adjusting for the increase in FLOP/s necessary to reach robust human-level classification performance. Assuming somewhat arbitrarily that 0.3% to 10% of what the visual cortex does is image classification, and that the EfficientNet-B2 image classifier would require a 10x to 1000x increase in frequency to reach fully human-level image classification, he gets 1e13 - 3e17 implied FLOP/s to run the entire brain. Joseph holds the estimates from this method very lightly, though he thinks that they weakly suggest that the 1e13 - 1e17 FLOP/s estimates from the mechanistic method are not radically too low.
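The scaling structure of this version of the functional method can be sketched as follows. The classifier cost and the cortex-to-brain scale factor below are illustrative placeholders, not the report's exact inputs; only the 10x-1000x and 0.3%-10% ranges come from the summary above:

```python
def implied_brain_flops(classifier_flops_per_s, human_level_multiplier,
                        task_fraction_of_cortex, cortex_to_brain_scale):
    """Scale a classifier's running cost up to an implied whole-brain FLOP/s figure."""
    # Visual-cortex-equivalent cost: make the classifier robustly human-level,
    # then account for it being only a fraction of what the visual cortex does.
    visual_cortex = classifier_flops_per_s * human_level_multiplier / task_fraction_of_cortex
    # Scale from visual cortex to whole brain.
    return visual_cortex * cortex_to_brain_scale

# Hypothetical inputs: a classifier running at ~1e10 FLOP/s, and a whole brain
# doing ~10x what the visual cortex does (both placeholders), combined with the
# 10x-1000x and 0.3%-10% ranges quoted above.
low = implied_brain_flops(1e10, 10, 0.10, 10)      # optimistic end: ~1e13 FLOP/s
high = implied_brain_flops(1e10, 1000, 0.003, 10)  # pessimistic end: ~3e16 FLOP/s
```

With these placeholders the low end reproduces the 1e13 figure above, and the high end lands within an order of magnitude of 3e17 - about as close as a toy version of the calculation can be expected to get, since the report varies more inputs than this sketch does.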

Using the limit method, Joseph uses the brain’s energy budget, together with physical limits set by Landauer’s principle, which specifies the minimum energy cost of erasing bits, to upper-bound required FLOP/s to ~7e21. He notes that this relies on arguments about how many bits the brain erases per FLOP, which he and various experts agree is very likely to be > 1 based on arguments about algorithmic bit erasures and the brain's energy dissipation.
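The ~7e21 upper bound is straightforward to reproduce from Landauer's principle at body temperature, given the brain's power draw (a sketch; the 20 W figure is the commonly cited brain energy budget, an input I'm assuming rather than one stated above):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 310.0           # approximate body temperature, K
brain_power = 20.0  # W -- commonly cited brain energy budget (assumption)

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2) joules.
landauer_joules_per_bit = k_B * T * math.log(2)

# Maximum bit erasures per second the brain's energy budget allows.
max_bit_erasures_per_s = brain_power / landauer_joules_per_bit

# If each FLOP erases at least one bit, FLOP/s is bounded by bit erasures per second.
print(f"{max_bit_erasures_per_s:.1e}")  # ~6.7e21, i.e. the ~7e21 bound above
```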

Lastly, Joseph briefly describes the communication method, which uses the communication bandwidth in the brain as evidence about its computational capacity. Joseph thinks this method faces a number of issues, but some extremely preliminary estimates suggest 1e14 FLOP/s based on comparing the brain to a V100 GPU, and 1e16 - 3e17 FLOP/s based on estimating the communication capabilities of brains in traversed edges per second (TEPS), a metric normally used for computers, and then converting to FLOP/s using the TEPS to FLOP/s ratio in supercomputers.

Overall, Joseph thinks it is more likely than not that 1e15 FLOP/s is enough to perform tasks as well as the human brain (given the right software, which may be very hard to create). And he thinks it's unlikely (<10%) that more than 1e21 FLOP/s is required. For reference, an NVIDIA V100 GPU performs up to 1e14 FLOP/s (although FLOP/s is not the only metric that differentiates two computational systems).

Planned opinion:

I really like this post, although I haven't gotten a chance to get through the entire full-length report. I found the reasoning extremely legible and transparent, and there's no place where I disagree with Joseph's estimates or conclusions. See also [Import AI's summary](
Comment by abergal on Does Economic History Point Toward a Singularity? · 2020-09-08T17:16:50.748Z · EA · GW

On the acceleration model, the periods from 1500-2000, 10kBC-1500, and "the beginning of history to 10kBC" are roughly equally important data (and if that hypothesis has higher prior I don't think you can reject that framing). Changes within 10kBC - 1500 are maybe 1/6th of the evidence, and 1/3 of the relevant evidence for comparing "continuous acceleration" to "3 exponentials." I still think it's great to dig into one of these periods, but I don't think it's misleading to present this period as only 1/3 of the data on a graph.

I'm going to try and restate what's going on here, and I want someone to tell me if it sounds right:

  • If your prior is that growth rate increases happen on a timescale determined by the current growth rate, e.g. you're likely to have a substantial increase once every N doublings of output, you care more about later years in history when you have more doublings of output. This is what Paul is advocating for.
  • If your prior is that growth rate increases happen randomly throughout history, e.g. you're likely to have a substantial increase at an average rate of once every T years, all the years in history should have the same weight. This is what Ben has done in his regressions.

The more weight you start with on the former prior, the more strongly you should weight later time periods.

In particular: If you start with a lot of weight on the former prior, then T years of non-accelerating data at the beginning of your dataset won't give you much evidence against it, because it won't correspond to many doublings. But T years of non-accelerating data at the end of your dataset would correspond to many doublings, so would be more compelling evidence against.
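To make the two weightings concrete, here's a toy comparison of how much weight each historical period gets under each prior. The output levels are made-up placeholders chosen only to illustrate the reweighting, not historical data:

```python
import math

# (name, length in years, output at start, output at end) -- output levels in
# arbitrary units, and entirely hypothetical.
periods = [
    ("prehistory to 10kBC", 300_000, 1, 10),
    ("10kBC to 1500", 11_500, 10, 1_000),
    ("1500 to 2000", 500, 1_000, 1_000_000),
]

weights = {}
for name, years, start, end in periods:
    # Years-weighted prior: weight proportional to elapsed time.
    # Doublings-weighted prior: weight proportional to doublings of output.
    weights[name] = {"years": years, "doublings": math.log2(end / start)}
    print(name, weights[name])
```

Under the years weighting, prehistory dominates (300,000 years vs 500); under the doublings weighting, the three periods come out within a few-fold of each other (~3.3, ~6.6, and ~10 doublings), which is the sense in which the periods can be "roughly equally important" on the acceleration model.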

Comment by abergal on Does Economic History Point Toward a Singularity? · 2020-09-08T05:31:47.267Z · EA · GW

I think everyone agrees that the industrial revolution led to an increase in the growth rate. I think 'explosive' growth as Roodman talks about it hasn't happened yet, so I would avoid that term.