Posts

US bill limiting patient philanthropy? 2021-06-24T22:14:32.772Z
EA Infrastructure Fund: Ask us anything! 2021-06-03T01:06:19.360Z
EA Infrastructure Fund: May 2021 grant recommendations 2021-06-03T01:01:01.202Z
Progress studies vs. longtermist EA: some differences 2021-05-31T21:35:08.473Z
What are things everyone here should (maybe) read? 2021-05-18T18:34:42.415Z
How much does performance differ between people? 2021-03-25T22:56:32.660Z
Giving and receiving feedback 2020-09-07T07:24:33.941Z
What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? 2020-08-13T09:15:39.622Z
Max_Daniel's Shortform 2019-12-13T11:17:10.883Z
When should EAs allocate funding randomly? An inconclusive literature review. 2018-11-17T14:53:38.803Z

Comments

Comment by Max_Daniel on EA Infrastructure Fund: Ask us anything! · 2021-07-21T21:58:53.328Z · EA · GW

(I'd be very interested in your answer if you have one btw.)

Comment by Max_Daniel on The Centre for the Governance of AI is becoming a nonprofit · 2021-07-09T19:15:12.877Z · EA · GW

FWIW I agree that, for some lines of work you might want to do, managing conflicts of interest is very important, and I'm glad you're thinking about how to do this.

Comment by Max_Daniel on Linch's Shortform · 2021-07-05T23:13:58.086Z · EA · GW

That seems fair. To be clear, I think "ground truth" isn't the exact framing I'd want to use, and overall I think the best version of such an exercise would encourage some degree of skepticism about the alleged 'better' answer as well.

Assuming it's framed well, I think there are both upsides and downsides to using examples that are closer to EA vs. clearer-cut. I'm uncertain which would seem better overall if I could only do one of them.

Another advantage of my suggestion, in my view, is that it relies less on mentors. I'm concerned that having mentors who are less epistemically savvy than the best participants can detract a lot from the value the exercise might provide, and that it would be super hard to ensure adequate mentor quality for some audiences I'd want to use this exercise for. Even if you're less concerned about this, relying on any kind of plausible mentor seems less scalable than a version that relies only on access to published material.

Comment by Max_Daniel on EA Infrastructure Fund: Ask us anything! · 2021-07-05T22:28:29.922Z · EA · GW

I haven't thought a ton about the implications of this, but my initial reaction also is to generally be open to this.

So if you're reading this and are wondering whether it could be worth submitting an application for funding for past expenses, then I think the answer is that we'd at least consider it, so potentially yes.

If you're reading this and it really matters to you what the EAIF's policy on this is going forward (e.g., if it's decision-relevant for some project you might start soon), you might want to check with me before going ahead. I'm not sure I'll be able to say anything more definitive, but it's at least possible. And to be clear, so far all that we have are the personal views of two EAIF managers, not a considered opinion or policy of all fund managers or the fund as a whole or anything like that.

Comment by Max_Daniel on Linch's Shortform · 2021-07-05T21:03:40.060Z · EA · GW

I would be very excited about someone experimenting with this and writing up the results. (And would be happy to provide EAIF funding for this if I thought the details of the experiment were good and the person a good fit for doing this.)

If I had had more time, I would have done this for the EA In-Depth Fellowship seminars I designed and piloted recently.

I would be particularly interested in doing this for cases where there is some amount of easily transmissible 'ground truth' people can use as a feedback signal. E.g.:

  • You first let people red-team deworming papers and then give them some more nuanced 'Worm Wars' stuff. (Where ideally you want people to figure out "okay, despite paper X making that claim we shouldn't believe that deworming helps with short/mid-term education outcomes, but despite all the skepticism by epidemiologists here is why it's still a great philanthropic bet overall" - or whatever we think the appropriate conclusion is.)
  • You first let people red-team particular claims about the effects on hen welfare from battery cages vs. cage-free environments and then you show them Ajeya's report.
  • You first let people red-team particular claims about the impacts of the Justinian plague and then you show them this paper.
  • You first let people red-team particular claims about "X is power-law distributed" and then you show them Clauset et al., Power-law distributions in empirical data.

(Collecting a list of such examples would be another thing I'd be potentially interested to fund.)

Comment by Max_Daniel on COVID: How did we do? How can we know? · 2021-07-01T23:19:47.186Z · EA · GW

We even saw an NYT article about the CDC and whether reform is possible.

There were some other recent NYT articles which, based on my limited COVID knowledge, I thought were pretty good, e.g. on the origin of the virus or on airborne vs. droplet transmission [1].

The background of their author, however, seems fairly consistent with an "established experts and institutions largely failed" story:

Zeynep Tufekci, a contributing opinion writer for The New York Times, writes about the social impacts of technology. She is an assistant professor in the School of Information and Library Science at the University of North Carolina, a faculty associate at the Berkman Center for Internet and Society at Harvard, and a former fellow at the Center for Internet Technology Policy at Princeton. Her research revolves around politics, civics, movements, privacy and surveillance, as well as data and algorithms.

Originally from Turkey, Ms. Tufekci was a computer programmer by profession and academic training before turning her focus to the impact of technology on society and social change.

It is interesting that perhaps some of the best commentary on COVID in the world's premier newspaper comes from a former computer programmer whose main job before COVID was writing about tech issues.

(Though note that this is my super unsystematic impression. I'm not reading a ton of COVID commentary, in the NYT or elsewhere. I guess a skeptical observer could also argue "well, the view you like is the one typically championed by Silicon Valley types and other semi/non-experts, so you shouldn't be surprised that the newspaper op-eds you like are written by such people".)

--

[1] What do you do if you want to expand on this topic "without the word limits" of an NYT article? Easy.

Comment by Max_Daniel on How to get technological knowledge on AI/ML (for non-tech people) · 2021-06-30T20:38:27.829Z · EA · GW

This is great, thank you so much for sharing. I expect that many people will be in a similar situation, and so that I and others will link to this post many times in the future.

(For the same reason, I also think that pointers to potentially better resources by others in the comments would be very valuable.)

Comment by Max_Daniel on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-06-29T17:28:10.519Z · EA · GW

(The following is just my view, not necessarily the view of other EAIF managers. And I can't speak for the LTFF at all.)

FWIW I can think of a number of circumstances I'd consider a "convincing reason" in this context. In particular, cases where people know they won't be available for 6-12 months because they want to wrap up some ongoing unrelated commitment, or cases where large lead times are common (e.g., PhD programs and some other things in academia).

I think as with most other aspects of a grant, I'd make decisions on a case-by-case basis that would be somewhat hard to describe by general rules.

I imagine I'd generally be fairly open to considering cases where an applicant thinks it would be useful to get a commitment now for funding that would be paid out a few months out, and I would much prefer they just apply as opposed to worrying too much about whether their case for this is "convincing". 

Comment by Max_Daniel on What are some key numbers that (almost) every EA should know? · 2021-06-28T09:54:46.423Z · EA · GW

We've now turned most of these into Anki cards

Amazing, thank you so much!

Comment by Max_Daniel on What are some key numbers that (almost) every EA should know? · 2021-06-28T09:54:09.210Z · EA · GW

I'm afraid I don't know of great sources for the numbers you list; such sources may only exist for the distribution of compute. Perhaps the numbers on the EA community are too uncertain and dynamic to be a good fit for Anki anyway. On the other hand, it may be mainly the order of magnitude that is interesting, and it should be possible to get this right using crude proxies.

One proxy for the size of the EA community could be the number of EA survey respondents (or perhaps one above a certain engagement level). 

On the other points:

  • For the Great Decoupling you could use "total growth of US labor productivity since 1980" together with "total growth of median household income since 1980" (or both up to some recent year for which data is available). And the same for labor productivity vs. number of jobs since 2000. See, for instance, the graph here. You could also use the graph itself as an answer.
  • For changes in the distribution of world income, you could just use the two graphs in this article as answers (the 'elephant graph' is the one for 1988-2008, and there is also a newer one for 2008-2013/14). You could also extract some key numbers from these graphs, or some other statistics. E.g., the article provides the change of the Gini coefficient of the world income distribution, but this may have the downside that it's hard to interpret: 

As measured by the Gini coefficient, which ranges from zero (a hypothetical situation in which every person has the same income) to one (a hypothetical situation in which one person receives all income), global inequality fell from 0.70 in 1988 to 0.67 in 2008 and then further to 0.62 in 2013. There has probably never been an individual country with a Gini coefficient as high as 0.70, while a Gini coefficient of around 0.62 is akin to the inequality levels that are found today in Honduras, Namibia, and South Africa. (Loosely speaking, South Africa represents the best proxy for the inequality of the entire world.)

  • For the heavy-tailedness of various distributions I'd use the share of, e.g., the top 10% and 1% in the total.
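(For concreteness, here is a rough, hypothetical sketch of how such numbers could be computed from a raw sample - the Gini coefficient plus top-10% and top-1% shares. The lognormal 'incomes' are purely illustrative.)

```python
import numpy as np

def gini(x):
    """Gini coefficient of a sample, via the standard sorted-index formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    total = x.sum()
    return (2 * np.sum(np.arange(1, n + 1) * x) - (n + 1) * total) / (n * total)

def top_share(x, q):
    """Share of the total held by the top fraction q (e.g. q=0.01 for the top 1%)."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]
    k = max(1, int(round(q * len(x))))
    return x[:k].sum() / x.sum()

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=9.0, sigma=1.0, size=100_000)  # made-up income sample
print(f"Gini:          {gini(incomes):.2f}")
print(f"Top 10% share: {top_share(incomes, 0.10):.2f}")
print(f"Top 1% share:  {top_share(incomes, 0.01):.2f}")
```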
Comment by Max_Daniel on Ben Garfinkel's Shortform · 2021-06-27T10:57:04.930Z · EA · GW

I agree with most of what you say here.

[ETA: I now realize that I think the following is basically just restating what Pablo already suggested in another comment.]

I think the following is a plausible & stronger concern, which could be read as a stronger version of your crisp concern #3.

"Humanity has not had meaningful control over its future, but AI will now take control one way or the other. Shaping the transition to a future controlled by AI is therefore our first and last opportunity to take control. If we mess up on AI, not only have we failed to seize this opportunity, there also won't be any other."

Of course, AI being our first and only opportunity to take control of the future is a strictly stronger claim than AI being one such opportunity. And so it must be less likely. But my impression is that the stronger claim is sufficiently more important that it could be justified to basically 'wager' most AI risk work on it being true.

Comment by Max_Daniel on Which non-EA-funded organisations did well on Covid? · 2021-06-20T19:11:15.887Z · EA · GW

This NYTimes Magazine article might be interesting. Its framing is basically "why did the CDC fail, and how can it do better next time?". 

It mentions some other groups that allegedly did better than the CDC. Though I don't know to what extent these groups were or were not EA-funded. E.g., it says:

The Covid Rapid Response Working Group, at the Edmond J. Safra Center for Ethics at Harvard, was one of several independent organizations that stepped in to help fill the gap. In the last year, these groups, run mostly out of academic centers and private foundations, have transformed reams of raw data — on transmission rates and hospitalization rates and death tolls — into actionable intelligence. They have created county-by-county risk-assessment tools, devised national testing strategies and mapped out national contact-tracing programs. In many if not most cases, they have moved faster than the C.D.C., painting a more accurate picture of the pandemic as it unfolded and offering more feasible solutions to the challenges that state and community leaders were facing.

Comment by Max_Daniel on A ranked list of all EA-relevant documentaries, movies, and TV series I've watched · 2021-06-19T22:23:41.910Z · EA · GW

I'm sure there are a number of interesting movies and documentaries on nuclear security.

Three movies that come to mind immediately:

  1. WarGames - a 1983 film that I found simultaneously interesting and very silly. The plot features the US giving control of their nuclear arsenal to an AI system running on a supercomputer (you can guess where it goes from here), a teenage hacker excitedly exclaiming "let's play 'Global Thermonuclear War'", and Tic Tac Toe as the solution to this film's version of the AI alignment problem. Curiously enough, Wikipedia claims that:

President Ronald Reagan, a family friend of Lasker's, watched the film and discussed the plot with members of Congress,[2] his advisers, and the Joint Chiefs of Staff. Reagan's interest in the film is credited with leading to the enactment 18 months later of NSDD-145, the first Presidential directive on computer security.[3]

  2. Stanley Kubrick's famous 1964 film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb.

  3. Fail Safe, also from 1964, which I haven't seen.

 

As with all fiction, there is a danger that viewers treat these as realistic depictions of reality or plausible scenarios, which in fact they are clearly not, at least in their details (or, in the case of WarGames, regarding almost everything). They may still be educational or thought-provoking insofar as they all feature accidental nuclear war, which is a facet of the risk some may not have considered.

Comment by Max_Daniel on What are some key numbers that (almost) every EA should know? · 2021-06-18T10:59:19.319Z · EA · GW

I like this idea. Here is some brainstorming output. Apologies for it being unedited/not sorted by categories:

  • Age of the universe
  • Age of the Earth
  • Age of homo sapiens
  • Timing of major transitions in evolution
  • Timing of invention of writing, agriculture, and the Industrial Revolution
  • Gross world product
  • Time for which Earth remains habitable absent big intervention
  • Number of working days in a year
  • Number of working hours in a year
  • Net present value of expected lifetime earnings of some reference class such as "graduate from roughly such-and-such uni and discipline"
  • Good Ventures's total assets
  • Net present value of expected total EA-aligned capital by cause area/worldview
  • Number of parameters, training wall clock time, compute requirements, etc. for GPT-3 and some other landmark AI models
  • World population, population of China, population of India, population of Europe/the US, etc. - and predictions for these
  • Some key numbers about the human brain, e.g. number of synapses, energy requirements, ... 
  • Expected number of lives saved by smallpox eradication
  • Volume of yearly Open Phil grants by cause area
  • Volume of donations moved by/to GiveWell and ACE top charities
  • Number of people working on certain cause areas such as AI safety, GCBR reduction, nuclear security, ...
  • The 'Kaldor facts'
  • The 'Great Decoupling' of labor productivity from jobs + wages in the US
  • Some key stats about the distribution of world income and how it has changed, e.g., Milanovic's "elephant graph" and follow-ups
  • Some key stats about, e.g., South Korean economic growth since 1950
  • Something about speed of improvement in various technologies, e.g., Moore's Law, how quickly the price of solar panels or various chemicals has fallen, etc. - weighted toward things that seem more 'relevant', e.g., falling price of various biotech services
  • Number of wild animals (by appropriate groups, e.g., mammals, birds, invertebrates)
  • Number of bacteria
  • Number of atoms in the observable universe
  • USG budget
  • Chinese govt budget
  • Big tech market capitalization
  • Total budget of some key international institutions, e.g., UN, WHO, BWC, OPCW
  • World energy use
  • Certain physics-based limits to growth, and when we'd reach them on a business-as-usual trajectory
  • How much total compute there is, and how it's distributed (e.g. supercomputers vs. gaming consoles vs. personal computers vs. ...)
  • How much EAs should discount future financial resources
  • Size of the EA community
  • Number of impact-weighted career plan changes caused by 80k every year
  • Some key stats about impact distributions where we have them, e.g., on how heavy-tailed the DCP2 global health cost-effectiveness numbers are
  • How much does it cost to cause the equivalent of one life saved by donating to the top-rated GiveWell charity?
  • What 'trade ratio' between doubling someone's consumption and averting the death of a <5 year old do you need to have such that GiveDirectly becomes as cost-effective as AMF according to GiveWell's cost-effectiveness model?
  • How many years of, e.g., chicken suffering do you avert with marginal donations to, e.g., ACE top charities?
  • How many years of, e.g., chicken suffering do you avert by going vegan?
  • How many people, and what share of the world or regional population were killed in certain historical catastrophes such as the Black Death, the Mongol conquests, the Great Leap Forward, or the transatlantic slave trade?
Comment by Max_Daniel on 2018-2019 Long Term Future Fund Grantees: How did they do? · 2021-06-18T08:50:11.897Z · EA · GW

Yeah I agree that info on how much absolute impact each grant seems to have had would be more relevant for making such updates. (Though of course absolute impact is very hard to estimate.)

Strictly speaking, the info in the OP is consistent with "99% of all impact came from one grant", and that grant could even be one of the "Not as successful as hoped for" ones. (Though taking into account all context/info I would guess that the highest-impact grants would be in the bucket "More successful than expected".) And if that were the case, one shouldn't make any updates that would be motivated by "this looks less heavy-tailed than I expected".

Comment by Max_Daniel on 2018-2019 Long Term Future Fund Grantees: How did they do? · 2021-06-17T13:25:55.714Z · EA · GW

Thanks, that makes sense.

  • I agree with everything you say about the GovAI example (and more broadly your last paragraph).
  • I do think my system 1 seems to work a bit differently since I can imagine some situations in which I would find it intuitive to update upwards on total success based on a lower 'success rate' - though it would depend on the definition of the success rate. I can also tell some system-2 stories, but I don't think they are conclusive.
    • E.g., I worry that a large fraction of outcomes with "impact at least x" might reflect a selection process that is too biased toward things that look typical or like sufficiently safe bets - thereby effectively sampling from a truncated range of a heavy-tailed distribution. The grants selected this way might then have an expected value of n times the median of the full distribution, with n depending on what share of outliers you systematically miss and how good your selection power within the truncated distribution is - and if the distribution is very heavy-tailed this can easily be less than the mean of the full distribution, i.e., it might fail to even beat the benchmark of grant decisions by lottery (see the sketch after this list).
      • (Tbc, in fact I think it's implausible that LTFF or EAIF decisions are worse than decisions by lottery, at least if we imagine a lottery across all applications including desk rejects.)
    • Similarly, suppose I have a prior impact distribution that makes me expect that (made-up number) 80% of the total ex-post impact will be from 20% of all grants. Suppose further that I then do an ex-post evaluation that makes me think that, actually, the top 20% of grants only account for 50% of the total value. There are then different updates I could make (how much weight to give each of these depends on other context and the exact parameters):
      • The ex-post impact distribution is less heavy-tailed than I thought.
      • The grant selection process is systematically missing outliers.
      • The outcome was simply bad luck (which in a sense wouldn't be that surprising since the empirical average is such an unstable estimate of the true mean of a highly heavy-tailed distribution). This could suggest that it would be valuable to find ways to increase the sample size, e.g., by spending less time on evaluating marginal grants and instead spending time on increasing the number of good applications.
  • However, I think that in this case my sys 1 probably "misfired" because the fraction of grants that performed better or worse than expected doesn't seem to have a straightforward implication within the kind of models mentioned in this or your comment.
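(As referenced above, here is a minimal simulation of the truncation worry. All distributions and parameters are made up; nothing here describes actual LTFF/EAIF data.)

```python
import numpy as np

rng = np.random.default_rng(1)
# Made-up, heavy-tailed "ex-post impact" values:
outcomes = rng.lognormal(mean=0.0, sigma=3.0, size=1_000_000)

lottery_mean = outcomes.mean()  # benchmark: funding applications at random

# A selection process that reliably beats the median but systematically misses
# the top 1% of outliers, i.e. it samples from a truncated range of the tail:
cap = np.quantile(outcomes, 0.99)
selected = outcomes[(outcomes > np.median(outcomes)) & (outcomes < cap)]

print(f"lottery mean:  {lottery_mean:.1f}")
print(f"selected mean: {selected.mean():.1f}")
# With a tail this heavy, the truncated-but-above-median portfolio typically
# comes out well below the lottery benchmark: missing the outliers costs more
# than the within-range selection power gains.
```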
Comment by Max_Daniel on 2018-2019 Long Term Future Fund Grantees: How did they do? · 2021-06-17T00:02:56.981Z · EA · GW
  • There is a part of me which finds the outcome (a 30 to 40% success rate) intuitively disappointing. However, it may suggest that the LTF was taking the right amount of risk per a hits-based-giving approach.

FWIW, my immediate reaction had been exactly the opposite: "wow, the fact that this skews so positive means the LTFF isn't risk-seeking enough". But I don't know if I'd stand by that assessment after thinking about it for another hour.

Comment by Max_Daniel on Some thoughts on EA outreach to high schoolers · 2021-06-15T21:13:09.503Z · EA · GW

Also, to be clear, are your original comment and this correction talking about the same survey population? I.e., EA survey takers in the same year(s)? Rather than comparing the results for different survey populations?

Comment by Max_Daniel on Some thoughts on EA outreach to high schoolers · 2021-06-15T21:10:57.516Z · EA · GW

How do people who first got involved at 15-17 or 18 compare to people who first got involved age 20-25 (or something like that)? So "unusually young" vs. "median" rather than "unusually young vs. unusually old"?

Comment by Max_Daniel on Progress studies vs. longtermist EA: some differences · 2021-06-15T13:26:44.373Z · EA · GW

Thanks! I think I basically agree with everything you say in this comment. I'll need to read your longer comment above to see if there is some place where we do disagree regarding the broadly 'metaethical' level (it does seem clear we land on different object-level views/preferences).

In particular, while I happen to like a particular way of cashing out the "impartial consequentialist" outlook, I (at least on my best-guess view on metaethics) don't claim that my way is the only coherent or consistent way, or that everyone would agree with me in the limit of ideal reasoning, or anything like that.

Comment by Max_Daniel on Vignettes Workshop (AI Impacts) · 2021-06-15T12:05:36.502Z · EA · GW

Sounds cool. 75% that I'll join on Friday from 10:30AM California time for a few hours. If it seemed like spending more time would be useful, I'd join again on Saturday from 10AM California time for a bit.

Lmk if a firmer RSVP would be helpful.

Comment by Max_Daniel on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-15T09:13:14.272Z · EA · GW

Great! I'm also intuitively optimistic about the effect of these new features on Wiki uptake, editor participation, etc.

Comment by Max_Daniel on My current impressions on career choice for longtermists · 2021-06-09T16:25:48.686Z · EA · GW

Narrowly,"chance favors the prepared mind" and being in either quant trading or cryptography (both competitive fields!) before the crypto boom presumably helps you see the smoke ahead of time, and like you some of the people I know in the space were world-class at an adjacent field like finance trading or programming.  Though I'm aware of other people who literally did stuff closer to fly a bunch to Korea and skirt the line on capital restrictions, which seems less reliant on raw or trained talent. 

(I agree that having knowledge of or experience in adjacent domains such as finance may be useful. But to be clear, the claim I intended to make was that the ability to do things like "fly a bunch to Korea" is, as you later say, a rare and somewhat practiceable skillset.

Looking back, I think I somehow failed to read your bullet point on "hard work being somewhat transferable" etc. I think the distinction you make there between  "doing crunch-time work in less important eras" vs. "steadily climbing towards excellence in very competitive domains" is very on-point, that the crypto examples should make us more bullish on the value of the former relative to the latter, and that my previous comment is off insofar as it can be read as me arguing against this.)

Comment by Max_Daniel on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-08T15:52:26.692Z · EA · GW

(done)

Comment by Max_Daniel on EA Infrastructure Fund: Ask us anything! · 2021-06-08T10:20:08.814Z · EA · GW

I also now think that the lower end of the 80% interval should probably be more like $5-15B.

Comment by Max_Daniel on EA Infrastructure Fund: Ask us anything! · 2021-06-08T10:16:23.887Z · EA · GW

Shouldn't your lower bound for the 50% interval be higher than for the 80% interval?

If the intervals were centered - i.e., spanning the 10th to 90th and the 25th to 75th percentile, respectively - then it should be, yes.

I could now claim that I wasn't giving centered intervals, but I think what is really going on is that my estimates are not diachronically consistent even if I make them within 1 minute of each other.

Comment by Max_Daniel on EA Infrastructure Fund: Ask us anything! · 2021-06-08T00:30:38.019Z · EA · GW

I think I often have an implicit intuition about something like "how heavy-tailed is this grant?". But I also think most grants I'm excited about are either at least somewhat heavy-tailed or aimed at generating information for a decision about a (potentially heavy-tailed) future grant, so this selection effect will reduce differences between grants along that dimension.

But I think that for less than 1/10 of the grants I consider I will have an explicit quantitative specification of the distribution in mind. (And if I do, it will be rougher than a full distribution, e.g. a single "x% chance of no impact" intuition.)

Generally I think our approaches are more often qualitative/intuitive than quantitative. There are rare exceptions, e.g. for the children's book grant I made a crappy cost-effectiveness back-of-the-envelope calculation just to check if the grant seemed like a non-starter based on this. As far as I remember, that was the only such case this round.

Sometimes we will discuss specific quantitative figures, e.g., the amount of donations a fundraising org might raise within a year. But our approach for determining these figures will then in turn usually be qualitative/intuitive rather than based on a full-blown quantitative model.

Comment by Max_Daniel on EA Infrastructure Fund: Ask us anything! · 2021-06-08T00:21:43.353Z · EA · GW

As an aside, I think that's an excellent heuristic, and I worry that many EAs (including myself) haven't internalized it enough.

(Though I also worry that pushing too much for it could lead to people failing to notice the exceptions where it doesn't apply.)

Comment by Max_Daniel on EA Infrastructure Fund: Ask us anything! · 2021-06-08T00:15:23.678Z · EA · GW

My knee-jerk reaction is: If "net negative" means "ex-post counterfactual impact anywhere below zero, but including close-to-zero cases" then it's close to 50% of grantees. Important here is that "impact" means "total impact on the universe as evaluated by some omniscient observer". I think it's much less likely that funded projects are net negative by the light of their own proxy goals or by any criterion we could evaluate in 20 years (assuming no AGI-powered omniscience or similar by then).

(I still think that the total value of the grantee portfolio would be significantly positive b/c I'd expect the absolute values to be systematically higher for positive than for negative grants.)

This is just a general view I have. It's not specific to EA Funds, or the grants this round. It applies to basically any action. That view is somewhat considered but I think also at least somewhat controversial. I have discussed it a bit but not a lot with others, so I wouldn't be very surprised if someone replied to this comment saying "but this can't be right because of X", and then I'd be like "oh ok, I think you're right, the close-to-50% figure now seems massively off to me".

--

If "net negative" means "significantly net negative" (though I'm not sure what the interesting bar for "significant" would  be), then I'm not sure I have a strong prior. Glancing over the specific grants we made I feel that for very roughly 1/4 of them I have some vague sense that "there is a higher-than-baseline risk for this being significantly net negative". But idk what that higher-than-baseline risk is as absolute probability, and realistically I think all that's going on here is that for about 1/4 of grants I can easily generate some prototypical story for why they'd turn out to be significantly net negative. I don't know how well this is correlated with the actual risk.

(NB I still think that the absolute values for 'significantly net negative' grants will be systematically smaller than for 'significantly net positive' ones. E.g., I'd guess that the 99th percentile ex-post impact grant much more than offsets the 1st percentile grant [which I'm fairly confident is significantly net negative].)

Comment by Max_Daniel on My current impressions on career choice for longtermists · 2021-06-07T23:56:04.163Z · EA · GW

I find your crypto trading examples fairly interesting, and I do feel like they only fit awkwardly with my intuitions - they certainly make me think it's more complicated.

However, one caveat is that "willing to see the opportunity"  and "willing to make radical life changes" don't sound quite right to me as conditions, or at least like they omit important things. I think that actually both of these things are practice-able abilities rather than just a matter of "willingness" (or perhaps "willingness" improves with practice). 

And in the few cases I'm aware of, it seems to me the relevant people were world-class excellent at some relevant inputs, in part clearly because they did spend significant time "practicing" them. 

The point is just that these inputs are broader than "ability to do cryptocurrency trading". On the other hand, they also don't fit super neatly into the aptitudes from the OP, though I'd guess the entrepreneurial aptitude would cover a lot of it (even if it's not emphasized in the description of it).

Comment by Max_Daniel on My current impressions on career choice for longtermists · 2021-06-07T23:46:55.547Z · EA · GW

As Max_Daniel noted, an underlying theme in this post is that "being successful at conventional metrics" is an important desiderata, but this doesn't reflect the experiences of longtermist EAs I personally know. For example, anecdotally, >60% of longtermists with top-N PhDs regret completing their program, and >80% of longtermists with MDs regret it.

Your examples actually made me realize that "successful at conventional metrics" maybe isn't a great way to describe my intuition (i.e., I misdescribed my view by saying that). Completing a top-N PhD or MD isn't a central example - or at least not sufficient for being a central example, and certainly not necessary for what I had in mind.

I think the questions that matter according to my intuition are things like:

  • Do you learn a lot? Are you constantly operating near the boundaries of what you know how to do and have practiced?
  • Are the people around you impressed by you? Are there skills where they would be like "off the top of my head, I can't think of anyone else who's better at this than <you>"?

At least some top-N PhDs will correlate well with this. But I don't think the correlation will be super strong: especially in some fields, I think it's not uncommon to end up in a kind of bad environment (e.g., advisor who isn't good at mentoring) or to be often "under-challenged" because tasks are either too easy or based on narrow skills one has already practiced to saturation or because there are too few incentives to progress fast. 

[ETA: I also think that many of the OP's aptitudes are really clusters of skills, and that PhDs run some risk of only practicing too small a number of skills. I.e., being considerably more narrow. Again this will vary a lot by field, advisor, other environmental conditions, etc.]

What I feel even more strongly is that these (potential) correlates of doing a PhD are much more important than the credential, except for narrow exceptions for some career paths (e.g., need a PhD if you want to become a professor).

I also think I should have said "being successful at <whatever> metric for one of these or another useful aptitude" rather than implying that "being successful at anything" is useful.

Even taking all of this into account, I think your anecdata is a reason to be somewhat more skeptical about this "being successful at <see above>" intuition I have.

Comment by Max_Daniel on Max_Daniel's Shortform · 2021-06-07T23:29:31.218Z · EA · GW

[PAI vs. GPAI]

So there is now (well, since June 2020) both a Partnership on AI and a Global Partnership on AI.

Unfortunately, GPAI's and PAI's FAQ pages conspicuously omit "how are you different from (G)PAI?".

Can anyone help?

At first glance it seems that:

  • PAI brings together a very large number of below-state actors of different types: e.g., nonprofits, academics, for-profit AI labs, ...
  • GPAI members are countries
  • PAI's work is based on 4 high-level goals that each are described in about two sentences [?]
  • GPAI's work is based on the OECD Recommendation on Artificial Intelligence
  • I don't know how/where the substantive work undertaken by PAI gets done - e.g., by PAI staff or by ad-hoc joint projects between some members or ... ?
  • GPAI has two "centers of excellence" in Montreal and Paris. I would guess [?] that a lot of the substantive work for/by it gets done there.

I also note that it's slightly ironic that GPAI differs from PAI by having added the adjective "global". It's based on an OECD recommendation, but the OECD is very much not a "global" organization - it's a club of rich market democracies. (Though GPAI membership differs a lot from the OECD - fewer than half of OECD members have joined GPAI, and some notable non-OECD members, such as India, have.)

Comment by Max_Daniel on EA Infrastructure Fund: Ask us anything! · 2021-06-07T22:09:10.566Z · EA · GW

I actually think this is surprisingly non-straightforward. Any estimate of the net present value of total longtermist $$ will have considerable uncertainty because it's a combination of several things, many of which are highly uncertain:

  • How much longtermist $$ is there now?
    • This is the least uncertain one. It's not super straightforward and requires nonpublic knowledge about the wealth and goals of some large individual donors, but I'd be surprised if my estimate on this was off by 10x.
  • What will the financial returns on current longtermist $$ be before they're being spent?
    • Over long timescales, for some of that capital, this might be 'only' as volatile as the stock market or some other 'broad' index.
    • But for some share of that capital (as well as on shorter time scale) this will be absurdly volatile. Cf. the recent fortunes some EAs have made in crypto.
  • How much new longtermist $$ will come in at which times in the future?
    • This seems highly uncertain because it's probably very heavy-tailed. E.g., there may well be a single source that increases total capital by 2x or 10x. Naturally, predicting the timing of such a single event will be quite uncertain on a time scale of years or even decades.
  • What should the discount rate for longtermist $$ be?
    • Over the last year, someone who has thought about this quite a bit told me first that they had updated from 10% per year to 6%, and then a few months later back again. This is a difference of one order of magnitude for $$ coming in in 50 years.
  • What counts as longtermist $$? If, e.g., the US government started spending billions on AI safety or biosecurity, most of which goes to things that from a longtermist EA perspective are kind of but not super useful, how would that count?

I think for some narrow notion of roughly "longtermist $$ as 'aligned' as Open Phil's longtermist pot" my 80% credence interval for the net present value is $30B - $1 trillion. I'm super confused how to think about the upper end because the 90th percentile case is some super weird transformative AI future. Maybe I should instead say that my 50% credence interval is $20B - $200B.

Generally my view on this isn't that well considered and probably not that resilient.

Comment by Max_Daniel on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-07T21:39:27.274Z · EA · GW

FWIW, I actually (and probably somewhat iconoclastically) disagree with this. :P

In particular, I think Part I of Reasons and Persons is underrated, and contains many of the most useful ideas. E.g., it's basically the best reading I know of if you want to get a deep and principled understanding of why 'naive consequentialism' is a bad idea, but also of why worries about naive applications of consequentialism, the demandingness objection, and many other popular objections don't succeed at undermining consequentialism as an ultimate criterion of rightness.

(I also expect that it is the part that would most likely be perceived as pointless hair-splitting.)

And I think the most important thought experiment in Reasons and Persons is not the teleporter, nor Depletion or Two Medical Programs, nor the Repugnant Conclusion or the Absurd Conclusion or the Very Repugnant Conclusion or the Sadistic Conclusion and whatever they're all called - I think it's Writer Kate, and then Parfit's Hitchhiker.

Part II in turn is highly relevant for answering important questions such as this one.

Part III is probably more original and groundbreaking than the previous parts. But it is also often misunderstood. I think that Parfit's "relation R" of psychological connectedness/continuity does a lot of the work we might think a more robust notion of personal identity would do - and in fact, Parfit's view helps rationalize some everyday intuitions, e.g., that it's somewhere between unreasonable and impossible to make promises that bind me forever. More broadly, I think that Parfit's view on personal identity is mostly not that revisionary, and that it mostly dispels a theoretical fiction most of our everyday intuitions neither need nor substantively rely on. (There are others, including other philosophers, who disagree with this - and think that there being no fact of the matter about questions of personal identity has, e.g., radically revisionary implications for ethics. But this is not Parfit's view.)

Part IV on population ethics is all good and well. (And in fact, I'm often disappointed by how little most later work in population ethics does to improve on Reasons and Persons.) But its key lessons are already widely appreciated within EA, and today there are more efficient introductions one can get to the field.

All of this is half-serious since I don't think there's a clear and reader-independent fact of the matter of which things in Reasons and Persons are "most important". It's also possible, especially for Part I, that what I think I got out of Reasons and Persons is quite idiosyncratic, and doesn't bear a super direct or obvious relationship to its actual content. Last but not least, it's been 5 years or so since I read Reasons and Persons, so probably some claims in this comment about content in Reasons and Persons are simply false because I misremember what's actually in there.

Comment by Max_Daniel on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-07T21:03:18.348Z · EA · GW

I think all funds are generally making good decisions.

I think a lot of the effect is just that making these decisions is hard, and so that variance between decision-makers is to some extent unavoidable. I think some of the reasons are quite similar to why, e.g., hiring decisions, predicting startup success, high-level business strategy, science funding decisions, or policy decisions are typically considered to be hard/unreliable. Especially for longtermist grants, on top of this we have issues around cluelessness, potentially missing crucial considerations, sign uncertainty, etc.

I think you are correct that both of the following are true:

  • There is potential of improving decision quality by spending time on discussing diverging views, improving the way we aggregate opinions to the extent they still differ after the amount of discussion that is possible, and maybe by using specific 'decision making tools' (e.g., certain ways of a structured discussion + voting).
  • There are interesting lessons to be learned by identifying cruxes. Some of these lessons might directly improve future decisions, others might be valuable for other reasons - e.g., generating active grantmaking ideas or cruxes/results being shareable and thereby being a tiny bit epistemically helpful to many people.

I think a significant issue is that both of these cost time - both identifying how to improve in these areas and then implementing the improvements - which is a very scarce resource for fund managers.

I don't think it's obvious whether at the margin the EAIF committee should spend more or less time to get more or fewer benefits in these areas. Hopefully this means we're not too far away from the optimum. 

I think there are different views on this within EA Funds (both within the EAIF committee, and potentially between the average view of the EAIF committee and the average view of the LTFF committee - or at least this is suggested by revealed preferences as my loose impression is that  LTFF fund managers spend more time in discussions with each other). Personally, I actually lean toward spending less time and less aggregation of opinions across fund managers - but I think currently this view isn't sufficiently widely shared that I expect it to be reflected in how we're going to make decisions in the future.

But I also feel a bit confused because some people (e.g., some LTFF fund managers, Jonas) have told me that spending more time discussing disagreements seemed really helpful to them, while I feel like my experience with this, and my inside-view prediction of what spending more time on discussions would look like, make me expect less value. I don't really know why that is - it could be that I'm just bad at getting value out of discussions, or at updating my views, or something like that.

Comment by Max_Daniel on EA Infrastructure Fund: Ask us anything! · 2021-06-07T20:45:49.155Z · EA · GW

My very off-the-cuff thoughts are:

  • If it seems like you are in an especially good position to assess that org, you should give to them directly. This could, e.g., be the case if you happened to know the org's founders especially well, or if you had rare subject-matter expertise relevant to assessing that org.
  • If not, you should give to a donor lottery.
  • If you win the donor lottery, you would probably benefit from coordinating with EA Funds. Literally giving the donor lottery winnings to EA Funds would be a solid baseline, but I would hope that many people can 'beat' that baseline, especially if they get the most valuable inputs from 1-10 person-hours of fund manager time.
  • Generally, I doubt it's a good use of donors' and fund managers' time to coordinate on $1,000 donations (except in rare and obvious cases). For a donation of $10,000, some very quick coordination may sometimes be useful - especially if it goes to an early-stage organization. For a $100,000 donation, it starts to look more likely than not that some coordination is helpful (though in many cases the EA Funds answer may still be "we don't really have anything to say, it seems best if you make this decision independently"), but I still don't think explicit coordination should be a strong default or norm.

One underlying and potentially controversial assumption I make is that more variance in funding decisions is good at the margin. This pushes toward more independent funders being good, reducing correlation between the decisions of different funders, etc. - My view on this isn't resilient, and I think I remember that some thoughtful people disagree with that assumption.

Comment by Max_Daniel on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-07T19:25:17.883Z · EA · GW

From an outside view the actual cost of making the grants from the pot of another fund seems incredibly small. At minimum it could just be having someone to look over the end decisions and see if any feel like they belong in a different fund and then quickly double checking with the other fund's grantmakers that they have no strong objections and then granting the money from a different pot. (You could even do that after the decision to grant has been communicated to applicants, no reason to hold up, if the second fund objects then can still be given by the first fund).

Thank you for this suggestion. It makes sense to me that this is how the situation looks from the outside.

I'll think about the general issue and suggestions like this one a bit more, but currently don't expect large changes to how we operate. I do think this might mean that in future rounds there may be a similar fraction of grants that some donors perceive to better fit with another fund. I acknowledge that this is not ideal, but I currently expect it will seem best after considering the cost and benefits of alternatives.

So please view the following points as me trying to explain why I don't expect to adopt what may sound like a good suggestion, while still being appreciative of the feedback and suggestions.

I think based on my EA Funds experience so far, I'm less optimistic that the cost would be incredibly small. E.g., I would expect less correlation between "EAIF managers think something is good to fund from a longtermist perspective" and "LTFF managers think something is good to fund from a longtermist perspective" (and vice versa for 'meta' grants) than you seem to expect. 

This is because grantmaking decisions in these areas rely a lot on judgment calls that different people might make differently even if they're aligned on broad "EA principles" and other fundamental views. I have this view both because of some cases I've seen where we actually discussed (aspects of) grants across both the EAIF and LTFF managers and because within the EAIF committee large disagreements are not uncommon (and I have no reason to believe that disagreements would be smaller between LTFF and EAIF managers than just within EAIF managers).

To be clear, I would expect decision-relevant disagreements for a minority of grants - but not a sufficiently clear minority that I'd be comfortable acting on "the other fund is going to make this grant" as a default assumption.

Your suggestion of retaining the option to make the grant through the 'original' fund would help with this, but not with the two following points. 

I think another issue is duplication of time cost. If the LTFF told me "here is a grant we want to make, but we think it fits better for the EAIF - can you fund it?", then I would basically always want to have a look at it. In maybe 50% [?, unsure] of cases this would only take me like 10 minutes, though the real attention + time cost would be higher. In the other 50% of cases I would want to invest at least another hour - and sometimes significantly more - assessing the grant myself. E.g., I might want to talk to the grantee myself or solicit additional references. This is because I expect that donors and grantees would hold me accountable for that decision, and I'd feel uncomfortable saying "I don't really have an independent opinion on this grant, we just made it b/c it was recommended by the LTFF".

(In general, I worry that "quickly double-checking" something is close to impossible between two groups of 4 or so people, all of whom are very opinionated and can't necessarily predict each other's views very well, are in parallel juggling dozens of grant assessments, and most of whom are very time-constrained and are doing all of this next to their main jobs.)

A third issue is that increasing the delay between the time of a grant application and the time of a grant payout is somewhat costly. So, e.g., inserting another 'review allocation of grants to funds' step somewhere would somewhat help with the time & attention cost by bundling all scoping decisions together; but it would also mean a delay of potentially a few days or even more given fund managers' constrained availabilities. This is not clearly prohibitive, but significant since I think that some grantees care about the time window between application and potential payments being short.

However, there may be some cases where grants could be quickly transferred (e.g., if for some reason managers from different funds had been involved in a discussion anyway), or there may be other, less costly processes for how to organize transfers. This is definitely something I will be paying a bit more attention to going forward, but for the reasons explained in this and other comments I currently don't expect significant changes to how we operate.

Comment by Max_Daniel on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-07T18:53:46.257Z · EA · GW

(FWIW, I personally love Reasons and Persons but I think it's much more "not for everyone" than most of the other books Jiri mentioned. It's just too dry, detailed, abstract, and has too small a density of immediately action-relevant content.

I do think it could make sense as a 'second book' for people who like that kind of philosophy content and know what they're getting into.)

Comment by Max_Daniel on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-07T18:41:57.836Z · EA · GW

If you showed me the list here and said 'Which EA Fund should fund each of these?' I would have put the Lohmar and the CLTR grants (which both look like v good grants and glad they are getting funded) in the longtermist fund. Based on your comments above  you might have made the same call as well.

Thank you for sharing - as I mentioned I find this concrete feedback spelled out in terms of particular grants particularly useful.

[ETA: btw I do think part of the issue here is an "object-level" disagreement about where the grants best fit - personally, I definitely see why among the grants we've made they are among the ones that seem 'closest' to the LTFF's scope; but I don't personally view them as clearly being more in scope for the LTFF than for the EAIF.]

Comment by Max_Daniel on EA Infrastructure Fund: Ask us anything! · 2021-06-06T18:35:31.600Z · EA · GW

OK, on a second thought I think this argument doesn't work because it's basically double-counting: the reason why returns might not diminish much faster than logarithmic may be precisely that new, 'crazy' opportunities become available.

Comment by Max_Daniel on Buck's Shortform · 2021-06-06T18:28:00.246Z · EA · GW

I don't think it's crazy at all. I think this sounds pretty good.

Comment by Max_Daniel on EA Infrastructure Fund: Ask us anything! · 2021-06-06T18:23:38.993Z · EA · GW

Hmm. Then I'm not sure I agree. When I think of prototypical example scenarios of "business as usual but with more total capital" I kind of agree that they seem less valuable than +20%. But on the other hand, I feel like if I tried to come up with some first-principle-based 'utility function' I'd be surprised if it had returns that diminish much more strongly than logarithmically. (That's at least my initial intuition - not sure I could justify it.) And if it were logarithmic, going from $10B to $100B should add about as much value as going from $1B to $10B, and I feel like the former adds clearly more than 20%.
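(A quick arithmetic check of the logarithmic intuition - treating u(C) = log C as the assumed utility function, which is my gloss rather than anything established:)

```python
import math

# Capital levels in dollars; log utility is the assumption being checked.
u = math.log
print(u(10e9) - u(1e9))    # utility gained going from $1B  to $10B
print(u(100e9) - u(10e9))  # utility gained going from $10B to $100B
# Both increments equal log(10) ≈ 2.303: under log utility every tenfold
# increase adds the same amount, which is why the $10B -> $100B step looks
# comparable in value to the $1B -> $10B step.
```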

(I guess there is also the question what exactly we're assuming. E.g., should the fact that this additional $100B donor appears also make me more optimistic about the growth and ceiling of total longtermist-aligned capital going forward? If not, i.e. if I should compare the additional $100B to the net present expected value of all longtermist capital that will ever appear, then I'm much more inclined to agree with "business as usual + this extra capital adds much less than 20%". In this latter case, getting the $100B now might simply compress the period of growth of longtermist capital from a few years or decades to a second, or something like that.)

Comment by Max_Daniel on EA Infrastructure Fund: Ask us anything! · 2021-06-06T18:13:44.175Z · EA · GW

I think we roughly agree on the direct effect of fundraising orgs, promoting effective giving, etc., from a longtermist perspective.

However, I suspect I'm (perhaps significantly) more optimistic than you about 'indirect' effects from promoting good content and advice on effective giving, promoting it as a 'social norm', etc. This is roughly because of the view I state under the first key uncertainty here, i.e., I suspect that encountering effective giving can for some people be a 'gateway' toward more impactful behaviors.

One issue is that I think the sign and absolute value of these indirect effects are not that well correlated with the proxy goals such organizations would optimize, e.g., amount of money raised. For example, I'd guess it's much better for these indirect effects if the org is also impressive intellectually or entrepreneurially; if it produces "evangelists" rather than just people who'll start giving 1% as a 'hobby', are quiet about it, and otherwise don't think much about it; if it engages in higher-bandwidth interactions with some of its audience; and if, in its communications, it at least sometimes mentions other potentially impactful behaviors.

So, e.g., GiveWell by these lights looks much better than REG, which in turns looks much better than, say, buying Facebook ads for AMF.

(I'm also quite uncertain about all of this. E.g., I wouldn't be shocked if after significant additional consideration I ended up thinking that the indirect effects of promoting effective giving - even in a 'good' way - were significantly net negative.)

Comment by Max_Daniel on EA Infrastructure Fund: Ask us anything! · 2021-06-06T17:44:40.594Z · EA · GW

I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.

At first glance the 20% figure sounded about right to me. However, when thinking a bit more about it, I'm worried that (at least in my case) this is too anchored on imagining "business as usual, but with more total capital". I'm wondering if most of the expected value of an additional $100B - especially when controlled by a single donor who can flexibly deploy them - comes from 'crazy' and somewhat unlikely-to-pan-out options. I.e., things like:

  • Building an "EA city" somewhere
  • Buying a majority of shares of some AI company (or of relevant hardware companies)
  • Being able to spend tens of billions of $ on compute, at a time when few other actors are willing to do so
  • Buying the New York Times
  • Being among the first actors settling Mars

(Tbc, I think most of these things would be kind of dumb or impossible as stated, and maybe a "realistic" additional donor wouldn't be open to such things. I'm just gesturing at the rough shape of things which I suspect might contain a lot of the expected value.)

Comment by Max_Daniel on How well did EA-funded biorisk organisations do on Covid? · 2021-06-06T16:36:20.157Z · EA · GW

I don’t think this should be seen as evidence that these organisations did badly (maybe a bit that they were over-confident) but that this was a very difficult situation to do things well in.

I somewhat agree, but I think this point becomes much weaker if, at the same time these organizations were giving poor advice, some amateurs in the EA and rationality communities had already arrived at better conclusions, would have given better advice, etc.

I didn't follow the relevant conversations closely enough to have much of an inside view on how strongly the latter is true, but my impression is that many people in the EA/rationality communities (including ones who did follow the conversations more closely) think it's true. Even I am aware of some data points that seem to suggest such a conclusion (e.g., some conversations I remember).

FWIW, at least given what I know, I find this less compelling as a 'vindication of EA/rationalist epistemics' than some other people seem to - I think the lessons we should learn depend on a number of additional things which I'm currently uncertain about:

  • Are we comparing like with like rather than cherrypicking? (I.e., comparing anecdotes of flawed advice from 'expert organizations' to anecdotes of ex-post correct advice from EAs/rationalists?)
    • Ideally I'd want to know about the full distribution of views among 'expert organizations' and the full distribution of views among 'amateur EAs/rationalists who spent time looking into COVID things', and then compare those.
  • What about some other relevant groups? E.g., how "well" did pharma/vaccine companies do? What about academics developing tests? It does seem like at least some of these groups started taking the novel coronavirus pretty seriously in January already.
  • What should we think about the 'external validity' of EA/rationalist successes? Some people did well in predicting how COVID would play out, what recommendations are good, etc., in the environmental condition of "having significant spare time and being able to post and discuss views with like-minded people without facing a lot of other constraints". Would they still have done well if they had been in the environmental condition "expert making a public statement that needs to make sense to the general public" plus whatever other incentives these experts & organizations were subject to?
    • (The above wording might make it sound like I think it was "easier" for EAs/rationalists to arrive at correct conclusions or give good advice when making the kinds of statements we would hear about. However, in fact, I'm unsure about the "net effect" of incentives and other environmental conditions. Similarly, I don't mean to suggest that 'external validity' is necessarily poor, just that it seems worth thinking about before drawing strong conclusions.)

However, no matter the answers to these questions, your claim to me still sounds too generous to these organizations.

Comment by Max_Daniel on Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare · 2021-06-06T13:07:22.494Z · EA · GW

To be clear again, the specific question this analysis address is not "is it ethical to eat meat and then pay offsets". The question is "assuming you pay for offsets, is it better to eat chicken or beef?"

(FWIW, this might be worth emphasizing more prominently. When I first read this post and the landing page, it took me a while to understand what question you were addressing.)

Comment by Max_Daniel on My current impressions on career choice for longtermists · 2021-06-06T12:27:47.838Z · EA · GW

I think you probably mean in relation to types of work, activity, organisation, mindsets, aptitudes, etc., and not in relation to what cause areas or interventions you're focusing on, right? 

Basically yes. But I also think (and I understand Holden to say similar things in the OP) that "what cause area is most important" is perhaps less relevant for career choice, especially early in one's career, than some people (and 80k advice [ETA: though this is more my vague impression of what people, including me, perceive 80k advice to say, which might be quite different from what current 80k advice literally says if you engage a lot with their content]) think.

Comment by Max_Daniel on Progress studies vs. longtermist EA: some differences · 2021-06-06T12:24:11.219Z · EA · GW

I hope to have time to read your comment and reply in more detail later, but for now just one quick point because I realize my previous comment was unclear:

I am actually sympathetic to an "'egoistic', agent-relative, or otherwise nonconsequentialist perspective". I think overall my actions are basically controlled by some kind of bargain/compromise between such a perspective (or perhaps perspectives) and impartial consequentialism.

The point is just that, from within these other perspectives, I happen to not be that interested in "impartially maximizing value over the next few hundred years". I endorse helping my friends, and maybe I endorse volunteering in a soup kitchen or something like that; I also endorse being vegetarian or donating to AMF, or otherwise reducing global poverty and inequality (and yes, within these 'causes' I tend to prefer larger over smaller effects); I also endorse reducing far-future s-risks and current wild animal suffering, but not quite as much. But all of this is guided more by responding to reactive attitudes like resentment and indignation than by any moral theory. It looks a lot like moral particularism, and so it's somewhat hard to move me with arguments in that domain (it's not impossible, but it would require something that's more similar to psychotherapy or raising a child or "things the humanities do" than to doing analytic philosophy).

So this roughly means that if you wanted to convince me to do X, then you either need to be "lucky" that X is among the things I happen to like for idiosyncratic reasons - or X needs to look like a priority from an impartially consequentialist outlook.

Comment by Max_Daniel on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-06T11:56:16.270Z · EA · GW

Thanks for sharing your intuition, which of course moves me toward thinking that preferences for less or no overlap are common.

I'm probably even more moved by your comparison to finance because I think it's a better analogy to EA Funds than the analogies I used in my previous comments.

However, I still maintain that there is no strong reason to think that zero overlap is optimal in some sense, or would widely be preferred. I think the situation is roughly:

  • There are first-principles arguments (e.g., your 'convex hull' argument) for why, under certain assumptions, zero overlap allows for optimal satisfaction of donor preferences. (A toy numerical sketch of this point appears after this list.)
    • (Though note that, due to standard arguments for why, at least at first glance and under 'naive' assumptions, splitting small donations is suboptimal, I think it's at least somewhat unclear how significant the 'convex hull' point is in practice. I think there is some tension here: the loss of the extremal points seems most problematic from a 'maximizing' perspective, while donor preferences to split their giving across causes are better construed as the result of "intra-personal bargaining", and it's less clear to me how much that decision/allocation process cares about the 'efficiency loss' from moving away from the extremal points.)
  • However, reality is more messy, and I would guess that usually the optimum is somewhere on the spectrum between zero and full overlap, and that this differs significantly on a case-by-case basis. There are things pushing toward zero overlap, and others pushing toward more overlap (see e.g. the examples given for EA Funds below), and they need to be weighed up. It depends on things like transaction costs, principal-agent problems, the shape of market participants' utility functions, etc.
  • Here are some reasons that might push toward more overlap for EA Funds:
    • Efficiency, transaction/communication cost, etc., as mentioned by Jonas.
    • My view is that 'zero overlap' just fails to carve reality at its joints, and significantly so.
      • I think there will be grants that seem very valuable from, e.g., both a 'meta' and a 'global health' perspective, and that it would be a judgment call whether the grant fits 'better' with the scope of the GHDF or the EAIF. Examples might be pre-cause-neutral GWWC, a fundraising org covering multiple causes but de facto generating 90% of its donations in global health, or an organization that does research on both meta and global health but doesn't want to apply for 'restricted' grants.
      • If funders adopted a 'zero overlap' policy, grantees might worry that they will only be assessed along one dimension of their impact. So, e.g., an organization that does research on several causes might feel incentivized to split up, or to apply for 'restricted' grants. However, this can incur efficiency losses because sometimes it would in fact be better to have less internal separation between activities in different causes than such a funding landscape would require.
    • More generally, it seems to me that incomplete contracting is everywhere.
      • If I as a donor made an ex-ante decision that I want my donations to go to cause X but not Y, I think there realistically would be 'borderline cases' I simply did not anticipate when making that decision. Even if I wanted to, I probably could not tell EA Funds which things I do and don't want to give to in terms of fund scopes, and neither could EA Funds elicit such a fine-grained preference from me if they asked.
      • Similarly, when EA Funds provides funding to a grantee, we cannot anticipate all the concrete activities the grantee might want to undertake. The conditions implied by the grant application and any restrictions attached to the grant just aren't fine-grained enough. This is particularly acute for grants that support someone’s career – which might ultimately go in a different direction than anticipated. More broadly, a grantee will sometimes realize they want to fund activities that neither of us had previously thought about in terms of whether they're covered by the 'intentions' or 'spirit' of the grant, and these can include activities that would more clearly be in another fund's scope.
    • To drive home how strongly I feel about the import of the previous points: my immediate reaction to hearing "care about EA Meta but not Longtermist things" is literally "I have no idea what that's supposed to even mean". When I think a bit about it, I can come up with a somewhat coherent and sensible-seeming scope of "longtermist but not meta", but I have a harder time making sense of "meta but not longtermist" as a reasonable scope. I think if donors wanted everything that's longtermist (whether meta or not) to be handled by the LTFF, then we should clarify the LTFF's scope, remove the EAIF, and introduce a "non-longtermist EA fund" or something like that instead - as opposed to having an EAIF that funds things that overlap with some object-level cause areas but not others.
    • Some concrete examples:
      • Is 80k meta or longtermist? They have been funded by the EAIF before, but my understanding is that their organizational position is pro-longtermism, that many if not most of their staff are longtermist, and that this has significant implications for what they do (e.g., which sorts of people to advise, which sorts of career profiles to write, etc.).
      • What about Animal Advocacy Careers? If they wanted funding from EA Funds, should they get it from the AWF or the EAIF?
      • What about local EA groups? Do we have to review their activities and materials to understand which fund they should be funded by? E.g., I've heard that EA NYC is unusually focused on animal welfare (idk how strongly, and if this is still true), and I'm aware of other groups that seem pretty longtermist. Should such groups then not be funded by the EAIF? Should groups with activities in several cause areas and worldviews be co-funded by three or more funds, creating significant overhead?
      • What about CFAR? Longtermist? Meta?
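
To make the 'convex hull' point referenced at the top of this list a bit more concrete, here is a minimal numerical sketch. The fund names and allocation fractions are entirely hypothetical, chosen only to illustrate the geometry, not to describe any actual fund:

```python
# A toy sketch of the 'convex hull' point about fund overlap.
# All allocation numbers are made up for illustration; they don't
# describe any actual fund.
import numpy as np

# Each row is a fund; columns are the fraction of that fund's grants
# going to [Meta, Longtermism].
zero_overlap_funds = np.array([[1.0, 0.0],   # hypothetical "pure meta" fund
                               [0.0, 1.0]])  # hypothetical "pure longtermist" fund
overlapping_funds = np.array([[0.7, 0.3],    # hypothetical mostly-meta fund
                              [0.2, 0.8]])   # hypothetical mostly-longtermist fund

def donor_allocation(funds, weights):
    """Cause-level allocation a donor ends up with after splitting a
    donation across funds with the given weights (which sum to 1)."""
    return weights @ funds

# With zero overlap, a donor who wants 100% meta can get exactly that:
print(donor_allocation(zero_overlap_funds, np.array([1.0, 0.0])))  # [1. 0.]

# With overlapping funds, no split achieves more than 70% meta
# (or more than 80% longtermism); the extremal allocations are lost:
for w in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(w, donor_allocation(overlapping_funds, np.array([w, 1.0 - w])))
```

The only point of the sketch is that mixing funds reaches exactly the convex combinations of the funds' internal allocations, so overlap shrinks the set of cause-level allocations a donor can reach; whether that loss matters much in practice is the separate question raised above.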

--

Taking a step back, I think what this highlights is that feedback like this comment may well move me toward "being willing to incur a bit more communication cost to discuss where a grant fits best, and to move grants that arguably fit somewhat better with a different fund". But (i) I think where I'd end up would still be a far cry from 'zero overlap', and (ii) I think that even if I made a good-faith effort, it's unclear whether I would better fulfil any particular donor's preferences because, due to the "fund scopes don't carve reality at its joints" point, donors and I might make different judgment calls on 'where some grant fits best'.

In addition, I expect that different donors would disagree with each other about how to delineate scopes, which grants fit best where, etc.

This also means it would probably help me more in satisfying donor preferences to get specific feedback like "I feel grant X would have fitted better with fund Y", as opposed to more abstract preferences about the amount of overlap in fund scopes. (Though I recognize that I'm kind of guilty of having started/fueled the discussion in more abstract terms.)

However, taking yet another step back, when deciding on the best strategy for EA Funds/the EAIF going forward, I think there are stakeholders besides the donors whose interests matter as well: e.g., grantees, fund managers, and beneficiaries. As implied by some of my points above, I think there can be some tension between these interests. How to navigate this is messy, and depends crucially on the answer to this question, among other things.

My impression is that when the goal is to “maximize impact” – even within a certain cause or by the lights of a certain worldview – we’re less bottlenecked by funding than by high-quality applications, highly capable people ‘matched’ with highly valuable projects they’re a good fit for, etc. This makes me suspect that the optimal strategy would put somewhat less weight on maximally satisfying donor preferences – when they’re in tension with other desiderata – than might be the case in some other nonprofit contexts. So even if we got a lot of feedback along the lines of “I feel grant X would have fitted better with fund Y”, I’m not sure how much that would move the EAIF’s strategy going forward.

(Note that the above is about what ‘products’ to offer donors going forward. Separately from that, I think it’s of course very important not to be misleading, and to make a good-faith effort to use past donations in a way that is consistent with what we told donors we’d do at the time. And these demands are ‘quasi-deontological’ and can’t easily be sacrificed for the sake of better meeting other stakeholders’ interests.)

Comment by Max_Daniel on EA Infrastructure Fund: Ask us anything! · 2021-06-06T11:37:19.581Z · EA · GW

(Ah yeah, good point. I agree that the "even though" is a bit off because of the things you say.)