I re-analyzed OpenPhil's grants data just now (link here), and I noticed that across all your grants since Jan. 1, 2020, scientific research is the focus area that has received the largest amount, at $66M, or 27% of total giving since then, closely beating out global health and development (chart shown below).
OpenPhil also gave an average of $48M per year from 2017-2020 to scientific research. I'm surprised by this - I knew OpenPhil gave some funding to scientific research, but I didn't know it had become the largest cause area OpenPhil grants to.
Was there something that happened that made OpenPhil decide to grant more in this area?
Scientific research isn't a cause area heavily associated with EA currently - 80K doesn't really feature scientific research in their content or as a priority path, other than for doing technical AI safety research or biorisk research. Also, EA groups and CEA don't feature scientific research as one of EA's main causes - the focus still tends to be on global health and development, animal welfare, longtermism, and movement building / meta. (I guess scientific research is cross-cutting across these areas, but I still don't see a lot of focus on it). Do you think more EAs should be looking into careers in scientific research? Why or why not?
A follow-up question: What would this chart look like if all the opportunities you want to fund existed? In other words, to what extent does the breakdown of funding shown here capture Open Phil’s views on cause prioritization vs. reflect limiting factors such as the availability of high-quality funding opportunities, and what would it look like if there were no such limiting factors?
I haven't done a systematic analysis, but at a quick glance I'd note that quite a number of the grants in scientific research seem like their outputs would directly support main EA cause areas such as biorisk and global health - e.g. in the last 1-2 years I see a number of grants on malaria prevention, vaccine development, antivirals, disease diagnostics, etc.
(Uh, I just interacted with you elsewhere, but this is unrelated.)
I think you are interpreting Open Phil's giving to "Scientific research" to mean it is a distinct cause priority, separate from the others.
For example, you say:
... EA groups and CEA don't feature scientific research as one of EA's main causes - the focus still tends to be on global health and development, animal welfare, longtermism, and movement building / meta
To be clear, in this interpretation, someone looking for an altruistic career could go into "scientific research" and make an impact distinct from "Global Health and Development" and other "regular" cause areas.
However, instead, is it possible that "scientific research" mainly just supports Open Philanthropy's various "regular" causes?
For example, a malaria research grant is categorized under "Scientific Research", but for all intents and purposes is in the area of "Global Health and Development".
So under this interpretation, funding falls under "Scientific Research" sort of as an accounting thing, not because it is a distinct cause area.
In support of this interpretation, taking a quick look at the recent grants for "Scientific Research" (on March 18, 2021) shows that most are plausibly in support of "regular" cause areas:
Similarly, sorted by largest amount of grant, the top grants seem to be in the areas of "Global Health", and "Biosecurity".
Your question does highlight the importance of scientific research in Open Philanthropy.
Somewhat of a digression (but interesting) are secondary questions:
Theories of change related, e.g. questions about institutions, credibility, knowledge, power and politics in R1 academia, and how could these be edited or improved by sustained EA-like funding.
There is also the presence of COVID-19 related projects. If we wanted to press, maybe unduly, we could express skepticism of these grants. This is an area that is immensely less neglected and smaller in scale (?)—many more people will die of hunger or poor sanitation in Africa, even just indirectly from the effects of COVID-19, than from the virus itself. The reason why this is undue is that I could see why people sitting on a board donating a large amount of money would not want to sit out a global crisis in a time of great uncertainty.
Hey Charles, yeah Sean_o_h made a similar comment. I now see that a lot of the scientific research grants are still targeted towards global health and development or biosecurity and pandemic preparedness.
Nevertheless, I think my questions still stand - I'd still love to hear how OpenPhil decided to grant more towards scientific research, especially for global health and development. I'm also curious if there are already any "big wins" among these scientific research grants.
I also think it's worth asking him "Do you think more EAs should be looking into careers in scientific research? Why or why not?". I think only a few EA groups have discussion groups about scientific research or improving science, so I guess a related question would be if he thinks that there should be more reading groups / discussion groups on scientific research or improving science, in order to increase the number of EAs interested in scientific research as a career.
These seem like great points, and of course, your questions stand.
I wanted to say that most R1 research is problematic for new grads: this is because of the difficulty of success, low career capital, and frankly "impact" can also be dubious. It is also hard to get started. It typically requires a PhD and post-doc(s), all poorly paid—contrast with, say, software engineering.
My motivation for writing the above is for others, akin to the "bycatch [EA · GW]" article—I don't think you are here to read my opinions.
Thanks for responding thoughtfully and I'm sure you will get an interesting answer from Holden.
Don’t tell me what you think, tell me what you have in your portfolio -- Nassim Taleb
What does your personal investment portfolio look like? Are there any unusual steps you've taken due to your study of the future? What aspect of your approach to personal investment do you think readers might be wise to consider?
How has OpenPhil's Biosecurity and Pandemic Preparedness strategy changed in light of how the COVID-19 pandemic has unfolded so far? What biosecurity interventions, technologies or research directions seem more (or less) valuable now than they did a year ago?
In addition to funding AI work, Open Phil’s longtermist grantmaking includes sizeable grants toward areas like biosecurity and climate change/engineering, while other major longtermist funders (such as the Long Term Future Fund, BERI, and the Survival and Flourishing Fund) have overwhelmingly supported AI with their grantmaking. As an example, I estimate [EA(p) · GW(p)] that “for every dollar Open Phil has spent on biosecurity, it’s spent ~$1.50 on AI… but for every dollar LTFF has spent on biosecurity, it’s spent ~$19 on AI.”
Do you agree this distinction exists, and if so, are you concerned by it? Are there longtermist funding opportunities outside of AI that you are particularly excited about?
To operate in the broad range of cause areas OpenPhil does, I imagine you need to regularly seek advice from external advisors. I have the impression that cultivating good sources of advice is a strong suit of both yours and OpenPhil's.
I bet you also get approached by less senior folks asking for advice with some frequency.
As advisor and advisee: how can EAs be more effective at seeking and making use of good advice?
What common mistakes have you seen early career EAs make when soliciting advice, eg on career trajectory? When do you see advice make the biggest positive difference in someone’s impact? What changes would you make to how the EA community typically conducts these types of advisor/advisee relationships, if any?
I hear the vague umbrella term “good judgement” or even more simply “thinking well” thrown around a lot in the EA community. Do you have thoughts on how to cultivate good judgement? Did you do anything - deliberately or otherwise - to develop better judgement?
Consider the premise that the current instantiation of Effective Altruism is defective, and one of the only solutions is some action by Open Philanthropy.
By “defective”, I mean:
A. EA struggles to engage even a base of younger “HYPS” and “FAANG”, much less millions of altruistic people with free time and resources. Also, EA seems like it should have more acceptance in the “wider non-profit world” than it has.
B. The precious projects funded or associated with Open Philanthropy and EA often seem to merely "work alongside EA". Some constructs or side effects of EA, such as the current instantiation of Longtermism and “AI Safety”, have negative effects on community development.
Interactions in meetings with senior people in philanthropy indicate low buy-in: For example, in a private, high-trust meeting, a leader mentions skepticism of EA, and when I ask for elaboration, the leader pauses, visibly shifts uncomfortably in the Zoom screen, and begins slowly, “Well, they spend time in rabbit holes…”. While anecdotal, this perhaps hints that more widespread "data" is unavailable due to reluctance (to be clear, fear of offending institutions associated with large amounts of funding).
Elaboration on B:
Consider Longtermism [EA · GW] and AI as either manifestations or intermediate reasons for these issues:
The value of present instantiations of “Longtermism” and “AI” is far more modest than it appears.
This is because they amount to a rephrasing of existing ideas, and their work usually treads inside a specific circle of competence. This means that no matter how stellar, their activities contribute little to resolving the actual issues.
This is not benign because these activities (unintentionally) are allowing backing in of worldviews that encroach upon the culture and execution of EA in other areas and as a whole. It produces “shibboleths” that run into the teeth of EA’s presentation issues. It also takes attention and interest from under-provisioned cause areas that are esoteric and unpopularized.
Aside: This question would benefit from sketches of solutions and sketches of the counterfactual state of EA. But this isn't workable, as this question is already lengthy, may be contentious, and contains flaws. Another aside: causes are not zero-sum, and it is not clear the question contains a criticism of Longtermism or AI as a concern; even stronger criticism can be consistent with, say, ten times current funding.
In your role in setting strategy for Open Philanthropy, will you consider the above premise and the three questions below:
To what degree would you agree with the characterizations above or (maybe unfair to ask) similar criticisms?
What evidence would cause you to change your answer to question #1 (e.g. if you believed EA was defective, what would disprove this in your mind? Or, if you disagreed with the premise, what evidence would be required for you to agree?)
If there is a structural issue in EA, and in theory Open Philanthropy could intervene to remedy it, is there any reason that would prevent intervention? For example, from an entity/governance perspective or from a practical perspective?
What are some of the central feedback loops by which people who are hoping to positively influence the long run future can evaluate their efforts? What are some feedback sources that seem underrated, or at least worth further consideration?
I think your career story is one of the best examples of not needing formal background/study to do well or become well-versed in multiple EA causes, since from a hedge fund background, you started and led GiveWell, and now you oversee important work across multiple causes with Open Philanthropy. What do you think helped you significantly to learn and achieve these things? Did you have a mindset or mantras you repeated to yourself, or a learning process you followed?
How much work is going into working out how quickly ($/year) OpenPhil should be spending its money? And does OpenPhil offer any advice to Good Ventures on the money that is being invested, as it seems like this is a large variable in the total amount of good OpenPhil / Good Ventures will be able to achieve?