Open Philanthropy: Our Progress in 2019 and Plans for 2020

post by Aaron Gertler (aarongertler) · 2020-05-12T11:49:40.509Z · score: 42 (19 votes) · EA · GW · 3 comments

This is a link post for https://www.openphilanthropy.org/blog/our-progress-2019-and-plans-2020

Contents

  Progress in 2019
  Continued grantmaking
  Operations
  Impact evaluation
  Worldview investigations
  Other cause prioritization work
  Hiring and other capacity building
  Outreach to external donors
  Plans for 2020

I'm not affiliated with Open Phil; I'm just cross-posting this because no one else has done so yet.

This post compares our progress with the goals we set forth a year ago, and lays out our plans for the coming year.

In brief:

 

Progress in 2019

Last year’s post laid out plans for 2019. This section quotes from that post to allow comparisons between our plans and our progress.

 

Continued grantmaking

Last year, we wrote:

We expect to continue grantmaking in potential risks of advanced AI, biosecurity and pandemic preparedness, criminal justice reform, farm animal welfare, and scientific research and effective altruism. We expect that the total across these areas will be well over $100 million.

We hit our goal of giving well over $100 million across these six programs, and our total giving recommendations (including recommendations to support GiveWell’s top charities) were over $200 million. Some highlights:

We also wrote:

By default, we plan to continue with our relatively low level of focus and resource deployment in other areas (e.g., macroeconomic stabilization policy).

Other grants included the Center for Global Development (Global Health and Development), California YIMBY (Land Use Reform), the International Refugee Assistance Project (Immigration Policy), Employ America (Macroeconomic Stabilization Policy), and the Center for Election Science (other).

 

Operations

We’ve significantly expanded our Operations team, hiring Rinad Al-Anakrih, Povneet Dhillon, Leena Jones, Kira Maker, Eli Nathan, and Matthew Poe over the last year.

This expansion has been needed: our grants team now manages a significant grant volume — 311 grants in 2019, with a median of 13 days between grant recommendation and payment — and Open Philanthropy now numbers over 40 people. In addition to building and strengthening our culture and systems, the Operations team has made it easier for us to conduct events such as a retreat for our AI Fellows, has helped us build more robust recruiting processes, has greatly improved our office space, and more.

(Unlike some of the functions discussed below, Operations is a familiar function that doesn’t require much explanation; the relatively brief length of this section shouldn’t be taken as indicating lower importance.)

 

Impact evaluation

Last year, we wrote:

Our next step on self-evaluation is to build an internal function — which we’re currently calling impact evaluation — that can provide some degree of independent assessment of these portfolio reviews, and of our overall impact in a given area. We expect that it could take substantial time and experimentation before we develop an impact evaluation process that we’re happy with … We don’t have definite, dated goals for this work yet, as it’s at an early stage, but we hope that by 2020 we will have (a) a much better read on our impact for at least 1-2 grant portfolios to date; (b) a plan for beginning to scale the impact evaluation team and process.

We’ve now completed one major case study, and have several smaller writeups in progress, for cases where we think our funding has plausibly led to significant impact. These are internal writeups; in many cases the content is based on frank conversations with grantees and others in the fields where we work, which makes it unsuitable for publication.

We feel that we are gaining clarity on how our grantmaking has performed in causes such as criminal justice and farm animal welfare (where our giving is relatively mature and seeks relatively near-term results), but we haven’t yet developed a robust, repeatable process for investigating potential cases for impact. Over the coming year, we hope to get to the point where our process is robust enough that we’re comfortable starting to hire further people for the Impact Evaluation team (this means we would have a job description ready, not necessarily that we would have made hires yet).

 

Worldview investigations

Last year, we wrote:

In 2019, we will be building out a function tentatively called “worldview investigations,” which will be a major priority for new Research Analyst hires. This function will aim to:

  • Identify debatable views we hold that play a key role in our cause prioritization, such as the view that there’s a nontrivial likelihood of transformative artificial intelligence being developed by 2036.
  • Put concentrated effort into examining the arguments for and against these views.
  • Create resources covering the arguments for and against these views as we see them. We have not yet decided what form these resources should take. Our best guess is that they will include Open Phil write-ups with strong reasoning transparency, but they may also include or instead be reports produced by contractors/grantees, recorded conversations covering the arguments for and against these views as we see them, summaries of such conversations, or something else. The goal of these resources will be both to make our own picture more precise and to make it easier for outsiders to understand and critique it, which in turn will hopefully raise the odds that we are able to subject key cause-prioritization-driving views to maximal critical scrutiny. (This could have major benefits whether or not the views withstand such scrutiny; we’d consider it a major benefit if we either changed our minds or caused people who currently disagree to change theirs.)

We expect that it could take substantial time and experimentation before we develop an approach that we’re happy with for worldview investigations … As with impact evaluation, this work is at an early stage and does not yet have definite dated goals, but we hope that by 2020 we will have (a) fairly thorough writeups (not necessarily public-ready) on at least 1-2 beliefs that are key to our cause prioritization; (b) a plan for beginning to scale the worldview investigations team and process.

This work has been significantly more challenging than expected (and we expected it to be challenging). Most of the key questions we’ve chosen to investigate and write about are wide-ranging questions that draw on a number of different fields, while not matching the focus and methodology of any one field. They therefore require the relevant Research Analyst to try to get up to speed on multiple substantial literatures, while realizing they will never be truly expert in any of them; to spend a lot of time getting feedback from experts in relevant domains; and to make constant difficult judgment calls about which sub-questions to investigate thoroughly vs. relatively superficially. These basic dynamics are visible in our report on moral patienthood, the closest thing we have to a completed, public worldview investigation writeup.

We initially started investigating a number of questions relevant to potential risks from advanced AI, but as we revised our expectations for how long each investigation might take, we came to focus the team exclusively on the question of whether there’s a strong case for a reasonable likelihood of transformative AI being developed within the next couple of decades.

We now have three in-process writeups covering different aspects of this topic; all are in relatively late stages and could be finished (though still not necessarily public-ready) within months. We have made relatively modest progress on being able to scale the team and process; our assignments are better-scoped than they were a year ago, and we’ve added one new hire (Tom Davidson) focused on this work, but we still consider this a very hard area to hire for.

 

Other cause prioritization work

Last year, we wrote:

We see our work on impact evaluation and worldview investigations as providing key inputs into our cause prioritization. We don’t plan on doing much other cause prioritization work in 2019, and for the time being we are likely to avoid major growth in our total giving.

Our picture on this front has evolved:

 

Hiring and other capacity building

Last year, we wrote:

We are in the midst of another round of hiring for our Research Analyst roles, though this round has not been publicly advertised and we aren’t currently taking new applications. Unlike last year, when we took many people on for simultaneous trials, we will probably instead trial a much smaller number of RA applicants per year, with each trial period more customized to each trialist.

We hired only one new Research Analyst in the past year rather than a full round of trialists like we did in 2018. We also hired two Research Fellows and a number of Operations staff (see previous sections), as well as a new Communications Associate, Gabriela Romero.

We highlighted our new hires in this blog post.

 

Outreach to external donors

Last year, we wrote:

Outreach to external donors will remain a relatively low priority for the organization as a whole, though it may be a higher priority for particular staff.

In November, we announced a co-funding partnership with Ben Delo, co-founder of the cryptocurrency trading platform BitMEX and a recent Giving Pledge signatory. Ben will be providing funds (initially in the $5 million per year range as he gets started with his giving) for Open Philanthropy to allocate to our long-termist grantmaking, which assesses giving opportunities by how well they advance favorable long-term outcomes for civilization. This partnership grew out of Ben’s work with the non-profit Effective Giving UK.

Close partnerships of this type have so far been rare for Open Philanthropy, and pursuing them is still not currently a major organizational priority. However, we aspire to eventually work with many donors in order to maximize our impact. We want to be flexible in terms of relationship structures, and can imagine a variety of different forms.

Additionally, as discussed previously, we have continued to work significantly with other donors interested in particularly mature focus areas where our Program Officers see promising giving opportunities that outstrip their budgets (especially criminal justice reform and farm animal welfare).

 

Plans for 2020

Our major goals for 2020 are as follows:

Continued grantmaking. We expect to continue grantmaking in potential risks of advanced AI, biosecurity and pandemic preparedness, criminal justice reform, farm animal welfare, scientific research and effective altruism, as well as recommending support of GiveWell’s top charities. We expect that the total across these areas will be over $200 million. By default, we plan to continue with our relatively low level of focus and resource deployment in other areas (e.g., macroeconomic stabilization policy).

Impact evaluation. Over the coming year, we hope to get to the point where our process is robust enough that we’re comfortable starting to hire further people for the Impact Evaluation team (this means we would have a job description ready, not necessarily that we would have made hires yet).

Worldview investigations. We expect to continue to build out our worldview investigations function in 2020, as discussed above. This work is at an early stage and does not yet have definite dated goals, but we hope that this year we will finalize the three draft reports mentioned above.

Other cause prioritization work. We now have a team investigating our odds of finding a significant number of giving opportunities in the “near-termist” bucket that are stronger than GiveWell’s top charities, which in turn will help determine what new causes we want to enter and what our annual rate of giving should be on the “near-termist” side. By this time next year, we hope to have a working model (though subject to heavy revision) of how much we intend to give each year in this category to GiveWell’s top charities and other “near-termist” causes.

Hiring and other capacity building will not be a major focus for the coming year, though we will open searches for new roles as needed.

Outreach to external donors will remain a relatively low priority for the organization as a whole, though it may be a higher priority for particular staff.

3 comments

Comments sorted by top scores.

comment by RyanCarey · 2020-05-12T18:20:22.320Z · score: 43 (18 votes) · EA(p) · GW(p)
  • Here's an updated ipynb with OpenPhil's annual spending, showing the breakdown with respect to EA-relevant areas.

My main impressions:

  • Having Ben Delo's participation is great.
  • OpenPhil and its staff working hard on allocating these funds is absolutely great (it's obvious, yet worth saying over and over again.)
  • It would be nice to see more new kinds of grants (to longtermist causes) by EA, via OpenPhil and otherwise. The kinds of grants are relatively stagnant over the last few years. e.g. the typical x-risk grant is a few million to an academic research group. Can we also fund more interventions, or projects in other sectors?
  • The AI OpenPhil Scholarships place substantial weight on the excellence of applicants' supervision, institutional affiliation and publication record. But there seems to be very little weight on the relevance of work done - I've only come across a few papers by any of the 2018-2020 applicants through my work on various aspects of AI x-risk. I've heard many people better-informed than me argue that this is likely to be relatively unproductive, in the sense that excellent researchers working in unrelated areas will tend to accept funding without substantially shifting their research direction. I'm as excited about academic excellence as almost anyone in AI safety, yet in the case of the OpenPhil Scholarships, this assessment sounds about right to me, and I haven't really heard anyone arguing the opposing view - it would be interesting to understand this thinking better.
comment by catherio · 2020-05-14T09:56:15.771Z · score: 6 (6 votes) · EA(p) · GW(p)

Hi Ryan - in terms of the Fellowship, I have a lot of thoughts about what we're trying to do, which feel better suited to "musing, with uncertainty" than "writing an internet comment", so let me know if you want to call/chat about it some time? But the short answer is I think the key pieces to keep in mind are to view the fellowship as 1) a community, not just individual scholarships handed out, and as such also 2) a multi-year project, built slowly.

comment by RyanCarey · 2020-05-14T20:22:29.621Z · score: 12 (7 votes) · EA(p) · GW(p)

Hey Catherio, sure, I've been puzzled by this for long enough that I'll probably reach out for a call.

Community effects could still be mediated by the relevance of participants' research interests. Anyway, I'm also pretty uncertain and interested to see the results as they come in over the coming years.