Effective Altruism Foundation: Plans for 2019

post by Jonas Vollmer · 2018-12-04

Contents

  Summary
  Table of contents
  About the Effective Altruism Foundation (EAF)
  Plans for 2019
    Research (Foundational Research Institute – FRI)
    Research coordination
    Grantmaking
    Other activities
  Financials
  When does it make sense to support our work?
  Brief review of 2018
    Organizational updates
    Achievements
    Mistakes
    We are interested in your feedback

By Stefan Torges and Jonas Vollmer

Summary

Table of contents

About the Effective Altruism Foundation (EAF)

We conduct and coordinate research on how to do the most good in terms of reducing suffering, and support work that contributes towards this goal. Together with others in the effective altruism community, we want careful ethical reflection to guide the future of our civilization. We currently focus on efforts to reduce the worst risks of astronomical suffering (s-risks) from advanced artificial intelligence. (More about our mission and priorities.)

Plans for 2019

Research (Foundational Research Institute – FRI)

We plan to continue our research in the areas of AI-related decision theory and bargaining (e.g., implied decision theories of different AI architectures), fail-safe measures (e.g., surrogate goals), and macrostrategy. We would like to make progress in the following areas in particular:

We are looking to grow our research team in 2019, so we would be excited to hear from you if you think you might be a good fit!

Research coordination

Academic institutions, the AI industry, and other EA organizations frequently provide excellent environments for research in the areas mentioned above. Since EAF currently cannot provide such an environment, we aim to act as a global research network, promoting regular exchange and coordination among researchers whose work contributes to reducing s-risks.

The value of these activities depends to some extent on how many independent researchers are qualified and motivated to work on questions we would like to see progress on. We will reassess after 6 and 12 months.

Grantmaking

The EAF Fund will support individuals (students, academics, and independent researchers) and organizations (research institutes and charities) to carry out research in the areas of decision theory and bargaining, AI alignment and fail-safe architectures, macrostrategy research, and AI governance. To identify promising funding opportunities, we will expand our research team with a dedicated grantmaking researcher, invest more research hours from existing staff, and try various funding mechanisms (e.g., requests for proposals, prizes, teaching buy-outs, and scholarships).

We plan to grow the amount of available funding by providing high-fidelity philanthropic advice, i.e., formats that allow for sustained engagement (e.g., 1-on-1 advice and workshops), and by investing more time in making our research accessible to non-experts.

We are uncertain how many opportunities there are for enabling the kind of work we would like to see outside of our own organization. Depending on the results, we will expand our efforts in this area further.

Other activities

Financials

When does it make sense to support our work?

Our funding situation has improved a lot compared to previous years. For donors who are on the fence about which cause or organization to support, this is a reason to donate elsewhere this year. However, we rely on a very small number of donors for 80% of our funding, so we are looking to diversify our support base.

If you subscribe to some form of suffering-focused ethics and want to focus on ways to improve the long-term future, we think supporting our work is the best bet for achieving that, as we outline in our donation recommendations.

It may also make sense to support our work if (1) you think suffering risks are particularly neglected in EA given their expected tractability, or (2) you are unusually pessimistic about the quality of the future. We think (1) is the stronger reason.

Note: It is no longer possible to earmark donations for specific projects or purposes within EAF (e.g., REG or FRI). All donations will by default contribute to the entire body of work we have outlined in this post. We might make individual exceptions for large donations.

Would you like to support us? Make a donation.

Brief review of 2018

Organizational updates

Achievements

Mistakes

We are interested in your feedback

If you have any questions or comments, we look forward to hearing from you; you can also send us your critical feedback anonymously. We greatly appreciate any critical thoughts that could help us improve our work.

2 comments


comment by Aaron Gertler (aarongertler) · 2018-12-05

Thanks for a great writeup, Jonas! I really liked the clear layout of the post and the link to provide anonymous feedback.

Questions I had after reading the post:

1. It's clear that EAF does some unique, hard-to-replace work (REG, the Zurich initiative). However, when it comes to EAF's work around research (the planned agenda, the support for researchers), what sets it apart from other research organizations with a focus on the long-term future? What does EAF do in this area that no one else does? (I'd guess it's a combination of geographic location and philosophical focus, but I have a hard time clearly distinguishing the differing priorities and practices of large research orgs.)

2. Regarding your "fundraising" mistakes: Did you learn any lessons in the course of speaking with philanthropists that you'd be willing to share? Was there any systematic difference between conversations that were more vs. less successful?

3. It was good to see EAF research performing well in the Alignment Forum competition. Do you have any other evidence you can share showing how EAF's work has been useful in making progress on core problems, or integrating into the overall X-risk research ecosystem?

(For someone looking to fund research, it can be really hard to tell which organizations are most reliably producing useful work, since one paper might be much more helpful/influential than another in ways that won't be clear for a long time. I don't know if there's any way to demonstrate research quality to non-technical people, and I wouldn't be surprised if that problem was essentially impossible.)

comment by Jonas Vollmer · 2018-12-05

Thanks for the questions!

1. It's clear that EAF does some unique, hard-to-replace work (REG, the Zurich initiative). However, when it comes to EAF's work around research (the planned agenda, the support for researchers), what sets it apart from other research organizations with a focus on the long-term future? What does EAF do in this area that no one else does? (I'd guess it's a combination of geographic location and philosophical focus, but I have a hard time clearly distinguishing the differing priorities and practices of large research orgs.)

I'd say it's just the philosophical focus, not the geographic location. In practice, this comes down to a particular focus on conflict involving AI systems. For more background, see Cause prioritization for downside-focused value systems. Our research agenda will hopefully help make this easier to understand as well.

2. Regarding your "fundraising" mistakes: Did you learn any lessons in the course of speaking with philanthropists that you'd be willing to share? Was there any systematic difference between conversations that were more vs. less successful?

If we could go back, we'd define the relationships more clearly from the beginning by outlining a roadmap with regular check-ins. We'd also focus less on pitching EA and more on explaining how they could use EA to solve their specific problems.

3. It was good to see EAF research performing well in the Alignment Forum competition. Do you have any other evidence you can share showing how EAF's work has been useful in making progress on core problems, or integrating into the overall X-risk research ecosystem?
(For someone looking to fund research, it can be really hard to tell which organizations are most reliably producing useful work, since one paper might be much more helpful/influential than another in ways that won't be clear for a long time. I don't know if there's any way to demonstrate research quality to non-technical people, and I wouldn't be surprised if that problem was essentially impossible.)

In terms of publicly verifiable evidence, Max Daniel's talk on s-risks was received positively on LessWrong, and GPI cited several of our publications in their research agenda. In-person feedback from researchers at other x-risk organizations was usually positive as well.

In terms of critical feedback, others pointed out that the presentation of our research is often too long and broad, and might trigger absurdity heuristics. We've been working to improve our research along these lines, but it'll take some time for this to become publicly visible.