Effective Altruism Foundation: Plans for 2020

post by Jonas Vollmer · 2019-12-23T11:51:56.315Z · score: 80 (35 votes) · EA · GW · 13 comments

Contents

  Summary
  About us
  Background on our strategy
    Strategic goals
  Plans for 2020
    Research
    Research community
    Other activities
    Organizational opportunities and challenges
  Review of 2019
    Research
    Research community
    Organizational updates
    Other activities
    Mistakes and lessons learned
  Financials
  How to contribute
    Recommendation for donors
    We are interested in your feedback
  Acknowledgments

Summary

About us

We are building a global community of researchers and professionals working on reducing risks of astronomical suffering (s-risks). (Read more about us and our values.)

We are a London-based nonprofit. Previously, we were located in Switzerland (Basel) and Germany (Berlin). Before shifting our focus to s-risks from artificial intelligence (AI), we implemented projects in global health and development, farm animal welfare, wild animal welfare, and effective altruism (EA) community building and fundraising.

Background on our strategy

For an overview of our strategic thinking, see the following pieces:

The best work on reducing s-risks cuts across a broad range of academic disciplines and interventions. Our recent research agenda, for instance, draws on computer science, economics, political science, and philosophy. That means we must (a) work in many different disciplines and (b) find people who can bridge disciplinary boundaries. The longtermism community brings together people with diverse backgrounds who understand our prioritization and share it to some extent. For this reason, we focus on making s-risk reduction a well-established priority in that community.

Strategic goals

Inspired by GiveWell’s self-evaluations, we are tracking our progress with a set of deliberately broad performance questions:

  1. Building long-term capacity. Have we made progress towards becoming a research group that will have an outsized impact on the research landscape and relevant actors shaping the future?
  2. Research progress. Has our work resulted in research progress that helps reduce s-risks (both in-house and elsewhere)?
  3. Research dissemination. Have we communicated our research to our target audience, and has the target audience engaged with our ideas?
  4. Organizational health. Are we a healthy organization with an effective board, staff in appropriate roles, appropriate evaluation of our work, reliable policies and procedures, adequate financial reserves and reporting, and so forth?

Our team will answer these questions at the end of 2020.

Plans for 2020

Research

Note: We currently carry out some of our research as part of the Foundational Research Institute (FRI). We plan to consolidate our activities related to s-risks under one brand and website in early 2020.

We aim to investigate research questions listed in our research agenda titled “Cooperation, Conflict, and Transformative Artificial Intelligence.” We explain our focus on cooperation and conflict in the preface:

“S-risks might arise by malevolence, by accident, or in the course of conflict. (…) We believe that s-risks arising from conflict are among the most important, tractable, and neglected of these. In particular, strategic threats by powerful AI agents or AI-assisted humans against altruistic values may be among the largest sources of expected suffering. Strategic threats have historically been a source of significant danger to civilization (the Cold War being a prime example). And the potential downsides from such threats, including those involving large amounts of suffering, may increase significantly with the emergence of transformative AI systems.”

Topics covered by our research agenda include:

Some topics were left out of the research agenda because they did not fit its scope, even though we consider them very important:

In practice, our publications and grants will be determined to a large extent by the ideas and motivation of the researchers. We understand the above list of topics as a menu for researchers to choose from, and we expect that our actual work will only cover a small portion of the relevant issues. We hope to collaborate with other AI safety research groups on some of these topics.

We are looking to grow our research team, so we would be excited to hear from you if you think you might be a good fit! We are also considering running a hiring round based on our research agenda as well as a summer research fellowship.

Research community

We aim to develop a global research community, promoting regular exchange and coordination between researchers whose work contributes to reducing s-risks.

Other activities

Organizational opportunities and challenges

Review of 2019

Research

S-risks from conflict. In 2019, we mainly worked on s-risks as a result of conflicts involving advanced AI systems:

We also circulated nine internal articles and working papers with the participants of our research workshops.

Foundational work on decision theory. This work might be relevant in the context of acausal interactions (see the last section of the research agenda):

Miscellaneous publications:

Research community

Organizational updates

Other activities

Mistakes and lessons learned

Financials


How to contribute

Recommendation for donors

We think it makes sense for donors to support us if:

  1. you believe we should prioritize interventions that affect the long-term future positively,
  2. (a) you assign significant credence to some form of suffering-focused ethics, (b) you think s-risks are not unlikely compared to very positive future scenarios, and/or (c) you think work on s-risks is particularly neglected and reasonably tractable, and
  3. you assign significant credence to our prioritization and strategy being sound, i.e., you consider our work on AI and/or non-AI priorities sufficiently pressing (e.g., you assign a nontrivial probability (at least 5–10%) to the development of transformative AI within the next 20 years).

For donors who do not agree with these points, we recommend giving to the donor lottery (or the EA Funds). We recommend that donors who are interested in the EAF Fund support EAF instead because the EAF Fund has a limited capacity to absorb further funding.

Would you like to support us? Make a donation.

We are interested in your feedback

If you have any questions or comments, we look forward to hearing from you; you can also send us feedback anonymously. We greatly appreciate any thoughts that could help us improve our work. Thank you!

Acknowledgments

I would like to thank Tobias Baumann, Max Daniel, Ruairi Donnelly, Lukas Gloor, Chi Nguyen, and Stefan Torges for giving feedback on this article.

13 comments


comment by MichaelStJules · 2019-12-31T08:00:50.161Z · score: 17 (8 votes) · EA(p) · GW(p)

I'm happy to hear about Beckstead's guidelines to give s-risks and related views more representation. This looks like a big deal, especially given complaints about representativeness at some of the most influential EA orgs.

This is the first I'm hearing about them, actually, and I'm surprised they weren't brought up when concerns about EAF/FRI's guidelines were raised (or did I just miss them?). At the time, it seemed like a fairly one-sided compromise on suffering-focused views from EAF/FRI in return for grants, but this looks like a pretty good deal overall, although I would like to see how far the guidelines go. Were Beckstead's guidelines already finished or still being worked on then?

You mention beliefs, too; does this include suffering-focused views generally?

comment by Jonas Vollmer · 2020-01-02T11:30:04.380Z · score: 5 (4 votes) · EA(p) · GW(p)

Thank you for the feedback!

Yes, we sent out both guidelines simultaneously. They link to each other. The post you're referring to mentioned Nick's guidelines in passing, but it seems readers got an incomplete / incorrect impression.

“You mention beliefs, too; does this include suffering-focused views generally?”

The guidelines talk about beliefs that are important to us in general. Suffering-focused views aren't mentioned as a concrete example, but flawed futures and s-risks are.

comment by MichaelStJules · 2020-01-03T03:25:52.262Z · score: 6 (4 votes) · EA(p) · GW(p)

Ah, you're right, Beckstead's guidelines are mentioned.

This does still seem a bit asymmetric as a trade: in exchange for grant money and discussion of outcomes (i.e., flawed futures and s-risks) that both classical utilitarians (or those with more symmetric views) and those with suffering-focused views would view as astronomically bad, EAF/FRI is expected to emphasize moral uncertainty, reference arguments against asymmetric views and for symmetric views, and weaken the framing of arguments against symmetric views (e.g., "world destruction"). Is this accurate?

In other words, those with more symmetric views should already care about flawed futures and s-risks, so it doesn't seem like much of a compromise for them to mention them, but those with suffering-focused views are expected to undermine their own views.

Are, for example, any of the procreation asymmetry, negative utilitarianism, lexicality, prioritarianism or tranquilism mentioned in Beckstead's guidelines? What about moral uncertainty in population ethics generally?

I can see how this could be considered a big win for suffering-focused views overall by getting more consideration for their practical concerns (flawed futures and s-risks), and de-emphasizing these views in itself could also be useful to attract hires and donors for s-risk work, but if someone thought the promotion of suffering-focused views (generally, or within EA) was important, it could be seen as an overall loss.

Maybe having two separate orgs actually is the best option, with one (EAF/FRI) focused on s-risks and emphasizing moral uncertainty, and the other (there's been talk about one) emphasizing suffering-focused views.

comment by Jonas Vollmer · 2020-01-16T22:35:58.239Z · score: 6 (3 votes) · EA(p) · GW(p)

Thanks for giving input on this!

So you seem to think that our guidelines ask people to weaken their views while Nick's may not be doing that, and that they may be harmful to suffering-focused views if we think promoting SFE is important. I think my perspective differs in the following ways:

  • The guidelines are fairly similar in their recommendation to mention moral uncertainty and arguments that are especially important to other parts of the community while representing one's own views honestly.
  • If we want to promote SFE in EA, we will be more convincing to (potential) EAs if we provide nuanced and balanced arguments, which is what the guidelines ask for, and if s-risks research is more fleshed out and established in the community. Unlike our previous SFE content, our recent efforts (e.g., workshops, asking for feedback on early drafts) received a lot of engagement from both newer and long-time EA community members. (Outside of EA, this seems less clear.)
  • We sought feedback on these guidelines from community members, and the response was largely positive. Some people will always disagree, but overall most were in favor. We'll seek feedback again when we revisit the guidelines.
  • I think this new form of cooperation across the community is worth trying and improving on. It may not be perfect yet, but we will reassess at the end of this year and make adjustments (or discontinue the guidelines in a worst case).

I hope this is helpful. We have now published the guidelines; you can find the links above!

comment by MichaelStJules · 2020-01-17T06:07:16.614Z · score: 2 (2 votes) · EA(p) · GW(p)

Thanks!

I agree with/appreciate these points. I think there is a difference in how each side deals with the other's concerns, but I guess I can see that it might be fair anyway. That is, in EAF's guidelines, authors are encouraged to "include some of the best arguments against these positions, and, if appropriate, mention the wide acceptance of these arguments in the effective altruism community", while in Beckstead's, authors are encouraged to discuss the practical concerns of the SFE community, which might not otherwise be practical concerns for them, depending on their empirical views (e.g., astronomical suffering would be outweighed by far more wellbeing).

Also, I expect this not to be the case, but is general advocacy against working on extinction risks (and in favour of other priorities) something that would be discouraged according to the guidelines? This may "cause human extinction" by causing people to (voluntarily) be less likely to try to prevent extinction. Similarly, what about advocacy for voluntary human extinction (however unlikely it is anyway)? I think these should be fine if done in an honest and civil way, and neither underhandedly nor manipulatively.

comment by Jonas Vollmer · 2020-01-17T10:08:29.489Z · score: 6 (4 votes) · EA(p) · GW(p)

Thanks! I think I don't have the capacity to give detailed public replies to this right now. My respective short answers would be something like "sure, that seems fine" and "might inspire riskier content, depends a lot on the framing and context", but there's nuance to this that's hard to convey in half a sentence. If you would like to write something about these topics and are interested in my perspective, feel free to get in touch and I'm happy to share my thoughts!

comment by Aaron Gertler (aarongertler) · 2019-12-31T23:11:43.971Z · score: 15 (9 votes) · EA(p) · GW(p)

This is a model for an organization update, and I can easily see myself referring other orgs to it in the future. Thank you for putting it together!

comment by Jonas Vollmer · 2020-01-01T13:14:17.326Z · score: 2 (1 votes) · EA(p) · GW(p)

Thanks! :)

comment by MichaelStJules · 2019-12-31T08:53:41.256Z · score: 5 (4 votes) · EA(p) · GW(p)
“In particular, strategic threats by powerful AI agents or AI-assisted humans against altruistic values may be among the largest sources of expected suffering.”

Thanks, this is pretty horrifying.

Do the kinds of s-risks EAF has in mind mostly involve artificial sentience to get to astronomical scale?

Are you primarily concerned with autonomous self-sustaining (self-replicating) suffering processes being created, or are you also very concerned about an agent already having or creating individuals capable of suffering and who require resources from the agent to keep running, despite the costs (of running, or the extra costs of sentience specifically)?

My guess is that the latter is much more limited in potential scale.

EDIT: Ah, of course, they can just run suffering algorithms on existing general-purpose computing hardware.

comment by Jonas Vollmer · 2020-01-16T22:37:18.435Z · score: 6 (3 votes) · EA(p) · GW(p)
“Do the kinds of s-risks EAF has in mind mostly involve artificial sentience to get to astronomical scale?”

Yes, see here. Though we also put some credence on other "unknown unknowns" that we might prevent through broad interventions (like promoting compassion and cooperation).

“Are you primarily concerned with autonomous self-sustaining (self-replicating) suffering processes being created, or are you also very concerned about an agent already having or creating individuals capable of suffering and who require resources from the agent to keep running, despite the costs (of running, or the extra costs of sentience specifically)? My guess is that the latter is much more limited in potential scale.”

Both could be concerning. I find it hard to think about future technological capabilities and agents in sufficient detail. So rather than thinking about specific scenarios, we'd like to reduce s-risks through (hopefully) more robust levers such as making the future less multipolar and differentially researching peaceful bargaining mechanisms.

comment by MichaelA · 2020-03-07T10:36:57.701Z · score: 3 (2 votes) · EA(p) · GW(p)

I was impressed by the structure and clarity of this post, and of EAF/CLR's thinking.

Minor point: In the communications guidelines Google doc, it says "We decided to make this document public in December 2020." I presume that should be 2019.

comment by Jonas Vollmer · 2020-03-07T15:12:44.563Z · score: 3 (2 votes) · EA(p) · GW(p)

Thanks, fixed!

comment by meerpirat · 2019-12-30T23:22:44.571Z · score: 3 (3 votes) · EA(p) · GW(p)

Thank you for the update, and for your work.