Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)
post by seanrson
I recently looked through the current version of the virtual groups intro syllabus and was disappointed to see zero mention of s-risks within the sections on longtermism/existential risk. I think this is a symptom of a larger problem, where “longtermism” has come to mean a very particular set of future-oriented projects (primarily extinction risk reduction) deriving from a very particular set of values (primarily classical utilitarianism). As facilitators responsible for introducing people to the ideas of EA, I think it’s important for us to diversify our readings and discussions to account for multiple reasonable starting positions. For a start, I suggest that we rework the week on existential risk to have a more general focus on cause prioritization in longtermism, including readings and discussions on the topic of s-risks.
More generally, I think that we should take the threat of groupthink very seriously. The best-funded and most influential parts of the EA community have come to prioritize a particular worldview and value system that is not necessarily definitive of EA, and one that reasonable people in the community could disagree with. Throughout my experience as a student organizer, I've seen many of my peers just defer to the views and values supported by organizations like 80,000 Hours without reflecting much on their own positions, which strikes me as quite problematic given that many want to represent EA as a question, not an ideology. Failing to include a broader range of ideas and topics in introductory fellowships only exacerbates this problem of groupthink.
I’d love to talk more about how we can diversify the range of views represented to newcomers, and in particular how we can “diversify longtermism.”
comment by Mauricio ·
2021-09-15T21:44:13.270Z
Thanks for this! I'm not sure what I think here--a few things might make it challenging/costly to introduce s-risks into the Introductory EA syllabus:
- The Introductory EA Program is partly meant to build participants' sustained interest in EA, and the very speculative / weird nature of s-risks could detract from that (by being off-puttingly out-there to people who--like most program participants--have only spent a handful of hours learning about / reflecting on relevant topics).
- One might wonder: if this is an issue, why introduce x-risks? I'd guess x-risks are more accessible/intuitive to people, given common discourse about climate change and our recent experiences with pandemics. And the Introductory EA Program doesn't dive into, say, the details of AI risk arguments, for a mix of this reason and the following reason.
- The Introductory EA Program aims to have an amount of material that's short enough (a) for many people who are busy and not totally dedicated to EA to want to apply / not drop out, and (b) for many participants to actually do the core readings (in the absence of serious external incentives). Adding core/required readings in general is costly, because it detracts from these things.
- Careful thinking about s-risks seems to lead people to seek cooperative ways to mitigate s-risks, which seems great. The Introductory EA Program is too rushed and shallow to promote much thorough thinking, raising risks that participants could take naive/reckless approaches to reducing s-risks (which seems terrible, including for the reputation of the s-risk community).
Still, your basic point about the importance of diversifying the ideas EAs are introduced to seems right. (This is different from the importance of diversifying the views of people in longtermism--the value of that depends on the correctness of the views that are currently most prominent.) So I'd be tentatively optimistic about some approaches like these:
- Adding more topics to the further readings sections.
- Adding a week where people choose readings out of a wide range of topics.
- Having sections on s-risks in the In-Depth EA Program (if I remember correctly, this is already the case, which seems appropriate because points 1-3 above are all much less applicable to the in-depth program.)
(Also, if anyone is interested in suggesting changes to the Introductory EA Program, you might find it useful to make shovel-ready recommendations, e.g. proposing a specific reading or collection of readings for a week, because organizers are fairly time-constrained. Although broader suggestions/discussion also seems valuable.)
↑ comment by Jamie_Harris ·
2021-09-16T23:05:05.688Z
I agree with 2. Not sure about 3 as I haven't reviewed the Introductory fellowship in depth myself.
But on 1, I want to briefly make the case that s-risks don't have to be/seem much weirder than extinction risk work. I've sometimes framed it as: the future is vast and it could be very good or very bad, so we probably want both to try to preserve it for the good stuff and to improve its quality. (Although perhaps CLR et al. don't actually agree with the preserving bit; they just don't vocally object to it for coordination reasons etc.)
There are also ways it can seem less weird. E.g. you don't have to make complex arguments about missed potential, or about ensuring that a thing which hasn't happened yet continues to happen; you can just say: "here's a potential bad thing. We should stop that!!" See https://forum.effectivealtruism.org/posts/seoWmmoaiXTJCiX5h/the-psychology-of-population-ethics for evidence that people, on average, weigh (future/possible) suffering more than happiness.
Also consider that one way of looking at moral circle expansion (one method of reducing s-risks) is that it's basically just what many social justicey types are focusing on anyway -- increasing protection and consideration of marginalised groups. It just takes it further.
↑ comment by Mauricio ·
2021-09-17T06:17:17.731Z
Thanks! Yeah, I think you're right; that + Sean's specific reading suggestions seem like reasonably intuitive introductions to s-risks. Do you think there are similarly approachable introductions to specific s-risks, for when people ask "OK, I'm into this broad idea--what specific things could I work on?" (Or maybe this isn't critical--maybe people are oddly receptive to weird ideas if they've had good first impressions.)
↑ comment by Jamie_Harris ·
2021-09-18T07:34:21.556Z
Well, I think moral circle expansion is a good example. You could introduce s-risks as a general class of things, and then talk about moral circle expansion as a specific example. If you don't have much time, you can keep it general and talk about future sentient beings; if animals have already been discussed, mention the idea that if factory farming or something similar were spread to astronomical scales, that could be very bad. If you've already talked about risks from AI, I think you could reasonably discuss some content about artificial sentience without that seeming like too much of a stretch. My current guess is that focusing on detailed simulations as an example is a nice balance between (1) intuitive / easy to imagine and (2) the sorts of beings we're most concerned about. But I'm not confident in that, and Sentience Institute is planning a survey for October that will give a little insight into which sorts of future scenarios and entities people are most concerned about. If by "introductions" you're looking for specific resource recommendations, there are short videos, podcasts, and academic articles depending on the desired length, format etc.
Some of the specifics might be technical, confusing, or esoteric, but if you've already discussed AI safety, you could quite easily discuss the concept of focusing on worst-case / “fail-safe” AI safety measures as a promising area. It's also nice because it overlaps with extinction risk reduction work more (as far as I can tell) and seems like a more tractable goal than preventing extinction via AI or achieving highly aligned transformative AI.
A second example (after MCE) that benefits from being quite close to things that many people already care about is the area of reducing risks from political polarisation. I guess that explaining the link to s-risks might not be that quick, though. Here's a short writeup on this topic, and I know that Magnus Vinding of the Center for Reducing Suffering is publishing a book soon called Reasoned Politics, which I imagine includes some content on this. It's all a bit early-stage though, so I probably wouldn't pick this one at the moment.
↑ comment by seanrson ·
2021-09-15T23:09:03.578Z
Hey Mauricio, thanks for your reply. I’ll reply later with some more remarks, but I’ll list some quick thoughts here:
- I agree that s-risks can seem more “out there,” but I think some of the readings I’ve listed do a good job of emphasizing the more general worry that the future involves a great deal of suffering. It seems to me that the asymmetry in content about extinction risks vs. s-risks is less about the particular examples and more about the general framework. Taking this into account, perhaps we could write up something to be a gentler introduction to s-risks. The goal is to prevent people from identifying “longtermism” with just extinction risk reduction.
- Yeah, this is definitely true, but completely omitting such a distinctively EA concept as s-risks seems to suggest that something needs to be changed.
- I think the reading I listed entitled “Common Ground for Longtermists” should address this worry, but perhaps we could add more. I tend to think that the potential for antagonism is outweighed by the value of broader thinking, but your worry is worth addressing.
↑ comment by Mauricio ·
2021-09-16T06:04:17.343Z
Ah sorry, I hadn't seen your list of proposed readings (I wrongly thought the relevant link was just a link to the old syllabus). Your points about those readings in (1) and (3) do seem to help with these concerns. A few thoughts:
- The dichotomy between x-risk reduction and s-risk reduction seems off to me. As I understand them, prominent definitions of x-risks (especially the more thorough/careful discussion in ) are all broad enough for s-risks to count as x-risks (especially if we're talking about permanent / locked-in s-risks, which I assume we are, given the context of longtermism).
- One worry is that the proposed list might be overcorrecting: with half of its content coming from CLR, it seems to suggest that about half of longtermists endorse prioritizing s-risk reduction, which is a large overestimate.
- As you say, we want to discourage uncritical acceptance of views presented in the syllabus, so it seems good for such a list to include criticisms of both approaches to improving the long-term future, at least in recommended readings. (Yup, the current syllabus is also light on those, although week 7 does include criticisms of longtermism.)
> completely omitting such a distinctively EA concept like s-risks seems to suggest that something needs to be changed.
I'm not sure about that. The intro program omits plenty of distinctively EA concepts due to time/attention constraints--here are some other prominent ideas in EA that (if I remember correctly) are currently omitted from the core/required readings of the introductory program: consequentialism, cause X, patient longtermism, wild animal suffering, EA movement-building, improving institutional decision making, decision theory, the unilateralist's curse, moral uncertainty & cooperation, Bayesian reasoning, forecasting, history of well-being, cluelessness, mental welfare, and global priorities research.
A bunch of these (and s-risk reduction) are covered in depth in (some versions of) the in-depth EA program.
(Like I mentioned earlier, I'm pretty open to there being some discussion of s-risks in the intro syllabus. Mostly wondering about the degree to which it should be covered.)
↑ comment by seanrson ·
2021-09-16T13:19:57.537Z
Yeah my mistake, I should have been clearer about the link for the proposed changes. I think we’re mostly in agreement. My proposed list is probably overcorrecting, and I definitely agree that more criticisms of both approaches are needed. Perhaps a compromise would be just including the reading entitled “Common Ground for Longtermists,” or something similar.
I think you’re right that many definitions of x-risk are broad enough to include (most) s-risks, but I’m mostly concerned about the term “x-risk” losing this broader meaning and instead coming to refer only to extinction risks. It’s probably too nuanced for an intro syllabus, but MichaelA’s post (https://forum.effectivealtruism.org/posts/AJbZ2hHR4bmeZKznG/venn-diagrams-of-existential-global-and-suffering) could help people to better understand the space of possible problems.