Cause X Guide
post by Joey
One of the factors that makes the effective altruism movement different from so many others is that its members are unified by the broad question [EA · GW] "How can I do the most good?" instead of by specific solutions, such as "reduce climate change." One of the most important questions EAs need to consider is which cause area presents the highest impact for their work.
There are four established cause areas in effective altruism: global poverty, factory-farmed animals, artificial intelligence existential risk, and EA meta. However, there are dozens of other cause areas that some EAs consider promising. The concept behind a “cause X” is that there could be a cause neglected by the EA community but that is as important, or more important, to work on than the four currently established EA cause areas. Finding a new cause X should be one of the biggest goals of the EA movement and one of the largest opportunities for an individual EA to achieve counterfactual impact.
One example of cause X posts having an impact is that some of them have influenced Charity Entrepreneurship's focus on mental health. The cause X discussion has also influenced one of the largest foundations in the world, Good Ventures.
This guide aims to compile the most useful content for evaluating possible new cause Xs and comparing them to the currently established top cause areas. Some of the content is old, and some of it does not perfectly address its question. However, these were the best sources I could find to debate and explain the issues. This guide is aimed at an intermediate EA audience who already has a solid understanding of EA ideas.
The guide is broken down into three sections. The introduction aims to explain the concepts needed to compare cause areas, such as "Cause X," "How a new cause area might be introduced to the EA community," "Current methods used to split resources between causes," and "Concerns with some of those methodologies." The second section focuses on comparing top causes and reviewing some of the key issues that divide current supporters of the big four cause areas. The final section presents several possible candidates for cause X as new areas worth considering. This is only a small sample of the full list of causes presented and considered in the EA movement, but these causes were selected to represent the areas (other than the big four) that many EAs would consider promising. I used three different methods to devise a list of 15 cause areas that might be considered promising candidates for cause X, selecting five causes per method.
Method 1: Cause areas among the top ten listed on the EA survey [EA · GW]
Method 2: Cause areas endorsed by two or more major EA organizations
Method 3: Cause profiles or pitches with 50 or more upvotes on the EA Forum
This guide aims to be a resource wherein cause Xs can be noticed, read about, and more deeply considered. There are hundreds of ways to make the world a better place. Given the EA movement’s relative youth and frequently unsystematic way of reviewing cause areas, there is ample room for more consideration and research. The goal of the guide is for more people to consider a wider range of cause areas so we, as a movement, have a better chance of finding new and impactful ways to do good.
Cause X guide content
-Four focus areas of EA [LW · GW]
-EA cause selection [EA · GW]
-World view diversification
-Cause X [? · GW]
-What if you’re working on the wrong cause? [EA · GW]
-EA representativeness [EA · GW]
-How to get a cause into EA [EA · GW]
Comparing top causes
-Animals > Humans
-Humans > Animals [EA · GW]
-Long-term future > Near-term future
-Near-term future > Long-term future
-Meta > Direct
-Direct > Meta [EA · GW]
New causes one could consider
-Mental health [EA · GW]
-Climate change [EA · GW]
-Nuclear war [EA · GW]
-Rationality [EA · GW]
-Biosecurity [? · GW]
-Wild animal suffering
-Meta science research
-Improving institutional decision making
-Invertebrates [EA · GW]
-Moral circle expansion [EA · GW]
-Happiness [EA · GW]
-Pain in the developing world [EA · GW]
-Coal fires [EA · GW]
If this guide is helpful to a lot of people, I will update or deepen the key posts or connect them better to make a more comprehensive PDF handbook. We will also keep a copy of this guide on Charity Entrepreneurship’s website here so it is easier for people to find in the future.
Comments sorted by top scores.
comment by riceissa ·
2019-09-01T21:15:46.922Z · EA(p) · GW(p)
It seems to me that this post has introduced a new definition of cause X that is weaker (i.e. easier to satisfy) than the one used by CEA.
This post defines cause X as:
The concept behind a “cause X” is that there could be a cause neglected by the EA community but that is as important, or more important, to work on than the four currently established EA cause areas.
But from Will MacAskill's talk [? · GW]:
What are the sorts of major moral problems that in several hundred years we'll look back and think, "Wow, we were barbarians!"? What are the major issues that we haven't even conceptualized today?
I will refer to this as Cause X.
See also the first paragraph of Emanuele Ascani's answer here [EA(p) · GW(p)].
From the "New causes one could consider" list in this post, I think only Invertebrates and Moral circle expansion would qualify as a potential cause X under CEA's definition (the others already have researchers/organizations working on them full-time, or wouldn't sound crazy to the average person).
I think it would be good to have a separate term specifically for the cause areas that seem especially crazy or unconceptualized, since searching for causes in this stricter class likely requires different strategies, more open-mindedness, etc.
Related: Guarded definition.
comment by Milan_Griffes ·
2019-09-01T20:17:45.721Z · EA(p) · GW(p)
Improving how we measure well-being & happiness [EA · GW] is related to Mental health [EA · GW] and Meta science research.
See also Logarithmic Scales of Pleasure and Pain [EA · GW].
↑ comment by algekalipso ·
2019-09-02T01:29:48.142Z · EA(p) · GW(p)
To zoom in on the "logarithmic scales of pleasure and pain" angle (I'm the author), I would say that this way of seeing the world suggests that the bulk of suffering is concentrated in a small percentage of experiences. Thus, finding scalable treatments, especially for ultra-painful conditions, could take care of a much larger share of the world's burden of suffering than most people would intuitively realize. I really think this should be high on the list of considerations for cause X. Specifically:
An important pragmatic takeaway from this article is that if one is trying to select an effective career path, as a heuristic it would be good to take into account how one’s efforts would cash out in the prevention of extreme suffering (see: Hell-Index [EA · GW]), rather than just QALYs and wellness indices that ignore the long-tail. Of particular note as promising Effective Altruist careers, we would highlight working directly to develop remedies for specific, extremely painful experiences. Finding scalable treatments for migraines, kidney stones, childbirth, cluster headaches, CRPS, and fibromyalgia may be extremely high-impact (cf. Treating Cluster Headaches and Migraines Using N,N-DMT and Other Tryptamines, Using Ibogaine to Create Friendlier Opioids [EA · GW], and Frequency Specific Microcurrent for Kidney-Stone Pain). More research efforts into identifying and quantifying intense suffering currently unaddressed would also be extremely helpful.
(see also the writeup of an event we hosted about possible new EA Cause Xs)
comment by Jaime Sevilla (Jsevillamol) ·
2019-09-02T10:49:26.604Z · EA(p) · GW(p)
I like this post a lot; it is succinct and provides a great actionable for EAs to act on.
Stylistically, I would prefer it if the Organization section were broken down into a paragraph per section to make it easier to read.
I like that you precommitted to a transparent way of selecting the new causes you present to readers and limited the scope to 15. I would personally have liked to see them broken into sections depending on which method they were chosen by.
For other readers who are eager for more, here are two others that satisfy the criteria but I suppose did not make the list:
Atomically Precise Manufacturing (cause area endorsed by two major organizations: OPP and Eric Drexler of FHI)
Aligning Recommender Systems (cause profile with more than 50 upvotes on the EA Forum)
comment by Ben_Harack ·
2019-09-03T15:54:24.211Z · EA(p) · GW(p)
Recently, I've been part of a small team that is working on the risks posed by technologies that allow humans to steer asteroids (opening the possibility of deliberately striking the Earth). We presented some of these results in a poster at EA Global SF 2019.
At the moment, we're expanding this work into a paper. My current position is that this is an interesting and noteworthy technological risk that is (probably) strictly less dangerous/powerful than AI, but working on it can be useful. My reasons include: mitigating a risk that is largely orthogonal to AI is still useful; succeeding at preemptive regulation of a technological risk would improve our ability to do the same for more difficult cases (e.g., AI); and asteroid risk offers a more concrete, less abstract way to popularize the X-risk concept than technologies like AI/biotech (most people understand the prevailing theory of the extinction of the dinosaurs and can somewhat easily imagine such a disaster in the future).
↑ comment by MichaelA ·
2019-10-01T01:44:05.654Z · EA(p) · GW(p)
That's a very interesting topic that I hadn't considered before, and your argument for why it's worth having at least some people thinking about and working on it seems sound to me.
But I also wondered when reading your comment whether publicly discussing such an idea is net negative due to posing information hazards. (That would probably just mean research on the idea should only be discussed individually with people who've been at least briefly vetted for sensibleness, not that research shouldn't be conducted at all.) I had never heard of this potential issue, and don't think I ever would've thought of it by myself, and my knee-jerk guess would be that the same would be true of most policymakers, members of the public, scientists, etc.
Have you thought about the possible harms of publicising this idea, and run the idea of publicising it by sensible people to check that there's no unilateralist's curse occurring?
(Edit: Some parts of your poster have updated me towards thinking it's more likely than I previously thought that relevant decision-makers are or will become aware of this idea anyway. But I still think it may be worth at least considering potential information hazards here - which you may already have done.
A related point is that I recall someone - I think they were from FHI, but I can't easily find the source - arguing that publicly emphasising the possibility of an AI arms race could make matters worse by making arms race dynamics more likely.)
↑ comment by Ben_Harack ·
2019-10-02T16:49:24.286Z · EA(p) · GW(p)
Thanks for taking a look at the arguments and taking the time to post a reply here! Since this topic is still pretty new, it benefits a lot from each new person taking a look at the arguments and data.
I agree completely regarding information hazards. We've been thinking about these extensively over the last several months (and consulting with various people who are able to hold us to task about our position on them). In short, we chose every point on that poster with care. In some cases we're talking about things that have been explored extensively by major public figures or sources, such as Carl Sagan or the RAND Corporation. In other cases, we're in new territory. We've definitely considered keeping our silence on both counts (also see https://forum.effectivealtruism.org/posts/CoXauRRzWxtsjhsj6/terrorism-tylenol-and-dangerous-information [EA · GW] if you haven't seen it yet). As it stands, we believe that the arguments in the poster (and the information undergirding those points) are of pretty high value to the world today and would actually be more dangerous if publicized at a later date (e.g., when space technologies are already much more mature and there are many status quo space forces and space industries who will fight regulation of their capabilities).
If you're interested in the project itself, or in further discussions of these hazards/opportunities, let me know!
Regarding the "arms race" terminology concern, you may be referring to https://www.researchgate.net/publication/330280774_An_AI_Race_for_Strategic_Advantage_Rhetoric_and_Risks which I think is a worthy set of arguments to consider when weighing whether and how to speak on key subjects. I do think that a systematic case needs to be made in favor of particular kinds of speech, particularly around 1) constructively framing a challenge that humanity faces and 2) fostering the political will needed to show strategic restraint in the development and deployment of transformative technologies (e.g., through institutionalization in a global project). I think information hazards are an absolutely crucial part of this story, but they aren't the entire story. With luck, I hope to contribute more thoughts along these lines in the coming months.
↑ comment by MichaelA ·
2019-10-03T00:25:48.442Z · EA(p) · GW(p)
It sounds like you've given the possibility of information hazards careful attention, recognised the value of consulting others, and made reasonable decisions. (I expected you probably would've done so - just thought it'd be worth asking.)
I also definitely agree that the possibility of information hazards shouldn't serve as a blanket, argument-ending reason to avoid any public discussion of potentially dangerous technologies, and that it always has to be weighed against the potential benefits of such discussion.
comment by RomeoStevens ·
2019-09-02T01:22:53.813Z · EA(p) · GW(p)
Although it seems to be fine for the majority, school drives some children to suicide. Given that there is little evidence of benefit from schooling, advocating for letting those most affected have alternative options could be high impact.
↑ comment by Kirsten (Khorton) ·
2019-09-02T23:00:29.543Z · EA(p) · GW(p)
- There is strong evidence that the majority of children will never learn to read unless they are taught. Most children who go to school learn to read. That in itself is strong evidence that there are benefits to schooling.
- In what countries are there no alternatives to attending school?
↑ comment by RomeoStevens ·
2019-09-03T16:04:30.144Z · EA(p) · GW(p)
>There is strong evidence that the majority of children will never learn to read unless they are taught.
This is a different claim. I don't know of strong evidence that children will fail to learn to read if not sent to school.
↑ comment by Kirsten (Khorton) ·
2019-09-03T17:11:54.310Z · EA(p) · GW(p)
I claim that if state-funded universal primary education did not exist, a significant minority of the population would never learn to read. A current benefit of schools is providing near-universal literacy. I am frankly amazed that you claim that there is little evidence of benefit from schooling.
↑ comment by RomeoStevens ·
2019-09-04T01:38:00.704Z · EA(p) · GW(p)
It seems like you're arguing from common sense?
https://www.psychologytoday.com/us/blog/freedom-learn/201406/survey-grown-unschoolers-i-overview-findings
↑ comment by Kirsten (Khorton) ·
2019-09-04T08:20:16.716Z · EA(p) · GW(p)
Blog posts won't convince me; I studied linguistics and education for my undergrad, which convinced me that most children don't teach themselves to read. A few do, and some have parents who teach them. But if you want to convince me that all children (not just a handful!) can and will teach themselves to read without school, you will need to show me some academic evidence.
I am convinced of this not only because I was explicitly taught it by experts in linguistics and education, but also because we did not have universal literacy before we had universal primary education (and countries without universal primary education still don't!), and because we have evidence about which teaching systems will help children read more quickly and fluently than other teaching methods (and if teaching did literally nothing beneficial, like you still seem to be suggesting, we shouldn't see significant differences between teaching methods).
Also consider, in this hypothetical world without schools, how children will access books.
Note: Assuming you're not a senior policymaker or politician, I don't think it's a good use of my time to continue. I will however click on any relevant peer-reviewed studies and at least read the abstract, even if I don't comment.
↑ comment by RomeoStevens ·
2019-09-04T14:57:29.721Z · EA(p) · GW(p)
We seem to be having different conversations. I think you're looking for strong evidence of stronger, more universal claims than I am making. I'm trying to say that this hypothesis (for some children) should be within the window of possibility and worthy of more investigation. There's a potential motte and bailey problem with that, and the claims about evidence for benefit from schooling broadly should probably be separated from evidence for harms of schooling in specific cases.
>Imagine a country with two rules: first, every person must spend eight hours a day giving themselves strong electric shocks. Second, if anyone fails to follow a rule (including this one), or speaks out against it, or fails to enforce it, all citizens must unite to kill that person. Suppose these rules were well-enough established by tradition that everyone expected them to be enforced. -Meditations on Moloch
Imagine that an altruistic community in such a world is very open-minded and willing to consider not shocking yourself all the time, but wants to see lots of evidence for it produced by the taser manufacturers, since after all they know the most about tasers and whether they are harmful...
If you give children the option of being tased or going to school, some of them are going to pick the taser.
↑ comment by elle ·
2019-09-05T09:41:47.210Z · EA(p) · GW(p)
Does this mean you no longer endorse the original statement you made ("there is little evidence of benefit from schooling")?
I'm feeling confused... I basically agreed with Khorton's skepticism about that original claim, and now it sounds like you agree with Khorton too. It seems like you, in fact, believe something quite different from the original claim; your actual belief is something more like: "for some children, the benefits of schooling will not outweigh the torturous experience of attending school." But it doesn't seem like there has been any admission that the original claim was too strong (or, at the very least, that it was worded in a confusing way). So I'm wondering if I'm misinterpreting.
↑ comment by RomeoStevens ·
2019-09-05T12:09:37.140Z · EA(p) · GW(p)
I think there are two claims. I stand by both, but think arguing them simultaneously causes things like a motte and bailey problem to rear its head.
↑ comment by lucy.ea8 ·
2019-09-05T18:02:56.522Z · EA(p) · GW(p)
>we did not have universal literacy before we had universal primary education (and countries without universal primary education still don't!)
This is KEY. In already-industrialized countries, kids may learn on their own or via homeschooling, but for society as a whole, public education is necessary; otherwise kids don't learn.
comment by Kirsten (Khorton) ·
2019-09-02T22:58:56.685Z · EA(p) · GW(p)
pedantic note: I believe GiveWell's new focus on government policy falls within the existing categories of global health and institutional reform, rather than being its own cause area.
comment by lucy.ea8 ·
2019-09-05T19:12:46.424Z · EA(p) · GW(p)
EA's emphasis is on "Global Health and Poverty"; the missing cause X here is basic education. I suggest that the cause area should be "Global Basic Education and Health."
Basic education meaning 12 years of schooling, the equivalent of high school in the USA.