Who is working on finding "Cause X"?

post by Milan_Griffes · 2019-04-10T23:09:23.892Z · score: 19 (12 votes) · EA · GW · 33 comments

This is a question post.


As a community, EA sometimes talks about finding "Cause X" (example 1, example 2).

The search for "Cause X" featured prominently in the billing for last year's EA Global (a).

I understand "Cause X" to mean "new cause area that is competitive with the existing EA cause areas in terms of impact-per-dollar."

This afternoon, I realized I don't really know how many people in EA are actively pursuing the "search for cause X." (I thought of a couple people, who I'll note in comments to this thread. But my map feels very incomplete.)

Answers

answer by Emanuele_Ascani · 2019-04-14T08:52:19.768Z · score: 11 (5 votes) · EA · GW

In my understanding, "Cause X" is something we almost take for granted today, but that people in the future will see as a moral catastrophe (much as we now see slavery, in contrast to how people in the past saw it). So it has a bit more nuance than just being a "new cause area that is competitive with the existing EA cause areas in terms of impact-per-dollar".

I think there are many candidates that seem to be overlooked by the majority of society. You could also argue that none of these is a real Cause X, since they are still recognised as problems by a large number of people. But this could just be the baseline of "recognition" a neglected moral problem will start from in a world as interconnected as ours. Here is what comes to mind:

  • Wild animal suffering (probably not recognised as a moral problem by the majority of the population)
  • Aging (many people probably ascribe it a neutral moral value, perhaps because it is rightly regarded as a "natural part of life". That consideration is correct, but it doesn't settle the problem's moral value or how many resources we should devote to it)
  • "Resurrection" or, in practice, right now, cryonics (probably seen as morally neutral, or not even remotely on the radar of the general population, with many people possibly even ascribing it a negative moral value)
  • Something related to subjective experience? (either aspects of subjective experience that people don't deem worthy of moral value because "times are still too rough to notice them", or aspects of subjective experience we are missing out on but could achieve today with the right interventions)

Cause areas that I think don't fit the definition above:

  • Mental health, since it is recognised as a moral problem by a large enough fraction of the population (but still probably not large enough?), although it is still too neglected.
  • X-risk. Recognised as a moral problem (who wants the apocalypse?) but too neglected for reasons probably not related to ethics.

But who is working on finding Cause X? I believe you could argue that every organisation devoted to finding new potential cause areas is. You could probably argue that moral philosophers, or even just thoughtful people, have a chance of recognising it. I'm not sure if there is a project or organisation devoted specifically to this task, but judging from the other answers here, probably not.

comment by Milan_Griffes · 2019-04-14T16:36:06.283Z · score: 4 (3 votes) · EA · GW
I believe you could argue that every organisation devoted to finding new potential cause areas is.

What organizations do you have in mind?

comment by Emanuele_Ascani · 2019-04-14T21:07:24.152Z · score: 3 (2 votes) · EA · GW

Open Philanthropy, GiveWell, and Rethink Priorities probably qualify. To clarify: my phrasing didn't mean "devoted exclusively to finding new potential cause areas".

answer by Denkenberger · 2019-04-13T06:48:20.282Z · score: 9 (13 votes) · EA · GW

I think alternate foods for catastrophes like nuclear winter are a Cause X (disclaimer: co-founder of ALLFED).

comment by Milan_Griffes · 2019-04-13T17:17:29.484Z · score: 3 (2 votes) · EA · GW

Thanks!

Very curious why this was downvoted. (This idea has been floated before, e.g. on the 80,000 Hours podcast, and seems like a plausible Cause X.)

answer by Denkenberger · 2019-04-20T22:17:18.643Z · score: 8 (2 votes) · EA · GW

I think working on preventing the collapse of civilization given loss of electricity/industry due to extreme solar storms, high-altitude electromagnetic pulses, or a narrow-AI computer virus is a Cause X (disclaimer: co-founder of ALLFED).

answer by Ramiro · 2019-04-15T22:19:44.501Z · score: 6 (4 votes) · EA · GW

This is not a solution/answer, but someone should design a clever way for us to be constantly searching for Cause X. I think a general contest could help, such as an "Effective Thesis Prize" to reward good work aligned with EA goals; perhaps Cause X could be the aim of a contest of its own.

answer by Halffull · 2019-04-12T12:53:41.089Z · score: 6 (6 votes) · EA · GW

Rethink Priorities seems to be the obvious organization focused on this.

comment by Milan_Griffes · 2019-04-12T17:02:14.080Z · score: 8 (4 votes) · EA · GW

From their website:

Right now, our research agenda is primarily focused on:

  • prioritization and research work within interventions aimed at nonhuman animals (as research progress here looks uniquely tractable compared to other cause areas)
  • understanding EA movement growth by running the EA Survey and assisting LEAN and SHIC in gathering evidence about EA movement building (as research here looks tractable and neglected)

Sounds like they're currently focused on new animal welfare & community-building interventions, rather than finding an entirely different cause area.

comment by Peter_Hurford · 2019-04-14T23:29:39.824Z · score: 13 (6 votes) · EA · GW

We're also working on understanding invertebrate sentience and wild animal welfare - maybe not "cause X" because other EAs are aware of this cause already, but I think will help unlock important new interventions.

Additionally, we're doing some analysis of nuclear war scenarios and paths toward non-proliferation. I think this is understudied in EA, though again maybe not "cause X" because EAs are already aware of it.

Lastly, we're also working on examining ballot initiatives and other political methods of achieving EA aims - maybe not cause X because it isn't a new cause area, but I think it will help unlock important new ways of achieving progress on our existing causes.

comment by Milan_Griffes · 2019-04-15T17:20:14.187Z · score: 2 (1 votes) · EA · GW

Thanks!

Is there a public-facing prioritized list of Rethink Priorities projects? (Just curious)

comment by Peter_Hurford · 2019-04-15T21:03:41.917Z · score: 5 (3 votes) · EA · GW

Right now everything I mentioned is in https://forum.effectivealtruism.org/posts/6cgRR6fMyrC4cG3m2/rethink-priorities-plans-for-2019 [EA · GW]

We're working on writing up an update.

answer by kbog · 2019-04-11T20:27:59.619Z · score: 6 (8 votes) · EA · GW

Between this, some ideas about AI x-risk and progress, and the unique position of the EA community, I'm beginning to think that "move Silicon Valley to cooperate with the US government and defense on AI technology" is Cause X. I intend to post something substantial in the future.

answer by Peter_Hurford · 2019-04-11T06:52:46.770Z · score: 5 (13 votes) · EA · GW

Me.

comment by anonymous_ea · 2019-04-12T17:27:29.420Z · score: 14 (7 votes) · EA · GW

Can you expand on this answer? E.g. how much this is a focus for you, how long you've been doing this, how long you expect to continue doing this, etc.

comment by Peter_Hurford · 2019-04-14T23:30:00.489Z · score: 6 (2 votes) · EA · GW

I'd refer you to the comments of https://forum.effectivealtruism.org/posts/AChFG9AiNKkpr3Z3e/who-is-working-on-finding-cause-x#Jp9J9fKkJKsWkjmcj [EA · GW]

comment by anonymous_ea · 2019-04-15T17:25:54.831Z · score: 1 (1 votes) · EA · GW

The link didn't work properly for me. Did you mean the following comment?

We're also working on understanding invertebrate sentience and wild animal welfare - maybe not "cause X" because other EAs are aware of this cause already, but I think will help unlock important new interventions.
Additionally, we're doing some analysis of nuclear war scenarios and paths toward non-proliferation. I think this is understudied in EA, though again maybe not "cause X" because EAs are already aware of it.
Lastly, we're also working on examining ballot initiatives and other political methods of achieving EA aims - maybe not cause X because it isn't a new cause area, but I think it will help unlock important new ways of achieving progress on our existing causes.

comment by Peter_Hurford · 2019-04-15T21:02:56.185Z · score: 3 (2 votes) · EA · GW

Yep :)

answer by aarongertler · 2019-04-12T10:18:13.065Z · score: 4 (3 votes) · EA · GW

GiveWell is searching for cost-competitive causes in many different areas (see the "investigating opportunities" table).

comment by Milan_Griffes · 2019-04-12T17:07:32.284Z · score: 2 (1 votes) · EA · GW

Good point. Plausibly this is Cause X research (especially if they team up with Mark Lutter & co.); I'll be curious to see how far outside their traditional remit they go.

answer by agdfoster · 2019-04-15T21:07:45.896Z · score: 3 (2 votes) · EA · GW

Arguably it was philosophers who found the last few. Once the missing moral reasoning was shored up, the cause-area conclusion was pretty deductive.

33 comments

comment by technicalities · 2019-04-11T19:59:41.035Z · score: 14 (8 votes) · EA · GW

One great example is the pain gap / access abyss. Only coined around 2017, it got some attention at EA Global London 2017 (?), and then OPIS stepped up. I don't think the OPIS staff were doing a cause-neutral search for this (they were founded in 2016) so much as it was independent convergence.

comment by Khorton · 2019-04-11T20:30:00.989Z · score: 3 (2 votes) · EA · GW

Their website suggests it wasn't independent.

'The primary issue for OPIS is the ethical imperative to reduce suffering. Linked to the effective altruism movement, they choose causes that are most likely to produce the largest impact, determined by what Leighton calls “a clear underlying philosophy which is suffering-focused”.'

comment by badbadnotgood · 2019-04-12T20:01:07.158Z · score: 3 (2 votes) · EA · GW

I may be wrong, but I remember reading an EA profile report and seeing Leighton comment that the profile report inspired OPIS's movement toward working on the problem.

comment by Milan_Griffes · 2019-04-10T23:10:18.428Z · score: 13 (10 votes) · EA · GW

Michael Plant's cause profile on mental health [EA · GW] seems like a plausible Cause X.

comment by Milan_Griffes · 2019-04-10T23:11:52.771Z · score: 11 (10 votes) · EA · GW

Wild-animal-suffering research seems like a plausible Cause X.

comment by Milan_Griffes · 2019-04-10T23:11:13.181Z · score: 10 (9 votes) · EA · GW

Founders Pledge cause report on climate change seems like a plausible Cause X.

comment by Evan_Gaensbauer · 2019-04-17T01:33:22.491Z · score: 8 (4 votes) · EA · GW

I've always thought of "Cause X" as a theme for events like EAG that are meant to prompt thinking in EA, and wasn't ever intended as something to take seriously and literally in actual EA action. If it was intended to be that, I don't think it ever should have been. I don't think it should be treated as such either. I don't see how it makes sense to anyone as a practical pursuit.

There have been some cause prioritization efforts that took 'Cause X' seriously. Yet given the presence of x-risk reduction in EA as a top priority, the #1 question has been to verify the validity and soundness of the fundamental assumptions underlying x-risk reduction as the top global priority. That's because, by its nature, depending on the overall soundness of the fundamental assumptions behind x-risk, it's basically binary whether it is or isn't the top priority. For prioritizers willing to work within the boundaries that the assumptions determining x-risk as the top moral priority are all true, cause prioritization focused on how actors should be working on x-risk reduction.

Since the question was reformulated as "Is x-risk reduction Cause X?", much cause prioritization research has been reduced to research on questions in relevant areas of still-great uncertainty (e.g., population ethics and other moral philosophy, forecasting, etc.). As far as I'm aware, no other cause prioritization efforts have been predicated on the theme of 'finding Cause X.'

In general, I've never thought it made much sense. Any cause that has gained traction in EA already entails a partial answer to that question, along some common lines that arguably define what EA is.

While they're disparate, all the causes in EA combine some form of practical aggregate consequentialism with global-scale interventions to impact the well-being of as large a population as feasible, within whatever other constraints one is working with. This is true of the initial cause areas EA prioritized: global poverty alleviation; farm animal welfare; and AI alignment. Other causes, like public policy reform, life extension, mental health interventions, wild animal welfare, and other existential risks, all fit with this framework.

It's taken for granted in EA conversations, but there are shared assumptions that go into this common perspective that distinguish EA from other efforts to do good. If someone disagrees with that framework, and has different fundamental assumptions about what is important, then they naturally sort themselves into different kinds of extant movements that align better with their perspective, such as more overtly political movements. In essence, what separates EA from any other movement, in terms of how any of us, and other private individuals, choose in which socially conscious community to spend our own time, is the different assumptions we make in trying to answer the question: 'What is Cause X?'

They're not brought to attention much, but there are sources outlining what the 'fundamental assumptions' of EA are (what are typically called 'EA values'), which I can provide upon request. Within EA, I think pursuing what someone thinks Cause X is takes the following form:

1. If one is confident one's current priority is the best available option one can realistically impact within the EA framework, working on it directly makes sense. Examples include any EA-aligned organization permanently dedicated to one or more specific causes, and efforts to support them.

2. If one is confident one's current priority is the best available option, but one needs more evidence to convincingly justify it as a plausible top priority in EA, or doesn't know how individuals can do work to realistically have an impact on the cause, doing research to figure that out makes sense. An example of this kind of work is the research Rethink Priorities is undertaking to identify crucial evidence underpinning fundamental assumptions in causes like wild animal welfare.

3. If one is confident the best available option one will identify is within the EA framework, but has little to no confidence in what those options will be, it makes sense to do very fundamental research that intellectually explores the principles of effective altruism. An example of this kind of work in EA is that of the Global Priorities Institute.

comment by Milan_Griffes · 2019-04-17T17:50:08.498Z · score: 4 (2 votes) · EA · GW
As far as I'm aware, no other cause pri efforts have been predicated on the theme of 'finding Cause X.'

https://www.openphilanthropy.org/research/cause-reports

comment by Milan_Griffes · 2019-04-17T17:48:54.147Z · score: 4 (2 votes) · EA · GW
I don't see how it makes sense to anyone as a practical pursuit.

GiveWell & Open Phil have at times undertaken systematic reviews of plausible cause areas; their general framework for this seems quite practical.

That's because, by its nature, depending on the overall soundness of the fundamental assumptions behind x-risk, it's basically binary whether it is or isn't the top priority.

Pretty strongly disagree with this. I think there's a strong case for x-risk being a priority cause area, but I don't think it dominates all other contenders. (More on this here [EA · GW].)

comment by Evan_Gaensbauer · 2019-04-19T04:32:04.519Z · score: 4 (2 votes) · EA · GW

The concerns you raise in your linked post are the same concerns a lot of other people I have in mind have cited for why they don't currently prioritize AI alignment, existential risk reduction, or the long-term future. Most EAs I've talked to who don't share those priorities say they'd be open to shifting their priorities in that direction in the future, but currently they have unresolved issues with the level of uncertainty and speculation in these fields. Notably, EA is now focusing more and more effort on the sources of unresolved concerns with existential risk reduction, such as our demonstrated ability to predict the long-term future. That work is only beginning, though.

comment by Evan_Gaensbauer · 2019-04-19T04:27:32.280Z · score: 4 (2 votes) · EA · GW

GiveWell's and Open Phil's work wasn't termed 'Cause X,' but I think a lot of the stuff you're pointing to would've started before 'Cause X' was a common term in EA. They definitely qualify. One thing is that GiveWell and Open Phil are much bigger organizations than most in EA, so they are unusually able to pursue these things. So my contention that this kind of research is impractical for most organizations still holds up. It may be falsified in the near future, though. Aside from GiveWell and Open Phil, the organizations that can permanently focus on cause prioritization are:

  • institutes at public universities with large endowments, like the Future of Humanity Institute and the Global Priorities Institute at Oxford University.
  • small, private non-profit organizations like Rethink Priorities.

Honestly, I am impressed and pleasantly surprised that organizations like Rethink Priorities can go from a small team to a growing organization in EA. Cause prioritization is such a niche cause unique to EA that I didn't know if there was hope for it to keep growing sustainably. So far, the growth of the field has proven sustainable. I hope it keeps up.

comment by Milan_Griffes · 2019-04-13T00:29:47.606Z · score: 5 (4 votes) · EA · GW

The Qualia Research Institute is a good generator of hypotheses for Cause X candidates. Here's a recent example (a).