Three levels of cause prioritisation

post by casebash · 2018-05-28T07:26:32.333Z · EA · GW · Legacy · 7 comments

One of the main goals of Effective Altruism is to persuade people to think more deeply about how to prioritise causes. This naturally leads us to ask, "What is meant by cause prioritisation?" and "Which aspect of cause prioritisation is most important?".

(Epistemic status: Speculative, Rough framework, see In Praise of Fake Frameworks [LW · GW])

I'd suggest that we can divide cause prioritisation into three main levels. This won't be particularly neat as we could debate exact categorisations, but it'll suffice for my purposes:

An intervention is a solution to a given problem. For example, distributing bednets is a solution to the problem of malaria. People often don't have attachments to particular interventions, and even when they do, they are usually willing to consider that another intervention might work better.

Specific causes are the level that most charities operate on. For example, the Cancer Council researches cancer and the Against Malaria Foundation seeks to treat malaria. Many altruistic people have a strong emotional attachment to one or more specific causes.

High-level causes are broad categorisations. Trying to prioritise at this level often requires philosophy - e.g. do we have any special duties to our fellow compatriots, or is this irrelevant? Philosophy is simply not something humans are particularly skilled at discussing. Almost everyone who is altruistic has an attachment at this level, and it is often very hard to persuade people to seriously reconsider their views.

I suspect that attempting to persuade people to prioritise causes at a higher level can often be a mistake if they don't already accept that you should prioritise at the lower levels. Emotions play a very strong role in the beliefs that people adopt, and we need to think carefully about how to navigate them. Indeed, discussing prioritisation at too high a level risks inoculating people against lower levels that they would have accepted if presented with them. Further, as soon as we've persuaded someone on a lower level, we've established a foothold that could later be used to attempt to persuade them further. We've reduced the inferential distance: instead of having to persuade them both that we should prioritise charitable donations and that we should apply this principle rather radically, we only have to convince them of the latter. And I suspect that this will be much easier.

If we want to grow the effective altruism movement, we will have to become skilled in persuasion. A large part of this is understanding how people think so that we can avoid triggering emotions that would interfere with their reasoning. I hope that my suggestion of focusing on the lower levels first will help with this.


Comments sorted by top scores.

comment by RomeoStevens · 2018-05-28T17:50:34.617Z · EA(p) · GW(p)

Another way to frame it is thinking about Marr's three levels of analysis: the computational (what are we even trying to do?), the algorithmic (what algorithms/heuristics should we run given we want to accomplish that?), and the implementational (what, concretely, should our next actions be to implement those algorithms in reality?). Cleanly separating which step you are working on prevents confusion.

comment by BenMillwood · 2018-06-03T07:47:44.368Z · EA(p) · GW(p)

I think this framing is a good one, but I don't immediately agree with the conclusion you make about which level to prioritize.

Firstly, consider the benefits we expect from a change in someone's view at each level. Do most people stand to improve their impact the most by choosing the best implementation within their cause area, or switching to an average implementation in a more pressing cause area? I don't think this is obvious, but I lean to the latter.

Higher levels are more generalizable: cross-implementation comparisons are only relevant to people within that cause, whereas cross-cause comparisons are relevant to everyone who shares approximately the same values, so focusing on lower levels limits the size of the audience that can benefit from what you have to say.

Low-level comparisons tend to require domain-specific expertise, which we won't be able to have across a wide range of domains.

I also think there's just a much greater deficit of high-quality discussion of the higher levels. They're virtually unexamined by most people. Speaking personally, my introduction to EA was approximately that I knew I was confused about the medium-level question, so I was directly looking for answers to that: I'm not sure a good discussion of the low-level question would have captured me as effectively.

comment by adamaero · 2018-05-30T03:33:45.688Z · EA(p) · GW(p)

Thank you. I commonly try to say something at a "high-level" (such as the difference between relative and absolute/extreme poverty). Now, instead, I will mention something about distributing mosquito bed nets, steel roofs in Kenya (GiveDirectly) or developing clean meat. I anticipate some questions on that last one :)

comment by Emanuele_Ascani · 2018-05-28T07:49:55.337Z · EA(p) · GW(p)

I want to add something. It has probably been discussed before, but it occurs to me that when thinking about prioritisation in general, it's almost always better to think at the lowest level possible. That's because impact per dollar can only be evaluated for specific interventions, and because causes that don't at first appear particularly cost-effective can hide particular interventions that are. Those interventions could in principle be even more cost-effective than interventions in causes that do appear cost-effective overall. I think high-level cause prioritisation is mostly good for gaining a first, superficial understanding of the promise of a particular class of altruistic interventions.

comment by Flodorner · 2018-05-28T14:35:33.267Z · EA(p) · GW(p)

I disagree. If we are fairly certain that the average intervention in Cause X is 10 times more effective than the average intervention in Cause Y (for comparison, 80,000 Hours currently believes that AI safety work is 1000 times as effective as global health), it seems we should strongly prioritise Cause X. Even if there are some interventions in Cause Y which are more effective than the average intervention in Cause X, finding them is probably as costly as finding the most effective interventions in Cause X (unless there is a specific reason why evaluating cost-effectiveness in Cause X is especially costly, or the distributions of intervention effectiveness are radically different between the two causes). Depending on how much we can improve on our current comparative estimates of cause effectiveness, the potential impact of doing so could be quite high, since it essentially multiplies the effects of our lower-level prioritisation. Therefore, high- to medium-level prioritisation combined with low-level prioritisation restricted to the best causes seems the way to go. On the other hand, it seems at least plausible that we cannot improve our high-level prioritisation significantly at the moment, and should therefore focus on the lower levels within the most effective causes.
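The arithmetic behind this argument can be made concrete with a toy model. All numbers below are invented for illustration (only the 10x gap between cause averages comes from the comment itself): if Cause X's average intervention is 10 times as effective as Cause Y's, then even finding an unusually good intervention within Y (say, 5 times Y's average) still loses to picking an average intervention in X.

```python
# Toy illustration of the comment's argument. Impact figures are in
# arbitrary "impact per dollar" units and are purely hypothetical.

cause_x_avg = 10.0  # assumed average effectiveness in Cause X
cause_y_avg = 1.0   # assumed average effectiveness in Cause Y (10x gap, per the comment)

# Suppose careful low-level prioritisation within Y finds an intervention
# 5x better than Y's average - an optimistic assumption:
best_found_in_y = 5 * cause_y_avg

# Cause-level selection multiplies impact by 10; intervention-level
# selection within Y multiplies it by only 5 in this scenario.
gain_from_cause_choice = cause_x_avg / cause_y_avg          # 10.0
gain_from_intervention_choice = best_found_in_y / cause_y_avg  # 5.0

# So an *average* pick in X still beats the *best* pick found in Y:
assert cause_x_avg > best_found_in_y
```

Under these (hypothetical) numbers, getting the cause level right dominates, which is the comment's point that high-level estimates multiply the effects of all lower-level prioritisation. The conclusion flips only if Y's effectiveness distribution has a tail extreme enough that its best interventions exceed the 10x gap, which is the caveat in the parenthetical.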

comment by Emanuele_Ascani · 2018-05-28T20:41:48.764Z · EA(p) · GW(p)

Yes, maybe I exaggerated by saying "almost always", or at least I have been too vague. If you don't have any idea of specific interventions to evaluate, then a good way to go is to do superficial high-level analyses first and then proceed with lower-level ones. Sometimes the opposite can happen, though, when a particularly promising intervention is found without first investigating its cause area.