# Formalizing the cause prioritization framework

post by Michael_Wiebe · 2019-11-05T18:09:24.746Z · score: 24 (17 votes) · EA · GW · 16 comments

## Contents

  A graphical approach
  Implications
  Conclusion
  Notes


When prioritizing causes, what we ultimately care about is how much good we can do per unit of resources. In formal terms, we want to find the causes with the highest marginal utility per dollar, MU/$ (or marginal cost-effectiveness). The Importance-Tractability-Neglectedness (ITN) framework has been used as a way of calculating MU/$ by estimating its component parts. In this post I discuss some issues with the current framework, propose a modified version, and consider a few implications.

80,000 Hours defines ITN as follows:

• Importance = utility gained / % of problem solved
• Tractability = % of problem solved / % increase in resources
• Neglectedness = % increase in resources / extra $

With these definitions, multiplying all three factors gives us utility gained / extra $, or MU/$ (as the middle terms cancel out). However, I will make two small amendments to this setup. First, it seems artificial to have a term for "% increase in resources", since what we care about is the per-dollar effect of our actions.[1] Hence, we can instead define tractability as "% of problem solved / extra $", and eliminate the third factor from the main definition. So to calculate MU/$, we simply multiply importance and tractability:

MU/$ = Importance × Tractability = (utility gained / % of problem solved) × (% of problem solved / extra $)

This defines MU/$ as a function of the amount of resources allocated to a problem, which brings me to my second amendment. Apart from the above definition, 80k defines 'neglectedness' informally as the amount of resources allocated to solving a problem. This definition is confusing, because the everyday meaning of 'neglected' is "improperly ignored". To say that a cause is neglected intuitively means that it is ignored relative to its cost-effectiveness. But if neglectedness is supposed to be a proxy for cost-effectiveness, this everyday meaning is circular. And really, how useful is the advice to focus on causes that have been improperly ignored? This should go without saying.

I suggest we instead use "crowdedness" to mean the amount of resources allocated to a problem. This captures intuitions about diminishing returns (other things equal, a more crowded cause is less cost-effective), uses an absolute rather than a relative standard, and avoids the problem of having the technical definition conflict with the everyday meaning.

Thus, our revised framework is now ITC:

• Importance = utility gained / % of problem solved
• Tractability = % of problem solved / extra $
• Crowdedness = $ allocated to the problem

So how does crowdedness fit into this setup, if it's not part of the main definition? Intuitively, tractability will be a function of crowdedness: the % of the problem solved per dollar will vary depending on how many resources are already allocated. This is the phenomenon of diminishing marginal returns, where the first dollar spent on a problem is more effective in solving it than is the millionth dollar. Hence, crowdedness tells us where we are on the tractability function.
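To make the pieces concrete, here is a minimal sketch. The hyperbolic curve below is a made-up stand-in for "% of problem solved per extra dollar", chosen only because it exhibits diminishing marginal returns; `scale` is an arbitrary parameter, not anything from the post.

```python
def tractability(dollars_allocated: float, scale: float = 1e6) -> float:
    """% of problem solved per extra dollar, decreasing in crowdedness.

    Hypothetical diminishing-returns curve for illustration only.
    """
    return 1.0 / (dollars_allocated + scale)

def mu_per_dollar(importance: float, dollars_allocated: float) -> float:
    """MU/$ = Importance x Tractability(crowdedness)."""
    return importance * tractability(dollars_allocated)

# Crowdedness tells us where we are on the tractability function:
# the millionth dollar buys less progress than the first.
assert mu_per_dollar(100.0, 0.0) > mu_per_dollar(100.0, 1_000_000.0)
```

The point of the decomposition survives any particular functional form: importance rescales the tractability curve, and crowdedness picks out the point on it where the next dollar lands.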

## A graphical approach

Let's see how this works graphically. First, we start with tractability as a function of dollars (crowdedness), as in Figure 1. With diminishing marginal returns, "% solved/$" is decreasing in resources. Next, we multiply tractability by importance to obtain MU/$ as a function of resources, in Figure 2. Assuming that Importance = "utility gained/% solved" is a constant[2], all this does is change the units on the y-axis, since we're multiplying a function by a constant.

Now we can clearly see the amount of good done for an additional dollar, for every level of resources invested. To decide whether we should invest more in a cause, we calculate the current level of resources invested, then evaluate the MU/$ function at that level of resources. We do this for all causes, and allocate resources to the highest MU/$ causes, ultimately equalizing MU/$ across all causes as diminishing returns take effect. (Note the similarity to the utility maximization problem from intermediate microeconomics, where you choose consumption of goods to maximize utility, given their prices and subject to a budget constraint.)

While MU/$ is sufficient for prioritizing across causes, we can also look at total utility, by integrating the MU/$ function over resources spent. Figure 3 plots the total utility gained from spending on a problem, as a function of resources spent. Note that the slope is equal to MU/$, which is decreasing in $.

## Implications

(1) All three factors in the ITC framework are necessary to draw a conclusion about which cause is best. Consider this passage from the 80k article:

> [M]ass immunisation of children is an extremely effective intervention to improve global health, but it is already being vigorously pursued by governments and several major foundations, including the Gates Foundation. This makes it less likely to be a top opportunity for future donors.

This last sentence is not strictly true. To be precise, all we can say is that, other things equal, a cause with more resources has lower MU/$. That is, for two causes with the same MU/$ function, the cause with higher resources will be farther along the function, and hence have a lower MU/$. If other things are not equal, the cause with more resources may have a higher or lower MU/$. (And generally, if a cause scores low on one of the three factors, it can still have the highest MU/$, through high scores on one or both of the other two factors.)
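The allocation rule and the point in (1) can be illustrated numerically: repeatedly give the next small chunk of budget to whichever cause currently has the highest MU/$. With a shared, hypothetical diminishing-returns tractability curve (the same invented hyperbola as before), the more crowded cause starts farther along the curve and ends up receiving less, with MU/$ approximately equalized at the end.

```python
import heapq

def allocate(importances, initial_crowdedness, budget, step=1000.0):
    """Greedily fund whichever cause currently has the highest MU/$."""
    def tractability(x, scale=1e6):
        # hypothetical diminishing-returns curve: % solved per extra $
        return 1.0 / (x + scale)

    spend = [0.0] * len(importances)
    crowd = list(initial_crowdedness)
    # max-heap keyed on current MU/$ (negated for heapq's min-heap)
    heap = [(-imp * tractability(c), i)
            for i, (imp, c) in enumerate(zip(importances, crowd))]
    heapq.heapify(heap)
    remaining = budget
    while remaining > 0:
        _, i = heapq.heappop(heap)
        d = min(step, remaining)
        spend[i] += d
        crowd[i] += d
        remaining -= d
        heapq.heappush(heap, (-importances[i] * tractability(crowd[i]), i))
    return spend

# Two causes with equal importance but different crowdedness:
# the less crowded one absorbs more of the budget.
spending = allocate([100.0, 100.0], [0.0, 500_000.0], budget=1_000_000.0)
```

This is only a discretized version of the marginal-utility argument, not anything the post prescribes; with diminishing returns the greedy rule converges on the equal-MU/$ allocation from intermediate micro.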

(2) With this setup, we can clearly see how MU/$ depends on context (in particular, resources spent). To make up a hypothetical example, AI risk might have had the highest MU/$ in 2013, but the funding boost from OpenAI pushed it down the tractability curve to a lower value of MU/$. Hence, claims about "cause C is the highest priority" should be framed as "cause C is the highest priority, given current funding levels". We should expect the "best" cause (defined as highest MU/$) to change over time as spending changes, which we could indicate by using a time subscript, MU_t/$.

(3) This model also incorporates Joey Savoie's argument [EA · GW] about using the limiting factor instead of importance. Here, a limiting factor would show up as strongly diminishing returns in the tractability function at some level of spending. That is, the percent of the problem solved per dollar would drop off sharply after spending some level of resources on the problem.
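A limiting factor of this kind can be sketched as a tractability curve with a cliff. The threshold and the before/after rates below are invented numbers, used only to show the sharp drop-off in "% solved per dollar" once the bottleneck binds.

```python
def tractability_with_bottleneck(dollars, threshold=1_000_000.0,
                                 rate=1e-6, post_rate=1e-9):
    """% of problem solved per extra dollar.

    Hypothetical piecewise curve: returns fall off sharply once
    spending passes the level where the limiting factor binds.
    """
    return rate if dollars < threshold else post_rate

# Before the bottleneck binds, a dollar buys vastly more progress:
assert tractability_with_bottleneck(500_000.0) > \
    100 * tractability_with_bottleneck(2_000_000.0)
```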

(4) The systemic change critique [EA · GW] argues that the standard cause prioritization framework cannot handle increasing marginal returns. For example, large-scale political reform yields no results until a critical mass is reached and massive change occurs. But in fact this is easily modeled as a tractability function (Fig. 1) that is increasing for some part of its domain. That is, when nearing the critical mass, each additional dollar solves a larger percent of the problem than the previous dollar. While this case requires a different decision rule than "allocate resources to the cause with the highest MU/$", it is a straightforward extension of the standard model.

## Conclusion

I propose a model of cost-effectiveness using Importance, Tractability, and Crowdedness. Tractability is a function of crowdedness, and multiplying importance and tractability gives us marginal utility per dollar.

So is the 80k model wrong? No. I simply find it more intuitive to think about tractability as "% of problem solved / extra $" instead of "% of problem solved / % increase in resources", and this is the resulting model.

## Notes

[1] Also, the Neglectedness term "% increase in resources / extra $" is always equal to 1/resources, which seems a bit redundant. That is, given resources R, an extra dollar always increases your resources by 1/R. E.g., given $100, an extra dollar increases your resources by 1/100 = 1%.

[2] This seems to be a definitional issue: we can define importance as a constant, so that "utility gained / % of problem solved" is a constant function of "% of problem solved". That is, solving 1% of the problem just means gaining 1% of the total utility from solving the entire problem.

comment by JustinShovelain · 2019-11-13T20:04:06.660Z · score: 7 (5 votes) · EA(p) · GW(p)

Nice article Michael. Improvements to EA cause prioritization frameworks can be quite beneficial and I'd like to see more articles like this.

One thing I focus on when trying to make ITC more practical is ways to reduce its complexity even further. I do this by looking for which factors intuitively seem to have wider ranges in practice. Impact can vary by factors of millions or trillions, from harmful to helpful, from negative billions to positive billions. Tractability can vary by factors of millions, from negative millionths to positive digits. The Crowdedness component, which generally implies diminishing or increasing marginal returns, varies only by factors of thousands, from negative tens to positive thousands.

In summary the ranges are intuitively roughly:

• Importance (util/%progress): (-10^9, 10^9)
• Tractability (%progress/$): (-10^-6, 1)
• Crowdedness adjustment factor ($/$in): (-10, 10^3)

Let's assume interventions have randomly associated with them samples from probability distributions over these ranges. Roughly speaking, then, we should care about these factors based on the degree to which they help us clearly see which intervention is better than another. The extent to which these let us distinguish between the value of interventions is based on our uncertainty per factor for each intervention and how the value depends on each factor. Because the value is equal to Importance*Tractability*CrowdednessAdjustmentFactor, each factor is treated the same (there is abstract symmetry). Thus we only need to consider how big each factor range is in terms of our typical intervention factor uncertainty. This then tells us how useful each factor is at distinguishing interventions based on importance. Pulling numbers out of the intuitive hat for the typical intervention uncertainty I get:

• Importance (util/%progress uncertainty unit): 10
• Tractability (%progress/$ uncertainty unit): 10^-6
• Crowdedness adjustment factor ($/$in uncertainty unit): 1

Dividing the ranges into these units lets us measure the distinguishing power of each factor:

• Importance normalized range (distinguishing units): 10^8
• Tractability normalized range (distinguishing units): 10^6
• Crowdedness adjustment factor normalized range (distinguishing units): 10^3
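The arithmetic behind these normalized ranges can be written out directly; all of the numbers are the intuitive guesses stated above, not measured quantities.

```python
# Each factor's "distinguishing power" = (range width) / (typical uncertainty).
ranges = {
    "importance": (-1e9, 1e9),         # util / %progress
    "tractability": (-1e-6, 1.0),      # %progress / $
    "crowdedness_adj": (-10.0, 1e3),   # $ / $in
}
uncertainty = {
    "importance": 10.0,
    "tractability": 1e-6,
    "crowdedness_adj": 1.0,
}

distinguishing_units = {
    k: (hi - lo) / uncertainty[k] for k, (lo, hi) in ranges.items()
}
# Yields roughly 10^8, 10^6, and 10^3 respectively: I > T > C.
```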

As a rule of thumb then it looks like focusing on Importance is better than Tractability is better than Crowdedness. This lends itself to a sequence of improving heuristics for comparing the value of interventions then:

• Importance only
• Importance and Tractability
• The full ITC framework

(The above analysis is only approximately correct and will depend on details like the precise probability distribution over interventions you're comparing and your uncertainty distributions over interventions for each factor.

The ITC framework can be further extended in several ways, like: making precise the curves relating interventions to the factors of ITC, extending the detail of the analysis of resources to other possible bottlenecks like time and people, incorporating the ideas of comparative advantage and marketplaces, .... I hope someone does this!)

(PS I'm thinking of making this into a short post and enjoy writing collaborations so if someone is interested send me an EA forum message.)

comment by Michael_Wiebe · 2019-11-16T21:49:05.067Z · score: 1 (1 votes) · EA(p) · GW(p)

Hi Justin, thanks for the comment.

I'm in favor of reducing the complexity of the framework, but I'm not sure if this is the right way to do it. In particular, estimating "importance only" or "importance and tractability only" isn't helpful, because all three factors are necessary for calculating MU/$. A cause that scores high on I and T could be low MU/$ overall, due to being highly crowded. Or is your argument that the variance (across causes) in crowdedness is negligible, and therefore we don't need to account for diminishing returns in practice?

comment by JustinShovelain · 2019-11-18T12:53:03.267Z · score: 1 (1 votes) · EA(p) · GW(p)

My argument is about the latter: the variances decrease in size from I to T to C. The unit analysis still works because the other parts are still implicitly there, but treated as constants when dropped from the framework.

comment by Michael_Wiebe · 2019-11-24T18:10:49.862Z · score: 1 (1 votes) · EA(p) · GW(p)

I guess I'm expecting diminishing returns to be an important factor in practice, so I wouldn't place much weight on an analysis that excludes crowdedness.

comment by Stefan_Schubert · 2019-11-05T19:02:44.819Z · score: 7 (5 votes) · EA(p) · GW(p)

I think some images don't display for me. This is what it looks like for me:

comment by Michael_Wiebe · 2019-11-06T04:59:27.646Z · score: 4 (3 votes) · EA(p) · GW(p)

For future reference, this is what worked for me, using Dropbox:

• Open in incognito browser (regular browser doesn't work)
comment by Stefan_Schubert · 2019-11-06T16:16:58.009Z · score: 2 (1 votes) · EA(p) · GW(p)

I still can't see them. This is what it looks like now.

As mentioned here [EA(p) · GW(p)], copying images from Google Doc and pasting them seems to work reliably.

It would be good if there were more visible guides on how to post, as discussed in that thread.

comment by Stefan_Schubert · 2019-11-06T17:41:47.614Z · score: 2 (1 votes) · EA(p) · GW(p)

comment by Michael_Wiebe · 2019-11-07T00:08:43.948Z · score: 1 (1 votes) · EA(p) · GW(p)

The google docs method worked, but you can't control image size.

I'm now using imgur, which should be recommended somewhere here for authors.

comment by AnonymousEAForumAccount · 2019-11-06T17:32:29.560Z · score: 1 (1 votes) · EA(p) · GW(p)

comment by Pablo_Stafforini · 2019-11-06T01:09:34.496Z · score: 4 (3 votes) · EA(p) · GW(p)

Clicking on 'Open Image in New Tab' indicates that the image is hosted by Google Photos, so I suspect the privacy settings are preventing us from seeing them. Maybe Google read Rob's angry post and have now taken things to the other extreme. :P

comment by anonymous_ea · 2019-11-05T20:59:00.910Z · score: 3 (2 votes) · EA(p) · GW(p)

None of the images display for me either. This is what it looks like for me:

> Let's see how this works graphically. First, we start with tractability as a function of dollars (crowdedness), as in Figure 1. With diminishing marginal returns, "% solved/$" is decreasing in resources. Next, we multiply tractability by importance to obtain MU/$ as a function of resources, in Figure 2. Assuming that Importance = "utility gained/% solved" is a constant[2], all this does is change the units on the y-axis, since we're multiplying a function by a constant.
>
> Now we can clearly see the amount of good done for an additional dollar, for every level of resources invested. To decide whether we should invest more in a cause, we calculate the current level of resources invested, then evaluate the MU/$ function at that level of resources. We do this for all causes, and allocate resources to the highest MU/$ causes, ultimately equalizing MU/$ across all causes as diminishing returns take effect. (Note the similarity to the utility maximization problem from intermediate microeconomics, where you choose consumption of goods to maximize utility, given their prices and subject to a budget constraint.)

comment by anonymous_ea · 2019-11-06T17:37:31.826Z · score: 1 (1 votes) · EA(p) · GW(p)

Update: The pictures load for me now

comment by AlexanderSaeri · 2019-11-05T22:59:48.108Z · score: 3 (2 votes) · EA(p) · GW(p)

Michael, thanks for this post. I have been following the discussion about INT and prioritisation frameworks with interest.

Exactly how should I apply the revised framework you suggest? There are a number of equations, discussions of definitions and circularities in this post, but a (hypothetical?) worked example would be very useful.

comment by Michael_Wiebe · 2019-11-07T01:45:57.033Z · score: 1 (1 votes) · EA(p) · GW(p)

Yes, the difficult part is applying the ITC framework in practice; I don't have any special insight there. But the goal is to estimate importance and the tractability function for different causes.

You can see how 80k tries to rank causes here.

comment by Pablo_Stafforini · 2019-11-06T01:08:51.536Z · score: 2 (1 votes) · EA(p) · GW(p)