Why we look at the limiting factor instead of the problem scale

post by Joey · 2019-01-28T19:12:34.114Z · score: 58 (34 votes) · EA · GW · 8 comments


Scale, or importance, is held as one of the three criteria to consider when evaluating an intervention for promisingness. The idea is that a large problem scale suggests an area will be more effective to work on, assuming it also scores well on the other criteria. Some interventions are predicated on very strong scale arguments, such as the far future or wild animal suffering. However, we (Charity Entrepreneurship) have found that scale specifically is quite a poor indicator of the promisingness of an area.

Organizations tend to be incentivised to scale, both from an impact perspective and from a personal perspective. However, all organizations eventually hit a limiting factor that makes it hard to scale faster. For some organizations, it might be the total scale of the issue. For example, when working on a nearly eradicated disease, the limiting factor might just be how much of the problem is left to deal with. This is a case where the traditional use of scale comes to almost the same result as the limiting factor model: both suggest this would not be a great area to work on, because the problem scale is (currently) quite small.

However, there are many times when the two diverge. Certain issues might have a massive problem scale but be quickly limited by some other factor. GiveWell has talked about surgeries being limited by the supply of surgeons - this is not a problem-scale issue specifically, but it is a limiting factor. A scale model might suggest that if there are a ton of surgeries still to be done, then this is a worthwhile issue to focus on. A limiting factor model would instead suggest that progress would quickly be capped by the number of surgeons. Below is a very simplified comparison.

Scale model

Limiting factor model

As you can see, the results end up being quite different: surgeries are limited by the talent pool far earlier than vaccinations become limited by anything else. The magnitudes of the categories were set to be cross-comparable (e.g. 1 million compares to 10 full-time staff), and both interventions hit several limits before their problem size limit. In this case, it would not even matter if surgeries had 10 times the scale (say, 200 million people affected) if they will be stopped by talent, logistics, and funding before they can even help 20 million people. These numbers are estimates, but given our work and research in these areas, we are confident both interventions will hit a limiting factor far below their problem size (or what is often referred to as 'scale or importance').

A common response is that scale is only one of the factors considered when evaluating an intervention, and that things like a 'logistical limit' fall under tractability/solvability. There are two problems with this. First, tractability is not really used this way at present. Right now, lots of claims are being made along the lines of “cause X should be focused on more due to it having a huge problem size”, with no further reference to tractability. Second, tractability is often seen as the speed at which you will make progress, as opposed to a specific factor that will stop growth from occurring. For example, an intervention could be very shovel-ready, but only at a small scale before its limiting factor comes into play. If we go back to the surgery charity example, it could be that 3 surgeons want to start a charity, and for them it is very tractable and shovel-ready: their marginal effort is great and, at a very limited scale, their solvability rate is very high. The endeavour will nevertheless run into 'scale' issues involving a limiting factor of hiring other surgeons, and this ends up feeling more like a scale issue than a tractability one.

Another common concern is funding limits, which on their own are not hard limits like problem size. However, while something like a funding limit can be improved upon with more fundraising and field building, this is not an easy task. For the money ranking, I do not think you would just put down the amount you have already fundraised for the area, but a reasonable bound on how much could be fundraised, taking into account the current donor space and a reasonable amount of time (e.g. 2-5 years). This can change over time, but so can the size of the problem: factory farming is a growing problem and global poverty a shrinking one, but that does not change the importance of having a sense of their scale.

A claim I hear a lot in the animal space is that wild animal suffering is such a huge-scale problem that we should seriously consider working on it. Many people would suggest that wild animal suffering has a much larger scale than something like vaccinations based on the pure number of beings affected. This claim is definitely true, since there are trillions of wild animals and only ~20 million people in need of any single vaccination. But if we look more closely at its limiting factors, I think this claim is pretty misleading.

The problem size limit is indeed huge. In fact, it has to be cropped, or it would go all the way to the top of this post and the other sections would become impossible to see. However, that is not really what matters. Even if wild animal suffering is a huge problem, and even if there is only a very limited amount of funding and talent that wants to work in the area, you will bump into problems with scaling far before it starts to matter whether there are a billion wild animals or a billion billion. In practical terms, if a charity were founded in each of these areas, the vaccination charity would be able to get to a much larger scale than the wild animal suffering-focused charity could. A claim that “we should work on intervention X due to its massive problem scale” therefore seems quite inaccurate. These sorts of arguments are extremely common for wild animals specifically and more broadly in EA.

Researching a cause's limiting factors generally means more time spent considering the funder and talent space compared to, for example, mapping out the specific number of animals affected by any given intervention. And it usually ends with a fairly different set of interventions looking promising.

Of course, the perspective changes depending on what you are looking to do in a given area. For example, when considering donating to a charity, the main thing GiveWell examines is room for more funding: they are looking for an intervention whose limiting factor to doing more good is funding. Some interventions might be very promising, but due to talent, logistical problems, or problem size limitations, even if GiveWell gave them more funding, they would not necessarily be able to create more impact. Organizations with room for more funding are generally stopped not by other factors but by funding itself. Since GiveWell is a funder, it makes sense for them to take a careful look at room for funding, as that is the limiting factor they can improve. As a charity entrepreneur, what the funding space looks like for a given organization should play a big role in what charity to found. As an employee at an NGO, you mostly have to consider how much of a limiting factor talent is for the organization.

These models are really simplified on both the scale and the limiting factor side, and I think there are other possible ways to use scale differently (or to use tractability in a way that covers some of the same concerns). I do not think 100% of EAs use a simple scale-based way of looking at problems, but I do think a large percentage use a fairly simple “size of the problem” way of considering scale, without thought to the percentage of the problem they can actually solve given the first limiting factor that will stop growth/progress.

8 comments

Comments sorted by top scores.

comment by abrahamrowe · 2019-01-28T22:31:49.742Z · score: 13 (10 votes) · EA · GW

This is cool! There are definitely limiting factors on working on an issue, but that doesn't mean you shouldn't focus on that cause; rather, part of the cost-effectiveness calculation will be how much it costs to raise those limits. In the 1970s and '80s, the talent pool for working on farmed animal advocacy, for example, was much smaller. But if we hadn't worked on it, and built up a better talent pool, brought in more donors, etc., we'd still be in that position today, and wouldn't have the capacity we have now. The scale of a problem is important because it is true independent of the state of the movement. Limiting factors are not: Wild Animal Initiative (where I work), for example, is pursuing academic outreach on wild animal welfare because it will help us address these limits in the long run, by growing the talent pool etc. AI alignment research probably had a talent pool of basically 0 only a few years ago. Does that mean that no one should have started working on it at that point?

Regardless, you can just update your cost-effectiveness estimates by factoring in the costs to raise these limits.

E.g. it currently costs X dollars to help Y wild animals, up to 1000*Y, at which point some limiting factor stops us from helping more wild animals.

We can increase that limit at a cost of Z per Y more animals (perhaps through advocacy to bring in new talent or donors, or to improve the logistical limit).

The real cost-effectiveness is not X/Y dollars per animal throughout; it is:

X/Y per animal, up to 1000*Y animals helped

(X+Z)/Y per animal, beyond 1000*Y
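That piecewise estimate can be sketched in code (all numbers here are hypothetical placeholders, not figures from the comment):

```python
def cost_per_animal(n_helped, x=100.0, y=10.0, z=50.0, limit_multiple=1000):
    """Marginal cost per animal under the piecewise model above.

    x: dollars it currently costs to help y animals
    z: extra dollars (per y animals) to raise the limit once it binds
    limit_multiple: the limiting factor kicks in at limit_multiple * y animals
    All defaults are hypothetical placeholders.
    """
    limit = limit_multiple * y   # e.g. 1000 * Y animals
    if n_helped <= limit:
        return x / y             # below the limit: base cost-effectiveness
    return (x + z) / y           # above it: also pay to raise the limit
```

The point is just that the per-animal cost jumps once the limiting factor binds, rather than staying flat.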

Given this, it is still possible that working on wild animals is really cost-effective. Look at the sheer number of invertebrates negatively impacted by insecticides, for example. If we can develop a tractable intervention in ~2 years to help them, it is possible that in that time, we can spend a little more to improve some of these limits as well, and over the whole period, have a really cost-effective intervention overall.

Similarly, in your surgery example, when you hit a limit on the surgeries you can provide due to the number of surgeons, you can pay more to train more surgeons (or address the limit however you're able to). Obviously this lowers the cost-effectiveness, but for many interventions, it still might be a good option at the higher cost.

I guess my thought is: the problem scale definitely is super important. Limiting factors matter because they change the cost-effectiveness. But since they are mutable, they shouldn't be viewed as hard barriers to working on an issue. Regardless of what the limits are, or what the scale is, what we actually should be looking at is the cost-effectiveness of improving the issue; limiting factors are a consideration within that, as is scale, since scale bounds how long a given cost-effectiveness stays applicable. For example, including the costs to increase limits, say in the future we can help farmed animals at $Z per animal and wild animals at $Z/3 per animal; then we might want to help wild animals until we run out of wild animals to help, and then focus on the next best thing, which might be farmed animals (obviously an oversimplified example).
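The closing example amounts to a greedy allocation: fund the cheapest option up to its scale limit, then move to the next best thing. A minimal sketch (all names, caps, and prices below are hypothetical):

```python
def allocate(budget, options):
    """Greedily spend a budget across options sorted by cost per animal.

    options: list of (name, cost_per_animal, max_animals_helpable) tuples;
    the cap stands in for each option's scale/limiting-factor ceiling.
    Returns a dict of animals helped per option.
    """
    helped = {}
    for name, cost, cap in sorted(options, key=lambda o: o[1]):
        n = min(cap, budget / cost)  # helpable within both budget and cap
        if n <= 0:
            break
        helped[name] = n
        budget -= n * cost
    return helped

# e.g. wild animals at $1 each (capped at 600 by a limiting factor),
# farmed animals at $3 each
result = allocate(1000, [("farmed", 3.0, 500), ("wild", 1.0, 600)])
```

With these made-up figures, the cheaper wild-animal option is filled to its cap first and the remaining budget flows to farmed animals.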

comment by Joey · 2019-02-02T01:35:15.148Z · score: 7 (5 votes) · EA · GW

Hey Abraham,

The end goal of any piece of evaluation criteria is to best predict “good done”. I broadly agree that a single criterion is unlikely to rule an intervention fully in or out (including limiting factor - it was one of four in our system). If we knew a criterion that powerful, there would be no need for complex evaluation.

Although limiting factor is not a pure hard limit, I do not think this changes its usefulness much. An intervention might be low evidence, and in theory multiple RCTs could be done to improve this; but in practice, if there is, say, a limiting factor on funding (such that multiple RCTs could not be funded), the intervention might remain low-evidenced indefinitely, even though evidence is not, in theory, a movement-dependent factor. It seems fairly clear that, all things being equal, running an intervention will be easier than running an equivalent intervention that also requires you to build a field of talent or otherwise work on a limiting factor.

In principle I think this could be put into a more numerical form (e.g. included in a CEA), but in practice this has not been done. Historically, maybe the closest is the different levels of funding gaps that GiveWell has put for their top charities, but that mostly considers a single possible limiting factor (funding). I would love to see more models of limiting factors and think it would be a natural next step in the current EA talent vs funding conversations.

A different way to think about this question is: do we think problem scale or limiting factor is the better predictor of areas where the most good can be done? I pretty strongly disagree that problem scale is more important than the limiting factor that will hit an intervention. Theoretically, scale of the problem is a harder limit, but that does not really matter if in practice an intervention is never capped by it. We ended up looking at quite a number of charities to consider what was stopping them (including GiveWell and ACE recommendations), and none of them seemed to be capped by problem scale; they had all been stopped by other limiting factors far before that became an issue (for example, with AMF it was funding and logistical bottlenecks, not the number of people with malaria). I think this is even true for the specific case of wild animal suffering interventions. The absolute number of bugs does not matter much when considering ethical pest control, so much as the density per hectare of field or the available funding for a humane insecticides charity. You could imagine a world where the bug populations of colder locations (such as Canada and Russia) were close to 0, and it would do very little to affect the estimated good done - broadly because there is a ton of work to do in warmer locations before one would expand to Canada, and many limiting factors would likely hit before expanding that far. How soon these limits hit would be more predictive of impact than whether there were twice or half as many bugs in the world as there are now.

I think historical evidence like “if this had not been done, X would never have happened” is not a very strong argument unless the research is done systematically and compares both the hits and the misses that occurred (e.g. there were a lot of issues that people attempted to build fields around at that same point in time but that never got traction). To take a clearer example: you could look at a friend who won the lottery, and although he clearly benefited from his ticket, it still would have been the wrong call from an expected value perspective to buy it, and it certainly would not suggest you should buy a lottery ticket; we have to be careful of survivorship bias. Mainly we are looking at factors that are predictive of something having the most impact, and singular examples do not tell us much about field building vs making quicker progress in a more established field. Although I would be really interested in more systematic research in this area.

comment by abrahamrowe · 2019-02-02T19:59:48.503Z · score: 4 (2 votes) · EA · GW

That makes a lot of sense. Maybe one way of framing scale + cost-effectiveness could be "how long will a particular cost-effectiveness be applicable in the real world?", and then two ways of describing that cost-effectiveness are either incorporating costs to raise these limits or not.

In either case, I definitely agree that these should be considered. One other thought - it seems like in certain ways, a donation to a charity will account for their efforts to raise limits, to some extent. I don't know enough about how ACE does cost-effectiveness analysis (and obviously the degree to which this information is incorporated would definitely depend on that), but I could imagine that if you make a statement like "a donation of $100 to The Humane League will help reduce the suffering of X animals", in a complete assessment of that donation, some of that funding would be going to their development department (raising the amount of funding available), some might be going to volunteer cultivation (maybe volunteer capacity is another limiting factor).

So the issue is more that while the outcome per dollar we are looking at is based on historical performance, over time that outcome per dollar is actually worse because some of that funding was going towards raising limits, and actually would need to be applied to animals not yet helped, if that makes sense.

Either way, I'm really interested in this - since reading it, I've been thinking of how I can incorporate this kind of thinking about cost-effectiveness into my organization - it seems tricky, but definitely worth doing a lot more of. Thanks for posting it!

comment by aarongertler · 2019-01-29T01:11:20.935Z · score: 10 (7 votes) · EA · GW

Good piece!

When you use phrases like "we have found" in pieces on the Forum, I'd recommend you identify your organization right away. Someone who joins the Forum and then reads this without knowing that you work for Charity Entrepreneurship might be quite confused.

(I think it's fine to write very technical pieces for the Forum, even if they risk confusing people, because it's important to have high-fidelity work that isn't constrained by a need to re-explain the basics. Noting which organizations we represent seems not to have this downside, though, especially since the names and staff members of EA orgs change pretty often.)

comment by Joey · 2019-01-30T23:16:16.867Z · score: 4 (3 votes) · EA · GW

Good idea, I added CE to the first use of "we".

comment by Michael_Wiebe · 2019-02-18T02:24:50.321Z · score: 3 (2 votes) · EA · GW

Good post!

First, tractability is not really currently used in this way. Right now, lots of claims are being made along the lines of “cause X should be focused on more due to it having a huge problem size” with no further reference to tractability.

Charitably, this is an "other things equal" claim. But I agree, it seems like people have just forgotten about tractability.

comment by Davidmanheim · 2019-02-04T10:12:08.986Z · score: 2 (1 votes) · EA · GW

This is spot-on, and as a matter of decision theory, the question is never "which outcome matters most," but is rather "what action has the highest impact." This incorporates the economic issues with marginal investment, as well as the issues with constraints discussed above. I'd recommend Tiago Forte's series explaining the "Theory of Constraints" (ToC) for a better way to formalize the intuitive model presented in the post; https://praxis.fortelabs.co/theory-of-constraints-101-table-of-contents-8bbb6627915b/

As applied to EA, this notes that we should build clear system models for interventions in order to identify how to help. The ToC model notes that effort expended to help at any point of the system other than the limiting factor is wasted - double the funding but don't fix the logistic constraints on spending it and you've helped not-at-all. (In fact, you might have made the problem worse by increasing the pressure on the logistics management!)

comment by saulius · 2019-02-20T20:45:32.951Z · score: 1 (1 votes) · EA · GW

The way I see it, if a cause is big in scale and few people are working on it, there is a significant probability of finding some low-hanging fruits within it. So looking at the scale is useful for determining in which cause areas to look for cost-effective interventions. However, once you have some idea of how cost-effective interventions are, looking at the scale or neglectedness is not very useful.

WAS (Wild Animal Suffering) is a huge problem space, and we are only beginning to explore possible interventions. That doesn't mean that founding WAS charities right now is a good idea. However, it does suggest that searching for effective WAS interventions might be worthwhile.