What is a 'broad intervention' and what is a 'narrow intervention'? Are we confusing ourselves?

post by Robert_Wiblin · 2015-12-19T16:12:49.618Z · 2 comments



Across the community it is common to hear distinctions drawn between ‘broad’ and ‘narrow’ interventions, though less so lately than in 2013/2014. For some imperfect context, see this blog post by Holden on 'flow-through effects'. Typical classifications people might make would be:

All I want to do here is draw attention to the fact that there are multiple distinctions that should be drawn out separately so that we can have more productive conversations about the relative merits and weaknesses of different approaches. One way to do this is to make causal diagrams. I’ve made two below for illustrative purposes.

Below is a list of some possible things we might mean by narrow and broad, or related terms.


Other quick observations about this:

As they say, more research is needed.




comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2015-12-22T17:43:46.438Z

Thanks Rob, I think this is a valuable space to explore. I like what you've written. I'm going to give an assortment of thoughts in this space.

I have tended to refer to the long vs short path to impact as "indirect vs direct", and the many-paths-to-impact vs few-paths-to-impact as "broad vs narrow/targeted". I'm not sure how consistently these terms are understood. Another distinction that comes up is the degree of speculativeness of the intervention.

There are some correlations between these different distinctions:

  • Indirect interventions have more opportunities to be broad than direct ones
  • Indirect interventions are typically more speculative than direct ones
  • but broad interventions are often more robust (less speculative) than narrow ones

I think it's typically easier to get a good understanding of effectiveness for more direct and more narrow interventions. I therefore think they should normally be held to a higher standard of proof, since the cost of finding that proof shouldn't be prohibitive in the way it might be for broader interventions.

I'm particularly suspicious of indirect, narrow interventions. Here there is a single chain leading to the intended effect, with many steps in the chain. This means that if we've made a mistake in our reasoning at any step, the impact of the entire thing could collapse.
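The single-chain worry can be made concrete with a toy calculation (my illustration, with made-up numbers, not from the comment): treating each step or path as independent, a narrow chain needs every step to hold, while a broad intervention only needs at least one of its paths to succeed.

```python
# Toy model (illustrative assumptions only): independent step/path probabilities.
from math import prod

def chain_holds(step_probs):
    """Narrow/indirect: a single chain, so every step must hold (AND)."""
    return prod(step_probs)

def any_path_holds(path_probs):
    """Broad: many independent paths, so at least one must succeed (OR)."""
    return 1 - prod(1 - p for p in path_probs)

direct = chain_holds([0.9, 0.9])      # two solid steps:  ~0.81
narrow = chain_holds([0.8] * 6)       # six decent steps: ~0.26
broad = any_path_holds([0.3] * 4)     # four weak paths:  ~0.76
```

On these invented numbers, a long chain of individually plausible steps ends up less likely to pay off than a handful of individually weak but independent paths, which matches the intuition above that broad interventions are often more robust than narrow ones.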

comment by RomeoStevens · 2015-12-20T21:55:03.984Z

The outside view should in theory inform how fragile we can afford to build our inference chains. In practice it is likely very hard to establish a base rate, due to the issue you raise of not knowing where to carve: "steps of uniform likelihood" isn't an operation we can apply to a data set of past results. If we're stuck with expert judgement, that limits how many cases we can evaluate and how confident we can be in the results. Still better than nothing, though.