Minimal-trust investigations

post by Holden Karnofsky (HoldenKarnofsky) · 2021-11-23T18:02:46.511Z · 6 comments

Contents

  Example minimal-trust investigations
  Detailed example from GiveWell
  Other examples of minimal-trust investigations
  Navigating trust
  Conclusion
  Footnotes

This piece is about the single activity ("minimal-trust investigations") that seems to have been most formative for the way I think.

Most of what I believe is based on trusting other people.

For example:

I think it's completely reasonable to form the vast majority of one's beliefs based on trust like this. I don't really think there's any alternative.

But I also think it's a good idea to occasionally do a minimal-trust investigation: to suspend my trust in others and dig as deeply into a question as I can. This is not the same as taking a class, or even reading and thinking about both sides of a debate; it is always enormously more work than that. I think the vast majority of people (even within communities that have rationality and critical inquiry as central parts of their identity) have never done one.

Minimal-trust investigation is probably the single activity that's been most formative for the way I think. I think its value is twofold:

In this piece, I will:

Example minimal-trust investigations

The basic idea of a minimal-trust investigation is suspending one's trust in others' judgments and trying to understand the case for and against some claim oneself, ideally to the point where one can (within the narrow slice one has investigated) keep up with experts.[2] It's hard to describe the idea much more precisely in the abstract, so next I'll give a detailed example.

Detailed example from GiveWell

I'll start with the case that long-lasting insecticide-treated nets (LLINs) are a cheap and effective way of preventing malaria. I helped investigate this case in the early years of GiveWell. My discussion will be pretty detailed (but hopefully skimmable), in order to give a tangible sense of the process and twists/turns of a minimal-trust investigation.

Here's how I'd summarize the broad outline of the case that most moderately-familiar-with-this-topic people would give:[3]

When I did a minimal-trust investigation, I developed a picture of the situation that is pretty similar to the above, but with some important differences. (Of all the minimal-trust investigations I've done, this is among the cases where I learned the least, i.e., where the initial / conventional wisdom picture held up best.)

First, I read the Cochrane review in its entirety and read many of the studies it referenced as well. Some were quite old and hard to track down. I learned that:

This opened up a lot of further investigation, in an attempt to determine whether modern-day LLIN distributions have similar effects to those observed in the studies.

My current (now outdated, because it's based on work I did a while ago) understanding of LLINs has a lot of doubt in it:

But all in all, the case for LLINs holds up pretty well. It's reasonably close to the simpler case I gave at the top of this section.

For GiveWell, this end result is the exception, not the rule. Most of the time, a minimal-trust investigation of some charitable intervention (reading every study, thinking about how they might mislead, tracking down all the data that bears on the charity's activities in practice) is far more complicated than the above, and leads to a lot more doubt.

Other examples of minimal-trust investigations

Some other domains I've done minimal-trust investigations in:

And I wish I had time to try out minimal-trust investigations in a number of other domains, such as:

Minimal-trust investigations look different from domain to domain. I generally expect them to involve a combination of "trying to understand or build things from the ground up" and "considering multiple opposing points of view and tracing disagreements back to primary sources, objective evidence, etc." As stated above, an important property is trying to get all the way to a strong understanding of the topic, so that one can (within the narrow slice one has investigated) keep up with experts.[8]

I don't think exposure to minimal-trust investigations ~ever comes naturally via formal education or reading a book, though I think it comes naturally as part of some jobs.

Navigating trust

Minimal-trust investigations are extremely time-consuming, and I can't do them that often. 99% of what I believe is based on trust of some form. But minimal-trust investigation is a useful tool in deciding what/whom/when/why to trust.

Trusting arguments. Doing minimal-trust investigations in some domain helps me develop intuitions about "what sort of thing usually checks out" in that domain. For example, in social sciences, I've developed intuitions that:

Trusting people. When trying to understand topic X, I often pick a relatively small part of X to get deep into in a minimal-trust way. I then look for people who seem to be reasoning well about the part(s) of X I understand, and put trust in them on other parts of X. I've applied this to hiring and management as well as to forming a picture of which scholars, intellectuals, etc. to trust.

There's a lot of room for judgment in how to do this well. It's easy to misunderstand the part of X I've gotten deep into, since I lack the level of context an expert would have, and there might be some people who understand X very well overall but don't happen to have gotten into the weeds in the subset I'm focused on. I usually look for people who seem thoughtful, open-minded and responsive about the parts of X I've gotten deep into, rather than agreeing with me per se.

Over time, I've developed intuitions about how to decide whom to trust on what. For example, I think the ideal person to trust on topic X is someone who combines (a) obsessive dedication to topic X, with huge amounts of time poured into learning about it; (b) a tendency to do minimal-trust investigations themselves, when it comes to topic X; (c) a tendency to look at any given problem from multiple angles, rather than using a single framework, and hence an interest in basically every school of thought on topic X. (For example, if I'm deciding whom to trust about baseball predictions, I'd prefer someone who voraciously studies advanced baseball statistics and watches a huge number of baseball games, rather than someone who relies on one type of knowledge or the other.)

Conclusion

I think minimal-trust investigations tend to be highly time-consuming, so it's impractical to rely on them across the board. But I think they are very useful for forming intuitions about what/whom/when/why to trust. And I think the more different domains and styles one gets to try them for, the better. This is the single practice I've found most (subjectively) useful for improving my ability to understand the world, and I wish I could do more of it.


Footnotes

  1. I do recall some high-level points that seem compelling, like "No one disagrees that if you just increase the CO2 concentration of an enclosed area it'll warm up, and nobody disagrees that CO2 emissions are rising." Though I haven't verified either of those claims beyond noting that they don't seem to attract much disagreement. And as I wrote this, I was about to add "(that's how a greenhouse works)," but that's not actually how a greenhouse works. And of course these points alone aren't enough to believe the temperature is rising - you also need to believe there aren't a bunch of offsetting factors - and they certainly aren't enough to believe in official forecasts, which are far more complex.

  2. I think this distinguishes minimal-trust reasoning from e.g. naive epistemology.

  3. This summary is slightly inaccurate, as I'll discuss below, but I think it is the case most commonly cited by people casually interested in this topic.

  4. From GiveWell, a quote from the author of the Cochrane review: "To the best of my knowledge there have been no more RCTs with treated nets. There is a very strong consensus that it would not be ethical to do any more. I don't think any committee in the world would grant permission to do such a trial." Though I last worked on this in 2012 or so, and the situation may have changed since then. 

  5. More on insecticide resistance at https://www.givewell.org/international/technical/programs/insecticide-treated-nets/insecticide-resistance-malaria-control.  

  6. See https://www.givewell.org/international/technical/programs/insecticide-treated-nets#Usage.  

  7. See https://www.givewell.org/charities/amf#What_proportion_of_targeted_recipients_use_LLINs_over_time.  

  8. I think this distinguishes minimal-trust reasoning from e.g. naive epistemology.

6 comments


comment by MichaelPlant · 2021-11-24T12:44:04.148Z

I have to say, I rather like putting a name to this concept. I know this wasn't the upshot of the article, but it immediately struck me, on reading this, that it would be a good idea for the effective altruist community to engage in some minimal-trust investigations of each other's analyses and frame them as such.

I'm worried about there being too much deference and actually not very much criticism of the received wisdom. Part of the issue is that to criticise the views of smart, thoughtful, well-intentioned people in leadership positions might imply either that you don't trust them (which is rude) or that you're not smart and well-informed enough to 'get it'; there are also the normal fears associated with criticising those with greater power.

These issues are somewhat addressed by saying "look, I have a lot of respect for X and assume they are right about lots of things, but I wanted to get to the bottom of this issue myself and not take anything they said for granted. So I did a 'minimal-trust investigation'. Here's what I found..."

comment by AllAmericanBreakfast · 2021-11-27T03:21:50.416Z

I worry that, if adopted, an annoying fraction of people will use this term to mean “I looked at the citations for an article” rather than “I exhaustively looked at the evidence for X from multiple angles over a long period of time.”

An “X-hour investigation” is a more precise claim. Including the references and sources they looked at, and a description of why they chose these, is a complement to saying how much time they’ve spent. In general, I like that this post illustrates what raising one’s research ambitions looks like.

Holden: how many hours, roughly, do you think you spent on some of these minimal-trust investigations? And how many hours would you spend reading a given paper?

comment by MaxRa · 2021-11-26T10:07:12.232Z

Maybe a minimal-trust investigation hackathon could be a cool idea. For example, a local EA chapter could spend a day digging into some claim together. Or it could be an online co-working investigation event.

comment by WilliamKiely · 2021-11-29T10:04:48.248Z

"I think the vast majority of people (even within communities that have rationality and critical inquiry as central parts of their identity) have never done one."

I think most people in such communities have done the low-time-commitment sort of minimal-trust investigations, such as:

  • Checking attribution. A simple, low-time-commitment sort of minimal-trust investigation: when person A criticizes person B for saying X, I sometimes find the place where person B supposedly said X and read thoroughly, trying to determine whether they've been fairly characterized. This doesn't require having a view on who's right - only whether person B seems to have meant what person A says they did. Similarly, when someone summarizes a link or quotes a headline, I often follow a trail of links for a while, reading carefully to decide whether the link summary gives an accurate impression.

I do this sort of "checking attribution" minimal-trust investigation frequently and expect many others within the EA and rationality community do too.

I also sometimes dig a bit deeper, e.g. when someone makes a claim about a study rather than a claim about what someone said. (E.g. I remember investigating some claims a guest on the Joe Rogan podcast made about the effects of plant agriculture on animal deaths.)

But in general, I think you are right that it's quite rare for people to do the high-time-commitment versions of minimal trust investigations.

I can't think of any examples of times that I've put in the enormous amount of work required to do more than a partial high-time-commitment minimal-trust investigation. I ~always stop after a handful of hours (or sometimes a bit longer) because of some combination of (a) it not seeming worth my time (e.g. because I have no training in evaluating studies and so it's very time consuming for me to do so) and (b) laziness.

comment by Linch · 2021-12-01T03:30:09.086Z

Yeah, I was surprised by that claim too. Here are just two of my comments on incidental side-conversations of a single blog post, on unrelated topics. (Warning: the main topic of that blog post is heavy and full of drama, and may not be worth people reading.)

comment by Linch · 2021-12-01T03:21:53.450Z

I think minimal-trust investigations, red-teaming, and epistemic spot checks form a natural cluster. I'd be interested/excited to see more people draw an ontology of what this cluster looks like, what other approaches are in this cluster, and how people can prioritize between these options.