Are there lists of causes (that seemed promising but are) known to be ineffective?
post by Maxime Perrigault
This is a question post.
Within the EA community, the question "what are the most important and most effective causes?" is a crucial topic.
There are even lists of the most important ones, so people can focus their effort on them.
I assume that to build these lists, many causes have been studied and compared. In the end, only the most neglected, important, and tractable ones were chosen; the others were discarded.
In my opinion, knowing what the most effective causes are is a must, but knowing which causes have been discarded is also important. Here are a couple of reasons why:
- If someone decides to use their time to search for new promising areas, it would be a waste of time to explore already-discarded ones.
- For someone discovering EA (such as myself), some of the most important problems are often counterintuitive. Having explanations of these problems therefore helps one better understand the EA way of thinking. I believe that understanding which problems are ineffective, and why, would be an equally interesting window into that way of thinking.
So here is my question: Are there lists of causes (that seemed promising but are) known to be ineffective?
answer by MichaelA
This seems to me like a good question/a good idea.
Some quick thoughts:
- I can't think of such a list (at least, off the top of my head).
- There was a very related comment thread on a recent post from 80,000 Hours. I'd recommend checking that out. (It doesn't provide the sort of list you're after, but touches on some reasons for and against making such a list.)
- I've now also commented a link to this question from that thread, to tie these conversations together.
- I'd suggest avoiding saying "known to be ineffective" (or "known to be low-priority", or whatever). At best, we'd create a list of causes we have reason to be fairly confident are probably low-priority. More likely, we'd just have a list of causes we have some confidence, but not much, are low-priority, because once they started to seem low-priority we (understandably) stopped looking into them.
- To compress that into something more catchy, we could maybe say "a list of causes that were looked into, but that seem to be low-priority". Or even just "a list of causes that seem to be low-priority".
- This sort of list could be generated not only for causes, but also for interventions, charities, and/or career paths.
- E.g., I imagine looking through some of the "shallow reviews" from GiveWell and Charity Entrepreneurship could help one create lists of charities and interventions that were de-prioritised for specific reasons, and that thus may not be worth looking into in future.
answer by Buck
In an old post, Michael Dickens writes:
The closest thing we can make to a hedonium shockwave with current technology is a farm of many small animals that are made as happy as possible. Presumably the animals are cared for by people who know a lot about their psychology and welfare and can make sure they’re happy. One plausible species choice is rats, because rats are small (and therefore easy to take care of and don’t consume a lot of resources), definitively sentient, and we have a reasonable idea of how to make them happy.
Thus creating 1 rat QALY costs $120 per year, which is $240 per human QALY per year.
This is just a rough back-of-the-envelope calculation so it should not be taken literally, but I’m still surprised by how cost-inefficient this looks. I expected rat farms to be highly cost-effective based on the fact that most people don’t care about rats, and generally the less people care about some group, the easier it is to help that group. (It’s easier to help developing-world humans than developed-world humans, and easier still to help factory-farmed animals.) Again, I could be completely wrong about these calculations, but rat farms look less promising than I had expected.
I think this is a good example of something seeming like a plausible idea for making the world better, but which turned out to seem pretty ineffective.
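The quoted back-of-the-envelope can be reproduced as a short sketch. The $120/year figure comes from the quote; the 0.5 rat-to-human welfare weight is a hypothetical assumption inferred from the jump from $120 per rat QALY to $240 per human QALY, and is not stated explicitly in the excerpt.

```python
# Hedged sketch of Michael Dickens' quoted back-of-the-envelope calculation.
# Assumptions (inferred, not independently sourced):
#   - keeping one happy rat costs ~$120/year and yields ~1 rat-QALY/year
#   - a rat-QALY is weighted at 0.5 of a human QALY (implied by the $240 figure)

cost_per_rat_year = 120.0      # USD per rat per year (quoted figure)
rat_qalys_per_rat_year = 1.0   # one rat-QALY per rat-year, per the quote
rat_to_human_weight = 0.5      # hypothetical moral weight implied by the quote

cost_per_rat_qaly = cost_per_rat_year / rat_qalys_per_rat_year       # $120
cost_per_human_equiv_qaly = cost_per_rat_qaly / rat_to_human_weight  # $240

print(cost_per_rat_qaly, cost_per_human_equiv_qaly)
```

As the quote stresses, this is rough: the conclusion is sensitive to both the per-rat cost and the moral-weight assumption, and changing either by a factor of a few changes the verdict.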
comment by xccf
One problem with making a list like this: People already get mad at EA for saying that their favorite trendy cause is ineffective. I'm somewhat sympathetic to this: Even if I don't think [super trendy cause] is the most important cause, I'm usually glad people are working on it, and I don't want to discourage them (if the likely result of me discouraging them is that they switch to playing video games or something). It's also bad from a public relations point of view for EA to be seen as existing in opposition to trendy causes.
Therefore, if anyone makes a list like this, I suggest you stick to obscure causes.