What would a pre-mortem for the long-termist project look like?
post by Azure
This is a question post.
Suppose we're sometime in the (near-ish) future. The longtermist project hasn't fulfilled 2020's expectations. Where did we go wrong? What scenarios (and with what probabilities) may have led to this?
I hope this question isn't strictly isomorphic to asking about objections to long-termism.
answer by alexrjl
Neither of the possibilities below seems like something that would be easy to recognise even once we're in some (near-ish) future. I hope this isn't begging the question; it isn't intended to be. I've put credences on each (and I'm glad you asked for them), but they are very uncertain.
One possibility is that we were just wrong about the whole long-termism thing. Given how much disagreement in philosophy there seems to be about basically everything, it seems prudent to give this idea non-trivial credence, even if you find arguments for long-termism very convincing. I'd maybe give a 10% probability to long-termism just being wrong.
More significant seems to be the chance that long-termism was right, but that trying to directly intervene in the long-term future, by taking actions expected to have consequences only in the long term, was a bad strategy, and that instead we should have been (approximate credences):
- Investing money to be spent in the future (10%)
- Investing in the future by growing the EA community (25%)
- Doing the most good possible in the short term for the developing world/animals, as this turns out to positively shape the future more than directly trying to would. (20%)
↑ comment by reallyeli · 2020-04-12T14:39:36.439Z
> I'd maybe give a 10% probability to long-termism just being wrong.
What could you observe that would cause you to think that longtermism is wrong? (I ask out of interest; I think it's a subtle question.)
↑ comment by alexrjl · 2020-04-12T19:52:26.361Z
A really convincing argument from a philosopher or group of philosophers I respected would probably do it, especially if it caused prominent longtermists to change their minds. I've no idea what this argument would be, because if I could think of the argument myself it would already have changed my mind.
↑ comment by StevenLochner · 2020-04-13T09:08:58.401Z
What about a scenario where long-termism turns out to be right, but there is some sort of community-level value drift which results in long-term cause areas becoming neglected, perhaps as a result of the community growing too quickly or some intra-community interest groups becoming too powerful? I wouldn't say this is very likely (maybe 5%), but we should consider the base rate of this type of thing happening.
I realise that this outcome might be subsumed in the points raised above. Specifically, it might be that instead of directly trying to intervene in the long-term future, EA should have invested in sustainably growing the community with the intention of avoiding value drift (option 2). I am just wondering how granular we can get with this pre-mortem before it becomes unhelpfully complex.
From a strategic point of view this pre-mortem is a great idea.
↑ comment by technicalities · 2020-04-12T07:58:34.208Z
Great comment. I count only 65 percentage points - is the other third "something else happened"?
Or were you not conditioning on long-termist failure? (That would be scary.)
↑ comment by alexrjl · 2020-04-12T19:47:42.825Z
I was not conditioning on longtermist failure, but I also don't think my last three points are mutually exclusive, so they shouldn't be naively summed.
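For readers unfamiliar with why non-exclusive credences shouldn't be naively summed: by inclusion-exclusion, the probability that at least one of two events occurs is less than their sum whenever they can co-occur. (The overlap figure below is made up for illustration, not one of alexrjl's stated credences.)

```latex
P(A \cup B) = P(A) + P(B) - P(A \cap B)
% e.g. with P(A) = 0.25, P(B) = 0.20, and an assumed overlap
% P(A \cap B) = 0.10:
% P(A \cup B) = 0.25 + 0.20 - 0.10 = 0.35, not 0.45.
```

So summing the listed credences to 65 percentage points overstates the probability that at least one of those scenarios holds.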