[link post] The Case for Longtermism in The New York Times
post by abierohrig
This is a link post for https://www.nytimes.com/2022/08/05/opinion/the-case-for-longtermism.html
An adapted excerpt from What We Owe The Future by Will MacAskill is now live in The New York Times and will run as the cover story for their Sunday Opinion this weekend.
I think the piece makes for a great concise introduction to longtermism, so please consider sharing the piece on social media to boost its reach!
Comments sorted by top scores.
comment by Zach Stein-Perlman (zsp) ·
2022-08-05T16:39:43.212Z
From the comments in the NYT, two notes on communicating longtermism to people like NYT readers:
1. Many readers are confused by the focus on humans.
2. Some readers are confused by the suggestion that longtermism is weird (Will: "It took me a long time to come around to longtermism") rather than obvious.
Re 2, I do think it's confusing to act like longtermism is non-obvious unless you're also emphasizing its weird implications, like our calculations being dominated by the distant future and x-risk, and things at least as weird as digital minds filling the universe.
↑ comment by henryj ·
2022-08-05T17:29:06.012Z
I'm also a bit surprised at how many of the comments are concerned about overpopulation. The most-recommended comment is essentially a restatement of the tragedy of the commons. That comment's tone, and the tone of many like it (as well as a bunch of anti-GOP ones), felt really fatalistic, which worries me. So many of the comments felt like variations on "we're screwed," which goes against the belief in a net-positive future upon which longtermism is predicated.
On that note, I'll shout out Jacy's post from about a month ago, which echoes those fears in a more EA way.
↑ comment by antimonyanthony ·
2022-08-06T09:30:39.291Z
which goes against the belief in a net-positive future upon which longtermism is predicated
Longtermism per se isn't predicated on that belief at all—if the future is net-negative, it's still (overwhelmingly) important to make future lives less bad.
↑ comment by Gabriel Mukobi (Gabe Mukobi) ·
2022-08-07T01:42:24.440Z
I can't unread this comment:
"Humanity could, theoretically, last for millions of centuries on Earth alone." I find this claim utterly absurd. I'd be surprised if humanity outlasts this century.
Ughh they're so close to getting it! Maybe this should give me hope?
↑ comment by Sharmake ·
2022-08-07T18:05:45.284Z
Basically, William MacAskill's longtermism (EA longtermism) is trying to solve the problem of distributional shift. Most cultures with long-term thinking assume there is no distributional shift, i.e., that no key assumption of the present will turn out to be wrong. If that assumption were correct, we shouldn't interfere with cultures, as they would converge to local optima. But it isn't correct, and thus longtermism has to deal with weird scenarios like AI or x-risk.
Thus the EA form of longtermism is not obvious, because it can't assume away distributional shift into out-of-distribution behavior. In fact, we have good reason to think there will be massive distributional shifts. That's the key difference between EA longtermism and other cultures' longtermism.
comment by Larks ·
2022-08-06T00:39:12.394Z
Nice article, thanks for linking (and Will for writing).
Unfortunately, some people I know thought this section was a little misleading, as they felt it insinuated that x-risk from nuclear war was over 20% (a figure I think few EAs would endorse):
We still live under the shadow of 9,000 nuclear warheads, each far more powerful than the bombs dropped on Hiroshima and Nagasaki. Some experts put the chances of a third world war by 2070 at over 20 percent. An all-out nuclear war could cause the collapse of civilization, and we might never recover.
Perhaps it was judged to be a low-cost concession to the prejudices of NYT readers?
↑ comment by Daniel_Eth ·
2022-08-06T05:59:35.036Z
Hmm, I don't read it that way. My read of this passage is: the risk of WWIII by 2070 might be as high as somewhat over 20% (though that estimate is probably picked from the higher end of serious estimates); WWIII may or may not lead to all-out nuclear war; all-out nuclear war has some unknown chance of causing the collapse of civilization; and if that happened, there would be some further unknown chance of never recovering. So, all in all, I'd read this as Will thinking that x-risk from nuclear war in the next 50 years is well below 20%.
I also don't think NYT readers have particularly clear prejudices about nuclear war (they probably have larger prejudices about things like overpopulation), so this would be a weird place to make a concession, in my mind.