post by [deleted] · score: 0 (0 votes) · GW


comment by Milan_Griffes · 2019-02-05T15:12:42.959Z · score: 5 (5 votes) · EA(p) · GW(p)

Most of my impulse towards short-termism arises from concerns about cluelessness, which I wrote about here [EA · GW].

Holding a person-affecting ethic is another reason to prioritize the short-term; Michael Plant argues for the person-affecting view here [EA(p) · GW(p)].

comment by Pablo_Stafforini · 2019-02-05T19:20:24.681Z · score: 2 (2 votes) · EA(p) · GW(p)
> Another object-level point, due to AGB

Would you mind linking to the comment left by that user, rather than to the user who left the comment? Thanks.

comment by Ward (AshwinAcharya) · 2019-02-05T19:47:07.269Z · score: 4 (4 votes) · EA(p) · GW(p)

He brought this up in a conversation with me; I don't know if he's written it up anywhere.

comment by Max_Daniel · 2019-02-08T23:32:50.078Z · score: 5 (4 votes) · EA(p) · GW(p)

If I recall correctly, this paper by Tom Sittler also makes the point you paraphrased as "some reasonable base rate of x-risk means that the expected lifespan of human civilization conditional on solving a particular risk is still hundreds or thousands of years", among others.

comment by Pablo_Stafforini · 2019-02-05T22:12:35.762Z · score: 1 (1 votes) · EA(p) · GW(p)

I see. Thanks.

comment by Denkenberger · 2019-02-08T04:36:50.661Z · score: 4 (5 votes) · EA(p) · GW(p)

I think the argument was written up formally on the forum, but I'm not finding it. I think it goes like this: if the chance of existential risk is 0.1%/year, the expected duration of human civilization is 1,000 years. If you decrease the risk to 0.05%/year, the expected duration is 2,000 years, so you have only added a millennium. However, if you get safe AI and colonize the galaxy, you might get billions of years. But I would argue that if you reduce the chance that nuclear war destroys civilization (from which we might not recover), then you increase the chances of getting safe AI and colonization, and can therefore attribute overwhelming value to mitigating nuclear war.
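The arithmetic above can be sketched as follows. This assumes a constant annual extinction probability, so survival time is geometrically distributed with mean 1/p; the comment doesn't specify a model, so treat this as one simple reading of it:

```python
def expected_duration_years(annual_risk: float) -> float:
    """Expected survival time in years under a constant annual
    extinction probability: the mean of a geometric distribution,
    E[T] = 1 / p."""
    return 1.0 / annual_risk

# 0.1%/year risk gives an expected 1,000 years of civilization.
print(expected_duration_years(0.001))   # 1000.0
# Halving the risk to 0.05%/year only adds one more millennium.
print(expected_duration_years(0.0005))  # 2000.0
```

Halving the base rate doubles the expectation but leaves it at the same order of magnitude, which is the contrast being drawn with the billions of years available after colonization.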

comment by AGB · 2019-02-09T19:27:22.386Z · score: 6 (5 votes) · EA(p) · GW(p)

> But I would argue if you reduce the chance that nuclear war destroys civilization (from which we might not recover), then you increase the chances of getting safe AI and colonization, and therefore you can attribute overwhelming value of mitigating nuclear war.

For clarity's sake, I don't disagree with this. It does mean, though, that your argument for the overwhelming value of mitigating nuclear war is still predicated on developing safe AI (or some other way of massively reducing the base rate) at a future date, rather than being a self-contained argument based solely on nuclear war being an x-risk. Which is totally fine and reasonable, but a useful distinction to make in my experience. For example, it would now make sense to compare whether working on safe AI directly, or working on nuclear war in order to increase the number of years we have to develop safe AI, generates better returns per effort spent. This in turn, I think, is going to depend heavily on AI timelines, which (at least to me) was not obviously an important consideration for the value of working on mitigating the fallout of a nuclear war!

comment by Denkenberger · 2019-02-12T04:48:52.656Z · score: 2 (1 votes) · EA(p) · GW(p)

I should have said develop safe AI or colonize the galaxy, because I think either one would dramatically reduce the base rate of existential risk. The way I think about the value of nuclear war mitigation being affected by AI timelines is that if AI comes soon, there are fewer years in which we are actually threatened by nuclear war. This is one reason I only looked out about 20 years in my cost-effectiveness analysis [EA · GW] of alternate foods versus AI. I also think these risks could be correlated, because one mechanism of far-future impact of nuclear war is worse values ending up in AI (if nuclear war does not collapse civilization).