comment by jushy (sanjush) · 2020-12-19T12:27:52.360Z
Longtermism which doesn't care about Existential Risk - Implications of Benatar's asymmetry between pain and pleasure
I think a major implication of longtermism is that "we should care far more about problems which will cause suffering to many generations, or problems that will deprive many generations of pleasure".
But if, like me, you accept Benatar's argument for an asymmetry between suffering and pleasure — i.e., that a lack of pleasure isn't a bad thing if no one is around to miss it — then the "existential" component of an existential risk isn't a problem: depriving many generations of pleasure by preventing them from existing in the first place isn't a bad thing.
However, many existential risks are "progressive" in the sense that they would cause suffering for many generations before causing extinction, so they would still be a cause for concern. But the fact that they are existential risks wouldn't really be relevant.
On the other hand, some existential risks that EAs are concerned about would only affect a small number of generations (e.g., very large asteroids), and could almost entirely be ignored in comparison to issues that could plague many generations.
I think a reasonable number of people implicitly agree with Benatar, because most people don't see depriving an individual of pleasure by preventing them from existing as a 'con' of contraception.
Originally posted to r/effectivealtruism because I thought I was missing something obvious.