jushy's Shortform

post by jushy (sanjush) · 2020-12-19T12:27:51.998Z · 4 comments

comment by jushy (sanjush) · 2021-02-25T10:23:57.246Z

Is anyone aware of previous writings by EAs on founding think tanks as a way of having an impact over the long-term?

In the UK, I think the Fabian Society and the Centre for Policy Studies continue to influence British politics long after the deaths of their founders.

comment by HaukeHillebrandt · 2021-03-07T21:33:28.419Z

Slate Star Codex had an interesting review of the Fabian Society and how advocacy can backfire.

The Open Philanthropy Project has an interesting review of the Center for Global Development.

comment by jushy (sanjush) · 2021-01-27T10:05:20.707Z

Is anyone aware of any research or blog posts specifically on how much free-range hens suffer? Most of the ones I can find repeatedly stray from this question.

comment by jushy (sanjush) · 2020-12-19T12:27:52.360Z

Longtermism that doesn't care about existential risk: implications of Benatar's asymmetry between pain and pleasure

I think a major implication of longtermism is that "we should care far more about problems that will cause suffering to many generations, or problems that will deprive many generations of pleasure".

But if, like me, you accept Benatar's argument on the asymmetry of suffering and pleasure, i.e. that a lack of pleasure isn't a bad thing if no one is around to miss it, then the "existential risk component" of an existential risk isn't a problem: depriving many generations of pleasure by preventing them from existing in the first place isn't a bad thing.

However, many existential risks are "progressive" in the sense that they will cause suffering for many generations before causing extinction, so they would still be a cause for concern. But the fact that they pose an "existential risk" wouldn't really be relevant.

On the other hand, some existential risks that EAs are concerned about could only affect a small number of generations (e.g. very large asteroids), and could almost entirely be ignored in comparison to issues that could plague many generations.

I think a reasonable number of people agree with Benatar, because I think most people don't see depriving an individual of pleasure by preventing them from existing as a 'con' of contraception.

Originally posted to r/effectivealtruism because I thought I was missing something obvious.