seanrson's Shortform 2020-10-27T05:20:03.342Z


Comment by seanrson on Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure · 2020-12-20T07:27:25.050Z · EA · GW

Yeah, I'm not really sure why we use the term x-risk anymore. There seems to be so much disagreement and confusion about where extinction, suffering, loss of potential, global catastrophic risks, etc. fit into the picture. More granularity seems desirable.

Comment by seanrson on What is a "Kantian Constructivist view of the kind Christine Korsgaard favours"? · 2020-12-06T06:26:39.265Z · EA · GW

Just adding onto this: for those interested in learning how a Kantian meta-ethical approach might be compatible with a consequentialist normative theory, see Kagan's "Kantianism for Consequentialists".

Comment by seanrson on Questions for Peter Singer's fireside chat in EAGxAPAC this weekend · 2020-11-20T06:20:23.858Z · EA · GW

Has Singer ever said anything about s-risks? If not, I’m curious to hear his thoughts, especially concerning how his current view compares to what he would’ve thought during his time as a preference utilitarian.

Comment by seanrson on Longtermism and animal advocacy · 2020-11-18T00:15:47.348Z · EA · GW

Sorry, I'm a bit confused about what you mean here. I meant to be asking about the prevalence of a view giving animals the same moral status as humans. You say that many might think nonhuman animals' interests are much less strong/important than humans'. But saying they are less strong is different from saying they are less important, right? How strong they are seems more like an empirical question about capacity for welfare, etc.

Comment by seanrson on some concerns with classical utilitarianism · 2020-11-16T05:40:51.078Z · EA · GW

Ya, I think 80,000 Hours has been a bit uncareful. I think GPI has done a fine job, and Teruji Thomas has worked on person-affecting views with them.

Whoops, yeah, I meant to say that GPI is good about this but the transparency and precision gets lost as ideas spread. Fixed the confusing language in my original comment.

In the longtermism section on their key ideas page, 80,000 Hours essentially assumes totalism without making that explicit:

Yeah this is another really great example of how EA is lacking in transparent reasoning. This is especially problematic since many people probably don't have the conceptual resources necessary to identify the assumption or how it relates to other EA ideas, so the response might just be a general aversion to EA.

This article is a bit older (2017), so maybe it's more forgivable, but their coverage of the asymmetry there is pretty bad.

As another piece of evidence, my university group is using an introductory fellowship syllabus recently developed by Oxford EA and there are zero required readings about anything related to population ethics and how different views here might affect cause prioritization. Instead extinction risks are presented as pretty overwhelmingly pressing.

FWIW, I'm skeptical of this, too. I've responded to that paper here, and have discussed some other concerns here.

Thanks, gonna check these out!

Comment by seanrson on Longtermism and animal advocacy · 2020-11-16T02:19:28.248Z · EA · GW

Thanks for this post. Looking forward to more exploration on this topic.

I agree that moral circle expansion seems massively neglected. Changing institutions to enshrine (at least some) consideration for the interests of all sentient beings seems like an essential step towards creating a good future, and I think that certain kinds of animal advocacy are likely to help us get there. 

As a side note, do we have any data on what proportion of EAs adhere to the sort of "equal consideration of interests" view on animals which you advocate? I also hold this view, but its rarity may explain some differences in cause prioritization. I wonder how rare this view is even within animal advocacy.

Comment by seanrson on some concerns with classical utilitarianism · 2020-11-16T01:58:11.478Z · EA · GW

Thanks for writing this up.

These are all interesting thoughts and objections that I happen to find persuasive. But more generally, I think EA should be more transparent about what philosophical assumptions are being made and how they affect cause prioritization. Of course the philosophers associated with GPI are good about this, but often this transparency and precision gets lost as ideas spread.

For instance, in discussions of longtermism, totalism often seems to be assumed without that assumption being made clear. Other views are often misrepresented, for example in 80,000 Hours' post "Introducing longtermism", where they say:

This objection is usually associated with a “person-affecting” view of ethics, which is sometimes summed up as the view that “ethics is about helping make people happy, not making happy people”. In other words, we only have moral obligations to help those who are already alive...

But of course person-affecting views are diverse and they need not imply presentism.

From my experience leading an EA university group, this lack of transparency and precision often has the effect of causing people with different philosophical assumptions to reject longtermism altogether, which is a mistake since it's robust across various population axiologies. I worry that this same sort of thing might cause people to reject other EA ideas.

Comment by seanrson on seanrson's Shortform · 2020-10-27T05:20:03.906Z · EA · GW

Hi all, I'm sorry if this isn't the right place to post. Please redirect me if there's somewhere else this should go.

I'm posting on behalf of my friend, who is an aspiring AI researcher in his early 20s and is looking to live with like-minded individuals. He currently lives in Southern California but is open to relocating (preferably USA, especially California).

Please message if you're interested!

Comment by seanrson on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-09-13T06:35:52.548Z · EA · GW

AFAIK the paralysis argument is about the implications of non-consequentialism, not about downside-focused axiologies. In particular, it's about the implications of a pair of views. As Will says in the transcript you linked:

"but this is a paradigm nonconsequentialist view endorses an acts/omissions distinction such that it’s worse to cause harm than it is to allow harm to occur, and an asymmetry between benefits and harms where it’s more wrong to cause a certain amount of harm than it is right or good to cause a certain amount of benefit... And if you have those two claims, then you’ve got to conclude [along the lines of the paralysis argument]".

Also, I'm not sure how Lukas would reply but I think one way of defending his claim which you criticize, namely that "the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled", is by appealing to the existence of impossibility theorems in ethics. In that case we truly won't be able to avoid counterintuitive results (see e.g. Arrhenius 2000, Greaves 2017). This also shouldn't surprise us too much if we agree with the evolved nature of some of our moral intuitions.

Comment by seanrson on Book Review: Deontology by Jeremy Bentham · 2020-08-13T03:03:07.490Z · EA · GW

This was such a fun read. Bentham is often associated with psychological egoism, so it seems somewhat odd to me that he felt a need to exhort readers to fulfill their own pleasure (since apparently all actions are done on this basis anyway).

Comment by seanrson on The academic contribution to AI safety seems large · 2020-08-04T21:46:15.152Z · EA · GW

Could you say more (or work on that post) about why formal methods will be unhelpful? Why are places like Stanford, CMU, etc. pushing to integrate formal methods with AI safety? Also, Paul Christiano has suggested that formal methods will be useful for avoiding catastrophic scenarios. (Will update with links if you want.)