The Possibility of an Ongoing Moral Catastrophe (Summary)

post by Linch · 2019-08-02T21:55:57.827Z · score: 36 (18 votes) · EA · GW · 10 comments

This is a link post for https://docs.google.com/document/d/18ZzC-WkDcWK-WPlIzKvDv83j8aBwSfdOxnZRmoio-zE/edit?usp=sharing

A few years ago, I made an outline of Evan G. Williams' excellent philosophy paper for a local discussion group. It slowly got circulated on the EA internet. Somebody recently recommended that I make the summary more widely known, so here it is.

The paper is readable and not behind a paywall, so I'd highly recommend reading the original paper if you have the time.

I'm uncertain whether it makes sense to link summaries on the EA Forum instead of just the original paper. I'm more than happy to take this post down if advised to do so.

Comments sorted by top scores.

comment by aarongertler · 2019-08-03T01:22:37.033Z · score: 7 (5 votes) · EA · GW

We absolutely welcome summaries! People getting more ways to access good material is one of the reasons the Forum exists.

That said, did you consider copying the summary into a Forum post, rather than linking it? That's definitely more work, but my impression is that it usually leads to more discussion when people don't have to click away into another page. I don't have strong evidence to back that up, though.

Also: because long titles are cut short in some views of the Forum, I'd recommend that summaries of pieces be titled something like "The Possibility of an Ongoing Moral Catastrophe (Summary)".

comment by Linch · 2019-08-03T02:02:23.567Z · score: 1 (1 votes) · EA · GW
We absolutely welcome summaries! People getting more ways to access good material is one of the reasons the Forum exists.

Yay!

did you consider copying the summary into a Forum post, rather than linking it?

Yes. I did a lot of non-standard formatting tricks in Google Docs when I first wrote it (because I wasn't expecting to ever need to port it over to a different format). So when I first tried to copy it over, the whole thing looked disastrously unreadable.

Changed the title. :)


comment by Jsevillamol · 2019-08-03T20:31:48.730Z · score: 4 (4 votes) · EA · GW

Strong upvoting because I want to incentivize people to write and share more summaries.

Summaries are awesome and let me understand, at a high level, papers that I would not have read otherwise. This summary in particular is well written and well formatted.

Thanks for writing it and sharing it!

comment by edoarad · 2019-08-04T04:42:24.836Z · score: 3 (3 votes) · EA · GW

Thanks for the summary. I have two takeaways:

1. EA is (in part) claiming that there are several ongoing moral catastrophes caused by inaction against global poverty, animal suffering, x-risk, etc. (some of them are definitely caused by action, but that does not matter as much on consequentialist grounds). Unknown ongoing moral catastrophes are a candidate Cause X.

2. Working to increase our capability to handle undiscovered ongoing moral catastrophes could be a major goal. The idea I saw here was to reserve resources, which is an interesting argument for investing in economic growth.

comment by Ikaxas · 2019-08-03T23:46:34.695Z · score: 3 (3 votes) · EA · GW

I'm entering philosophy grad school now, but in a few years I'm going to have to start thinking about designing courses, and I'm thinking of designing an intro course around this paper. Would it be alright if I used your summary as course material?

comment by Linch · 2019-08-04T07:24:53.308Z · score: 1 (1 votes) · EA · GW

You may also like our discussion sheets for this topic:

https://drive.google.com/drive/u/1/folders/0B3C8dkkHYGfqeXh1SE5kLVJPdGM

comment by Ikaxas · 2019-08-06T16:45:13.764Z · score: 1 (1 votes) · EA · GW

Thanks!

comment by Linch · 2019-08-04T03:24:53.907Z · score: 1 (1 votes) · EA · GW

Sure! In general you can assume that anything I write publicly is freely available for academic purposes. I'd also be interested in seeing the syllabus if/when you end up designing it.

comment by Ikaxas · 2019-08-06T17:06:40.130Z · score: 1 (1 votes) · EA · GW

Definitely, I'll send it along when I design it. Since intro ethics at my institution is usually taught as applied ethics, the basic concept would be to start by introducing the students to the moral catastrophes paper/concept, then go through at least some of the moral issues Williams brings up in the disjunctive portion of the argument to examine how likely they are to be moral catastrophes. I haven't picked particular readings yet though as I don't know the literatures yet. Other possible topics: a unit on historical moral catastrophes (e.g. slavery in the South, the Holocaust); a unit on biases related to moral catastrophes; a unit on the psychology of evil (e.g. Baumeister's work on the subject, which I haven't read yet); a unit on moral uncertainty; a unit on whether antirealism can escape or accommodate the possibility of moral catastrophes.

Assignment ideas:

  1. Pick the one of the potential moral catastrophes Williams mentions that you think is least likely to actually be a moral catastrophe. Now, imagine that you are yourself five years from now and you've been completely convinced that it is in fact a moral catastrophe. What convinced you? Write a paper trying to convince your current self that it is a moral catastrophe after all.
  2. Come up with a potential moral catastrophe that Williams didn’t mention, and write a brief (maybe 1-2 pages?) argument for why it is or isn’t one (whatever you actually believe). Further possibility: Once these are collected, I observe how many people argued that the one they picked was not a moral catastrophe, and if it’s far over 50%, discuss with the class where that bias might come from (e.g. status quo bias, etc.).

This is all still in the brainstorming stage at the moment, but feel free to use any of this if you're ever designing a course/discussion group for this paper.

comment by Linch · 2019-08-07T06:48:40.908Z · score: 1 (1 votes) · EA · GW

For #2, Ideological Turing Tests could be cool too.