Comments

Comment by ikaxas on What posts do you want someone to write? · 2020-03-24T20:18:15.185Z · EA · GW

Ooh, I would also very much like to see this post.

Comment by ikaxas on Normative Uncertainty and the Dependence Problem · 2020-03-23T21:41:25.108Z · EA · GW

Hm. Do you think it would be useful for me to write a short summary of the arguments against taking normative uncertainty into account and post it to the EA forum? (Wrote a term paper last semester arguing against Weatherson, which of course involved reading a chunk of that literature.)

Comment by ikaxas on Ask Me Anything! · 2019-08-21T03:04:43.723Z · EA · GW

I'd be very interested in hearing more about the views you list under the "more philosophical end" (esp. moral uncertainty) -- either here or on the 80k podcast.

Comment by ikaxas on The Possibility of an Ongoing Moral Catastrophe (Summary) · 2019-08-06T17:06:40.130Z · EA · GW

Definitely, I'll send it along once I design it. Since intro ethics at my institution is usually taught as applied ethics, the basic concept would be to start by introducing the students to the moral catastrophes paper/concept, then go through at least some of the moral issues Williams brings up in the disjunctive portion of the argument to examine how likely they are to be moral catastrophes. I haven't picked particular readings, though, since I don't know the literatures yet. Other possible topics: a unit on historical moral catastrophes (e.g. slavery in the American South, the Holocaust); a unit on biases related to moral catastrophes; a unit on the psychology of evil (e.g. Baumeister's work on the subject, which I haven't read yet); a unit on moral uncertainty; and a unit on whether antirealism can escape or accommodate the possibility of moral catastrophes.

Assignment ideas:

  1. Pick the potential moral catastrophe Williams mentions that you think is least likely to actually be one. Now, imagine that you are yourself five years from now and you’ve been completely convinced that it is in fact a moral catastrophe. What convinced you? Write a paper trying to convince your current self that it is a moral catastrophe after all.
  2. Come up with a potential moral catastrophe that Williams didn’t mention, and write a brief (maybe 1-2 pages?) argument for why it is or isn’t one (whatever you actually believe). Further possibility: once these are collected, I tally how many people argued that the one they picked was not a moral catastrophe, and if it’s far over 50%, discuss with the class where that bias might come from (e.g. status quo bias).

This is all still in the brainstorming stage at the moment, but feel free to use any of this if you're ever designing a course/discussion group for this paper.

Comment by ikaxas on The Possibility of an Ongoing Moral Catastrophe (Summary) · 2019-08-06T16:45:13.764Z · EA · GW

Thanks!

Comment by ikaxas on The Possibility of an Ongoing Moral Catastrophe (Summary) · 2019-08-03T23:46:34.695Z · EA · GW

I'm entering philosophy grad school now, but in a few years I'll need to start designing courses, and I'm considering building an intro course around this paper. Would it be alright if I used your summary as course material?

Comment by ikaxas on Please May I Have Reading Suggestions on Consistency in Ethical Frameworks · 2019-07-10T04:34:45.217Z · EA · GW

David Moss mentioned a "long tradition of viewing ethical theorising (and in particular attempts to reason about morality) sceptically." Aside from Nietzsche, another very well-known proponent of this tradition is Bernard Williams. Take a look at his page in the Stanford Encyclopedia of Philosophy, and if it looks promising, check out his book Ethics and the Limits of Philosophy. You might also check out his essays "Ethical Consistency" (in his essay collection Problems of the Self; I haven't read this one) and "Conflicts of Values" (in Moral Luck); there are probably lots of other essays of his that are relevant that I just don't know about. Another essay you might read is Steven Lukes' "Making Sense of Moral Conflict" in his book Moral Conflict and Politics. On the question of whether there can ever be impossible moral demands (that is, situations where all of the available options are morally wrong, potentially because of conflicting moral requirements), one recent book (which I haven't read, but sounds good) is Lisa Tessman's Moral Failure: On the Impossible Demands of Morality (see also the SEP article here). Don Loeb has an essay called "Moral Incoherentism," which despite its title seems to deal with something slightly different from what you're talking about, but might still be of interest.

The piece I know of that speaks most directly to what you're talking about here is Richard Ngo's blog post "Arguments for Moral Indefinability". He also has a post on "realism about rationality", which is probably related as well.

On "consistency with our intuitions," a book to check out might be Michael Huemer's Ethical Intuitionism. And of course the SEP article on ethical intuitionism. Though of course intuitionism isn't the only metaethical theory that takes consistency with our intuitions as a criterion; David Moss mentioned reflective equilibrium -- and I definitely second his recommendation to look into this further -- and Constructivism also has some of this flavor, for instance. Also check out this paper on Moorean arguments in ethics ("Moorean arguments" in reference to G.E. Moore's famous "here is one hand" argument).

David Moss also mentioned "hyper-methodism and hyper-particularism." Another paper that touches on that distinction, and on Moorean arguments (though not specifically in ethics), is Thomas Kelly's "Moorean Facts and Belief Revision."

Comment by ikaxas on The harm of preventing extinction · 2018-12-25T16:18:22.247Z · EA · GW

Counterpoint (for purposes of getting it into the discussion; I'm undecided about antinatalism myself): that argument only applies to people who are already alive, and thus not to most of the people who would be affected by the decision whether or not to extend the human species (i.e. those who don't yet exist). David Benatar argues (podcast, book) that while, as you point out, many human lives may well be worth continuing, those very same lives (he thinks all lives, but that's more than I need for this argument) may nevertheless not have been worth starting. If that's the case, then some or all of the lives that would be brought into existence by preventing extinction may also not be worth starting.

Comment by ikaxas on Would killing one be in line with EA if it can save 10? · 2018-12-08T17:21:25.008Z · EA · GW

What I was describing wasn't exactly Pascal's mugging. Pascal's mugging is an attempted argument *against* this sort of reasoning: it tries to show that such reasoning leads to pathological conclusions (like that you ought to pay the mugger, when all he's told you is some ridiculous story about how, if you don't, there's a tiny chance that something catastrophic will happen). Of course, some people bite the bullet and say that you should just pay the mugger; others claim that this sort of uncertainty reasoning doesn't actually lead you to pay the mugger; and so on. I don't really have a thought-out view on Pascal's mugging myself. The reason what I'm describing is different is that [this sort of reasoning leading you to *not* kill someone] wouldn't be considered a pathological conclusion by most people (same with buying flood insurance).

Comment by ikaxas on Would killing one be in line with EA if it can save 10? · 2018-12-02T02:50:55.892Z · EA · GW

Here are two other considerations that haven't yet been mentioned:

1. EA is supposed to be largely neutral between ethical theories. In practice, most EAs tend to be consequentialists, specifically utilitarians, and a utilitarian might plausibly think that killing one to save ten is the right thing to do (though others in this thread have given reasons why that might not be the case even under utilitarianism). In theory, though, one could unite EA principles with most ethical systems. So if the ethical system you think is most likely to be correct includes side constraints/deontological rules against killing, then EA doesn't require you to violate those side constraints in the service of doing good; one can simply do the most good one can within those side constraints.

2. Many EAs are interested in taking into account moral uncertainty, i.e. uncertainty about which moral system is correct. Even if you think the most likely theory is consequentialism, it can be rational to act as if there is a side constraint against killing if you place some amount of credence in a theory (e.g. a deontological theory) on which killing is always quite seriously wrong. The thought is this: if there's some chance that your house will be damaged by a flood, it can be worth it to buy flood insurance, even if that chance is quite small, since the damage if it does happen will be very great. By the same token, even if the theory you think is most probable recommends killing in a particular case, it can still be worth it to refrain if you also place some small credence in another theory on which killing is always seriously wrong (a toy calculation below illustrates the point). Will MacAskill discusses this in his podcast with Rob Wiblin.
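
To make the insurance analogy concrete, here is a minimal sketch of the expected-choiceworthiness reasoning. The credences and the moral values assigned by each theory are made-up illustrative numbers, and the sketch assumes (controversially) that values can be compared across theories:

\[
\underbrace{0.9 \times (+9)}_{\text{consequentialism: killing saves ten}} \;+\; \underbrace{0.1 \times (-1000)}_{\text{deontology: killing is gravely wrong}} \;=\; 8.1 - 100 \;=\; -91.9 \;<\; 0
\]

On these hypothetical numbers, even a 10% credence in the deontological theory is enough to make refraining the better bet, just as a small probability of a very large loss can make insurance worth buying.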

Tl;dr: you might think killing one to save ten is wrong because you're a nonconsequentialist, and this is perfectly compatible with EA. Or, even if you are a consequentialist, and even if you think consequentialism sometimes recommends killing one to save ten, it might still be rational not to kill in those cases if you place even a small credence in some other theory on which this would be seriously wrong.

Comment by ikaxas on Rationality as an EA Cause Area · 2018-11-14T13:38:48.049Z · EA · GW

Scott Alexander has actually gotten academic citations, e.g. in Paul Bloom's book Against Empathy (sadly I don't remember which article of his Bloom cites), and I get the impression a fair few academics read him.

Comment by ikaxas on Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy · 2018-03-26T19:55:07.333Z · EA · GW

Hi,

What would be the attitude towards someone who wanted to work with you for a year or two after undergrad, then go on to graduate school (likely for philosophy, in my case), with an eye towards continuing to work with you or other EA orgs after grad school?