Posts

Comments

Comment by simon_knutsson on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-11T06:19:30.067Z · score: 1 (1 votes) · EA · GW

Sure. I’ll use traditional total act-utilitarianism defined as follows as the example here so that it’s clear what we are talking about:

Traditional total act-utilitarianism: An act is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.

I gather the metaethical position you describe is something like one of the following three:

(1) When I say ‘I think utilitarianism is right’ I mean ‘I think that after I reach reflective equilibrium I will think that any act I perform is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.’

Formulation (1) concerns which of your actions will be right. Alternatively, the metaethical position could be as follows:

(2) When I say ‘I think utilitarianism is right’ I mean ‘I think that after I reach reflective equilibrium I will think that any act anyone performs is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.’

Or perhaps formulating it in terms of want or preference instead of rightness, like the following, better describes your metaethical position (using utilitarianism as just an example):

(3) When I say ‘I think utilitarianism is right’ I mean ‘I think that after I reach reflective equilibrium I will want or have a preference that everyone act in a way that results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.’

My impression is that in the academic literature, metaethical theories/positions are usually, always or almost always formulated as general claims about what, for example, statements such as ‘one ought to be honest’ mean; the metaethical theories/positions do not have the form ‘when I say “one ought to be honest” I mean …’ But, sure, talking, as you do, about what you mean when you say ‘I think utilitarianism is right’ sounds fine.

The new version of your thought experiment sounds fine, which I gather would go something like the following:

Suppose almost all humans adopt utilitarianism as their moral philosophy and fully colonize the universe, and then someone invents the technology to kill humans and replace humans with beings of greater well-being. (Assume it would be optimal, all things considered, to kill and replace humans.) Utilitarianism seems to imply that at least humans who are utilitarians should commit mass suicide (or accept being killed) in order to bring the new beings into existence, because that's what utilitarianism implies is the optimal and hence morally right action in that situation.

Comment by simon_knutsson on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-09T11:58:39.378Z · score: 7 (3 votes) · EA · GW

Very interesting :) I don’t mean to be assuming moral realism, and I don’t think of myself as a realist. Suppose I am an antirealist and I state some consequentialist criterion of rightness: ‘An act is right if and only if…’. When stating that, I do not mean or claim that it is true in a realist sense. I may be expressing my feelings, I may encourage others to act according to the criterion of rightness, or whatever. At least I would not merely be talking about how I prefer to act. I would mean or express roughly ‘everyone, your actions and mine are right if and only if …’. But regardless of whether I would be speaking about myself or everyone, we can still talk about what the criterion of rightness (the theory) implies in the sense that one can check which actions satisfy the criteria. So we can say: according to the theory formulated as ‘an act is right if and only if…’ this act X would be right (simply because it satisfies the criteria). A simpler example is if we understand the principle ‘lying is wrong’ from an antirealist perspective. Assuming we specify what counts as lying, we can still talk about whether an act is a case of lying and hence wrong, according to this principle. And then one can discuss whether the theory or principle is appealing, given which acts it classifies as right and wrong. If repugnant action X is classified as right or if some obviously admirable act is classified as wrong, we may want to reject the theory/criterion, regardless of realism or antirealism.

Maybe all I’m saying is obvious and compatible with what you are saying.

Comment by simon_knutsson on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-09T09:22:53.364Z · score: 2 (2 votes) · EA · GW

Yeah, one can formulate many variants. I can't recall seeing yours before. The following might seem like nitpicking, but I think it is quite important: In academia, it seems standard to formulate utilitarianism and other consequentialist theories so that they apply to everyone. For example,

Traditional total act-utilitarianism: An act is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.

These theories are not formulated as 'traditional utilitarians ought to ...'. I can't recall ever seeing a version of utilitarianism or consequentialism formulated as 'utilitarians/consequentialists ought to ...'.

So when you write "Utilitarianism seems to imply that humans who are utilitarians should", I would rephrase that as 'Utilitarianism seems to imply that humans should', since utilitarianism applies to all agents, not only utilitarians. But perhaps you mean 'utilitarianism seems to imply that humans, including those who are utilitarians, should...', which would make sense.

Why does my nitpicking matter? One reason is when thinking about scenarios or thought experiments. For example, I don't think one can reply to world destruction or replacement arguments by saying 'a consequentialist ought not to kill everyone because ...'. We can picture a dictator who has never heard of consequentialism, and who is just about to act out of hatred. And we can ask, 'According to the traditional total act-utilitarian criterion of rightness (i.e. an act is right if and only if ...), would the dictator taking action X (say, killing everyone) be right?'

Another reason the nitpicking matters is when thinking about the plausibility of the theories. A theory might sound nicer and more appealing if it merely says 'Those who endorse this theory ought to act in way X' rather than, as such theories are usually roughly formulated, 'Everyone ought to act in way X, regardless of whether you endorse this theory or not'.

Comment by simon_knutsson on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-09T08:26:51.650Z · score: 3 (3 votes) · EA · GW

Carl, you write that you are “more sympathetic to consequentialism than the vast majority of people.” The original post by Richard is about utilitarianism and replacement thought experiments but I guess he is also interested in other forms of consequentialism since the kind of objection he talks about can be made against other forms of consequentialism too.

The following you write seems relevant to both utilitarianism and other forms of consequentialism:

I don't think a 100% utilitarian dictator with local charge of a society on Earth removes pragmatic considerations, e.g. what if they are actually a computer simulation designed to provide data about and respond to other civilizations, or the principle of their action provides evidence about what other locally dominant dictators on other planets will do including for other ideologies, or if they contact alien life?

Even if these other pragmatic considerations you mention would not be removed by having control of Earth, the question remains whether they (together with other considerations) are sufficient to make it suboptimal to kill and replace everyone. What if the likelihood that they are in a simulation is not high enough? What if new scientific discoveries about the universe or multiverse indicate that taking into account agents far away from Earth is not so important?

You say,

But you could elaborate on the scenario to stipulate such things not existing in the hypothetical, and get a situation where your character would commit atrocities, and measures to prevent the situation hadn't been taken when the risk was foreseeable.

I don’t mean that the only way to object to the form of consequentialism under consideration is to stipulate away such things and assume they do not exist. One can also object that what perhaps makes it suboptimal to kill and replace everyone are complicated and speculative considerations about living in a simulation or what beings on other planets will do. Maybe your reasoning about such things is flawed somewhere or maybe new scientific discoveries will speak against such considerations. In which case (as I understand you) it may become optimal for the leader we are talking about to kill and replace everyone.

You bring up negative utilitarianism. As I write in my paper, I don’t think negative utilitarianism is worse off than traditional utilitarianism when it comes to these scenarios that involve killing everyone. The same goes for negative vs. traditional consequentialism or the comparison negative vs. traditional consequentialist-leaning morality. I would be happy to discuss that more, but I guess it would be too off-topic given the original post. Perhaps a new separate thread would be appropriate for that.

You write,

That's reason for everyone else to prevent and deter such a person or ideology from gaining the power to commit such atrocities while we can, such as in our current situation.

In that case the ideology (I would say morality) is not restricted to forms of utilitarianism but also includes many forms of consequentialism and views that are consequentialist-leaning. It may also include views that are non-consequentialist but open to the idea that killing is sometimes right if it is done to accomplish a greater goal, and that, for example, place huge importance on the far future, so that far-future concerns make what happens to the few billion humans on Earth a minor consideration. My point is that I think it’s a mistake to talk merely about utilitarianism or consequentialism here. The range of views about which one can reasonably ask ‘would it be right to kill everyone in this situation, according to this theory?’ is much wider.

Comment by simon_knutsson on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-08T15:21:46.727Z · score: 8 (5 votes) · EA · GW

To bite the bullet here would be to accept that it would be morally right to kill and replace everyone with other beings who, collectively, have a (possibly only slightly) greater sum of well-being, if someone could do that.

The following are two similar scenarios:

Traditional Utilitarian Elimination: The sum of positive and negative well-being in the future will be negative if humans or sentient life continues to exist. Traditional utilitarianism implies that it would be right to kill all humans or all sentient beings on Earth painlessly.

Suboptimal Paradise: The world has become a paradise with no suffering. Someone can kill everyone in this paradise and replace them with beings with (possibly only slightly) more well-being in total. Traditional utilitarianism implies that it would be right to do so.

To bite the bullet regarding those two scenarios would be to accept that killing everyone would be morally right in those scenarios.

Comment by simon_knutsson on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-08T11:40:31.380Z · score: 12 (6 votes) · EA · GW

If we are concerned with how vulnerable moral theories such as traditional total act-utilitarianism and various other forms of consequentialism are to replacement arguments, I think much more needs to be said. Here are some examples.

1. Suppose the agent is very powerful, say, the leader of a totalitarian society on Earth that can dominate the other people on Earth. This person has access to technology that could kill and replace either everyone on Earth or perhaps everyone except a cluster of the leader’s close, like-minded allies. Roughly, this person (or the group of like-minded people the leader belongs to) is so powerful that the wishes of others on Earth who disagree can essentially be ignored from a tactical perspective. Would it be optimal for this agent to kill and replace either everyone or, for example, at least everyone in other societies who might otherwise get in the way of the maximization of the sum of well-being?

2. You talk about modifying one’s ideology, self-binding and committing, but there are questions about whether humans can do that. For example, if some agent in the future were about to be able to kill and replace everyone, can you guarantee that this agent would be able to change ideology, self-bind and commit to not killing? It would not be sufficient that some or most humans could change ideology, self-bind and commit.

3. Would it be optimal for every agent in every relevant future situation to change ideology, self-bind or commit to not killing and replacing everyone or billions of individuals? Again, we can consider a powerful, ruthless dictator or totalitarian leader. Assume this person has so far neither modified their ideology nor committed to non-violence. This agent is then in a situation in which the agent could kill and replace everyone. Would it at that time be optimal for the leader to change ideology, self-bind or commit to not killing and replacing everyone?

Comment by simon_knutsson on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-07T14:10:50.546Z · score: 2 (10 votes) · EA · GW

Hi Richard. You ask, “People who identify as utilitarians, do you bite the bullet on such cases? And what is the distribution of opinions amongst academic philosophers who subscribe to utilitarianism?”

Those are good questions, and I hope utilitarians or similar consequentialists reply.

It may be difficult to find out what utilitarians and consequentialists really think of such cases. Such theories could be understood as sometimes prescribing ‘say whatever is optimal to say; that is, say whatever will bring about the best results.’ It might be optimal to pretend to not bite the bullet even though the person actually does.

Regarding the opinions among academic philosophers who subscribe to traditional utilitarianism: I don’t know of many such people who are alive, but a few are Torbjörn Tännsjö, Peter Singer, Yew-Kwang Ng (is my impression), and Katarzyna de Lazari-Radek (is also my impression). And Toby Ord has written, “I am very sympathetic towards Utilitarianism, carefully construed.” Tännsjö (2000) says, “Few people today seem to believe that utilitarianism is a plausible doctrine at all.” Perhaps others could list additional currently living academic philosophers who are traditional utilitarians, but otherwise it’s a very small population when talking about a distribution. Here is a list https://en.wikipedia.org/wiki/List_of_utilitarians#Living, but it includes people who are not academic philosophers, like Krauss, Layard, Lindström, Matheny and Reese, and it lists negative utilitarian David Pearce, and I doubt it is correct regarding the academic philosophers included in the list.

I can’t think of any traditional utilitarian who has discussed the replacement argument (i.e., the one that involves killing and replacing everyone). Tännsjö has bitten a bullet on another issue that involves killing. As I write here https://www.simonknutsson.com/the-world-destruction-argument/#Appendix_Reliability_of_intuitions, Tännsjö thinks that a doctor ought to kill one healthy patient to give her organs to five other patients who need them to survive (if there are no bad side effects). He argues that if this is counterintuitive, that intuition is unreliable partly because it is triggered by something that is not directly morally relevant. The intuition stems from an emotional reluctance to kill in an immediate manner using physical force, which is a heuristic device selected for us by evolution, and we should realize that it is morally irrelevant whether the killing is done using physical force (Tännsjö 2015b, 67–68, 205–6, 278–79). And as I also write in my paper, he has written, among other things, “Let us rejoice with all those who one day hopefully … will take our place in the universe.” I like his way of writing. It is illuminating, he comes across as straightforward, and he often writes as if he is teaching (in a good way). But I could only speculate about what he thinks about the replacement argument against his form of utilitarianism.

Comment by simon_knutsson on EA != minimize suffering · 2016-07-24T07:26:07.422Z · score: 1 (3 votes) · EA · GW

Thank you cdc482 for raising the topic. I agree that describing EA as having only the goal of minimizing suffering would be inaccurate. As would saying that it has the goal of “maximizing the difference between happiness and suffering.” Both would be inaccurate simply because EAs disagree about what the goal should be. William MacAskill’s (a) is reasonable: “to ‘do the most good’ (leaving what ‘goodness’ is undefined).” But ‘do the most good’ would need to be understood broadly, or perhaps rephrased into something roughly like ‘make things as much better as possible’, to also cover views like ‘only reduce as much badness as possible.’

Julia Wise pointed to Toby Ord's essay “Why I'm not a negative utilitarian” related to negative utilitarianism in the EA community. Since I strongly disagree with that text, I want to share my thoughts on it: http://www.simonknutsson.com/thoughts-on-ords-why-im-not-a-negative-utilitarian

Summary: In 2013, Toby Ord published an essay called “Why I’m Not a Negative Utilitarian” on his website. One can regard the essay as an online text or blog post about his thinking about negative utilitarianism (NU) and his motives for not being NU. After all, the title is about why he is not NU. It is fine to publish such texts, and regarded in that way, it is an unusually thoughtful and well-structured text. In contrast, I will discuss the content of the essay regarded as statements about NU that can be illuminating or confusing, or true or false. Regarded in that way, the essay is an inadequate place for understanding NU and the pros and cons of NU.

The main reason is that the essay makes strong claims without making sufficient caveats or pointing the reader to existing publications that challenge the claims. For clarity and to avoid creating misconceptions, Ord should either have added caveats of the kind “I am not an expert on NU. This is my current thinking, but I haven’t looked into the topic thoroughly,” or, if he was aware of the related literature, pointed the reader to it. (I also disagree with many of the statements and arguments that his essay presents, but that is a different question.)

[End of summary]

There are also other commentaries on or replies to Ord’s essay:

- Pearce, David. “A response to Toby Ord's essay Why I Am Not A Negative Utilitarian”
- Contestabile, Bruno. “Why I’m (Not) a Negative Utilitarian – A Review of Toby Ord’s Essay”

Comment by simon_knutsson on A Long-run perspective on strategic cause selection and philanthropy · 2013-11-08T11:01:00.000Z · score: 0 (0 votes) · EA · GW

I see, thanks for your reply.

Comment by simon_knutsson on A Long-run perspective on strategic cause selection and philanthropy · 2013-11-07T15:30:00.000Z · score: 1 (1 votes) · EA · GW

Hi, your text mentions the importance of cause-neutrality but focuses on humanity, e.g. “maximizing good accomplished largely reduces to doing what is best in terms of very long-run outcomes for humanity.” Why don’t you include any other species?

To explain where I’m coming from: To my knowledge, GiveWell and Good Ventures also focus on “humanity” and talk about “humanitarians”, but I’m not familiar with any argument that shows why that focus makes sense (I’d be grateful to be pointed to one). Of course, I don’t expect you to answer on behalf of GW or GV, and I should ask them directly in public; I just mention them to explain that I wonder the same thing about other organizations that write about similar topics.

To me, it makes much more sense to replace ‘humanity’ in your text with ‘beings that can suffer’ or similar.

Thanks!