Am I an Effective Altruist for moral reasons?

post by Diego_Caleiro · 2016-02-10T18:17:53.953Z · score: 0 (14 votes) · EA · GW · Legacy · 20 comments

After Nakul Krishna posted the best critique of Effective Altruism so far, I did what anyone would do. I tried to steelman his opinions into their best version, and read his sources. For the third time, I was being pointed to Bernard Williams, so I conceded, and read Bernard Williams's book Ethics and the Limits of Philosophy. It's a great book, and I'd be curious to hear what Will, Toby, Nick, Amanda, Daniel, Geoff, Jeff and other philosophers in our group have to say about it at some point. But what I want to talk about is what it made me realize: that my reasons for Effective Altruism are not moral reasons. 

When we act, there can be several reasons for our actions, and some of those reasons may be moral in kind. When a utilitarian reasons about a trolley problem, they usually save the 5 people mostly for moral reasons. They consider the situation not from the perspective of physics, or of biology, or of the entropy of the system. They consider which moral agents are participants in the scenario, they reason about how they would like those moral agents (or, in the case of animals, moral patients) to fare in the situation, and once done, they issue a response on whether they would pull the lever or not. 

This is not what got me here, and I suspect not what got many of you here either. 

My reasoning process goes: 

Well, I could analyse this from the perspective of physics. But that seems irrelevant. 

I could analyse it from the perspective of biology. But that also doesn't seem like the most important aspect of a trolley problem. 

I could find out what my selfish preferences are in this situation. Huh, that's interesting. I guess my preferences, given that I don't know any of the minds involved, are a ranking of states of affairs from best to worst: if 6 survive, I prefer that, then 5, and so on. 

I could analyse what morality would issue me to do. This has two parts: 1) Does morality require of me that I do something in particular? and 2) Does morality permit that I do a thing from a specific (unique) set of actions? 

It seems to me that morality certainly permits that I pull the lever, and possibly permits that I don't. Does it require that I pull it? Not so sure. Let us assume for the time being that it does not. 

After doing all this thinking, I pull the lever, save 5 people, kill one, and go home with the feeling of a job well done. 

However, there are two confounding factors here. So far, I have been assuming that I save them for moral reasons, so I backtrack those reasons into the moral theory that would make that action permissible, and even sometimes demanded. I find aggregative consequentialism (usually utilitarianism), and thus I conclude: "I am probably an aggregative consequentialist utilitarian." 

There is another factor, though, which is what I prefer in that situation, and that is the ranking of states of affairs I mentioned previously. Maybe I'm not a utilitarian, and I just want the most minds to be happy.  

I never tried to tell those apart, until Bernard Williams came knocking. He makes several distinctions that are much more fine-grained and deep than my understanding of ethics allows me to explain here; he writes well and knows how to play the philosopher game. Somehow, he made me realize those confounds in my reasoning. So I proceeded to reason about situations in which there is a conflict between the part of my reasoning that says "This is what is moral" and the part that says "I want there to be the most minds having the time of their lives." 

After doing a bit of this tinkering, tweaking knobs here and there in thought experiments, I concluded that my preference for there being most minds having the time of their lives supersedes my morals. When my mind is in conflict between those things I will happily sacrifice the moral action to instead do the thing that makes most minds better off the most. 

So let me add one more strange label to my already elating, if not accurate, "positive utilitarian" badge:

I am an amoral Effective Altruist. 

I do not help people (or computers, animals, and aliens) because I think this is what should be done. I do not do it because it is morally permissible or morally demanded. Like anyone, I have moral uncertainty; maybe some 5% of me is virtue ethicist, or Kantian, or some other perspective. But the point is that even if those parts were winning, I would still go there and pull that lever. Toby or Nick suggested that we use a moral parliament to think about moral uncertainty. Well, if I do, then my conclusion is that I am basically not in a parliamentary system but in some other form of government, and the parliament is not that powerful. I take Effective Altruist actions not because they are what is morally right for me to do, but in spite of what is morally right to do. 

So Nakul Krishna and Bernard Williams may well have, and in fact probably have, reasoned me out of the claim "utilitarianism is the right way to reason morally." That deepened my understanding of morality a fair bit. 

But I'd still pull that goddamn lever. 

So much the worse for Morality. 

20 comments

Comments sorted by top scores.

comment by Castand · 2016-02-11T18:23:37.915Z · score: 5 (9 votes) · EA(p) · GW(p)

I'd be curious to hear what Will, Toby, Nick, Amanda, Daniel, Geoff, Jeff and other philosophers in our group have to say about it at some point.

To give a minor bit of feedback: this use of unexplained first names rubbed me up the wrong way by making EA feel like a cliquish celebrity culture.

comment by Holly · 2016-08-30T20:54:19.045Z · score: 0 (0 votes) · EA(p) · GW(p)

Interestingly, this has the opposite effect on me :P ("Will" is just a bloke; "William MacAskill" is a celebrity).

comment by Diego_Caleiro · 2016-02-11T23:56:37.043Z · score: 0 (0 votes) · EA(p) · GW(p)

I frequently use surnames, but in this case since it was a call to action of sorts, first names seemed more appropriate. Thanks for the feedback though, makes sense.

comment by Denis Drescher (Telofy) · 2016-02-11T09:34:48.358Z · score: 3 (3 votes) · EA(p) · GW(p)

I don’t understand how “I want there to be the most minds having the time of their lives” is different from “aggregative consequentialist utilitarian[ism].” Isn’t it the same, just phrased a bit more informally? Or do you mean it’s not the same as “this is what is moral” because it didn’t give room for the 5% deontology/virtue ethics? But you seem to be arguing in the other direction. Could you elucidate that for me? Thanks!

Also you used “moral uncertainty,” so am I right to infer that you’re arguing from a moral realist perspective or are you referring to uncertainty about your moral preferences?

(To me, acting to optimally satisfy my moral preferences is ipso facto the same as “doing what is moral,” though I would avoid that phrasing lest someone think I wanted to imply that there’s some objective morality.)

comment by kbog · 2016-02-13T00:52:32.991Z · score: 2 (2 votes) · EA(p) · GW(p)

I don’t understand how “I want there to be the most minds having the time of their lives” is different from “aggregative consequentialist utilitarian[ism].” Isn’t it the same, just phrased a bit more informally?

Well one is a moral claim and the other is a personal preference. If I said "I want pasta for dinner" that doesn't imply that my moral theory demands that I eat pasta.

comment by Diego_Caleiro · 2016-02-13T01:15:43.872Z · score: 1 (1 votes) · EA(p) · GW(p)

That.

comment by Denis Drescher (Telofy) · 2016-02-13T15:45:16.703Z · score: 0 (0 votes) · EA(p) · GW(p)

I explain my view a bit more in my reply to Diego below. When I wrote this sentence I wasn’t aware of the nature of the inferential gap between us.

comment by nino · 2016-02-11T11:29:16.432Z · score: 0 (0 votes) · EA(p) · GW(p)

am I right to infer that you’re arguing from a moral realist perspective

If you're not arguing from a moral realist perspective, wouldn't {move the universe into a state I prefer} and {act morally} necessarily be the same because you could define your own moral values to match your preferences?

If morality is subjective, the whole distinction between morals and preferences breaks down.

comment by Diego_Caleiro · 2016-02-11T17:37:36.797Z · score: 0 (0 votes) · EA(p) · GW(p)

Telofy: Trying to figure out the direction of the inferential gap here. Let me try to explain, I don't promise to succeed.

Aggregative consequentialist utilitarianism holds that people in general should value most minds having the times of their lives, where "in general" here actually translates into a "should" operator. A moral operator. There's a distinction between me wanting X and morality suggesting, requiring, or demanding X. Even if X is the same, different things can hold a relation to it.

At the moment I hold both a personal preference relation and a moral one to you having a great time. But if the moral one were dropped (as Williams makes me drop several of my moral reasons), I'd still have the personal one, and it supersedes the moral considerations that could arise otherwise.

Moral Uncertainty: To confess, that was my bad, not disentangling uncertainty about my preferences that happen to be moral, my preferences that happen to coincide with preferences that are moral, and the preferences that morality would, say, require me to have. That was bad philosophy on my part, and I can see Lewis, Chalmers and Muehlhauser blushing at my failure.

I meant the uncertainty I have as an empirical subject in determining which of the reasons for action I find are moral reasons, and, within those, which belong to which moral perspective. For instance, I assign a high credence that breaking a promise is bad from a Kantian standpoint, times a low credence that Kant was right about what is right. So not breaking a promise has a few votes in my parliament, but not nearly as many as giving a speech about EA at UC Berkeley has, because I'm confident that a virtuous person would do that, and I'm somewhat confident it is good from a utilitarian standpoint too, so lots of votes.
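For concreteness, the credence-times-credence vote weighting above can be written out as a toy calculation. All the numbers below are illustrative stand-ins I've made up for this sketch, not credences anyone actually holds:

```python
# Toy sketch of the moral-parliament vote weighting described above.
# Every credence here is an illustrative made-up number.

# P(theory is right)
theory_credence = {"kantian": 0.10, "utilitarian": 0.60, "virtue": 0.30}

# P(theory endorses the action), per action
endorsement = {
    "keep_promise": {"kantian": 0.95, "utilitarian": 0.40, "virtue": 0.70},
    "give_ea_talk": {"kantian": 0.50, "utilitarian": 0.80, "virtue": 0.90},
}

def parliament_votes(action):
    """Votes for an action: sum over theories of
    P(theory is right) * P(theory endorses the action)."""
    return sum(theory_credence[t] * endorsement[action][t]
               for t in theory_credence)

for action in endorsement:
    print(action, round(parliament_votes(action), 3))
```

Under these made-up numbers, the EA talk collects more parliament votes (0.80) than promise-keeping (0.545), mirroring the "few votes" versus "lots of votes" comparison in the paragraph above.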

I disagree that optimally satisfying your moral preferences equals doing what is moral. For one thing, you are not aware of all the moral preferences that, on reflection, you would agree with; for another, you could bias your dedication intensity in a way that, even though you are acting on moral preferences, the outcome is not what is moral all things considered. Furthermore, it is not obvious to me that a human is necessarily compelled to have all the moral preferences that are "given" to them. You can flat out reject 3 preferences, act on all the others, and, in virtue of your moral gap, you would not be doing what is moral, even though you are satisfying all the preferences in your moral preference class.

Nino: I'm not sure where I stand on moral realism (leaning against but weakly). The non-moral realist part of me replies:

wouldn't {move the universe into a state I prefer} and {act morally} necessarily be the same because you could define your own moral values to match your preferences?

Definitely not the same. First of all, to participate in the moral discussion, there is some element of intersubjectivity that kicks in, which outright excludes defining my moral values to a priori equate my preferences; they may a posteriori do so, but the part where they are moral values involves clashing them against something, be it someone else, a society, your future self, a state of pain, or, in the case of moral realism, the moral reality out there.

To argue that my moral values equate all my preferences would be equivalent to universal ethical preference egoism, the hilarious position which holds that the morally right thing to do is for everyone to satisfy my preferences, which would tile the universe with whiteboards, geniuses, ecstatic dance, cuddlepiles, orgasmium, freckles, and the feeling of water in your belly when bodysurfing a warm wave at 3pm, among other things. I don't see a problem with that, but I suppose you do, and that is why it is not moral.

If morality is intersubjective, there is discussion to be had. If it is fully subjective, you still need to determine in which way it is subjective, what a subject is, which operations transfer moral content between subjects if any, what legitimizes you telling me that my morality is subjective, and finally why call it morality at all if you are just talking about subjective preferences.

comment by Denis Drescher (Telofy) · 2016-02-12T12:18:41.968Z · score: 0 (0 votes) · EA(p) · GW(p)

Thanks for bridging the gap!

Why call it morality at all if you are just talking about subjective preferences.

Yeah, that is my current perspective, and I’ve found no meaningful distinction that would allow me to distinguish moral from amoral preferences. What you call intersubjective is something that I consider a strategic concern that follows from wanting to realize my moral preferences. I’ve wondered whether I should count the implications of these strategic concerns into my moral category, but that seemed less parsimonious to me. I’m wary of subjective things and want to keep them contained the same way I want to keep some ugly copypasted code contained, black-boxed in a separate module, so it has no effects on the rest of the code base.

There's a distinction between me wanting X, and morality suggesting, requiring, or demanding X.

I like to use two different words here to make the distinction clearer, moral preferences and moral goals. In both cases you can talk about instrumental and terminal moral preferences/goals. This is how I prefer to distinguish goals from preferences (copypaste from my thesis):

To aid comprehension, however, I will make an artificial distinction of moral preferences and moral goals that becomes meaningful in the case of agent-relative preferences: two people with a personal profit motive share the same preference for profit but their goals are different ones since they are different agents. If they also share the agent-neutral preference for minimizing global suffering, then they also share the same goal of reducing it.

I’ll assume that in this case we’re talking about agent-neutral preferences, so I’ll just use goal here for clarity. If someone has the personal goal of wanting to get good at playing the theremin, then on Tuesday morning, when they’re still groggy from a night of coding and all out of coffee and Modafinil, they’ll want to stay in bed and very much not want to practice the theremin on one level but still want to practice the theremin on another level, a system 2 level, because they know that to become good at it, they’ll need to practice regularly. Here having practiced is an instrumental goal to the (perhaps) terminal goal of becoming good at playing the theremin. You could say that their terminal goal requires or demands them to practice even though they don’t want to. When I had to file and send out donation certificates to donors I was feeling the same way.

I can see Lewis, Chalmers and Muelhauser blushing at my failure.

Aw, hugs!

For one thing you are not aware of all moral preferences that, on reflection you would agree with.

Oops, yes. I should’ve specified that.

For another, you could bias your dedication intensity.

If I understand you correctly, then that is what I tried to capture by “optimally.”

You can flat out reject 3 preferences, act on all others, and in virtue of your moral gap, you would not be doing what is moral, even though you are satisfying all preferences in your moral preference class.

This seems to me like a combination of the two limitations above. A person can decide to not act on moral preferences that they continue to entertain for strategic purposes, e.g., to more effectively cooperate with others on realizing another moral goal. When a person rejects, i.e. no longer entertains a moral preference (assuming such a thing can be willed), and optimally furthers other moral goals of theirs, then I’d say they are doing what is moral (to them).

To argue that my moral values equate all my preferences would be equivalent to universal ethical preference egoism, the hilarious position which holds that the morally right thing to do is for everyone to satisfy my preferences.

Cuddlepiles? Count me in! But these preferences also include “the most minds having the time of their lives.” I would put all these preferences on the same qualitative footing, but let’s say you care comparatively little about the whiteboards and a lot about the happy minds and the ecstatic dance. Let’s further assume that a lot of people out there are fairly neutral about the dance (at least so long as they don’t have to dance) but excited about the happy minds. When you decide to put the realization of the dance goal on the back-burner and concentrate on maximizing those happy minds, you’ll have an easy time finding a lot of cooperation partners, and together you actually have a bit of a shot of nudging the world in that direction. If you concentrated on the dance goal, however, you’d find much fewer partners and make much less progress, incurring a large opportunity cost in goal realization. Hence pursuing this goal would be less moral by (lacking) dint of its intersubjective tractability.

So yes, to recap, according to my understanding, everyone has, from your perspective, the moral obligation to satisfy your various goals. However, other people disagree, particularly on agent-relative goals but also at times on agent-neutral ones. Just as you require resources to realize your goals, you often also require cooperation from others, and costs and lacking tractability make some goals more and others less costly to attain. Hence, the moral thing to do is to minimize one's opportunity cost in goal realization.

Please tell me if I’m going wrong somewhere. Thanks!

comment by Diego_Caleiro · 2016-02-13T01:07:38.168Z · score: 0 (0 votes) · EA(p) · GW(p)

I really appreciate your point about intersubjective tractability. It enters the question of how much we should let empirical and practical considerations spill into our moral preferences ("ought implies can", for example; does it also imply "in a not extremely hard to coordinate way"?)

At large I'd say that you are talking about how to be an agenty moral agent. I'm not sure morality requires being agenty, but it certainly benefits from it.

Bias dedication intensity: I meant something orthogonal to optimality. Dedicating only to moral preferences, but more to some that actually don't have that great a standing, and less to others which normally do the heavy lifting (don't you love it when philosophers talk about this "heavy lifting"?). So doing it non-optimally.

comment by joaolkf · 2016-02-16T21:56:38.923Z · score: 1 (1 votes) · EA(p) · GW(p)

Why not conclude so much the worse for ought, hedonism, or impersonal morality? There are many other moral theories built away from these notions which would not lead you to these conclusions – of course, this does not mean they ignore these notions. If this simplistic moral theory makes you want to abandon morality, please abandon the theory.

I find the idea that there are valid reasons to act that are not moral reasons weird; I think some folks call them prudential reasons. It seems that your reason to be an EA is a moral reason if utilitarianism (plus a bunch of other specific assumptions) is right, and "just a reason" if it isn't. But if increasing others' welfare is not producing value - or is not right, or whatever - what is your reason for doing it? Is it due to some sort of moral akrasia? You know it is not the right thing to do, but you do it nevertheless? It seems there would only be bad reasons for you to act this way.

If you are not acting like you think you should after having complete information and moral knowledge, perfect motivation and reasoning capacity, then it does not seem like you are acting on prudential reasons, it seems you are being unreasonable. If you are acting on the best of your limited knowledge and capacities, it seems you are acting for moral reasons. These limitations might explain why you acted in a certain sub-optimal way, but they do not seem to constitute your reason to act.

Suppose the scenario where you are stuck on a desert island with another starving person with a slightly higher chance of survival (say, he is slightly healthier than you). There’s absolutely no food and you know that the best shot for at least one of you surviving is if one eats the other. He comes to attack you. Some forms of utilitarianism would say you ought to let him kill you. Any slight reaction would be immoral. If later on people find out you fought for your life, killed the other person and survived, the right thing for them to say would be “He did the wrong thing and had no right to defend his life.” The intuition you have the right to self-defence would be simply mistaken; there is no moral basis for it.

But we need not abandon this intuition, and that some forms of utilitarianism require us to do so will always be a point against them - in a similar manner that the intuition that sentient pleasure is good is a point for them. It would be morally right to defend yourself in many other moral systems, including more elaborate forms of utilitarianism. You may believe people ought to have the right of self-defence as a deontological principle on its own, or even for utilitarian reasons (e.g., society works better that way). There might be impersonal reasons to have the right to put your personal interest in your survival above the interest that another person with slightly higher life expectancy survives. Hence, even if impersonal reasons are all the moral reasons there are, insofar as there are impersonal reasons for people to have personal reasons, these latter are moral reasons.
If someone is consistently not acting like he thinks he should and upon reflection there is no change in behaviour or cognitive dissonance, then that person either is a hypocrite - he does not really think he should act that way - or a psychopath - he is incapable of moral reasoning. Claiming one does not have the right to self-defence even though you would feel you have strong reasons not to let the other person kill you seems like an instance of hypocrisy. Being an EA while fully knowing maximizing welfare is not the right thing to do seems like an instance of psychopathy (in the odd case EA is only about maximizing welfare).

Of course, besides these two pathologies, you might have some form of cognitive dissonance or other accidental failures. Perhaps you are not really that sure maximizing welfare is not the right thing to do. You might not have the will to commit to do the things you should do in case right action consists in something more complicated than maximizing welfare. You might be overwhelmed by a strong sense you have the right to life. It might not be practical at the time to consider these other complicated things. You might not know which moral theory is right. These are all accidental things clouding or limiting your capacity for moral reasoning, things you should prefer to overcome.

This would be a way of saving the system of morality by attributing any failure to act right to accidents, uncertainties or pathologies. I prefer this solution of sophisticating the way moral reasons behave than to claim that there are valid reasons to act that are not moral reasons; the latter looks, even more than the former, like shielding the system of morality from the real world. If there are objective moral truths, they better have something to do with what people want to want to do upon reflection.

But perhaps there is no system to be had. Some other philosophers believe the limitations above are inherent to moral reason, and that it is a mistake to think moral reasoning should function the same way pure reasoning does. The right thing to do will always be an open question, and all moral reasoning can do is recommend certain actions over others, never require them. If there is more than one fundamental value, or if this one fundamental value is epistemically inaccessible, I see no other way out besides this solution. Incommensurable fundamental values are incompatible with pure rationality in its classical form. Moreover, if the fundamental value is simply hard to access, this solution is at least the most practical one and the one we should use in most of applied ethics until we come up with Theory X. (In fact, it is the solution the US Supreme Court adopts.)

I personally think there is a danger in going about believing that one believes in some simple moral theory while ignoring it whenever it feels right. Pretending to be able to abandon morality altogether would be another danger. How actually believing and following these simplistic theories fares against these latter two options is uncertain. If, as in Williams's joke, one way of acting inhumanely is to act on certain kinds of principles, it does not fare very well.

It seems to me Williams made his point; or the point I wished him to make to you. You are saying “if this is morality, I reject it”. Good. Let’s look for one you can accept.

comment by Diego_Caleiro · 2016-02-16T23:38:51.664Z · score: 1 (1 votes) · EA(p) · GW(p)

I find the idea that there are valid reasons to act that are not moral reasons weird; I think some folks call them prudential reasons. It seems that your reason to be an EA is a moral reason if utilitarianism is right, and "just a reason" if it isn't. But if not, what is your reason for doing it?

My understanding of prudential reasons is that they are reasons of the same class as those I have to want to live when someone points a gun at me. They are reasons that relate me to my own preferences and survival, not as a recipient of the utilitarian good, but as the thing that I want. They are more like my desire for a back massage than like my desire for a better world. A function from my actions to my reasons to act would be partially a moral function, partially a prudential function.

If you are not acting like you think you should after having complete information and moral knowledge, perfect motivation and reasoning capacity, then it does not seem like you are acting on prudential reasons, it seems you are being unreasonable.

Appearances deceive here because "that I should X" does not imply "that I think I should X". I agree that if both I should X and I think I should X, then by doing Y=/=X I'm just being unreasonable. But I deny that mere knowledge that I should X implies that I think I should X. I translate:

I should X = A/The moral function connects my potential actions to set X.
I think I should X = The convolution of the moral function and my prudential function takes my potential actions to set X.

In your desert scenario, I think I should (convolution) defend myself, though I know I should (morality) not.

Hence, even if impersonal reasons are all the moral reasons there are, insofar as there are impersonal reasons for people to have personal reasons these latter are moral reasons.

We are in disagreement. My understanding is that the four quadrants can be empty or full. There can be impartial reasons for personal reasons, personal reasons for impartial reasons, impartial reasons for impartial reasons and personal reasons for personal reasons. Of course not all people will share personal reasons, and depending on which moral theory is correct, there may well be distinctions in impersonal reasons as well.

Being an EA while fully knowing maximizing welfare is not the right thing to do seems like an instance of psychopathy (in the odd case EA is only about maximizing welfare). Of course, besides these two pathologies, you might have some form of cognitive dissonance or other accidental failures.

In what sense do you mean psychopathy? I can see ways in which I would agree with you, and ways in which not.

Perhaps you are not really that sure maximizing welfare is not the right thing to do.

Most of my probability mass is that maximizing welfare is not the right thing to do, but maximizing a combination of identity, complexity and welfare is.

I prefer this solution of sophisticating the way moral reasons behave than to claim that there are valid reasons to act that are not moral reasons; the latter looks, even more than the former, like shielding the system of morality from the real world. If there are objective moral truths, they better have something to do with what people want to want to do upon reflection.

One possibility is that morality is a function from person time slices to a set of person time slices, and the size to which you expand your moral circle is not determined a priori. This would entail that my reasons to act morally only when considering time slices that have personal identity 60%+ with me would look a lot like prudential reasons, whereas my reasons to act morally accounting for all time slices of minds in this quantum branch and its descendants would be very distinct. The root theory would be this function.

The right thing to do will always be an open question, and all moral reasoning can do is recommend certain actions over others, never to require. If there is more than one fundamental value, or if this one fundamental value is epistemically inaccessible, I see no other way out besides this solution.

Seems plausible to me.

Incommensurable fundamental values are incompatible with pure rationality in its classical form.

Do you just mean VNM axioms? It seems to me that at least token commensurability certainly obtains. Type commensurability quite likely obtains. The problem is that people want the commensurability ratio to be linear on measure, which I see no justification for.

It seems to me Williams made his point; or the point I wished him to make to you. You are saying “if this is morality, I reject it”. Good. Let’s look for one you can accept.

I would look for one I can accept if I were given sufficient (convoluted) reasons to do so. At the moment it seems to me that all reasonable people are either some type of utilitarian in practice, or are called Bernard Williams. Until I get pointed thrice to another piece that may overwhelm the sentiment I was left with, I see no reason to enter the exploration stage. For the time being, the EA in me is at peace.

comment by joaolkf · 2016-02-17T15:34:55.566Z · score: 1 (1 votes) · EA(p) · GW(p)

My understanding of prudential reasons is that they are reasons of the same class as those I have to want to live when someone points a gun at me. They are reasons that relate me to my own preferences and survival, not as a recipient of the utilitarian good, but as the thing that I want. They are more like my desire for a back massage than like my desire for a better world. A function from my actions to my reasons to act would be partially a moral function, partially a prudential function.

That seems about right under some moral theories. I would not want to distinguish being the recipient of the utilitarian good and getting back massages; I would want to say getting back massages instantiates the utilitarian good. According to this framework, the only thing these prudential reasons capture that is not in impersonal reasons themselves is the fact that people give more weight to themselves than to others, but I would like to argue there are impersonal reasons for allowing them to do so. If that fails, then I would call these prudential reasons pure personal reasons, but I would not remove them from the realm of moral reasons. There seem to be established moral philosophers who tinker with apparently similar types of solutions. (I do stress the "apparently", given that I have not read them fully or fully understood what I read.)

Appearances deceive here because "that I should X" does not imply "that I think I should X". I agree that if both I should X and I think I should X, then by doing Y=/=X I'm just being unreasoable. But I deny that mere knowledge that I should X implies that I think I should X.

They need not imply, but I would like a framework where they do under ideal circumstances. In that framework - which I paraphrase from Lewis - if I know a certain moral fact, e.g., that something is one of my fundamental values, then I will value it (this wouldn’t obtain if you are a hypocrite, in which case it wouldn’t be knowledge). If I value it, and if I desire as I desire to desire (which wouldn’t obtain in moral akrasia), then I will desire it. If I desire it, and if this desire is not outweighed by other conflicting desires (either due to low-level desire multiplicity or high-level moral uncertainty), and if I have the moral reasoning to do what serves my desires according to my beliefs (which wouldn't obtain for a psychopath), then I will pursue it. And if my relevant beliefs are near enough true, then I will pursue it as effectively as possible. I concede valuing something may not lead to pursuing it, but only if something goes wrong in this chain of deductions. Further, I claim this chain defines what value is.

I should X = A/The moral function connects my potential actions to set X. I think I should X = The convolution of the moral function and my prudential function takes my potential actions to set X.

I’m unsure I got your notation. =/= means different? What is the meaning of “/” in “A/The…”?

In your desert scenario, I think I should (convolution) defend myself, though I know I should (morality) not.

I would claim you are mistaken about your moral facts in this instance.

We are in disagreement. My understanding is that the four quadrants can be empty or full. There can be impersonal reasons for personal reasons, personal reasons for impersonal reasons, impersonal reasons for impersonal reasons and personal reasons for personal reasons. Of course not all people will share personal reasons, and depending on which moral theory is correct, there may well be distinctions in impersonal reasons as well.

What leads you to believe we are in disagreement if my claim was just that one of the quadrants is full?

In what sense do you mean psychopathy? I can see ways in which I would agree with you, and ways in which not.

I mean failure to exercise moral reasoning. You would be right about what you value, you would desire as you desire to desire, have all the relevant beliefs right, have no conflicting desires or values, but you would not act to serve your desires according to your beliefs. In your instance things would be more complicated given that it involves knowing a negation. Perhaps we can go about it like this. You would be right that maximizing welfare is not your fundamental value, you would have the motivation to stop solely desiring to desire welfare, you would cease to desire welfare, there would be no other desire inducing a desire for welfare, there would be no other value inducing a desire for welfare, but you would fail to pursue what serves your desire. This fits well with the empirical fact that psychopaths have low IQ and low levels of achievement. Personally, I would bet your problem is more with allowing yourself moral akrasia with the excuse of moral uncertainty.

Most of my probability mass is that maximizing welfare is not the right thing to do, but maximizing a combination of identity, complexity and welfare is.

Hence, my framework says you ought to pursue ecstatic dance every weekend.

One possibility is that morality is a function from person time slices to a set of person time slices, and the size to which you expand your moral circle is not determined a priori. This would entail that my reasons to act morally when considering only time slices that have 60%+ personal identity with me would look a lot like prudential reasons, whereas my reasons to act morally accounting for all time slices of minds in this quantum branch and its descendants would be very distinct. The root theory would be this function.

Why just minds? What determines the moral circle? Why does the core need to be excluded from morality? I claim these are worthwhile questions.

Seems plausible to me.

If this is true, maximizing welfare cannot be the fundamental value, because there is not anything that both can be maximized and is epistemically accessible.

Do you just mean the VNM axioms? It seems to me that at least token commensurability certainly obtains. Type commensurability quite likely obtains. The problem is that people want the commensurability ratio to be linear in measure, which I see no justification for.

It is certainly true of VNM; I think it is true of a lot more of what we mean by rationality. I'm not sure I understood your token/type distinction, but it seems to me that token commensurability can only obtain if there is only one type. It does not matter if it is linear, exponential or whatever: if there is a common measure, that would mean this measure is the fundamental value. It might also be that the function is not continuous, which would mean rationality has a few black spots (or that value monism has, which I claim are the same thing).

I would look for one I could accept if I were given sufficient (convoluted) reasons to do so. At the moment it seems to me that all reasonable people are either some type of utilitarian in practice, or are called Bernard Williams. Until I am pointed thrice to another piece that might overwhelm the sentiment I was left with, I see no reason to enter the exploration stage. For the time being, the EA in me is at peace.

I know a lot of reasonable philosophers who are not utilitarians, and most of those who are utilitarians are not mainstream ones. I also believe the far future (e.g. Nick Beckstead) or future generations (e.g. Samuel Scheffler) is a more general concern than welfare monism, and that many utilitarians do not share this concern (I’m certain I know a few). I believe if you are more certain about the value of the future than about welfare being the single value, you ought to expand your horizons beyond utilitarianism. It would be hard to provide another piece as convincing as Williams, but you will find an abundance of all sorts of reasonable non-utilitarian proposals. I already mentioned Jonathan Dancy (e.g. http://media.philosophy.ox.ac.uk/moral/TT15_JD.mp4), my Nozick’s Cube, value pluralism and so on. Obviously, it is not advisable to let these matters depend on being pointed somewhere.

comment by Diego_Caleiro · 2016-02-17T22:20:06.357Z · score: 0 (0 votes) · EA(p) · GW(p)

They need not imply, but I would like a framework where they do under ideal circumstances. In that framework - which I paraphrase from Lewis - if I know a certain moral fact, e.g., that something is one of my fundamental values, then I will value it (this wouldn’t obtain if you are a hypocrite, in which case it wouldn’t be knowledge).

I should X = A/The moral function connects my potential actions to set X. I think I should X = The convolution of the moral function and my prudential function takes my potential actions to set X.

I’m unsure I got your notation. =/= means different? Yes. What is the meaning of “/” in “A/The…”? The same as in person/persons; it means either.

In what sense do you mean psychopathy? I can see ways in which I would agree with you, and ways in which not.

I mean failure to exercise moral reasoning. You would be right about what you value, you would desire as you desire to desire, have all the relevant beliefs right, have no conflicting desires or values, but you would not act to serve your desires according to your beliefs. In your instance things would be more complicated given that it involves knowing a negation. Perhaps we can go about it like this. You would be right that maximizing welfare is not your fundamental value, you would have the motivation to stop solely desiring to desire welfare, you would cease to desire welfare, there would be no other desire inducing a desire for welfare, there would be no other value inducing a desire for welfare, but you would fail to pursue what serves your desire. This fits well with the empirical fact that psychopaths have low IQ and low levels of achievement. Personally, I would bet your problem is more with allowing yourself moral akrasia with the excuse of moral uncertainty.

I don't think you carved reality at the joints here, so let me do the heavy lifting: the distinction between our paradigms seems to be that I am using weightings for values and you are using binaries. Either you deem something a moral value of mine or not. I, however, think of it as: I have 100% of my future actions left to do; how do I allocate my future resources towards what I value? Part of them will be dedicated to moral goods, and other parts won't. So I do think I have moral values which I'll pay a high opportunity cost for; I just don't find them to take a load as large as the personal values, which happen to include actually implementing some sort of Max(Worldwide Welfare) up to a Brownian distance from what is maximally good. My point, overall, is that moral uncertainty is only part of the problem. The big problem is the amoral uncertainty, which contains the moral uncertainty as a subset.

Why just minds? What determines the moral circle? Why does the core need to be excluded from morality? I claim these are worthwhile questions.

Just minds because most of the value seems to lie in mental states; the core is excluded from morality by definition of morality. My immediate one-second self, when thinking only about itself having an experience, simply is not a participant in the moral debate. There needs to be some possibility of reflection or debate for there to be morality; it's a minimum complexity requirement (which, by the way, makes my Complexity value seem more reasonable).

If this is true, maximizing welfare cannot be the fundamental value because there is not anything that can and is epistemically accessible.

Approximate maximization under a penalty of distance from the maximally best outcome, and let your other values drift within that constraint/attractor.

Do you just mean VNM axioms? It seems to me that at least token commensurability certainly obtains. Type commensurability quite likely obtains. The problem is that people want the commensurability ratio to be linear on measure, which I see no justification for.

It is certainly true of VNM, I think it is true of a lot more of what we mean by rationality. Not sure I understood your token/type token, but it seems to me that token commensurability can only obtain if there is only one type. It does not matter if it is linear, exponential or whatever, if there is a common measure it would mean this measure is the fundamental value. It might also be that the function is not continuous, which would mean rationality has a few black spots (or that value monism has, which I claim are the same thing).

I was referring to the trivial case where the states of the world are actually better or worse in the way they are (token identity), and where, if another world has the same properties this one has (type identity), the moral rankings would also be the same.

About black spots in value monism, it seems that dealing with infinities leads to paradoxes. I'm unaware of what else would be in this class.

I know a lot of reasonable philosophers who are not utilitarians, and most of those who are utilitarians are not mainstream ones. I also believe the far future (e.g. Nick Beckstead) or future generations (e.g. Samuel Scheffler) is a more general concern than welfare monism, and that many utilitarians do not share this concern (I’m certain I know a few). I believe if you are more certain about the value of the future than about welfare being the single value, you ought to expand your horizons beyond utilitarianism. It would be hard to provide another piece as convincing as Williams, but you will find an abundance of all sorts of reasonable non-utilitarian proposals. I already mentioned Jonathan Dancy (e.g. http://media.philosophy.ox.ac.uk/moral/TT15_JD.mp4), my Nozick’s Cube, value pluralism and so on. Obviously, it is not advisable to let these matters depend on being pointed somewhere.

My understanding is that by valuing complexity and identity in addition to happiness I already am professing to be a moral pluralist. It also seems that I have boundary condition shadows, where the moral value of extremely small amounts of these things is undefined, in the same way that a color is undefined without tone, saturation and hue.

comment by kbog · 2016-02-13T01:03:00.145Z · score: 0 (0 votes) · EA(p) · GW(p)

Metaethics has always been tremendously confused about how to turn moral demands into psychological motivation, so I can see the appeal of dropping the whole paradigm and focusing on amoral motivations.

But I don't see the strength of the argument against robust consequentialist motivations. I read Nakul's piece, and of course it was useless, because it's a witty journal entry rather than a work of moral philosophy and contains no rigorous argument. Williams' point, to my understanding, is that consequentialism doesn't provide a proper account of how people conceive of morality and think of it on a deep personal level. If that's it, then I don't see any reason to believe it, because I can think of no reason that we should expect coherence between intuitions and morality in the first place. Perhaps you could TL;DR what it was about the book that convinced you - I've been thinking of reading it for a while, but of course as a consequentialist I have other things to do.

comment by Tom_Davidson · 2016-02-11T12:10:27.646Z · score: 0 (0 votes) · EA(p) · GW(p)

I found Nakul's article very interesting too, but am surprised at what it led you to conclude.

I didn't think the article was challenging the claim that doing paradigmatic EA activities was moral. I thought Nakul was suggesting that doing them wasn't obligatory, and that the consequentialist reasons for doing them could be overridden by an individual's projects, duties and passions. He was pushing against the idea that EA can demand that everyone support it.

It seems like your personal projects would lead you to do EA activities. So I'm surprised you judge EA activities to be less moral than alternatives. Which activities, and why?

I would have expected you to conclude something like "Doing EA activities isn't morally required of everyone; for some people it isn't the right thing to do; but for me it absolutely is the right thing to do".

comment by Diego_Caleiro · 2016-02-11T17:54:26.541Z · score: 1 (1 votes) · EA(p) · GW(p)

Agreed with the first two paragraphs.

Activities that are more moral than EA for me: at the moment I think working directly on assembling and conveying knowledge in philosophy and psychology to the AI safety community has higher expected value. I'm taking the human-compatible AI course at Berkeley with Stuart Russell, and I hang out at MIRI a lot, so in theory I'm in a good position to do that research, and some of the time I work on it. But I don't work on it all the time; I would if I got funding for our proposal.

But actually I was referring to a counterfactual world where EA activities are less aligned with what I see as morally right than this world. There's a dimension, call it "skepticism about utilitarianism" that reading Bernard Williams made me move along. If I moved more and more along that dimension, I'd still do EA activities, that's all.

Your expectation is partially correct. I assign 3% to EA activities being morally required of everyone; I feel personally more required to do them than that, at around 25% (because this is the dream time, I was lucky, I'm in a high-leverage position, etc.), but although I think it is right for me to do them, I don't do them because it's right, and that's my overall point.

comment by markobalogh · 2016-02-11T00:45:12.248Z · score: 0 (0 votes) · EA(p) · GW(p)

One thing which I think you should consider is the idea that one's preferences become "tuned" to one's moral beliefs. I would challenge the sentence in which you claim that "even if [virtue ethics/Kant] were winning, I would still go there and pull that lever"...for wouldn't the idea that virtue ethics is winning be contradicted by your choosing to pull the lever? How do we know when we are fully convinced by an ethical theory? We measure our conviction to follow it. If you are fully convinced of utilitarianism, for example, your preferences will reflect that---for how could you possibly prefer not to follow an ethical theory which you completely believe in? It is not possible to say something like "I know for certain that this is right, but I prefer not to do it". What is really happening in a situation like this is that you actually give some ethical priority to your own preferences---hence you are partially an ethical egoist.

To map this onto your situation, I would interpret your writing above as meaning that you are not fully convinced of the ethical theories you listed---you find that reason guides you to utilitarianism, Kantianism, whatever it may be, but you are overestimating your own certainty. You say that you take EA actions in spite of what is morally right to do. If you were truly convinced that something else were morally right, you would do it. Why wouldn't you?

If I observe that you do something which qualifies as an EA action, and then ask you why you did it, you might say something like "Because it is my preference to do it, even though I know that X is morally right", X being some alternative action. What I'm trying to say---apologies because this idea is difficult to communicate clearly---is that when you say "Because it is my preference", you are offering your preference as valid justification for your actions. This form of justification is a principle of ethical egoism, so some non-zero percentage of your ethical commitments must be toward yourself. Even though you claimed to be certain that X is right, I have reason to challenge your own certainty, because of the justification you gave for the action. This is certainly a semantics issue in some sense, turning on what we consider to qualify as "belief" in an ethical system.

comment by Diego_Caleiro · 2016-02-11T09:03:44.334Z · score: 0 (0 votes) · EA(p) · GW(p)

It seems from your comment that you feel the moral obligation strongly. Like the Oxford student cited by Krishna, you don't want to do what you want to do; you want to do what you ought to do.

I don't experience that feeling, so let me reply to your questions:

Wouldn't virtue ethics winning be contradicted by your pulling the lever?

Not really. Pulling the lever is what I would do; it is what I would think I have reason to do, but it is not what I think I would have moral reason to do. I would reason that a virtuous person (ex hypothesi) wouldn't kill someone, and that the moral thing to do is to let the lever be. Then I would act on my preference, which is stronger than my preference that the moral thing be done. The only case where a contradiction would arise is if you subscribe to all reasons for action being moral reasons, or to moral reasons having the ultimate call in all action choices. I don't.

In the same spirit, you suggest I'm an ethical egoist. This is because when you simulated me in this lever conflict, you thought "morality comes first", so you dropped the altruism requirement to make my beliefs compatible with my action. When I reason, however, I think "morality is one of the things I should consider here", and it doesn't win over my preference for most minds having an exultant time. So I go with my preference even when it is against morality. This is orthogonal to Ethical Egoism, a position that I consider both despicable and naïve, to be frank. (Naïve because I know the subagents with whom I have personal identity care for themselves about more than just happiness or their preference satisfaction, and despicable because it is one thing to be a selfish prick, understandable in an unfair universe into which we are thrown for a finite life with no given meaning or sensible narrative; it is another thing to advocate a moral position in which you want everyone to be a selfish prick, and to believe that being a selfish prick is the right thing to do. That I find preposterous at a non-philosophical level.)

If you were truly convinced that something else were morally right, you would do it. Why wouldn't you?

Because I don't always do what I should do. In fact I nearly never do what is morally best. I try hard not to stay too far from the target, but I flinch from staring into the void almost as much as the average EA Joe. I really prefer knowing what the moral thing to do is in a situation; it is very informative and helpful for assessing what I in fact will do, but it is not compelling above and beyond the other contextual considerations at hand. A practical necessity, a failure of reasoning, a little momentary selfishness, and an appreciation for aesthetic values have all been known to cause me to act for non-moral reasons at times. And of course I often did what I should do too; I often acted the moral way.

To reaffirm, we disagree on what Ethical Egoism means. I take it to be the position that individuals in general ought to be egoists (say, some of the time). You seem to be saying that if I use any egoistic reason to justify my action, then merely in virtue of my using it as justification I mean that everyone should be permitted to do the same. That makes sense if your conception of just-ice is contractualist and you were assuming just-ification has a strong connection to just-ice. From me to me, I take it to be a justification (between my selves, perhaps), but from me to you, you could take it as an explanation of my behavior, to avoid the implications you assign to the concept of justification as demanding the choice of ethical egoism.

I'm not sure what my ethical (meta-ethical) position is, but I am pretty certain it isn't, even in part, ethical egoism.