Introduction to the Philosophy of Well-Being

post by finm · 2020-12-07T12:32:41.592Z · EA · GW · 19 comments

Contents

  1. The Concept
  2. Theories of Well-Being
    Hedonism
    Desire Satisfaction
    Objective List Theories
  3. Practical Implications

This is a cross-post from the Happier Lives Institute website. It summarises what philosophers do (and don't) mean by 'well-being', introduces the three main rival accounts of what well-being is, then briefly considers their theoretical strengths and weaknesses.

Authors: Fin Moorhouse, Michael Plant, Tom Houlden.

1. The Concept

In philosophy, ‘well-being’ refers to what is intrinsically or non-instrumentally good for someone. Whereas instrumental goods like wealth are valuable only as a means to something else, well-being is what ultimately makes someone’s life go well. Understanding what ultimately makes life go well is of obvious value: every plausible ethical view holds that well-being matters in principle, and in practice we do put great effort into improving the well-being of ourselves and others.

2. Theories of Well-Being

Theories of well-being are generally divided into three families: hedonism, desire-based views, and objective list views (Parfit, 1984, p.493). Even within each, there is scope for significant disagreement. For instance, it wouldn’t make sense to refer to ‘the objective list theory’, as there are so many variants. In this short article, we will briefly introduce the ‘classic’ versions of each family, before considering their theoretical appeal and key objections.

Note that the question of what well-being consists in is separate from the questions of whether it is the only intrinsic good, and whether we ought always and only to maximise it—questions this article does not address (Schroeder, 2016).

2.1. Hedonism

Hedonism is the claim that well-being consists in an overall positive balance of pleasure over pain (both broadly construed). In other words, our lives go best when they have the greatest amount of happiness.

The positive case for hedonism is obvious: what is good for me is plausibly whatever feels good to me, and vice versa. What makes agony bad is, in the first instance, how it feels. Further, it seems strange to think that something I never experience can make a difference to my well-being—if I never feel its impact, how can it have affected me? 

Some clarificatory points. First, hedonism claims that pleasure or happiness are valuable irrespective of whether we think they are valuable—if they were only valuable because we desired them, hedonism would amount to a kind of desire theory (which we come to next). Second, the word ‘hedonism’ means something different as a philosophical term than it does in ordinary usage, where it is associated with reckless or indulgent pleasure-seeking. Third, accepting hedonism does not entail that we should act only in pursuit of our own well-being—that view is called ‘egoism’. There is nothing irrational about accepting that what makes everyone’s lives go well is happiness, but that my happiness is not more morally valuable than yours.

Perhaps the most prominent objection to hedonism is the ‘experience machine’ (Nozick, 1974, pp. 42-45). You are offered the opportunity to spend the remainder of your life inside a virtual reality capable of simulating any experience, all of which are subjectively indistinguishable from the genuine article. Once plugged in, you could indulge in any number of pleasurable or happy experiences without ever wishing or even knowing you had left the real world. Would plugging in make your life go better? You might object to plugging in because you could not then fulfil your moral duty to help others. Note, however, that the question concerns what makes your life go well for you. To sidestep this moral objection, just imagine you were the last living person and therefore have no moral duties to anyone else.

The hedonist is committed to saying that the experience machine would make your life go better. But most people react with aversion to the prospect of leaving behind the real world for a simulated existence, however pleasurable or happy. The main reason they give is that they desire to do many things in the actual world; the mere experience of (apparently) doing them is no substitute for in fact doing them. This suggests a second family of theories.

2.2. Desire Satisfaction

Desire or preference satisfaction theories appeal instead to the fulfilment of a person’s desires as what makes their life go well. Importantly, it is not the feeling or experience of a desire being satisfied which is being said to matter but that a desire is in fact satisfied. You want to have climbed Everest, rather than merely believe that you have. If one only wanted to feel satisfied, the view would also be vulnerable to the experience machine objection (although Heathwood (2006, 2015) does argue for such a ‘hybrid’ view, where pleasure is understood as the subjective satisfaction of desire).

Different theories disagree over which desires count. The simplest ‘summative’ answer says that the more desires are fulfilled, the better. Against this, Parfit (1984, p.497) suggests a case where someone is given the chance to become addicted to a readily available and inexpensive drug, which delivers no pleasure on being taken but promises intense pains of withdrawal. Becoming addicted allows the person to fulfil many more desires than not becoming addicted, but it does not seem addiction would make their life go better.

A more plausible desire theory must therefore find a way of limiting the kinds of desires that count. One such strategy appeals to those desires a person would have if they were better or fully informed about the relevant facts. Another strategy privileges higher-order over first-order desires (a ‘higher-order desire’ is a desire about what one desires). On ‘global’ desire theories, a desire counts just in case it is “about some part of one's life considered as a whole, or is about one's whole life” (Parfit, 1984). Such a theory can explain that addiction is bad for someone because they desire not to have an addicted life.

Perhaps surprisingly, the philosophical literature has paid relatively little attention to life satisfaction—an overall judgement about how well one’s life is going. Yet, measures of life satisfaction are popular, indeed increasingly so, in both the social sciences and in policymaking (Diener, Lucas, and Oishi, 2018). While life satisfaction theories of well-being are usually understood as distinct from desire theories (Haybron, 2016), life satisfaction might instead be taken as an aggregate of one’s global desires: I am satisfied with my life to the extent that it achieves my overall desires about it.

Plant (2020) argues that life satisfaction/global desire theories are vulnerable to ‘automaximisation’: because such views hold the only desires that matter are ones individuals choose for themselves, individuals can make their lives go best by choosing to only have desires that are trivially easy to fulfil—such as desiring to have (or not have) hair. Another issue for such views is that they seem to offer no way of ascribing well-being to beings which lack desires about how their lives go overall, such as dogs, who we tend to think do have well-being.

Whatever form they take, desire theories treat the fulfilment of (certain) desires as what ultimately makes someone’s life go well. A general objection to this is that the objects of desire (friendships, success, happiness) are not valuable because they are desired; rather they are desired because they are valuable—and so desire theories have got the explanation the wrong way around.

2.3. Objective List Theories

All objective list theories claim there can be things which make a person’s life go better which are neither pleasurable to nor desired by them. Classic items for this list include success, friendship, knowledge, virtuous behaviour, and health. Such items are ‘objective’ in the sense of being concerned with facts beyond both a person’s conscious experience and their desires. Facts about my physical health, for instance, can be true or false regardless of whether I value my health, enjoy my health, or believe I am healthy. Defenders of objective list theories might object to the previous two monistic theories on the grounds that they are naively simplistic in holding that well-being can be reduced to a single element: life is far more complicated than that (Fletcher, 2013).

Yet, pluralism—the view that more than one thing makes up well-being—faces its own challenges. Do the items have a characteristic feature in common? If they do, why not replace the list with that single feature? If autonomy, wisdom, and pleasure are each intrinsically good because we desire them, is it not more straightforward to say that desire fulfilment is what matters? Yet, if they have no common characteristic, then in virtue of what are these items, and no others, on the list? 

One option is to point towards the common characteristic of perfectibility—either of our species’ nature, or of achievements in “art, science, and culture” (Rawls, 1971, p.325). For instance, Foot (2003) argues that what is good for me is what makes me a more perfect human being—much as what is good for a cactus is what makes for a flourishing cactus.

Another approach is to appeal to the method(s) that generate the list. Foremost among these methods is ‘reflective judgement’ or ‘refined intuition’ (Rawls, 1971)—the process of iteratively updating our beliefs to achieve coherence between our various attitudes towards particular cases and general principles. Defenders of this approach would be right to point out that neither hedonism nor desire theories claim to use any further methods: they just happen to end up with a single component of well-being while objective list theories end up with many.

3. Practical Implications

Do advocates of different theories disagree about how we can use our resources most effectively to improve people’s lives? The good news is that uncertainty about these theoretical questions does not seem to preclude reaching some understanding of how to improve well-being in practice. This is because philosophical theories of well-being often converge on which things lead to well-being: the person who is happy, successful, wise, and loved will have a high well-being life on all plausible theories. Determining the extent of practical disagreement between these theories is a further, empirical challenge. To answer it, we first need to determine valid measures for each theory being considered and then investigate the extent to which they differ in the world as it is. 

Our current understanding of the extent to which different theories of well-being generate different priorities is far from complete. The area where we perhaps know most is the practical difference between hedonism and the ‘global’ desire theory. These theories correspond to well-established measures in the social sciences: momentary affect (happiness) and life satisfaction, respectively. While the two measures tend to agree on what is good or bad for people, some changes affect one measure more than the other. For instance, a high income has a larger impact on life satisfaction than on affect (Kahneman & Deaton, 2010; Jebb et al., 2018).

Because these questions influence what projects should be prioritised, further work is clearly required: both theoretical work pointing us toward the most plausible account of well-being and empirical research investigating how each account differs in practice. At the Happier Lives Institute, we conduct both kinds of research in order to find the most effective ways to measure and increase global well-being.

19 comments

Comments sorted by top scores.

comment by Akash · 2020-12-08T05:15:20.577Z · EA(p) · GW(p)

Thank you for sharing this post! It's definitely useful to think about different ways of conceptualizing/measuring well-being. Here's one part of the post I wasn't fully convinced by:

"While life satisfaction theories of well-being are usually understood as distinct from desire theories (Haybron, 2016), life satisfaction might instead be taken as an aggregate of one’s global desires: I am satisfied with my life to the extent that it achieves my overall desires about it."

From a measurement perspective, is there evidence suggesting that people's judgments of life satisfaction are highly correlated with their achievement of overall desires? I would guess that life satisfaction (at least the way it's operationalized on Diener's scale) would only correlate modestly with one's appraisal of specific desires.

Measurement aside, I still think it may be important to distinguish between "life satisfaction" (i.e., an individual's subjective appraisal of how well their life is going—which could be influenced by positive affect, desire fulfillment, or other factors) and "satisfaction of global desires."

The post seems to suggest that "satisfaction of global desires" should be equated with "life satisfaction." I disagree. It seems like having a construct that refers to "an individual's subjective appraisal of their life" is useful, and it seems like people are currently using the term "life satisfaction" to refer to this. Perhaps a new term could be created to refer to "satisfaction of global desires" (for instance, maybe we would call this "objective life satisfaction" as opposed to "subjective life satisfaction", which is what popular life satisfaction scales currently measure).

Replies from: MichaelPlant
comment by MichaelPlant · 2020-12-08T14:24:32.191Z · EA(p) · GW(p)

Hello Akash, thanks for this!

One thing you could test, as an empirical matter, would be to ask people to break their life down into various domains (e.g. health, wealth, relationships, etc.), get them to score those, then have them assign weights to each domain, so as to create an overall score. This would be their satisfaction of global desires.

You could then compare this with their single judgement of life satisfaction.

I don't see why this would be particularly interesting though, and I can't think why the two scores would be different except due to user error. It's not at all clear what life satisfaction is supposed to be if not the aggregate of one's global desires. I discuss this further in my working paper which is linked to on the blog post. 

Replies from: Akash
comment by Akash · 2020-12-10T19:32:02.872Z · EA(p) · GW(p)

Thank you, Michael! I think this hypothetical is useful & makes the topic easier to discuss.

Short question: What do you mean by "user error?" 

Longer version of the question:

Let's assume that I fill out weights for the various categories of desire (e.g., health, wealth, relationships) & my satisfaction in each of those areas.

Then, let's say you erase that experience from my mind, and then you ask me to rate my global life satisfaction.

Let's now assume there was a modest difference between the two ratings. It is not instinctively clear to me why I should prefer judgment #1 to judgment #2. That is, I think it's an open question whether the "desire-based life satisfaction judgment" or the "desire-free life satisfaction judgment" is the more "valid" response.

To me, "user error" could mean several things:

  • The "desire-free" judgment is flawed because the user is not thinking holistically enough or reflecting enough. They are not thinking carefully about what they care about & how those things have actually went. 
  • The "desire-based" judgment is flawed because the list of desires misses some things that the user actually finds important (i.e., it's impossible to create a comprehensive list)
  • The "desire-based" judgment is flawed because the user is not assigning weights properly (i.e., I might report that wealth matters twice as much to my life satisfaction than friendship, but I might be misperceiving my true preferences, which are better reflected in the "desire-free" case).

In other words, if we could eliminate these forms of user error, I would probably agree with you that this distinction is arbitrary. In practice, though, I think these "desire-based" and "desire-free" versions of life satisfaction ought to be considered distinct (albeit I'd expect them to be modestly correlated). I also don't think it's clear to me that the "desire-based" judgment should be considered better (i.e., more valid). And even if it should be considered better, I think I'd still want to know about the

Furthermore, when making decisions, I would probably want to see both judgments. For example, let's assume:

  • Intervention A improves "desire-based life satisfaction judgments" by 15% and "desire-free life satisfaction judgments" by 5%
  • Intervention B improves "desire-based life satisfaction judgments" by 10% and "desire-free life satisfaction judgments" by 10%
  • Intervention C improves "desire-based life satisfaction judgments" by 15% and "desire-free life satisfaction judgments" by 15%.

I would prefer Intervention C over intervention A, even though they both improve "desire-based satisfaction judgments" by the same amount.  I also think reasonable people would disagree when comparing Intervention A to Intervention B.

For these reasons, I wonder if it's practically useful to consider "desire-based" and "desire-free" life satisfactions as separate constructs.

comment by RobertDaoust · 2020-12-07T14:50:31.882Z · EA(p) · GW(p)

Quick thoughts. The goal of effective altruism ought to be based on something more precise than the good of others defined as "well-being" because nothing is intrinsically or non-instrumentally good for a sentient entity when qualia depend on each other for having any value/meaning.  As to prioritization, the largest common goal ought to be the alleviation of suffering, not because suffering is bad in itself but because we agree much more on what we don't want than on what we want, and the latter can be much more easily subordinated to the former than the contrary. 

Replies from: MichaelPlant, HStencil
comment by MichaelPlant · 2020-12-08T13:23:26.655Z · EA(p) · GW(p)

I'm not quite sure I understand what you mean. My experiences have no value unless there is another experiencer in the world? If I'm the last person on Earth and I stub my toe, I think that's bad because it's bad for me, that is, it reduces my well-being.

Also, given your concerns, you'll need to define suffering in a way that is distinct from well-being. If I think suffering is just negative well-being - aka 'ill-being' - then your concerns about well-being apply to suffering too.

Also also, if suffering isn't intrinsically bad, in what sense is it bad?

Finally, I note that all of these concerns are about the value of well-being in a moral theory, which is a distinct question from what this post tackles, which is just what the theories of well-being are. One could (implausibly) say well-being had no moral value (which is, I suppose, almost what impersonal views of value do say...).

Replies from: RobertDaoust
comment by RobertDaoust · 2020-12-08T17:39:21.302Z · EA(p) · GW(p)

Thanks, Michael, for your reaction. Clearly, "qualia depend on each other for having any value/meaning" is a too short sentence to be readily understood. I meant that if consciousness or sentience are made up of qualia, i.e. meaningful and (dis)valuable elementary contents of experience, then each of those qualia has no value/meaning except inasmuch as it relates to other qualia: nothing is (dis)valuable by itself, qualia depend on each other... In other words, one "quale" has a conscious value or meaning only when it is within a psychoneural circuit that necessarily counts several qualia, as it may be illustrated in the next paragraph. 

Thus, suffering is not intrinsically bad. Badness here may have two distinct senses:  unpleasant in an affective sense, or wrong in a moral sense. Both senses depend on other concomitant qualia than suffering itself to take on their value and meaning. For instance, stubbing your toe might not be unpleasant if you are rushing to save your baby from the flames, whilst it may be quite unpleasant if you are going to bed for sleeping... Or a very unpleasant occurrence of suffering like being whipped might be morally right if you feel that it is deserved and formative, whilst it may be utterly wrong in other circumstances...

Replies from: MichaelPlant
comment by MichaelPlant · 2020-12-09T11:48:51.900Z · EA(p) · GW(p)

Sorry, I really don't follow your point in the first para. 

One thing to say is that experiences of suffering are pro tanto bad (bad 'as far as it goes'). So stubbing your toe is bad, but this may be accompanied by another sensation such that overall you feel good. But the toe stubbing is still pro tanto bad.

Anyway, like I said, none of this is directly relevant to the post itself!

Replies from: RobertDaoust
comment by RobertDaoust · 2020-12-10T15:54:51.133Z · EA(p) · GW(p)

Okay, I realize that the relevance of neuroscience to the philosophy of well-being can hardly be made explicit in sufficient detail at the level of an introduction. That is unfortunate, if only for our mutual understanding because, with enough attention to details, the toe-stubbing example that I used would not be understood as you do: if it is not unpleasant to stub your toe, how can it be bad, pro tanto or otherwise?

Replies from: MichaelPlant
comment by MichaelPlant · 2020-12-10T17:46:48.318Z · EA(p) · GW(p)

I think we may well be speaking past each other somewhat. In my example, I took it the toe stubbing was unpleasant, and I don't see any problem in saying the toe stubbing is unpleasant but I am simultaneously experiencing other things such that I feel pleasure overall.

The usual case people discuss here is "how can BDSM be pleasant if it involves pain?" and the answer is to distinguish between bodily pain in certain areas vs a cognitive feeling of pleasure overall resulting from feeling bodily pain.

Replies from: RobertDaoust
comment by RobertDaoust · 2020-12-10T18:30:57.714Z · EA(p) · GW(p)

We may sympathize in the face of such difficulties. Terminology is a big problem when speaking about suffering in the absence of a systematic discipline dealing with suffering itself. That's another reason why the philosophy of well-being is fraught with traps and why I suggest the alleviation of suffering as the most effective first goal. 

comment by HStencil · 2020-12-08T01:46:29.368Z · EA(p) · GW(p)

It’s not clear to me how one can believe 1) that there is nothing that ultimately explains what makes a person’s life go well for them, and 2) that we have an overriding moral reason to alleviate suffering. It would seem dangerously close to believing that we have an overriding moral reason to alleviate suffering in spite of the fact that it is not Bad for those who experience it. You might claim that suffering is instrumentally bad, that it makes it harder to achieve... whatever one wants to achieve, but presumably, if achieving whatever one wants to achieve is valuable, it is valuable because of the way in which it leads one’s life to “go well.” If that is the case, then you have a theory of well-being. If, on the other hand, achieving whatever one wants to achieve is not valuable in any absolute sense, then it is hard to say why it would be valuable at all, and you, again, would struggle to justify why suffering is a bad.

Replies from: RobertDaoust
comment by RobertDaoust · 2020-12-08T17:55:26.711Z · EA(p) · GW(p)

Is there anyone who believes 1) and 2)?

Replies from: HStencil
comment by HStencil · 2020-12-08T18:25:16.461Z · EA(p) · GW(p)

I’m not sure, but it seemed to me that this was the view that you were defending in your original comment. Based on this comment, I take it that this is not, in fact, your view. Could you clarify which premise you reject, 1) or 2)?

Replies from: RobertDaoust
comment by RobertDaoust · 2020-12-08T21:18:08.683Z · EA(p) · GW(p)

Hmm... 1) When an individual's life is evaluated as good or bad there may be an ultimate reason that is invoked to explain it, but I would not say that an ultimate reason has an intrinsic value: it is just valued as more fundamental than others, in the current thinking scheme of the evaluating entity. 2)  Do we have an overriding moral reason to alleviate suffering?  In certain circumstances, yes, like if there is an eternal hell we ought to end it if we can. But in general, no,  I don't think morality is paramount:  it surely counts but many other things also count, more or less depending on the circumstances. I personally am concerned with the alleviation of suffering because it is a branch of activity that fits with my profile as a worker. But if I suggest that effective altruists should prioritize the alleviation of suffering, it is because that's ultimately the most effective thing that we can do in terms of our current capacity to act together for a common purpose, this being the case whether that purpose is morally good or not.

Replies from: HStencil
comment by HStencil · 2020-12-08T22:47:09.610Z · EA(p) · GW(p)

I suspect there may be too much inferential distance between your perspective on normative theory and my own for me to explain my view on this clearly, but I will try. To start, I find it very difficult to understand why someone would endorse doing something merely because it is “effective” without regard for what it is effective at. The most effective way of going about committing arson may be with gasoline, but surely we would not therefore recommend using gasoline to commit arson. Arson is not something we want people to be effective at! I think that if effective altruism is to make any sense, it must presuppose that its aims are worth pursuing.

Similarly, I disagree with your contention that morality isn't, as you put it, paramount. I do not think that morality exists in a special normative domain, isolated far away from concerns of prudence or instrumental reason. I think moral principles follow directly from the principle of instrumental reason, and there is no metaphysical distinction between moral reasons and other practical reasons. They are all just considerations that bear on our choices. Accordingly, the only sensible understanding of what it means to say that something is morally best is: “It is what one ought to do,” (I am skeptical of the idea of supererogation). It is a practical contradiction to say, “X is what I ought to do, but I will not do it,” in the same way that it is a theoretical contradiction to say, “It is not raining, but I believe it’s raining.” Hopefully, this clarifies how confounding I find the perspective that EA should prioritize alleviating suffering regardless of whether or not doing so is morally good, as you put it (which is surely a lower bar than morally best). To me, that sounds like saying, “EA should do X regardless of whether or not EA should do X.”

Regarding the idea of intrinsic value, I think what Fin, Michael et al. meant by “X has intrinsic value” is “X is valuable for its own sake, not for the sake of any further end or moral good.” This is the conventional understanding of what “intrinsic value” means in academic philosophy. Under this definition, if there is an ultimate reason that in fact explains why an individual’s life is Good or Bad, then that reason must, by virtue of the logical properties of the concepts in that sentence, have grounding in some kind of intrinsic value. But I think your argument is actually that there isn’t anything that in fact explains why an individual’s life is Good or Bad. In this case, however, I do not think it is possible to justify why we could ever have an overriding moral reason to do anything, including to eliminate an eternal Hell, as we could not justify why that Hell was Bad for those individuals who were stuck inside. 

If you wanted to justify why that Hell was bad for those stuck inside, and you were committed to the notion that the structure of value must be determined by the subjective, evaluative judgments of people (or animals, etc.), you would wind up—deliberately or not—endorsing a “desire-based theory of wellbeing,” like one of those described in this forum post. However, as a note of caution, in order to believe that the structure of value is determined entirely by people’s subjective, evaluative judgments, probably as expressed through their preferences (on some understanding of what a preference is), you would have to consider those judgments to be ultimately without justification. Either I prefer X to Y because X is relevantly better than Y, or I prefer X to Y without justification, and there are no absolute, universal facts about what one should prefer. I think there are facts about what one should prefer and so steer clear of such theories.

Replies from: RobertDaoust
comment by RobertDaoust · 2020-12-09T01:08:30.820Z · EA(p) · GW(p)

Excellent response! I'll think about it and come back to let you know my thoughts, if you will. 

Replies from: HStencil
comment by HStencil · 2020-12-09T01:32:14.552Z · EA(p) · GW(p)

Thank you — please do!

Replies from: RobertDaoust
comment by RobertDaoust · 2020-12-09T18:21:56.451Z · EA(p) · GW(p)

Inferential distance makes discussion hard indeed. Let’s try to go first to this focal point: what ultimate goal is the best for effective altruists. The answer cannot be found only by reasoning, it requires a collective decision based on shared values. Some prefer the goal of having a framework for thinking and acting with effectiveness in altruistic endeavors. You and I would not be satisfied with that because altruism has no clear content: your altruistic endeavor may go against mine (examples may be provided on demand). Some, then, realizing the necessity of defining altruism, appeal to well-being as what must be promoted for benefitting others (with everyone’s wellbeing counting equally, says MacAskill). That is pretty good, except that what constitutes well-being remains in question, as the opening post here states. Your conception of well-being may go against mine (examples may be provided on demand). Some, at last, realizing the necessity of a more precise goal, though of course not perfect, suggest that prioritizing the alleviation of suffering is the effective altruistic endeavor par excellence. 

I am arguing against well-being and for suffering-alleviation as ultimately the best goal for effective altruists. 

Now, one of my arguments is simply that suffering-alleviation is better than well-being because “that's ultimately the most effective thing that we can do in terms of our current capacity to act together for a common purpose, this being the case whether that purpose is morally good or not.”, and as I wrote in a previous comment “As to prioritization, the largest common goal ought to be the alleviation of suffering, not because suffering is bad in itself but because we agree much more on what we don't want than on what we want, and the latter can be much more easily subordinated to the former than the contrary.” You invoke that the end goal is more important than such considerations. You are right, of course, but you seem to have overlooked that in order to specify that I was speaking only about effectiveness, I added “this being the case whether that purpose is morally good or not.” 

The same applies, I think, when you say “Hopefully, this clarifies how confounding I find the perspective that EA should prioritize alleviating suffering regardless of whether or not doing so is morally good, as you put it (which is surely a lower bar than morally best).” 

Our views differ on morality being paramount or not. If your profession is ethicist, or if you are a highly virtuous person, I understand that morality may be paramount for you, and I think such persons are often useful. But personally, notwithstanding practical contradiction, I might go against any ought for various reasons, for instance just because I do not believe that my moral judgment is always right. More importantly, I think that science, and especially the science of suffering, is not subordinated to ethics or any other sphere of human activity, except in very exceptional circumstances.  

Finally, perhaps we might ease our discussion by clarifying the following. You wrote “But I think your argument is actually that there isn’t anything that in fact explains why an individual’s life is Good or Bad.” Well, I believe, as you put it, that “the structure of value must be determined by the subjective, evaluative judgments of people”. I cannot see why you add “you would have to consider those judgments to be ultimately without justification.” Is it because you consider that a subjective judgment is not a fact? Do you really think that “there are no absolute, universal facts about what one should prefer” if there are only subjective preferences? If I actually prefer yellow to red, is it not an absolute, universal fact about what I should prefer? Collectively it is more complicated... we are not alone and no two preferences are ever exactly the same... so it seems that we can come to collective preferential judgments that are ultimately justified, but not absolute and not based on universal facts.

Thanks for your interaction.

comment by antimonyanthony · 2020-12-11T00:28:46.773Z · EA(p) · GW(p)

A fourth alternative that may be appealing to those who don't find any of these three theories completely satisfying: tranquilism.

Tranquilism states that an individual experiential moment is as good as it can be for her if and only if she has no craving for change.

(You could argue this is a subset of hedonism, in that it is fundamentally concerned with experiences, but there are important differences.)