Posts

MagnusVinding's Shortform 2020-09-25T17:25:14.822Z
New book — "Suffering-Focused Ethics: Defense and Implications" 2020-05-31T09:27:58.428Z
Tips for overcoming low back pain 2020-03-24T16:41:24.383Z

Comments

Comment by magnusvinding on MagnusVinding's Shortform · 2020-09-25T17:25:15.280Z · EA · GW

An argument in favor of (fanatical) short-termism?

[Warning: potentially crazy-making idea.]

Section 5 in Guth, 2007 presents an interesting, if unsettling, idea: on some inflationary models, new universes continuously emerge at an enormous rate, which in turn means (maybe?) that the grander ensemble of pocket universes consists disproportionately of young universes.

More precisely, Guth writes that, "in each second the number of pocket universes that exist is multiplied by a factor of exp{10^37}." Thus, naively, we should expect earlier points in a given pocket universe's timeline to vastly outnumber later points — by a factor of exp{10^37} per second!
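To spell out the arithmetic behind this weighting (a minimal sketch of my own, assuming the growth law just quoted; it is not a calculation taken from Guth's paper): if the ensemble grows as $N(T) \propto e^{10^{37}\,T}$ (with $T$ in seconds), then at any fixed ensemble time $T$ the number of pocket universes of age $a$ is proportional to $e^{10^{37}(T-a)}$, and hence

$$\frac{N_{\text{age}=a}(T)}{N_{\text{age}=a+\Delta a}(T)} = e^{10^{37}\,\Delta a}.$$

For $\Delta a = 1$ second, this is the factor of exp{10^37} mentioned above: moments of a given age outnumber moments just one second older by that amount.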

(A potentially useful way to visualize the picture Guth draws is in terms of a branching tree, where for each older branch, there are many more young ones, and this keeps being true as the new, young branches grow and spawn new branches.)

If this were true, or even if there were a far weaker universe generation process to this effect (say, one that multiplied the number of pocket universes by two for each year or decade), it would seem that we should, for acausal reasons, mostly prioritize the short-term future (perhaps even the very short-term future).

Guth tentatively speculates that this could be a resolution of sorts to the Fermi paradox, though he also notes that he is skeptical of the framework that motivates his discussion:

Perhaps this argument explains why SETI has not found any signals from alien civilizations [because if there were an earlier civ at our stage, we would be far more likely to be in that civ], but I find it more plausible that it is merely a symptom that the synchronous gauge probability distribution is not the right one.

I'm not claiming that the picture Guth outlines is likely to be correct. It's highly speculative, as he himself hints, and there are potentially many ways to avoid it — for example, contra Guth's preferred model, it may be that inflation eventually stops, cf. Hawking & Hertog, 2018, and thus that each point in a pocket universe's timeline will have equal density in the end; or it might be that inflationary models are not actually right after all.

That said, one could still argue that the implication Guth explores — which is potentially a consequence of a wide variety of (eternal) inflationary models — is a weak reason, among many other reasons, to give more weight to short-term stuff (after all, in EV terms, the enormous rate of universe generation suggested by Guth would mean that even extremely small credences in something like his framework could still be significant). And perhaps it's also a weak reason to update in favor of thinking that as yet unknown unknowns will favor a short(er)-term priority to a greater extent than we had hitherto expected, cf. Brian Tomasik's discussion of how we might model unknown unknowns.
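(To illustrate the expected-value point with a toy calculation of my own, assuming the weighting factor above and a forced choice between producing an equally sized benefit one second earlier or one second later: with credence $\varepsilon$ in something like Guth's picture, the naive comparison is

$$\varepsilon \cdot e^{10^{37}} \quad\text{versus}\quad (1-\varepsilon)\cdot 1,$$

and the earlier benefit wins whenever $\varepsilon > 1/(1 + e^{10^{37}}) \approx e^{-10^{37}}$, i.e. for any credence that is not itself astronomically small. Whether one should follow this kind of naive expected-value reasoning is, of course, exactly what the word "fanatical" above flags.)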

Comment by magnusvinding on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-10T18:03:29.526Z · EA · GW

Concerning how EA views on this compare to the views of the general population, I suspect they aren’t all that different. Two bits of weak evidence:

I.

Brian Tomasik did a small, admittedly unrepresentative and imperfect Mechanical Turk survey in which he asked people the following:

At the end of your life, you'll get an additional X years of happy, youthful, and interesting life if you first agree to be covered in gasoline and burned in flames for one minute. How big would X have to be before you'd accept the deal?

More than 40 percent said that they would not accept it “regardless of how many extra years of life” they would get (see the link for some discussion of possible problems with the survey).

II.

The Future of Life Institute did a Superintelligence survey in which they asked, “What should a future civilization strive for?” A clear plurality (roughly a third) answered “minimize suffering” — a rather different question, to be sure, but it does suggest that a strong emphasis on reducing suffering is very common.

1. Do you know about any good articles etc. that make the case for such views?

I’ve tried to defend such views in chapters 4 and 5 here (with replies to some objections in chapter 8). Brian Tomasik has outlined such a view here and here.

But many authors have in fact defended such views about extreme suffering. Among them are Ingemar Hedenius (see Knutsson, 2019); Ohlsson, 1979 (review); Mendola, 1990; 2006; Mayerfeld, 1999, p. 148, p. 178; Ryder, 2001; Leighton, 2011, ch. 9; Gloor, 2016, II.

And many more have defended views according to which happiness and suffering are, as it were, morally orthogonal.

2. Do you think such or similar views are necessary to prioritize S-Risks?

As Tobias said: No. Many other views can support such a priority. Some of them are reviewed in chapters 1, 6, and 14 here.

3. Do you think most people would/should vote in such a way if they had enough time to consider the arguments?

I say a bit on this in footnote 23 in chapter 1 and in section 4.5 here.

4. For me it seems like people constantly trade happiness for suffering ... Those are reasons for me to believe that most people ... are also far from expecting 1:10^17 returns or even stating there is no return which potentially could compensate any kind of suffering.

Many things to say on this. First, as Tobias hinted, acceptable intrapersonal tradeoffs cannot necessarily be generalized to moral interpersonal ones (cf. sections 3.2 and 6.4 here). Second, there is the point Jonas made, which is discussed a bit in section 2.4 in ibid. Third, tradeoffs concerning mild forms of suffering that a person agrees to undergo do not necessarily say much about tradeoffs concerning states of extreme suffering that the sufferer finds unbearable and is unable to consent to (e.g. one may endorse lexicality between very mild and very intense suffering, cf. Klocksiem, 2016, or think that voluntarily endured suffering occupies a different moral dimension than does suffering that is unbearable and which cannot be voluntarily endured). More considerations of this sort are reviewed in section 14.3, “The Astronomical Atrocity Problem”, here.

Comment by magnusvinding on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-10T11:01:56.343Z · EA · GW

[Warning: potentially disturbing discussion of suicide and extreme suffering.]

I agree with many of the points made by Anthony. It is important to control for these other confounding factors, and to make clear in this thought experiment that the person in question cannot reduce more suffering for others, and that the suicide would cause less suffering in expectation (which is plausibly false in the real world, also considering the potential for suicide attempts to go horribly wrong, Humphry, 1991, “Bizarre ways to die”). (So to be clear, and as hinted by Jonas, even given pure NU, trying to commit suicide is likely very bad in most cases, Vinding, 2020, 8.2.)

Another point one may raise is that our intuitions cannot necessarily be trusted when it comes to these issues, e.g. because we have an optimism bias (which suggests that we may, at an intuitive level, wholly disregard these tail risks); because we evolved to prefer existence almost no matter the (expected) costs (Vinding, 2020, 7.11); and because we intuitively have a very poor sense of how bad the states of suffering in question are (cf. ibid., 8.12).

Intuitions also differ on this matter. One EA told me that he thinks we are absolutely crazy for staying alive (disregarding our potential to reduce suffering), especially since we have no off-switch in case things go terribly wrong. This may be a reason to be less sure of one's immediate intuitions on this matter, regardless of what those intuitions might be.

I also think it is important to highlight, as Tobias does, that there are many alternative views that can accommodate the intuition that the suicide in question would be bad, apart from a symmetry between happiness and suffering, or upside-focused views more generally. For example, there is a wide variety of harm-focused views, including but not restricted to negative consequentialist views in particular, that will deem such a suicide bad, and they may do so for many different reasons, e.g. because they consider one or more of the following an even greater harm (in expectation) than the expected suffering averted: the frustration of preferences, premature death, lost potential, the loss of hard-won knowledge, etc. (I say a bit more about this here and here.)

Relatedly, one should be careful about drawing overly general conclusions from this case. For example, the case of suicide does not necessarily say much about different population-ethical views, nor about the moral importance of creating happiness vs. reducing suffering in general. After all, as Tobias notes, quite a number of views will say that premature deaths are mostly bad while still endorsing the Asymmetry in population ethics, e.g. due to conditional interests (St. Jules, 2019; Frick, 2020). And some views that reject a symmetry between suffering and happiness will still consider death very bad on the basis of pluralist moral values (cf. Wolf, 1997, VIII; Mayerfeld, 1996, “Life and Death”; 1999, p. 160; Gloor, 2017; 1, 4.3, 5).

Similar points can be made about intra- vs. interpersonal tradeoffs: one may think that it is acceptable to risk extreme suffering for oneself without thinking that it is acceptable to expose others to such a risk for the sake of creating a positive good for them, such as happiness (Shiffrin, 1999; Ryder, 2001; Benatar & Wasserman, 2015, “The Risk of Serious Harm”; Harnad, 2016; Vinding, 2020, 3.2).

(Edit: And note that a purely welfarist view entailing a moral symmetry between happiness and suffering would actually be a rather fragile basis on which to rest the intuition in question, since it would imply that people should painlessly end their lives if their expected future well-being were just below "hedonic zero", even if they very much wanted to keep on living (e.g. because of a strong drive to accomplish a given goal). Another counterintuitive theoretical implication of such a view is that one would be obliged to end one's life, even in the most excruciating way, if it in turn created a new, sufficiently happy being, cf. the replacement argument discussed in Jamieson, 1984; Pluhar, 1990. I believe many would find these implications implausible as well, even on a purely theoretical level, suggesting that what is counterintuitive here is the complete reliance on a purely welfarist view — not necessarily the focus on reducing suffering over increasing happiness.)

Comment by magnusvinding on The case of the missing cause prioritisation research · 2020-08-28T22:07:42.797Z · EA · GW

Thanks for writing this post! :-)

Two points:

i. On how we think about cause prioritization, and what comes before

2. Consideration of different views and ethics and how this affects what causes might be most important.

It’s not quite clear to me what this means. But it seems related to a broader point that I think is generally under-appreciated, or at least rarely acknowledged, namely that cause prioritization is highly value-relative.

The causes and interventions that are optimal relative to one value system are unlikely to be optimal relative to another value system (which isn't to say that there aren't some causes and interventions that are robustly good on many different value systems, as there plausibly are, and identifying novel such causes and interventions would be a great win for everyone; but it is also commensurately difficult to identify new such causes and to have much confidence in them, given both our great empirical uncertainty and the tight constraint of having to be good across many value systems).

I think it makes sense that people do cause prioritization based on the values, or the rough class of values, that they find most plausible. Provided, of course, that those values have been reflected on quite carefully in the first place, and scrutinized in light of the strongest counterarguments and alternative views on offer.

This is where I see a somewhat mysterious gap in EA, more fundamental and even more gaping than the cause prioritization gap highlighted here: there is surprisingly little reflection on and discussion of values (something I also noted in this post, along with some speculations as to what might explain this gap).

After all, cause prioritization depends crucially on the fundamental values based on which one is trying to prioritize (a crude illustration), so this is, in a sense, the very first step on the path toward thoroughly reasoned cause prioritization.

ii. On the apparent lack of progress

As hinted in Zoe's post, it seems that much (most?) cutting edge cause prioritization research is found in non-public documents these days, which makes it appear like there is much less research than there in fact is.

This is admittedly problematic in that it makes it difficult to get good critiques of the research in question, especially from skeptical outsiders, and it also makes it difficult for outsiders to know what in fact animates the priorities of different EA agents and orgs. It may well be that it is best to keep most research secret, all things considered, but I think it’s worth being transparent about the fact that there is a lot that is non-public, and that this does pose problems, in various ways, including epistemically.

Comment by magnusvinding on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-20T21:43:18.195Z · EA · GW
The way I think about it, when I'm suffering, this is my brain subjectively "disvaluing" (in the sense of wanting to end or change it) the state it's currently in.

This is where I see a dualism of sorts, at least in the way it's phrased. There is the brain disvaluing (as an evaluating subject) the state it's in (where this state is conceived of as an evaluated object of sorts). But the way I think about it, there is just the state your mind-brain is in, and the disvaluing is part of that mind-brain state. (What else could it be?)

This may just seem semantic, but I think it's key: the disvaluing, or sense of disvalue, is intrinsic to that state. It relates back to your statement that reality simply is, and interpretation adds something to it. To which I'd still say that interpretations, including disvaluing in particular, are integral parts of reality. They are intrinsic to the subset of reality that is our mind-brains.

This is not the same as saying that there exists a state of the world that is objectively to be disvalued.

I think it's worth clarifying what the term "objectively" means here. Cf. my point above, I think it's true to say that there is a state of the world that is disvalued, and hence disvaluable according to that state itself. And this is true no matter where in the universe this state is instantiated. In this sense, it is objectively (i.e. universally) disvaluable. And I don't think things change when we introduce "other" individuals into the picture, as we discussed in the comments on your first post in this sequence (I also defended this view at greater length in the second part of my book You Are Them).

I talk about notions like 'life goals' (which sort of consequentialist am I?), 'integrity' (what type of person do I want to be?), 'cooperation/respect' (how do I think of the relation between my life goals and other people's life goals?), 'reflective equilibrium' (part of philosophical methodology), 'valuing reflection' (the anti-realist notion of normative uncertainty), etc.

Ah, I think we've talked a bit past each other here. My question about bedrock concepts was mostly about why you would question them in general (as you seem to do in the text), and what you think the alternative is. For example, it seems to me that the notions you consider foundational in your ethical perspective in particular do in turn rest on bedrock concepts that you can't really explain more reductively, i.e. with anything but synonymous concepts ("goals" arguably being an example).

From one of your replies to MichaelA:

I should have chosen a more nuanced framing in my comment. Instead of saying, "Sure, we can agree about that," the anti-realist should have said "Sure, that seems like a reasonable way to use words. I'm happy to go along with using moral terms like 'worse' or 'better' in ways where this is universally considered self-evident. But it seems to me that you think you are also saying that for every moral question, there's a single correct answer [...]"

It seems to me your conception of moral realism conflates two separate issues:

1. Whether there is such a thing as (truly) morally significant states, and

2. Whether there is a single correct answer for every moral question.

I think these are very different questions, and an affirmative answer to the former need not imply an affirmative answer to the latter. That is, one can be a realist about 1. while being a non-realist about 2.

For example, one can plausibly maintain that a given state of suffering is intrinsically bad and ought not exist without thinking that there is a clear answer, even in principle, concerning whether it is more important to alleviate this state or some other state of similarly severe suffering. As Jamie Mayerfeld notes, even if we think states of suffering occupy a continuum of (genuine) moral importance, the location of any given state of suffering on this continuum "may not be a precise point" (Mayerfeld, 1999, p. 29). Thus, one can be a moral realist and still embrace vagueness in many ways.

I think it would be good if this distinction were more clear in this discussion, and if these different varieties of realism were acknowledged. After all, you seem quite sympathetic to some of them yourself.

Comment by magnusvinding on New book — "Suffering-Focused Ethics: Defense and Implications" · 2020-06-20T20:31:15.650Z · EA · GW

Thanks for sharing your reflections :-)

This is because of imagining and seeing examples as in the book and here.

Just wanted to add a couple of extra references like this:

The Seriousness of Suffering: Supplement

The Horror of Suffering

Preventing Extreme Suffering Has Moral Priority

To be more specific, I think that one second of the most extreme suffering (without subsequent consequences) would be better than, say, a broken leg.

Just want to note, also for other readers, that I say a bit about such sentiments involving "one second of the most extreme suffering" in section 8.12 in my book. One point I make is that our intuitions about a single second of extreme suffering may not be reliable. For example, we probably tend not to assign great significance, intuitively, to any amount of one-second long chunks of experience. This is a reason to think that the intuition that one second of extreme suffering can't matter that much may not say all that much about extreme suffering in particular.

If that holds, than any extreme suffering can be overcome by mild suffering.

I think this is a little too quick, at least in the way you've phrased it. A broken leg hardly results in merely mild suffering, at least by any common definition. And a lexical threshold has, for example, been defended between "mere discomfort" and "genuine pain" (see Klocksiem, 2016), where a broken leg would clearly entail the latter.

There are also other reasons why this argument (i.e. "one second of extreme suffering can be outweighed by mild suffering, hence any amount of extreme suffering can") isn't valid.

Note also that even if one thinks that aggregates of milder forms of suffering can be more important than extreme suffering in principle, one may still hold that extreme suffering dominates profusely in practice, given its prevalence.

Now, many people would trade mild tradeoff for other things they hold important.

I just want to flag here that the examples you give seem to be intrapersonal ones, and the permissibility of intrapersonal tradeoffs like these (which is widely endorsed) does not imply the permissibility of similar tradeoffs in the interpersonal case (which more people would reject, and which there are many arguments against, cf. chapter 3).

The following is neither a request nor a complaint, but in relation to the positions you express, I see little in the way of counterarguments to, or engagement with, the arguments I've put forth in my book, such as those in chapters 3 and 4. In other words, I don't really see the arguments I present in my book addressed here (to be clear, I'm not claiming you set out to do that), and I'm still keen to see some replies to them.

Comment by magnusvinding on New book — "Suffering-Focused Ethics: Defense and Implications" · 2020-06-16T18:26:06.214Z · EA · GW

Thanks for your comment. I appreciate it! :-)

In relation to counterintuitions and counterarguments, I can honestly say that I've spent a lot of time searching for good ones, and tried to include as many as I could in a charitable way (especially in chapter 8).

I'm still keen to find more opposing arguments and intuitions, and to see them explored in depth. As hinted in the post, I hope my book can provoke people to reflect on these issues and to present the strongest case for their views, which I'd really like to see. I believe such arguments can help advance the views of all of us toward greater levels of nuance and sophistication.

Comment by magnusvinding on New book — "Suffering-Focused Ethics: Defense and Implications" · 2020-06-14T18:24:07.716Z · EA · GW

Thanks for your comment, Michael :-)

What I was keen to get an example of was mainly this (omitted in the text you quoted above):

Also, whenever there was a problem with an argument, Magnus can retreat to a less demanding version of Suffering-Focused Ethics, which makes it more difficult for the reader to follow the arguments.

That is, an example of how I retreat from the main position I defend (in chapters 4 and 5), such as by relying on the views of other philosophers whose premises I haven't defended. I don't believe I do that anywhere. Again, what I do in some places is simply to show that there are other kinds of suffering-focused views one may hold; I don't retreat from the view I in fact hold.

It's true that I do mention the views of many different philosophers, and note how their views support suffering-focused views, and in some cases I merely identify the moral axioms, if you will, underlying these views. I then leave it to the reader to decide whether these axioms are plausible (this is a way in which I in fact do explain/present views rather than try to "persuade"; chapter 2 is very similar, in that it also presents a lot of views in this way).

It seems that Shiffrin and Parfit did, for example, consider their respective principles rather axiomatic, and provided little to no justification for them (indeed, Parfit considered his compensation principle "clearly true", https://web.archive.org/web/20190410204154/https://jwcwolf.public.iastate.edu/Papers/JUPE.HTM ). Mill's principle was merely mentioned as one that "can be considered congruent" with a conclusion I argued for; I didn't rely on it to defend the conclusion in question.

Comment by magnusvinding on New book — "Suffering-Focused Ethics: Defense and Implications" · 2020-06-11T12:34:41.095Z · EA · GW

Thanks for sharing your review. A few comments:

Concerning the definition of suffering, I do actually provide a definition: an overall bad feeling, or state of consciousness (as I note, I here follow Mayerfeld, 1999, pp. 14-15). One may argue that this is not a particularly reductive definition, and I say the same in a footnote:

One cannot, I submit, define suffering in more precise or reductive terms than this. For just as one cannot ultimately define the experience of, say, phenomenal redness in any other way than by pointing to it, one cannot define a bad overall feeling, i.e. suffering, in any other way than by pointing to the aspect of consciousness it refers to.

I think that he made a deliberate choice to focus on capturing a wide range of views and defenses instead of going deep into defending one view.

Partly. I would say I both tried to make a broad case and defend a specific view, namely the view(s) I defend in chapters 4 and 5 (they aren't quite identical, but I'd say they are roughly equivalent at the level of normative ethics).


In Chapter 5 Magnus explains his position regarding suffering, but throughout the first part he does not rely on that in order to make a case for suffering focused ethics. Instead, he loads philosophical ammunition from all over the suffering-focused ethics coalition and shoots them at every obstacle in sight.

That's not quite how I see it (though it's true that I don't rely strongly on the meta-ethical view defended in chapter 5). My own view, including chapter 5 in particular, is not really isolated from the arguments I make in the preceding chapters. I see most of the arguments outlined in previous chapters as lending support to the arguments made in chapter 5, and I indeed explicitly cite many of them there.

Many of the arguments are of the form "philosopher X thinks that Y is true", but without appropriate arguments for Y. Also, whenever there was a problem with an argument, Magnus can retreat to a less demanding version of Suffering-Focused Ethics, which makes it more difficult for the reader to follow the arguments.

I'd appreciate some examples (or just one) of this. :-)

I don't think I at any point retreat from the view I defend in chapters 4 and 5. But I do explain how one can hold other suffering-focused views (e.g. pluralist ones, such as those defended by Wolf and Mayerfeld).

My major issue with this book is that it feels heavily biased. I felt that I was being persuaded, not explained to.

I did seek to explain the arguments and considerations that have led me to hold a suffering-focused view, and I do happen to find these arguments persuasive.

I wonder what you think I should have done differently, and whether you can refer me to a book defending a moral view in a way that was more "explaining".

It feels that Magnus offers no major concessions, related to the point above that there is always a line of retreat.

What major concessions do you feel I should make? My view is that it cannot be justified to create purported positive goods at the price of extreme suffering, and it would be dishonest for me to claim that I've found a persuasive argument against this view. But I'm keen to hear any counterargument you find persuasive.


In chapter 7, there are a long list of possible biases that prevent us from accepting Suffering-Focused Ethics.

This is not quite accurate, and I should have made this clearer. :-)

As I say at the beginning of this chapter, I here "present various biases against giving suffering its due moral weight and consideration." This is not the same as (only) presenting biases against suffering-focused moral views in particular. One can be a classical utilitarian and still think that most, perhaps even all, of the biases mentioned in this chapter plausibly bias us against giving sufficient priority to suffering.

For example, a classical utilitarian can agree that we tend to shy away from contemplating suffering (7.2); that we underestimate how bad suffering often is (7.4); that we underestimate and ignore our ability to reduce suffering, in part because of omission bias (7.5); that we have a novelty bias and scope insensitivity (7.6); that we have a perpetrator bias that leads us to dismiss suffering not caused by moral agents (7.7); that the Just World Fallacy leads us to dismiss others' suffering (7.8); that we have a positivity and an optimism bias (7.9); that a craving for certain sources of pleasure, e.g. sex and status, can distort our judgments (7.10); that we have an existence bias — widespread resistance against euthanasia is an example — (7.11); that suffering is a very general phenomenon, which makes it difficult for us to make systematic and effective efforts to prevent it (7.13); etc.

I'd actually say that most of the biases reviewed are not biases against accepting suffering-focused moral views, but rather biases against giving the priority to reducing suffering that the values we already hold would require. I should probably have made this more clear (I say a bit more on this in the second half of section 12.3).

and really the biggest flaw for me was that there was no analogous comparison with possible biases [favoring] Suffering-Based Ethics.

But there was in fact a section on this: 7.15. If you feel I've missed some important considerations, I'm keen to hear about them.

Also, in Chapter 8 Magnus presents many arguments against his views, each a couple of sentences, and spends the majority of the time making counterarguments and half-hearted concessions.

I wonder what you mean by "half-hearted concessions", and why you think they are half-hearted. Also, it's not true that "each [counterargument is] a couple of sentences", even as most are stated very concisely.

Instead of acknowledging reasonable ethical views that may oppose Suffering-Focused Ethics, there is an attempt at convincing the readers that there is still some way of reducing suffering that they should prefer.

As mentioned above, my view is that it cannot be justified to create purported positive goods at the price of extreme suffering. I cannot honestly say that I find views that would have us increase extreme suffering in order to increase, say, pleasure to be reasonable. So again, all I can say is that I'd invite you to present and defend the views that you think I should acknowledge as reasonable.

After reading this book, it is clearer to me that I find extreme suffering very bad

I'm glad to hear that. Helping people clarify their views of the significance of extreme suffering is among the main objectives of the book.

but that in general I tend to think suffering can be outweighted.

This is then where I, apropos your complaint about a lack of "appropriate arguments" for a stated premise, would ask for some arguments: how and why can extreme suffering be outweighed? What counterarguments would you give to the arguments presented in, say, chapters 3 and 4?

Also, I was worried before reading the book that there is an inherent difficulty in cooperation between suffering-focused ethical systems and aspirations for more (happy) people to exist. I still think that's somewhat the case but it is clearer that these differences can be overcome and that one can value both.

Pleased to hear this. The second part of the book should lend even more support to that view. I very much hope we can all cooperate closely rather than fall victim to tribal psychology, as difficult as that can be. As I note in chapter 10, disagreeing on values is arguably a strong catalyst for outgroup perception. Let's resist falling prey to that.

Thanks again for taking the time to read and review the first part of the book. :-)

Comment by magnusvinding on New book — "Suffering-Focused Ethics: Defense and Implications" · 2020-06-11T09:30:27.666Z · EA · GW

Thanks for your question, Niklas. It's an important one.

The following link contains some resources for sustainable activism that I've found useful:

https://magnusvinding.com/2017/12/30/resources-for-sustainable-activism/

More specifically, it may be useful to cultivate compassion — the desire for other beings to be free from suffering — more than (affective) empathy, i.e. actually feeling the feelings of those who suffer.

Here is an informative conversation about it: https://www.youtube.com/watch?v=CJ1SuKOchps

As I write in section 9.5 (see the book for references):

Research suggests that these meditation practices [i.e. compassion and loving-kindness meditation] not only increase compassionate responses to suffering, but that they also help to increase life satisfaction and reduce depressive symptoms for the practitioner, as well as to foster better coping mechanisms and increased positive affect in the face of suffering.
Comment by magnusvinding on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-09T16:56:59.855Z · EA · GW
Normative ethics: There’s a sense in which consequentialist obligations to avoid purchasing meat from factory-farmed animals are “real.” But we could also take a different perspective (according to which morality is about hypothetical contracts between people), in which case we’d see no obligations toward animals.

Realists of course agree that we can take another perspective, and that this can be fruitful, but the crucial issue for the realist is whether one perspective is ultimately more valid, or true, than others (as you hint further down). This is where I think the duck-rabbit analogy breaks down for most realists: no one is tempted to claim that one interpretation of the image is more valid or true than the other; there seems no compelling reason for believing that. But when it comes to ethics, realists maintain that we do have truly compelling reasons to prefer, for example, a view that says "minimize suffering" rather than "maximize suffering". The realist will deny that both are valid "interpretations" of ethics.

According to my anti-realist perspective, reality simply is, but interpretations always add something. Deep down, all interpretations are arbitrary and we can always take on the “stubborn” perspective to say that there’s not even a question that needs to be answered.

I see a duality in such an anti-realist view in that interpretations appear to be thought of as something separate from reality. I think this is a mistake — especially relative to qualia-based moral and value realism — and I think it is a mistake to consider all interpretations arbitrary: it is not arbitrary to consider suffering bad. Indeed, interpretation (broadly defined) is arguably intrinsic to, and in some sense constitutive of, suffering itself (cf. Aydede, 2014).

Of course, if we define reality as everything devoid of interpretive features, and consider all evaluations to be separate from reality, then I'd agree that there obviously are no valid evaluations to be found in "reality". But then I think we're working with a very impoverished conception of reality, and certainly one that many realists about value and ethics would reject. This might be a crux in terms of how people think differently about these things.

I see a similar duality in the dichotomy drawn between "expressions of attitudes" versus "statements of fact" in the outline of non-cognitivism. Many moral realist views (e.g. the views defended in Pearce, 1995; Hewitt, 2008) are all about the "attitudes" (in a broad sense) of sentient beings, and such views consider the possibility space of these "attitudes" a domain of facts. Such views are not really "speaker-dependent", as in merely being about the attitudes of the people who hold these moral views, but are rather about the "attitudes" of all sentient beings. I think you've expressed a similar view/definition of qualia in the past — one that excludes evaluations and preferences, unlike the conception of qualia employed by most of those who defend value or moral realist views based on qualia.


In the context of bedrock concepts, it's not clear to me why such concepts should be considered problematic. After all, what is the alternative? An infinite regress of concepts? A circular loop? Having bedrock concepts seems to me the least problematic — indeed positively plausible — option.

Laying out what constitutes philosophical progress then becomes a bedrock concept as well

I don't see how that follows. Accepting bedrock concepts need not imply that the most plausible conception of philosophical progress will be bedrock.


In relation to anti-realism about consciousness, I think anti-realists often fail to be clear about what they are denying the reality of. I've drawn an analogy to sound (phenomenal experience in general) and music (a coherent, ordered conscious mind in particular). Realists will agree, trivially, that whether we assign a set of sounds (phenomenal experience) the label "music" (conscious) is up to us, and that there is hardly any fact of the matter, yet whether such a thing as sound (phenomenal experience) exists in the first place is undeniable. The defenses I've seen of anti-realism about consciousness are generally very unclear about whether they are denying the reality of "music" or the reality of "sound" altogether. For example:

I’m a consciousness denier, though it’s important to clarify that I’m not denying that I have first-person experiences of the world. I am fully on board with, “I think, therefore I am,” and the notion that you can have 100% confidence in your own first-person experience.

This seems to affirm the reality of phenomenal experience, and suggests that the position defended is just a fairly trivial nominalist view of the concept of consciousness.

Comment by magnusvinding on New book — "Suffering-Focused Ethics: Defense and Implications" · 2020-06-02T22:59:54.187Z · EA · GW

Thanks, Mike!

Great questions. Let me see whether I can do them justice.

If you could change peoples' minds on one thing, what would it be? I.e. what do you find the most frustrating/pernicious/widespread mistake on this topic?

Three important things come to mind:

1. There seems to be this common misconception that if you hold a suffering-focused view, then you will, or at least you should, endorse forms of violence that seem abhorrent to common sense. For example, you should consider it good when people get killed (because it prevents future suffering for them), and you should try to destroy the world. This doesn't follow. For many reasons.

First, one may hold a pluralist view according to which we have a prima facie obligation to reduce suffering, but also, for example, prima facie obligations not to kill and to respect the autonomy of other individuals. Indeed, academics such as Clark Wolf and Jamie Mayerfeld defend suffering-focused views of this kind. See:

https://web.archive.org/web/20190410204154/https://jwcwolf.public.iastate.edu/Papers/JUPE.HTM https://onlinelibrary.wiley.com/doi/abs/10.1111/j.2041-6962.1996.tb00795.x https://www.amazon.com/Suffering-Moral-Responsibility-Oxford-Ethics/dp/0195115996

Beyond that, even on purely welfarist (suffering-focused) views, there are many strong reasons to consider it bad when individuals die, and to oppose world destruction (see sections 8.1 and 8.2). In fact, the objections commonly raised against suffering-focused views are often more objections against purely welfarist views than they are against the moral asymmetry between happiness and suffering, since for any welfarist view one can construct an argument to the effect that one should be willing to kill for trivial reasons. For example, naively interpreted, a classical utilitarian should also be willing to kill a person, and indeed destroy the world, to prevent the smallest amount of suffering if the "sum" of happiness and suffering is exactly zero otherwise (a point often made by David Pearce). Likewise, a classical utilitarian should endorse what is arguably an even more repugnant world-destruction conclusion than the negative utilitarian: if we could push a button that first unleashes ceaseless torture upon every sentient individual for decades, and then destroys our world in order to give rise to a "greater" amount of pleasure in some new world, then classical utilitarianism would oblige us to press this button.

But these arguments obviously don't come close to showing that classical utilitarians should endorse violence of this sort in practice; they obviously shouldn't. The same holds true when similar arguments are applied to suffering-focused views.

2. Another belief I would want to challenge is that suffering-focused EAs make the world a more dangerous place from the perspective of other value systems. I would suggest the opposite is the case, and I think what's dangerous is that people don't appreciate this.

Among people who hold suffering-focused views, suffering-focused EAs fall toward the high tail in terms of being cooperative, measured, and prudent. It's a group that does, and to an even greater extent has the potential to, move other suffering-focused people in less naive and more cooperative directions, which is very positive on all value systems. Marginalizing people with suffering-focused views within EA is really not helpful to this end.

3. A third misunderstanding is that people who hold suffering-focused views are much more concerned about mild suffering than, say, the average ethically concerned person. This need not be the case. One can hold suffering-focused views that are primarily concerned with extreme suffering, and which give overriding weight to extreme suffering without giving commensurable weight to mild suffering. I defend such views in chapters 4-5.

'if you were given 10 billion dollars and 10 years to move your field forward, how precisely would you allocate it, and what do you think you could achieve at the end?'

I think I would devote it mostly to research — to building a research field. The field of "effective suffering reduction" is very young and unexplored at this point, and much of the discussion that has taken place so far has been tied to the idiosyncratic and speculative views of a few people (unavoidably so, given that so few people have done research on these issues so far). This means that there is likely a lot of low-hanging fruit here. Building such a research project is in large part the goal of the new organization that I have recently co-founded with Tobias Baumann: Center for Reducing Suffering ( https://centerforreducingsuffering.org/ ).

I think this can give us better insights into which risks we should be most concerned about and more clarity about how we can best reduce them. There's much more to be said here, but I'll let this suffice for now.

Comment by magnusvinding on New book — "Suffering-Focused Ethics: Defense and Implications" · 2020-05-31T11:41:34.283Z · EA · GW

Thanks for your comment, George.

Sections 1.4 and 8.5 in my book deal directly with the first issue you raise. Also see chapter 3, "Creating Happiness at the Price of Suffering Is Wrong", for various arguments against a moral symmetry between pleasure and suffering. But many chapters in the first part of the book deal with this.

>Empirically, I think it's pretty clear that most people are willing to trade off pleasure and pain for themselves.

I say a good deal about this in chapter 2. I also discuss the moral relevance of such intrapersonal claims in section 3.2, "Intra- and Interpersonal Claims".

Comment by magnusvinding on What analysis has been done of space colonization as a cause area? · 2019-10-12T10:20:22.861Z · EA · GW

You're welcome! :-)

Whether this is indeed a dissenting view seems unclear. Relative to the question of how space expansion would affect x-risk, it seems that environmentalists (of whom there are many) tend to believe it would increase such risks (though it's of course debatable how much weight to give their views). Some highly incomplete considerations can be found here: https://en.wikipedia.org/wiki/Space_colonization#Objections

The sentiment expressed in the following video by Bill Maher, i.e. that space expansion is a "dangerous idea" at this point, may well be shared by many people on reflection: https://www.youtube.com/watch?v=mrGFEW2Hb2g

One may say similar things in relation to whether it's a dissenting view on space expansion as a cause (even if we hold x-risk constant). For example, space expansion would most likely increase total suffering in expectation — see https://reducing-suffering.org/omelas-and-space-colonization/ — and one (probably unrepresentative) survey found that a significant plurality of people favored "minimizing suffering" as the ideal goal a future civilization should strive for: https://futureoflife.org/superintelligence-survey/.

Interestingly, the same survey also found that the vast majority of people want life to spread into space, which appears inconsistent with the plurality preference for minimizing suffering. An apparent case of (many) people's preferences contradicting themselves, at least in terms of the likely implications of these preferences.

Comment by magnusvinding on What analysis has been done of space colonization as a cause area? · 2019-10-10T16:10:54.879Z · EA · GW

Some have argued that space colonization would increase existential risks. Here is political scientist Daniel Deudney, whose book Dark Skies is supposed to be published by OUP this fall:

Once large scale expansion into space gets started, it will be very difficult to stop. My overall point is that we should stop viewing these ambitious space expansionist schemes as desirable, even if they are not yet feasible. Instead we should see them as deeply undesirable, and be glad that they are not yet feasible.[…] Space expansion may indeed be inevitable, but we should view this prospect as among the darkest technological dystopias. Space expansion should be put on the list of catastrophic and existential threats to humanity, and not seen as a way [to] solve or escape from them.

Quoted from: http://wgresearch.org/an-interview-with-daniel-h-deudney/

See also:

https://www.youtube.com/watch?v=6D09e6igS4o

https://docs.wixstatic.com/ugd/d9aaad_5c9b881731054ee8bca5fd30699e7df9.pdf

http://nautil.us/blog/-why-we-should-think-twice-about-colonizing-space

Regardless of one's values, it seems worth exploring the likely outcomes of space expansion in depth before pursuing it.

Comment by magnusvinding on How much EA analysis of AI safety as a cause area exists? · 2019-09-28T11:01:58.234Z · EA · GW

Thanks for the stab, Anthony. It's fairly fair. :-)

Some clarifying points:

First, I should note that my piece was written from the perspective of suffering-focused ethics.

Second, I would not say that "investment in AI safety work by the EA community today would only make sense if the probability of AI-catalyzed GCR were decently high". Even setting aside the question of what "decently high" means, I would note that:

1) Whether such investments in AI safety make sense depends in part on one's values. (Though another critique I would make is that "AI safety" is less well-defined than people often seem to think: https://magnusvinding.com/2018/12/14/is-ai-alignment-possible/, but more on this below.)

2) Even if "the probability of AI-catalyzed GCR" were decently high — say, >2 percent — this would not imply that one should focus on "AI safety" in a standard narrow sense (roughly: constructing the right software), nor that other risks are not greater in expectation (compared to the risks we commonly have in mind when we think of "AI-catalyzed catastrophic risks").

You write of "scenarios in which AGI becomes a catastrophic threat". But a question I would raise is: what does this mean? Do we all have a clear picture of this in our minds? This sounds to me like a rather broad class of scenarios, and a worry I have is that we all have "poorly written software" scenarios in mind, although such scenarios could well comprise a relatively narrow subset of the entire class that is "catastrophic scenarios involving AI".

Zooming out, my critique can be crudely summarized as a critique of two significant equivocations that I see doing an exceptional amount of work in many standard arguments for "prioritizing AI".

First, there is what we may call the AI safety equivocation (or motte and bailey): people commonly fail to distinguish between 1) a focus on future outcomes controlled by AI and 2) a focus on writing "safe" software. Accepting that we should adopt the former focus by no means implies we should adopt the latter. By (imperfect) analogy, to say that we should focus on future outcomes controlled by humans does not imply that we should focus primarily on writing safe human genomes.

The second is what we may call the intelligence equivocation, which is the one you described. We operate with two very different senses of the term "intelligence", namely 1) the ability to achieve goals in general (derived from Legg & Hutter, 2007), and 2) "intelligence" in the much narrower sense of "advanced cognitive abilities", roughly equivalent to IQ in humans.

These two are often treated as virtually identical, and we fail to appreciate the rather enormous difference between them, as argued in/evident from books such as The Knowledge Illusion: Why We Never Think Alone, The Ascent of Man, The Evolution of Everything, and The Secret of Our Success. This was also the main point in my Reflections on Intelligence.

Intelligence2 lies entirely in the brain, whereas intelligence1 includes the brain and so much more, including all the rest of our well-adapted body parts (vocal cords, hands, upright walk — remove just one of these completely in all humans and human civilization is likely gone for good). Not to mention our culture and technology as a whole, which is where our ability to achieve goals in any significant sense really emerges: it derives not from any single advanced machine but from our entire economy, a vastly greater toolbox than what intelligence2 covers.

Thus, it is a mistake to assume that by boosting intelligence2 to vastly super-human levels we necessarily get intelligence1 at a vastly super-human level, not least since "human-level intelligence1" already includes vastly super-human intelligence2 in many cognitive domains.

Comment by magnusvinding on How much EA analysis of AI safety as a cause area exists? · 2019-09-19T17:26:43.665Z · EA · GW

In brief: the less the specific structure of AGI determines future outcomes, the less relevant it is, and the less worthy of investment.

Comment by magnusvinding on How much EA analysis of AI safety as a cause area exists? · 2019-09-16T14:57:50.991Z · EA · GW

Interesting posts. Yet I don't see how they support the claim that what I described is unlikely. In particular, I don't see how "easy coordination" is in tension with what I wrote.

To clarify, competition that determines outcomes can readily happen within a framework of shared goals, and as instrumental to some overarching final goal. If the final goal is, say, to maximize economic growth (or if that is an important instrumental goal), this would likely lead to specialization and competition among various agents that try out different things, and which, by the nature of specialization, have imperfect information about what other agents know (not having such specialization would be much less efficient). In this, a future AI economy would resemble ours more than far-mode thinking suggests (this does not necessarily contradict your claim about easier coordination, though).

One reason I consider what I described likely is that I find it more plausible that future software systems will consist of a multitude of specialized systems with quite different designs, even in the presence of AGI, as opposed to most everything being done by copies of some singular AGI system. This "one system will take over everything" picture strikes me as far-mode thinking, and as unlikely not least given the history of technology and economic growth. I've outlined my view on this in the following e-book (though it's a bit dated in some ways): https://www.smashwords.com/books/view/655938 (short summary and review by Kaj Sotala: https://kajsotala.fi/2017/01/disjunctive-ai-scenarios-individual-or-collective-takeoff/)



Comment by magnusvinding on How much EA analysis of AI safety as a cause area exists? · 2019-09-15T15:24:25.221Z · EA · GW

Thanks for sharing and for the kind words. :-)

I should like to clarify that I also support FRI's approach to reducing AI s-risks. The issue is more how big a fraction of our resources approaches of this kind deserve relative to other things. My view is that, relatively speaking, we very much underinvest in addressing other risks, by which I roughly mean "risks not stemming primarily from FOOM or sub-optimally written software" (which can still involve AI plenty, of course). I would like to see a greater investment in broad explorative research on s-risk scenarios and how we can reduce them.

In terms of explaining the (IMO) skewed focus, it seems to me that we mostly think about AI futures in far mode, see https://www.overcomingbias.com/2010/06/near-far-summary.html and https://www.overcomingbias.com/2010/10/the-future-seems-shiny.html. Perhaps the most significant way in which this shows up is that we intuitively think the future will be determined by a single or a few agents and what they want, as opposed to countless different agents, cooperating and competing, with many factors that are non-intentional (from those future agents' perspective) influencing the outcomes.

I'd argue scenarios of the latter kind are far more likely, not just given the history of life and civilization, but also in light of general models of complex systems and innovation (variation and specialization seem essential, and the way these play out is unlikely to conform to a singular will in anything like the neat way far mode would portray it). Indeed, I believe such a scenario would be most likely to emerge even if a single universal AI ancestor took over and copied itself (specialization would be adaptive, and significant uncertainty about the exact information and (sub-)aims possessed by conspecifics would emerge).

In short, I think we place too much weight on simplistic toy models of the future, in turn neglecting scenarios that don't conform neatly to these, and the ways these could come about.

Comment by magnusvinding on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-10T14:58:30.906Z · EA · GW
That's why the very first words of my comment were "I don't identify as a utilitarian."

I appreciate that, and as I noted, I think this is fine. :-)

I just wanted to flag this because it took me some time to clarify whether you were replying based on 1) moral uncertainty/other frameworks, or 2) instrumental considerations relative to pure utilitarianism. I first assumed you were replying based on 2) (as Brian suggested), and I believe many others reading your answer might draw the same conclusion. But a closer reading made it clear to me you were primarily replying based on 1).

Comment by magnusvinding on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-08T14:11:35.967Z · EA · GW
The contractarian (and commonsense and pluralism, but the theory I would most invoke for theoretical understanding is contractarian) objection to such things greatly outweighs the utilitarian case.

It is worth noting that this is not, as it stands, a reply available to a pure traditional utilitarian.

failing to leave one galaxy, let alone one solar system for existing beings out of billions of galaxies would be ludicrously monomaniacal and overconfident

But a relevant question here is whether that also holds true given a purely utilitarian view, as opposed to, say, from a perspective that relies on various theories in some notional moral parliament.

It is, of course, perfectly fine to respond to the question "how do most utilitarians feel about X?" by saying "I'm not a utilitarian, but I am sympathetic to it, and here is how someone sympathetic to utilitarianism can reply by relying on other moral frameworks". But then it's worth being clear that the reply is not a defense of pure traditional utilitarianism — quite the contrary.

Comment by magnusvinding on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-07T09:37:20.632Z · EA · GW

Thanks for posting this, Richard. :-)

I think it is worth explaining what Knutsson's argument in fact is.

His argument is not that the replacement objection against traditional/classical utilitarianism (TU) is plausible. Rather, the argument is that the replacement objection against TU (as well as other consequentialist views it can be applied to, such as certain prioritarian views) is roughly as plausible as the world destruction argument is against negative utilitarianism (NU). And therefore, if one rejects NU and favors TU, or a similarly "replacement vulnerable" view, because of the world destruction argument, one must explain why the replacement argument is significantly less problematic for these other views.

That is, suppose one rejects such thought experiments in the case of TU and similar views because 1) endorsing or even entertaining such an idea would be sub-optimal for cooperation in the bigger picture, 2) it would be overconfident to act on it even if one finds the underlying theory to be the most plausible one, 3) it leaves out "consideration Y", or 4) it seems like a strawman on closer examination. Knutsson's point is that one can make similar points, with roughly equal plausibility, in the case of NU and world destruction.

As Knutsson writes in the abstract:

>The world destruction argument is not a reason to reject negative utilitarianism in favour of these other forms of consequentialism, because there are similar arguments against such theories that are at least as persuasive as the world destruction argument is against negative utilitarianism.



Comment by magnusvinding on Critique of Superintelligence Part 2 · 2019-06-20T13:20:06.848Z · EA · GW

Thanks for writing this. :-)

Just a friendly note: even as someone who largely agrees with you, I must say that I think a term like "absurd" is generally worth avoiding in relation to positions one disagrees with (I also say this as someone who is guilty of having used this term in similar contexts before).

I think it is better to use less emotionally laden terms, such as "highly unlikely" or "against everything we have observed so far", not least since "absurd" hardly adds anything of substance beyond what these alternatives can capture.

People who disagree strongly with one's position will probably not receive "absurd" well, or at any rate not optimally. It may also lead others to label one as overconfident and incapable of thinking clearly about low-probability events. And those of us who try to express skepticism of the kind you express here already face enough of a headwind from people who shake their heads and think to themselves, "they clearly just don't get it".


Other than that, I'm keen to ask: are you familiar with my book Reflections on Intelligence? It makes many of the same points that you make here. The same is true of many of the (other) resources found here: https://magnusvinding.com/2017/12/16/a-contra-ai-foom-reading-list/

Comment by magnusvinding on Moral Anti-Realism Sequence #1: What Is Moral Realism? · 2018-06-06T08:05:58.457Z · EA · GW

Thanks for your reply :-)

>For instance, I don't understand how [open individualism] differs from empty individualism. I'd understand if these are different framings or different metaphors, but if we assume that we're talking about positions that can be true or false, I don't understand what we're arguing about when asking whether open individualism is true, or when discussing open vs. empty individualism.

I agree completely. I identify equally as an open and empty individualist. As I've written elsewhere (in You Are Them): "I think these 'positions' are really just two different ways of expressing the same truth. They merely define the label of 'same person' in different ways."

>Also, I think it's perfectly coherent to have egoistic goals even under a reductionist view of personal identity.

I guess it depends on what those egoistic goals are. Some egoistic goals are highly instrumentally useful for the benefit of others, even when one doesn't intend to benefit others; cf. Smith's invisible hand, the deep wisdom of Ayn Rand, and, more generally, the fact that many of our selfish desires probably shouldn't be expected to be all that detrimental to others, or at least not to our in-group, given that we evolved as social creatures. This, I think, is a confounding factor that makes it seem plausible to say that pursuing such goals is coherent and non-problematic in light of a reductionist view of personal identity.

Yet if it is transparent that the pursuit of certain egoistic goals comes at the cost of many other beings' intense suffering, I think we would be reluctant to say that pursuing them is "perfectly coherent", especially in light of such a view of personal identity (though many would probably say it regardless; one can, for example, also argue that it is incoherent by appealing to consistency: "we should not treat the same, or sufficiently similar, entities differently"). For instance, would we, with this view of personal identity, really claim that it is "perfectly coherent" to push button A, "you get a brand new pair of shorts", when we could have pushed button B, "you prevent 100 years of torture (for someone else in one sense, yet for yourself in another, quite real sense) which will not be prevented if you push button A"? It seems much more plausible to deem it perfectly coherent to have a selfish desire to start a company, or to signal coolness or otherwise gain personal satisfaction by being an effective altruist.

>But if that's all we mean by "moral realism" then it would be rather trivial.

I don't quite understand why you would call this trivial. Perhaps it is trivial that many of us, perhaps even the vast majority, agree. Yet, as mentioned, acceptance of a principle like "avoid causing unnecessary suffering" is extremely significant in terms of its practical implications: many have argued that it implies the adoption of veganism (where the effects on wildlife, as a potential confounding factor, are often disregarded, of course), and one could even employ it to argue against space colonization (depending on what we hold to constitute necessity). So, in terms of practical consequences at least, I'm almost tempted to say that it could barely be more significant.

Nor is it clear to me that agreement on a highly detailed axiology would have significantly stronger, or even clearer, implications than what we could get off the ground from quite crude principles; it seems to me there may well be strong diminishing returns here, which the final sentence of your reply also seems to weakly acknowledge. One reason is that the large range of error produced by empirical uncertainty may, on consequentialist views at least, make the difference in practice between realizing a detailed axiology and realizing a crude one a lot less clear than the difference between the two axiologies at the purely theoretical level, perhaps even so much so as to make it virtually vanish in many cases.

>Maybe my criteria are a bit too strict [...]

I'm just wondering: too strict for what purpose?

This may seem a bit disconnected, but I want to share an analogy that came to mind. Imagine mathematics were a rather different field in which we only agreed about simple arithmetic, such as 2 + 2 = 4, and in which everything beyond that were like the Riemann hypothesis: no consensus, and clear answers apparently beyond our grasp. Would we then say that our recognition that 2 + 2 = 4 holds true, at least in some sense (given intuitive axioms, say), is trivial with respect to asserting some form of mathematical realism? And would finding widely agreed-upon solutions to our harder problems constitute a significant step toward deciding whether we should accept such a realism? I fail to see how it would.
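To make the "given intuitive axioms" bit concrete, here is a minimal sketch in the Lean proof assistant (purely illustrative; the specific formalization is incidental to the point): 2 + 2 = 4 follows by mere computation from the standard definition of addition on the natural numbers, whereas nothing comparable is available for the Riemann hypothesis.

```lean
-- A minimal sketch: with the standard successor-based definition of
-- addition on the natural numbers, 2 + 2 = 4 holds by pure computation,
-- so reflexivity (`rfl`) suffices as a proof.
example : 2 + 2 = 4 := rfl

-- No analogous derivation is available (so far) for the Riemann
-- hypothesis; it remains an open conjecture rather than a theorem
-- derivable from agreed-upon axioms.
```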

Comment by magnusvinding on Moral Anti-Realism Sequence #1: What Is Moral Realism? · 2018-06-04T12:29:13.142Z · EA · GW

Thanks for writing this, Lukas. :-)

As a self-identified moral realist, I did not find my own view represented in this post, although perhaps Railton's naturalist position is the one that comes closest. I can identify as an objectivist, a constructivist, and a subjectivist, indeed even as a Randian objectivist. It all rests on the nature of the ill-specified "subject" in question. If one is an open individualist, then subjectivism and objectivism will, one can argue, collapse into one: according to open individualism, the adoption of Randianism (or, in Sidgwick's terminology, "rational egoism") implies that we should do what is best for all sentient beings. In other words, subjectivism without indefensibly demarcated subjects (or at least with subjects whose demarcation is not granted unjustifiable metaphysical significance) is equivalent to objectivism. Or so I would argue.

As for Moore’s open question argument (which I realize was not explored in much depth here), it seems to me, as has been pointed out by others, that there can be an ontological identity between that which different words refer to even if these words are not commonly reckoned strictly synonymous. For example: Is water the same as H2O? Is the brain the mind? These questions are hardly meaningless, even if we think the answer to both questions is 'yes'. Beyond that, one can also defend the view that “the good” is a larger set of which any specific good thing we can point to is merely a subset, and hence the question can also make sense in this way (i.e. it becomes a matter of whether something is part of “the good”).

To turn the tables a bit here, I would say that to reject moral realism, on my account, one would need to say that there is no genuine normative force or property in, say, a state of extreme suffering (consider being fried in a brazen bull for concreteness). [And I think one can fairly argue that to say such a state has “genuine normative force” is very much an understatement.]

“Normative force for the experiencing subject or for all agents?” one may then ask. Yet on my account of personal identity, the open individualist account (cf. https://en.wikipedia.org/wiki/Open_individualism and https://www.smashwords.com/books/view/719903), there is no fundamental distinction, and thus my answer would simply be: yes, for the experiencing subject, and hence for all agents. (This is where our intuitions scream, of course, unless we are willing to suspend our strong, evolutionarily adaptive sense of self as some entity that rides around in some small part of physical reality.)

One may then object that different agents occupy genuinely different coordinates in spacetime, yet the same can be said of what we usually consider the same agent. So there is really no fundamental difference here: if we say that it is genuinely normative for Tim at t1 (or simply Tim1) to ensure that Tim at t2 (or simply Tim2) suffers less, then why wouldn't the same be true of Tim1 with respect to John1, 2, 3…?

With respect to the One Compelling Axiology you mention, Lukas, I am not sure why you would set the bar so high in terms of specificity in order to accept a realist view. If "all philosophers or philosophically-inclined reasoners" found a simple yet non-exhaustive principle like "reduce unnecessary suffering" plausible, why would that not be good enough to demonstrate its "realism" (on your account) when a more specific principle would? It is unclear to me why greater specificity should matter, especially since even such an unspecific principle would still have plenty of practical relevance (many people can admit that they are not living in accordance with this principle, even as they accept it).