Posts

Lukas_Gloor's Shortform 2020-07-27T14:35:50.329Z
Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue) 2020-06-17T12:33:05.392Z
Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails 2020-06-14T13:33:41.638Z
Moral Anti-Realism Sequence #3: Against Irreducible Normativity 2020-06-09T14:38:49.163Z
Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree 2020-06-05T07:51:59.975Z
Moral Anti-Realism Sequence #1: What Is Moral Realism? 2018-05-22T15:49:52.516Z
Cause prioritization for downside-focused value systems 2018-01-31T14:47:11.961Z
Multiverse-wide cooperation in a nutshell 2017-11-02T10:17:14.386Z
Room for Other Things: How to adjust if EA seems overwhelming 2015-03-26T14:10:52.928Z

Comments

Comment by lukas_gloor on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T16:28:18.363Z · EA · GW

It's maybe worth noting that there's an asymmetry: For people who think wild-animal lives are net positive, there are many things that contain even more sentient value than rainforest. By contrast, if you think wild-animal lives are net negative, only a few things contain more sentient disvalue than rainforest. (Of course, in comparison to expected future sentience, biological life only makes up a tiny portion, so rainforest is unlikely to be a priority from a longtermist perspective.)

I understand the worries described in the OP (apart from the "let's better not find out" part). I think it's important for EAs in the WAS reduction movement to proactively counter simplistic memes and to advocate interventions that don't cause great harm according to some very popular moral perspectives. I think that's a moral responsibility for animal advocates with suffering-focused views. (And as we see in other replies here, this sounds like it's already common practice!)

At the same time, I feel like the discourse on this topic can be a bit disingenuous sometimes, where people whose actions otherwise don't indicate much concern for the moral importance of the action-omission distinction (esp. when it comes to non-persons) suddenly employ rhetorical tactics that make it sound like "wrongly thinking animal lives are negative" is a worse mistake than "wrongly thinking they are positive". 

I also think this issue is thorny because, IMO, there's no clear answer. There are moral judgment calls to make that count for at least as much as empirical discoveries.
 

Comment by lukas_gloor on What is a book that genuinely changed your life for the better? · 2020-10-23T09:11:18.338Z · EA · GW

I also read Animorphs! I saw this tweet about it recently that was pretty funny. 
 

Comment by lukas_gloor on What is a book that genuinely changed your life for the better? · 2020-10-23T09:05:36.745Z · EA · GW

The Ancestor's Tale got me hooked on trying to understand the world. It was the perfect book for me at the time I read it (2008) because my English wasn't that good yet, and I would plausibly have been too overwhelmed to read The Selfish Gene right away. And it was just way too cool to have this backwards evolutionary journey to go through. Apart from the next item on this list, I can't remember another book that I was so eager to read once I saw what it was about. I really wish I could have that feeling again!

Practical Ethics was life-changing for the obvious reasons and also because it got me far enough into ethics to develop the ambition to solve all the questions Singer left open.

Atonement was maybe the fiction book that influenced me the most. I had to re-read it for an English exam, and it got me thinking about the typical mind fallacy and how people can perceive/interpret the same situation in very different ways.

Fiction books I read when I was younger must have affected me in various ways, but I can't point to any specific effect with confidence.

Comment by lukas_gloor on What is a "Kantian Constructivist view of the kind Christine Korsgaard favours"? · 2020-10-21T10:55:04.389Z · EA · GW

I'm not sure I remember this the right way, but here's an attempt: 

"Constructivism" can refer to a family of normative-ethical views according to which objectively right moral facts are whatever would be the output of some constructive function, such as an imagined social contract or the Kantian realm of ends. "Constructivism" can also refer to a non-realist metaethical view that moral language doesn't refer to moral facts that exist in an outright objective sense, but are instead "construed" intersubjectively via some constructive function. 

So, a normative-ethical constructivist uses constructive functions to find the objectively right moral facts, while a metaethical constructivist uses constructive functions to explain why we talk as though there are some kind of moral facts at all, and what their nature is.

I'm really not sure I got this exactly right, but I am confident that in the context of this "letter to a young philosopher," the author meant to refer to the metaethical version of constructivism. It's mentioned right next to subjectivism, which is another non-realist metaethical position. Unlike some other Kantians, Korsgaard is not an objectivist moral realist. 

So, I think the author of this letter is criticizing consequentialist moral realism because there's a sense in which its recommendations are "too impartial." The most famous critique of this sort is the "Critique of Utilitarianism" by Bernard Williams. I quoted the most relevant passage here. One way to point to the intuitive force of this critique is as follows: If your moral theory gives the same recommendation whether or not you replace all existing humans with intelligent aliens, something seems (arguably) a bit weird. The "human nature element," as well as relevant differences between different people, are all lost! At least, to anyone who cares about something other than "The one objectively correct thing to care about," the objective morality will seem wrong and alienating. Non-objectivist morality has the feature that moral actions depend on "who's here": morality arises from people, rather than people being receptacles for it. 

I actually agree with this type of critique – I just wouldn't say that it's incompatible with EA. It's only incompatible with how many EAs (especially Oxford-educated ones) currently think about the foundations of ethics.

Importantly, it doesn't automatically follow from this critique of objectivist morality that a strong focus on (some type of) effectiveness is misguided, or that "inefficient" charities suddenly look a lot better. Not at all. Maybe it can happen that certain charities/projects look better from that vantage point, depending on the specifics and so on. But this would require further arguments. 

Comment by lukas_gloor on Buck's Shortform · 2020-09-13T19:55:51.444Z · EA · GW

I thought the same thing recently.

Comment by lukas_gloor on Asking for advice · 2020-09-09T13:33:55.596Z · EA · GW

I have the same!

For me it's the feeling of too many options, that some options may be less convenient for the other person than they initially would think, and that I have to try to understand this interface (IT aversion) instead of replying normally (even just clicking on the link feels annoying).

Comment by lukas_gloor on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T06:38:04.533Z · EA · GW

I did read the post, and I mostly agree with you about the content (Edit: at least in the sense that I think large parts of the argument are valid; I think there are some important disanalogies that Hanson didn't mention, like "right to bodily integrity" being way clearer than "moral responsibility toward your marriage partner"). I find it weird that just because I think a point is poorly presented, people think I disagree with the point. (Edit: It's particularly the juxtaposition "gently raped," which also appears in the main part of the text. I also would prefer more remarks that put the reader at ease, e.g., repeating several times that it's all just a thought experiment, and so on.)

There's a spectrum of how much people care about a norm to present especially sensitive topics in a considerate way. You and a lot of other people here seem to be so far on one end of the spectrum that you don't seem to notice the difference between me and Ezra Klein (in the discussion between Sam Harris and Ezra Klein, I completely agreed with Sam Harris.) Maybe that's just because there are few people in the middle of this spectrum, and you usually deal with people who bring the same types of objections. But why are there so few people in the middle of this spectrum? That's what I find weird.

Some people here talk about a slippery slope and having to defend the ground at all costs. Is that the reasoning?

I want to keep up a norm that considerateness is really good. I think that's compatible with also criticizing bad outgrowths of considerate impulses. Just like it's compatible to care about truth-seeking, but criticize bad outgrowths of it. (If a virtue goes too far, it's not a virtue anymore.)

Comment by lukas_gloor on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T06:02:14.196Z · EA · GW

Thanks, that makes sense to me now! The three categories are also what I pointed out in my original comment:

Yes, it's a tradeoff, but Hanson is so close to one extreme of the spectrum that it starts to seem implausible that anyone could be that bad at communicating carefully just by accident. I don't think he's even trying; maybe he's deliberately walking as close to the line as possible.

Okay, so you cared mostly about this point about mind reading:

While I'm comfortable predicting those categories will exist, confidently asserting that someone falls into any particular category is hard,

This is a good point, but I didn't find your initial comment so helpful because this point against mind reading didn't touch on any of the specifics of the situation. It didn't address the object-level arguments I gave:

[...] I just feel like some of the tweet wordings were deliberately optimized to be jarring.)
but Hanson is so close to one extreme of the spectrum that it starts to seem implausible that anyone could be that bad at communicating carefully just by accident.

I felt confused about why I was presented with a fully general argument for something I thought I had indicated I already considered. If I read your comment as "I don't want to comment on the specific tweets, but your interpretation might be a bit hasty" – that makes perfect sense. But by itself, it felt to me like I was being strawmanned for not being aware of obvious possibilities. Similar to khorton, I had the impulse to say "What does this have to do with trolleys? Shouldn't we, if anything, talk about the specific wording of the tweets?" Because to me, phrases like "gentle, silent rape" seem obviously unnecessarily jarring even as far as Twitter discussions about rape go. (And while one could try to defend this as just blunt or blithe, I think the reasoning would have to be disanalogous to your trolley or food examples, because it's not like it should be surprising to any Western person in the last two decades that rape is a particularly sensitive topic – very unlike the "changing animal food to vegan food" example you gave.)

Comment by lukas_gloor on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-01T19:13:27.735Z · EA · GW
Now, I'm not saying Hanson isn't deliberately edgy; he very well might be.

If you're not saying that, then why did you make a comment? It feels like you're stating a fully general counterargument to the view that some statements are clearly worth improving, and that it matters how we say things. That seems like an unattractive view to me, and I'm saying that as someone who is really unhappy with social justice discourse.

Edit: It makes sense to give a reminder that we may sometimes jump to conclusions too quickly, and maybe you didn't want to voice unambiguous support for the view that the comment wordings were in fact not easy to improve on given the choice of topic. That would make sense – but then I have a different opinion.

Comment by lukas_gloor on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-29T12:53:11.640Z · EA · GW

That all makes sense. I'm a bit puzzled why it has to be edgy on top of just talking with fewer filters. It feels to me like the intention isn't just to discuss ideas with people who have a certain access need, but that there's also some element of deliberate provocation. (But maybe you could say that's just a side product of curiosity about where the lines are – I just feel like some of the tweet wordings were deliberately optimized to be jarring.) If it weren't for that one tweet that Hanson has now apologized for, I'd have less strong opinions on whether to use the term "misstep." (And the original post used it in the plural, so you have a point.)

Comment by lukas_gloor on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-29T09:29:29.590Z · EA · GW

Thanks, those are good points. I agree that this is not black and white, that there are some positives to being edgy.

That said, I don't think you make a good case for the alternative view. I wouldn't say that the problem with Hanson's tweets is that they cause "emotional damage." The problem is that they contribute to "toxoplasma of rage" dynamics (esp. combined with some people's impulse to defend everything about them). My intuition is that this negative effect outweighs the positive effects you describe.

Comment by lukas_gloor on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-29T08:17:33.290Z · EA · GW
This seems like a tradeoff to me

Yes, it's a tradeoff, but Hanson is so close to one extreme of the spectrum that it starts to seem implausible that anyone could be that bad at communicating carefully just by accident. I don't think he's even trying; maybe he's deliberately walking as close to the line as possible. What's the point in that? If I'm right, I wouldn't want to gratify that. I think it lacks nuance to blanket-object to the "misstep" framing, especially since that's still a relatively weak negative judgment. We probably want to be able to commend some people on their careful communication of sensitive topics, so we also have to be willing to call it out if someone is doing an absolutely atrocious job at it.

For reference, I have listened to a bunch of politically controversial podcasts by Sam Harris, and even though I think there's a bit of room to communicate even better, there were no remarks I'd label as 'missteps.' By contrast, several of Hanson's tweets are borderline at best, and at least one now-deleted tweet I saw was utterly insane. I don't think it's fair that everyone has to be at least as good at careful communication as Harris to be able to openly talk about sensitive topics (and it seems the bar set by societal backlash is even higher now, which is of course terrible), but maybe we can expect people to at least do better than Hanson? That doesn't mean that Hanson should be disinvited from events, but I feel like it would suck if he didn't take more time to make his tweets less needlessly incendiary.

Comment by lukas_gloor on Lukas_Gloor's Shortform · 2020-08-24T22:01:55.939Z · EA · GW

I'm not following the developments anymore. I could imagine that the IFR is now lower than it used to be in April because treatment protocols have improved.

Comment by lukas_gloor on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-13T10:37:54.666Z · EA · GW

Thinking about insights that were particularly relevant for me / my values:

  • Reducing long-term risks from malevolent actors as a potentially promising cause area
  • The importance of developing (the precursors for) peaceful bargaining strategies
    • Related: Anti-realism about bargaining? (I don't know if people still believed this in 2015, but early discussions on Lesswrong seemed to indicate that a prevalent belief was that there exists a proper solution to good bargaining that works best independently of the decision architecture of other agents in the environment.)
  • Possible implications of correlated decision-making in large worlds
    • Arguably, some people were thinking along these lines before 2015. However, so many things fall under the heading of "acausal trade" that it's hard to tell, and judging by conversations with people who think they understood the idea but actually mixed it up with something else, I assign 40% to this having been relevantly novel.
  • Some insights on metaethics might qualify. For instance, the claim "Being morally uncertain and confidently a moral realist are in tension" is arguably a macrostrategically relevant insight. It suggests that more discussion of the relevance of having underdetermined moral values (Stuart Armstrong wrote about this a lot) seems warranted, and that, depending on the conclusions from how to think about underdetermined values, peer disagreement might work somewhat differently for moral questions than for empirical ones. (It's hard to categorise whether these are novel insights or not. I think it's likely that there were people who would have confidently agreed with these points in 2015 for the right reasons, but maybe lacked awareness that not everyone will agree on addressing the underdetermination issue in the same way, and so "missed" a part of the insight.)

Comment by lukas_gloor on EA reading list: utilitarianism and consciousness · 2020-08-08T12:41:25.368Z · EA · GW

I also noticed this when I started planning a blogpost on this topic!

De Lazari-Radek and Singer's The Point of View of the Universe has a chapter on hedonism, but I think the argument is less developed than in the two links you give. (BTW, if you have a copy of the paper by Adam Lerner and think it's okay to share it with me, I'd be very interested!)

It's interesting to note that Sinhababu's epistemic argument for hedonism explicitly relies on the premise "moral realism is true." Without that premise, the argument would be less forceful (what remains would be the comparison that pleasure's goodness is similar to the brightness of the color "lemon yellow" – but that doesn't seem to support the strong version of the claim "pleasure is good.")

Comment by lukas_gloor on Max_Daniel's Shortform · 2020-08-06T13:36:37.584Z · EA · GW

Related: Relationships in a post-singularity future can also be set up to work well, so that the setup overdetermines any efforts by the individuals in them.

To me, that takes away the whole point. I don't think this would feel less problematic if somehow future people decided to add some noise to the setup, such that relationships occasionally fail.

The reason I find any degree of "setup" problematic is that it seems to emphasize the self-oriented benefits one gets out of relationships and de-emphasize the from-you-independent identity of the other person. It's romantic to think that there's a soulmate out there who would be just as happy to find you as you are about finding them. It's not that romantic to think about creating your soulmate with the power of future technology (or society doing this for you).

This is the "person-affecting intuition for thinking about soulmates." If the other person exists already, I'd be excited to meet them, and would be motivated to put in a lot of effort to make things work, as opposed to just giving up on myself in the face of difficulties. By contrast, if the person doesn't exist yet or won't exist in a way independent of my actions, I feel like there's less of a point/appeal to it.

Comment by lukas_gloor on The problem with person-affecting views · 2020-08-06T11:36:29.470Z · EA · GW

It's great to have a short description of the difficulties for person-affecting intuitions!

Any reasonable theory of population ethics must surely accept that C is better than B. C and B contain all of the same people, but one of them is significantly better off in C (with all the others equally well off in both cases). Invoking a person-affecting view implies that B and C are equally as good as each other, but this is clearly wrong.

That's a good argument. Still, I find person-affecting views underrated because I suspect that many people have not given much thought to whether it even makes sense to treat population ethics in the same way as other ethical domains.

Why do we think we have to be able to rate all possible world states according to how impartially good or bad they are? Population ethics seems underspecified on exactly the dimension from which many moral philosophers derive "objective" principles: others’ interests. It’s the one ethical discipline where others’ interests are not fixed. The principles that underlie preference utilitarianism aren’t sufficiently far-reaching to specify what to do with newly created people. And preference utilitarianism is itself incomplete, because of the further question: What are my preferences? (If everyone's preference were to be a preference utilitarian, we'd all be standing around waiting until someone has a problem or forms a preference that's different from selflessly adhering to preference utilitarianism.)

Preference utilitarianism seems like a good answer to some important question(s) that fall(s) under the "morality" heading. But it can't cover everything. Population ethics is separate from the rest of ethics.

And there's an interesting relation between how we choose to conceptualize population ethics and how we then come to think about "What are my life goals?"

If we think population ethics has a uniquely correct solution that ranks all world states without violations of transitivity or other, similar problems, we have to think that, in some way, there's a One Compelling Axiology telling us the goal criteria for every sentient mind. That axiology would specify how to answer "What are my life goals?"

By contrast, if axiology is underdetermined, then different people can rationally adopt different types of life goals.

I self-identify as a moral anti-realist because I'm convinced there's no One Compelling Axiology. Insofar as there's something fundamental and objective to ethics, it's this notion of "respecting others' interests." People's life goals (their "interests") won't converge.

Some people take personal hedonism as their life goals, some just want to Kill Bill, some want to have a meaningful family life and die from natural causes here on earth, some don't think about the future at all and live the party life, some discount any aspirations of personal happiness in favor of working toward positively affecting transformative AI, some want to live forever but also do things to help others realize their dreams along the way, some want to just become famous, etc.

If you think of humans as the biological algorithm we express, rather than the things we come to believe and identify with at some particular point in our biography (based on what we've lived), then you might be tempted to seek a One Compelling Axiology with the question "What's the human policy?" ("Policy" in analogy to machine learning.) For instance, you could plan to devote the future's large-scale simulation resources to figuring out the structure of what different humans come to value in different simulated environments, with different experienced histories. You could do science about this and identify general patterns.

But suppose you've figured out the general patterns and tell the result to the Bride in Kill Bill. You tell her "the objective human policy is X." She might reply "Hold on with your philosophizing, I'm going to have to kill Bill first. Maybe I'll come back to you and consider doing X afterwards." Similarly, if you tell a European woman with a husband and children about the arguments to move to San Francisco to work on reducing AI risks, because that's what she ended up caring about on many runs of simulations of her in environments where she had access to all the philosophical arguments, she might say "Maybe I'd be receptive to that in another life, but I love my husband in this world here, and I don't want to uproot my children, so I'm going to stay here and devote less of my caring capacity to longtermism. Maybe I'll consider wanting to donate 10% of my income, though." So, regardless of questions about their "human policy," in terms of what actual people care about at given points in time, life goals may differ tremendously between people, and even between copies of the same person in different simulated environments. That's because life goals also track things that relate to the identities we have adopted and the social connections that are meaningful to us.

If you say that population ethics is all-encompassing, you're implicitly saying that all the complexities in the above paragraphs count for nothing (or not much), and that people should just adopt the same types of life goals, no matter their level of novelty-seeking, achievement striving, prosociality, embeddedness in meaningful social connections, views on death, etc. You're implicitly saying that the way the future should ideally go has almost nothing to do with the goals of presently existing people. To me, that stance is more incomprehensible than some problem with transitivity.

Alternatively, you can say that maybe all of this can't be put under a single impartial utility function. If so, it seems that you're correct that you have to accept something similar to the violation of transitivity you describe. But is it really so bad if we look at it with my framing?

It's not "Even though there's a One Compelling Axiology, I'll go ahead and decide to do the grossly inelegant thing with it." Instead, it's "Ethics is about life goals and how to relate to other people with different life goals, as well as asking what types of life goals are good for people. Probably, different life goals are good for different people. Therefore, as long as we don't know which people exist, not everything can be determined. There also seems to be a further issue about how to treat cases where we create new people: that's population ethics, and it's a bit underdetermined, which gives more freedom for us to choose what to do with our future lightcone."

So, I propose to consider the possibility of drawing a more limited role for population ethics than it is typically conceptualized under. We could maybe think of it as: A set of appeals or principles by which beings can hold accountable the decision-makers that created them. This places some constraints on the already existing population, but it leaves room for personal life projects (as opposed to "dictatorship of the future," where all our choices about the future light cone are predetermined by the One Compelling Axiology, and so have no relation to which exact people are actually alive and care about it).

To give a few examples for population-ethical principles:

  • All else equal, it seems objectionable on other-regarding grounds to create minds that lament their existence.
  • It also seems objectionable, all else equal, to create minds and place them in situations where their interests are only somewhat fulfilled, if one could have provided them with better circumstances.
  • Likewise, it seems objectionable, all else equal, to create minds destined to constant misery, yet with a strict preference for existence over non-existence.

(Note that the first principle is about objecting to the fact of being created, while the latter two principles are about objecting to how one was created.)

We can also ask: Is it ever objectionable to fail to create minds – for instance, in cases where they’d have a strong interest in their existence?

(From a preference-utilitarian perspective, it seems left open whether the creation of some types of minds can be intrinsically important. Satisfied preferences are good because satisfying preferences is just what it means to consider the interests of others. Also counting the interests of not-yet-existent beings is a possible extension of that, but a somewhat peculiar one. The choice looks underdetermined, again.)

Ironically, the perspective I have described becomes very similar to how non-philosophers commonly think about the ethics of having children:

  • Parents are obligated to provide a very high standard of care for their children (universal principle)
  • People are free to decide against becoming parents (personal principle)
  • Parents are free to want to have as many children as possible (personal principle), as long as the children are happy in expectation (universal principle)
  • People are free to try to influence other people’s stances and parenting choices (personal principle), as long as they remain within the boundaries of what is acceptable in a civil society (universal principle)

Universal principles fall out of considerations about respecting others' interests. Personal principles fall out of considerations about "What are my life goals?"

Personal principles can be inspired by considerations of morality, i.e., they can be about choosing to give stronger weight to universal principles and filling out underdetermined stuff with one's most deeply held moral intuitions. Many people find existence meaningless without dedication to something greater than themselves.

Because there are different types of considerations at play in all of this, there's probably no super-elegant way to pack everything into a single, impartially valuable utility function. There will have to be some messy choices about how to make tradeoffs, but there isn't really a satisfying alternative. Just like people have to choose some arbitrary-seeming percentage of how much caring capacity they dedicate toward self-oriented life goals versus other-regarding ones (insofar as the separation is clean; it often isn't), we also have to somehow choose how much weight to give to different moral domains, including the considerations commonly discussed under the heading of population ethics, and how they relate to our own life goals and those of other existing people.

Comment by lukas_gloor on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-08-06T10:07:41.216Z · EA · GW
The exact thing that Williams calls 'alienating' is the thing that Singer, Yudkowsky, Parfit and many other realists and anti-realists consider to be the most valuable thing about morality! But you can keep this 'alienation' if you reframe morality as being the result of the basic, deterministic operations of your moral reasoning, the same way you'd reframe epistemic or practical reasoning on the anti-realist view. Then it seems more 'external' and less relativistic.

Nice point!

If your goal here is to convince those inclined towards moral realism to see anti-realism as existentially satisfying, I would recommend a different framing of it. I think that framing morality as a 'personal life goal' makes it seem as though it is much more a matter of choice or debate than it in fact is, and will probably ring alarm bells in the mind of a realist and make them think of moral relativism.

Yeah, I think that's a good suggestion. I had a point about "arguments can't be unseen" – which seems somewhat related to the alienation point.

I didn't quite want to imply that morality is just a life goal. There's a sense in which morality is "out there" – it's just more underdetermined than the realists think, and maybe more goes into whether or not to feel compelled to dedicate all of one's life to other-regarding concerns.

I emphasize this notion of "life goals" because it will play a central role later on in this sequence. I think it's central to all of normativity. Back when I was a moral realist, I used to say "ethics is about goals" and "everything is ethics." There's this position "normative monism" that says all of normativity is the same thing. I kind of feel this way, except that I think the target criteria can differ between people, and are often underdetermined. (As you point out in some comment, things also depend on which parts of one's psychology one identifies with.)

Comment by lukas_gloor on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-08-06T09:36:40.122Z · EA · GW

This discussion continues to feel like the most productive discussion I've had with a moral realist! :)

However, I do think that normative anti-realism is self-defeating, assuming you start out with normative concepts (though not an assumption that those concepts apply to anything). I consider this argument to be step 1 in establishing moral realism, nowhere near the whole argument.

[...]

So the wager argument for normative realism actually goes like this -
2) We have two competing ways of understanding how beliefs are justified. One is where we have anti-realist 'justification' for our beliefs, in purely descriptive terms of what we will probably end up believing given basic facts about how our minds work in some idealised situation. The other is where there are mind-independent facts about which of our beliefs are justified. The latter is more plausible because of 1).

[...]

Either you think some basic epistemic facts have to exist for reasoning to get off the ground and therefore that epistemic anti-realism is self-defeating, or you are an epistemic anti-realist and don't care about the realist's sense of 'self-defeating'. The AI is in the latter camp, but not because of evidence, the way that it's a moral anti-realist (...However, you haven’t established that all normative statements work the same way—that was just an intuition...), but just because it's constructed in such a way that it lacks the concept of an epistemic reason.
So, if this AI is constructed such that irreducibly normative facts about how to reason aren't comprehensible to it, it only has access to argument 1), which doesn't work. It can't imagine 2).

I think I agree with all of this, but I'm not sure, because we seem to draw different conclusions. In any case, I'm now convinced I should have written the AI's dialogue a bit differently. You're right that the AI shouldn't just state that it has no concept of irreducible normative facts. It should provide an argument as well!

What would you reply if the AI used the same structure of arguments against other types of normative realism as it uses against moral realism? This would amount to the following trilemma for proponents of irreducible normativity (using section headings from my text):

(1) Is irreducible normativity about super-reasons?

(2) Is (our knowledge of) irreducible normativity confined to self-evident principles?

(3) Is there a speaker-independent normative reality?

I think you're inclined to agree with me that (1) and (2) are unworkable or not worthy of the term "normative realism." Also, it seems like there's a weak sense in which you agree with the points I made in (3), as it relates to the domain of morality.

But maybe you only agree with my points in (3) in a weak sense, whereas I consider the arguments in that section to have stronger implications. The way I thought about this, I think the points in (3) apply to all domains of normativity, and they show that unless we come up with some other way to make normative concepts work that I haven't yet thought of, we are forced to accept that normative concepts, in order to be action-guiding and meaningful, have to be linked to claims about convergence in human expert reasoners. Doesn't this pin down the concept of irreducible normativity in a way that blocks any infinite wagers? It doesn't feel like proper non-naturalism anymore once you postulate this link as a conceptual necessity. "Normativity" became a much more mundane concept after we accepted this link.

However, I think that we humans are in a situation where 2) is open to consideration, where we have the concept of a reason for believing something, but aren't sure if it applies - and if we are in that situation, I think we are dragged towards thinking that it must apply, because otherwise our beliefs wouldn't be justified.

The trilemma applies here as well. Saying that it must apply still leaves you with the task of making up your mind on how normative concepts even work. I don't see alternatives to my suggestions (1), (2) and (3).

What I'm giving here is such a 'partners-in-crime' argument with a structure, with epistemic facts at the base. Realism about normativity certainly should lower the burden of proof on moral realism to prove total convergence now, because we already have reason to believe normative facts exist. For most anti-realists, the very strongest argument is the 'queerness argument' that normative facts are incoherent or too strange to be allowed into our ontology. The 'partners-in-crime'/'infinite wager' undermines this strong argument against moral realism. So some sort of very strong hint of a convergence structure might be good enough - depending on the details.

Since I don't think we have established anything interesting about normative facts, the only claim I see in the vicinity of what you say in this paragraph would go as follows:

"Since we probably agree that there is a lot of convergence among expert reasoners on epistemic facts, we shouldn't be too surprised if morality works similarly."

And I kind of agree with that, but I don't know how much convergence I would expect in epistemology. (I think it's plausible that it would be higher than for morality, and I do agree that this is an argument to at least look really closely for ways of bringing about convergence on moral questions.)

All I'll say is that I don't consider strongly conflicting intuitions in e.g. population ethics to be persuasive reasons for thinking that convergence will not occur. As long as the direction of travel is consistent, and we can mention many positive examples of convergence, the preponderance of evidence is that there are elements of our morality that reach high-level agreement.

I agree with this. My confidence that convergence won't work is based not only on observing disagreements in fundamental intuitions, but also on seeing why people disagree, and seeing that these disagreements are sometimes "legitimate" because ethical discussions always get stuck in the same places (differences in life goals, which are intertwined with axiology). If one actually thinks about what sorts of assumptions are required for the discussions not to get stuck (something like: "all humans would adopt the same broad types of life goals under idealized conditions"), many people would probably recognize that those assumptions are extremely strong and counterintuitive. Oddly enough, people often don't seem to think that far because they self-identify as moral realists for reasons that don't make any sense. They expect convergence on moral questions because they somehow ended up self-identifying as moral realists, instead of self-identifying as moral realists because they expect convergence.

(I'll maybe make another comment later today to briefly expand on my line of argument here.)

(I say elements because realism is not all-or-nothing - there could be an objective 'core' to ethics, maybe axiology, and much ethics could be built on top of such a realist core - that even seems like the most natural reading of the evidence, if the evidence is that there is convergence only on a limited subset of questions.)

I also agree with that, except that I think axiology is the one place where I'm most confident that there's no convergence. :)

Maybe my anti-realism is best described as "some moral facts exist (in a weak sense as far as other realist proposals go), but morality is underdetermined."

(I thought "anti-realism" was the best description for my view, because as I discussed in this comment, the way in which I treat normative concepts takes away the specialness they have under non-naturalism. Even some non-naturalists claim that naturalism isn't interesting enough to be called "moral realism." And insofar as my position can be characterized as naturalism, it's still underdetermined in places where it matters a lot for our ethical practice.)

Belief in God, or in many gods, prevented the free development of moral reasoning. Disbelief in God, openly admitted by a majority, is a recent event, not yet completed. Because this event is so recent, Non-Religious Ethics is at a very early stage. We cannot yet predict whether, as in Mathematics, we will all reach agreement. Since we cannot know how Ethics will develop, it is not irrational to have high hopes.

When I read some similar passage at the end of Parfit's Reasons and Persons (which may have even included a quote of this passage?), I shared Parfit's view. But I've done a lot of thinking since then. At some point one also has to drastically increase one's confidence that further game-changing considerations won't show up, especially if one's map of the option space feels very complete in a self-contained way, and intellectually satisfying.

Comment by lukas_gloor on Lukas_Gloor's Shortform · 2020-07-27T14:57:56.302Z · EA · GW

[Is pleasure ‘good’?]

What do we mean by the claim “Pleasure is good”?

There’s an uncontroversial interpretation and a controversial one.

Vague and uncontroversial claim: When we say that pleasure is good, we mean that all else equal, pleasure is always unobjectionable, and often it is desired.

Specific and controversial claim: When we say that pleasure is good, what we mean is that, all else equal, pleasure is an end we should be striving for. This captures points like:

  • that pleasure is in itself desirable,
  • that no mental state without pleasure is in itself desirable,
  • that more pleasure is always better than less pleasure.

People who say “pleasure is good” claim that we can establish this by introspection about the nature of pleasure. I don’t see how one could establish the specific and controversial claim from mere introspection. After all, even if I personally valued pleasure in the strong sense (I don’t), I couldn’t, with my own introspection, establish that everyone does the same. People’s psychologies differ, and how pleasure is experienced in the moment doesn’t fully determine how one will relate to it. Whether one wants to dedicate one’s life (or, for altruists, at least the self-oriented portions of one's life) to pursuing pleasure depends on more than just what pleasure feels like.

Therefore, I think pleasure is only good in the weak sense. It’s not good in the strong sense.

Comment by lukas_gloor on Lukas_Gloor's Shortform · 2020-07-27T14:53:39.939Z · EA · GW

[Are underdetermined moral values problematic?]

If I think my goals are merely uncertain, but in reality they are underdetermined and the contributions I make to shaping the future will be driven, to a large degree, by social influences, ordering effects, lock-in effects, and so on, is that a problem?

I can’t speak for others, but I’d find it weird. I want to know what I’m getting up for in the morning.

On the other hand, because it makes it easier for the community to coordinate and pull things in the same directions, there's a sense in which underdetermined values are beneficial.

Comment by lukas_gloor on Lukas_Gloor's Shortform · 2020-07-27T14:50:44.321Z · EA · GW

[Takeaways from Covid forecasting on Metaculus]

I’m probably going to win the first round of the Li Wenliang forecasting tournament on Metaculus, or maybe get second. (My screen name shows up in second on the leaderboard, but it’s a glitch that’s not resolved yet because one of the resolutions depends on a strongly delayed source.)

With around 52 questions, this was the largest forecasting tournament on the virus. It ran from late February until early June.

I learned a lot during the tournament. Besides claiming credit, I want to share some observations and takeaways from this forecasting experience, inspired by Linch Zhang’s forecasting AMA:

  • I did well at forecasting, but it came at the expense of other things I wanted to do. In February, March and April, Covid had completely absorbed me. I spent several hours per day reading news and had anxiety about regularly updating my forecasts. This was exhausting; I was relieved when the tournament came to an end.
  • I had previously dabbled in AI forecasting. Unfortunately, I can’t tell if I excelled at it because the Metaculus domain for it went dormant. In any case, I noticed that I felt more motivated to delve into Covid questions because they seemed more connected. It felt like I was not only learning random information to help me with a single question, but I was acquiring a kind of expertise. (Armchair epidemiology? :P ) I think this impression was due to a mixture of perhaps suboptimal question design for the AI Metaculus domain and the increased difficulty of picking up useful ML intuitions on the go.
  • One thing I think I’m good at is identifying reasons why past trends might change. I’m always curious to understand the underlying reasons behind some trends. I come up with lots of hypotheses because I like the feeling of generating a new insight. I often realized that my hunches were wrong, but in the course of investigating them, I improved my understanding.
  • I have an aversion to making complex models. I always feel like model uncertainty is too large anyway. When forecasting Covid cases, I mostly looked for countries where similar situations had already played out. Then, I'd think about factors that might be different in the new situation, and make intuition-based adjustments in the direction predicted by the differences.
  • I think my main weakness is laziness. Occasionally, when there was an easy way to do it, I'd spot-check hypotheses by making predictions about past events that I hadn't yet read about. However, I don't do this nearly enough. Also, I rely too much on factoids I picked up from somewhere without verifying how accurate they are. For instance, I had it stuck in my head that someone said the case doubling rate was 4 days. So I operated with this assumption for many days of forecasting, before realizing that it was actually looking more like 2.5 days in densely populated areas, and that I should have spent more time looking firsthand into this crucial variable anyway (the sketch after this list shows how much that difference matters). Lastly, I noticed a bunch of times that other forecasters were talking about issues I don't have a good grasp on (e.g., test-positivity rates), and I felt that I'd probably improve my forecasting if I looked into them, but I preferred to stick with approaches I was more familiar with.
  • Better IT skills really would have helped me generate forecasts faster; because I lacked them, I had to do crazy things with pen and paper. (But none of what I did involved more than elementary-school math.)
  • I learned that confidently disagreeing with the community forecast is different from “not confidently agreeing.” I lost a bunch of points twice due to underconfidence. In cases where I had no idea about some issue and saw the community predict <10%, I didn’t want to go <20% because that felt inappropriate given my lack of knowledge about the plausible-sounding scenario. I couldn't confidently agree with the community, but since I also didn't confidently disagree with them, I should have just deferred to their forecast. Contrarianism is a valuable skill, but one also has to learn to trust others in situations where one sees no reason not to.
  • I realized early that when I changed my mind on some consideration that initially had me predict different from the community median, I should make sure to update thoroughly. If I no longer believe my initial reason for predicting significantly above the median, maybe I should go all the way to slightly below the median next. (The first intuition is to just move closer to it but still stay above.)
  • From playing a lot of poker, I have the habit of imagining that I make some bet (e.g., a bluff or thin value bet) and it will turn out that I’m wrong in this instance. Would I still feel good about the decision in hindsight? This heuristic felt very useful to me in forecasting. It made me reverse initially overconfident forecasts when I realized that my internal assumptions didn’t feel like something I could later on defend as “It was a reasonable view at the time.”
  • I made a couple of bad forecasts after I stopped following developments every day. I realized I needed to re-calibrate how much to trust my intuitions once I no longer had a good sense of everything that was happening.
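
As a rough sketch of why that doubling-time factoid mattered so much (my own illustration with hypothetical case numbers, not a calculation from the tournament), here is how far apart a 4-day and a 2.5-day doubling time put a naive exponential projection after just two weeks:

def projected_cases(current_cases, doubling_time_days, horizon_days):
    # Naive unmitigated exponential growth: cases double every doubling_time_days.
    return current_cases * 2 ** (horizon_days / doubling_time_days)

current = 1_000  # hypothetical number of current cases
horizon = 14     # days ahead

for doubling_time in (4.0, 2.5):
    cases = projected_cases(current, doubling_time, horizon)
    print(f"doubling time {doubling_time} days -> ~{cases:,.0f} cases in {horizon} days")

# doubling time 4.0 days -> ~11,314 cases in 14 days
# doubling time 2.5 days -> ~48,503 cases in 14 days

Under these assumptions, the shorter doubling time implies more than four times as many cases after two weeks, which is why getting this one variable right mattered more than most other modeling details.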

Some things I was particularly wrong about:

  • This was well before I started predicting on Metaculus, but up until about February 5th, I was way too pessimistic about the death rate for young healthy people. I think I lacked the medical knowledge to have the right prior about how strongly age-skewed most illnesses are, and therefore updated too strongly upon learning about the deaths of two young healthy Chinese doctors.
  • Like others, I overestimated the importance of hospital overstrain. I assumed that this would make the infection fatality rate about 1.5x–2.5x worse in countries that don’t control their outbreaks. This didn’t happen.
  • I was somewhat worried about food shortages initially, and was surprised by the resilience of the food distribution chains.
  • I expected more hospitalizations in Sweden in April.
  • I didn’t expect the US to put >60 countries on the level-3 health warning travel list. I was confident that they would not do this, because “If a country is gonna be safer than the US itself, why not let your citizens travel there??”
  • I was nonetheless too optimistic about the US getting things under control eventually, even though I saw comments from US-based forecasters who were more pessimistic.
  • My long-term forecasts for case numbers tended to be somewhat low. (Perhaps this was in part related to laziness; the Metaculus interface made it hard to create long tails for the distribution.)

Some things I was particularly right about:

  • I was generally early to recognize the risks from novel coronavirus / Covid.
  • For European countries and the US initially, I expected lockdown measures to work roughly as well as they did. I confidently predicted lower than the community for the effects of the first peak.
  • I somewhat confidently ruled out IFR estimates <0.5% in early March already, and I think this was for good reasons, even though I continued to accumulate better evidence for my IFR predictions later and was wrong about the effects of hospital overstrain.
  • I very confidently doubled down against <0.5% IFR estimates in late March, despite the weird momentum that developed around taking them seriously, and the confusion about the percentage of asymptomatic cases.
  • I have had very few substantial updates since mid March. I predicted the general shape of the pandemic quite well, e.g. here or here.
  • I confidently predicted that the UK and the Netherlands (later) would change course about their initial “no lockdown” policy.
  • I noticed early that Indonesia had a large undetected outbreak. A couple of days after I predicted this, the deaths there jumped from 1 to 5 and its ratio of confirmed cases to deaths became the worst (or second worst?) in the world at the time.

(I have stopped following the developments closely by now.)

Comment by lukas_gloor on Lukas_Gloor's Shortform · 2020-07-27T14:40:15.316Z · EA · GW

[When thinking about what I value, should I take peer disagreement into account?]

Consider the question “What’s the best career for me?”

When we think about choosing careers, we don’t update to the career choice of the smartest person we know or the person who has thought the most about their career. Instead, we seek out people who have approached career choice with a similar overarching goal/framework (in my case, 80,000 Hours is a good fit), and we look toward the choices of people with similar personalities (in my case, I notice a stronger personality overlap with researchers than managers, operations staff, or those doing earning to give).

When it comes to thinking about one’s values, many people take peer disagreement very seriously.

I think that can be wise, but it shouldn't be done unthinkingly. I believe that the quest to figure out one's values shares strong similarities with the quest to figure out one's ideal career. Before deferring to others in one's deliberations, I recommend making sure that others are asking the same questions (not everything that comes with the label "morality" is the same) and that they are psychologically similar in the ways that seem fundamental to what you care about as a person.

Comment by lukas_gloor on Lukas_Gloor's Shortform · 2020-07-27T14:38:16.393Z · EA · GW

[I’m an anti-realist because I think morality is underdetermined]

I often find myself explaining why anti-realism is different from nihilism / “anything goes.” I wrote lengthy posts in my sequence on moral anti-realism (2 and 3) about partly this point. However, maybe the framing “anti-realism” is needlessly confusing because some people do associate it with nihilism / “anything goes.” Perhaps the best short explanation of my perspective goes as follows:

I’m happy to concede that some moral facts exist (in a comparatively weak sense), but I think morality is underdetermined.

This means that beyond the widespread agreement on some self-evident principles, expert opinions won’t converge even if we had access to a superintelligent oracle. Multiple options will be defensible, and people will gravitate to different attractors in value space.

Comment by lukas_gloor on Lukas_Gloor's Shortform · 2020-07-27T14:35:50.632Z · EA · GW

[Moral uncertainty and moral realism are in tension]

Is it ever epistemically warranted to have high confidence in moral realism, and also be morally uncertain not only between minor details of a specific normative-ethical theory but between theories?

I think there's a tension there. One possible reply might be the following. Maybe we are confident in the existence of some moral facts, but multiple normative-ethical theories can accommodate them. Accordingly, we can be moral realists (because some moral facts exist) and be morally uncertain (because there are many theories to choose from that accommodate the little bits we think we know about moral reality).

However, what do we make of the possibility that moral realism could be true only in a very weak sense? For instance, maybe some moral facts exist, but most of morality is underdetermined. Similarly, maybe the true morality is some all-encompassing and complete theory, but humans might be forever epistemically closed off to it. If so, then, in practice, we could never go beyond the few moral facts we already think we know for sure.

Assuming a conception of moral realism that is action-relevant for effective altruism (e.g., because it predicts reasonable degrees of convergence among future philosophers, or makes other strong claims that EAs would be interested in), is it ever epistemically warranted to have high confidence in that, and be open-endedly morally uncertain?

Another way to ask this question: If we don't already know/see that a complete and all-encompassing theory explains many of the features related to folk discourse on morality, why would we assume that such a complete and all-encompassing theory exists in a for-us-accessible fashion? Even if there are, in some sense, "right answers" to moral questions, we need more evidence to conclude that morality is not vastly underdetermined.

For more detailed arguments on this point, see section 3 in this post.

Comment by lukas_gloor on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-07-25T14:47:46.019Z · EA · GW
[...] one thing that it convinced me of is that there is a close connection between your particular normative ethical theory and moral realism. If you claim to be a moral realist but don't make ethical claims beyond 'self-evident' ones like pain is bad, given the background implausibility of making such a claim about mind-independent facts, you don't have enough 'material to work with' for your theory to plausibly refer to anything.

Cool, I'm happy that this argument appeals to a moral realist!

I agree that it then shifts the arena to convergence arguments. I will discuss them in posts 6 and 7.

In short, I don't think of myself as a moral realist because I see strong reasons against convergence about moral axiology and population ethics.

This won't compel the anti-realist, but I think it would compel someone weighing up the two alternative theories of how justification works. If you are uncertain about whether there are mind-independent facts about our beliefs being justified, the argument that anti-realism is self-defeating pulls you in the direction of realism.

I don't think this argument ("anti-realism is self-defeating") works well in this context. If anti-realism is just the claim "the rocks or free-floating mountain slopes that we're seeing don't connect to form a full mountain," I don't see what's self-defeating about that.

One can try to say that a mistaken anti-realist makes a more costly mistake than a mistaken realist. However, on close inspection, I argue that this intuition turns out to be wrong. It also depends a lot on the details. Consider the following cases:

(1) A person with weak object-level normative opinions. To such a person, the moral landscape they're seeing looks like either:

(1a) free-floating rocks or parts of mountain slope, with a lot of fog and clouds.

(1b) many (more or less) full mountains, all of which are similarly appealing. The view feels disorienting.

(2) A person with strong object-level normative opinions. To such a person, the moral landscape they're seeing looks like either:

(2a) a full mountain with nothing else of note even remotely in the vicinity.

(2b) many (more or less) full mountains, but one of which is definitely theirs. All the other mountains have something wrong/unwanted about them.

2a is confident moral realism. 2b is confident moral anti-realism. 1a is genuine uncertainty, which is compatible with moral realism in theory, but there's no particular reason to assume that the floating rocks would connect. 1b is having underdefined values.

Of course, how things appear to someone may not reflect how they really are. We can construct various types of mistakes that people in the above examples might be making.

This requires longer discussion, but I feel strongly that someone whose view is closest to 2b has a lot to lose by trying to change their psychology into something that lets them see things as 1a or 1b instead. They do have something to gain if 1a or 1b are actually epistemically warranted, but they also have stuff to lose. And the losses and gains here are commensurate – I tried to explain this in endnote 2 of my fourth post. (But it's a hastily written endnote and I would have ideally written a separate post about just this issue. I plan to touch on it again in a future post on how anti-realism changes things for EAs.)

Lastly, it's worth noting that sometimes people's metaethics interact with their normative ethics. A person might not adopt a mindset of thinking about or actually taking stances on normative questions because they're in the habit of deferring to others or waiting until morality is solved. But if morality is a bit like career choice, then there are things to lose from staying indefinitely uncertain about one's ideal career, or just going along with others.

To summarize: There's no infinitely strong wager for moral realism. There is an argument for valuing moral reflection (in the analogy: gaining more clarity on the picture that you're seeing, and making sure you're right about what you think you're seeing). However, the argument for valuing moral reflection is not overridingly strong. It is to be traded off against the strength of one's object-level normative opinions. And without object-level normative opinions, one's values might be underdetermined.

Comment by lukas_gloor on Do research organisations make theory of change diagrams? Should they? · 2020-07-23T07:15:36.078Z · EA · GW
I'd guess that both of those conditions will be harder to maintain as an organisation scales up. Would you guess that ToC diagrams tend to become more useful as organisations scale up?

I think so. I'm somewhat nervous about this because if the culture changes drastically, maybe that's generally bad and ToC documents only mitigate some of the badness, but can't quite get you back the culture of a smaller organization. Whether scaling up a lot even makes sense might depend on the organization's mission, or on the ability of the executive director (and hiring committee) to scale in a way that preserves the right culture.

Also, when you say "prioritization abilities", do you just mean ability to prioritise between research questions?

Also the other things you list.

I ask largely because one reason I suspect ToC diagrams may be helpful is to guide decisions about things like which forms of output to produce, who to share research findings with, and whether and how to disseminate particular findings broadly. It seems plausible to me that a researcher who's excellent at prioritizing among research questions might not be good at thinking about those matters, and a ToC diagram (or the process of making one) might speed up or clarify their thoughts on those matters.

That seems reasonable. My experience is that people often know the right answers in theory, but need a lot of nudging to choose media or venues different from the ones they personally find the most rewarding. Individual psychology also imposes large constraints that make things less flexible than one might think. So, to preserve intrinsic motivation for research, it's maybe not a good idea to push researchers too much. Still, I think it's crucial to have a culture where researchers think actively about which medium to pick, why they're picking it, and how the output will be shared. As long as this is being diligently considered and discussed, I think it's reasonable to defer to the judgment of individual researchers.

Comment by lukas_gloor on Do research organisations make theory of change diagrams? Should they? · 2020-07-23T07:04:28.517Z · EA · GW

I did write something that builds on it, yeah. It was about defining various proxies to optimize for (e.g., money, societal influence, connections to other EA organizations, followers of the organization's newsletter (with near-term EA as their main interest), value-aligned people with computer science expertise, etc.) and how well they do in futures where we decide different interventions are most important. I didn't want to make it public because it felt unpolished, and I was worried that some of the proxies could give outsiders the impression of instrumentalizing people.

Someone even helped me with Excel to produce a heat map of the results weighted by the probability we assign to various interventions mattering the most, and at the time this helped me clarify objections I had to EAF's 2015/2016 strategic direction (we interacted little with other EA orgs and tried to build up capacity with animal advocacy, while always promoting cause neutrality with the intent of maybe pivoting to other causes later). It didn't lead to many important changes right away, but we made major changes in 2017 that strongly reflected the takeaways I had sketched in those documents.
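For readers curious what that kind of weighting looks like outside a spreadsheet, here is a minimal toy sketch in Python. All proxy names, scenario labels, probabilities, and scores below are hypothetical placeholders, not the figures from the actual documents:

```python
# Toy sketch: probability-weighted scoring of capacity-building proxies.
# Scenario probabilities and proxy scores are invented for illustration only.

scenarios = {"AI safety": 0.4, "Animal advocacy": 0.3, "Priorities research": 0.3}

# How useful each proxy would be (0-10) if a given scenario turns out to matter most.
proxy_scores = {
    "Unrestricted funding":       {"AI safety": 8, "Animal advocacy": 8, "Priorities research": 8},
    "Ties to other EA orgs":      {"AI safety": 6, "Animal advocacy": 4, "Priorities research": 7},
    "Animal-advocacy supporters": {"AI safety": 1, "Animal advocacy": 9, "Priorities research": 2},
    "CS-expert collaborators":    {"AI safety": 9, "Animal advocacy": 1, "Priorities research": 3},
}

def weighted_value(scores: dict, probabilities: dict) -> float:
    """Expected usefulness of a proxy, weighted by the probability of each scenario."""
    return sum(probabilities[s] * scores[s] for s in probabilities)

for proxy, scores in proxy_scores.items():
    print(f"{proxy:28s} {weighted_value(scores, scenarios):.2f}")
```

The heat-map version is just this same matrix of proxy-by-scenario scores, with the probability-weighted totals highlighting which proxies look robustly useful across the scenarios one takes seriously.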

Comment by lukas_gloor on Do research organisations make theory of change diagrams? Should they? · 2020-07-22T09:42:03.114Z · EA · GW

Context: I'm drawing from experience with a small research organization in a young field where it used to be very hard to do good research without thoroughly understanding the causal paths to impact.

Strongly stated, weakly held, and definitely tainted by personal idiosyncrasies:

I often found myself suspicious of (too many) internal strategy documents because I think that in a well-functioning organization of the kind I described, the people who make prioritization decisions (researchers pursuing their interests autonomously, or executive directors/managers who define tasks and targets at the organization level) should be hired, among other things, for their prioritization abilities.

My sense is that being good at prioritization is more about mindset than about following some plan, and it involves thinking through the paths to impact for every decision, every day. So when I'm asked to help write up a theory of change, my intuitive reaction is: "Who is this for? This feels like tediously writing down things that are already second nature to many people, and so much goes into it that it's hard not to come away feeling that the resulting document is too simplistic to be of any use."

So, I'm overall skeptical about the use of ToC documents for improving a small organization's focus, especially if the organization operates in a field/paradigm where staff have already been selected for their ability to prioritize well.

To be clear, I'm not comparing this against not thinking about strategy at all. Instead, I favor leaner versions of strategy discussions. For instance, one person writes up their thoughts on what could be improved (this might sometimes look like an abbreviated version of a ToC document), then core staff use it as a basis for group discussions and try to identify the non-obvious questions that seem most crucial to the organization's strategic direction. Then one discusses these questions from various angles, switches to a solution-oriented mode, and defines action points. The results of those discussions should be written down, but there's no need to start at "our mission is to reduce future suffering."

Of course, there might be other reasons why internal ToC documents could be useful. For instance, not everyone's work involves making big-picture prioritization decisions, and it's helpful and motivating for all staff to have a good sense of what the organization concretely aims to accomplish. Still, if the reason for writing a ToC document is updating staff instead of actually improving overall prioritization and focus, then that calls for different ways of writing the document. And perhaps doing a (recorded) strategy Q&A with researchers and the executive director might be more efficient than a drily written document with rectangles and arrows.

Another instance where ToC documents might be (more) useful is for establishing consensus about an organization's aims. If it feels like the organization lacks a coherent framework for how to think about their mission, maybe the process of writing a ToC document could be helpful in getting staff to think along similar lines.

Comment by lukas_gloor on Cause prioritization for downside-focused value systems · 2020-07-21T18:58:18.857Z · EA · GW

Sorry for the delayed answer; I had this open but forgot.

I like this map! Do you know of anything that attempts to assign probabilities (even very vague/ballpark) to these different outcomes?

Not in any principled way, no. I think the action thresholds ("How large/small would the probability have to be in order to make a for-me-actionable difference?") are quite low if you're particularly suffering-focused, and quite high if you have a symmetrical/upside-focused view. (This distinction is crude, and nowadays I'd caveat that some plausible moral views might not fit on the spectrum.) So in practice, I'd imagine that cruxes are rarely about the probabilities of these scenarios. Still, I think it could be interesting to think about their plausibility and likelihood in a systematic fashion.

Given my lack of knowledge about the different risk factors, I mostly just treat each of the different possible outcomes on your map and the hypothetical "map that also tracked outcomes with astronomical amounts of happiness" as being roughly equal in probability.

At the extremes (very good outcomes vs. very bad ones), the good outcomes seem a lot more likely, because future civilization would want to intentionally bring them about. For the very bad outcomes, things don't just have to go wrong; they have to go wrong in very specific ways.

For the less extreme cases (moderately good vs. moderately bad), I think most options are defensible and treating them as similarly likely certainly seems reasonable.

Comment by lukas_gloor on Moral Anti-Realism Sequence #1: What Is Moral Realism? · 2020-07-03T12:56:05.567Z · EA · GW

Thanks!

At the time when I wrote this post, the formatting either didn't yet allow the hyperlinked endnotes, or (more likely) I didn't know how to do the markdown. I plan to update the endnotes here so they become more easily readable.

Update 7/7/2020: I updated the endnotes.

Comment by lukas_gloor on Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue) · 2020-06-18T13:33:17.558Z · EA · GW

Yeah, I made the AI really confident for purposes of sharpening the implications of the dialogue. I want to be clear that I don't think the AI's arguments are obviously true.

(Maybe I should flag this more clearly in the dialogue itself, or at least the introduction. But I think this is at least implicitly explained in the current wording.)

Comment by lukas_gloor on Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue) · 2020-06-18T13:31:17.071Z · EA · GW
I think sometimes my metaethical fanaticism looks like that. And I imagine for some people that's how it typically looks. But I think for me it's more often "wanting to be careful in case moral realism is true", rather than "hoping that moral realism is true". You could even say it's something like "concerned that moral realism might be true".

Interesting! Yeah, that framing also makes sense to me.

Comment by lukas_gloor on Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue) · 2020-06-18T13:30:37.135Z · EA · GW

Thanks for those thoughts, and for the engagement in general! I just want to flag that I agree that weaker versions of the wager aren't covered with my objections (I also say this in endnote 5 of my previous post). Weaker wagers are also similar to the way valuing reflection works for anti-realists (esp. if they're directed toward naturalist or naturalism-like versions of moral realism).

I think it's important to note that anti-realism is totally compatible with this part you write here:

Humanity should try to "keep our options open" for a while (by avoiding existential risks), while also improving our ability to understand, reflect, etc. so that we get into a better position to work out what options we should take.

I know that you wrote this part because you'd primarily want to use the moral reflection to figure out if realism is true or not. But even if one were confident that moral realism is false, there remain some strong arguments to favor reflection. (It's just that those arguments feel like less of a forced move, and the are interesting counter-considerations to also think about.)

(Also, whether one is a moral realist or not, it's important to note that working toward a position of option value for philosophical reflection isn't the only important thing to do according to all potentially plausible moral views. For some moral views, the most important time to create value arguably happens before long reflection.)

Comment by lukas_gloor on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-06-16T13:22:05.706Z · EA · GW

It seems odd to me to suggest we have any examples of maximally nuanced and versatile reasoners. It seems like all humans are quite flawed thinkers.

Sorry, bad phrasing on my part! I didn't mean to suggest that there are perfect human reasoners. :)

The context of my remark was this argument by Richard Yetter-Chappell. He thinks that as humans, we can use our inside view to disqualify hypothetical reasoners who don't even change their minds in the light of new evidence, or don't use induction. We can disqualify them from the class of agents who might be correctly predisposed to apprehend normative truths. We can do this because compared to those crappy alien ways of reasoning, ours feels undoubtedly "more nuanced and versatile."

And so I'm replying to Yetter-Chappell that, as far as inside-view criteria for disqualifying people from the class of promising candidates for the correct psychology go, we probably can't find differences among humans that would rule out everyone except a select few reasoners who will all agree on the right morality. Insofar as we try to construct a non-gerrymandered reference class of "humans who reason in really great ways," that reference class will still contain unbridgeable disagreement.

One example of why: I don't think we yet have a compelling demonstration that, given something like coherent extrapolated volition, humans wouldn't converge on the same set of values. So I think we need to rely on arguments, speculations, etc. for matters like that, rather than the answer already being very clear.

I haven't yet made any arguments about this (because this is the topic of future posts in the sequence), but my argument will be that we don't necessarily need a compelling demonstration, because we know enough about why people disagree to tell that they aren't always answering the same question and/or paying attention to the same evaluation criteria.

Comment by lukas_gloor on Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails · 2020-06-16T13:12:26.309Z · EA · GW

Yes, that's the same intuition. :)

In that case, I'll continue clinging to my strange wager as I await your next post :)

Haha. The intuition probably won't get any weaker, but my next post will spell out the costs of endorsing this intuition as your value, as opposed to treating it as a misguided intuition. Perhaps by reflecting on the costs and practical inconveniences of treating this intuition as one's terminal value, we might come to rethink it.

Comment by lukas_gloor on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-06-16T13:07:07.507Z · EA · GW

Good question!

By "open-ended moral uncertainty" I mean being uncertain about one's values without having in mind well-defined criteria (either implicit or explicit) for what constitutes a correct solution.

Footnote 26 leaves me with the impression that perhaps you mean something like "uncertainty about what our fundamental goals should be, rather than uncertainty that's just about what should follow from our fundamental goals". But I'm not sure I'd call the latter type of uncertainty normative/moral uncertainty at all - it seems more like logical or empirical uncertainty.

Yes, this captures it well. I'd say most of the usage of "moral uncertainty" in EA circles is at least in part open-ended, so this is in agreement with your intuition that maybe what I'm describing isn't "normative uncertainty" at all. I think many effective altruists use "moral uncertainty" in a way that either fails to refer to anything meaningful, or implies under-determined moral values. (I think this can often be okay. Our views on lots of things are under-determined and there isn't necessarily anything wrong with that. But sometimes it can be bad to think that something is well-determined when it's not.)

Now, I didn't necessarily mean to suggest that the only defensible way to think that morality has enough "structure" to deserve the label "moral realism" is to advance an object-level normative theory that specifies every single possible detail. If someone subscribes to hedonistic total utilitarianism but leaves it under-defined to what degree bees can feel pleasure, maybe that still qualifies as moral realism. But if someone is so morally uncertain that they don't know whether they favor preference utilitarianism or hedonistic utilitarianism, or whether they might favor some kind of prioritarianism after all, or even something entirely different such as Kantianism, moral particularism, etc., then I would ask them: "Why do you think the question you're asking yourself is well-defined? What are you uncertain about? Why do you expect there to be a speaker-independent solution to this question?"

To be clear, I'm not making an argument that one cannot be in a state of uncertainty between, for instance, preference utilitarianism versus hedonistic utilitarianism. I'm just saying that, as far as I can tell, the way to make this work satisfactorily would be based on anti-realist assumptions. The question we're asking, in this case, isn't "What's the true moral theory?" but "Which moral theory would I come to endorse if I thought about this question more?"

Comment by lukas_gloor on Timeline of the wild-animal suffering movement · 2020-06-16T10:26:50.811Z · EA · GW

Dawkins wrote about it and said "it must be so." Maybe the timeline is about people who explicitly challenged that perception.

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-16T08:41:50.111Z · EA · GW

Does (2) sound like a roughly accurate depiction of your views?

Yes, but with an important caveat. The way you described the three views doesn't make it clear that 2. and 3. have the same practical implications as 1., whereas I intended to describe them in a way that leaves no possible doubt about that.

Here's how I would change your descriptions to make them compatible with my views:

  • A position in which there may not even be a single correct moral theory ((no change))

  • A position in which no strong claims can ever be made about what the single correct moral theory would be.

  • A position in which the only moral questions that have a correct (and/or knowable) answer are questions on which virtually everyone already agrees.

As you can see, my 2. and 3. are quite different from what you wrote.

Comment by lukas_gloor on Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails · 2020-06-16T07:39:37.961Z · EA · GW

I meant it the way you describe, but I didn't convey it well. Maybe a good way to explain it as follows:

My initial objection to the wager is that the anti-realist way of assigning what matters is altogether very different from the realist way, and this makes the moral realism wager question-begging. This is evidenced by issues like "infectiousness." I maybe shouldn't even have called that a counter-argument; I'd just think of it as supporting evidence for the view that the two perspectives are altogether too different for there to be a straightforward wager.

However, one way to still get something that behaves like a wager is if one perspective "voluntarily" favors acting as though the other perspective is true. Anti-realism is about acting on the moral intuitions that most deeply resonate with you. If your caring capacity under anti-realism says "I want to act as though irreducible normativity applies," and the perspective from irreducible normativity says "you ought to act as though irreducible normativity applies," then the wager goes through in practice.

(In my text, I wrote "Admittedly, it seems possible to believe that one’s actions are meaningless without irreducible normativity." This is confusing because it sounds like it's a philosophical belief rather than a statement of value. Edit: I now edited the text to reflect that I was thinking of "believing that one's actions are meaningless without irreducible normativity" as a value statement.)

Comment by lukas_gloor on Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails · 2020-06-16T07:25:12.044Z · EA · GW

Thanks! Yeah, I'm curious about the same questions regarding the strong downvotes. Since I wrote "it works well as a standalone piece," I guess I couldn't really complain if people felt that the post was unconvincing on its own. I think the point I'm making in the "Begging the question" subsection only works if one doesn't think of anti-realism as nihilism/anything goes. I only argued for that in previous posts.

(If the downvotes were because readers are tired of the topic or thought that the discussion of Huemer's argument was really dry, the good news is that I have only 1 post left for the time being, and it's going to be a dialogue, so perhaps more engaging than this one.)

Comment by lukas_gloor on Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails · 2020-06-16T07:17:15.107Z · EA · GW

You're describing what I tried to address in my last paragraph, the stance I called "metaethical fanaticism." I think you're right that this type of wager works. Importantly, it depends on having the strongly felt intuition you describe, and on giving it (near-)total weight in what you care about.

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-14T11:50:54.061Z · EA · GW

I feel like you're too focused on this notion of whether something "exists" or not. One of the main points I was trying to convey in the article is that I don't consider this to be an ideal way of framing the disagreement. See for instance these quotes:

Going by connotations alone, we might at first think that realism means that a domain in question is real, whereas anti-realism implies that it’s something other than real (e.g., that it’s merely imagined). Although accurate in a very loose sense, this interpretation is misleading.

[...]

Typically, when someone stops believing in God, they also stop talking as though God exists. As far as private purposes are concerned, atheists don’t generally refine their concept of God; they abandon it.[3]
Going from realism to anti-realism works differently.

[...]

Rejecting realism for a domain neither entails erasing the substance of that domain, nor (necessarily) its relevance. Anti-realists will generally agree that the domain has some relevance, some “structure.”

__

Now quoting something from your comment:

Lack of sharp boundaries is in my mind no strong argument for denying the existence of a claimed aspect of reality. Okay, this also feels uncharitable, but it felt like Alice was arguing that the moon doesn't exist because there are edge cases, like the big rock that orbits Pluto.

Hm, I think it goes beyond just saying that a concept has fuzzy boundaries. Some people might say that "markets" don't exist because it's a fuzzy, abstract concept and people may not agree in practice what aspects of physical reality are part of a market. This would be a pedantic way of objecting to the claim "markets are real." That's not what I think anti-realism is about. :)

With the example of consciousness, my point would go something like this: "There might be a totally sensible interpretation of consciousness according to which bees are conscious, and a totally sensible interpretation according to which they aren't. Bees aren't 'edge cases' like the rocks that surround Pluto. They either fall square into a concept of consciousness, or completely outside of it. Based on what we can tell from introspection and from our folk concept of consciousness, it's under-determined what we're supposed to do with bees."

If put this way, perhaps you'd agree that this is in conflict with the realist intuition that consciousness is this crisp thing that systems either have or lack.

Then Alice would maybe say

Haha. Or if you wanted to make the joke about Dennett's eliminativism, you could describe Alice's reply like this:

"Look, here's an optical illusion. And here's another one. Therefore, consciousness doesn't exist."

But I think that's uncharitable to Dennett. If you read Consciousness Explained in search of arguments why consciousness doesn't exist, you'll be disappointed. However, if you read it in search of arguments why there's no clearcut way to extrapolate from obvious examples like "I'm conscious right now" to less obvious ones like "are bees conscious?" then the book will be really interesting. All these illusions and discussions about fancy neuroscience (e.g., cutaneous rabbit or the discussion about Stalinesque versus Orwellian revisions) support the point that many processes we believe to have a good introspective grasp on are actually much more under-determined than we would intuitively guess. This supports the view that consciousness is very unlike what we think it is. Some people therefore say things like "consciousness ((as we think of it)) doesn't exist." I think that's misleading and will confuse everyone. I think it would be easier to understand anti-realists if they explained their views by saying that things are different from how they appear, and more ambiguous in quite fundamental ways, etc.

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-14T11:25:24.911Z · EA · GW
This passage sounds to me like it's implying that the anti-realist position is: "Some moral claims may be objectively true, but many are neither objectively true nor objectively false." In this case, it sounds like the anti-realist is saying that there is a speaker-independent fact of the matter about whether everyone getting tortured is morally worse than a world full of flourishing, and just denying that that means there will always be independent facts of the matter about moral claims.

I should have chosen a more nuanced framing in my comment. Instead of saying, "Sure, we can agree about that," the anti-realist should have said "Sure, that seems like a reasonable way to use words. I'm happy to go along with using moral terms like 'worse' or 'better' in ways where this is universally considered self-evident. But it seems to me that you think you are also saying that for every moral question, there's a single correct answer [...]"

So the anti-realist isn't necessarily conceding that "surely a world where everyone gets tortured is worse than a world where everyone flourishes" is a successful argument in favor of moral realism. At least, it's not yet an argument for ambitious versions of moral realism (ones "worthy of the name" according to my semantic intuitions).

I think I'd want to classify such a view as moral realist in an important sense, as it seems to involve realism about at least some moral claims.

It's possible that you just have different semantic intuitions from me. It might be helpful to take a step back and ignore whether or not to classify a view as "moral realism," and think about what it means for notions like moral uncertainty, the value of information for doing more work in philosophy, the prospect of convergence among people's normative-ethical views if they did more reflecting, etc. Because the view we are discussing here has relatively weak implications for all these things, I personally didn't feel like calling it "moral realism."

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-14T11:17:09.690Z · EA · GW
Does the following statement of the slogan seem to you to capture the anti-realist position: "Reality doesn't come with objectively correct labels. Humans create labels and draw categories, and how they do this will be determined by physical reality, but there's no separate criteria determining how humans should do this; there's nothing more/other than how they will do this."

Yeah, that sounds right! It carries more information than my crude proposal.

As you suggest, moral naturalists might agree that reality (obviously) doesn't carry labels. They might argue that in a way, it kind of screams out at you where you can put the labels. And the anti-realist position is that there's more ambiguity than "it just screams out at you."

While the distinction between anti-realism and non-naturalism seems relatively clearcut, I think the distinction between anti-realism and naturalism is a bit loose. This is also reflected in Luke Muehlhauser's Pluralistic Moral Reductionism post. Luke left it open whether to count PMR as realism or anti-realism. By contrast, my terminological choice has been to count it as anti-realism.

Comment by lukas_gloor on Moral Anti-Realism Sequence #1: What Is Moral Realism? · 2020-06-14T11:10:22.813Z · EA · GW
I found this argument confusing. Wouldn't it be acceptable, and probably what we'd expect, for a metaethical view to not also provide answers on normative ethics or axiology?

I'm not saying metaethical views have to advance a particular normative-ethical theory. I'm just saying that if a realist metaethical view doesn't do this, it becomes difficult to explain how proponents of this view could possibly know that there really is "a single correct theory."

So for instance, looking at the arguments by Peter Railton, it's not clear to me whether Railton even expects there to be a single correct moral theory. His arguments leave morality under-defined. "Moral realism" is commonly associated with the view that there's a single correct moral theory. Railton has done little to establish this, so I think it's questionable whether to call this view "moral realism."

Of course, "moral realism" is just a label. It matters much more that we have clarity about what we're discussing, instead of which label we pick. If someone wants to use the term "moral realism" for moral views that are explicitly under-defined (i.e., views according to which many moral questions don't have an answer), that's fine. In that sense, I would be a "realist."

It seems that finding out there are "speaker-independent moral facts, rules or values" would be quite important, even if we don't yet know what those facts are.

One would think so, but as I said, it depends on what we mean exactly by "speaker-independent moral facts." On some interpretations, those facts may be forever unknowable. In that case, knowledge that those facts exist would be pointless in practice.

I write more about this in my 3rd post, so maybe the points will make more sense with the context there. But really the main point of this 1st post is that I make a proposal in favor of being cautious about the label "moral realism" because, in my view, some versions of it don't seem to have action-guiding implications for how to go about effective altruism.

(I mean, if I had started out convinced of moral relativism, then sure, "moral realism" in Peter Railton's sense would change my views in very action-guiding ways. But moral relativists are rare. I feel like one should draw the realism vs. anti-realism distinction in a place where it isn't obvious that one side is completely wrong. If we draw the distinction in such a way that Peter Railton's view qualifies as "moral realism," then it would be rather trivial that anti-realism was wrong. This would seem uncharitable to all the anti-realist philosophers who have done important work on normative ethics.)

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-12T14:44:34.059Z · EA · GW
2) saying that your subjective experience is real (that is, it exists in some form and is not just a delusion)

What does it entail when you say that your subjective experience "is real"? It's important to note that the anti-realist doesn't try to take away the way something feels to you. Instead, the anti-realist disagrees with the further associations you might have for "consciousness is real." If consciousness is real, it seems like there'd be a fact of the matter whether bees are conscious, that there's an unambiguous way to answer the question "Are bees conscious?" without the need to further explain what exactly the question is going for. As I tried to explain in endnote 18, that's a very different claim from "it feels like something to be me, right now" (or "it feels like something to be in pain" – to use the example in your comment above).

For consciousness, this sentiment is really hard to explain. I think endnote 18 is the best explanation I've managed to give thus far. I'd say the sentiment behind anti-realism is much easier to understand with other bedrock concepts (Tier 2, 3, or 4).

For instance, you can go through a similar dialogue structure for morality. The moral realist says "But surely moral facts exist, because it seems that, all else equal, a world where everyone gets tortured is worse than a world full of flourishing." In reply, the moral anti-realist might say something like "Sure, we can agree about that. But it seems to me that you think you are also saying that for every moral question, there's a single correct answer. That, for instance, whether or not people have obligations to avoid purchasing factory farmed meat has an unambiguous answer. I don't see how you think you can establish this merely by pointing at self-evident examples such as 'surely a world where everyone gets tortured is worse than a world full of flourishing.' It seems to me that you have not yet argued that what's moral versus what's not moral always has a solution."

Analogously, the same dialogue works for aesthetic realism. The Mona Lisa might be (mostly) uncontroversially beautiful, but it would be weird to infer from this that "Are Mark Rothko's paintings beautiful?" is a well-specified question with a single true answer.

Comment by lukas_gloor on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-06-12T14:15:16.969Z · EA · GW

Thanks for this comment, this type of empirical metaethics research is quite new to me and it sounds really fascinating!

(1) Moral cognition may not have evolved
With respect to the claim that morality evolved, Mallon & Machery (2010) provide at least three interpretations of what this could mean:
(a) Some components of moral psychology evolved
(b) normative cognition evolved
(c) moral cognition, “understood as a special sort of cognition” (p. 4), evolved.
They provide what strikes me as a fairly persuasive case that (a) is uncontroversially true, (b) is probably true, but (c) isn’t well-supported by available data.
Only (c) would easily support EDAs, while (b) may not and whether (a) could support EDAs would presumably depend on the details.
In subsequent papers, Machery (2018) and Stich (2018) have developed on this and related criticisms, arguing that morality is a culturally-contingent phenomenon and that there is no principled distinction between moral and nonmoral norms, respectively (see also Sinnott-Armstrong & Wheatley, 2012).

You say that only (c) would easily support EDAs. Is this because of worries that EDAs would be too strong if they also applied against normative cognition in general? If yes, I think this point might be (indirectly) covered by my thoughts in footnote 5. I would argue that EDAs go through for all domains of irreducible normativity, not just ethics. But as I said, I haven't given this much thought, so I might be missing why (c) is needed for EDAs against moral cognition to go through. I have bookmarked the paper you cited and will investigate why the authors think this. (Edit: Not sure I'll be able to easily access the text, though.)

Comment by lukas_gloor on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-06-11T10:22:07.629Z · EA · GW

That makes sense! I'll try to change the titles tomorrow (I hope I won't make a mess out of it:)).

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-10T15:53:56.891Z · EA · GW

Not really. In the duck-rabbit illusion, the image itself is clear. (I mean OK, the figure is coarse-grained as far as digital images can go, but that's not the main reason why the image allows for different interpretations. You could also imagine a duck-rabbit illusion with better graphics.) The argument isn't about the indirectness of perception.

Maybe a good slogan for the anti-realists would be "reality doesn't come with labels." There's a fact of the matter about how atoms (or 1s and 0s) are allocated, but how we draw categories comes down to subjective judgment calls.