Posts

Lukas_Gloor's Shortform 2020-07-27T14:35:50.329Z · score: 6 (1 votes)
Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue) 2020-06-17T12:33:05.392Z · score: 21 (8 votes)
Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails 2020-06-14T13:33:41.638Z · score: 20 (18 votes)
Moral Anti-Realism Sequence #3: Against Irreducible Normativity 2020-06-09T14:38:49.163Z · score: 37 (16 votes)
Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree 2020-06-05T07:51:59.975Z · score: 53 (28 votes)
Moral Anti-Realism Sequence #1: What Is Moral Realism? 2018-05-22T15:49:52.516Z · score: 60 (32 votes)
Cause prioritization for downside-focused value systems 2018-01-31T14:47:11.961Z · score: 53 (38 votes)
Multiverse-wide cooperation in a nutshell 2017-11-02T10:17:14.386Z · score: 33 (23 votes)
Room for Other Things: How to adjust if EA seems overwhelming 2015-03-26T14:10:52.928Z · score: 26 (20 votes)

Comments

Comment by lukas_gloor on Lukas_Gloor's Shortform · 2020-07-27T14:57:56.302Z · score: 16 (8 votes) · EA · GW

[Is pleasure ‘good’?]

What do we mean by the claim “Pleasure is good”?

There’s an uncontroversial interpretation and a controversial one.

Vague and uncontroversial claim: When we say that pleasure is good, we mean that all else equal, pleasure is always unobjectionable, and often it is desired.

Specific and controversial claim: When we say that pleasure is good, what we mean is that, all else equal, pleasure is an end we should be striving for. This captures points like:

  • that pleasure is in itself desirable,
  • that no mental states without pleasure are in themselves desirable,
  • that more pleasure is always better than less pleasure.

People who say “pleasure is good” claim that we can establish this by introspection about the nature of pleasure. I don’t see how one could establish the specific and controversial claim from mere introspection. After all, even if I personally valued pleasure in the strong sense (I don’t), I couldn’t, with my own introspection, establish that everyone does the same. People’s psychologies differ, and how pleasure is experienced in the moment doesn’t fully determine how one will relate to it. Whether one wants to dedicate one’s life (or, for altruists, at least the self-oriented portions of one’s life) to pursuing pleasure depends on more than just what pleasure feels like.

Therefore, I think pleasure is only good in the weak sense. It’s not good in the strong sense.

Comment by lukas_gloor on Lukas_Gloor's Shortform · 2020-07-27T14:53:39.939Z · score: 6 (3 votes) · EA · GW

[Are underdetermined moral values problematic?]

If I think my goals are merely uncertain, but in reality they are underdetermined and the contributions I make to shaping the future will be driven, to a large degree, by social influences, ordering effects, lock-in effects, and so on, is that a problem?

I can’t speak for others, but I’d find it weird. I want to know what I’m getting up for in the morning.

On the other hand, there’s a sense in which underdetermined values are beneficial: they make it easier for the community to coordinate and pull in the same direction.

Comment by lukas_gloor on Lukas_Gloor's Shortform · 2020-07-27T14:50:44.321Z · score: 31 (12 votes) · EA · GW

[Takeaways from Covid forecasting on Metaculus]

I’m probably going to win the first round of the Li Wenliang forecasting tournament on Metaculus, or maybe get second. (My screen name currently shows up second on the leaderboard, but that’s a glitch that hasn’t been resolved yet because one of the question resolutions depends on a heavily delayed source.)

With around 52 questions, this was the largest forecasting tournament on the virus. It ran from late February until early June.

I learned a lot during the tournament. Besides claiming credit, I want to share some observations and takeaways from this forecasting experience, inspired by Linch Zhang’s forecasting AMA:

  • I did well at forecasting, but it came at the expense of other things I wanted to do. In February, March, and April, Covid completely absorbed me. I spent several hours per day reading news and felt anxious about keeping my forecasts regularly updated. This was exhausting; I was relieved when the tournament came to an end.
  • I had previously dabbled in AI forecasting. Unfortunately, I can’t tell whether I excelled at it, because the Metaculus domain for it went dormant. In any case, I noticed that I felt more motivated to delve into Covid questions because they seemed more interconnected. It felt like I was not only learning random information to help me with a single question, but acquiring a kind of expertise. (Armchair epidemiology? :P ) I think this impression was due to a mixture of perhaps suboptimal question design for the AI Metaculus domain and the greater difficulty of picking up useful ML intuitions on the go.
  • One thing I think I’m good at is identifying reasons why past trends might change. I’m always curious to understand the underlying reasons behind trends, and I come up with lots of hypotheses because I like the feeling of generating a new insight. I often realized that my hunches were wrong, but in the course of investigating them, I improved my understanding.
  • I have an aversion to making complex models; I always feel like model uncertainty is too large anyway. When forecasting Covid cases, I mostly looked for countries where similar situations had already played out. Then I’d think about factors that might be different in the new situation and make intuition-based adjustments in the direction those differences predicted.
  • I think my main weakness is laziness. Occasionally, when there was an easy way to do it, I’d spot-check hypotheses by making predictions about past events that I hadn’t yet read about. However, I didn’t do this nearly enough. I also relied too much on factoids I had picked up from somewhere without verifying how accurate they were. For instance, I had it stuck in my head that someone said the case doubling time was 4 days. I operated with this assumption for many days of forecasting before realizing that it was actually looking more like 2.5 days in densely populated areas, and that I should have spent more time looking into this crucial variable firsthand anyway (see the short calculation after this list for how much the doubling-time assumption matters). Lastly, I noticed a bunch of times that other forecasters were talking about issues I didn’t have a good grasp of (e.g., test-positivity rates), and I felt that I’d probably improve my forecasting if I looked into them, but I preferred to stick with approaches I was more familiar with.
  • IT skills really would have helped me generate forecasts faster. I had to do crazy things with pen and paper because I lacked them. (But none of what I did involved more than elementary-school math.)
  • I learned that confidently disagreeing with the community forecast is different from “not confidently agreeing.” I lost a bunch of points twice due to underconfidence. In cases where I had no idea about some issue and saw the community predict <10%, I didn’t want to go <20% because that felt inappropriate given my lack of knowledge about the plausible-sounding scenario. I couldn't confidently agree with the community, but since I also didn't confidently disagree with them, I should have just deferred to their forecast. Contrarianism is a valuable skill, but one also has to learn to trust others in situations where one sees no reason not to.
  • I realized early that when I changed my mind on some consideration that initially had me predicting differently from the community median, I should make sure to update thoroughly. If I no longer believe my initial reason for predicting significantly above the median, maybe I should go all the way to slightly below the median. (The first intuition is to just move closer to it but still stay above.)
  • From playing a lot of poker, I have the habit of imagining that I make some bet (e.g., a bluff or thin value bet) and it will turn out that I’m wrong in this instance. Would I still feel good about the decision in hindsight? This heuristic felt very useful to me in forecasting. It made me reverse initially overconfident forecasts when I realized that my internal assumptions didn’t feel like something I could later on defend as “It was a reasonable view at the time.”
  • I made a couple of bad forecasts after I stopped following developments every day. I realized I needed to re-calibrate how much to trust my intuitions once I no longer had a good sense of everything that was happening.
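To illustrate why the doubling-time assumption was such a crucial variable, here is a minimal back-of-the-envelope sketch (my own illustration, not part of the original forecasts; the starting count of 1,000 cases is arbitrary, and real outbreaks only approximate clean exponential growth):

# Rough sketch: how much the assumed case doubling time matters
# under clean exponential growth (illustrative numbers only).
def cases_after(days, doubling_time, initial_cases=1000):
    """Project case counts assuming a constant doubling time (in days)."""
    return initial_cases * 2 ** (days / doubling_time)

horizon = 14  # days
slow = cases_after(horizon, doubling_time=4.0)   # the factoid I had stuck in my head
fast = cases_after(horizon, doubling_time=2.5)   # closer to what dense areas showed

print(f"4-day doubling:   {slow:,.0f} cases after {horizon} days")   # ~11,300
print(f"2.5-day doubling: {fast:,.0f} cases after {horizon} days")   # ~48,500
print(f"Ratio: {fast / slow:.1f}x")               # ~4.3x more cases from the same start

Over just two weeks, the two assumptions diverge by more than a factor of four, which is why getting this variable right firsthand would have been worth the extra time.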

Some things I was particularly wrong about:

  • This was well before I started predicting on Metaculus, but up until about February 5th, I was way too pessimistic about the death rate for young healthy people. I think I lacked the medical knowledge to have the right prior about how strongly age-skewed most illnesses are, and therefore updated too strongly upon learning about the deaths of two young healthy Chinese doctors.
  • Like others, I overestimated the importance of hospital overstrain. I assumed that this would make the infection fatality rate about 1.5x–2.5x worse in countries that don’t control their outbreaks. This didn’t happen.
  • I was somewhat worried about food shortages initially, and was surprised by the resilience of the food distribution chains.
  • I expected more hospitalizations in Sweden in April.
  • I didn’t expect the US to put >60 countries on the level-3 health warning travel list. I was confident that they would not do this, because “If a country is gonna be safer than the US itself, why not let your citizens travel there??”
  • I was nonetheless too optimistic about the US getting things under control eventually, even though I saw comments from US-based forecasters who were more pessimistic.
  • My long-term forecasts for case numbers tended to be somewhat low. (Perhaps this was in part related to laziness; the Metaculus interface made it hard to create long tails for the distribution.)

Some things I was particularly right about:

  • I was generally early to recognize the risks from novel coronavirus / Covid.
  • For European countries, and initially for the US, I expected lockdown measures to work roughly as well as they did. For the first peak, I confidently predicted lower numbers than the community.
  • I somewhat confidently ruled out IFR estimates <0.5% as early as the beginning of March, and I think this was for good reasons, even though I continued to accumulate better evidence for my IFR predictions later and was wrong about the effects of hospital overstrain.
  • I very confidently doubled down against <0.5% IFR estimates in late March, despite the weird momentum that developed around taking them seriously, and the confusion about the percentage of asymptomatic cases.
  • I have had very few substantial updates since mid March. I predicted the general shape of the pandemic quite well, e.g. here or here.
  • I confidently predicted that the UK and the Netherlands (later) would change course about their initial “no lockdown” policy.
  • I noticed early that Indonesia had a large undetected outbreak. A couple of days after I predicted this, the deaths there jumped from 1 to 5 and its ratio of confirmed cases to deaths became the worst (or second worst?) in the world at the time.

(I have stopped following the developments closely by now.)

Comment by lukas_gloor on Lukas_Gloor's Shortform · 2020-07-27T14:40:15.316Z · score: 2 (1 votes) · EA · GW

[When thinking about what I value, should I take peer disagreement into account?]

Consider the question “What’s the best career for me?”

When we think about choosing careers, we don’t update to the career choice of the smartest person we know or the person who has thought the most about their career. Instead, we seek out people who have approached career choice with a similar overarching goal/framework (in my case, 80,000 Hours is a good fit), and we look toward the choices of people with similar personalities (in my case, I notice a stronger personality overlap with researchers than managers, operations staff, or those doing earning to give).

When it comes to thinking about one’s values, many people take peer disagreement very seriously.

I think that can be wise, but it shouldn’t be done unthinkingly. I believe that the quest to figure out one’s values shares strong similarities with the quest to figure out one’s ideal career. Before deferring to others in one’s deliberations, I recommend making sure that they are asking the same questions (not everything that comes with the label “morality” is the same) and that they are psychologically similar to you in the ways that seem fundamental to what you care about as a person.

Comment by lukas_gloor on Lukas_Gloor's Shortform · 2020-07-27T14:38:16.393Z · score: 9 (5 votes) · EA · GW

[I’m an anti-realist because I think morality is underdetermined]

I often find myself explaining why anti-realism is different from nihilism / “anything goes.” I wrote lengthy posts in my sequence on moral anti-realism (2 and 3) partly about this point. However, maybe the framing “anti-realism” is needlessly confusing because some people do associate it with nihilism / “anything goes.” Perhaps the best short explanation of my perspective goes as follows:

I’m happy to concede that some moral facts exist (in a comparatively weak sense), but I think morality is underdetermined.

This means that beyond the widespread agreement on some self-evident principles, expert opinions wouldn’t converge even if we had access to a superintelligent oracle. Multiple options will be defensible, and people will gravitate to different attractors in value space.

Comment by lukas_gloor on Lukas_Gloor's Shortform · 2020-07-27T14:35:50.632Z · score: 4 (2 votes) · EA · GW

[Moral uncertainty and moral realism are in tension]

Is it ever epistemically warranted to have high confidence in moral realism and also be morally uncertain, not just about minor details of a specific normative-ethical theory but between entire theories?

I think there's a tension there. One possible reply might be the following. Maybe we are confident in the existence of some moral facts, but multiple normative-ethical theories can accommodate them. Accordingly, we can be moral realists (because some moral facts exist) and be morally uncertain (because there are many theories to choose from that accommodate the little bits we think we know about moral reality).

However, what do we make of the possibility that moral realism could be true only in a very weak sense? For instance, maybe some moral facts exist, but most of morality is underdetermined. Similarly, maybe the true morality is some all-encompassing and complete theory, but humans might be forever epistemically closed off to it. If so, then, in practice, we could never go beyond the few moral facts we already think we know for sure.

Assuming a conception of moral realism that is action-relevant for effective altruism (e.g., because it predicts reasonable degrees of convergence among future philosophers, or makes other strong claims that EAs would be interested in), is it ever epistemically warranted to have high confidence in that, and be open-endedly morally uncertain?

Another way to ask this question: If we don't already know/see that a complete and all-encompassing theory explains many of the features related to folk discourse on morality, why would we assume that such a complete and all-encompassing theory exists in a for-us-accessible fashion? Even if there are, in some sense, "right answers" to moral questions, we need more evidence to conclude that morality is not vastly underdetermined.

Comment by lukas_gloor on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-07-25T14:47:46.019Z · score: 3 (2 votes) · EA · GW
[...] one thing that it convinced me of is that there is a close connection between your particular normative ethical theory and moral realism. If you claim to be a moral realist but don't make ethical claims beyond 'self-evident' ones like pain is bad, given the background implausibility of making such a claim about mind-independent facts, you don't have enough 'material to work with' for your theory to plausibly refer to anything.

Cool, I'm happy that this argument appeals to a moral realist!

I agree that it then shifts the arena to convergence arguments. I will discuss them in posts 6 and 7.

In short, I don't think of myself as a moral realist because I see strong reasons against convergence about moral axiology and population ethics.

This won't compel the anti-realist, but I think it would compel someone weighing up the two alternative theories of how justification works. If you are uncertain about whether there are mind-independent facts about our beliefs being justified, the argument that anti-realism is self-defeating pulls you in the direction of realism.

I don't think this argument ("anti-realism is self-defeating") works well in this context. If anti-realism is just the claim "the rocks or free-floating mountain slopes that we're seeing don't connect to form a full mountain," I don't see what's self-defeating about that.

One can try to say that a mistaken anti-realist makes a more costly mistake than a mistaken realist. However, on close inspection, I argue that this intuition turns out to be wrong. It also depends a lot on the details. Consider the following cases:

(1) A person with weak object-level normative opinions. To such a person, the moral landscape they're seeing looks like either:

(1a) free-floating rocks or parts of mountain slope, with a lot of fog and clouds.

(1b) many (more or less) full mountains, all of which are similarly appealing. The view feels disorienting.

(2) A person with strong object-level normative opinions. To such a person, the moral landscape they're seeing looks like either:

(2a) a full mountain with nothing else of note even remotely in the vicinity.

(2b) many (more or less) full mountains, but one of which is definitely theirs. All the other mountains have something wrong/unwanted about them.

2a is confident moral realism. 2b is confident moral anti-realism. 1a is genuine uncertainty, which is compatible with moral realism in theory, but there's no particular reason to assume that the floating rocks would connect. 1b is having underdefined values.

Of course, how things appear to someone may not reflect how they really are. We can construct various types of mistakes that people in the above examples might be making.

This requires a longer discussion, but I feel strongly that someone whose view is closest to 2b has a lot to lose by trying to change their psychology into something that lets them see things as 1a or 1b instead. They do have something to gain if 1a or 1b are actually epistemically warranted, but they also have stuff to lose. And the losses and gains here are commensurate – I tried to explain this in endnote 2 of my fourth post. (But it's a hastily written endnote and I would have ideally written a separate post about just this issue. I plan to touch on it again in a future post on how anti-realism changes things for EAs.)

Lastly, it's worth noting that sometimes people's metaethics interact with their normative ethics. A person might not adopt a mindset of thinking about or actually taking stances on normative questions because they're in the habit of deferring to others or waiting until morality is solved. But if morality is a bit like career choice, then there are things to lose from staying indefinitely uncertain about one's ideal career, or just going along with others.

To summarize: There's no infinitely strong wager for moral realism. There is an argument for valuing moral reflection (in the analogy: gaining more clarity on the picture you're seeing, and making sure you're right about what you think you're seeing). However, the argument for valuing moral reflection is not overridingly strong. It is to be traded off against the strength of one's object-level normative opinions. And without object-level normative opinions, one's values might be underdetermined.

Comment by lukas_gloor on Do research organisations make theory of change diagrams? Should they? · 2020-07-23T07:15:36.078Z · score: 6 (3 votes) · EA · GW
I'd guess that both of those conditions will be harder to maintain as an organisation scales up. Would you guess that ToC diagrams tend to become more useful as organisations scale up?

I think so. I'm somewhat nervous about this because if the culture changes drastically, maybe that's generally bad and ToC documents just mitigate some of the badness but can't quite get you back the culture of a smaller organization. Whether scaling up a lot even makes sense might depend on the organization's mission, or on the ability of the executive director (and hiring committee) to scale in a way that preserves the right culture.

Also, when you say "prioritization abilities", do you just mean ability to prioritise between research questions?

Also the other things you list.

I ask largely because one reason I suspect ToC diagrams may be helpful is to guide decisions about things like which forms of output to produce, who to share research findings with, and whether and how to disseminate particular findings broadly. It seems plausible to me that a researcher who's excellent at prioritizing among research questions might not be good at thinking about those matters, and a ToC diagram (or the process of making one) might speed up or clarify their thoughts on those matters.

That seems reasonable. My experience is that people often know the right answers in theory, but need a lot of nudging to choose mediums or venues different from the ones they find personally the most rewarding. I think there are also just large constraints by individual psychology that make things less flexible than one might think. So, to preserve intrinsic motivation for research, it's maybe not a good idea to push researchers too much. Still, I think it's crucial to have a culture where researchers think actively about which medium to pick, why they're doing it, and how the output will be shared. As long as this is being diligently considered and discussed, I think it's reasonable to defer to the judgment of individual researchers.

Comment by lukas_gloor on Do research organisations make theory of change diagrams? Should they? · 2020-07-23T07:04:28.517Z · score: 4 (2 votes) · EA · GW

I did write something that builds on it, yeah. It was about defining various proxies to optimize for (e.g., money, societal influence, connections to other EA organizations, followers of the organization's newsletter (with near-term EA as their main interest), value-aligned people with computer science expertise, etc.) and how well they do in futures where we decide different interventions are most important. I didn't want to make it public because it felt unpolished, and I was worried that some of the proxies could give outsiders the impression of instrumentalizing people.

Someone even helped me with Excel to produce a heat map of the results, weighted by the probability we assigned to various interventions mattering the most. At the time, this helped me clarify objections I had to EAF's 2015/2016 strategic direction (we interacted little with other EA orgs and tried to build up capacity within animal advocacy, while always promoting cause neutrality with the intent of maybe pivoting to other causes later). It didn't lead to many important changes right away, but we made major changes in 2017 that strongly reflected the takeaways I had sketched in those documents.

Comment by lukas_gloor on Do research organisations make theory of change diagrams? Should they? · 2020-07-22T09:42:03.114Z · score: 15 (7 votes) · EA · GW

Context: I'm drawing from experience with a small research organization in a young field where it used to be very hard to do good research without thoroughly understanding the causal paths to impact.

Strongly stated, weakly held, and definitely tainted by personal idiosyncrasies:

I often found myself suspicious of (too many) internal strategy documents because I think that in a well-functioning organization of the kind I described, the people who make prioritization decisions (researchers pursuing their interests autonomously, or the executive director/managers who define tasks and targets at the organization level) should be hired, among other things, for their prioritization abilities.

My sense is that being good at prioritization is more about the mindset than about following some plan, and it involves thinking through the paths to impact for every decision, every day. So when I'm asked to help write up a theory of change, my intuitive reaction is "Who is this for? This feels like tediously writing down things that are already second nature to many people, and so much goes into it that it's hard not to come away feeling like the resulting document is too simplistic to be of any use."

So, I'm overall skeptical about the use of ToC documents for improving a small organization's focus, especially if the organization operates in a field/paradigm where staff have already been selected for their ability to prioritize well.

To be clear, I'm not comparing this against not thinking about strategy at all. Instead, I favor leaner versions of strategy discussions. For instance, one person writes up their thoughts on what could be improved (this might sometimes look like an abbreviated version of a ToC document), then core staff use it as a basis for group discussions and try to identify the non-obvious questions that seem the most crucial to the organization's strategic direction. Then one discusses these questions from various angles, switches to a solution-oriented mode, and defines action points. The result of those discussions should be written down, but there's no need to start at "our mission is to reduce future suffering."

Of course, there might be other reasons why internal ToC documents could be useful. For instance, not everyone's work involves making big-picture prioritization decisions, and it's helpful and motivating for all staff to have a good sense of what the organization concretely aims to accomplish. Still, if the reason for writing a ToC document is updating staff instead of actually improving overall prioritization and focus, then that calls for different ways of writing the document. And perhaps doing a (recorded) strategy Q&A with researchers and the executive director might be more efficient than a drily written document with rectangles and arrows.

Another instance where ToC documents might be (more) useful is for establishing consensus about an organization's aims. If it feels like the organization lacks a coherent framework for how to think about their mission, maybe the process of writing a ToC document could be helpful in getting staff to think along similar lines.

Comment by lukas_gloor on Cause prioritization for downside-focused value systems · 2020-07-21T18:58:18.857Z · score: 7 (2 votes) · EA · GW

Sorry for the delayed answer; I had this open but forgot.

I like this map! Do you know of anything that attempts to assign probabilities (even very vague/ballpark) to these different outcomes?

Not in any principled way, no. I think the action thresholds ("How large/small would the probability have to be in order to make a for-me-actionable difference?") are quite low if you're particularly suffering-focused, and quite high if you have a symmetrical/upside-focused view. (This distinction is crude, and nowadays I'd caveat that some plausible moral views might not fit on the spectrum.) So in practice, I'd imagine that cruxes are rarely about the probabilities of these scenarios. Still, I think it could be interesting to think about their plausibility and likelihood in a systematic fashion.

Given my lack of knowledge about the different risk factors, I mostly just treat each of the different possible outcomes on your map and the hypothetical "map that also tracked outcomes with astronomical amounts of happiness" as being roughly equal in probability.

At the extremes (very good outcomes vs. very bad ones), the good outcomes seem a lot more likely, because future civilization would want to intentionally bring them about. For the very bad outcomes, things not only have to go wrong, they have to go wrong in very specific ways.

For the less extreme cases (moderately good vs. moderately bad), I think most options are defensible and treating them as similarly likely certainly seems reasonable.

Comment by lukas_gloor on Moral Anti-Realism Sequence #1: What Is Moral Realism? · 2020-07-03T12:56:05.567Z · score: 3 (2 votes) · EA · GW

Thanks!

At the time I wrote this post, the formatting either didn't yet allow hyperlinked endnotes, or (more likely) I didn't know how to do the markdown. I plan to update the endnotes here so they become more easily readable.

Update 7/7/2020: I updated the endnotes.

Comment by lukas_gloor on Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue) · 2020-06-18T13:33:17.558Z · score: 2 (1 votes) · EA · GW

Yeah, I made the AI really confident for purposes of sharpening the implications of the dialogue. I want to be clear that I don't think the AI's arguments are obviously true.

(Maybe I should flag this more clearly in the dialogue itself, or at least the introduction. But I think this is at least implicitly explained in the current wording.)

Comment by lukas_gloor on Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue) · 2020-06-18T13:31:17.071Z · score: 4 (2 votes) · EA · GW
I think sometimes my metaethical fanaticism looks like that. And I imagine for some people that's how it typically looks. But I think for me it's more often "wanting to be careful in case moral realism is true", rather than "hoping that moral realism is true". You could even say it's something like "concerned that moral realism might be true".

Interesting! Yeah, that framing also makes sense to me.

Comment by lukas_gloor on Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue) · 2020-06-18T13:30:37.135Z · score: 4 (2 votes) · EA · GW

Thanks for those thoughts, and for the engagement in general! I just want to flag that I agree that weaker versions of the wager aren't covered with my objections (I also say this in endnote 5 of my previous post). Weaker wagers are also similar to the way valuing reflection works for anti-realists (esp. if they're directed toward naturalist or naturalism-like versions of moral realism).

I think it's important to note that anti-realism is totally compatible with this part you write here:

Humanity should try to "keep our options open" for a while (by avoiding existential risks), while also improving our ability to understand, reflect, etc. so that we get into a better position to work out what options we should take.

I know that you wrote this part because you'd primarily want to use the moral reflection to figure out whether realism is true or not. But even if one were confident that moral realism is false, there remain some strong arguments in favor of reflection. (It's just that those arguments feel like less of a forced move, and there are interesting counter-considerations to also think about.)

(Also, whether one is a moral realist or not, it's important to note that working toward a position of option value for philosophical reflection isn't the only important thing to do according to all potentially plausible moral views. For some moral views, the most important time to create value arguably happens before long reflection.)

Comment by lukas_gloor on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-06-16T13:22:05.706Z · score: 4 (2 votes) · EA · GW

It seems odd to me to suggest we have any examples of maximally nuanced and versatile reasoners. It seems like all humans are quite flawed thinkers.

Sorry, bad phrasing on my part! I didn't mean to suggest that there are perfect human reasoners. :)

The context of my remark was this argument by Richard Yetter-Chappell. He thinks that as humans, we can use our inside view to disqualify hypothetical reasoners who don't even change their minds in the light of new evidence, or don't use induction. We can disqualify them from the class of agents who might be correctly predisposed to apprehend normative truths. We can do this because compared to those crappy alien ways of reasoning, ours feels undoubtedly "more nuanced and versatile."

And so I'm replying to Yetter-Chappell that as far as inside-view criteria for disqualifying people from the class of promising candidates for the correct psychology go, we probably can't find differences among humans that would rule out everyone except a select few reasoners who will all agree on the right morality. Insofar as we try to construct a non-gerrymandered reference class of "humans who reason in really great ways," that reference class will still contain unbridgeable disagreement.

One example of why: I don't think we yet have a compelling demonstration that, given something like coherent extrapolated volition, humans wouldn't converge on the same set of values. So I think we need to rely on arguments, speculations, etc. for matters like that, rather than the answer already being very clear.

I haven't yet made any arguments about this (because this is the topic of future posts in the sequence), but my argument will be that we don't necessarily need a compelling demonstration, because we know enough about why people disagree to tell that they aren't always answering the same question and/or paying attention to the same evaluation criteria.

Comment by lukas_gloor on Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails · 2020-06-16T13:12:26.309Z · score: 4 (2 votes) · EA · GW

Yes, that's the same intuition. :)

In that case, I'll continue clinging to my strange wager as I await your next post :)

Haha. The intuition probably won't get any weaker, but my next post will spell out the costs of endorsing this intuition as your value, as opposed to treating it as a misguided intuition. Perhaps by reflecting on the costs and the practical inconveniences of treating this intuition as one's terminal value, we might come to rethink it.

Comment by lukas_gloor on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-06-16T13:07:07.507Z · score: 4 (2 votes) · EA · GW

Good question!

By "open-ended moral uncertainty" I mean being uncertain about one's values without having in mind well-defined criteria (either implicit or explicit) for what constitutes a correct solution.

Footnote 26 leaves me with the impression that perhaps you mean something like "uncertainty about what our fundamental goals should be, rather than uncertainty that's just about what should follow from our fundamental goals". But I'm not sure I'd call the latter type of uncertainty normative/moral uncertainty at all - it seems more like logical or empirical uncertainty.

Yes, this captures it well. I'd say most of the usage of "moral uncertainty" in EA circles is at least in part open-ended, so this is in agreement with your intuition that maybe what I'm describing isn't "normative uncertainty" at all. I think many effective altruists use "moral uncertainty" in a way that either fails to refer to anything meaningful, or it implies under-determined moral values. (I think this can often be okay. Our views on lots of things are under-determined and there isn't necessarily anything wrong with that. But sometimes it can be bad to think that something is well-determined when it's not.)

Now, I didn't necessarily mean to suggest that the only defensible way to think that morality has enough "structure" to deserve the label "moral realism" is to advance an object-level normative theory that specifies every single possible detail. If someone subscribes to hedonistic total utilitarianism but leaves it under-defined to what degree bees can feel pleasure, maybe that still qualifies as moral realism. But if someone is so morally uncertain that they don't know whether they favor preference utilitarianism or hedonistic utilitarianism, or whether they might favor some kind of prioritarianism after all, or even something entirely different such as Kantianism, moral particularism, etc., then I would ask them: "Why do you think the question you're asking yourself is well-defined? What are you uncertain about? Why do you expect there to be a speaker-independent solution to this question?"

To be clear, I'm not making an argument that one cannot be in a state of uncertainty between, for instance, preference utilitarianism versus hedonistic utilitarianism. I'm just saying that, as far as I can tell, the way to make this work satisfactorily would be based on anti-realist assumptions. The question we're asking, in this case, isn't "What's the true moral theory?" but "Which moral theory would I come to endorse if I thought about this question more?"

Comment by lukas_gloor on Timeline of the wild-animal suffering movement · 2020-06-16T10:26:50.811Z · score: 2 (1 votes) · EA · GW

Dawkins wrote about it and said "it must be so." Maybe the timeline is about people who explicitly challenged that perception.

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-16T08:41:50.111Z · score: 4 (2 votes) · EA · GW

Does (2) sound like a roughly accurate depiction of your views?

Yes, but with an important caveat. The way you described the three views doesn't make it clear that 2. and 3. have the same practical implications as 1., whereas I intended to describe them in a way that leaves no possible doubt about that.

Here's how I would change your descriptions to make them compatible with my views:

  • A position in which there may not even be a single correct moral theory ((no change))

  • A position in which no strong claims can ever be made about what the single correct moral theory would be.

  • A position in which the only moral questions that have a correct (and/or knowable) answer are questions on which virtually everyone already agrees.

As you can see, my 2. and 3. are quite different from what you wrote.

Comment by lukas_gloor on Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails · 2020-06-16T07:39:37.961Z · score: 4 (2 votes) · EA · GW

I meant it the way you describe, but I didn't convey it well. Maybe a good way to explain it as follows:

My initial objection to the wager is that the anti-realist way of assigning what matters is altogether very different from the realist way, and this makes the moral realism wager question-begging. This is evidenced by issues like "infectiousness." I maybe shouldn't even have called that a counter-argument; I'd just think of it as supporting evidence for the view that the two perspectives are altogether too different for there to be a straightforward wager.

However, one way to still get something that behaves like a wager is if one perspective "voluntarily" favors acting as though the other perspective is true. Anti-realism is about acting on the moral intuitions that most deeply resonate with you. If your caring capacity under anti-realism says "I want to act as though irreducible normativity applies," and the perspective from irreducible normativity says "you ought to act as though irreducible normativity applies," then the wager goes through in practice.

(In my text, I wrote "Admittedly, it seems possible to believe that one’s actions are meaningless without irreducible normativity." This is confusing because it sounds like it's a philosophical belief rather than a statement of value. Edit: I now edited the text to reflect that I was thinking of "believing that one's actions are meaningless without irreducible normativity" as a value statement.)

Comment by lukas_gloor on Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails · 2020-06-16T07:25:12.044Z · score: 5 (3 votes) · EA · GW

Thanks! Yeah, I'm curious about the same questions regarding the strong downvotes. Since I wrote "it works well as a standalone piece," I guess I couldn't really complain if people felt that the post was unconvincing on its own. I think the point I'm making in the "Begging the question" subsection only works if one doesn't think of anti-realism as nihilism/anything goes. I only argued for that in previous posts.

(If the downvotes were because readers are tired of the topic or thought that the discussion of Huemer's argument was really dry, the good news is that I have only 1 post left for the time being, and it's going to be a dialogue, so perhaps more engaging than this one.)

Comment by lukas_gloor on Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails · 2020-06-16T07:17:15.107Z · score: 2 (1 votes) · EA · GW

You're describing what I tried to address in my last paragraph, the stance I called "metaethical fanaticism." I think you're right that this type of wager works. Importantly, it depends on having the strongly felt intuition you describe and giving it (near-)total weight in what you care about.

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-14T11:50:54.061Z · score: 2 (1 votes) · EA · GW

I feel like you're too focused on this notion of whether something "exists" or not. One of the main points I was trying to convey in the article is that I don't consider this to be an ideal way of framing the disagreement. See for instance these quotes:

Going by connotations alone, we might at first think that realism means that a domain in question is real, whereas anti-realism implies that it’s something other than real (e.g., that it’s merely imagined). Although accurate in a very loose sense, this interpretation is misleading.

[...]

Typically, when someone stops believing in God, they also stop talking as though God exists. As far as private purposes are concerned, atheists don’t generally refine their concept of God; they abandon it.[3]
Going from realism to anti-realism works differently.

[...]

Rejecting realism for a domain neither entails erasing the substance of that domain, nor (necessarily) its relevance. Anti-realists will generally agree that the domain has some relevance, some “structure.”

__

Now quoting something from your comment:

Lack of sharp boundaries is in my mind no strong argument for denying the existence of a claimed aspect of reality. Okay, this also feels uncharitable, but it felt like Alice was arguing that the moon doesn't exist because there are edge cases, like the big rock that orbits Pluto.

Hm, I think it goes beyond just saying that a concept has fuzzy boundaries. Some people might say that "markets" don't exist because it's a fuzzy, abstract concept and people may not agree in practice what aspects of physical reality are part of a market. This would be a pedantic way of objecting to the claim "markets are real." That's not what I think anti-realism is about. :)

With the example of consciousness, my point would go something like this: "There might be a totally sensible interpretation of consciousness according to which bees are conscious, and a totally sensible interpretation according to which they aren't. Bees aren't 'edge cases' like the rocks that surround Pluto. They either fall squarely within a given concept of consciousness, or completely outside of it. Based on what we can tell from introspection and from our folk concept of consciousness, it's under-determined what we're supposed to do with bees."

If put this way, perhaps you'd agree that this is in conflict with the realist intuition that consciousness is this crisp thing that systems either have or lack.

Then Alice would maybe say

Haha. Or if you wanted to make the joke about Dennett's eliminativism, you could describe Alice's reply like this:

"Look, here's an optical illusion. And here's another one. Therefore, consciousness doesn't exist."

But I think that's uncharitable to Dennett. If you read Consciousness Explained in search of arguments for why consciousness doesn't exist, you'll be disappointed. However, if you read it in search of arguments for why there's no clearcut way to extrapolate from obvious examples like "I'm conscious right now" to less obvious ones like "are bees conscious?", the book will be really interesting. All the illusions and discussions of fancy neuroscience (e.g., the cutaneous rabbit illusion, or the discussion of Stalinesque versus Orwellian revisions) support the point that many processes we believe we have a good introspective grasp on are actually much more under-determined than we would intuitively guess. This supports the view that consciousness is very unlike what we think it is. Some people therefore say things like "consciousness ((as we think of it)) doesn't exist." I think that's misleading and will confuse everyone. I think it would be easier to understand anti-realists if they explained their views by saying that things are different from how they appear, and more ambiguous in quite fundamental ways, etc.

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-14T11:25:24.911Z · score: 4 (2 votes) · EA · GW
This passage sounds to me like it's implying that the anti-realist position is: "Some moral claims may be objectively true, but many are neither objectively true nor objectively false." In this case, it sounds like the anti-realist is saying that there is a speaker-independent fact of the matter about whether everyone getting tortured is morally worse than a world full of flourishing, and just denying that that means there will always be independent facts of the matter about moral claims.

I should have chosen a more nuanced framing in my comment. Instead of saying, "Sure, we can agree about that," the anti-realist should have said "Sure, that seems like a reasonable way to use words. I'm happy to go along with using moral terms like 'worse' or 'better' in ways where this is universally considered self-evident. But it seems to me that you think you are also saying that for every moral question, there's a single correct answer [...]"

So the anti-realist isn't necessarily conceding that "surely a world where everyone gets tortured is worse than a world where everyone flourishes" is a successful argument in favor of moral realism. At least, it's not yet an argument for ambitious versions of moral realism (ones "worthy of the name" according to my semantic intuitions).

I think I'd want to classify such a view as moral realist in an important sense, as it seems to involve realism about at least some moral claims.

It's possible that you just have different semantic intuitions from me. It might be helpful to take a step back and ignore whether or not to classify a view as "moral realism," and think about what it means for notions like moral uncertainty, the value of information for doing more work in philosophy, the prospect of convergence among people's normative-ethical views if they did more reflecting, etc. Because the view we are discussing here has relatively weak implications for all these things, I personally didn't feel like calling it "moral realism."

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-14T11:17:09.690Z · score: 4 (2 votes) · EA · GW
Does the following statement of the slogan seem to you to capture the anti-realist position: "Reality doesn't come with objectively correct labels. Humans create labels and draw categories, and how they do this will be determined by physical reality, but there's no separate criteria determining how humans should do this; there's nothing more/other than how they will do this."

Yeah, that sounds right! It carries more information than my crude proposal.

As you suggest, moral naturalists might agree that reality (obviously) doesn't carry labels. They might argue that in a way, it kind of screams out at you where you can put the labels. And the anti-realist position is that there's more ambiguity than "it just screams out at you."

While the distinction between anti-realism and non-naturalism seems relatively clearcut, I think the distinction between anti-realism and naturalism is a bit loose. This is also reflected in Luke Muehlhauser's Pluralistic Moral Reductionism post. Luke left it open whether to count PMR as realism or anti-realism. By contrast, my terminological choice has been to count it as anti-realism.

Comment by lukas_gloor on Moral Anti-Realism Sequence #1: What Is Moral Realism? · 2020-06-14T11:10:22.813Z · score: 2 (1 votes) · EA · GW
I found this argument confusing. Wouldn't it be acceptable, and probably what we'd expect, for a metaethical view to not also provide answers on normative ethics or axiology?

I'm not saying metaethical views have to advance a particular normative-ethical theory. I'm just saying that if a realist metaethical view doesn't do this, it becomes difficult to explain how proponents of this view could possibly know that there really is "a single correct theory."

So for instance, looking at the arguments by Peter Railton, it's not clear to me whether Railton even expects there to be a single correct moral theory. His arguments leave morality under-defined. "Moral realism" is commonly associated with the view that there's a single correct moral theory. Railton has done little to establish this, so I think it's questionable whether to call this view "moral realism."

Of course, "moral realism" is just a label. It matters much more that we have clarity about what we're discussing, instead of which label we pick. If someone wants to use the term "moral realism" for moral views that are explicitly under-defined (i.e., views according to which many moral questions don't have an answer), that's fine. In that sense, I would be a "realist."

It seems that finding out there are "speaker-independent moral facts, rules or values" would be quite important, even if we don't yet know what those facts are.

One would think so, but as I said, it depends on what we mean exactly by "speaker-independent moral facts." On some interpretations, those facts may be forever unknowable. In that case, knowledge that those facts exist would be pointless in practice.

I write more about this in my 3rd post, so maybe the points will make more sense with the context there. But really the main point of this 1st post is that I make a proposal in favor of being cautious about the label "moral realism" because, in my view, some versions of it don't seem to have action-guiding implications for how to go about effective altruism.

(I mean, if I had started out convinced of moral relativism, then sure, "moral realism" in Peter Railton's sense would change my views in very action-guiding ways. But moral relativists are rare. I feel like one should draw the realism vs. anti-realism distinction in a place where it isn't obvious that one side is completely wrong. If we draw the distinction in such a way that Peter Railton's view qualifies as "moral realism," then it would be rather trivial that anti-realism was wrong. This would seem uncharitable to all the anti-realist philosophers who have done important work on normative ethics.)

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-12T14:44:34.059Z · score: 3 (2 votes) · EA · GW
2) saying that your subjective experience is real (that is, it exists in some form and is not just a delusion)

What does it entail when you say that your subjective experience "is real"? It's important to note that the anti-realist doesn't try to take away the way something feels to you. Instead, the anti-realist disagrees with the further associations you might have for "consciousness is real." If consciousness is real, it seems like there'd be a fact of the matter about whether bees are conscious, i.e., an unambiguous way to answer the question "Are bees conscious?" without the need to further explain what exactly the question is going for. As I tried to explain in endnote 18, that's a very different claim from "it feels like something to be me, right now" (or "it feels like something to be in pain" – to use the example in your comment above).

For consciousness, this sentiment is really hard to explain. I think endnote 18 is the best explanation I've managed to give thus far. I'd say the sentiment behind anti-realism is much easier to understand with other bedrock concepts (Tier 2, 3, or 4).

For instance, you can go through a similar dialogue structure for morality. The moral realist says "But surely moral facts exist, because it seems that, all else equal, a world where everyone gets tortured is worse than a world full of flourishing." In reply, the moral anti-realist might say something like "Sure, we can agree about that. But it seems to me that you think you are also saying that for every moral question, there's a single correct answer. That, for instance, whether or not people have obligations to avoid purchasing factory farmed meat has an unambiguous answer. I don't see how you think you can establish this merely by pointing at self-evident examples such as 'surely a world where everyone gets tortured is worse than a world full of flourishing.' It seems to me that you have not yet argued that what's moral versus what's not moral always has a solution."

Analogously, the same dialogue works for aesthetic realism. The Mona Lisa might be (mostly) uncontroversially beautiful, but it would be weird to infer from this that "Are Mark Rothko's paintings beautiful?" is a well-specified question with a single true answer.

Comment by lukas_gloor on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-06-12T14:15:16.969Z · score: 5 (3 votes) · EA · GW

Thanks for this comment, this type of empirical metaethics research is quite new to me and it sounds really fascinating!

(1) Moral cognition may not have evolved
With respect to the claim that morality evolved, Mallon & Machery (2010) provide at least three interpretations of what this could mean:
(a) Some components of moral psychology evolved
(b) normative cognition evolved
(c) moral cognition, “understood as a special sort of cognition” (p. 4), evolved.
They provide what strikes me as a fairly persuasive case that (a) is uncontroversially true, (b) is probably true, but (c) isn’t well-supported by available data.
Only (c) would easily support EDAs, while (b) may not and whether (a) could support EDAs would presumably depend on the details.
In subsequent papers, Machery (2018) and Stich (2018) have developed on this and related criticisms, arguing that morality is a culturally-contingent phenomenon and that there is no principled distinction between moral and nonmoral norms, respectively (see also Sinnott-Armstrong & Wheatley, 2012).

You say that only (c) would easily support EDAs. Is this because of worries that EDAs would be too strong if they also applied against normative cognition in general? If yes, I think this point might be (indirectly) covered by my thoughts in footnote 5. I would argue that EDAs go through for all domains of irreducible normativity, not just ethics. But as I said, I haven't given this much thought, so I might be missing why (c) is needed for EDAs against moral cognition to go through. I have bookmarked the paper you cited and will investigate why the authors think this. (Edit: Not sure I'll be able to easily access the text, though.)

Comment by lukas_gloor on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-06-11T10:22:07.629Z · score: 10 (4 votes) · EA · GW

That makes sense! I'll try to change the titles tomorrow (I hope I won't make a mess out of it:)).

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-10T15:53:56.891Z · score: 3 (2 votes) · EA · GW

Not really. In the duck-rabbit illusion, the image itself is clear. (I mean OK, the figure is coarse-grained as far as digital images can go, but that's not the main reason why the image allows for different interpretations. You could also imagine a duck-rabbit illusion with better graphics.) The argument isn't about the indirectness of perception.

Maybe a good slogan for the anti-realists would be "reality doesn't come with labels." There's a fact of the matter about how atoms (or 1s and 0s) are allocated, but how we draw categories comes down to subjective judgment calls.

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-10T07:31:13.294Z · score: 8 (2 votes) · EA · GW

it is not arbitrary to consider suffering bad. Indeed, interpretation (broadly defined) is arguably intrinsic to, and in some sense constitutive of, suffering itself (cf. Aydede, 2014).

The way I think about it, when I'm suffering, this is my brain subjectively "disvaluing" (in the sense of wanting to end or change it) the state it's currently in. This is not the same as saying that there exists a state of the world that is objectively to be disvalued. (Of course, for people who are looking for meaningful life goals, disvaluing all suffering is a natural next step, which we both have taken.:))

In the context of bedrock concepts, it's not clear to me why such concepts should be considered problematic. After all, what is the alternative? An infinite regress of concepts? A circular loop? Having bedrock concepts seems to me the least problematic — indeed positively plausible — option.

I talk about notions like 'life goals' (which sort of consequentialist am I?), 'integrity' (what type of person do I want to be?), 'cooperation/respect' (how do I think of the relation between my life goals and other people's life goals?), 'reflective equilibrium' (part of philosophical methodology), 'valuing reflection' (the anti-realist notion of normative uncertainty), etc. I find that this works perfectly well and it doesn't feel to me like I'm missing parts of the picture.

If you're asking for how I justify particular answers to the above, I'd just say that I'm basing those answers on what feels the most right to me. On my fundamental intuitions. I consider them axiomatic and that's where the buck stops.

I don't see how that follows. Accepting bedrock concepts need not imply that the most plausible conception of philosophical progress will be bedrock.

This makes sense if your only bedrock concepts are Tier 1 or lower. If you allow Tier 2 (normative bedrock concepts), I'd point out that there are arguments why all of normativity is related, in which case it would be a bit weird to say that metaphilosophy has no speaker-independent solution, but e.g., ethics or epistemology do have such solutions. (I take it that your moral realism is primarily based on consciousness realism, so I would classify it as Tier 1 rather than Tier 2. Of course, this typology is very crude and one can reasonably object to the specifics.)

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-10T07:13:06.198Z · score: 3 (2 votes) · EA · GW

Yeah. I'm trying to argue for the view that anti-realism isn't about denying the existence of the domain at hand. It's more "the domain exists and is ambiguous" than "the domain doesn't exist."

Admittedly, some anti-realists may use eliminativist rhetoric, saying things like "consciousness doesn't exist." But even there, I would guess that the explanation of their position goes something like "I understand why people talk as though consciousness is objective, but I think it works differently from how people think it works. There is something going on, but it's not what people commonly call 'consciousness.'"

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-10T07:05:41.019Z · score: 2 (1 votes) · EA · GW

I can see why this is surprising. I just noticed that the crux for me is more about the notion of speaker-independent reasons altogether. If self-oriented reasons existed, it would feel a bit pedantic to say, "I'm still not a moral realist because there's no single correct way to 'take an impartial stance.'" (It might be true that there's no single way to, e.g., solve population ethics, but for most purposes I think utilitarianism is so elegant as a solution, and alternatives like prioritarianism seem so stilted, that I wouldn't mind calling this "moral realism.")

That said, I think it's important to highlight that I wouldn't be convinced by a theory of self-oriented reasons for action that said that the only such reasons are things like "all else equal, don't subject yourself to torture." If the self-oriented reasons for action leave it largely underdetermined what personal flourishing would look like, then I don't count it as moral realism. As I argue in my third post, if the only speaker-independent reasons for action are universal, uncontroversial principles like "all else equal, don't subject yourself to torture," that notion of realism won't differ from (what I think of as) anti-realism in any action-relevant ways.

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-06T12:18:39.503Z · score: 2 (2 votes) · EA · GW

But if realists use 'morality' to always implicitly mean 'objective morality', then I don't know when they're relying on the 'objective' bit in their arguments. That seems bad.

Okay, you’re definitely right that it would be weird to always (also) interpret realists as making an error. How about this:

  • When a realist makes a moral claim where it matters that they believe in realism, we interpret their claim in the sense of error theory.
  • When the same realist makes a moral claim that's easily separable from their belief in realism, we interpret their claim according to what you call the “lowest common denominator.”
  • Sometimes we may not be able to tell whether a person’s belief in realism influences their first-order moral claims. Those cases could benefit from clarifying questions.

So, instead of switching back and forth between two interpretations, we only hold one interpretation or the other (if someone clearly commits themselves to either objectivism or non-objectivism), or we treat the moral claim with an under-determined interpretation that's compatible with both realism and anti-realism (the lowest common denominator).

I agree that this^ is much better (and more charitable) than interpreting realists as always making an error! (When I thought of realists making moral claims, I primarily envisioned a discussion about metaethics.)

The alternative is to agree on a "lowest common denominator" definition of morality, and expect people who are relying on its objectiveness or subjectivity to explicitly flag that.

That makes sense. I called the pragmatic re-interpretation “non-objectivism” in my post, but terminology-wise, that's a bit unfair because it already presupposes anti-realism. “Not-necessarily-objectivism” would be a term that’s more neutral. This seems appropriate for everyday moral discourse.

(The reason I think anti-realism is compelling is that the lowest common denominator already feels like it's enough to get all the object-level reasoning off the ground.)

I could equivalently describe the above position as: "when your conception of something looks like Network 2, but not everyone agrees, then your definitions should look like Network 1."

I'd say it depends on what "mode" you're in. Your point certainly applies when it comes to descriptively interpreting what people mean. But there's also the meliorist mode of trying to nudge people towards more useful concepts. A lot of folk concepts get stretched beyond their limits extremely quickly when the discussions switch from everyday contexts to more philosophical ones. Trying to do philosophy without improving our concepts seems like trying to build skyscrapers with only axes and knives. (Of course, you probably agree with this.)

There’s also a question about whether you, as an anti-realist, consider realism to be clearly mistaken, or whether you think it might be the case that realists just think within a very different conceptual repertoire. If it's only the latter, it would be uncharitable to ever consider them as "making an error." My view is that there are certainly cases where I'm not sure (or where I'd say I agree with some versions of realism but don't find them quite 'worthy of the name'), but in many instances I'd say realists are committing a real error, one that they would recognize by their own standards. That’s why I wrote this sequence: I'm hoping to eventually change the minds of at least a bunch of realists. :)

Comment by lukas_gloor on Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree · 2020-06-06T07:13:47.500Z · score: 6 (4 votes) · EA · GW

I think we probably have very similar views, but I am less of a fan of error theory. What might it look like to endorse error theory as an anti-realist? Well, as an anti-realist I think that my claims about morality are perfectly reasonable and often true, since I intend them to be speaker-dependent. It's just the moral realists whose claims are in error.

This is how I think of it, yeah. Non-objectivism for the anti-realist and error theory for the realist.

So that leads to the bizarre situation where I can have a conversation about object-level morality with a moral realist, and we might even change each other's minds, but throughout the whole conversation I'm evaluating every statement he says as trivially incorrect. This seems untenable.

I see. You could switch back and forth between two ways of interpreting the realist's moral claims. On the one hand, they are making some kind of error. But as you say, you can still have fruitful discussions. I’d characterize the second interpretation as "pragmatically re-interpreting realist claims." I think this matches what you propose in your blogpost (as a way to interpret all moral claims)! :)

Again, I expect we mostly agree here, but the phrase "facts about a non-objective (speaker-dependent) reality" feels potentially confusing to me. Would you consider it equivalent to say that anti-realists can think about moral facts as facts about the implications of certain evaluation criteria? From this perspective, when we make moral claims, we're implicitly endorsing a set of evaluation criteria (making this position somewhere in the middle of cognitivism and non-cognitivism).

Yes, exactly. That sounds clearer.

Comment by lukas_gloor on Are there historical examples of excess panic during pandemics killing a lot of people? · 2020-05-29T13:43:29.205Z · score: 4 (3 votes) · EA · GW

I only came to this thread by accident and saw that I'm apparently the culprit (it showed a weak downvote). I don't even remember reading this comment or the thread, and I rarely downvote people anyway. Maybe I misclicked while I scrolled through random comments yesterday. I hope that doesn't happen too often. :)

Comment by lukas_gloor on Modelers and Indexers · 2020-05-13T21:42:33.006Z · score: 3 (2 votes) · EA · GW
These counterexamples were what inspired my term “archetype” – they weren’t specific situations that could be dismissed as isolated exceptions to an otherwise sound rule but situations where it was clear from the structure of the situation that they contradicted my model.

Noticing when this is or isn't the case takes some skill too. These sorts of counterexamples come naturally to me as well, but sometimes I'm the only person who finds them convincing because people find it easy to dismiss them as "isolated" even when IMO they shouldn't. :)

Comment by lukas_gloor on jacobpfau's Shortform · 2020-05-10T22:40:27.289Z · score: 2 (2 votes) · EA · GW

You might be familiar with https://ai.metaculus.com/questions/. It went dormant unfortunately.

Comment by lukas_gloor on Max_Daniel's Shortform · 2020-03-25T21:30:30.044Z · score: 5 (3 votes) · EA · GW

About declaring it a "pandemic," I've seen the WHO reason as follows (my paraphrase):

«Once we call it a pandemic, some countries might throw up their hands and say "we're screwed," so we'd better wait before calling it that, and instead emphasize that countries need to try harder at containment for as long as there's still a small chance that it might work.»

So overall, while the OP's premise appealing to major legal/institutional consequences of the WHO using the term "pandemic" seems false, I'm now even more convinced of the key claim I wanted to argue for: that the WHO response does not provide an argument against epistemic modesty in general, nor for the epistemic superiority of "informed amateurs" over experts on COVID-19.

Yeah, I think that's a good point.

I'm not sure I can have updates in favor of or against modest epistemology because it seems to me that my true rejection is mostly "my brain can't do that." But if I could have further updates against modest epistemology, the main Covid-19-related example for me would be how long it took some countries to realize that flattening the curve instead of squishing it is going to lead to a lot more deaths and tragedy than people seem to have initially thought. I realize that it's hard to distinguish between actual government opinion and bad journalism, but I'm pretty confident there was a time when informed amateurs could see that experts were operating under some probably false or at least dubious assumptions. (I'm happy to elaborate if anyone's interested.)
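
To make the flatten-vs.-squish point concrete, here's a rough back-of-the-envelope sketch of the kind of reasoning I have in mind; the specific figures (R0, infection fatality rate, population size, suppression attack rate) are illustrative assumptions, not estimates I'm defending:

```latex
% Back-of-the-envelope: "flatten" (let the epidemic run to herd immunity) vs.
% "squish" (suppress). All numbers below are illustrative assumptions.
\begin{align*}
  \text{herd-immunity threshold} &\approx 1 - \tfrac{1}{R_0} = 1 - \tfrac{1}{2.5} = 60\% \\
  \text{infections (flatten)} &\approx 0.6 \times 80\,\text{million} = 48\,\text{million} \\
  \text{deaths (flatten, 0.5\% IFR)} &\approx 0.005 \times 48\,\text{million} = 240{,}000 \\
  \text{deaths (squish, 1\% ever infected)} &\approx 0.005 \times 0.8\,\text{million} = 4{,}000
\end{align*}
```

Even with generous uncertainty on each input, the two strategies differ by roughly two orders of magnitude in expected deaths.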

Comment by lukas_gloor on What posts you are planning on writing? · 2020-03-24T14:23:17.939Z · score: 6 (4 votes) · EA · GW

I started working on them in December. The virus infected my attention, but I'm back working on the posts now. I have two new ones fully finished. I will publish them once I have four new ones. (If anyone is particularly curious about the topic and would like to give feedback on drafts, feel free to get in touch!)

Comment by lukas_gloor on COVID-19 brief for friends and family · 2020-03-05T18:27:04.016Z · score: 4 (3 votes) · EA · GW

I don't remember the exact source, sorry.

FWIW I now think that warm conditions very likely do slow down transmission by a lot. Mostly because there are many cold countries where outbreaks quickly became uncontrollable, and so far this hasn't happened in any hot country.

Comment by lukas_gloor on COVID-19 brief for friends and family · 2020-03-02T11:14:36.386Z · score: 4 (3 votes) · EA · GW

I just read (to my surprise) that Thailand ranks extremely high in pandemic preparedness and early detection. This makes me downshift the warmth hypothesis a bit.

Comment by lukas_gloor on COVID-19 brief for friends and family · 2020-02-29T19:16:45.106Z · score: 3 (2 votes) · EA · GW

Singapore also ranked lower than Japan and Korea on the "most at-risk countries" lists published in late January. Thailand (first on that list) would be a better example of a warm location being hit less badly than predicted. It reported a lot of cases initially, but it indeed seems like the virus hasn't spread as much as in some other locations. Warmth could be the decisive factor, but there might also be other reasons.

Comment by lukas_gloor on Harsanyi's simple “proof” of utilitarianism · 2020-02-25T12:11:44.147Z · score: 5 (4 votes) · EA · GW
Ah, my mistake – I had heard this definition before, which seems slightly different.

Probably I was wrong here. After reading this abstract, I realize that the way Norcross wrote about it is also compatible with a weaker claim than linear aggregation of utility. I think I just assumed that he must mean linear aggregation of utility, because everything else would seem weirdly arbitrary. :)

I changed it to this – curious if you still find it jarring?

Less so! The "total" still suggests the same conclusion that I thought would be jumping the gun a bit, but if that's your takeaway it's certainly fine to leave it. Personally, I would just write "utilitarianism" instead of "total utilitarianism."

Comment by lukas_gloor on Harsanyi's simple “proof” of utilitarianism · 2020-02-22T13:53:04.760Z · score: 6 (4 votes) · EA · GW

I'm not very familiar with the terminology here, but I remember that in this paper, Alastair Norcross used the term "thoroughgoing aggregation" for what seems to be linear addition of utilities in particular. That's what I had in mind anyway, so I'm not sure I believe anything different from you.

The reason I commented above is that I don't understand the choice of "total utilitarianism" instead of just "utilitarianism." Doesn't every form of utilitarianism use linear addition of utilities in a case where population size remains fixed? But only total utilitarianism implies the repugnant conclusion. Your conclusion section IMO suggests that Harsanyi's theorem (which takes a case where population size is indeed fixed) does something to help motivate total utilitarianism over other forms of utilitarianism, such as prior-existence utilitarianism, negative utilitarianism or average utilitarianism. You already acknowledged in your reply further above that it doesn't do much of that. That's why I suggested rephrasing your conclusion section. Alternatively, you could also explain in what ways you think the utilitarian alternatives to total utilitarianism might be contrived or not in line with Harsanyi's assumptions.

And probably I'm missing something about how you think about all of this, because the rest of the article seemed really excellent and clear to me. I just find the conclusion section really jarring.

Comment by lukas_gloor on Harsanyi's simple “proof” of utilitarianism · 2020-02-20T18:48:10.710Z · score: 8 (6 votes) · EA · GW
I agree it doesn't say much, see e.g. Michael's comment.

In that case, it would IMO be better to change "total utilitarianism" to "utilitarianism" in the article. Utilitarianism is different from other forms of consequentialism in that it uses thoroughgoing aggregation. Isn't that what Harsanyi's theorem mainly shows? It doesn't really add any intuitions about population ethics. Mentioning the repugnant conclusion in this context feels premature.
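
For readers unfamiliar with the terminology, here's a minimal sketch of what I mean by "linear addition of utilities" in a fixed-population setting; the notation and weights are my own shorthand, not Harsanyi's exact formulation:

```latex
% Sketch: linear aggregation over a fixed population of n individuals (notation is mine).
\begin{align*}
  W(x) &= \sum_{i=1}^{n} a_i\, u_i(x), \quad a_i \ge 0
       && \text{(Harsanyi-style weighted sum)} \\
  W_{\text{total}}(x) &= \sum_{i=1}^{n} u_i(x)
       && \text{(equal weights, } a_i = 1\text{)} \\
  W_{\text{avg}}(x) &= \frac{1}{n} \sum_{i=1}^{n} u_i(x)
       && \text{(divides by a constant when } n \text{ is fixed)}
\end{align*}
```

Since dividing by a fixed n doesn't change any ranking, the total and average views agree on every fixed-population comparison, which is why I don't think the theorem by itself speaks to variable-population questions like the repugnant conclusion.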

Comment by lukas_gloor on A conversation with Rohin Shah · 2019-11-14T11:14:49.143Z · score: 5 (4 votes) · EA · GW
Chomsky's universal grammar: There's not enough language data for children to learn languages in the absence of inductive biases.

I think there's more recent work in computational linguistics that challenges this. Unfortunately, I can't summarize it since I only took an overview course a long time ago. I've been wondering whether I should read up on language evolution at some point. Mostly because it seems really interesting, but also because it's a field I haven't seen discussed in EA circles, and it seems potentially useful to have this background when it comes to evaluating/interpreting AI milestones and so on. In any case, if someone understands computational linguistics, language evolution, and how it relates to the nativism debate, I'd be extremely interested in a summary!

Comment by lukas_gloor on Conditional interests, asymmetries and EA priorities · 2019-10-25T14:12:40.710Z · score: 2 (2 votes) · EA · GW

Okay, I agree that going "from perfect to flawed" isn't the core of the intuition.

Moreover, I don't think most people find the RP much less unacceptable if the initial population merely enjoys very high quality of life versus perfect satisfaction.

This seems correct to me too.

I mostly wanted to point out that I'm pretty sure it's a strawman that the repugnant conclusion primarily targets anti-aggregationist intuitions. I suspect that people would also find the conclusion strange if it involved smaller numbers. When a family decides how many kids to have and they estimate that the average quality of life per person in the family (esp. with a lot of weight on the parents themselves) will be highest if they have two children, most people would find it strange to go for five children if that did best in terms of total welfare.

Comment by lukas_gloor on Conditional interests, asymmetries and EA priorities · 2019-10-25T13:19:20.799Z · score: 1 (1 votes) · EA · GW
I'd say that the reason I (as a CU) don't try to stay awake is that I can't dissociate the pleasantness of falling asleep from actually falling asleep

That makes sense. But do you think that the impulse to prolong the pleasant feeling (as opposed to just enjoying it and "lying back in the cockpit") is a component of the pleasure-feeling itself? To me, they seem distinct! I readily admit that we often want to do things to prolong pleasures or go out of our way to seek particularly rewarding pleasures. But I don't regard that as a pure feature of what pleasure feels like. Rather, it's the result of an interaction between what pleasure feels like and a bunch of other things that come in degrees and can be on or off.

Let's say I found a technique to prolong the pleasure. Assuming it does take a small bit of effort to use it, it seems that whether I'm in fact going to use it depends on features such as which options I make salient to myself, whether I might develop a fear of missing out, whether pleasure pursuit is part of my self-concept, the degree to which I might have cravings, or the degree to which I have personality traits related to constantly optimizing things about my personal life, etc.

And it's not only "whether I'm in fact going to use the technique" that depends on those additional aspects of the situation. I'd argue that even "whether I feel like wanting to use the technique" depends on those additional, contingent factors!

If the additional factors are just right, I can simply lose myself in the positive feeling, "lying back in the cockpit." That's why the experience is a positive one, why it lets me lie back. Losing myself in the pleasant sensation means I'm not worrying about the future and whether the feeling will continue. If pleasure were intrinsically about wanting a sensation to continue, it would kind of suck, because I'd have to start doing things to make that happen.

My brain doesn't like to have to do things.

(This could be a fundamental feature of personality where there are large interpersonal differences. I have heard that some people always feel a bit restless, as though they need to do stuff to accomplish something or make things better. I don't have that; my "settings" are different. This would explain why many people seem to have trouble understanding the intuitive appeal tranquilism has for some people.)

Anyway, the main point is that "lying back in the cockpit" is something one cannot do when suffering. (Or it's what experienced meditators can maybe do – and then it's not suffering anymore.) And the perspective where lying back in the cockpit is actually appealing for me as a sentient being, rather than some kind of "failure of not being agenty enough," is what fuels my stance that suffering and happiness are very, very different from one another. The hedonist view that "more happiness is always better" means that, in order to be a good egoist, one needs to constantly be in the cockpit working on one's long-term pleasure maximization. That's way too demanding for a theory that's supposed to help me do what is best for me.

Insofar as someone's hedonism is justified solely via introspection about the nature of conscious experience, I believe that it's getting something wrong. I'd say that hedonists of this specific type reify intuitions they have about pleasure (specifically, an interrelated cluster of intuitions that more pleasure is always better, that pleasure is better than non-consciousness, that pleasure involves wanting the experience to continue, etc.) as intrinsic components of pleasure. They treat their intuitions as the way things are while shrugging off the "contentment can be perfect" perspective as biased by idiosyncratic intuitions. However, both intuitions are secondary evaluative judgments we ascribe to these positive feelings. Different underlying stances produce different interpretations.

(And I feel like there's a sense in which the tranquilism perspective is simpler and more elegant. But at this point I'd already be happy if more people started to grant that hedonism is making just as much of a judgment call based on a different foundational intuition.)

Finally, I don't think all of ethics should be about the value of different experiences. When I think about "Lukas, the sentient being," I care primarily about the "lying back in the cockpit" perspective. When I think about "Lukas, the person," I care about my life goals. The perspectives cannot be summed into one thing because they are in conflict (except if one's life goals aren't perfectly selfish). If people have personal hedonism as one of their life goals, I care about them experiencing posthuman bliss out of regard for their life goals, but not because this is the optimal altruistic action regardless of their life goals.