Posts

Lukas_Gloor's Shortform 2020-07-27T14:35:50.329Z
Moral Anti-Realism Sequence #5: Metaethical Fanaticism (Dialogue) 2020-06-17T12:33:05.392Z
Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails 2020-06-14T13:33:41.638Z
Moral Anti-Realism Sequence #3: Against Irreducible Normativity 2020-06-09T14:38:49.163Z
Moral Anti-Realism Sequence #2: Why Realists and Anti-Realists Disagree 2020-06-05T07:51:59.975Z
Moral Anti-Realism Sequence #1: What Is Moral Realism? 2018-05-22T15:49:52.516Z
Cause prioritization for downside-focused value systems 2018-01-31T14:47:11.961Z
Multiverse-wide cooperation in a nutshell 2017-11-02T10:17:14.386Z
Room for Other Things: How to adjust if EA seems overwhelming 2015-03-26T14:10:52.928Z

Comments

Comment by Lukas_Gloor on On the limits of idealized values · 2021-06-22T08:52:31.512Z · EA · GW

I think this post is brilliant! 

I plan to link to it heavily in an upcoming piece for my moral anti-realism sequence. 

On X., Passive and active ethics

Rather, what I’m trying to point at is a way that importing and taking for granted a certain kind of realist-flavored ethical psychology can result in an instructive sort of misfire. Something is missing, in these cases, that I expect the idealizing subjectivist needs. In particular: these agents, to the end, lack an affordance for a certain kind of direct, active agency — a certain kind of responsibility, and self-creation. They don’t know how to choose, fully, for themselves.

Yeah, I think there's a danger for people who expect that "having more information," or other features of some idealized reflection procedure, would change the phenomenology of moral reasoning, such that once they're in the reflection procedure, certain answers will stick out to them. But, as you say, this point may never come!  So instead, it could continue to feel like one has to make difficult judgment calls left and right, with no guarantee that one is doing moral reasoning "the right way." 

(In fact, I'm convinced such a phase change won't come. I have a draft on this.)

In a sense, what I’m saying here is that idealizing subjectivism is, and needs to be, less like “realism-lite,” and more like existentialism, than is sometimes acknowledged.

I've also used the phrase "more like existentialism" in this context. :)

On IX., Hoping for convergence, tolerating indeterminacy

This is an excellent strategy for people who find themselves without strong object-level intuitions about their goals/values. (Or people who only have strong object-level intuitions about some aspects of their goals/values, but not the details. E.g., being confident that one would want to be altruistic, but unsure about population ethics or different theories of well-being. [In these cases, perhaps with a guarantee that the reflection procedure won't change the overarching objective – being altruistic, or finding a suitable theory of well-being, etc.])

Some people would probably argue that "Hoping for convergence, tolerating indeterminacy" is the rational strategy in light of our metaethical uncertainty. (I know you're not necessarily saying this in your post.) For example, they might argue as follows:

"If there's convergence among reflection procedures, I miss out if I place too much faith in my object-level intuitions and already formed moral convictions. By contrast, if there's no convergence, then it doesn't matter – all outcomes would be on the same footing." 

I want to push back against this stance, "rationally mandated wagering on convergence." I think it only makes sense for people whose object-level values are still under-defined. By contrast, if you find yourself with solid object-level convictions about your values, then you not only stand to gain something from wagering on convergence – you also stand to lose something. You might be giving up something you feel is worth fighting for in order to follow the kind-of-arbitrary outcome of some reflection procedure.

My point is, the currencies are commensurable: What's attractive about the possibility of many reflection procedures converging is the same thing that's attractive to people who already have solid object-level convictions about their values (assuming they're not making one of the easily identifiable mistakes, i.e., assuming that, for them, there'd be no convergence among reflection procedures that are open-ended enough to get them to adopt different values). Namely, when they reflect to the best of their abilities, they feel drawn to certain moral principles or goals or specific ways of living their lives.

In other words, the importance of moral reflection for someone is exactly proportional to their credence in it changing their thinking. If someone feels highly uncertain, they almost exclusively have things to gain. By contrast, the more certain you already are in your object-level convictions, the larger the risk that deferring to some poorly understood reflection procedure would lead you to an outcome that constitutes a loss, in a sense relevant to your current self. Of course, one can always defer to conservative reflection procedures, i.e., procedures where one is fairly confident that they won't lead to drastic changes in one's thinking. Those could be used to flesh out one's thinking in places where it's still uncertain (and therefore, possibly, under-defined), while protecting convictions that one would rather not put at risk. 

Comment by Lukas_Gloor on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-06-18T16:15:10.884Z · EA · GW

Is the map/territory distinction central to your point? I get the impression that you're mostly expressing the opinion that the LTFF has too high a bar, or that its research taste is idiosyncratic (or too narrow). (I'd imagine that grantmakers are trying to do what's best on impact grounds.) 

Comment by Lukas_Gloor on Progress studies vs. longtermist EA: some differences · 2021-06-15T10:04:59.904Z · EA · GW

It sounds like we both agree that when it comes to reflecting about what's important to us, there should maybe be a place for stuff like "(idiosyncratic) reactive attitudes," "psychotherapy or raising a child or 'things the humanities do'" etc. 

Your view seems to be that you have two modes of moral reasoning: The impartial mode of analytic philosophy, and the other thing (subjectivist/particularist/existentialist).  

My point with my long comment earlier is basically the following: 
The separation between these two modes is not clear!  

I'd argue that what you think of as the "impartial mode" has some clear-cut applications, but it's under-defined in some places, so different people will gravitate toward different ways of approaching the under-defined parts, based on appeals that you'd normally place in the subjectivist/particularist/existentialist mode. 

Specifically, population ethics is under-defined. (It's also under-defined how to extract "idealized human preferences" from people like my parents, who aren't particularly interested in moral philosophy or rationality.) 

I'm trying to point out that if you fully internalized that population ethics is going to be under-defined no matter what, you then have more than one option for how to think about it. You no longer have to think of impartiality criteria and "never violating any transitivity axioms" as the only option. You can think of population ethics more like this: Existing humans have a giant garden (the 'cosmic commons') that is at risk of being burnt, and they can do stuff with it if they manage to preserve it, and people have different preferences about what definitely should or shouldn't be done with that garden. You can look for the "impartially best way to make use of the garden" – or you could look at how other people want to use the garden and compromise with them, or look for "meta-principles" that guide who gets to use which parts of the garden (and stuff that people definitely shouldn't do, e.g., no one should shit in their part of the garden), without already having a fixed vision for what the garden has to look like in the end, once it's all made use of. Basically, I'm saying that knowing from the very beginning exactly what the "best garden" has to look like, regardless of the gardening-related preferences of other humans, is not a forced move (especially because there's no universally correct solution anyway!). You're very much allowed to think of gardening in a different, more procedural and 'particularist' way.
 

Comment by Lukas_Gloor on How well did EA-funded biorisk organisations do on Covid? · 2021-06-08T13:53:53.942Z · EA · GW

Oh, you're probably right then! 

Comment by Lukas_Gloor on How well did EA-funded biorisk organisations do on Covid? · 2021-06-08T13:52:51.641Z · EA · GW

that seems to imply that developing countries had lower survival rates, despite their more favourable demographics, which would be sad.

This isn't impossible, because there seems to be a correlation where people with lower socioeconomic status have worse Covid outcomes, but I still doubt that the IFR was worse overall in developing countries. The demographics (esp. the proportion of people aged 70–80 and older) make a huge difference. 

But I never looked into this in detail, and my impression was also that for a long time at least, there wasn't any reliable data. 

From excess deaths in some locations, such as Guayaquil (Ecuador), one could rule out the possibility that the IFR in developing countries was incredibly low (it would have been at least 0.3% given plausible assumptions about the outbreak there, and possibly a lot higher). 
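
(For a rough sense of the arithmetic behind that kind of lower bound, here's a minimal back-of-the-envelope sketch. The excess-death and population figures below are illustrative assumptions, not the actual Guayaquil data; the logic is just that infections can't exceed the population, so excess deaths divided by population put a floor on the IFR.)

```python
# Back-of-the-envelope lower bound on the IFR from excess deaths.
# Both figures below are illustrative assumptions, not the actual Guayaquil data.
excess_deaths = 8_000     # assumed excess deaths during the outbreak wave
population = 2_700_000    # assumed population of the affected area

# Infections can never exceed the population, so deaths / population is a floor on the IFR.
ifr_lower_bound = excess_deaths / population
print(f"IFR >= {ifr_lower_bound:.1%}")  # prints "IFR >= 0.3%"; higher if not everyone was infected
```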

Comment by Lukas_Gloor on How well did EA-funded biorisk organisations do on Covid? · 2021-06-08T13:45:34.273Z · EA · GW

IFR (but back in February/March 2020, a lot of people called everything "CFR"). I think he was talking about high-income countries (that's what my 0.9% estimate for 2020 referred to – note that it's lower for 2020+2021  combined because of better treatment and vaccines). I'd have to look it up again, but I doubt that Adalja was talking about a global IFR that includes countries with much younger demographics than the US. It could be that he left it ambiguous. 

Here's the Sam Harris podcast in question; I haven't re-listened to it yet. 

Comment by Lukas_Gloor on How well did EA-funded biorisk organisations do on Covid? · 2021-06-06T18:04:34.219Z · EA · GW

To be fair, the Johns Hopkins Center isn't just Adalja. I'm not familiar with everything they do, but for instance, they kept an updated database in the early stages of the outbreak that was extremely helpful for forecasting!

Comment by Lukas_Gloor on How well did EA-funded biorisk organisations do on Covid? · 2021-06-06T15:31:36.113Z · EA · GW

He said he travelled internationally "yesterday" (which would have been February 9th if the video was uploaded the day of the lecture) and didn't wear a mask.

This seems totally okay to me, FWIW. In most places (e.g., London or the US), it would have seemed a bit overly cautious to wear masks before the end of February, no? 

I think his prediction and advice should probably be judged negatively and reflect poorly on him / Center for Health Security, but I'm not sure how harshly he/ CHS should be judged.

I generally agree with that, but it's worth noting that it was extremely common for Western epidemiologists to repeat the mantra "you cannot do what Asian countries are doing; there's no way to contain the virus." 

Comment by Lukas_Gloor on How well did EA-funded biorisk organisations do on Covid? · 2021-06-06T15:25:27.623Z · EA · GW

Adalja also confidently predicted the infection fatality rate for the rest of 2020 to be around 0.6% (on the Sam Harris podcast), despite thinking the virus couldn't be contained (if true, this would have led to more ICU-bed and oxygen shortages in lots of places). In reality, the IFR was more like 0.9% or higher for countries like the US and UK. It was probably lower for countries with younger demographics, but I don't even think Adalja was basing his estimates on that. 

(TBC, this isn't as big a mistake as some of his other statements, or as those of Ioannidis, who completely disgraced himself throughout 2020 and beyond, but I find it worth pointing out because I remember distinctly that, at the time Adalja said this, there was a lot of fairly strong evidence for higher IFRs, including published estimates. I thought 0.6% seemed [edit] hard to defend, though I don't remember how much he flagged that there was a substantial chance it would be significantly higher. Importantly, the IFR would have been higher than it actually turned out to be if Adalja had been right that "the virus can't be contained.")  

Comment by Lukas_Gloor on Progress studies vs. longtermist EA: some differences · 2021-06-06T11:12:24.188Z · EA · GW

E.g., suppose I'm uncertain between:

  • Worldview A, according to which I should prioritize based on time scales of trillions of years.
  • Worldview B, according to which I should prioritize based on time scales of hundreds of years.

[...]

Now, I do have views on this matter that don't make me very sympathetic to allocating a significant chunk of my resources to, say, speeding up economic growth or other things someone concerned about next few decades might prioritize. (Both because of my views on normative uncertainty and because I'm not aware of anything sufficiently close to 'worldview B' that I find sufficiently plausible - these kind of worldviews from my perspective sit in too awkward a spot between impartial consequentialism and a much more 'egoistic', agent-relative, or otherwise nonconsequentialist perspective.)

I think I have a candidate for a "worldview B" that some EAs may find compelling. (Edit: Actually, the thing I'm proposing also allocates some weight to trillions of years, but it differs from your "worldview A" in that nearer-term considerations don't get swamped!) It requires a fair bit of explaining, but IMO that's because it's generally hard to explain how one framework differs from another when people are used to thinking within only a single framework. I strongly believe that if moral philosophy had always operated within my framework, the following points would be way easier to explain.

Anyway, I think standard moral-philosophical discourse is a bit dumb in that it includes categories without clear meaning. For instance, the standard discourse talks about notions like, "What's good from a universal point of view," axiology/theory of value, irreducibly normative facts, etc.

The above notions fail at reference – they don't pick out any unambiguously specified features of reality or unambiguously specified sets from the option space of norms for people/agents to adopt.

You seem to be unexcited about approaches to moral reasoning that are more "more 'egoistic', agent-relative, or otherwise nonconsequentialist" than the way you think moral reasoning should be done. Probably, "the way you think moral reasoning should be done" is dependent on some placeholder concepts like "axiology" or "what's impartially good" that would have to be defined crisply if we wanted to completely solve morality according to your preferred evaluation criteria. Consider the possibility that, if we were to dig into things and formalize your desired criteria, you'd realize that there's a sense in which any answer to population ethics has to be at least a little bit 'egoistic' or agent-relative. Would this weaken your intuitions that person-affecting views are unattractive?

I'll try to elaborate now why I believe "There's a sense in which any answer to population ethics has to be at least a little bit 'egoistic' or agent-relative."

Basically, I see a tension between "there's an objective axiology" and "people have the freedom to choose life goals that represent their idiosyncrasies and personal experiences." If someone claims there's an objective axiology, they're implicitly saying that anyone who doesn't adopt an optimizing mindset around successfully scoring "utility points" according to that axiology is making some kind of mistake / isn't being optimally rational. They're implicitly saying it wouldn't make sense for people (at least for people who are competent/organized enough to reliably pursue long-term goals) to live their lives in pursuit of anything other than "pursuing points according to the one true axiology." Note that this is a strange position to adopt! Especially when we look at the diversity between people and what sorts of lives they find the most satisfying (e.g., differences between investment bankers, MMA fighters, novelists, people who open up vegan bakeries, people for whom family+children means everything, those EA weirdos, etc.), it seems strange to say that all these people should conclude that they ought to prioritize surviving until the Singularity so as to get the most utility points overall. To say that everything before that point doesn't really matter by comparison. To say that any romantic relationships people enter are only placeholders until something better comes along with experience-machine technology.

Once you give up on the view that there's an objectively correct axiology (as well as the view that you ought to follow a wager for the possibility of it), all of the above considerations ("people differ according to how they'd ideally want to score their own lives") will jump out at you, no longer suppressed by this really narrow and fairly weird framework of "How can we subsume all of human existence into utility points and have debates on whether we should adopt 'totalism' toward the utility points, or come up with a way to justify taking a person-affecting stance."

There's a common tendency in EA to dismiss the strong initial appeal of person-affecting views because there's no elegant way to incorporate them into the moral realist "utility points" framework. But one person's modus ponens is another's modus tollens: Maybe if your framework can't incorporate person-affecting intuitions, that means there's something wrong with the framework.

I suspect that what's counterintuitive about totalism in population ethics is less about the "total"/"everything" part of it, and more related to what's counterintuitive about "utility points" (i.e., the postulate that there's an objective, all-encompassing axiology). I'm pretty convinced that something like person-affecting views, though obviously conceptualized somewhat differently (since we'd no longer be assuming moral realism), intuitively makes a lot of sense. 

Here's how that would work (now I'll describe the new proposal for how to do ethical reasoning):

Utility is subjective. What's good for someone is what they deem good for themselves by their lights, the life goals for which they get up in the morning and try doing their best.

A beneficial outcome for all of humanity could be defined by giving individual humans the opportunity to reflect on their goals in life under ideal conditions and then implementing some compromise (e.g., preference utilitarianism, or – probably better – a moral parliament framework) that makes everyone really happy with the outcome. 

Preference utilitarianism or the moral parliament framework would concern people who already exist – these frameworks' population-ethical implications are indirectly specified, in the sense that they depend on what the people on earth actually want. Still, people individually have views about how they want the future to go. Parents may care about having more children, many people may care about intelligent earth-originating life not going extinct, some people may care about creating as much hedonium as possible in the future, etc.

In my worldview, I conceptualize the role of ethics as two-fold: 

(1) Inform people about the options for wisely chosen subjective life goals

--> This can include life goals inspired by a desire to do what's "most moral" / "impartial" / "altruistic," but it can also include more self-oriented life goals

(2) Provide guidance for how people should deal with the issue that not everyone shares the same life goals

Population ethics, then, is a subcategory of (1). Assuming you're looking for an altruistic life goal rather than a self-oriented one, you're faced with the question of whether your notion of "altruism" includes bringing happy people into existence. No matter what you say, your answer to population ethics will be, in a weak sense, 'egoistic' or agent-relative, simply because you're not answering "What's the right population ethics for everyone." You're just answering, "What's my vote for how to allocate future resources." (And you'd be trying to make your vote count in an altruistic/impartial way – but you don't have full/single authority on that.)

If moral realism is false, notions like "optimal altruism" or "What's impartially best" are under-defined. Note that under-definedness doesn't mean "anything goes" – clearly, altruism has little to do with sorting pebbles or stacking cheese on the moon. "Altruism is under-defined" just means that there are multiple 'good' answers.

Finally, here's the "worldview B" I promised to introduce:

 Within the anti-realist framework I just outlined, altruistically motivated people have to think about their preferences for what to do with future resources. And they can – perfectly coherently – adopt the view: "Because I have person-affecting intuitions, I don't care about creating new people; instead, I want to focus my 'altruistic' caring energy on helping people/beings that exist regardless of my choices. I want to help them by fulfilling their life goals, and by reducing the suffering of sentient beings that don't form world-models sophisticated enough to qualify for 'having life goals'."

Note that a person who thinks this may end up caring a great deal about humans not going extinct. However, unlike in the standard framework for population ethics, she'd care about this not because she thinks it's impartially good for the future to contain lots of happy people. Instead, she thinks it's good from the perspective of the life goals of specific, existing others, for the future to go on and contain good things.

Is that really such a weird view? I really don't think so, myself. Isn't it rather standard population-ethical discourse that's a bit weird?

Edit: (Perhaps somewhat related: my thoughts on the semantics of what it could mean that 'pleasure is good'. My impression is that some people think there's an objectively correct axiology because they find experiential hedonism compelling in a sort of 'conceptual' way, which I find very dubious.) 

Comment by Lukas_Gloor on "Existential risk from AI" survey results · 2021-06-03T08:32:01.946Z · EA · GW

the surveys were sampling from somewhat similar populations (most clearly for the FHI research scholar's survey and this one, and less so for the 2008 one - due to a big time gap - and the Grace et al. one)

I mostly just consider the FHI research scholars survey to be relevant counterevidence here, because 2008 is indeed really far back and because I think EA researchers reason quite differently from the domain experts in the Grace et al. survey. 

When I posted my comment above, I realized that I hadn't seen the results of the FHI survey! I'd have to look it up to say more, but one hypothesis I already have could be: The FHI research scholars survey was sent to a broader audience than the one by Rob now (e.g., it was sent to me and some of my former colleagues), and people with lower levels of expertise tend to defer more to what they consider to be the expert consensus, which might itself be affected by the possibility of public-facing biases.

Of course, I'm also just trying to defend my initial intuition here.  :) 

Edit: Actually I can't find the results of that FHI RS survey. I only find this announcement. I'd be curious if anyone knows more about the results of that survey – when I filled it out I thought it was well designed and I felt quite curious about people's answers! 

Comment by Lukas_Gloor on "Existential risk from AI" survey results · 2021-06-02T09:34:41.603Z · EA · GW

I find it plausible that there's some perceived pressure to not give unreasonably-high-seeming probabilities in public, so as to not seem weird (as Rob hypothesized in the discussion here, which inspired this survey).  This could manifest both as "unusually 'optimistic' people being unusually likely to give public, quantitative estimates" and "people being prone to downplay their estimates when they're put into the spotlight." 

Personally, I've noticed the latter effect a couple of times when I was talking to people who I thought would be turned off by high probabilities for TAI. I didn't do it on purpose, but after two conversations I noticed that the probabilities I gave for TAI in 10 years, or things similar to that, seemed uncharacteristically low for me. (I think it's natural for probability estimates to fluctuate between elicitation attempts, but if the trend is quite strong and systematically goes in one direction, then that's an indicator of some type of bias.)

I also remember that I felt a little uneasy about giving my genuine probabilities in a survey of alignment and longtermist-strategy researchers in August 2020 (by an FHI research scholar), out of concern about making myself or the community seem a bit weird. I gave my true probabilities anyway (I think it was anonymized), but I felt a bit odd thinking that I was giving 65% to things that I expected a bunch of reputable EAs to only give 10% to. (IIRC, the survey questions were quite similar to the wording in this post.)

(By the way, I find that the "less than maximum potential" operationalizations call for especially high probability estimates, since it's just a priori unlikely that humans set things up in perfect ways, and I do think that small differences in the setup can have huge effects on the future. Maybe that's an underappreciated crux between researchers – one that could also include some normative subcruxes.) 

Comment by Lukas_Gloor on Could an international marriage (historically mail order bride) be considered an effective "initiative"? · 2021-05-27T22:40:40.113Z · EA · GW

Answering "x or y?" questions with just "no" (or "yes") seems to leave things ambiguous (does it mean  "the latter," "not the former," or "neither of those"?). It seems impolite to me (not putting in the effort to write something slightly longer to make things easier for the reader).
 

Comment by Lukas_Gloor on My attempt to think about AI timelines · 2021-05-20T12:30:55.801Z · EA · GW

I phrased my point poorly. I didn't mean to put the emphasis on the 20% figure, but more on the notion that things will be transformative in a way that fits neatly into the economic-growth framework. My concern is that any operationalization of TAI as "x% growth per year(s)" is quite narrow and doesn't allow for scenarios where AI systems are first deployed to secure influence and control over the future. Maybe there'll be a war and the "TAI" systems secure influence over the future by wiping out most of the economy except for a few heavily protected compute clusters and resource/production centers. Maybe AI systems are deployed primarily as governance advisors and stay out of the rest of the economy to help with beneficial regulation. And so on. 

I think things will almost certainly be transformative one way or another, but if you therefore expect to always see stock market increases of >20%, or increases to other economic growth metrics, then maybe that's thinking too narrowly. The stock market (or standard indicators of economic growth) isn't what ultimately matters. Power-seeking AI systems would prioritize "influence over the long-term future" over "short-term indicators of growth." Therefore, I'm not sure we'd see economic growth right when "TAI" arrives. The way I conceptualize "TAI" (and maybe this is different from other operationalizations, though, going by memory, I think it's compatible with the way Ajeya framed it in her report, since she framed it as "capable of executing a 'transformative task'") is that "TAI" is certainly capable of bringing about a radical change in growth mode, eventually, but it may not necessarily be deployed to do that. I think "Where's the point of no return?" is a more important question than "Will AGI systems already transform the economy 1, 2, or 4 years after their invention?"

That said, I don't think the above differences in how I'd operationalize "TAI" are cruxes between us. From what you say in the writeup, it sounds like you'd be skeptical both that AGI systems could transform the economy(/world) directly and that they could transform it eventually via influence-securing detours. 

Comment by Lukas_Gloor on My attempt to think about AI timelines · 2021-05-20T12:08:21.351Z · EA · GW

Yes, that's what I meant. And FWIW, I wasn't sure whether Ben was using modest epistemology (in my terminology, outside-view reasoning isn't necessarily modest epistemology), but there were some passages in the original post that suggested low discrimination about how to construct the reference class. E.g., "10% on short timelines people" and "10% on long timelines people" suggests that one is simply including the sorts of timeline credences that happen to be around, without trying to evaluate people's reasoning competence. For contrast, imagine wording things like this: 

"10% credence each to persons A and B, who both appear to be well-informed on this topic and whose interestingly different reasoning styles both seem defensible to me, in the sense that I can't confidently point out why one of them is better than the other."

 

Comment by Lukas_Gloor on My attempt to think about AI timelines · 2021-05-19T13:44:11.746Z · EA · GW

On the object level (I made the other comment before reading on), you write: 

My impression from talking to Phil Trammell at various times is that it’s just really hard to get such high growth rates from a new technology (and I think he thinks the chance that AGI leads to >20% per year growth rates is lower than I do).

Maybe this is just a matter of definitions, but I'd say that "like the Industrial Revolution or bigger" doesn't have to mean literally >20% growth per year. Things could be transformative in other ways, and at least eventually, I feel like things would almost certainly accelerate in a future controlled with or by AGI. 

Edit: And I see now that you're addressing why you feel comfortable disagreeing: 
 

I sort of feel like other people don’t really realise / believe the above so I feel comfortable deviating from them.

I'm not sure about that. :) 

Comment by Lukas_Gloor on My attempt to think about AI timelines · 2021-05-19T13:33:33.599Z · EA · GW

Gave a 57% probability that AGI (or similar) would not imply TAI, i.e. would not imply an effect on the world’s trajectory at least as large as the Industrial Revolution.

My impression (I could be wrong) is that this claim is interestingly contrarian among EA-minded AI researchers. I see a potential tension between how much weight you give this claim within your framework and how much you defer to outside views (and potentially even modest epistemology – gasp!) in the overall forecast. 

Comment by Lukas_Gloor on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-05-18T12:53:32.493Z · EA · GW

Do you think that the moderators were too charitable toward Phil?

No, I didn't mean to voice an opinion on that part. (And the moderation decision seemed  reasonable to me.)

My comment was prompted by the concern that giving a warning to Halstead (for not providing more evidence) risks making it difficult for people to voice concerns in the future. My impression is that it's already difficult enough to voice negative opinions on others' character. Specifically, I think there's an effect where, if you voice a negative opinion and aren't extremely skilled at playing the game of being highly balanced, polite, and charitable (e.g., some other people's comments in the discussion strike me as almost superhumanly balanced and considerate), you'll offend the parts of the EA forum audience that implicitly consider being charitable to the accused a much more fundamental virtue than protecting other individuals (the potential victims of bad behavior) and the community at large. (Problematic individuals, in my view, tend to create a "distortion field" around them that can have negative, norm-eroding consequences in various ways – though that was probably much more the case with other community drama than here, given that Phil wrote articles mostly at the periphery of the community.) 

Of course, these potential drawbacks I mention only count in worlds where the concerns raised are in fact accurate. The only way to get to the bottom of things is indeed with truth-tracking norms, and being charitable (edit: and thorough) is important for that. 

I just feel that the demands for evidence shouldn't be too strong or absolute, partly also because there are instances where it's difficult to verbalize why exactly someone's behavior seems unacceptable (even though it may be really obvious to people who are closely familiar with the situation that it is). 

Lastly, I think it's particularly bad to disincentivize people for how they framed things in instances where they turned out to be right. (It's different if there was a lot of uncertainty as to whether Halstead had valid concerns, or whether he was just pursuing a personal vendetta against someone.)

Of course, these situations are really, really tricky, and I don't envy the forum moderators for having to navigate the waters.

If someone wants to warn the entire community that someone is behaving badly, the most effective warnings will include evidence.

True, but that also means that the right incentives are already there. If someone doesn't provide the evidence, it could be that they find it hard to articulate, that there are privacy concerns, or that they don't have the mental energy at the time to polish their evidence and reasoning but feel strongly enough that they'd like to speak up with a shorter comment. Issuing a warning discourages all those options. All else equal, providing clear evidence is certainly best. But I wouldn't want to risk missing out on the relevant info that community veterans (whose reputation is automatically on the line when they voice a strong concern) have a negative opinion for one reason or another. 

Comment by Lukas_Gloor on EA is a Career Endpoint · 2021-05-18T09:02:49.672Z · EA · GW

I think it's common for funding opportunities "just below the bar" to have capped upside potential, in the sense of the funders thinking that the grants are highly unlikely to generate very high impact. At least, that's my experience with grantmaking. The things I felt unsure about tended to always be of the sort "maybe this is somewhat impactful, but I can't imagine it being a giant mistake not to fund it." By contrast, saving money gives you some chance of having an outsized impact later on, in case you end up desperately needing it for a new, large opportunity. 

Comment by Lukas_Gloor on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-05-12T21:29:03.704Z · EA · GW

I think the EA community (and the rationality community) is systematically at risk of being too charitable. I don't have a citation for that, but my impression is very much that this has been pointed out repeatedly in the instances where there was community discussion of problematic behavior by people who seemed interpersonally incorrigible. I think continuing to repeat that mistake is really unwise and has bad consequences.

 

Comment by Lukas_Gloor on [Coronavirus] Is it a good idea to meet people indoors if everyone's rapid antigen test came back negative? · 2021-04-23T10:21:10.752Z · EA · GW

This tweet (in German) seems relevant.

And here's a related anecdote: This story might just be a fluke, but it does suggest that people can repeatedly test negative shortly before a superspreading event.

Comment by Lukas_Gloor on How much does performance differ between people? · 2021-03-29T13:14:40.459Z · EA · GW

To give an example of what would go into research taste, consider the issue of reference class tennis (rationalist jargon for arguments about whether a given analogy has merit, or for two people throwing widely different analogies at each other in an argument). That issue comes up a lot, especially in preparadigmatic branches of science. Some people may have good intuitions about this sort of thing, while others may be hopelessly bad at it. Since arguments of that form feel notoriously intractable to outsiders, it would make sense if "being good at reference class tennis" were a skill that's hard to evaluate. 

Comment by Lukas_Gloor on How much does performance differ between people? · 2021-03-29T13:03:41.427Z · EA · GW

The awakening of slumbering papers may be fundamentally unpredictable in part because science itself must advance before the implications of the discovery can unfold.

Except to the authors themselves, who may often have an inkling that their paper is important. E.g., I think Rosenblatt was incredibly excited/convinced about the insights in that sleeping beauty paper. (Small chance my memory is wrong about this, or that he changed his mind at some point.) 

I don't think this is just a nitpicky comment on the passage you quoted. I find it plausible that there's some hard-to-study quantity around 'research taste' that predicts impact quite well. It'd be hard to study because the hypothesis is that only very few people have it. To tell who has it, you kind of need to have it a bit yourself. But one decent way to measure it is asking people who are universally regarded as 'having it' to comment on who else they think also has it. (I know this process would lead to unfair network effects and may result in false negatives and so on, but I'm advancing a descriptive observation here; I'm not advocating for a specific system on how to evaluate individuals.) 

Related: I remember a comment (can't find it anymore) somewhere by Liv Boeree or some other poker player familiar with EA. The commenter explained that monetary results aren't the greatest metric for assessing the skill of top poker players. Instead, it's  best to go with assessments by expert peers. (I think this holds mostly for large-field tournaments, not online cash games.) 

Comment by Lukas_Gloor on Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails · 2021-01-09T15:13:52.788Z · EA · GW

I will probably  rename this post eventually to "Why the Irreducible Normativity Wager Fails." I now think there are three separate wagers related to moral realism: 

  • An infinitely strong wager to act as though Irreducible Normativity applies
  • An infinitely strong wager to act as though normative qualia exist (this can be viewed as a subcategory of the Irreducible Normativity wager) 
  • A conditionally strong wager to expect moral convergence
    • I will argue that this is not per se a wager for "moral realism" but actually equivalent to a wager for valuing moral reflection under anti-realism; the degree to which it applies depends on one's prior intuitions and normative convictions. 

I don't find the first two wagers convincing. The last wager definitely works in my view, but since it's only conditionally strong, it doesn't quite work the way people think it does. I will devote future posts to wagers 2 and 3 in the list above. This post here only covers the first wager. 

Comment by Lukas_Gloor on Can I have impact if I’m average? · 2021-01-04T11:02:46.309Z · EA · GW

I fully agree! It's certainly possible to have a lot of impact if your skills are average! And any amount of impact matters by definition. I suspect that it doesn't always seem like it because people tend to try to have impact in only the more established, direct ways. Or because some average-skilled people don't want to acknowledge that others are more suited for certain projects. I like the framework introduced by Ryan Carey and Tegan McCaslin  here. One of the steps is "Get humble: Amplify others’ impact from a more junior role."

I also like to think of EA (and life in general) as a video game with varying difficulty levels, and if your skills are only average (or you suffer from mental health issues more so than others), you're playing at a higher level of difficulty and you can't expect to earn the same amount of (non-adjusted) points. Upwards comparisons don't make sense for that reason! 

Comment by Lukas_Gloor on Lukas_Gloor's Shortform · 2020-12-23T10:01:39.492Z · EA · GW

Thanks for bringing up this option! I don't agree with this framing for two reasons: 

  • As I point out in my sequence's first post, some ways in which "moral facts exist" are underwhelming. 
  • I don't think moral indeterminacy necessarily means that there's convergence of expert judgments. At least, the way in which I think morality is underdetermined explicitly predicts expert divergence. Morality is "real" in the sense that experts will converge up to a certain point, and beyond that, some experts will have underdetermined moral values while others will have made choices within what's allowed by indeterminacy. Out of the ones that made choices, not all choices will be the same.

I think what I describe in the second bullet point will seem counterintuitive to many people because they think that if morality is underdetermined, your views on morality should be underdetermined, too. But that doesn't follow! I understand why people have the intuition that this should follow, but it really doesn't work that way when you look at it closely. I've been working on spelling out why. 

Comment by Lukas_Gloor on Asking for advice · 2020-12-14T18:38:13.012Z · EA · GW

Sure!

Comment by Lukas_Gloor on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T16:28:18.363Z · EA · GW

It's maybe worth noting that there's an asymmetry: For people who think wild-animal lives are net positive, there are many things that contain even more sentient value than rainforest. By contrast, if you think wild-animal lives are net negative, only a few things contain more sentient disvalue than rainforest. (Of course, in comparison to expected future sentience, biological life only makes up a tiny portion, so rainforest is unlikely to be a priority from a longtermist perspective.)

I understand the worries described in the OP (apart from the "let's better not find out" part). I think it's important for EAs in the WAS reduction movement to proactively counter simplistic memes and advocate interventions that don't cause great harm according to some very popular moral perspectives. I think that's a moral responsibility for animal advocates with suffering-focused views. (And as we see in other replies here, this sounds like it's already common practice!) 

At the same time, I feel like the discourse on this topic can be a bit disingenuous sometimes, where people whose actions otherwise don't indicate much concern for the moral importance of the action-omission distinction (esp. when it comes to non-persons) suddenly employ rhetorical tactics that make it sound like "wrongly thinking animal lives are negative" is a worse mistake than "wrongly thinking they are positive". 

I also think this issue is thorny because, IMO, there's no clear answer. There are moral judgment calls to make that count for at least as much as empirical discoveries.
 

Comment by Lukas_Gloor on What is a book that genuinely changed your life for the better? · 2020-10-23T09:11:18.338Z · EA · GW

I also read Animorphs! I saw this tweet about it recently that was pretty funny. 
 

Comment by Lukas_Gloor on What is a book that genuinely changed your life for the better? · 2020-10-23T09:05:36.745Z · EA · GW

The Ancestor's Tale got me hooked on trying to understand the world. It was the perfect book for me at the time I read it (2008) because my English wasn't that good yet and I would plausibly have been too overwhelmed reading The Selfish Gene right away. And it was just way too cool to have this backwards evolutionary journey to go through. Apart from the next item on this list, I can't remember another book that I was so eager to read once I saw what it was about. I really wish I could have that feeling again!

Practical Ethics was life-changing for the obvious reasons and also because it got me far enough into ethics to develop the ambition to solve all the questions Singer left open.

Atonement was maybe the fiction book that influenced me the most. I had to re-read it for an English exam, and it got me thinking about the typical mind fallacy and how people can perceive/interpret the same situation in very different ways.

Fiction books I read when I was younger must have affected me in various ways, but I can't point to any specific effect with confidence.

Comment by Lukas_Gloor on What is a "Kantian Constructivist view of the kind Christine Korsgaard favours"? · 2020-10-21T10:55:04.389Z · EA · GW

I'm not sure I remember this the right way, but here's an attempt: 

"Constructivism" can refer to a family of normative-ethical views according to which objectively right moral facts are whatever would be the output of some constructive function, such as an imagined social contract or the Kantian realm of ends. "Constructivism" can also refer to a non-realist metaethical view that moral language doesn't refer to moral facts that exist in an outright objective sense, but are instead "construed" intersubjectively via some constructive function. 

So, a normative-ethical constructivist uses constructive functions to find the objectively right moral facts, while a metaethical constructivist uses constructive functions to explain why we talk as though there are some kind of moral facts at all, and what their nature is.

I'm really not sure I got this exactly right, but I am confident that in the context of this "letter to a young philosopher," the author meant to refer to the metaethical version of constructivism. It's mentioned right next to subjectivism, which is another non-realist metaethical position. Unlike some other Kantians, Korsgaard is not an objectivist moral realist. 

So, I think the author of this letter is criticizing consequentialist moral realism because there's a sense in which its recommendations are "too impartial." The most famous critique of this sort is the "Critique of Utilitarianism" by Bernard Williams. I quoted the most relevant passage here. One way to point to the intuitive force of this critique is as follows: If your moral theory gives the same recommendation whether or not you replace all existing humans with intelligent aliens, something seems (arguably) a bit weird. The "human nature element," as well as relevant differences between different people, are all lost! At least, to anyone who cares about something other than "The one objectively correct thing to care about," the objective morality will seem wrong and alienating. Non-objectivist morality has the feature that moral actions depend on "who's here." That morality arises from people rather than people being receptacles for it. 

I actually agree with this type of critique – I just wouldn't say that it's incompatible with EA. It's only incompatible with how many EAs (especially Oxford-educated ones) currently think about the foundations of ethics.

Importantly, it doesn't automatically follow from this critique of objectivist morality that a strong focus on (some type of) effectiveness is misguided, or that "inefficient" charities suddenly look a lot better. Not at all. Maybe it can happen that certain charities/projects look better from that vantage point, depending on the specifics and so on. But this would require further arguments. 

Comment by Lukas_Gloor on Buck's Shortform · 2020-09-13T19:55:51.444Z · EA · GW

I thought the same thing recently.

Comment by Lukas_Gloor on Asking for advice · 2020-09-09T13:33:55.596Z · EA · GW

I have the same!

For me it's the feeling of too many options, that some options may be less convenient for the other person than they initially would think, and that I have to try to understand this interface (IT aversion) instead of replying normally (even just clicking on the link feels annoying).

Comment by Lukas_Gloor on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T06:38:04.533Z · EA · GW

I did read the post, and I mostly agree with you about the content (Edit: at least in the sense that I think large parts of the argument are valid; I think there are some important disanalogies that Hanson didn't mention, like "right to bodily integrity" being way clearer than "moral responsibility toward your marriage partner"). I find it weird that just because I think a point is poorly presented, people think I disagree with the point. (Edit: It's particularly the juxtaposition of "gently raped," which also appears in the main part of the text. I also would prefer more remarks that put the reader at ease, e.g., repeating several times that it's all just a thought experiment, and so on.)

There's a spectrum of how much people care about a norm of presenting especially sensitive topics in a considerate way. You and a lot of other people here seem to be so far toward one end of the spectrum that you don't seem to notice the difference between me and Ezra Klein (in the discussion between Sam Harris and Ezra Klein, I completely agreed with Sam Harris). Maybe that's just because there are few people in the middle of this spectrum, and you usually deal with people who bring the same types of objections. But why are there so few people in the middle of this spectrum? That's what I find weird.

Some people here talk about a slippery slope and having to defend the ground at all costs. Is that the reasoning?

I want to keep up a norm that considerateness is really good. I think that's compatible with also criticizing bad outgrowths of considerate impulses. Just like it's compatible to care about truth-seeking, but criticize bad outgrowths of it. (If a virtue goes too far, it's not a virtue anymore.)

Comment by Lukas_Gloor on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T06:02:14.196Z · EA · GW

Thanks, that makes sense to me now! The three categories are also what I pointed out in my original comment:

Yes, it's a tradeoff, but Hanson's being so close to one extreme of the spectrum that it starts to be implausible that anyone can be that bad at communicating carefully just by accident. I don't think he's even trying, and maybe he's trying to deliberately walk as close to the line as possible.

Okay, so you cared mostly about this point about mind reading:

While I'm comfortable predicting those categories will exist, confidently asserting that someone falls into any particular category is hard,

This is a good point, but I didn't find your initial comment so helpful because this point against mind reading didn't touch on any of the specifics of the situation. It didn't address the object-level arguments I gave:

[...] I just feel like some of the tweet wordings were deliberately optimized to be jarring.)
but Hanson's being so close to one extreme of the spectrum that it starts to be implausible that anyone can be that bad at communicating carefully just by accident.

I felt confused about why I was presented with a fully general argument for something I thought I'd indicated I had already considered. If I read your comment as "I don't want to comment on the specific tweets, but your interpretation might be a bit hasty" – that makes perfect sense. But by itself, it felt to me like I was being strawmanned for not being aware of obvious possibilities. Similar to khorton, I had the impulse to say, "What does this have to do with trolleys? Shouldn't we, if anything, talk about the specific wording of the tweets?" Because to me, phrases like "gentle, silent rape" seem obviously unnecessarily jarring even as far as Twitter discussions about rape go. (And while one could try to defend this as just blunt or blithe, I think the reasoning would have to be disanalogous to your trolley or food examples, because it's not like it should be surprising to any Western person in the last two decades that rape is a particularly sensitive topic – very unlike the "changing animal food to vegan food" example you gave.)

Comment by Lukas_Gloor on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-01T19:13:27.735Z · EA · GW
Now, I'm not saying Hanson isn't deliberately edgy; he very well might be.

If you're not saying that, then why did you make a comment? It feels like you're stating a fully general counterargument to the view that some statements are clearly worth improving, and that it matters how we say things. That seems like an unattractive view to me, and I'm saying that as someone who is really unhappy with social justice discourse.

Edit: It makes sense to give a reminder that we may sometimes jump to conclusions too quickly, and maybe you didn't want to voice unambiguous support for the view that the comment wordings were in fact not easy to improve on given the choice of topic. That would make sense – but then I have a different opinion.

Comment by Lukas_Gloor on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-29T12:53:11.640Z · EA · GW

That all makes sense. I'm a bit puzzled why it has to be edgy on top of just talking with fewer filters. It feels to me like the intention isn't just to discuss ideas with people of a certain access need, but also involves some element of deliberate provocation. (But maybe you could say that's just a byproduct of curiosity about where the lines are – I just feel like some of the tweet wordings were deliberately optimized to be jarring.) If it weren't for that one tweet that Hanson has now apologized for, I'd have less strong opinions on whether to use the term "misstep." (And the original post used it in the plural, so you have a point.)

Comment by Lukas_Gloor on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-29T09:29:29.590Z · EA · GW

Thanks, those are good points. I agree that this is not black and white, that there are some positives to being edgy.

That said, I don't think you make a good case for the alternative view. I wouldn't say that the problem with Hanson's tweets is that they cause "emotional damage." The problem is that they contribute to toxoplasma-of-rage dynamics (esp. combined with some people's impulse to defend everything about them). My intuition is that this negative effect outweighs the positive effects you describe.

Comment by Lukas_Gloor on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-29T08:17:33.290Z · EA · GW
This seems like a tradeoff to me

Yes, it's a tradeoff, but Hanson's being so close to one extreme of the spectrum that it starts to be implausible that anyone can be that bad at communicating carefully just by accident. I don't think he's even trying, and maybe he's trying to deliberately walk as close to the line as possible. What's the point in that? If I'm right, I wouldn't want to gratify that. I think it's lacking nuance if you blanket object to the "misstep" framing, especially since that's still a relatively weak negative judgment. We probably want to be able to commend some people on their careful communication of sensitive topics, so we also have to be willing to call it out if someone is doing an absolutely atrocious job at it.

For reference, I have listened to a bunch of politically controversial podcasts by Sam Harris, and even though I think there's a bit of room to communicate even better, there were no remarks I'd label as 'missteps.' By contrast, several of Hanson's tweets are borderline at best, and at least one now-deleted tweet I saw was utterly insane. I don't think it's fair that everyone has to be at least as good at careful communication as Harris to be able to openly talk about sensitive topics (and it seems the bar from societal backlash is even higher now, which is of course terrible), but maybe we can expect people to at least do better than Hanson? That doesn't mean that Hanson should be disinvited from events, but I feel like it would suck if he didn't take more time to make his tweets less needlessly incendiary.

Comment by Lukas_Gloor on Lukas_Gloor's Shortform · 2020-08-24T22:01:55.939Z · EA · GW

I'm not following the developments anymore. I could imagine that the IFR is now lower than it used to be in April because treatment protocols have improved.

Comment by Lukas_Gloor on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-13T10:37:54.666Z · EA · GW

Thinking about insights that were particularly relevant for me / my values:

  • Reducing long-term risks from malevolent actors as a potentially promising cause area
  • The importance of developing (the precursors for) peaceful bargaining strategies
    • Related: Anti-realism about bargaining? (I don't know if people still believed this in 2015, but early discussions on Lesswrong seemed to indicate that a prevalent belief was that there exists a proper solution to good bargaining that works best independently of the decision architecture of other agents in the environment.)
  • Possible implications of correlated decision-making in large worlds
    • Arguably, some people were thinking along these lines before 2015. However, so many things fall under the heading of "acausal trade" that it's hard to tell, and judging by conversations with people who think they understood the idea but actually mixed it up with something else, I assign 40% to this having been relevantly novel.
  • Some insights on metaethics might qualify. For instance, the claim "Being morally uncertain and confidently a moral realist are in tension" is arguably a macrostrategically relevant insight. It suggests that more discussion of the relevance of having underdetermined moral values (Stuart Armstrong wrote about this a lot) seems warranted, and that, depending on the conclusions from how to think about underdetermined values, peer disagreement might work somewhat differently for moral questions than for empirical ones. (It's hard to categorise whether these are novel insights or not. I think it's likely that there were people who would have confidently agreed with these points in 2015 for the right reasons, but maybe lacked awareness that not everyone will agree on addressing the underdetermination issue in the same way, and so "missed" a part of the insight.)

Comment by Lukas_Gloor on EA reading list: utilitarianism and consciousness · 2020-08-08T12:41:25.368Z · EA · GW

I also noticed this when I started planning a blogpost on this topic!

De Lazari-Radek and Singer's The Point of View of the Universe has a chapter on hedonism, but I think the argument is less developed than in the two links you give. (BTW, if you have a copy of the paper by Adam Lerner and think it's okay to share it with me, I'd be very interested!)

It's interesting to note that Sinhababu's epistemic argument for hedonism explicitly relies on the premise "moral realism is true." Without that premise, the argument would be less forceful (what remains would be the comparison that pleasure's goodness is similar to the brightness of the color "lemon yellow" – but that doesn't seem to support the strong version of the claim "pleasure is good.")

Comment by Lukas_Gloor on Max_Daniel's Shortform · 2020-08-06T13:36:37.584Z · EA · GW

Related: Relationships in a post-singularity future can also be set up to work well, so that the setup overdetermines any efforts by the individuals in them.

To me, that takes away the whole point. I don't think this would feel less problematic if somehow future people decided to add some noise to the setup, such that relationships occasionally fail.

The reason I find any degree of "setup" problematic is that it emphasizes the self-oriented benefits one gets out of relationships and de-emphasizes the other person's identity as someone who exists independently of you. It's romantic to think that there's a soulmate out there who would be just as happy to find you as you are about finding them. It's not that romantic to think about creating your soulmate with the power of future technology (or society doing this for you).

This is the "person-affecting intuition for thinking about soulmates." If the other person exists already, I'd be excited to meet them, and would be motivated to put in a lot of effort to make things work, as opposed to just giving up on myself in the face of difficulties. By contrast, if the person doesn't exist yet or won't exist in a way independent of my actions, I feel like there's less of a point/appeal to it.

Comment by Lukas_Gloor on The problem with person-affecting views · 2020-08-06T11:36:29.470Z · EA · GW

It's great to have a short description of the difficulties for person-affecting intuitions!

Any reasonable theory of population ethics must surely accept that C is better than B. C and B contain all of the same people, but one of them is significantly better off in C (with all the others equally well off in both cases). Invoking a person-affecting view implies that B and C are equally as good as each other, but this is clearly wrong.

That's a good argument. Still, I find person-affecting views underrated because I suspect that many people have not given much thought to whether it even makes sense to treat population ethics in the same way as other ethical domains.

Why do we think we have to be able to rate all possible world states according to how impartially good or bad they are? Population ethics seems underspecified on exactly the dimension from which many moral philosophers derive "objective" principles: others' interests. It's the one ethical discipline where others' interests are not fixed. The principles that underlie preference utilitarianism aren't sufficiently far-reaching to specify what to do with newly created people. And preference utilitarianism is itself incomplete, because of the further question: What are my preferences? (If everyone's preference were to be a preference utilitarian, we'd all be standing around waiting until someone has a problem or forms a preference that's different from selflessly adhering to preference utilitarianism.)

Preference utilitarianism seems like a good answer to some important question(s) that fall(s) under the "morality" heading. But it can't cover everything. Population ethics is separate from the rest of ethics.

And there's an interesting relation between how we choose to conceptualize population ethics and how we then come to think about "What are my life goals?"

If we think population ethics has a uniquely correct solution that ranks all world states without violations of transitivity or other, similar problems, we have to think that, in some way, there's a One Compelling Axiology telling us the goal criteria for every sentient mind. That axiology would specify how to answer "What are my life goals?"

By contrast, if axiology is underdetermined, then different people can rationally adopt different types of life goals.

I self-identify as a moral anti-realist because I'm convinced there's no One Compelling Axiology. Insofar as there's something fundamental and objective to ethics, it's this notion of "respecting others' interests." People's life goals (their "interests") won't converge.

Some people take personal hedonism as their life goal, some just want to Kill Bill, some want to have a meaningful family life and die from natural causes here on earth, some don't think about the future at all and live the party life, some discount any aspirations of personal happiness in favor of working toward positively affecting transformative AI, some want to live forever but also do things to help others realize their dreams along the way, some just want to become famous, etc.

If you think of humans as the biological algorithm we express, rather than the things we come to believe and identify with at some particular point in our biography (based on what we've lived), then you might be tempted to seek a One Compelling Axiology with the question "What's the human policy?" ("Policy" in analogy to machine learning.) For instance, you could plan to devote the future's large-scale simulation resources to figuring out the structure of what different humans come to value in different simulated environments, with different experienced histories. You could do science about this and identify general patterns.

But suppose you've figured out the general patterns and tell the result to the Bride in Kill Bill. You tell her "the objective human policy is X." She might reply "Hold on with your philosophizing, I'm going to have to kill Bill first. Maybe I'll come back to you and consider doing X afterwards." Similarly, if you tell a European woman with a husband and children about the arguments to move to San Francisco to work on reducing AI risks, because that's what she ended up caring about on many runs of simulations of her in environments where she had access to all the philosophical arguments, she might say "Maybe I'd be receptive to that in another life, but I love my husband in this world here, and I don't want to uproot my children, so I'm going to stay here and devote less of my caring capacity to longtermism. Maybe I'll consider wanting to donate 10% of my income, though."

So, regardless of questions about their "human policy," in terms of what actual people care about at given points in time, life goals may differ tremendously between people, and even between copies of the same person in different simulated environments. That's because life goals also track things that relate to the identities we have adopted and the social connections that are meaningful to us.

If you say that population ethics is all-encompassing, you're implicitly saying that all the complexities in the above paragraphs count for nothing (or not much), and that people should just adopt the same types of life goals, no matter their level of novelty-seeking, achievement striving, prosociality, embeddedness in meaningful social connections, views on death, etc. You're implicitly saying that the way the future should ideally go has almost nothing to do with the goals of presently existing people. To me, that stance is more incomprehensible than some problem with transitivity.

Alternatively, you can say that maybe all of this can't be put under a single impartial utility function. If so, it seems that you're correct that you have to accept something similar to the violation of transitivity you describe. But is it really so bad if we look at it with my framing?

It's not "Even though there's a One Compelling Axiology, I'll go ahead and decide to do the grossly inelegant thing with it." Instead, it's "Ethics is about life goals and how to relate to other people with different life goals, as well as asking what types of life goals are good for people. Probably, different life goals are good for different people. Therefore, as long as we don't know which people exist, not everything can be determined. There also seems to be a further issue about how to treat cases where we create new people: that's population ethics, and it's a bit underdetermined, which gives more freedom for us to choose what to do with our future lightcone."

So, I propose considering a more limited role for population ethics than the all-encompassing one it is typically given. We could maybe think of it as: a set of appeals or principles by which beings can hold accountable the decision-makers that created them. This places some constraints on the already existing population, but it leaves room for personal life projects (as opposed to "dictatorship of the future," where all our choices about the future light cone are predetermined by the One Compelling Axiology, and so have no relation to which exact people are actually alive and care about it).

To give a few examples for population-ethical principles:

  • All else equal, it seems objectionable on other-regarding grounds to create minds that lament their existence.
  • It also seems objectionable, all else equal, to create minds and place them in situations where their interests are only somewhat fulfilled, if one could have provided them with better circumstances.
  • Likewise, it seems objectionable, all else equal, to create minds destined for constant misery, yet with a strict preference for existence over non-existence.

(Note that the first principle is about objecting to the fact of being created, while the latter two principles are about objecting to how one was created.)

We can also ask: Is it ever objectionable to fail to create minds – for instance, in cases where they’d have a strong interest in their existence?

(From a preference-utilitarian perspective, it seems left open whether the creation of some types of minds can be intrinsically important. Satisfied preferences are good because satisfying preferences is just what it means to consider the interests of others. Also counting the interests of not-yet-existent beings is a possible extension of that, but a somewhat peculiar one. The choice looks underdetermined, again.)

Ironically, the perspective I have described becomes very similar to how non-philosophers commonly think about the ethics of having children:

  • Parents are obligated to provide a very high standard of care for their children (universal principle)
  • People are free to decide against becoming parents (personal principle)
  • Parents are free to want to have as many children as possible (personal principle), as long as the children are happy in expectation (universal principle)
  • People are free to try to influence other people’s stances and parenting choices (personal principle), as long as they remain within the boundaries of what is acceptable in a civil society (universal principle)

Universal principles fall out of considerations about respecting others' interests. Personal principles fall out of considerations about "What are my life goals?"

Personal principles can be inspired by considerations of morality, i.e., they can be about choosing to give stronger weight to universal principles and filling out underdetermined stuff with one's most deeply held moral intuitions. Many people find existence meaningless without dedication to something greater than oneself.

Because there are different types of considerations at play in all of this, there's probably no super-elegant way to pack everything into a single, impartially valuable utility function. There will have to be some messy choices about how to make tradeoffs, but there isn't really a satisfying alternative. Just like people have to choose some arbitrary-seeming percentage of how much caring capacity they dedicate toward self-oriented life goals versus other-regarding ones (insofar as the separation is even clean; it often isn't), we also have to somehow choose how much weight to give to different moral domains, including the considerations commonly discussed under the heading of population ethics, and how they relate to our own life goals and those of other existing people.

Comment by Lukas_Gloor on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-08-06T10:07:41.216Z · EA · GW
The exact thing that Williams calls 'alienating' is the thing that Singer, Yudkowsky, Parfit and many other realists and anti-realists consider to be the most valuable thing about morality! But you can keep this 'alienation' if you reframe morality as being the result of the basic, deterministic operations of your moral reasoning, the same way you'd reframe epistemic or practical reasoning on the anti-realist view. Then it seems more 'external' and less relativistic.

Nice point!

If your goal here is to convince those inclined towards moral realism to see anti-realism as existentially satisfying, I would recommend a different framing of it. I think that framing morality as a 'personal life goal' makes it seem as though it is much more a matter of choice or debate than it in fact is, and will probably ring alarm bells in the mind of a realist and make them think of moral relativism.

Yeah, I think that's a good suggestion. I had a point about "arguments can't be unseen" – which seems somewhat related to the alienation point.

I didn't quite want to imply that morality is just a life goal. There's a sense in which morality is "out there" – it's just more underdetermined than realists think, and maybe more goes into whether or not one feels compelled to dedicate all of one's life to other-regarding concerns.

I emphasize this notion of "life goals" because it will play a central role later on in this sequence. I think it's central to all of normativity. Back when I was a moral realist, I used to say "ethics is about goals" and "everything is ethics." There's this position "normative monism" that says all of normativity is the same thing. I kind of feel this way, except that I think the target criteria can differ between people, and are often underdetermined. (As you point out in some comment, things also depend on which parts of one's psychology one identifies with.)

Comment by Lukas_Gloor on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-08-06T09:36:40.122Z · EA · GW

This discussion continues to feel like the most productive discussion I've had with a moral realist! :)

However, I do think that normative anti-realism is self-defeating, assuming you start out with normative concepts (though not an assumption that those concepts apply to anything). I consider this argument to be step 1 in establishing moral realism, nowhere near the whole argument.

[...]

So the wager argument for normative realism actually goes like this -
2) We have two competing ways of understanding how beliefs are justified. One is where we have anti-realist 'justification' for our beliefs, in purely descriptive terms of what we will probably end up believing given basic facts about how our minds work in some idealised situation. The other is where there are mind-independent facts about which of our beliefs are justified. The latter is more plausible because of 1).

[...]

Either you think some basic epistemic facts have to exist for reasoning to get off the ground and therefore that epistemic anti-realism is self-defeating, or you are an epistemic anti-realist and don't care about the realist's sense of 'self-defeating'. The AI is in the latter camp, but not because of evidence, the way that it's a moral anti-realist (...However, you haven’t established that all normative statements work the same way—that was just an intuition...), but just because it's constructed in such a way that it lacks the concept of an epistemic reason.
So, if this AI is constructed such that irreducibly normative facts about how to reason aren't comprehensible to it, it only has access to argument 1), which doesn't work. It can't imagine 2).

I think I agree with all of this, but I'm not sure, because we seem to draw different conclusions. In any case, I'm now convinced I should have written the AI's dialogue a bit differently. You're right that the AI shouldn't just state that it has no concept of irreducible normative facts. It should provide an argument as well!

What would you reply if the AI used the same structure of arguments against other types of normative realism as it uses against moral realism? This would amount to the following trilemma for proponents of irreducible normativity (using section headings from my text):

(1) Is irreducible normativity about super-reasons?

(2) Is (our knowledge of) irreducible normativity confined to self-evident principles?

(3) Is there a speaker-independent normative reality?

I think you're inclined to agree with me that (1) and (2) are unworkable or not worthy of the term "normative realism." Also, it seems like there's a weak sense in which you agree with the points I made in (3), as it relates to the domain of morality.

But maybe you only agree with my points in (3) in a weak sense, whereas I consider the arguments in that section to have stronger implications. The way I think about it, the points in (3) apply to all domains of normativity, and they show that unless we come up with some other way to make normative concepts work that I haven't yet thought of, we are forced to accept that normative concepts, in order to be action-guiding and meaningful, have to be linked to claims about convergence among human expert reasoners. Doesn't this pin down the concept of irreducible normativity in a way that blocks any infinite wagers? It doesn't feel like proper non-naturalism anymore once you postulate this link as a conceptual necessity. "Normativity" becomes a much more mundane concept once we accept this link.

However, I think that we humans are in a situation where 2) is open to consideration, where we have the concept of a reason for believing something, but aren't sure if it applies - and if we are in that situation, I think we are dragged towards thinking that it must apply, because otherwise our beliefs wouldn't be justified.

The trilemma applies here as well. Saying that it must apply still leaves you with the task of making up your mind on how normative concepts even work. I don't see alternatives to my suggestions (1), (2) and (3).

What I'm giving here is such a 'partners-in-crime' argument with a structure, with epistemic facts at the base. Realism about normativity certainly should lower the burden of proof on moral realism to prove total convergence now, because we already have reason to believe normative facts exist. For most anti-realists, the very strongest argument is the 'queerness argument' that normative facts are incoherent or too strange to be allowed into our ontology. The 'partners-in-crime'/'infinite wager' undermines this strong argument against moral realism. So some sort of very strong hint of a convergence structure might be good enough - depending on the details.

Since I don't think we have established anything interesting about normative facts, the only claim I see in the vicinity of what you say in this paragraph would go as follows:

"Since we probably agree that there is a lot of convergence among expert reasoners on epistemic facts, we shouldn't be too surprised if morality works similarly."

And I kind of agree with that, but I don't know how much convergence I would expect in epistemology. (I think it's plausible that it would be higher than for morality, and I do agree that this is an argument to at least look really closely for ways of bringing about convergence on moral questions.)

All I'll say is that I don't consider strongly conflicting intuitions in e.g. population ethics to be persuasive reasons for thinking that convergence will not occur. As long as the direction of travel is consistent, and we can mention many positive examples of convergence, the preponderance of evidence is that there are elements of our morality that reach high-level agreement.

I agree with this. My confidence that convergence won't work is based not only on observing disagreements in fundamental intuitions, but also on seeing why people disagree, and on seeing that these disagreements are sometimes "legitimate" because ethical discussions always get stuck in the same places (differences in life goals, which are intertwined with axiology). If people actually thought about what sorts of assumptions are required for the discussions not to get stuck (something like: "all humans would adopt the same broad types of life goals under idealized conditions"), many would probably recognize that those assumptions are extremely strong and counterintuitive. Oddly enough, people often don't seem to think that far because they self-identify as moral realists for reasons that don't make any sense. They expect convergence on moral questions because they somehow ended up self-identifying as moral realists, instead of self-identifying as moral realists because they expect convergence.

(I'll maybe make another comment later today to briefly expand on my line of argument here.)

(I say elements because realism is not all-or-nothing - there could be an objective 'core' to ethics, maybe axiology, and much ethics could be built on top of such a realist core - that even seems like the most natural reading of the evidence, if the evidence is that there is convergence only on a limited subset of questions.)

I also agree with that, except that I think axiology is the one place where I'm most confident that there's no convergence. :)

Maybe my anti-realism is best described as "some moral facts exist (in a weak sense as far as other realist proposals go), but morality is underdetermined."

(I thought "anti-realism" was the best description for my view, because as I discussed in this comment, the way in which I treat normative concepts takes away the specialness they have under non-naturalism. Even some non-naturalists claim that naturalism isn't interesting enough to be called "moral realism." And insofar as my position can be characterized as naturalism, it's still underdetermined in places where it matters a lot for our ethical practice.)

Belief in God, or in many gods, prevented the free development of moral reasoning. Disbelief in God, openly admitted by a majority, is a recent event, not yet completed. Because this event is so recent, Non-Religious Ethics is at a very early stage. We cannot yet predict whether, as in Mathematics, we will all reach agreement. Since we cannot know how Ethics will develop, it is not irrational to have high hopes.

When I read a similar passage at the end of Parfit's Reasons and Persons (which may even have included this quote?), I shared Parfit's view. But I've done a lot of thinking since then. At some point one also has to drastically increase one's confidence that further game-changing considerations won't show up, especially once one's map of the option space feels complete, self-contained, and intellectually satisfying.

Comment by Lukas_Gloor on Lukas_Gloor's Shortform · 2020-07-27T14:57:56.302Z · EA · GW

[Is pleasure ‘good’?]

What do we mean by the claim “Pleasure is good”?

There’s an uncontroversial interpretation and a controversial one.

Vague and uncontroversial claim: When we say that pleasure is good, we mean that all else equal, pleasure is always unobjectionable, and often it is desired.

Specific and controversial claim: When we say that pleasure is good, what we mean is that, all else equal, pleasure is an end we should be striving for. This captures points like:

  • that pleasure is in itself desirable,
  • that no mental states without pleasure are in themselves desirable,
  • that more pleasure is always better than less pleasure.

People who say “pleasure is good” claim that we can establish this by introspection about the nature of pleasure. I don’t see how one could establish the specific and controversial claim from mere introspection. After all, even if I personally valued pleasure in the strong sense (I don’t), I couldn’t, with my own introspection, establish that everyone does the same. People’s psychologies differ, and how pleasure is experienced in the moment doesn’t fully determine how one will relate to it. Whether one wants to dedicate one’s life (or, for altruists, at least the self-oriented portions of one's life) to pursuing pleasure depends on more than just what pleasure feels like.

Therefore, I think pleasure is only good in the weak sense. It’s not good in the strong sense.

Comment by Lukas_Gloor on Lukas_Gloor's Shortform · 2020-07-27T14:53:39.939Z · EA · GW

[Are underdetermined moral values problematic?]

If I think my goals are merely uncertain, but in reality they are underdetermined and the contributions I make to shaping the future will be driven, to a large degree, by social influences, ordering effects, lock-in effects, and so on, is that a problem?

I can’t speak for others, but I’d find it weird. I want to know what I’m getting up for in the morning.

On the other hand, there's a sense in which underdetermined values are beneficial: they make it easier for the community to coordinate and pull things in the same direction.

Comment by Lukas_Gloor on Lukas_Gloor's Shortform · 2020-07-27T14:50:44.321Z · EA · GW

[Takeaways from Covid forecasting on Metaculus]

I’m probably going to win the first round of the Li Wenliang forecasting tournament on Metaculus, or maybe get second. (My screen name shows up in second on the leaderboard, but it’s a glitch that’s not resolved yet because one of the resolutions depends on a strongly delayed source.) (Update: I won it!)

With around 52 questions, this was the largest forecasting tournament on the virus. It ran from late February until early June.

I learned a lot during the tournament. Besides claiming credit, I want to share some observations and takeaways from this forecasting experience, inspired by Linch Zhang's forecasting AMA:

  • I did well at forecasting, but it came at the expense of other things I wanted to do. In February, March and April, Covid had completely absorbed me. I spent several hours per day reading news and had anxiety about regularly updating my forecasts. This was exhausting; I was relieved when the tournament came to an end.
  • I had previously dabbled in AI forecasting. Unfortunately, I can’t tell if I excelled at it because the Metaculus domain for it went dormant. In any case, I noticed that I felt more motivated to delve into Covid questions because they seemed more connected. It felt like I was not only learning random information to help me with a single question, but I was acquiring a kind of expertise. (Armchair epidemiology? :P ) I think this impression was due to a mixture of perhaps suboptimal question design for the AI Metaculus domain and the increased difficulty of picking up useful ML intuitions on the go.
  • One thing I think I’m good at is identifying reasons why past trends might change. I’m always curious to understand the underlying reasons behind some trends. I come up with lots of hypotheses because I like the feeling of generating a new insight. I often realized that my hunches were wrong, but in the course of investigating them, I improved my understanding.
  • I have an aversion to making complex models. I always feel like model uncertainty is too large anyway. When forecasting Covid cases, I mostly looked for countries where similar situations have already played out. Then, I’d think about factors that might be different with the new situation, and make intuition-based adjustments in the direction predicted by the differences.
  • I think my main weakness is laziness. Occasionally, when there's an easy way to do it, I'd spot-check hypotheses by making predictions about past events that I hadn't yet read about. However, I don't do this nearly enough. Also, I rely too much on factoids I picked up from somewhere without verifying how accurate they are. For instance, I had it stuck in my head that someone said that the case doubling rate was 4 days. So, I operated with this assumption for many days of forecasting, before realizing that it was actually looking like 2.5 days in densely populated areas and that I should have spent more time looking firsthand into this crucial variable anyway (a sketch after this list illustrates how much that single assumption matters). Lastly, I noticed a bunch of times that other forecasters were talking about issues I didn't have a good grasp on (e.g., test-positivity rates), and I felt that I'd probably improve my forecasting if I looked into them, but I preferred to stick with approaches I was more familiar with.
  • IT skills really would have helped me generate forecasts faster. I had to do crazy things with pen and paper because I lacked them. (But none of what I did involved more than elementary-school math.)
  • I learned that confidently disagreeing with the community forecast is different from “not confidently agreeing.” I lost a bunch of points twice due to underconfidence. In cases where I had no idea about some issue and saw the community predict <10%, I didn’t want to go <20% because that felt inappropriate given my lack of knowledge about the plausible-sounding scenario. I couldn't confidently agree with the community, but since I also didn't confidently disagree with them, I should have just deferred to their forecast. Contrarianism is a valuable skill, but one also has to learn to trust others in situations where one sees no reason not to.
  • I realized early that when I changed my mind on some consideration that initially had me predict differently from the community median, I should make sure to update thoroughly. If I no longer believe my initial reason for predicting significantly above the median, maybe I should go all the way to slightly below the median next. (The first intuition is to just move closer to it but still stay above.)
  • From playing a lot of poker, I have the habit of imagining that I make some bet (e.g., a bluff or thin value bet) and it will turn out that I’m wrong in this instance. Would I still feel good about the decision in hindsight? This heuristic felt very useful to me in forecasting. It made me reverse initially overconfident forecasts when I realized that my internal assumptions didn’t feel like something I could later on defend as “It was a reasonable view at the time.”
  • I made a couple of bad forecasts after I stopped following developments every day. I realized I needed to re-calibrate how much to trust my intuitions once I no longer had a good sense of everything that was happening.
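
To illustrate the doubling-time point above, here's a minimal sketch with made-up numbers (not the figures I actually worked with at the time): it shows how much the assumed doubling time matters when projecting cases just two weeks ahead.

```python
# Back-of-the-envelope sketch with made-up numbers: project confirmed cases
# forward under two different assumed doubling times.

def project_cases(current_cases: float, doubling_time_days: float, horizon_days: float) -> float:
    """Project case counts forward assuming unchanged exponential growth."""
    return current_cases * 2 ** (horizon_days / doubling_time_days)

current = 1_000   # hypothetical current confirmed cases
horizon = 14      # days ahead

for doubling_time in (4.0, 2.5):
    projected = project_cases(current, doubling_time, horizon)
    print(f"doubling time {doubling_time} days -> ~{projected:,.0f} cases in {horizon} days")

# Output (approximately):
#   doubling time 4.0 days -> ~11,314 cases in 14 days
#   doubling time 2.5 days -> ~48,503 cases in 14 days
# Mistaking a 2.5-day doubling time for a 4-day one leaves the projection low
# by a factor of roughly 4 after just two weeks.
```

Even elementary arithmetic like this, done in a few lines of code rather than on paper, makes it much quicker to sanity-check a factoid before building days of forecasts on it.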

Some things I was particularly wrong about:

  • This was well before I started predicting on Metaculus, but up until about February 5th, I was way too pessimistic about the death rate for young healthy people. I think I lacked the medical knowledge to have the right prior about how strongly age-skewed most illnesses are, and therefore updated too strongly upon learning about the deaths of two young healthy Chinese doctors.
  • Like others, I overestimated the importance of hospital overstrain. I assumed that this would make the infection fatality rate about 1.5x–2.5x worse in countries that don’t control their outbreaks. This didn’t happen.
  • I was somewhat worried about food shortages initially, and was surprised by the resilience of the food distribution chains.
  • I expected more hospitalizations in Sweden in April.
  • I didn’t expect the US to put >60 countries on the level-3 health warning travel list. I was confident that they would not do this, because “If a country is gonna be safer than the US itself, why not let your citizens travel there??”
  • I was nonetheless too optimistic about the US getting things under control eventually, even though I saw comments from US-based forecasters who were more pessimistic.
  • My long-term forecasts for case numbers tended to be somewhat low. (Perhaps this was in part related to laziness; the Metaculus interface made it hard to create long tails for the distribution.)

Some things I was particularly right about:

  • I was generally early to recognize the risks from novel coronavirus / Covid.
  • For European countries and the US initially, I expected lockdown measures to work roughly as well as they did. I confidently predicted lower numbers than the community for the effects of the first peak.
  • I somewhat confidently ruled out IFR estimates <0.5% in early March already, and I think this was for good reasons, even though I continued to accumulate better evidence for my IFR predictions later and was wrong about the effects of hospital overstrain.
  • I very confidently doubled down against <0.5% IFR estimates in late March, despite the weird momentum that developed around taking them seriously, and the confusion about the percentage of asymptomatic cases.
  • I have had very few substantial updates since mid March. I predicted the general shape of the pandemic quite well, e.g. here or here.
  • I confidently predicted that the UK and the Netherlands (later) would change course about their initial “no lockdown” policy.
  • I noticed early that Indonesia had a large undetected outbreak. A couple of days after I predicted this, the deaths there jumped from 1 to 5 and its ratio of confirmed cases to deaths became the worst (or second worst?) in the world at the time.

(I have stopped following the developments closely by now.)

Comment by Lukas_Gloor on Lukas_Gloor's Shortform · 2020-07-27T14:40:15.316Z · EA · GW

[When thinking about what I value, should I take peer disagreement into account?]

Consider the question “What’s the best career for me?”

When we think about choosing careers, we don't update to the career choice of the smartest person we know or the person who has thought the most about their career. Instead, we seek out people who have approached career choice with a similar overarching goal/framework (in my case, 80,000 Hours is a good fit), and we look toward the choices of people with similar personalities (in my case, I notice a stronger personality overlap with researchers than with managers, operations staff, or those earning to give).

When it comes to thinking about one’s values, many people take peer disagreement very seriously.

I think that can be wise, but it shouldn't be done unthinkingly. I believe that the quest to figure out one's values shares strong similarities with the quest of figuring out one's ideal career. Before deferring to others with your deliberations, I recommend making sure that those others are asking the same questions (not everything that comes with the label "morality" is the same) and that they are psychologically similar to you in the ways that seem fundamental to what you care about as a person.