Problems of evil 2021-04-19T08:05:58.893Z
The innocent gene 2021-04-05T03:26:16.961Z
The importance of how you weigh it 2021-03-29T04:58:17.862Z
On future people, looking back at 21st century longtermism 2021-03-22T08:21:04.205Z
Against neutrality about creating happy lives 2021-03-15T01:54:22.612Z
Care and demandingness 2021-03-08T06:59:27.554Z
Subjectivism and moral authority 2021-03-01T08:59:29.742Z
Two types of deference 2021-02-22T03:27:32.368Z
Contact with reality 2021-02-15T04:53:34.381Z
Killing the ants 2021-02-07T23:16:50.147Z
Believing in things you cannot see 2021-02-01T07:24:16.051Z
On clinging 2021-01-24T23:25:41.907Z
A ghost 2021-01-21T07:14:14.326Z
Actually possible: thoughts on Utopia 2021-01-18T08:27:32.025Z
Alienation and meta-ethics (or: is it possible you should maximize helium?) 2021-01-15T07:06:54.124Z
The impact merge 2021-01-13T07:26:47.630Z
Shouldn't it matter to the victim? 2021-01-11T07:14:20.069Z
Thoughts on personal identity 2021-01-08T04:19:09.765Z
Grokking illusionism 2021-01-06T05:50:07.646Z
The despair of normative realism bot 2021-01-03T22:59:06.126Z
Thoughts on being mortal 2021-01-01T19:16:55.944Z
Wholehearted choices and "morality as taxes" 2020-12-21T19:35:58.437Z


Comment by Joe_Carlsmith on Problems of evil · 2021-04-20T05:11:45.826Z · EA · GW

Sounds right to me. Per a conversation with Aaron a while back, I've been relying on the moderators to tag posts as personal blog, and had been assuming this one would be.

Comment by Joe_Carlsmith on The importance of how you weigh it · 2021-04-08T06:03:51.832Z · EA · GW

Glad to hear you found it helpful. Unfortunately, I don't think I have a lot to add at the moment re: how to actually pursue moral weighting research, beyond what I gestured at in the post (e.g., trying to solicit lots of your own/other people's intuitions across lots of cases, trying to make them consistent, that kind of thing). Re: articles/papers/posts, you could also take a look at GiveWell's process here, and the moral weight post from Luke Muehlhauser I mentioned has a few references at the end that might be helpful (though most of them I haven't engaged with myself). I'll also add, FWIW, that I actually think the central point in the post is more applicable outside of the EA community than inside it, as I think of EA as fairly "basic-set oriented" (though there are definitely some questions in EA where weightings matter).

Comment by Joe_Carlsmith on Against neutrality about creating happy lives · 2021-03-18T09:20:59.574Z · EA · GW

Hi Michael — 

I meant, in the post, for the following paragraphs to address the general issue you mention: 

Some people don’t think that gratitude of this kind makes sense. Being created, we might say, can’t have been “better for” me, because if I hadn’t been created, I wouldn’t exist, and there would be no one that Wilbur’s choice was “worse for.” And if being created wasn’t better for me, the thought goes, then I shouldn’t be grateful to Wilbur for creating me.

Maybe the issues here are complicated, but at a high level: I don’t buy it. It seems to me very natural to see Wilbur as having done, for me, something incredibly significant — to have given me, on purpose, something that I value deeply. One option, for capturing this, is to say that something can be good for me, without being “better” for me (see e.g. McMahan (2009)). Another option is just to say that being created is better for me than not being created, even if I only exist — at least concretely — in one of the cases. Overall, I don’t feel especially invested in the metaphysics/semantics of “good for” and “better for” in this sort of case. I don’t have a worked out account of these issues, but neither do I see them as especially forceful reason not to be glad that I’m alive, or grateful to someone who caused me to be so.

That is, I don’t take myself to be advocating directly for comparativism here (though a few bits of the language in the post, in particular the reference to “better off dead,” do suggest that). As the quoted paragraphs note, comparativism is one option; another is to say that creating me is good for me, even if it’s not better for me (a la McMahan). 

FWIW, though, I do currently feel intuitively open/sympathetic to comparativism, partly because it seems plausible that we can truly say things like "Joe would prefer to live rather than not to live," even if Joe doesn't and never will exist; and clear that we can truly say "Joe prefers to live" in worlds where he does exist; and I tend to think about treating people well as centrally about being responsive to what they care about/would care about. But I haven't tried to dig in on this stuff, partly because I see things like being glad I'm alive, and grateful to someone who caused me to be so, as on more generally solid ground than things like "betterness for Joe is a relation that requires two concrete Joe lives as relata" (see e.g. the Menagerie argument in Hilary's powerpoint, p. 13, for the type of thing that makes me think that metaphysical premises like that aren't a "super solid ground" type area). 

At a higher level, though: the point I'm arguing against is specifically that the neutrality intuition is directly intuitive. I don't see it that way, and the point of "poetically tugging at people's intuitions" was precisely to try to illustrate and make vivid the intuitive situation as I see it. But as I note at the end — e.g., "direct intuitions about neutrality aren't the only data available" — it's a further question whether there is more to be said for neutrality overall (indeed, I think there is — though metaphysical issues like the ones you mention aren't very central for me here). That said, I tend to see much of person-affecting ethics as driven at least in substantial part by appeal to direct intuition, so I do think it would change the overall dialectical landscape a bit if people come in going "intuitively, we have strong reasons to create happy lives. But there are some metaphysical/semantic questions about how to make sense of this…" 

Comment by Joe_Carlsmith on Contact with reality · 2021-02-18T06:41:14.505Z · EA · GW

Thanks! Re: mental manipulation, do you have similar worries even granted that you’ve already been being manipulated in these ways? We can stipulate that there won’t be any increase in the manipulation in question, if you stay. One analogy might be: extreme cognitive biases that you’ve had all along. They just happen to be machine-imposed. 

That said, I don’t think this part is strictly necessary for the thought experiment, so I’m fine with folks leaving it out if it trips them up.

Comment by Joe_Carlsmith on On clinging · 2021-02-01T08:58:32.492Z · EA · GW

Glad to hear you enjoyed it. 

I haven't engaged much with tranquilism. Glancing at that piece, I do think that the relevant notions of "craving" and "clinging" are similar; but I wouldn't say, for example, that an absence of clinging makes an experience as good as it can be for someone.

Comment by Joe_Carlsmith on Actually possible: thoughts on Utopia · 2021-01-25T07:31:48.954Z · EA · GW

Thanks :). I haven't thought much about personal universes, but glancing at the paper, I'd expect resource-distribution, for example, to remain an issue.

Comment by Joe_Carlsmith on Alienation and meta-ethics (or: is it possible you should maximize helium?) · 2021-01-20T08:33:40.456Z · EA · GW

Glad to hear it :)

Re: "my motivational system is broken, I'll try to fix it" as the thing to say as an externalist realist: I think this makes sense as a response. The main thing that seems weird to me is the idea that you're fundamentally "cut off" from seeing what's good about helium, even though there's nothing you don't understand about reality. But it's a weird case to imagine, and the relevant notions of "cut off" and "understanding" are tricky.

Comment by Joe_Carlsmith on Alienation and meta-ethics (or: is it possible you should maximize helium?) · 2021-01-16T09:58:59.860Z · EA · GW

Thanks for reading. Re: your version of anti-realism: is "I should create flourishing (or whatever your endorsed theory says)" in your mouth/from your perspective true, or not truth-apt? 

To me Clippy's having or not having a moral theory doesn't seem very central. E.g., we can imagine versions in which Clippy (or some other human agent) is quite moralizing, non-specific, universal, etc about clipping, maximizing pain, or whatever.