Posts

antimonyanthony's Shortform 2020-09-19T16:05:02.590Z

Comments

Comment by antimonyanthony on Exploring a Logarithmic Tolerance of Suffering · 2021-04-12T18:38:57.767Z · EA · GW

Personally I still wouldn't consider it ethically acceptable to, say, create a being experiencing a -100-intensity torturous life provided that a life with exp(100)-intensity happiness is also created, even after trying hard to account for possible scope neglect. Going from linear to log here doesn't seem to address the fundamental asymmetry. But I appreciate this post, and I suspect quite a few longtermists who don't find stronger suffering-focused views compelling would be sympathetic to a view like this one - and the implications for prioritizing s-risks versus extinction risks seem significant.

Comment by antimonyanthony on Spears & Budolfson, 'Repugnant conclusions' · 2021-04-08T15:00:20.342Z · EA · GW

But of course the A and Z populations are already impossible, because we already have present and past lives that aren't perfectly equal and aren't all worth living. So -- even setting aside possible boundedness on the number of lives -- the RC has always fundamentally been about comparing undeniably impossible populations.

I don't find this a compelling response to Guillaume's objection. There seems to be a philosophically relevant difference between physical impossibility of the populations, and metaphysical impossibility of the axiological objects. We study population ethics because we expect our decisions about the trajectory of the long-term future to approximate the decisions involved in these thought experiments. So the point is that NU would not prescribe actions with the general structure of "choose a future with arbitrarily many torturous lives and a sufficiently large number of slightly more happy than suffering lives [regardless of whether we call these positive utility lives], over a future with arbitrarily many perfectly happy lives," but these other axiologies would. (ETA: As Michael noted, there are other intuitively unpalatable actions that NU would prescribe too. But the whole message of this paper is that we need to distinguish between degrees of repugnance to make progress, and for some, the VRC is more repugnant than the conclusions of NU.)

Comment by antimonyanthony on How to PhD · 2021-03-30T23:47:25.424Z · EA · GW

You will find yourself justifying the stupidest shit on impact grounds, and/or pursuing projects which directly make the world worse.

Could you be a bit more specific about this point? This sounds very field-dependent.

Comment by antimonyanthony on Proposed Longtermist Flag · 2021-03-26T00:08:41.427Z · EA · GW

I downvoted for reasons similar to Stefan's comment: longtermism is not synonymous with a focus on x-risk and space colonization, and the black bar symbolism creates that association. In EA discourse, I have observed consistent conflation of longtermism with this particular subset of longtermist priorities, and I'd like to strongly push back against that. (I believe I would feel the same even if my priorities aligned with that subset.)

Comment by antimonyanthony on On future people, looking back at 21st century longtermism · 2021-03-23T23:58:19.317Z · EA · GW

But we should care about individual orangutans, & it seems plausible to me that they care whether they go extinct. Large parts of their lives are after all centered around finding mates & producing offspring. So to the extent that anything is important to them (& I would argue that things can be just as important to them as they can be to us), surely the continuation of their species/bloodline is.

I'm pretty skeptical of this claim. It's not evolutionarily surprising that orangutans (or humans!) would do stuff that decreases their probability of extinction, but this doesn't mean the individuals "care" about the continuation of their species per se. Seems we only have sufficient evidence to say they care about doing the sorts of things that tend to promote their own (and relatives', proportional to strength of relatedness) survival and reproductive success, no?

Comment by antimonyanthony on Against neutrality about creating happy lives · 2021-03-19T03:28:02.529Z · EA · GW

For NU (including lexical threshold NU), this can mean adding an arbitrarily huge number of new people to hell to barely reduce the suffering for each person in a sufficiently large population already in hell. (And also not getting the very positive lives, but NU treats them as 0 welfare anyway.)

This may be counterintuitive to an extent, but to me it doesn't reach "very repugnant" territory. Misery is still reduced here; an epsilon change of the "reducing extreme suffering" sort, even if barely so, doesn't seem morally frivolous in the way that the creation of an epsilon-happy life or, worse, of an epsilon roller coaster life does. But I'll have to think about this more. It's a good point, thanks for bringing it to my attention.

Comment by antimonyanthony on Against neutrality about creating happy lives · 2021-03-19T00:31:26.682Z · EA · GW

I am also interested by the claim in this paper that the repugnant conclusion afflicts all population axiologies, including person-affecting views

Not negative utilitarian axiology. The proof relies on the assumption that the utility variable u can be positive.

What if "utility" is meant to refer to the objective aspects of the beings' experience etc. that axiologies would judge as good or bad—rather than to moral goodness or badness themselves? Then I think there are two problems:

  • 1) Supposing it's a fair move to aggregate all these aspects into one scalar, the theorem assumes the function f must be strictly increasing. Under this interpretation the NU function would be f(u) = min(u, 0), which is only weakly increasing, so the theorem's assumption fails for it.
  • 2) I deny that such aggregation is even a reasonable move. Restricting to hedonic welfare for simplicity, it would be more appropriate for f to be a function of two variables, happiness and suffering (see the sketch below). Collapsing this into a scalar input, I think, obscures some massive moral differences between different formulations of the Repugnant Conclusion, for example. Interestingly, though, if we formulate the VRC as in that paper by treating all positive values of u as "only happiness, no suffering" and all negative values as "only suffering, no happiness" (thereby making my objection on this point irrelevant), the theorem still goes through for all those axiologies. But not for NU.
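
A minimal sketch of the two-argument form I have in mind, with happiness h ≥ 0 and suffering s ≥ 0 kept as separate inputs (my notation, not the paper's):

    f_NU(h, s) = -s          (negative utilitarianism)
    f_CU(h, s) = h - s       (classical total utilitarianism)

Collapsing (h, s) into a single scalar u throws away exactly the distinction these two functions disagree about.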

Edit: The paper seems to acknowledge point #2, though not the implications for NU:

One way to see that a ε increase could be very repugnant is to recall Portmore’s (1999) suggestion that ε lives in the restricted RC could be “roller coaster” lives, in which there is much that is wonderful, but also much terrible suffering, such that the good ever-so-slightly outweighs the bad. Here, one admitted possibility is that an ε-change could substantially increase the terrible suffering in a life, and also increase good components; such a ε-change is not the only possible ε-change, but it would have the consequence of increasing the total amount of suffering. ... Moreover, if ε-changes are of the “roller coaster” form, they could increase deep suffering considerably beyond even the arbitrarily many [u < 0] lives, and in fact could require everyone in the chosen population to experience terrible suffering.

Comment by antimonyanthony on Against neutrality about creating happy lives · 2021-03-15T19:13:51.387Z · EA · GW

I guess it was unclear that here I was assuming that the creator knows with certainty all the evaluative contents of the life they're creating. (As in the Wilbur and Michael thought experiments.) I would be surprised if anyone disagreed that creating a life you know won't be worth living, assuming no other effects, is wrong. But I'd agree that the claim about lives not worth living in expectation isn't uncontroversial, though I endorse it.

[edit: Denise beat me to the punch :)]

Comment by antimonyanthony on Against neutrality about creating happy lives · 2021-03-15T17:37:05.604Z · EA · GW

[Apologies for length, but I think these points are worth sharing in full.]

As someone who is highly sympathetic to the procreation asymmetry, I have to say, I still found this post quite moving. I’ve had, and continue to have, joys profound enough to know the sense of awe you’re gesturing at. If there were no costs, I’d want those joys to be shared by new beings too.

Unfortunately, assuming that we’re talking about practically relevant cases where creating a "happy" life also entails suffering of the created person and other beings, there are costs in expectation. (I assume no one has moral objections to creating utterly flawless lives, so the former is the sense in which I read "neutrality." See also this comment. Please let me know if I've misunderstood.) And I find those costs qualitatively more serious than the benefits. Let me see if I can convey where I’m coming from.

I found it surprising that you wrote:

I have refrained, overall, from framing the preceding discussion in specifically moral terms — implying, for example, that I am obligated to create Michael, instead of going on my walk. I think I have reasons to create Michael that have to do with the significance of living for Michael; but that’s not yet to say, for example, that I owe it to Michael to create him, or that I am wronging Michael if I don’t.

Because to me this is exactly the heart of the asymmetry. It’s uncontroversial that creating a person with a bad life inflicts on them a serious moral wrong. Those of us who endorse the asymmetry don’t see such a moral wrong involved in not creating a happy life. (If one is a welfarist consequentialist, a fortiori this calls into question the idea that the uncreated happy person is "wronged" in any prudential sense.)

To flesh that out a bit: You acknowledged, in sketching out Michael’s hypothetical life, these pains:

 I see a fight with that same woman, a sense of betrayal, months of regret. … I see him on his deathbed … cancer blooming in his stomach

When I imagine the prospect of creating Michael, these moments weigh pretty gravely. I feel the pang of knowing just how utterly crushing a conflict with the most important person in one’s life can be; the pit in the gut, the fear, shock, and desperation. I haven’t had cancer, but I at least know the fear of death, and can only imagine it gets more haunting when one actually expects to die soon. By all reports, cancer is clearly a fate I couldn’t possibly wish on anyone, and suffering it slowly in a hospital sounds nothing short of harrowing.

I simply can't comprehend creating those moments in good conscience, short of preventing greater pain broadly construed. It seems cruel to do so. By contrast, although Michael-while-happy would feel grateful to exist, it doesn’t seem cruel to me at all to not invite his nonexistent self to the "party," in your words. As you acknowledge, the objection is that "if [he] hadn’t been created, [he] wouldn’t exist, and there would be no one that [my] choice was ‘worse for.’" I don’t see a strong enough reason to think the Michael-while-happy experiences override the Michael-while-miserable experiences, given the difference in moral gravity. It seems cold comfort to tell the moments of Michael that beg for relief, "I’m sorry for the pain I gave you, but it's worth it for the party to come."

I feel inclined, not to "disagree" with them, but rather to inform them that they are wrong

Likewise I feel inclined to inform the Michael-creators that they are wrong, in implicitly claiming that the majority vote of Michael-while-happy can override the pleas of Michael-while-miserable. Make no mistake, I abhor scope neglect. But this is no more a question of ignoring numbers than is refusing to torture a person for any number of beautiful lifeless planets created in a corner of the universe where no one could ever observe them. It's about prioritizing needs over wants, the tragic over the precious.

Lastly, you mention the golden rule as part of your case. I personally would not want to be forced by anyone - including my past self, who often acts in a state of myopia and doesn't remember how awful the worst moments are - to suffer terribly because they judged it was worth it for the goods in life.

I do of course have some moral uncertainty on this. There are some counterintuitive implications to the view I sketched here. But I wouldn't say this is an unnecessary addition to the hardness of population ethics.

Comment by antimonyanthony on Layman’s Summary of Resolving Pascallian Decision Problems with Stochastic Dominance · 2021-03-13T19:58:24.034Z · EA · GW

While I think this is a fascinating concept, and probably pretty useful as a heuristic in the real, hugely uncertain world, I don't think it addresses the root of the decision-theoretic puzzles here. I - and I suspect most people? - want decision theory to give an ordering over options even assuming no background uncertainty, which stochastic dominance can't provide on its own. If option A is a 100% chance of -10 utility, and option B is a 50% chance of -10^20 utility and otherwise 0, it seems obvious to me that B is a very, very terrible choice, one that shouldn't be rationally permitted. But in a world with no background uncertainty, A would not stochastically dominate B.
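
For concreteness, here is a minimal sketch (my own illustration, not taken from the linked post) of checking first-order stochastic dominance between those two options, treating each option as a discrete lottery over utilities:

    def cdf(lottery, x):
        """P(outcome <= x) for a lottery given as {outcome: probability}."""
        return sum(p for outcome, p in lottery.items() if outcome <= x)

    def dominates(a, b):
        """True iff lottery a first-order stochastically dominates lottery b."""
        points = sorted(set(a) | set(b))
        return (all(cdf(a, x) <= cdf(b, x) for x in points)
                and any(cdf(a, x) < cdf(b, x) for x in points))

    A = {-10: 1.0}              # option A: certain -10
    B = {-10**20: 0.5, 0: 0.5}  # option B: 50% chance of -10^20, otherwise 0

    print(dominates(A, B), dominates(B, A))  # False False: neither dominates the other

Since neither option dominates, stochastic dominance on its own permits either choice here; as I understand it, that is exactly the gap the background-uncertainty move is meant to fill.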

Comment by antimonyanthony on antimonyanthony's Shortform · 2021-02-24T03:53:20.185Z · EA · GW

Wow, that's promising news! Thanks for sharing.

Comment by antimonyanthony on Bob Jacobs's Shortform · 2021-02-13T23:27:19.387Z · EA · GW

What if there's a small hedonic cost to creating the beautiful world? Suppose option 1 is "Creating a stunningly beautiful world that is uninhabited and won’t influence sentient beings in any way, plus giving a random person a headache for an hour."

In that case I can't really see a moral case for choosing option 1, no matter how stunningly beautiful the world in question is. This would suggest that even if there is some intrinsic value to beauty, it's extremely small if not lexically inferior to the value of hedonics. I think for basically all practical purposes we do face tradeoffs between hedonic and other purported values, and I just don't feel the moral force of the latter in those cases.

Comment by antimonyanthony on antimonyanthony's Shortform · 2021-02-13T17:39:01.410Z · EA · GW

Some reasons not to primarily argue for veganism on health/climate change grounds

I've often heard animal advocates claim that since non-vegans are generally more receptive to arguments from health benefits and reducing climate impact, we should prioritize those arguments, in order to reduce farmed animal suffering most effectively.

On its face, this is pretty reasonable, and I personally don't care intrinsically about how virtuous people's motivations for going vegan are. Suffering is suffering, no matter its sociological cause.

But there are some reasons I'm nervous about this approach, at least if it comes at the opportunity cost of moral advocacy. None of these are original to me, but I want to summarize them here since I think this is a somewhat neglected point:

  1. Plausibly many who are persuaded by the health/CC arguments won't want to make the full change to veganism, so they'll swap beef for chicken and fish, which are evidently less bad for one's health and for CC risk. But because these animals are so small and have fewer welfare protections, this switch causes a lot more suffering per calorie. More speculatively, there could be a switch to insect consumption.
  2. Health/CC arguments don't apply to reducing wild animal suffering, and indeed emphasizing environmental motivations for going vegan might exacerbate support for conservation for its own sake, independent of individual animals' welfare. (To be fair, moral arguments can also backfire if the emphasis is on general care for animals, rather than specifically preventing extreme suffering.)
  3. Relatedly, health/CC arguments don't motivate one to oppose other potential sources of suffering in voiceless sentient beings, like reckless terraforming and panspermia, or unregulated advanced simulations. This isn't to say all anti-speciesists will make that connection, but caring about animals themselves rather than avoiding exploiting them for human-centric reasons seems likely to increase concern for other minds.
  4. While the evidence re: CC seems quite robust, nutrition science is super uncertain and messy. Based on both this prior about the field and suspicious convergence concerns, I'd be surprised if a scientific consensus established veganism as systematically better for one's health than alternatives. That said, I'd also be very surprised about a consensus that it's worse, and clearly even primarily ethics-based arguments for veganism should also clarify that it's feasible to live (very) healthily on a vegan diet.

Comment by antimonyanthony on [Podcast] Ajeya Cotra on worldview diversification and how big the future could be · 2021-01-23T20:19:36.724Z · EA · GW

Thank you for writing this critique, it was a thought I had while listening as well. In my experience many EAs make the same mistake, not just Ajeya.

Comment by antimonyanthony on antimonyanthony's Shortform · 2021-01-21T19:15:44.216Z · EA · GW

Linkpost: "Tranquilism Respects Individual Desires"

I wrote a defense of an axiology on which an experience is perfectly good to the extent that it is absent of craving for change. This defense follows in part from a reductionist view of personal identity, which is usually considered in EA circles to be in support of total symmetric utilitarianism, but I argue that this view lends support to a form of negative utilitarianism.

Comment by antimonyanthony on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-01-17T03:25:22.156Z · EA · GW

The problem is that one man's modus ponens is another man's modus tollens.

Fair :) I admit I'm apparently unusually inclined to the modus ponens end of these dilemmas.

If there's a part of a theory that is of very little practical use, but is still seen as a strong point against the theory, we should try find a version without it.

I think this depends on whether the version without it is internally consistent. But more to the point, the question about the value of strangers does seem practically relevant. It influences how much you're willing to effectively donate rather than spend on fancy gifts, for example, giving (far?) greater marginal returns of well-being to strangers than to loved ones. Ironically, if we're not impartial, it seems our loved ones are "utility monsters" in a sense. (Of course, you could still have some nonzero partiality while agreeing that the average person doesn't donate nearly enough.)

I find this as troubling as anyone else who cares deeply about their family and friends, certainly. But I'm inclined to think it's even more troubling that other sentient beings suffer needlessly because of my personal attachments... Ethics need not be easy.

There's also the argument that optimal altruism is facilitated by having some baseline of self-indulgence, to avoid burnout, but 1) I think this argument can be taken too far into the realm of convenient rationalization, and 2) this doesn't require any actual partiality baked into the moral system. It's just that partial attachments are instrumentally useful.

Comment by antimonyanthony on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-01-16T16:27:15.690Z · EA · GW

In particular, it seems hard to make utilitarianism consistent with caring much more about people close to us than strangers.

Why exactly is this a problem? To me it seems more sensible to recognize our disproportionate partiality toward people close to us as an evolutionary bug, rather than a feature. Even though we do  care about people close to us much more, this doesn't mean we actually should regard their interests as overwhelmingly more important than those of strangers (whom we can probably help more cheaply), on critical reflection.

Comment by antimonyanthony on jackmalde's Shortform · 2021-01-09T18:09:45.030Z · EA · GW

Might be outdated, and the selection of papers is probably skewed in favor of welfare reforms, but here's a bibliography on this question.

Comment by antimonyanthony on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-06T04:45:31.074Z · EA · GW

There are some moral intuitions, such as the ‘procreation asymmetry’ (illustrated in the ‘central illustration’ below) that only a person-affecting view can capture.

I don't think this is exactly true. The procreation asymmetry is also consistent with any form of negative consequentialism. I wouldn't classify such views as "person-affecting," since the reason they don't consider it obligatory to create happy people is that they reject the premise that happiness is intrinsically morally valuable, rather than that they assign special importance to badness-for-someone. These views do still have some of the implications you consider problematic in this post, but they're not vulnerable to, for example, Parfit's critiques based on reductionism about personal identity.

Comment by antimonyanthony on Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure · 2020-12-28T22:16:54.711Z · EA · GW

This disanalogy between the x-risk and s-risk definitions is a source of ongoing frustration to me, as s-risk discourse thus often conflates hellish futures (which are existential risks, and especially bad ones), or possibilities of suffering on a scale significant relative to the potential for suffering (or what we might expect), with bad events many orders of magnitude smaller or futures that are utopian by common sense standards and compared to our world or the downside potential.

This is a fair enough critique. But I think that from the perspective of suffering-focused and many other non-total-symmetric-utilitarian value systems, the definition of x-risk is just as frustrating in its breadth. To such value systems, there is a massive moral difference between the badness of human extinction and a locked-in dystopian future, so they are not necessarily in "the same ballpark of importance." The former is only critical to the upside potential of the future if one has a non-obvious symmetric utilitarian conception of (moral) upside potential, or certain deontological premises that are also non-obvious.

Comment by antimonyanthony on Introduction to the Philosophy of Well-Being · 2020-12-11T00:28:46.773Z · EA · GW

A fourth alternative that may be appealing to those who don't find any of these three theories completely satisfying: tranquilism.

Tranquilism states that an individual experiential moment is as good as it can be for her if and only if she has no craving for change.

(You could argue this is a subset of hedonism, in that it is fundamentally concerned with experiences, but there are important differences.)

Comment by antimonyanthony on some concerns with classical utilitarianism · 2020-11-18T01:59:46.608Z · EA · GW

Accepting VRC would be required by CU, in this hypothetical. So, assuming CU, rejecting VRC would need justification.

Yep, this is what I was getting at, sorry that I wasn't clear. I meant "defense of CU against this case."

On the other hand, as Vinding also writes (ibid, 5.6; 8.10), the qualitative difference between extreme suffering and suffering that could be extreme if we push a bit further may still be huge.

Yeah, I don't object to the possibility of this in principle, just noting that it's not without its counterintuitive consequences. Neither is pure NU, or any sensible moral theory in my opinion.

Comment by antimonyanthony on some concerns with classical utilitarianism · 2020-11-18T01:54:11.046Z · EA · GW

Good point. I would say I meant intensity of the experience, which is distinct both from intensity of the stimulus and moral (dis)value. And I also dislike seeing conflation of intensity with moral value when it comes to evaluating happiness relative to suffering.

Comment by antimonyanthony on some concerns with classical utilitarianism · 2020-11-16T19:18:52.007Z · EA · GW

I agree with the critiques in the sections including and after "Implicit Commensurability of (Extreme) Suffering," and would encourage defenders of CU to apply as much scrutiny to its counterintuitive conclusions as they do to NU, among other alternatives. I'd also add the Very Repugnant Conclusion as a case for which I haven't heard a satisfying CU defense. Edit: The utility monster as well seems asymmetric in how repugnant it is when you formulate it in terms of happiness versus suffering. It does seem abhorrent to accept the increased suffering of many for the supererogatory happiness of the one, but if the disutility monster would suffer far more from not getting a given resource than many others would put together, helping the disutility monster seems perfectly reasonable to me.

But I think objecting to aggregation of experience per se, as in the first few sections, is throwing the baby out with the bathwater. Even if you just consider suffering as the morally relevant object, it's quite hard to reject the idea that between (a) 1 million people experiencing a form of pain just slightly weaker than the threshold of "extreme" suffering, and (b) 1 person experiencing pain just slightly stronger than that threshold, (b) is the lesser evil.

Perhaps all the alternatives are even worse, and I have some sympathies for lexical threshold NU, including that arguments against it of the same form as the one I just proposed could just as easily lead to fanaticism, which many near-classical utilitarians reject. And intuitively it does seem there's some qualitative difference between the moral seriousness of torture versus a large number of dust specks. But in general I think aggregation in axiology is much more defensible than classical utilitarianism wholesale.

Comment by antimonyanthony on Please Take the 2020 EA Survey · 2020-11-12T18:42:23.002Z · EA · GW

unless I think that I'm at least as well informed as the average respondent about where this money should go

This applies if your ethics are very aligned with the average respondent, but if not, it is a decent incentive. I'd be surprised if almost all of EAs' disagreement on cause prioritization were strictly empirical.

Comment by antimonyanthony on antimonyanthony's Shortform · 2020-10-10T17:51:08.200Z · EA · GW

5. I do not expect that artificial superintelligence would converge on The Moral Truth by default. Even if it did, the convergence might be too slow to prevent catastrophes. But I also doubt humans will converge on this either. Both humans and AIs are limited by our access only to our "own" qualia, and indeed our own present qualia. The kind of "moral realism" I find plausible with respect to this convergence question is that convergence to moral truth could occur for a perfectly rational and fully informed agent, with unlimited computation and - most importantly - subjective access to the hypothetical future experiences of all sentient beings. These conditions are so idealized that I am probably as pessimistic about AI as any antirealist, but I'm not sure yet if they're so idealized that I functionally am an antirealist in this sense.

Comment by antimonyanthony on antimonyanthony's Shortform · 2020-10-10T17:02:31.069Z · EA · GW

Some vaguely clustered opinions on metaethics/metanormativity

I'm finding myself slightly more sympathetic to moral antirealism lately, but still afford most of my credence to a form of realism that would not be labeled "strong" or "robust." There are several complicated propositions I find plausible that are in tension:

1. I have a strong aversion to arbitrary or ad hoc elements in ethics. Practically this cashes out as things like: (1) rejecting any solutions to population ethics that violate transitivity, and (2) being fairly unpersuaded by solutions to fanaticism that round down small probabilities or cap the utility function.

2. Despite this, I do not intrinsically care about the simplicity of a moral theory, at least for some conceptions of "simplicity." It's quite common in EA and rationalist circles to dismiss simple or monistic moral theories as attempting to shoehorn the complexity of human values into one box. I grant that I might unintentionally be doing this when I respond to critiques of the moral theory that makes most sense to me, which is "simple." But from the inside I don't introspect that this is what's going on. I would be perfectly happy to add some complexity to my theory to avoid underfitting the moral data, provided this isn't so contrived as to constitute overfitting. The closest cases I can think of where I might need to do this are in population ethics and fanaticism. I simply don't see what could matter morally in the kinds of things whose intrinsic value I reject: rules, virtues, happiness, desert, ... When I think of these things, and the thought experiments meant to pump one's intuitions in their favor, I do feel their emotional force. It's simply that I am more inclined to think of them as just that: emotional, or game theoretically useful constructs that break down when you eliminate bad consequences on conscious experience. The fact that I may "care" about them doesn't mean I endorse them as relevant to making the world a better place.

3. Changing my mind on moral matters doesn't feel like "figuring out my values." I roughly know what I value. Many things I value, like a disproportionate degree of comfort for myself, are things I very much wish I didn't value, things I don't think I should value. A common response I've received is something like: "The values you don't think you 'should' have are simply ones that contradict stronger values you hold. You have meta-preferences/meta-values." Sure, but I don't think this has always been the case. Before I learned about EA, I don't think it would have been accurate to say I really did "value" impartial maximization of good across sentient beings. This was a value I had to adopt, to bring my motivations in line with my reasons. Encountering EA materials did not feel at all like "Oh, you know what, deep down this was always what I would've wanted to optimize for, I just didn't know I would've wanted it."

4. The question "what would you do if you discovered the moral truth was to do [obviously bad thing]?" doesn't make sense to me, for certain inputs of [obviously bad thing], e.g. torturing all sentient beings as much as possible. For extreme inputs of that sort, the question is similar to "what would you do if you discovered 2+2=5?" For less extreme inputs, such that it's plausible to me I simply have not thought through ethics enough that I could imagine that hypothetical but merely find it unlikely right now, the question does make sense, and I see nothing wrong with saying "yes." I suspect many antirealists do this all the time, radically changing their minds on moral questions due to considerations other than empirical discoveries, and they would not be content saying "screw the moral truth" by retaining their previous stance.

Comment by antimonyanthony on Expected value theory is fanatical, but that's a good thing · 2020-09-28T03:19:29.984Z · EA · GW

we shouldn't generally assign probability 0 to anything that's logically possible (except where a measure is continuous; I think this requirement had a name, but I forget)

You're probably (pun not intended) thinking of Cromwell's rule.

Comment by antimonyanthony on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-22T00:16:29.744Z · EA · GW

Thanks for your reply! :)

I think that in practice no one does A.

This is true, but we could all be mistaken. This doesn't seem unlikely to me, considering that our brains simply were not built to handle such incredibly small probabilities and incredibly large magnitudes of disutility. That said, I won't practically bite the bullet, any more than people who would choose torture over dust specks probably do, or any more than pure impartial consequentialists truly sacrifice all their own frivolities for altruism. (This latter case is often excused as just avoiding burnout, but I seriously doubt the level of self-indulgence of the average consequentialist EA, myself included, is anywhere close to altruistically optimal.)

In general—and this is something I seem to disagree with many in this community about—I think following your ethics or decision theory through to its honest conclusions tends to make more sense than assuming the status quo is probably close to optimal. There is of course some reflective equilibrium involved here; sometimes I do revise my understanding of the ethical/decision theory.

This is similar to how you might dismiss this proof that 1+1=3 even if you cannot see the error.

To the extent that I assign nonzero probability to mathematically absurd statements (based on precedents like these), I don't think there's very high disutility in acting as if 1+1=2 in a world where it's actually true that 1+1=3. But that could be a failure of my imagination.

It is however a bit of a dissatisfying answer as it is not very rigorous, it is unclear when a conclusion is so absurd as to require outright objection.

This is basically my response. I think there's some meaningful distinction between good applications of reductio ad absurdum and relatively hollow appeals to "common sense," though, and the dismissal of Pascal's mugging strikes me as more the latter.

For example you could worry about future weapons technology that could destroy the world and try to explore what this would look like – but you can safely say it is very unlikely to look like your explorations.

I'm not sure I follow how this helps. People who accept giving into Pascal's mugger don't dispute that the very bad scenario in question is "very unlikely."

This might allow you to avoid the pascal mugger and invest appropriate time into more general more flexible evil wizard protection.

I think you might be onto something here, but I'd need the details fleshed out because I don't quite understand the claim.

Comment by antimonyanthony on antimonyanthony's Shortform · 2020-09-19T23:45:38.289Z · EA · GW

I don't call the happiness itself "slight," I call it "slightly more" than the suffering (edit: and also just slightly more than the happiness per person in world A). I acknowledge the happiness is tremendous. But it comes along with just barely less tremendous suffering. If that's not morally compelling to you, fine, but really the point is that there appears (to me at least) to be quite a strong moral distinction between 1,000,001 happiness minus 1,000,000 suffering, and 1 happiness.

Comment by antimonyanthony on antimonyanthony's Shortform · 2020-09-19T16:05:02.968Z · EA · GW

The Repugnant Conclusion is worse than I thought

At the risk of belaboring the obvious to anyone who has considered this point before: The RC glosses over the exact content of happiness and suffering that are summed up to the quantities of “welfare” defining world A and world Z. In world A, each life with welfare 1,000,000 could, on one extreme, consist purely of (a) good experiences that sum in intensity to a level 1,000,000, or on the other, (b) good experiences summing to 1,000,000,000 minus bad experiences summing (in absolute value) to 999,000,000. Similarly, each of the lives of welfare 1 in world Z could be (a) purely level 1 good experiences, or (b) level 1,000,001 good experiences minus level 1,000,000 bad experiences.
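
Schematically, per life (my shorthand for the four cases just described):

    A(a): happiness 1,000,000     - suffering 0           = welfare 1,000,000
    A(b): happiness 1,000,000,000 - suffering 999,000,000 = welfare 1,000,000
    Z(a): happiness 1             - suffering 0           = welfare 1
    Z(b): happiness 1,000,001     - suffering 1,000,000   = welfare 1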

To my intuitions, it’s pretty easy to accept the RC if our conception of worlds A and Z is the pair (a, a) from the (of course non-exhaustive) possibilities above, even more so for (b, a). However, the RC is extremely unpalatable if we consider the pair (a, b). This conclusion, which is entailed by any plausible non-negative[1] total utilitarian view, is that a world of tremendous happiness with absolutely no suffering is worse than a world of many beings each experiencing just slightly more happiness than those in the first, but along with tremendous agony.

To drive home how counterintuitive that is, we can apply the same reasoning often applied against NU views: Suppose the level 1,000,001 happiness in each being in world Z is compressed into one millisecond of some super-bliss, contained within a life of otherwise unremitting misery. There doesn’t appear to be any temporal ordering of the experiences of each life in world Z such that this conclusion isn’t morally absurd to me. (Going out with a bang sounds nice, but not nice enough to make the preceding pure misery worth it; remember this is a millisecond!) This is even accounting for the possible scope neglect involved in considering the massive number of lives in world Z. Indeed, multiplying these lives seems to make the picture more horrifying, not less.

Again, at the risk of sounding obvious: The repugnance of the RC here is that on total non-NU axiologies, we’d be forced to consider the kind of life I just sketched a “net-positive” life morally speaking.[2] Worse, we're forced to consider an astronomical number of such lives better than a (comparatively small) pure utopia.


[1] “Negative” here includes lexical and lexical threshold views.

[2] I’m setting aside possible defenses based on the axiological importance of duration. This is because (1) I’m quite uncertain about that point, though I share the intuition, and (2) it seems any such defense rescues NU just as well. I.e. one can, under this principle, maintain that 1 hour of torture-level suffering is impossible to morally outweigh, but 1 millisecond isn’t.

Comment by antimonyanthony on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-16T02:09:31.093Z · EA · GW

At the time I thought I was explaining [Pascal's mugging] badly but reading more on this topic I think it is just a non-problem: it only appears to be a problem to those whose only decision making tool is an expected value calculation.

This is quite a strong claim IMO. Could you explain exactly which other decision making tool(s) you would apply to Pascal's mugging that makes it not a problem? The descriptions of the tools in stories 1 and 2 are too vague for me to clearly see how they'd apply here.

Indeed, if anything, some of those tools strengthen the case for giving into Pascal's mugging. E.g. "developing a set of internally consistent descriptions of future events based on each uncertainty, then developing plans that are robust to all options": if you can't reasonably rule out the possibility that the mugger is telling the truth, paying the mugger seems a lot more robust. Ruling out that possibility in the literal thought experiment doesn't seem obviously counterintuitive to me, but the standard stories for x- and s-risks don't seem so absurd that you can treat them as probability 0 (more on this below). Appealing to the possibility that one's model is just wrong, which does cut against naive EV calculations, doesn't seem to help here.

I can imagine a few candidates, but none seem satisfactory to me:

  • "Very small probabilities should just be rounded down to zero." I can't think of a principled basis for selecting the threshold for a "very small" probability, at least not one that doesn't subject us to absurd conclusions like that you shouldn't wear a seatbelt because probabilities of car crashes are very low. This rule also seems contrary to maximin robustness.
  • "Very high disutilities are practically impossible." I simply don't see sufficiently strong evidence in favor of this to outweigh the high disutility conditional on the mugger telling the truth. If you want to say my reply is just smuggling expected value reasoning in through the backdoor, well, I don't really consider this a counterargument. Declaring a hard rule like this one, which treats some outcomes as impossible absent a mathematical or logical argument, seems epistemically hubristic and is again contrary to robustness.
  • "Don't do anything that extremely violates common sense." Intuitive, but I don't think we should expect our common sense to be well-equipped to handle situations involving massive absolute values of (dis)utility.

Comment by antimonyanthony on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-09T17:22:15.489Z · EA · GW

Do you think this is highly implausible even if you account for:

  • the opportunities to reduce other people's extreme suffering that a person committing suicide would forego,
  • the extreme suffering of one's loved ones this would probably increase,
  • plausible views of personal identity on which risking the extreme suffering of one's future self is ethically similar to, if not the same as, risking it for someone else,
  • relatedly, views of probability where the small measure of worlds with a being experiencing extreme suffering are as "real" as the large measure without, and
  • the fact that even non-negative utilitarian views will probably consider some forms of suffering so bad that small risks of them would outweigh any upsides that a typical human experiences, for oneself (ignoring effects on other people)?

Comment by antimonyanthony on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T22:34:37.623Z · EA · GW

I don't think that if someone rejects the rationality of trading off neutrality for a combination of happiness and suffering, they need to explain every case of this. (Analogously, the fact that people often do things for reasons other than maximizing pleasure and minimizing pain isn't an argument against ethical hedonism, just psychological hedonism.) Some trades might just be frankly irrational or mistaken, and one can point to biases that lead to such behavior.

Comment by antimonyanthony on "Disappointing Futures" Might Be As Important As Existential Risks · 2020-09-03T17:31:50.948Z · EA · GW

If we reject either of these premises, we must also reject the overwhelming importance of shaping the far future.

Perhaps a nitpick (on a post that is otherwise very well done!), but as phrased this doesn't appear true. Rejecting either of those premises only entails rejecting the overwhelming importance of populating the far future with lots of happy lives. You could still consider the far future overwhelmingly ethically important in that you want to prevent it from being worse than extinction, for example.

Comment by antimonyanthony on My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda · 2020-08-20T15:59:14.442Z · EA · GW

I'm glad "distillation" is emphasized as well in the acronym, because I think it resolves an important question about competitiveness. My initial impression, from the pitch of IA as "solve arbitrarily hard problems with aligned AIs by using human-endorsed decompositions," was that this wouldn't work because explicitly decomposing tasks this way in deployment sounds too slow. But distillation in theory solves that problem, because the decomposition from the training phase becomes implicit. (Of course, it raises safety risks too, because we need to check that the compression of this process into a "fast" policy didn't compromise the safety properties that motivated decomposition in the training in the first place.)

Comment by antimonyanthony on The problem with person-affecting views · 2020-08-08T16:43:52.093Z · EA · GW

Under this interpretation I would say my position is doubt that positive welfare exists in the first place. There's only the negation or absence of negative welfare. So to my ears it's like arguing 5 x 0 > 1 x 0. (Edit: Perhaps a better analogy, if suffering is like dust that can be removed by the vacuum-cleaner of happiness, it doesn't make sense to say that vacuuming a perfectly clean floor for 5 minutes is better than doing so for 1 minute, or not at all.)

Taken in isolation I can see how counterintuitive this sounds, but in the context of observations about confounders and the instrumental value of happiness, it's quite sensible to me compared with the alternatives. In particular, it doesn't commit us to biting the bullets I mentioned in my last comment, doesn't violate transitivity, and accounts for the procreation asymmetry intuition. The main downside I think is the implication that death is not bad for the dying person themselves, but I don't find this unacceptable considering: (a) it's quite consistent with e.g. Epicurean and Buddhist views, not "out there" in the history of philosophy, and (b) practically speaking every life is entangled with others so that even if my death isn't a tragedy to myself, it is a strong tragedy to people who care about or depend on me.

Comment by antimonyanthony on The problem with person-affecting views · 2020-08-06T23:00:36.431Z · EA · GW

Maybe your intuition that the latter is better than the former is confounded by the pleasant memories of this beautiful sight, which could remove suffering from their life in the future. Plus the confounder I mentioned in my original comment.

Of course one can cite confounders against suffering-focused intuitions as well (e.g. the tendency of the worst suffering in human life to be much more intense than the best happiness). But for me the intuition that C > B when all these confounders are accounted for really isn't that strong - at least not enough to outweigh the very repugnant conclusion, utility monster, and intuition that happiness doesn't have moral importance of the sort that would obligate us to create it for its own sake.

Comment by antimonyanthony on The problem with person-affecting views · 2020-08-05T22:11:56.211Z · EA · GW

Any reasonable theory of population ethics must surely accept that C is better than B.

I dispute this, at least if we interpret the positive-welfare lives as including only happiness (of varying levels) but no suffering. If a life contains no suffering, such that additional happiness doesn't play any palliative role or satisfy any frustrated preferences or cravings, I'm quite comfortable saying that this additional happiness doesn't add value to the life (hence B = C).

I suspect the strength of the intuition in favor of judging C > B comes from the fact that in reality, extra happiness almost always does play a palliative role and satisfies preferences. But a defender of the procreation asymmetry (not the neutrality principle, which I agree with Michael is unpalatable) doesn't need to dispute this.

Comment by antimonyanthony on Can you have an egoistic preference about your own birth? · 2020-07-21T17:16:55.108Z · EA · GW

Instrumental to causing them to have a frustrated preference. If they weren't born, they wouldn't have that preference.

Comment by antimonyanthony on Can you have an egoistic preference about your own birth? · 2020-07-18T15:03:50.669Z · EA · GW

Is it bad to have created that mind?
It doesn't personally affect anyone. And they personally don't care about having been created (again: they don't have any preference about their existence). So is it bad to have created them?

I don't know if I'm missing something obvious, but even though the birth itself doesn't violate this mind's preference, their birth creates a preference that cannot be fulfilled. So (under the usual psychology of what it's like to have a frustrated preference) it is instrumentally bad to have created that mind.

Comment by antimonyanthony on Asymmetric altruism · 2020-06-27T18:52:04.151Z · EA · GW

Asymmetries need not be deontological; they could be axiological. A pure consequentialist could maintain that negative experiences are lexically worse than absence of good experiences, all else equal (in particular, controlling for the effects of good experiences on the prevalence of negative experiences). This is controversial, to be sure, but not inconsistent with consequentialism and hence not vulnerable to Will's argument.

Comment by antimonyanthony on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-06-12T21:47:59.357Z · EA · GW

It seems to me plausible that anyone who uses the word agony in the standard sense is committing her/himself to agony being undesirable. This is not an argument for irreducible normativity, but it may give you a feeling that there is some intrinsic connection underlying the set of self-evident cases.

Could you please clarify this? As someone who is mainly convinced of irreducible normativity by the self-evident badness of agony - in particular, considering the intuition that someone in agony has reason to end it even if they don't consciously "desire" that end - I don't think this can be dissolved as a linguistic confusion.

It's true that for all practical purposes humans seem not to desire their own pain/suffering. But in my discussions with some antirealists they have argued that if a paperclip maximizer, for example, doesn't want not to suffer (by hypothesis all it wants is to maximize paperclips), then such a being doesn't have a reason to avoid suffering. That to me seems patently unbelievable. Apologies if I've misunderstood your point!

Comment by antimonyanthony on How to Measure Capacity for Welfare and Moral Status · 2020-06-01T23:04:10.997Z · EA · GW

We could also ask how many days of one’s human life one would be willing to forgo to experience some duration of time as another species. This approach would allow us to assign cardinal numbers to the value of animal lives.

I hope I’m not being too obvious here, but I’ve seen people frequently speak of animals “mattering” X times as much as a human, say, without drawing this distinction: we’d need to be very careful to distinguish what we mean by value of life. For prioritizing which lives to save, this quote perhaps makes sense. But not if “value of animal lives” is meant to correspond to how much we should prioritize alleviating different animals’ suffering. I wouldn’t trade days of my life to experience days of a very poor person’s life, but that doesn’t mean my life is more valuable in the sense that helping me is more important. Quite the opposite: the less value there is in a human’s/animal’s life, the more imperative it is to help them (in non-life-saving ways), for reasons of diminishing returns at least.

I would strongly encourage surveys about intuitions of this sort to precisely ask about tradeoffs of experiences, rather than “value of life” (as in the Norwood and Lusk survey that you cite).

Comment by antimonyanthony on Applying speciesism to wild-animal suffering · 2020-05-18T12:29:58.117Z · EA · GW

Do you think they would have a similar response to intervening in the lives of young children in X oppressed group (or any group for that matter)? That seems to be a relevantly similar case to wild animals, in terms of their lack of capacity to self-govern and vulnerability.

Comment by antimonyanthony on The Effects of Animal-Free Food Technology Awareness on Animal Farming Opposition · 2020-05-17T15:32:27.211Z · EA · GW

Excellent and important, if sobering, work! I've gotten the sense that very general social psychology arguments about animal advocacy strategy can go either way (foot in the door vs door in the face, etc.), so it's refreshing to see specific studies on this that tell me something not at all obvious. I like the preregistration and use of FDR control. Some minor remarks:

  • "the power (the risk of false negative results)" - I believe this should be the complement of that risk
  • "If the AFFT articles encourage the view that animal-free alternatives are unnatural, they could strengthen one of the key justifications for animal product consumption." - Seems like your results for the model with an interaction between reading about AFFT and preference for naturalness have some implications for this. In that model reading about AFFT is no longer significant, nor is the interaction. But I suppose under this hypothesis you'd expect a noticeable negative interaction: the stronger one's preference for naturalness, the more strongly reading about AFFT decreases their AFO.

Comment by antimonyanthony on The Alienation Objection to Consequentialism · 2020-05-07T13:07:32.884Z · EA · GW

the reason you maintain and continue to value the relationship is not so circumstantial, and has more to do with your actual relationship with that other person

Right, but even so it seems like a friend who cares for you because they believe caring for you is good, and better than the alternatives, is "warmer" than one who doesn't think this but merely follows some partiality (or again, bias) toward you.

I suppose it comes down to conflicting intuitions on something like "unconditional love." Several people, not just hardcore consequentialists, find that concept hollow and cheap, because loving someone unconditionally implies you don't really care who they are, in any sense other than the physical continuity of their identity. Conditional love identifies the aspects of the person actually worth loving, and that seems more genuine to me, though less comforting to someone who wants (selfishly) to be loved no matter what they do.

I suppose the point is that you don't recognize that reason as an ethical one; it's just something that happens to explain your behaviour in practice, not what you think is right.

Yeah, exactly. It would be an extremely convenient coincidence if our feelings for partial friendship etc., which evolved in small communities where these feelings were largely sufficient for social cohesion, just happened to be the ethically best things for us to follow - when we now live in a world where it's feasible for someone to do a lot more good by being impartial.

Edit: seems based on one of your other comments that we actually agree more than I thought.

Comment by antimonyanthony on The Alienation Objection to Consequentialism · 2020-05-05T21:51:56.161Z · EA · GW

It's more circumstantial if they prioritize you based on impartial concern; it just happened to be the best thing they could do.

Hm, to my ear, prioritizing a friend just because you happen to be biased towards them is more circumstantial. It's based on accidents of geography and life events that led you to be friends with that person to a greater degree than with other people you've never met.

that's pretty small compared to the impartial stakes we face

I agree, though that's a separate argument. I was addressing the claim that conditional on a consequentialist choosing to help their friend, their reasons are alienating, which I don't find convincing. My point was precisely that because the standard is so high for a consequentialist, it's all the more flattering if your friend prioritizes you in light of that standard. It's quite difficult to reconcile with my revealed priorities as someone who definitely doesn't live up to my own consequentialism, yes, but I bite the bullet that this is really just a failure on my part (or, as you mention, the "instrumental" reasons to be a good friend also win over anyway).

Comment by antimonyanthony on The Alienation Objection to Consequentialism · 2020-05-05T19:50:37.346Z · EA · GW

I'm still confused by this. The more impartial someone's standards, if anything, the more important you should feel if they still choose to prioritize you.

Comment by antimonyanthony on The Alienation Objection to Consequentialism · 2020-05-05T18:39:44.907Z · EA · GW

One thought to have about this case is that you have the wrong motivation in visiting your friend. Plausibly, your motive should be something like ‘my friend is suffering; I want to help them feel better!’ and not ‘helping my friend has better consequences than anything else I could have done.’ Imagine what it would be like to frankly admit to your friend, “I’m only here because being here had the best consequences. If painting a landscape would have led to better consequences, I would have stayed home and painted instead.” Your friend would probably experience this remark as cold, or at least overly abstract and aloof.

This doesn't resonate with me at all, personally. What exactly could be a purer, warmer motivation for helping a friend than the belief that helping them is the best thing you could be doing with your time? That belief implies their well-being is very important; it's not just an abstract consequence, their suffering really exists and by helping them you are choosing to relieve it.