Posts

antimonyanthony's Shortform 2020-09-19T16:05:02.590Z

Comments

Comment by antimonyanthony on some concerns with classical utilitarianism · 2020-11-18T01:59:46.608Z · EA · GW

Accepting VRC would be required by CU, in this hypothetical. So, assuming CU, rejecting VRC would need justification.

Yep, this is what I was getting at, sorry that I wasn't clear. I meant "defense of CU against this case."

On the other hand, as Vinding also writes (ibid, 5.6; 8.10), the qualitative difference between extreme suffering and suffering that could be extreme if we push a bit further may still be huge.

Yeah, I don't object to the possibility of this in principle, just noting that it's not without its counterintuitive consequences. Neither is pure NU, or any sensible moral theory in my opinion.

Comment by antimonyanthony on some concerns with classical utilitarianism · 2020-11-18T01:54:11.046Z · EA · GW

Good point. I would say I meant intensity of the experience, which is distinct both from intensity of the stimulus and moral (dis)value. And I also dislike seeing conflation of intensity with moral value when it comes to evaluating happiness relative to suffering.

Comment by antimonyanthony on some concerns with classical utilitarianism · 2020-11-16T19:18:52.007Z · EA · GW

I agree with the critiques in the sections including and after "Implicit Commensurability of (Extreme) Suffering," and would encourage defenders of CU to apply as much scrutiny to its counterintuitive conclusions as they do to NU, among other alternatives. I'd also add the Very Repugnant Conclusion as a case for which I haven't heard a satisfying CU defense. Edit: The utility monster as well seems asymmetric in how repugnant it is when you formulate it in terms of happiness versus suffering. It does seem abhorrent to accept the increased suffering of many for the supererogatory happiness of the one, but if the disutility monster would suffer far more from not getting a given resource than many others would put together, helping the disutility monster seems perfectly reasonable to me.

But I think objecting to aggregation of experience per se, as in the first few sections, is throwing the baby out with the bathwater. Even if you just consider suffering as the morally relevant object, it's quite hard to reject the idea that between (a) 1 million people experiencing a form of pain just slightly weaker than the threshold of "extreme" suffering, and (b) 1 person experiencing pain just slightly stronger than that threshold, (b) is the lesser evil.

Perhaps all the alternatives are even worse, and I have some sympathy for lexical threshold NU, partly because arguments against it of the form I just gave could just as easily lead to fanatical conclusions, which many near-classical utilitarians reject. And intuitively it does seem there's some qualitative difference between the moral seriousness of torture versus a large number of dust specks. But in general I think aggregation in axiology is much more defensible than classical utilitarianism wholesale.

Comment by antimonyanthony on Please Take the 2020 EA Survey · 2020-11-12T18:42:23.002Z · EA · GW

unless I think that I'm at least as well informed as the average respondent about where this money should go

This applies if your ethics are very aligned with the average respondent, but if not, it is a decent incentive. I'd be surprised if almost all of EAs' disagreement on cause prioritization were strictly empirical.

Comment by antimonyanthony on antimonyanthony's Shortform · 2020-10-10T17:51:08.200Z · EA · GW

5. I do not expect that artificial superintelligence would converge on The Moral Truth by default. Even if it did, the convergence might be too slow to prevent catastrophes. But I also doubt humans will converge on this either. Both humans and AIs are limited by our access only to our "own" qualia, and indeed our own present qualia. The kind of "moral realism" I find plausible with respect to this convergence question is that convergence to moral truth could occur for a perfectly rational and fully informed agent, with unlimited computation and - most importantly - subjective access to the hypothetical future experiences of all sentient beings. These conditions are so idealized that I am probably as pessimistic about AI as any antirealist, but I'm not sure yet if they're so idealized that I functionally am an antirealist in this sense.

Comment by antimonyanthony on antimonyanthony's Shortform · 2020-10-10T17:02:31.069Z · EA · GW

Some vaguely clustered opinions on metaethics/metanormativity

I'm finding myself slightly more sympathetic to moral antirealism lately, but still afford most of my credence to a form of realism that would not be labeled "strong" or "robust." There are several complicated propositions I find plausible that are in tension:

1. I have a strong aversion to arbitrary or ad hoc elements in ethics. Practically this cashes out as things like: (1) rejecting any solutions to population ethics that violate transitivity, and (2) being fairly unpersuaded by solutions to fanaticism that round down small probabilities or cap the utility function.

2. Despite this, I do not intrinsically care about the simplicity of a moral theory, at least for some conceptions of "simplicity." It's quite common in EA and rationalist circles to dismiss simple or monistic moral theories as attempting to shoehorn the complexity of human values into one box. I grant that I might unintentionally be doing this when I respond to critiques of the moral theory that makes most sense to me, which is "simple." But from the inside I don't introspect that this is what's going on. I would be perfectly happy to add some complexity to my theory to avoid underfitting the moral data, provided this isn't so contrived as to constitute overfitting. The closest cases I can think of where I might need to do this are in population ethics and fanaticism. I simply don't see what could matter morally in the kinds of things whose intrinsic value I reject: rules, virtues, happiness, desert, ... When I think of these things, and the thought experiments meant to pump one's intuitions in their favor, I do feel their emotional force. It's simply that I am more inclined to think of them as just that: emotional, or game theoretically useful constructs that break down when you eliminate bad consequences on conscious experience. The fact that I may "care" about them doesn't mean I endorse them as relevant to making the world a better place.

3. Changing my mind on moral matters doesn't feel like "figuring out my values." I roughly know what I value. Many things I value, like a disproportionate degree of comfort for myself, are things I very much wish I didn't value, things I don't think I should value. A common response I've received is something like: "The values you don't think you 'should' have are simply ones that contradict stronger values you hold. You have meta-preferences/meta-values." Sure, but I don't think this has always been the case. Before I learned about EA, I don't think it would have been accurate to say I really did "value" impartial maximization of good across sentient beings. This was a value I had to adopt, to bring my motivations in line with my reasons. Encountering EA materials did not feel at all like "Oh, you know what, deep down this was always what I would've wanted to optimize for, I just didn't know I would've wanted it."

4. The question "what would you do if you discovered the moral truth was to do [obviously bad thing]?" doesn't make sense to me, for certain inputs of [obviously bad thing], e.g. torturing all sentient beings as much as possible. For extreme inputs of that sort, the question is similar to "what would you do if you discovered 2+2=5?" For less extreme inputs, such that it's plausible to me I simply have not thought through ethics enough that I could imagine that hypothetical but merely find it unlikely right now, the question does make sense, and I see nothing wrong with saying "yes." I suspect many antirealists do this all the time, radically changing their minds on moral questions due to considerations other than empirical discoveries, and they would not be content saying "screw the moral truth" by retaining their previous stance.

Comment by antimonyanthony on Expected value theory is fanatical, but that's a good thing · 2020-09-28T03:19:29.984Z · EA · GW
we shouldn't generally assign probability 0 to anything that's logically possible (except where a measure is continuous; I think this requirement had a name, but I forget)

You're probably (pun not intended) thinking of Cromwell's rule.

Comment by antimonyanthony on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-22T00:16:29.744Z · EA · GW

Thanks for your reply! :)

I think that in practice no one does A.

This is true, but we could all be mistaken. This doesn't seem unlikely to me, considering that our brains simply were not built to handle such incredibly small probabilities and incredibly large magnitudes of disutility. That said, in practice I won't bite the bullet, any more than people who would choose torture over dust specks probably do, or any more than pure impartial consequentialists truly sacrifice all their own frivolities for altruism. (This latter case is often excused as just avoiding burnout, but I seriously doubt the level of self-indulgence of the average consequentialist EA, myself included, is anywhere close to altruistically optimal.)

In general—and this is something I seem to disagree with many in this community about—I think following your ethics or decision theory through to its honest conclusions tends to make more sense than assuming the status quo is probably close to optimal. There is of course some reflective equilibrium involved here; sometimes I do revise my understanding of the ethical/decision theory.

This is similar to how you might dismiss this proof that 1+1=3 even if you cannot see the error.

To the extent that I assign nonzero probability to mathematically absurd statements (based on precedents like these), I don't think there's very high disutility in acting as if 1+1=2 in a world where it's actually true that 1+1=3. But that could be a failure of my imagination.

It is however a bit of a dissatisfying answer as it is not very rigorous, it is unclear when a conclusion is so absurd as to require outright objection.

This is basically my response. I think there's some meaningful distinction between good applications of reductio ad absurdum and relatively hollow appeals to "common sense," though, and the dismissal of Pascal's mugging strikes me as more the latter.

For example you could worry about future weapons technology that could destroy the world and try to explore what this would look like – but you can safely say it is very unlikely to look like your explorations.

I'm not sure I follow how this helps. People who accept giving in to Pascal's mugger don't dispute that the very bad scenario in question is "very unlikely."

This might allow you to avoid the pascal mugger and invest appropriate time into more general more flexible evil wizard protection.

I think you might be onto something here, but I'd need the details fleshed out because I don't quite understand the claim.

Comment by antimonyanthony on antimonyanthony's Shortform · 2020-09-19T23:45:38.289Z · EA · GW

I don't call the happiness itself "slight"; I call it "slightly more" than the suffering (edit: and also just slightly more than the happiness per person in world A). I acknowledge the happiness is tremendous. But it comes along with just barely less tremendous suffering. If that's not morally compelling to you, fine, but really the point is that there appears (to me at least) to be quite a strong moral distinction between 1,000,001 happiness minus 1,000,000 suffering, and 1 happiness.

Comment by antimonyanthony on antimonyanthony's Shortform · 2020-09-19T16:05:02.968Z · EA · GW

The Repugnant Conclusion is worse than I thought

At the risk of belaboring the obvious to anyone who has considered this point before: The RC glosses over the exact content of happiness and suffering that are summed up to the quantities of “welfare” defining world A and world Z. In world A, each life with welfare 1,000,000 could, on one extreme, consist purely of (a) good experiences that sum in intensity to a level 1,000,000, or on the other, (b) good experiences summing to 1,000,000,000 minus bad experiences summing (in absolute value) to 999,000,000. Similarly, each of the lives of welfare 1 in world Z could be (a) purely level 1 good experiences, or (b) level 1,000,001 good experiences minus level 1,000,000 bad experiences.
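A minimal worked sum, using just the illustrative numbers above, to make the per-life bookkeeping explicit:

$$\begin{aligned}
\text{World A, reading (a):}\quad & 1{,}000{,}000 - 0 = 1{,}000{,}000\\
\text{World A, reading (b):}\quad & 1{,}000{,}000{,}000 - 999{,}000{,}000 = 1{,}000{,}000\\
\text{World Z, reading (a):}\quad & 1 - 0 = 1\\
\text{World Z, reading (b):}\quad & 1{,}000{,}001 - 1{,}000{,}000 = 1
\end{aligned}$$

Total utilitarianism sees only the net figures on the right, so it ranks world Z above world A whenever Z contains more than 1,000,000 times as many lives, regardless of which reading obtains.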

To my intuitions, it’s pretty easy to accept the RC if our conception of worlds A and Z is the pair (a, a) from the (of course non-exhaustive) possibilities above, even more so for (b, a). However, the RC is extremely unpalatable if we consider the pair (a, b). This conclusion, which is entailed by any plausible non-negative[1] total utilitarian view, is that a world of tremendous happiness with absolutely no suffering is worse than a world of many beings each experiencing just slightly more happiness than those in the first, but along with tremendous agony.

To drive home how counterintuitive that is, we can apply the same reasoning often applied against NU views: Suppose the level 1,000,001 happiness in each being in world Z is compressed into one millisecond of some super-bliss, contained within a life of otherwise unremitting misery. There doesn’t appear to be any temporal ordering of the experiences of each life in world Z such that this conclusion isn’t morally absurd to me. (Going out with a bang sounds nice, but not nice enough to make the preceding pure misery worth it; remember this is a millisecond!) This is even accounting for the possible scope neglect involved in considering the massive number of lives in world Z. Indeed, multiplying these lives seems to make the picture more horrifying, not less.

Again, at the risk of sounding obvious: The repugnance of the RC here is that on total non-NU axiologies, we’d be forced to consider the kind of life I just sketched a “net-positive” life morally speaking.[2] Worse, we're forced to consider an astronomical number of such lives better than a (comparatively small) pure utopia.


[1] “Negative” here includes lexical and lexical threshold views.

[2] I’m setting aside possible defenses based on the axiological importance of duration. This is because (1) I’m quite uncertain about that point, though I share the intuition, and (2) it seems any such defense rescues NU just as well. I.e. one can, under this principle, maintain that 1 hour of torture-level suffering is impossible to morally outweigh, but 1 millisecond isn’t.

Comment by antimonyanthony on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-16T02:09:31.093Z · EA · GW
At the time I thought I was explaining [Pascal's mugging] badly but reading more on this topic I think it is just a non-problem: it only appears to be a problem to those whose only decision making tool is an expected value calculation.

This is quite a strong claim IMO. Could you explain exactly which other decision making tool(s) you would apply to Pascal's mugging that makes it not a problem? The descriptions of the tools in stories 1 and 2 are too vague for me to clearly see how they'd apply here.

Indeed, if anything, some of those tools strengthen the case for giving in to Pascal's mugging. E.g. "developing a set of internally consistent descriptions of future events based on each uncertainty, then developing plans that are robust to all options": if you can't reasonably rule out the possibility that the mugger is telling the truth, paying the mugger seems a lot more robust. Ruling out that possibility in the literal thought experiment doesn't seem obviously counterintuitive to me, but the standard stories for x- and s-risks don't seem so absurd that you can treat them as probability 0 (more on this below). Appealing to the possibility that one's model is just wrong, which does cut against naive EV calculations, doesn't seem to help here.
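To spell out the toy expected-value arithmetic that makes the naive calculation capitulate (all numbers arbitrary): if paying costs c, and the mugger's threat amounts to disutility D realized with probability p, then naive EV says to pay whenever pD > c. For instance,

$$p \cdot D = 10^{-15} \times 10^{30} = 10^{15} \gg c = 10.$$

The inequality survives any discounting of p short of setting it to exactly zero, which is the move I question below.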

I can imagine a few candidates, but none seem satisfactory to me:

  • "Very small probabilities should just be rounded down to zero." I can't think of a principled basis for selecting the threshold for a "very small" probability, at least not one that doesn't subject us to absurd conclusions like that you shouldn't wear a seatbelt because probabilities of car crashes are very low. This rule also seems contrary to maximin robustness.
  • "Very high disutilities are practically impossible." I simply don't see sufficiently strong evidence in favor of this to outweigh the high disutility conditional on the mugger telling the truth. If you want to say my reply is just smuggling expected value reasoning in through the backdoor, well, I don't really consider this a counterargument. Declaring a hard rule like this one, which treats some outcomes as impossible absent a mathematical or logical argument, seems epistemically hubristic and is again contrary to robustness.
  • "Don't do anything that extremely violates common sense." Intuitive, but I don't think we should expect our common sense to be well-equipped to handle situations involving massive absolute values of (dis)utility.
Comment by antimonyanthony on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-09T17:22:15.489Z · EA · GW

Do you think this is highly implausible even if you account for:

  • the opportunities to reduce other people's extreme suffering that a person committing suicide would forego,
  • the extreme suffering of one's loved ones this would probably increase,
  • plausible views of personal identity on which risking the extreme suffering of one's future self is ethically similar to, if not the same as, risking it for someone else,
  • relatedly, views of probability where the small measure of worlds with a being experiencing extreme suffering are as "real" as the large measure without, and
  • the fact that even non-negative utilitarian views will probably consider some forms of suffering so bad that small risks of them would outweigh, for oneself, any upsides that a typical human experiences (ignoring effects on other people)?
Comment by antimonyanthony on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T22:34:37.623Z · EA · GW

I don't think that if someone rejects the rationality of trading off neutrality for a combination of happiness and suffering, they need to explain every case of this. (Analogously, the fact that people often do things for reasons other than maximizing pleasure and minimizing pain isn't an argument against ethical hedonism, just psychological hedonism.) Some trades might just be frankly irrational or mistaken, and one can point to biases that lead to such behavior.

Comment by antimonyanthony on "Disappointing Futures" Might Be As Important As Existential Risks · 2020-09-03T17:31:50.948Z · EA · GW
If we reject either of these premises, we must also reject the overwhelming importance of shaping the far future.

Perhaps a nitpick (on a post that is otherwise very well done!), but as phrased this doesn't appear true. Rejecting either of those premises only entails rejecting the overwhelming importance of populating the far future with lots of happy lives. You could still consider the far future overwhelmingly ethically important in that you want to prevent it from being worse than extinction, for example.

Comment by antimonyanthony on My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda · 2020-08-20T15:59:14.442Z · EA · GW

I'm glad "distillation" is emphasized as well in the acronym, because I think it resolves an important question about competitiveness. My initial impression, from the pitch of IA as "solve arbitrarily hard problems with aligned AIs by using human-endorsed decompositions," was that this wouldn't work because explicitly decomposing tasks this way in deployment sounds too slow. But distillation in theory solves that problem, because the decomposition from the training phase becomes implicit. (Of course, it raises safety risks too, because we need to check that the compression of this process into a "fast" policy didn't compromise the safety properties that motivated decomposition in the training in the first place.)

Comment by antimonyanthony on The problem with person-affecting views · 2020-08-08T16:43:52.093Z · EA · GW

Under this interpretation I would say my position is to doubt that positive welfare exists in the first place. There's only the negation or absence of negative welfare. So to my ears it's like arguing 5 x 0 > 1 x 0. (Edit: Perhaps a better analogy: if suffering is like dust that can be removed by the vacuum cleaner of happiness, it doesn't make sense to say that vacuuming a perfectly clean floor for 5 minutes is better than doing so for 1 minute, or not at all.)

Taken in isolation I can see how counterintuitive this sounds, but in the context of observations about confounders and the instrumental value of happiness, it's quite sensible to me compared with the alternatives. In particular, it doesn't commit us to biting the bullets I mentioned in my last comment, doesn't violate transitivity, and accounts for the procreation asymmetry intuition. The main downside I think is the implication that death is not bad for the dying person themselves, but I don't find this unacceptable considering: (a) it's quite consistent with e.g. Epicurean and Buddhist views, not "out there" in the history of philosophy, and (b) practically speaking every life is entangled with others so that even if my death isn't a tragedy to myself, it is a strong tragedy to people who care about or depend on me.

Comment by antimonyanthony on The problem with person-affecting views · 2020-08-06T23:00:36.431Z · EA · GW

Maybe your intuition that the latter is better than the former is confounded by the pleasant memories of this beautiful sight, which could remove suffering from their life in the future. Plus the confounder I mentioned in my original comment.

Of course one can cite confounders against suffering-focused intuitions as well (e.g. the tendency of the worst suffering in human life to be much more intense than the best happiness). But for me the intuition that C > B when all these confounders are accounted for really isn't that strong - at least not enough to outweigh the very repugnant conclusion, utility monster, and intuition that happiness doesn't have moral importance of the sort that would obligate us to create it for its own sake.

Comment by antimonyanthony on The problem with person-affecting views · 2020-08-05T22:11:56.211Z · EA · GW
Any reasonable theory of population ethics must surely accept that C is better than B.

I dispute this, at least if we interpret the positive-welfare lives as including only happiness (of varying levels) but no suffering. If a life contains no suffering, such that additional happiness doesn't play any palliative role or satisfy any frustrated preferences or cravings, I'm quite comfortable saying that this additional happiness doesn't add value to the life (hence B = C).

I suspect the strength of the intuition in favor of judging C > B comes from the fact that in reality, extra happiness almost always does play a palliative role and satisfies preferences. But a defender of the procreation asymmetry (not the neutrality principle, which I agree with Michael is unpalatable) doesn't need to dispute this.

Comment by antimonyanthony on Can you have an egoistic preference about your own birth? · 2020-07-21T17:16:55.108Z · EA · GW

Instrumental to causing them to have a frustrated preference. If they weren't born, they wouldn't have that preference.

Comment by antimonyanthony on Can you have an egoistic preference about your own birth? · 2020-07-18T15:03:50.669Z · EA · GW
Is it bad to have created that mind?
It doesn't personally affect anyone. And they personally don't care about having been created (again: they don't have any preference about their existence). So is it bad to have created them?

I don't know if I'm missing something obvious, but even though the birth itself doesn't violate this mind's preference, their birth creates a preference that cannot be fulfilled. So (under the usual psychology of what it's like to have a frustrated preference) it is instrumentally bad to have created that mind.

Comment by antimonyanthony on Asymmetric altruism · 2020-06-27T18:52:04.151Z · EA · GW

Asymmetries need not be deontological; they could be axiological. A pure consequentialist could maintain that negative experiences are lexically worse than absence of good experiences, all else equal (in particular, controlling for the effects of good experiences on the prevalence of negative experiences). This is controversial, to be sure, but not inconsistent with consequentialism and hence not vulnerable to Will's argument.

Comment by antimonyanthony on Moral Anti-Realism Sequence #3: Against Irreducible Normativity · 2020-06-12T21:47:59.357Z · EA · GW
It seems to me plausible that anyone who uses the word agony in the standard sense is committing her/himself to agony being undesirable. This is not an argument for irreducible normativity, but it may give you a feeling that there is some intrinsic connection underlying the set of self-evident cases.

Could you please clarify this? As someone who is mainly convinced of irreducible normativity by the self-evident badness of agony - in particular, considering the intuition that someone in agony has reason to end it even if they don't consciously "desire" that end - I don't think this can be dissolved as a linguistic confusion.

It's true that for all practical purposes humans seem not to desire their own pain/suffering. But in my discussions with some antirealists they have argued that if a paperclip maximizer, for example, doesn't want not to suffer (by hypothesis all it wants is to maximize paperclips), then such a being doesn't have a reason to avoid suffering. That to me seems patently unbelievable. Apologies if I've misunderstood your point!

Comment by antimonyanthony on How to Measure Capacity for Welfare and Moral Status · 2020-06-01T23:04:10.997Z · EA · GW
We could also ask how many days of one’s human life one would be willing to forgo to experience some duration of time as another species. This approach would allow us to assign cardinal numbers to the value of animal lives.

I hope I’m not being too obvious here, but I’ve seen people frequently speak of animals “mattering” X times as much as a human, say, without drawing this distinction: we’d need to be very careful to distinguish what we mean by value of life. For prioritizing which lives to save, this quote perhaps makes sense. But not if “value of animal lives” is meant to correspond to how much we should prioritize alleviating different animals’ suffering. I wouldn’t trade days of my life to experience days of a very poor person’s life, but that doesn’t mean my life is more valuable in the sense that helping me is more important. Quite the opposite: the less value there is in a human’s/animal’s life, the more imperative it is to help them (in non-life-saving ways), for reasons of diminishing returns at least.

I would strongly encourage surveys about intuitions of this sort to precisely ask about tradeoffs of experiences, rather than “value of life” (as in the Norwood and Lusk survey that you cite).

Comment by antimonyanthony on Applying speciesism to wild-animal suffering · 2020-05-18T12:29:58.117Z · EA · GW

Do you think they would have a similar response to intervening in the lives of young children in X oppressed group (or any group for that matter)? That seems to be a relevantly similar case to wild animals, in terms of their lack of capacity to self-govern and vulnerability.

Comment by antimonyanthony on The Effects of Animal-Free Food Technology Awareness on Animal Farming Opposition · 2020-05-17T15:32:27.211Z · EA · GW

Excellent and important, if sobering, work! I've gotten the sense that very general social psychology arguments about animal advocacy strategy can go either way (foot in the door vs door in the face, etc.), so it's refreshing to see specific studies on this that tell me something not at all obvious. I like the preregistration and use of FDR control. Some minor remarks:

  • "the power (the risk of false negative results)" - I believe this should be the complement of that risk
  • "If the AFFT articles encourage the view that animal-free alternatives are unnatural, they could strengthen one of the key justifications for animal product consumption." - Seems like your results for the model with an interaction between reading about AFFT and preference for naturalness have some implications for this. In that model reading about AFFT is no longer significant, nor is the interaction. But I suppose under this hypothesis you'd expect a noticeable negative interaction: the stronger one's preference for naturalness, the more strongly reading about AFFT decreases their AFO.
Comment by antimonyanthony on The Alienation Objection to Consequentialism · 2020-05-07T13:07:32.884Z · EA · GW
the reason you maintain and continue to value the relationship is not so circumstantial, and has more to do with your actual relationship with that other person

Right, but even so it seems like a friend who cares for you because they believe caring for you is good, and better than the alternatives, is "warmer" than one who doesn't think this but merely follows some partiality (or again, bias) toward you.

I suppose it comes down to conflicting intuitions on something like "unconditional love." Several people, not just hardcore consequentialists, find that concept hollow and cheap, because loving someone unconditionally implies you don't really care who they are, in any sense other than the physical continuity of their identity. Conditional love identifies the aspects of the person actually worth loving, and that seems more genuine to me, though less comforting to someone who wants (selfishly) to be loved no matter what they do.

I suppose the point is that you don't recognize that reason as an ethical one; it's just something that happens to explain your behaviour in practice, not what you think is right.

Yeah, exactly. It would be an extremely convenient coincidence if our feelings for partial friendship etc., which evolved in small communities where these feelings were largely sufficient for social cohesion, just happened to be the ethically best things for us to follow - when we now live in a world where it's feasible for someone to do a lot more good by being impartial.

Edit: seems based on one of your other comments that we actually agree more than I thought.

Comment by antimonyanthony on The Alienation Objection to Consequentialism · 2020-05-05T21:51:56.161Z · EA · GW
It's more circumstantial if they prioritize you based on impartial concern; it just happened to be the best thing they could do.

Hm, to my ear, prioritizing a friend just because you happen to be biased towards them is more circumstantial. It's based on accidents of geography and life events that led you to be friends with that person to a greater degree than with other people you've never met.

that's pretty small compared to the impartial stakes we face

I agree, though that's a separate argument. I was addressing the claim that conditional on a consequentialist choosing to help their friend, their reasons are alienating, which I don't find convincing. My point was precisely that because the standard is so high for a consequentialist, it's all the more flattering if your friend prioritizes you in light of that standard. It's quite difficult to reconcile with my revealed priorities as someone who definitely doesn't live up to my own consequentialism, yes, but I bite the bullet that this is really just a failure on my part (or, as you mention, the "instrumental" reasons to be a good friend also win over anyway).

Comment by antimonyanthony on The Alienation Objection to Consequentialism · 2020-05-05T19:50:37.346Z · EA · GW

I'm still confused by this. The more impartial someone's standards, if anything, the more important you should feel if they still choose to prioritize you.

Comment by antimonyanthony on The Alienation Objection to Consequentialism · 2020-05-05T18:39:44.907Z · EA · GW
One thought to have about this case is that you have the wrong motivation in visiting your friend. Plausibly, your motive should be something like ‘my friend is suffering; I want to help them feel better!’ and not ‘helping my friend has better consequences than anything else I could have done.’ Imagine what it would be like to frankly admit to your friend, “I’m only here because being here had the best consequences. If painting a landscape would have led to better consequences, I would have stayed home and painted instead.” Your friend would probably experience this remark as cold, or at least overly abstract and aloof.

This doesn't resonate with me at all, personally. What exactly could be a purer, warmer motivation for helping a friend than the belief that helping them is the best thing you could be doing with your time? That belief implies their well-being is very important; it's not just an abstract consequence, their suffering really exists and by helping them you are choosing to relieve it.

Comment by antimonyanthony on Why I'm Not Vegan · 2020-04-10T13:19:54.367Z · EA · GW
Some people find it prohibitively costly

This isn't a "minds very different from our own" claim, though. It's an empirical claim about how expensive a vegan diet needs to be to be nutritious. Cam stated: "But it's also quite feasible to meet most people's dietary requirements with vegan foods that cost just as much as, or even less than, animal-based foods." What exactly in that statement do you dispute?

ETA: Even though there is a risk of overstating the case that veganism is universally "cheap," at present that case seems, if anything, far understated. I think the value of Cam's comment is in noting that veganism is at the very least cheaper than most people suspect before trying it.

Comment by antimonyanthony on Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism · 2020-04-01T21:02:38.278Z · EA · GW
the problem comes from trying to compare infinite sets of individuals with utilities when identities (including locations in spacetime) aren't taken to matter at all

Ah, that's fair - I think I was mistaking the technical usage of "infinite ethics" for a broader class of problems involving infinities in ethics in general. Deontological theories sometimes imply "infinite" badness of actions, which can have counterintuitive implications as discussed by MacAskill in his interviews with 80k, which is why I was confused by your objection.

Comment by antimonyanthony on Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism · 2020-04-01T20:14:49.491Z · EA · GW

Do non-utilitarian moral theories have readily available solutions to infinite ethics either? Suggesting infinite ethics as an objection I think only makes sense if it's a particular problem for utilitarianism, or at least a worse problem for utilitarianism than for anything else.

I'd also recommend the very repugnant conclusion as an important objection (at least to classical or symmetric utilitarianism).

Comment by antimonyanthony on Replaceability with differing priorities · 2020-03-08T14:01:32.160Z · EA · GW

I like this analysis! Some slight counter-considerations:

Displacements can also occur in donations, albeit probably less starkly than with jobs, which are discrete units. If my highest priority charity announces its funding gap somewhat regularly, and I donate a large fraction of that gap, this would likely lower the expected amount donated by others to this charity and this difference might be donated to causes I consider much less important. (Thanks to Phil Trammell for pointing out this general consideration; all blame for potentially misapplying it to this situation goes to me.)

Also, in the example you gave where about 10% of people highly prioritize cause A, wouldn't we expect the multiplier to be significantly larger than 0.1 because conditional on a person applying to position P, they are quite likely to have a next best option that is closely aligned with yours? Admittedly this makes my first point less of a concern since you could also argue that the counterfactual donor to an unpopular cause I highly prioritize would go on to fund similar, probably neglected causes.
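To put the conditional-probability point in symbols (a rough sketch, reusing the 10% figure from your example):

$$P(\text{aligned with A} \mid \text{applies to } P) > P(\text{aligned with A}) = 0.1,$$

since applying to an A-focused position is itself evidence of prioritizing A. If the multiplier roughly tracks this conditional probability rather than the population base rate, it should come out well above 0.1.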

Comment by antimonyanthony on Should Longtermists Mostly Think About Animals? · 2020-02-04T14:46:49.669Z · EA · GW

Got it, so if I'm understanding things correctly, the claim is not that many longtermists are necessarily neglecting x-risks that uniquely affect wild animals, just that they are disproportionately prioritizing risks that uniquely affect humans? That sounds fair, though like other commenters here the crux that makes me not fully endorse this conclusion is that I think, in expectation, artificial sentience could be larger than that of organic humans and wild animals combined. I agree with your assessment that this isn't something that many (non-suffering-focused) longtermists emphasize in common arguments, though; the focus is still on humans.

Comment by antimonyanthony on What are the challenges and problems with programming law-breaking constraints into AGI? · 2020-02-04T13:55:00.115Z · EA · GW
I feel like the word "values" makes this sound more complex than it is, and I'd say we instead want the agent to understand and act in line with what the human wants / intends.

Doesn’t “wants / intends” make this sound less complex than it is? To me this phrasing connotes (not to say you actually believe this) that the goal is for AIs to understand short-term human desires, without accounting for ways in which our wants contradict what we would value in the long term, or ways that individuals’ wants can conflict. Once we add caveats like “what we would want / intend after sufficient rational reflection,” my sense is that “values” just captures that more intuitively. I haven’t surveyed people on this, though, so this definitely isn’t a confident claim on my part.

Comment by antimonyanthony on Should Longtermists Mostly Think About Animals? · 2020-02-04T02:56:33.240Z · EA · GW

Great post, Abraham!

You mention "preventing x-risks that pose specific threats to animals over those that only pose threats to humans" - which examples of this did you have in mind? It's hard for me to imagine a risk factor for extinction of all nonhuman wildlife that wouldn't also apply to humans, aside from perhaps an asteroid that humans could avoid by going to some other planet but humans would not choose to protect wild animals from by bringing them along. Though I haven't spent much time thinking about non-AI x-risks so it's likely the failure is in my imagination.

I think it's also worth noting that the takeaway from this essay could be that x-risk to humans is primarily bad not because of effects on us/our descendants, but because of the wild animal suffering that would not be relieved in our absence. I'm not sure this would make much difference to the priorities of classical utilitarians, but it's an important consideration if reducing suffering is one's priority.

Comment by antimonyanthony on EA Survey 2019 Series: Community Demographics & Characteristics · 2020-01-05T14:43:05.416Z · EA · GW

If I remember correctly, the 2019 survey asked about utilitarians' identification as classical vs. negative utilitarian, plus some other distinctions. Will those results be included in a future post? I'm very curious to see them.

Comment by antimonyanthony on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T02:24:28.295Z · EA · GW

I can also imagine being persuaded that AI alignment research is as important as I think but something else is even more important, like maybe s-risks or some kind of AI coordination thing.

Huh, my impression was that the most plausible s-risks we can sort-of-specifically foresee are AI alignment problems - do you disagree? Or is this statement referring to s-risks as a class of black swans for which we don't currently have specific imaginable scenarios, but if those scenarios became more identifiable you would consider working on them instead?

Comment by antimonyanthony on We should choose between moral theories based on the scale of the problem · 2019-11-06T14:39:21.331Z · EA · GW

While I see the intuitive appeal of this idea, it honestly seems a bit ad hoc. The physics analogy is interesting, yes, but we should be careful not to mistake the practical usefulness of local level deontology or virtue ethics for an actual normative difference between levels. If we just accept the local heuristics as useful for social cohesion etc. without critically assessing whether we could do better, we run the risk of not actually improving sentient experience - just rationalizing standards that mainly exist because they were evolutionarily expedient, or maintain some power structure.

To be more specific, it's very much an open question whether trying to be a "good" friend/family member, in ways that significantly privilege your friends/family over others, actually achieves more good in the long run. It seems very unlikely to me that, say, (A) buying or making a few hundred dollars' worth of presents for people during holidays (reciprocated with similar presents, many of which in my experience honestly haven't been worth the money even though I appreciate their thought) makes the world a better place than (B) spending that money/time on the seemingly cold utilitarian choice.

The usual objection to this is that B weakens social bonds or makes people trust you less. But: (1) from the perspective of the people or animals you'd be helping by choosing B, those bonds and small degrees of weakened trust would probably seem paltry and frivolous by comparison to their suffering. There also doesn't seem to be much robust evidence supporting this claim anyway; it's just an intuition I've seen repeated without justification. (2) It's possible that this is one of several social norms that we can change over time by challenging the assumption that it's eternal; in the short run, perhaps people think of you as cold or weird, but if enough people follow suit, maybe refusing to waste money on trivialities for holidays could become normal. Omnivores have argued that veganism threatens social bonds and the (particularly American) culture of eating meat together; cf. this article. I think that argument is self-evidently weak in the face of great animal suffering, so analogously it isn't a stretch to suppose that deontological norms we currently consider necessary for social cohesion are disposable, if we challenge them.

Comment by antimonyanthony on Defending the Procreation Asymmetry with Conditional Interests · 2019-10-18T11:37:57.280Z · EA · GW

I think the following is a typo:

not coming to exist at all would be strictly worse than coming to exist with non-maximal utility

The transitivity argument you presented shows that it's strictly better.

Nitpicks aside, thank you for sharing these ideas! I think identifying that interests (or desires associated with experiences) are the morally relevant objects rather than persons is crucial.

Comment by antimonyanthony on [deleted post] 2019-10-06T17:06:57.649Z
A better alternative is to recognize that our own future selves, and our descendants, will be able to "debug" the unpredictable consequences of the actions we take and systems we create. They can do this by creating sustainable alternatives, building resiliency, and improving their planning and evaluation. They will be motivated by self-interest to do so, and enabled by their increasing knowledge. [emphasis mine]

This point doesn't hold in the case of animal welfare. This might seem like a minor nitpick on my part, but for EAs who prioritize animal welfare yet are also concerned about long-term effects, it's a pretty crucial thing to note. Indeed I'd suspect that taking an approach of going with what seems best right now (without more thoroughly investigating the long-term consequences that we could in principle discover upon reflection) could harm the reputation of animal welfare activism, because this would seem especially reckless given that animals aren't in a position to save themselves from the negative consequences of our choices.

An analogous point holds more weakly even for human-centric causes, I think. Just because future humans will be in a position to debug interventions we make in the present, that doesn't make it prudent for us to neglect the work of considering the (often conflicting) long-term effects that we could identify if we worked harder. I worry that this attitude places a burden on future people that they didn't ask for, unless I'm misunderstanding your general claim.

Comment by antimonyanthony on How much EA analysis of AI safety as a cause area exists? · 2019-09-20T13:22:53.668Z · EA · GW

I'm not aware of such summaries, but I'll take a stab at it here:

Even though it's possible for the expected disvalue of a very improbable outcome to be high if the outcome is sufficiently awful, the relatively large degree of investment in AI safety work by the EA community today would only make sense if the probability of AI-catalyzed GCR were decently high. This Open Phil post for example doesn't frame this as a "yes it's extremely unlikely, but the downsides could be massive, so in expectation it's worth working on" cause; many EAs in general give estimates of a non-negligible probability of very bad AI outcomes. So, accordingly, AI is considered not only a viable cause to work on but indeed one of the top priorities.

But arguably the scenarios in which AGI becomes a catastrophic threat rely on a conjunction of several improbable assumptions, one of which is that general "intelligence" - in the sense of a capacity to achieve goals on a global scale, rather than merely a capacity to solve problems easily representable within e.g. a Markov decision process - is something that computers can develop without a long process of real-world trial and error, or cooperation in the human economy. (If such a process is necessary, then humans should be able to stop potentially dangerous AIs in their tracks before they become too powerful.) The key takeaway from the essay, as far as I could tell, was that we should be cautious about using one definition of intelligence, i.e. the sort that deep RL algorithms have demonstrated in game settings, as grounds for predicting dangerous outcomes from a much more difficult-to-automate sense of intelligence, namely the ability to achieve goals in physical reality.

The actual essay is more subtle than this, of course, and I'd definitely encourage people to at least skim it before dismissing the weaker form of the argument I've sketched here. But I agree that the AI safety research community has a responsibility to make that connection between current deep learning "intelligence" and intelligence-as-power more explicit, otherwise it's a big equivocation fallacy.

Magnus, is this a fair representation?

Comment by antimonyanthony on The Long-Term Future: An Attitude Survey · 2019-09-18T15:08:52.431Z · EA · GW

I see, thank you - wasn't sure what might have been hidden in "Other." :)

Comment by antimonyanthony on The Long-Term Future: An Attitude Survey · 2019-09-17T23:54:05.849Z · EA · GW

"Trivial, but in a Derek Parfit way" is honestly the highest compliment I could ever receive.

Comment by antimonyanthony on The Long-Term Future: An Attitude Survey · 2019-09-17T21:57:35.925Z · EA · GW

Question 13 seems under-specified to me, specifically this part: "Their members are equally happy." Does this mean their level of welfare is the same, but it could be at any level for the purposes of this question? Does the use of "happy" in particular mean the question assumes this constant level of welfare is net positive? Could the magnitudes of happiness and suffering differ between people as long as the "net welfare" is positive, assuming it's possible to make that aggregation?

I think these questions matter because they influence whether you interpret the answers as reflecting population-ethical views, or other things like the respondents' beliefs about the moral weight of happiness vs suffering. Someone could coherently accept totalism yet consider the smaller world better if, for instance, they think the larger number of extreme-suffering cases in the bigger population (simply because there are more people for whom things could go very wrong) makes it worse.

A priori I expect suffering-focused intuitions to be in the minority, but in any case it's not obvious that the answers to #13 reveal non-totalist or irrational population ethics among the respondents.

Comment by antimonyanthony on Does improving animal rights now improve the far future? · 2019-09-16T22:21:38.042Z · EA · GW

My guess is the figure is so small at least partly because of an assumption that the default expected value of the far future is high already. If this is the case, then someone who expects disvalue to be far more prominent in the future all else equal will consider this increase in humane values much more important, relatively speaking.

Comment by antimonyanthony on How much EA analysis of AI safety as a cause area exists? · 2019-09-14T15:15:48.584Z · EA · GW

While I disagree with his conclusion and support FRI's approach to reducing AI s-risks, Magnus Vinding's essay "Why Altruists Should Perhaps Not Prioritize Artificial Intelligence" is one of the most thoughtful EA analyses against prioritizing AI safety I'm aware of. I'd say it fits into the "Type A and meets OP's criterion" category.

Comment by antimonyanthony on Can/should we define quick tests of personal skill for priority areas? · 2019-06-11T15:16:47.428Z · EA · GW

Agree on the "should" part! As for "can": a potentially valuable side project someone (perhaps myself, with the extra time I'll have on my hands before grad school) might want to try is looking for empirical predictors of success in priority fields. Something along these lines, although unfortunately the linked paper's formula wouldn't be of much use to people who haven't already entered academia.