Posts

[linkpost] When does technical work to reduce AGI conflict make a difference?: Introduction 2022-09-16T14:35:18.434Z
A longtermist critique of “The expected value of extinction risk reduction is positive” 2021-07-01T21:01:20.942Z
antimonyanthony's Shortform 2020-09-19T16:05:02.590Z

Comments

Comment by antimonyanthony on My take on What We Owe the Future · 2022-09-24T09:48:15.991Z · EA · GW

I'm fine with other phrasings and am also concerned about value lock-in and s-risks, though I think these can be thought of as a class of x-risks.

I'm not keen on classifying s-risks as x-risks because, for better or worse, most people really just seem to mean "extinction or permanent human disempowerment" when they talk about "x-risks." I worry that a motte-and-bailey can happen here, where (1) people include s-risks within x-risks when trying to get people on board with focusing on x-risks, but then (2) their further discussion of x-risks basically equates them with non-s-x-risks. The fact that the "dictionary definition" of x-risks would include s-risks doesn't solve this problem.

Comment by antimonyanthony on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-09-17T10:14:32.585Z · EA · GW

e.g. 2 minds with equally passionate complete enthusiasm  (with no contrary psychological processes or internal currencies to provide reference points) respectively for and against their own experience, or gratitude and anger for their birth (past or future).  They  can respectively consider a world with and without their existences completely unbearable and beyond compensation. But if we're in the business of helping others for their own sakes rather than ours, I don't see the case for excluding either one's concern from our moral circle.

... 

But when I'm in a mindset of trying to do impartial good I don't see the appeal of ignoring those who would desperately, passionately want to exist, and their gratitude in worlds where they do.

I don't really see the motivation for this perspective. In what sense, or to whom, is a world without the existence of the very happy/fulfilled/whatever person "completely unbearable"? Who is "desperate" to exist? Obviously not the person themselves; they wouldn't exist in that counterfactual. (Concern for reducing the suffering of beings who actually feel desperation is, clearly, consistent with pure negative utilitarianism (NU), but by hypothesis this is set aside.)

To me, the clear case for excluding intrinsic concern for those happy moments is:

  • "Gratitude" just doesn't seem like compelling evidence in itself that the grateful individual has been made better off. You have to compare to the counterfactual. In everyday cases with existing people, gratitude is relevant insofar as the grateful person would otherwise have been dissatisfied with their state of deprivation. But that doesn't apply to people who wouldn't feel any deprivation in the counterfactual, because they wouldn't exist.
  • I take it that the thrust of your argument is, "Ethics should be about applying the same standards we apply across people as we do for intrapersonal prudence." I agree. And I also find the arguments for empty individualism convincing. Therefore, I don't see a reason to trust as ~infallible the judgment of a person at time T that the bundle of experiences of happiness and suffering they underwent in times T-n, ..., T-1 was overall worth it. They're making an "interpersonal" value judgment, which, despite being informed by clear memories of the experiences, still isn't incorrigible. Their positive evaluation of that bundle can be debunked by, say, this insight from my previous bullet point that the happy moments wouldn't have felt any deprivation had they not existed.
    • In any case, I find upon reflection that I don't endorse tradeoffs of contentment for packages of happiness and suffering for myself. I find I'm generally more satisfied with my life when I don't have the "fear of missing out" that a symmetric axiology often implies. Quoting myself:

Another takeaway is that the fear of missing out seems kind of silly. I don’t know how common this is, but I’ve sometimes felt a weird sense that I have to make the most of some opportunity to have a lot of fun (or something similar), otherwise I’m failing in some way. This is probably largely attributable to the effect of wanting to justify the “price of admission” (I highly recommend the talk in this link) after the fact. No one wants to feel like a sucker who makes bad decisions, so we try to make something we’ve already invested in worth it, or at least feel worth it. But even for opportunities I don’t pay for, monetarily or otherwise, the pressure to squeeze as much happiness from them as possible can be exhausting. When you no longer consider it rational to do so, this pressure lightens up a bit. You don’t have a duty to be really happy. It’s not as if there’s a great video game scoreboard in the sky that punishes you for squandering a sacred gift.

Comment by antimonyanthony on Puzzles for Everyone · 2022-09-11T16:01:41.551Z · EA · GW

...Having said that, I do think the "deeper intuition that the existing Ann must in some way come before need-not-ever-exist-at-all Ben" plausibly boils down to some kind of antifrustrationist or tranquilist intuition. Ann comes first because she has actual preferences (/experiences of desire) that get violated when she's deprived of happiness. Not creating Ben doesn't violate any preferences of Ben's.

Comment by antimonyanthony on Puzzles for Everyone · 2022-09-11T15:48:07.678Z · EA · GW

certainly don't reflect the kinds of concerns expressed by Setiya that I was responding to in the OP

I agree. I happen to agree with you that the attempts to accommodate the procreation asymmetry without lexically disvaluing suffering don't hold up to scrutiny. Setiya's critique missed the mark pretty hard, e.g. this part just completely ignores that this view violates transitivity:

But the argument is flawed. Neutrality says that having a child with a good enough life is on a par with staying childless, not that the outcome in which you have a child is equally good regardless of their well-being. Consider a frivolous analogy: being a philosopher is on a par with being a poet—neither is strictly better or worse—but it doesn’t follow that being a philosopher is equally good, regardless of the pay.

Comment by antimonyanthony on Puzzles for Everyone · 2022-09-11T08:40:16.678Z · EA · GW

appeal to some form of partiality or personal prerogative seems much more appropriate to me than denying the value of the beneficiaries

I don't think this solves the problem, at least if one has the intuition (as I do) that it's not the current existence of the people who are extremely harmed to produce happy lives that makes this tradeoff "very repugnant." It doesn't seem any more palatable to allow arbitrarily many people in the long-term future (rather than the present) to suffer for the sake of sufficiently many more added happy lives. Even if those lives aren't just muzak and potatoes, but very blissful. (One might think that is "horribly evil" or "utterly disastrous," and it isn't just a theoretical concern either, because in practice increasing the extent of space settlement would in expectation enable both many miserable lives and many more blissful lives.)

ETA: Ideally I'd prefer these discussions not involve labels like "evil" at all. Though I sympathize with wanting to treat this with moral seriousness!

Comment by antimonyanthony on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-09-07T11:26:23.725Z · EA · GW
I think such views have major problems, but I don’t talk about those problems in the book. (Briefly: If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion lives of bliss. The lexical view says you should do the former. This seems wrong, and I think doesn’t hold up under moral uncertainty, either. There are ways of avoiding the problem, but they run into other issues.)

It really isn't clear to me that the problem you sketched is so much worse than the problems with total symmetric, average, or critical-level axiology, or the "intuition of neutrality." In fact this conclusion seems much less bad than the Sadistic Conclusion or variants of that, which affect the latter three. So I find it puzzling how much attention you (and many other EAs writing about population ethics and axiology generally; I don't mean to pick on you in particular!) devoted to those three views. And I'm not sure why you think this problem is so much worse than the Very Repugnant Conclusion (among other problems with outweighing views), either.

I sympathize with the difficulty of addressing so much content in a popular book. But this is a pretty crucial axiological debate that's been going on in EA for some time, and it can determine which longtermist interventions someone prioritizes.

Comment by antimonyanthony on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-09-05T12:15:49.848Z · EA · GW
The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”,  “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.

You seem to be using a different definition of the Asymmetry than Magnus is, and I'm not sure it's a much more common one. On Magnus's definition (which is also used by e.g. Chappell; Holtug, Nils (2004), "Person-affecting Moralities"; and McMahan (1981), "Problems of Population Theory"), bringing into existence lives that have "positive wellbeing" is at best neutral. It could well be negative.

The kind of Asymmetry Magnus is defending here doesn't imply the intuition of neutrality, and so isn't vulnerable to your critiques of that intuition, such as that it violates transitivity or relies on a confused concept of necessarily existing people.

Comment by antimonyanthony on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-28T11:06:04.173Z · EA · GW
Are you saying that from your and Teo's POVs, there's a way to 'improve a mental state' that doesn't amount to decreasing suffering (/preventing it)?

No, that's precisely what I'm denying. So, the reason I mentioned that "arbitrary" view was that I thought Jack might be conflating my/Teo's view with one that (1) agrees that happiness intrinsically improves a mental state, but (2) denies that improving a mental state in this particular way is good (while improving a mental state via suffering-reduction is good).

Such an understanding seems plausible in a self-intimating way when one valence state transitions to the next, insofar as we concede that there are states of more or less pleasure outside of negatively valenced states.

It's prima facie plausible that there's an improvement, sure, but upon reflection I don't think my experience that happiness has varying intensities implies that moving from contentment to more intense happiness is an improvement. Analogously, you can increase the complexity and artistic sophistication of some painting, say, but if no one ever observes it (which I'm comparing to no one suffering from the lack of more intense happiness), there's no "improvement" to the painting.

It seems that one could do this all the while maintaining that such improvements are never capable of outweighing the mitigation of problematic, suffering states.

You could, yeah, but I think "improvement" has such a strong connotation to most people that something of intrinsic value has been added. So I'd worry that using that language would be confusing, especially to welfarist consequentialists who think (as seems really plausible to me) that you should do an act to the extent that it improves the state of the world.

Comment by antimonyanthony on antimonyanthony's Shortform · 2022-08-27T10:00:24.212Z · EA · GW

Some things I liked about What We Owe the Future, despite my disagreements with the treatment of value asymmetries:

  • The thought experiment of imagining that you live one big super-life composed of all sentient beings’ experiences is cool, as a way of probing moral intuitions. (I'd say this kind of thought experiment is the core of ethics.)
    • It seems better than e.g. Rawls’ veil of ignorance because living all lives (1) makes it more salient that the possibly rare extreme experiences of some lives still exist even if you're (un)lucky enough not to go through them, and (2) avoids favoring average-utilitarian intuitions.
  • Although the devil is very much in the details of what measure of (dis)value the total view totals up, the critiques of average, critical level, and symmetric person-affecting views are spot-on.
  • There's some good discussion of avoiding lock-in of bad (/not-reflected-upon) values as a priority that most longtermists can get behind.
    • I was already inclined to think dominant values can be very contingent on factors that don't seem ethically relevant, like differences in reproduction rates (biological or otherwise) or flukes of power imbalances. So I didn't update much from reading about this. But I have the impression that many longtermists are a bit too complacent about future people converging to the values we'd endorse with proper reflection (strangely, even when they're less sympathetic to moral realism than I am). And the vignettes about e.g. Benjamin Lay were pretty inspiring.
  • Relatedly, it's great that premature space settlement is acknowledged as a source of lock-in / reduction of option value. Lots of discourse on longtermism seems to gloss over this.
Comment by antimonyanthony on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-27T08:13:11.891Z · EA · GW

I think one crux here is that Teo and I would say, calling an increase in the intensity of a happy experience "improving one's mental state" is a substantive philosophical claim. The kind of view we're defending does not say something like, "Improvements of one's mental state are only good if they relieve suffering." I would agree that that sounds kind of arbitrary.

The more defensible alternative is that replacing contentment (or absence of any experience) with increasingly intense happiness / meaning / love is not itself an improvement in mental state. And this follows from intuitions like "If a mind doesn't experience a need for change (and won't do so in the future), what is there to improve?"

Comment by antimonyanthony on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-26T22:14:51.558Z · EA · GW
Is it thought experiments such as the ones Magnus has put forward? I think these argue that alleviating suffering is more pressing than creating happiness, but I don't think these argue that creating happiness isn't good.

I think they do argue that creating happiness isn't intrinsically good, because you can always construct a version of the Very Repugnant Conclusion (VRC) that applies to a view on which suffering is weighed some finite X times more than happiness, and I find those versions almost as repugnant. E.g., suppose that on classical utilitarianism we prefer to create 100 purely miserable lives plus some large number N of micro-pleasure lives over creating 10 purely blissful lives. On this new view, we'd prefer to create 100 purely miserable lives plus X*N micro-pleasure lives over the 10 purely blissful lives. Another variant you could try is a symmetric lexical view where only sufficiently blissful experiences are allowed to outweigh misery. But while some people find that dissolves the repugnance of the VRC, I can't say the same.
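(To make the scaling explicit, here is a toy calculation, with per-life welfare numbers I've made up purely for illustration, nothing from the book or this thread; it just shows that a finite weight X raises the threshold on N without removing it.)

    % Hypothetical units: each miserable life = -100, each blissful life = +100,
    % each micro-pleasure life = +\varepsilon, all on one common scale.
    \begin{aligned}
    \text{Unweighted total view: } & -100 \cdot 100 + N\varepsilon > 10 \cdot 100 \iff N > 11{,}000/\varepsilon \\
    \text{Suffering weighted by } X\text{: } & -10{,}000X + N\varepsilon > 1{,}000 \iff N > (10{,}000X + 1{,}000)/\varepsilon
    \end{aligned}

So for any finite X, roughly X times as many micro-pleasure lives recreate the same structure of the conclusion.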

Increasing the X, or introducing lexicalities, to try to escape the VRC just misses the point, I think. The problem is that (even super-awesome/profound) happiness is treated as intrinsically commensurable with miserable experiences, as if giving someone else happiness in itself solves the miserable person's urgent problem. That's just fundamentally opposed to what I find morally compelling.

(I like the monk example given in the other response to your question, anywho. I've written about why I find strong SFE compelling elsewhere, like here and here.)

You could try to use your pareto improvement argument here i.e. that it's better if parents still have a preference for their child not to have been killed, but also not to feel any sort of pain related to it.

Yeah, that is indeed my response; I have basically no sympathy to the perspective that considers the pain intrinsically necessary in this scenario, or any scenario. This view seems to clearly conflate intrinsic with instrumental value. "Disrespect" and "grotesqueness" are just not things that seem intrinsically important to me, at all.

having a preference that the child wasn't killed, but also not feeling any sort of hedonic pain about it...is this contradictory?

Depends how you define a preference, I guess, but the point of the thought experiment is to suspend your disbelief about the flow-through effects here. Just imagine that literally nothing changes about the world other than that the suffering is relieved. This seems so obviously better than the default that I'm at a loss for a further response.

Comment by antimonyanthony on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-25T20:06:51.628Z · EA · GW

This only applies to flavors of the Asymmetry that treat happiness as intrinsically valuable, such that you would pay to add happiness to a "neutral" life (without relieving any suffering by doing so). If the reason you don't consider it good to create new lives with more happiness than suffering is that you don't think happiness is intrinsically valuable, at least not at the price of increasing suffering, then you can't get Dutch booked this way. See this comment.

Comment by antimonyanthony on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-25T19:58:29.480Z · EA · GW

I didn't directly respond to the other one because the principle is exactly the same. I'm puzzled that you think otherwise.

Removing their sadness at separation while leaving their desire to be together intact isn't a clear Pareto improvement unless one already accepts that pain is what is bad.

I mean, in thought experiments like this all one can hope for is to probe intuitions that you either do or don't have. It's not question-begging on my part because my point is: Imagine that you can remove the cow's suffering but leave everything else practically the same. (This, by definition, assesses the intrinsic value of relieving suffering.) How could that not be better? It's a Pareto improvement because, contra the "drugged into happiness" image, the idea is not that you've relieved the suffering but thwarted the cow's goal to be reunited with its child; the goals are exactly the same, but the suffering is gone, and it just seems pretty obvious to me that that's a much better state of the world.

Comment by antimonyanthony on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-24T07:47:45.036Z · EA · GW

Here's another way of saying my objection to your original comment: What makes "happiness is intrinsically good" more of an axiom than "sufficiently intense suffering is morally serious in a sense that happiness (of the sort that doesn't relieve any suffering) isn't, so the latter can't compensate for the former"? I don't see what answer you can give that doesn't appeal to intuitions about cases.

Comment by antimonyanthony on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-24T07:38:35.516Z · EA · GW
  • That case does run counter to "suffering is intrinsically bad but happiness isn't," but it doesn't run counter to "suffering is bad," which is what your last comment asked about. I don't see any compelling reasons to doubt that suffering is bad, but I do see some compelling reasons to doubt that happiness is good.
  • That's just an intuition, no? (i.e. that everyone painlessly dying would be bad.) I don't really understand why you want to call it an "axiom" that happiness is intrinsically good, as if this is stronger than an intuition, which seemed to be the point of your original comment.
  • See this post for why I don't think the case you presented is decisive against the view I'm defending.
Comment by antimonyanthony on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-24T07:24:29.152Z · EA · GW

For all practical purposes suffering is dispreferred by beings who experience it, as you know, so I don't find this to be a counterexample. When you say you don't want someone to make you less sad about the problems in the world, it seems like a Pareto improvement would be to relieve your sadness without changing your motivation to solve those problems—if you agree, it seems you should agree the sadness itself is intrinsically bad.

Comment by antimonyanthony on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-24T06:59:05.246Z · EA · GW

No, I know of no thought experiments or any arguments generally that make me doubt that suffering is bad. Do you?

Comment by antimonyanthony on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-24T06:01:52.512Z · EA · GW

On a really basic level my philosophical argument would be that suffering is bad, and pleasure is good (the most basic of ethical axioms that we have to accept to get consequentialist ethics off the ground).

It seems like you're just relying on your intuition that pleasure is intrinsically good, and calling that an axiom we have to accept. I don't think we have to accept that at all — rejecting it does have some counterintuitive consequences, I won't deny that, but so does accepting it. It's not at all obvious (and Magnus's post points to some reasons we might favor rejecting this "axiom").

Comment by antimonyanthony on The Repugnant Conclusion Isn't · 2022-08-23T12:09:59.970Z · EA · GW

This is how Parfit formulated the Repugnant Conclusion, but as it's usually invoked in population ethics discussions about the (de)merits of total symmetric utilitarianism, it need not be the case that the muzak-and-potatoes lives never suffer.

The real RC that some kinds of total views face is that world A with lives of much more happiness than suffering is worse than world Z with more lives of just barely more happiness than suffering. How repugnant this is, for some people like myself, depends on how much happiness or suffering is in those lives on each side. I wrote about this here and here.

Comment by antimonyanthony on [link post] The Case for Longtermism in The New York Times · 2022-08-06T09:30:39.291Z · EA · GW
which goes against the belief in a net-positive future upon which longtermism is predicated

Longtermism per se isn't predicated on that belief at all—if the future is net-negative, it's still (overwhelmingly) important to make future lives less bad.

Comment by antimonyanthony on Confused about "making people happy" vs. "making happy people" · 2022-07-18T18:21:00.430Z · EA · GW
But I want to be clear that this normative disagreement isn't evidence of any philosophical defect on our part.

Oh I absolutely agree with this. My objections to that quote have no bearing on how legitimate your view is, and I never claimed as much. What I find objectionable is that by using such dismissive language about the view you disagree with, not merely critical language, you're causing harm to population ethics discourse. Ideally readers will form their views on this topic based on their merits and intuitions, not based on claims that views are "too divorced from humane values to be worth taking seriously."

complaining that we didn't preface every normative claim with the tedious disclaimer "in our opinion"

Personally I don't think you need to do this.

This sociological claim isn't philosophically relevant.  There's nothing inherently objectionable about concluding that some people have been mistaken in their belief that a certain view is worth taking seriously.  There's also nothing inherently objectionable about making claims that are controversial.

Again, I didn't claim that your dismissiveness bears on the merit of your view. The objectionable thing is that you're confounding readers' perceptions of the views with labels like "[not] worth taking seriously." The fact that many people do take this view seriously suggests that that kind of label is uncharitable. (I suppose I'm not opposed in principle to being dismissive to views that are decently popular—I would have that response to the view that animals don't matter morally, for example. But what bothers me about this case is partly that your argument for why it's not worth taking seriously is pretty unsatisfactory.)

I'm certainly not calling for you to pass no judgments whatsoever on philosophical views, and "merely report on others' arguments," and I don't think a reasonable reading of my comment would lead you to believe that.

And certainly if we're making philosophical errors, or overlooking important counterarguments, I'm happy to have any of that drawn to my attention.

Indeed, I gave substantive feedback on the Population Ethics page a few months back, and hope you and your coauthors take it into account. :)

Comment by antimonyanthony on Confused about "making people happy" vs. "making happy people" · 2022-07-18T11:32:45.833Z · EA · GW

It seems like you're conflating the following two views:

  1. Utilitarianism.net has an obligation to present views other than total symmetric utilitarianism in a sympathetic light.
  2. Utilitarianism.net has an obligation not to present views other than total symmetric utilitarianism in an uncharitable and dismissive light.

I would claim #2, not #1, and presumably so would Michael. The quote about nihilism etc. is objectionable because it's not just unsympathetic to such views, it's condescending. Clearly many people who have reflected carefully about ethics think these alternatives are worth taking seriously, and it's controversial to claim that "humane values" necessitate wanting to create happy beings de novo even at some (serious) opportunity cost to suffering. "Nihilistic" also connotes something stronger than denying positive value.

Comment by antimonyanthony on Confused about "making people happy" vs. "making happy people" · 2022-07-17T13:27:57.951Z · EA · GW
One is that views of the "making people happy" variety basically always wind up facing structural weirdness when you formalize them. It was my impression until recently that all of these views imply intransitive preferences (i.e. something like A>B>C>A), until I had a discussion with Michael St Jules in which he pointed out more recent work that instead denies the independence of irrelevant alternatives.

It depends on whether by valuing "making people happy" one means (1) intrinsically valuing adding happiness to existing people's lives, or (2) valuing "making them happy" in the sense of relieving their suffering (practically, this is often what happiness does for people). I agree that violations of transitivity or IIA seem inevitable for views of type (1), and that's pretty bad.

But (2) is an alternative that I think has gotten weirdly sidelined in (EA) population axiology discourse. If some person is completely content and has no frustrated desires (state A), I don't see any moral obligation to make them happier (state B), so I don't violate transitivity by saying the world is made no better by adding a person in state A and also no better by adding a person in state B. I suspect lots of people's "person-affecting" intuitions really boil down to the intuition that preferences that don't exist—and will not exist—have no need to be fulfilled, as you allude to in your last big paragraph:

A frustrated interest exists in the timeline it is frustrated in, and so any ethics needs to care about it. A positive interest (i.e. having something even better than an already good or neutral state) does not exist in a world in which it isn't brought about, so it doesn't provide reasons to that world in the same way
Comment by antimonyanthony on Questioning the Value of Extinction Risk Reduction · 2022-07-07T21:07:44.187Z · EA · GW
Second, I might be mistaken about what this agent’s choice would be. For instance, perhaps the lake is so cold that the pain of jumping in is of greater moral importance than any happiness I obtain.

Yeah, I think this is pretty plausible at least for sufficiently horrible forms of suffering (and probably all forms, upon reflection on how bad the alternative moral views are, IMO). In my current state of comfort, I doubt my common-sense intuitions about bundles of happiness and suffering can properly empathize with the suffering-moments.

But given you said the point above, I'm a bit surprised you also said this:

One of the following three things is true:
(1) One would not accept a week of the worst torture conceptually possible in exchange for an arbitrarily large amount of happiness for an arbitrarily long time.
(2) One would not accept such a trade, but believes that a perfectly rational, self-interested hedonist would accept it ...
(3) One would accept such a trade, and further this belief is predicated on the existence of compelling arguments in favor of proposition (i).

What about "(4): One would accept such a trade, but believes that a perfectly rational, self-interested hedonist would not accept it"?

Comment by antimonyanthony on Critiques of EA that I want to read · 2022-06-20T21:04:51.213Z · EA · GW
There is a defense of ideas related to your position here

For the record I also don't find that post compelling, and I'm not sure how related it is to my point. I think you can coherently think that the moral truth is consistent (and that ethics is likely to not be consistent if there is no moral truth), but be uncertain about it. Analogously I'm pretty uncertain what the correct decision theory is, and think that whatever that decision theory is, it would have to be self-consistent.

Comment by antimonyanthony on Critiques of EA that I want to read · 2022-06-20T20:53:18.532Z · EA · GW
I also would be interested in seeing someone compare the tradeoffs on non-person-affecting views vs person-affecting views. E.g. person-affecting views might entail X weirdness, but maybe X weirdness is better to accept than the repugnant conclusion, etc.

Agreed—while I expect people's intuitions on which is "better" to differ, a comprehensive accounting of which bullets different views have to bite would be a really handy resource. By "comprehensive" I don't mean literally every possible thought experiment, of course, but something that gives a sense of the significant considerations people have thought of. Ideally these would be organized in such a way that it's easy to keep track of which cases that bite different views are relevantly similar, and there isn't double-counting.

Comment by antimonyanthony on Critiques of EA that I want to read · 2022-06-20T10:55:49.573Z · EA · GW

Also, moral realism seems more predictive of ethics being consistent, not less. (Not consistent with our unreflected intuitions, though.)

Comment by antimonyanthony on The ordinal utility argument against effective altruism · 2022-06-14T09:15:38.321Z · EA · GW

I'm confused — welfare economics seems premised on the view that interpersonal comparisons of utility are possible. In any case, ethics =/= economics; comparisons of charity effectiveness aren't assessing interpersonal "utility" in the sense of VNM preferences; they're concerned with "utility" in the sense of, e.g., hedonic states, life satisfaction, so-called objective lists, and so on.

Comment by antimonyanthony on antimonyanthony's Shortform · 2022-06-13T22:21:38.973Z · EA · GW

No, longtermism is not redundant

I’m not keen on the recent trend of arguments that persuading people of longtermism is unnecessary, or even counterproductive, for encouraging them to work on certain cause areas (e.g., here, here). This is for a few reasons:

  • It’s not enough to believe that extinction risks within our lifetimes are high, and that extinction would constitute a significant moral problem purely on the grounds of harms to existing beings. Arguments for the tractability of reducing those risks, sufficient to outweigh the nearterm good done by focusing on global human health or animal welfare, seem lacking in the arguments I’ve seen for prioritizing extinction risk reduction on non-longtermist grounds.
    • Take the AI alignment problem as one example (among the possible extinction risks, I’m most familiar with this one). I think it’s plausible that the collective efforts of alignment researchers and people working on governance will prevent extinction, though I’m not prepared to put a number on this. But as far as I’ve seen, there haven’t been compelling cost-effectiveness estimates suggesting that the marginal dollar or work-hour invested in alignment is competitive with GiveWell charities or interventions against factory farming, from a purely neartermist perspective. (Shulman discusses this in this interview, but without specifics about tractability that I would find persuasive.)
  • More importantly, not all longtermist cause areas concern risks that would befall currently existing beings. MacAskill discusses this a bit here, including the importance of shaping the values of the future rather than (I would say "complacently") supposing things will converge towards a utopia by default. Near-term extinction risks do seem likely to be the most time-sensitive thing that non-downside-focused longtermists would want to prioritize. But again, tractability makes a difference, and for those who are downside-focused, there simply isn't this convenient convergence between near- and long-term interventions. As far as I can tell, s-risks affecting beings in the near future fortunately seem highly unlikely.
Comment by antimonyanthony on The ordinal utility argument against effective altruism · 2022-06-12T15:30:26.136Z · EA · GW

I think this is just an equivocation on "utility." Utility in the ethical sense is not identical to the "utility" of von Neumann-Morgenstern utility functions.

Comment by antimonyanthony on The psychology of population ethics · 2022-06-05T12:46:51.554Z · EA · GW

It's notable that a pilot study (N = 172, compared to N = 474 for the results given in Fig. 1) discussed in the supplementary materials of this paper suggests a stronger suffering/happiness asymmetry in people's intuitions about creating populations. For example, in response to the question, "Suppose you could push a button that created a new world with X people who are generally happy and 10 people who generally suffer. How high would X have to be for you to push the button?", the median response was X = 1000.

Comment by antimonyanthony on My list of effective altruism ideas that seem to be underexplored · 2022-06-01T20:08:52.403Z · EA · GW
For a mundane example, imagine I'm ambivalent about mini-golfing. But you know me, and you suspect I'll love it, so you take me mini-golfing. Afterwards, I enthusiastically agree that you were right, and I loved mini-golfing.

It seems you can accommodate this just as well, if not better, within a hedonistic view—you didn't prefer to go mini-golfing, but mini-golfing made you happier once you tried it, so that's why you endorse people encouraging you to try new things. (Although I'm inclined to say, it really depends on what you would've otherwise done with your time instead of mini-golfing, and if someone is fine not wanting something, it's reasonable to err on the side of not making them want it.)

Comment by antimonyanthony on antimonyanthony's Shortform · 2022-05-15T18:46:48.888Z · EA · GW

In Defense of Aiming for the Minimum

I’m not really sympathetic to the following common sentiment: “EAs should not try to do as much good as feasible at the expense of their own well-being / the good of their close associates.”

It’s tautologically true that if trying to hyper-optimize comes at too much of a cost to the energy you can devote to your most important altruistic work, then trying to hyper-optimize is altruistically counterproductive. I acknowledge that this is the principle behind the sentiment above, and evidently some people’s effectiveness has benefited from advice like this.

But in practice, I see EAs apply this principle in ways that seem suspiciously favorable to their own well-being, or to the status quo. When you find yourself trying to justify on the grounds of impact the amounts of self-care people afford themselves when they don’t care about being effectively altruistic, you should be extremely suspicious.

Some examples, which I cite not to pick on the authors in particular—since I think many others are making a similar mistake—but just because they actually wrote these claims down.

1. “Aiming for the minimum of self-care is dangerous”

I felt a bit suspicious, looking at how I spent my time. Surely that long road trip wasn’t necessary to avoid misery? Did I really need to spend several weekends in a row building a ridiculous LED laser maze, when my other side project was talking to young synthetic biologists about ethics?

I think this is just correct. If your argument is that EAs shouldn’t be totally self-effacing because some frivolities are psychologically necessary to keep rescuing people from the bottomless pit of suffering, then sure, do the things that are psychologically necessary. I’m skeptical that “psychologically necessary” actually looks similar to the amount of frivolities indulged by the average person who is as well-off as EAs generally are.

Do I live up to this standard? Hardly. That doesn’t mean I should pretend I’m doing the right thing.

Minimization is greedy. You don’t get to celebrate that you’ve gained an hour a day [from sleeping seven instead of eight hours], or done something impactful this week, because that minimizing urge is still looking at all your unclaimed time, and wondering why you aren’t using it better, too.

How important is my own celebration, though, when you really weigh it against what I could be doing with even more time? (This isn’t just abstract impact points; there are other beings whose struggles matter no less than mine do, and fewer frivolities for me could mean relief for them.)

I think where I fundamentally disagree with this post is that, for many people, aiming for the minimum doesn't actually put them anywhere near falling below it. Getting to the minimum, much less below it, can be very hard, such that people who aim at it just aren't in much danger of undershooting. If you find this is not true for yourself, then please do back off from the minimum. But remember that in the counterfactual where you hadn't tested your limits, you probably would not have gotten close to optimal.

This post includes some saddening anecdotes about people ending up miserable because they tried to optimize all their time for altruism. I don’t want to trivialize their suffering. Yet I can conjure anecdotes in the opposite direction (and the kind of altruism I care about reduces more suffering in expectation). Several of my colleagues seem to work more than the typical job entails, and I don’t have any evidence of the quality of their work being the worse for this. I’ve found that the amount of time I can realistically devote to altruistic efforts is pretty malleable. No, I’m not a machine; of course I have my limits. But when I gave myself permission to do altruistic things for parts of weekends, or into later hours of weekdays, well, I could. “My happiness is not the point,” as Julia said in this post, and while she evidently doesn’t endorse that statement, I do. That just seems to be the inevitable consequence of taking the sentience of other beings besides yourself (or your loved ones) seriously.

See also this comment:

Personally I have been trying to think of my life only as a means to an end. While my life technically might have value, I am fairly sure it is rather minuscule compared to the potential impact I can make. I think it's possible, though probably difficult, to intuit this and still feel fine / not guilty about things. … I'm a bit wary on this topic that people might be a bit biased to select beliefs based on what is satisfying or which ones feel good.

I do think Tessa's point about slack has some force—though in a sense, this merely shifts the “minimum” up by some robustness margin, which is unlikely to be large enough to justify the average person’s indulgences.

2. “You have more than one goal, and that’s fine”

If I donate to my friend’s fundraiser for her sick uncle, I’m pursuing a goal. But it’s the goal of “support my friend and our friendship,” not my goal of “make the world as good as possible.” When I make a decision, it’s better if I’m clear about which goal I’m pursuing. I don’t have to beat myself up about this money not being used for optimizing the world — that was never the point of that donation. That money is coming from my “personal satisfaction” budget, along with money I use for things like getting coffee with friends.

It puzzles me that, as common as concerns about the utility monster—sacrificing the well-being of the many for the super-happiness of one—are, we seem to find it totally intuitive that one can (passively) sacrifice the well-being of the many for one’s own rather mild comforts. (This is confounded by the act vs. omission distinction, but do you really endorse that?)

The latter conclusion is basically the implication of accepting goals other than “make the world as good as possible.” What makes these other goals so special, that they can demand disproportionate attention (“disproportionate” relative to how much actual well-being is at stake)?

3. “Ineffective Altruism”

Due to the writing style, it’s honestly not clear to me what exactly this post was claiming. But the author does emphatically say that devoting all of their time to the activity that helps more people per hour would be “premature optimization.” And they celebrate an example of a less effective thing they do because it consistently makes a few people happy.

I don’t see how the post actually defends doing the less effective thing. To the extent that you impartially care about other sentient beings, and don’t think their experiences matter any less because you have fewer warm fuzzy feelings about them, what is the justification for willingly helping fewer people?

Comment by antimonyanthony on Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety) · 2022-05-07T20:33:19.875Z · EA · GW

For what it's worth, my experience hasn't matched this. I started becoming concerned about the prevalence of net-negative lives during a particularly happy period of my own life, and have noticed very little correlation between the strength of this concern and the quality of my life over time. There are definitely some acute periods where, if I'm especially happy or especially struggling, I have more or less of a system-1 endorsement of this view. But it's pretty hard to say how much of that is a biased extrapolation, versus just a change in the size of my empathy gap from others' suffering.

Comment by antimonyanthony on [deleted post] 2022-05-07T10:47:08.278Z
But only some s-risks are very concerning to utilitarians -- for example, utilitarians don't worry much about the s-risk of 10^30 suffering people in a universe with 10^40 flourishing people.

Utilitarianism =/= classical utilitarianism. I'm a utilitarian who would think that outcome is extremely awful. It depends on the axiology.

Comment by antimonyanthony on EA is more than longtermism · 2022-05-04T13:06:41.310Z · EA · GW
Longtermism, as a worldview, does not want present day people to suffer; instead, it wants to work towards a future with as little suffering as possible, for everyone.

This is a bit misleading. Some longtermists, myself included, prioritize minimizing suffering in the future. But this is definitely not a consensus among longtermists, and many popular longtermist interventions will probably increase future suffering (by increasing future sentient life, including mostly-happy lives, in general).

Comment by antimonyanthony on How much current animal suffering does longtermism let us ignore? · 2022-04-22T10:09:41.855Z · EA · GW

I think the strength of these considerations depends on what sort of longtermist intervention you're comparing to, depending on your ethics. I do find the abject suffering of so many animals a compelling counter to prioritizing creating an intergalactic utopia (if the counterfactual is just that fewer sentient beings exist in the future). But some longtermist interventions are about reducing far greater scales of suffering, by beings who don't matter any less than today's animals. So when comparing to those interventions, while of course I feel really horrified by current suffering, I feel even more horrified by those greater scales in the future—we just have to triage our efforts in this bad situation.

Comment by antimonyanthony on How much current animal suffering does longtermism let us ignore? · 2022-04-22T09:51:24.580Z · EA · GW
Longtermism is probably not really worth it if the far future contains much more suffering than happiness

Longtermism isn't synonymous with making sure more sentient beings exist in the far future. That's one subset, which is popular in EA, but an important alternative is that you could work to reduce the suffering of beings in the far future.

Comment by antimonyanthony on A longtermist critique of “The expected value of extinction risk reduction is positive” · 2022-04-19T09:03:37.644Z · EA · GW

Thanks for the kind feedback. :) I appreciated your post as well—I worry that many longtermists are too complacent about the inevitability of the end of animal farming (or its analogues for digital minds).

Comment by antimonyanthony on Is AI safety still neglected? · 2022-04-03T10:53:52.518Z · EA · GW
Ambitious value learning and CEV are not a particularly large share of what AGI safety researchers are working on on a day-to-day basis, AFAICT. And insofar as researchers are thinking about those things, a lot of that work is trying to figure out whether those things are good ideas the first place, e.g. whether they would lead to religious hell.

Sure, but people are still researching narrow alignment/corrigibility as a prerequisite for ambitious value learning/CEV. If you buy the argument that safety with respect to s-risks is non-monotonic in proximity to "human values" and control, then marginal progress on narrow alignment can still be net-negative w.r.t. s-risks, by increasing the probability that we get to "something close to ambitious alignment occurs but without a Long Reflection, technical measures against s-risks, etc." At least, if we're in the regime of severe misalignment being the most likely outcome conditional on no more narrow alignment work occurring, which I think is a pretty popular longtermist take. (I don't currently think most alignment work clearly increases s-risks, but I'm pretty close to 50/50 due to considerations like this.)

Comment by antimonyanthony on Future-proof ethics · 2022-04-03T08:32:45.525Z · EA · GW

I'm pretty happy to bite that bullet, especially since I'm not an egoist. I should still leave my house because others are going to suffer far worse (in expectation) if I don't do something to help, at some risk to myself. It does seem strange to say that if I didn't have any altruistic obligations then I shouldn't take very small risks of horrible experiences. But I have the stronger intuition that those horrible experiences are horrible in a way that the nonexistence of nice experiences isn't. And that "I" don't get to override the preference to avoid such experiences, when the counterfactual is that the preferences for the nice experiences just don't exist in the first place.

Comment by antimonyanthony on What are the strongest arguments against working on existential risk? (EA Librarian) · 2022-03-12T08:44:58.337Z · EA · GW

Note that there are normative views other than discounting and person-affecting views that do not prioritize reducing existential risks—at least extinction risks specifically, which seem to be the large majority of existential risks that the EA community focuses on. I discuss these here.

Comment by antimonyanthony on New EA cause area: Breeding really dumb chickens · 2022-02-21T07:49:55.035Z · EA · GW

I think the core idea of your comment—that intelligence is not equal to capacity to suffer, and the OP imprecisely conflates the two—is true and important. I had that same thought while reading the OP. But I suspect your comment would have received less (strong) disapproval if you had stated your point in a less adversarial/politically charged way.

Comment by antimonyanthony on Future-proof ethics · 2022-02-19T10:54:15.528Z · EA · GW

I started writing a comment, then it got too long, so I put in my shortform here. :)

Comment by antimonyanthony on antimonyanthony's Shortform · 2022-02-19T10:51:35.806Z · EA · GW

A Parfitian Veil of Ignorance

[Edit: I would be very surprised if I were the first person to have proposed this; it probably exists somewhere else, I just don't know of a source.]

Prompted by Holden’s discussion of the veil of ignorance as a utilitarian intuition pump (contra Rawls), I thought about an alternative to the standard veil. My intuitions about tradeoffs of massive harms for a large number of small benefits—at least for some conceptions of "benefit"—diverge from those in his post, when considering this version.

The standard veil of ignorance asks you to imagine being totally ignorant as to which person you will be in a population. (Assume we’re only considering fixed population sizes, so there’s no worry that this exercise sneaks in average utilitarianism, etc.)

But the many EA fans of Parfit (or Buddha) know that this idea of a discrete person is metaphysically problematic. So we can look at another approach, inspired by empty individualism.

Imagine that when evaluating two possible worlds, you don’t know which slice of experience in each world you would be. To make things easy enough to grasp, take a “slice” to be just long enough for a sentient being to register an experience, but not much longer. Let’s say one second.

These worlds might entail probabilities of experiences as well. So, since it’s hard to grasp probabilities as intuitively as frequencies, suppose each world is “re-rolled” enough times that each outcome happens at least once, in proportion to its probability. E.g., in Holden’s example of a 1 in 100 million chance of someone dying, the experiences of that person are repeated 100 million times, and one of those experience streams is cut short by death.

So now a purely aggregative and symmetric utilitarian offers me a choice, from behind this veil of ignorance, between two worlds. Option 1 consists of a person who lives for one day with constantly neutral experiences—no happiness, no suffering (including boredom). In option 2, that person instead spends that day relaxing on a nice beach, with a 1 in 100 million chance of ending that day by spiraling into a depression (instead of dying peacefully in their sleep).

I imagine, first, rescaling things so in #1 the person lives 100 million days of neutrality, and in #2, they live 99,999,999 peaceful beach-days—suspend your disbelief and imagine they never get bored—followed by a beach-day that ends in depression. Then I imagine I don’t know which moment of experience in either of these options I’ll be.
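(One way to make the sampling step explicit, in notation of my own rather than anything from the original shortform: treat each rescaled world as a uniform lottery over its experience-seconds.)

    % T = total experience-seconds in a world; m = seconds spent in depression.
    % Option 1: T = 10^8 \times 86{,}400 neutral seconds, so \Pr[\text{misery}] = 0.
    % Option 2: the same T seconds, of which m are depression-seconds, so
    \Pr[\text{a sampled moment is misery} \mid \text{option 2}] = \frac{m}{10^{8} \times 86{,}400}

The lexical reading is then that no weight on the remaining (T - m)/T beach-moments compensates those m moments, however small m/T is.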

Choosing #1 seems pretty defensible to me, from this perspective. Several of those experience-moments in #2 are going to consist purely of misery. They won’t be comforted by the fact that they’re rare, or that they’re in the context of a “person” who otherwise is quite happy. They’ll just suffer.

I’m not saying the probabilities don’t matter. Of course they do; I’d rather take #2 than a third option where there’s a 1 in 100 thousand chance of depression. I’m also pretty uncertain where I stand when we modify #1 so that the person’s life is a constant mild itch instead of neutrality. The intuition this thought experiment prompts in me is the lexical badness of at least sufficiently intense suffering, compared with happiness or other goods. And I think the reason it prompts such an intuition is that in this version of the veil of ignorance, discrete “persons” don’t get to dictate what package of experiences is worth it, i.e., what happens to the multitude of experience-moments in their life. Instead, one has to take the experience-moments themselves as sovereign, and decide how to handle conflicts among their preferences. (I discuss this more here.)

Comment by antimonyanthony on Some thoughts on vegetarianism and veganism · 2022-02-16T10:03:16.277Z · EA · GW

I suspect there are examples of things EAs do out of consideration for other humans that are just as costly, and they justify them on the grounds that this comes out of their "fuzzies" budget, e.g., investing in serious romantic or familial relationships. I'm personally rather skeptical that I would spend any time and money saved by being non-vegan on altruistically important things, even if I wanted to. (Plus there is Nikola's point that if you already do care a lot about animals, the emotional cost of acting in a way that financially supports factory farming could be nontrivial.)

Comment by antimonyanthony on Theses on Sleep · 2022-02-15T08:09:49.073Z · EA · GW

Just as a prior, I would think it's more likely for motivated reasoning to generate the belief "it is optimal from a health perspective to spend more time doing something that makes me feel better while awake, and that doesn't require any productivity during that extra time," than "it is not optimal to spend more time on that, and if anything it is probably optimal to spend less time on that so you can do more effortfully productive things."

Comment by antimonyanthony on Simplify EA Pitches to "Holy Shit, X-Risk" · 2022-02-13T09:00:59.614Z · EA · GW

Note that your "tl;dr" in the OP is a stronger claim than "these empirical claims are first order while the moral disagreements are second order." You claimed that agreement on these empirical claims is "enough to justify the core action relevant points of EA." Which seems unjustified, as others' comments in this thread have suggested. (I think agreement on the empirical claims very much leaves it open whether one should prioritize, e.g., extinction risks or trajectory change.)

Comment by antimonyanthony on Doubts about Track Record Arguments for Utilitarianism · 2022-02-13T08:53:57.494Z · EA · GW

I'm not sure the criticism of utilitarianism failing on its own grounds is very common. My understanding is that when people point to harms that someone following utilitarianism would cause, their claim is that this is entirely consistent with utilitarianism, and that that's the problem with utilitarianism. They object to the harms themselves (because they violate non-consequentialist duties).

Of course, a plausible response is often that non-naive utilitarianism would not endorse such harms, because they are not actually outweighed by the benefits once a full accounting of the consequences is taken. I.e., the utilitarian thing to do is often not "try to do the utilitarian calculation based on a faulty world-model and take the 'optimal' action." But we knew this without looking at the historical track record.

Comment by antimonyanthony on What (standalone) LessWrong posts would you recommend to most EA community members? · 2022-02-10T08:19:35.399Z · EA · GW

The noncentral fallacy nicely categorizes a very common source of ethical disagreement in my experience.

[Edit:] Somewhat more niche, but considering how important AI risk is to many EAs, I'd also recommend Against GDP as a metric for timelines and takeoff speeds, for rebutting what is in my estimation a bizarrely common error in forecasting AI takeoff.