Puzzles for Everyone 2022-09-10T02:11:50.674Z
How Useful is Utilitarianism? 2022-09-08T00:01:01.537Z
The Nietzschean Challenge to Effective Altruism 2022-08-29T14:39:40.574Z
New Guest Essays on Hedonism 2022-08-26T00:36:17.862Z
Review of WWOTF 2022-08-15T18:53:16.224Z
Meat Externalities 2022-07-11T03:23:49.840Z
Buddhism and Utilitarianism; EA vs EB 2022-06-23T17:56:38.278Z
The Strange Shortage of Moral Optimizers 2022-06-07T15:23:28.220Z
Yglesias on EA and politics 2022-05-23T12:37:03.681Z
Leveling-up Impartiality 2022-05-19T00:32:29.086Z
New substack on utilitarian ethics: Good Thoughts 2022-05-09T16:00:59.639Z
Virtues for Real-World Utilitarians 2022-04-28T14:18:39.229Z
Philanthropy Vouchers as Systemic Change 2019-07-11T15:12:47.894Z
Charity Vouchers [public policy idea] 2019-07-10T18:24:11.802Z


Comment by Richard Y Chappell (RYC) on "Defective Altruism" by Nathan J. Robinson in Current Affairs · 2022-09-26T14:44:29.542Z · EA · GW

Here's one, from philosophy student 'Bentham's Bulldog'.

Comment by Richard Y Chappell (RYC) on Puzzles for Everyone · 2022-09-26T14:24:02.554Z · EA · GW


Comment by RYC on [deleted post] 2022-09-21T17:45:29.632Z

Hi! I'm an academic philosopher, and interested in developing a broadly utilitarian (i.e. beneficence-focused) approach to bioethics, so that policies like vaccine challenge trials are more widely appreciated as no-brainers by the time the next pandemic strikes. (I set out a broad overview of some of my goals in 'How Useful is Utilitarianism?')

Comment by Richard Y Chappell (RYC) on ‘Where are your revolutionaries?’ Making EA congenial to the social justice warrior. · 2022-09-21T16:36:13.137Z · EA · GW

I'm a bit confused by this post. You start off by relating your frustration with "vapid" rhetoric (and its epistemic costs in "dull[ing] our analytical thinking"), but then seem to advocate that EA pivot towards embracing vapid social justice rhetoric?  Maybe I've misunderstood what you're suggesting.

I also worry that the post assumes that SJWs (rather than, say, policy-makers who read The Economist) constitute "the beating heart of changemaking".  But (insofar as I have a grasp on what that even means) that doesn't seem accurate to me.

The world of social justice is not so easily swayed as Silicon Valley, we do not iterate, and we certainly do not ‘fail fast’. Such concessions cost lives.

Doesn't stubborn failure to iterate or swiftly identify & learn from mistakes risk costing even more lives?  I think this point illustrates the risks of leading with rhetoric.  It's really important to first work out what's true, not just what sounds good.

Comment by Richard Y Chappell (RYC) on Puzzles for Everyone · 2022-09-16T18:12:36.754Z · EA · GW

I'm just talking about intrinsic value here, i.e. all else equal.

You write: "Why not morally positive? I find it hard to convince myself that happy experience or satisfaction of self-interest is ever morally neutral, but that is what we're talking about. I actually think that it's impossible."

I have no idea what this means, so I still don't know why you deny that positive lives have positive value.  You grant that negative lives have negative (intrinsic) value.  It would seem most consistent to also grant that positive lives have positive (intrinsic) value. To deny this obvious-seeming principle, some argument is needed!

Comment by Richard Y Chappell (RYC) on Puzzles for Everyone · 2022-09-16T12:49:57.787Z · EA · GW

What's the basis for claiming that (1) is neutral, rather than positive?

Comment by Richard Y Chappell (RYC) on Moral Categories and Consequentialism · 2022-09-15T18:57:28.994Z · EA · GW

I've always found Parfit's response to be pretty compelling.  As I summarize it here:

Rather than discounting smaller benefits (or refusing to aggregate them), Parfit suggests that we do better to simply weight harms and benefits in a way that gives priority to the worse-off. Two appealing implications of this view are that: (1) We generally should not allow huge harms to befall a single person, if that leaves them much worse off than the others with competing interests. (2) But we should allow (sufficient) small benefits to the worse-off to (in sum) outweigh a single large benefit to someone better-off.

Since we need aggregation in order to secure verdict (2), and we can secure verdict (1) without having to reject aggregation, it looks like our intuitions are overall best served by accepting an aggregative moral theory.
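A toy numerical sketch of Parfit's idea (my own illustration, not Parfit's — the square-root weighting and the welfare numbers are illustrative assumptions) showing how a priority-weighted aggregative view can deliver both verdicts at once:

```python
import math

# Prioritarian aggregation: sum a concave transform of each person's welfare,
# so a unit of benefit counts for more the worse-off its recipient is.
# (sqrt is just one example of a concave weighting function.)
def moral_value(welfare_levels, w=math.sqrt):
    return sum(w(max(level, 0)) for level in welfare_levels)

# Verdict (1): a huge harm to one already-badly-off person is not outweighed
# by moderate benefits to several better-off people.
baseline = [10, 100, 100, 100]
harm_one = [1, 110, 110, 110]   # worst-off loses 9; each other person gains 10
assert moral_value(harm_one) < moral_value(baseline)

# Verdict (2): enough small benefits to the worse-off can outweigh one large
# benefit to someone better-off -- which requires aggregation.
start = [10, 10, 10, 100]
spread = [20, 20, 20, 100]       # three worse-off people each gain 10...
concentrate = [10, 10, 10, 140]  # ...versus one better-off person gaining 40
assert moral_value(spread) > moral_value(concentrate)
```

The concavity of the weighting function is what does the work: it builds priority for the worse-off into an otherwise fully aggregative theory, rather than rejecting aggregation outright.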

I'll just add that it's a mistake to see the Transmitter Room case as an objection to consequentialism per se.  Nobody (afaict) has the intuition that it would be better for the guy to be electrocuted, but we're just not allowed to let that happen.  Rather, the standard intuition is that it wouldn't even be a good result.  But that's to call for an axiological refinement, not to reject the claim that we should bring about the better outcome.

Comment by Richard Y Chappell (RYC) on Puzzles for Everyone · 2022-09-13T13:03:41.967Z · EA · GW

The connection to personal identity is interesting, thanks for flagging that!  I'd emphasize two main points in reply:

(i) While preference continuity is a component of personal identity, it isn't clear that it's essential. Memory continuity is classically a major component, and I think it makes sense to include other personality characteristics too.  We might even be able to include values in the sense of moral beliefs that could persist even while the agent goes through a period of being unable to care in the usual way about their values; they might still acknowledge, at least in an intellectual sense, that this is what they think they ought to care about.  If someone maintained all of those other connections, and just temporarily stopped caring about anything, I think they would still qualify as the same person. Their past self has not thereby "already died".

(ii) re: "paternalism", it's worth distinguishing between acting against another's considered preferences vs merely believing that their considered preferences don't in fact coincide with their best interests.  I don't think the latter is "paternalistic" in any objectionable sense.  I think it's just obviously true that someone who is depressed or otherwise mentally ill may have considered preferences that fail to correspond to their best interests.  (People aren't infallible in normative matters, even concerning themselves.  To claim otherwise would be an extremely strong and implausible view!)

fwiw, I also think that paternalistic actions are sometimes justifiable, most obviously in the case of literal children, or others (like the temporarily depressed!) for whom we have a strong basis to judge that the standard Millian reasons for deference do not apply.

But that isn't really the issue here. We're just assessing the axiological question of whether it would, as a matter of principle, be bad for the temporary depressive to die--whether we should, as mere bystanders, hope that they endure through this rough period, or that they instead find and take the means to end it all, despite the bright future that would otherwise be ahead of them.

Comment by Richard Y Chappell (RYC) on Puzzles for Everyone · 2022-09-12T13:32:50.641Z · EA · GW

Funny that you use the phrase "a bit pro-natalist" as though that's a bad thing!   I am indeed unabashedly in favour of good things existing rather than nothing at all. And I'm also quite unembarrassed to share that I regard good lives to be a good thing :-)

So yes, I think good lives contain value.  You might be concerned to avoid the view that people are merely containers of value.  But you shouldn't deny that our lives (when good for us) are, in fact, valuable.

I think the sensible view here is to distinguish personal and impersonal value.  Creating value-filled lives is impersonally good: it makes the universe a better place.  But of course we shouldn't just care about the universe.  We should also care about particular persons.

Indeed, whenever possible (i.e. when dealing with existing persons, to whom we can directly refer), our concern should be primarily person-directed.  Visit your friend in the hospital for her sake, not just to boost aggregate happiness.  But not all moral action can be so personally motivated. To donate to charities or otherwise save "statistical" lives, we need to fall back on our general desire to promote the good.  And likewise for solving the non-identity problem: preferring better futures over worse ones (when different people would exist in either case). And, yes, this fall-back desire to promote better outcomes should also, straightforwardly, lead us to prefer good futures over lifeless futures (and lifeless futures over miserable futures).

I cannot do the same calculations in their absence from existence.

Then you would give me a lollipop even at the cost of bringing many miserable new lives into existence.  That would clearly be wrong.  But once you bring yourself to acknowledge reasons to not bring bad lives into existence (perhaps because the eventual person would regret your violating them), there's no deep metaphysical difference between that and the positive reasons to bring good lives into existence (which the eventual person would praise your following).

Comment by Richard Y Chappell (RYC) on Puzzles for Everyone · 2022-09-12T13:07:18.391Z · EA · GW

"Disingenuous"?  I really don't think it's OK for you to accuse me of dishonesty just because you disagree with my framing of the issue.  Perhaps you meant to write something like "misleading".

But fwiw, I strongly disagree that it's misleading.  Human extinction is obviously a "trajectory change". Quite apart from what anyone wants -- maybe the party is sufficient incentive to change their preferences, for example -- I think it's perfectly reasonable to expect the continuation of the species by default.  But I'm also not sure that default expectations are what matters here. Even if you come to expect extinction, it remains accurate to view extinction as extinguishing the potential for future life. 

Your response to the Cleopatra example is similarly misguided. I'm not appealing to "existing people not wanting to die", but rather existing people being glad that they got to come into existence, which is rather more obviously relevant. (But I won't accuse you of dishonesty over this.  I just think you're confused.)

Comment by Richard Y Chappell (RYC) on The Nietzschean Challenge to Effective Altruism · 2022-09-11T21:48:46.125Z · EA · GW

Glad you liked the post!

Utility = well-being = what's worth caring about for an individual's sake.  It's an open normative question what this is.  So you should feel totally free, conceptually, to include more than just hedonic states in your account of utility, if that's what you find all-things-considered most plausible!  Hedonism is not a "definition" of utility, but just one candidate account (or theory) of what constitutes it.

See our chapter on 'Theories of Well-Being' for more detail.

It can be a tricky taxonomic question whether putative objective values (like "excellence") are best understood as components of well-being, or as non-welfare values.  One test is to ask: is it specifically for your child's sake that you prefer that they have the grander-but-slightly-less-happy life?  Or is it just that you think this makes for an impersonally better world (potentially worth a very mild cost to your child)?  The former option suggests that you see grandeur as a component of well-being; the latter would instead be a non-welfare value.

On the broader methodological question of when we should revise our theory of value vs rejecting the consequentialist idea that promoting value is foundational to ethics, see my old blog post: 'Anti-Consequentialism and Axiological Refinements'.  The key idea:

So when faced with [objections to classical utilitarianism], it's worth asking not just whether the action seems wrong, but whether the outcome is really desirable in the first place. If not, the consequentialist has a simple response: the act is indeed wrong, precisely because it doesn't maximize what's (genuinely) good.

Comment by Richard Y Chappell (RYC) on Puzzles for Everyone · 2022-09-11T18:26:05.432Z · EA · GW

In fairness to Setiya, the whole point of parity relations (as developed, with some sophistication, by Ruth Chang) is that they -- unlike traditional value relations -- are not meant to be transitive.  If you're not familiar with the idea, I sketch a rough intro here.

Comment by Richard Y Chappell (RYC) on Puzzles for Everyone · 2022-09-11T18:19:54.879Z · EA · GW

I don't think so.  I'm sure that Roberts would, for example, think we had more reason to give Ann a lollipop than to bring Ben into existence and give him one, even if Ann would not in any way be frustrated by the lack of a lollipop.

The far more natural explanation is just that we have person-directed reasons to want what is good for Ann, in addition to the impersonal reasons we have to want a better world (realizable by either benefiting Ann or creating & benefiting Ben).

Comment by Richard Y Chappell (RYC) on Puzzles for Everyone · 2022-09-11T13:27:20.675Z · EA · GW

Interesting!  Yeah, a committed anti-natalist who regrets all of existence -- even in an "approximate utopia" -- on the grounds that even a small proportion of very unhappy lives automatically trumps the positive value of a world mostly containing overwhelmingly wonderful, flourishing lives  is, IMO, in the grips of... um (trying to word this delicately)... values I strongly disagree with.  We will just have very persistent disagreements, in that case!

FWIW, I think those extreme anti-natalist values are unusual, and certainly don't reflect the kinds of concerns expressed by Setiya that I was responding to in the OP (or other common views in the vicinity, e.g. Melinda Roberts' "deeper intuition that the existing Ann must in some way come before need-not-ever-exist-at-all Ben").

Comment by Richard Y Chappell (RYC) on Puzzles for Everyone · 2022-09-11T13:13:43.842Z · EA · GW

Hi Noah, just to be clear on the dialectic: my post isn't trying to argue a committed anti-natalist out of their view.  Instead, the population ethics section is just trying to make clear that Setiya's appeal to the "intuition of neutrality" is not a cost-free solution for someone with ordinary views who is worried about the repugnant conclusion, and in fact there are far better alternative solutions available that don't require resorting to neutrality.  (Here I take for granted that my target audience shares the initial intuition that "utopia is better than a barren rock".  Again, this isn't targeted at committed nihilists or anti-natalists who reject that claim.)

But you raise an interesting further question, of how one might best try to challenge "indifference to future happy people".  I think it's pretty difficult to challenge indifference in general. If someone is insistently indifferent to non-human animal welfare (or the global poor, or...), for example, you're not realistically going to be able to argue them out of that.

That said, I think some rational pressure can be put on indifference to future good lives through a combination of:

(i) showing that the putative advantages of the view (e.g. apparent helpfulness for avoiding the repugnant conclusion) are largely illusory, as I argue in the OP, and

(ii) showing that the view introduces further costs, e.g. violating independently plausible principles or axioms.

I'm not going to pursue the latter task in any depth here, but just to give a rough sketch of how it might go, consider the following dilemma for anti-natalists. Either:

(a) They deny that future good lives can have impersonal value, which is implausibly nihilistic, or
(b) They grant this axiological claim, but then violate the bridging principle that we always have some moral reason to prefer an impersonally better world to a worse one (such that, all else equal, we should bring about the better world).

Of course, they can always just bite the bullet and insist that they don't find (a) bothersome. I think argument runs out at that point.  You can't argue people out of nihilism.  All you can do, I think, is to express sadness and discourage others from following that bleak path.

Comment by Richard Y Chappell (RYC) on Puzzles for Everyone · 2022-09-11T00:04:29.660Z · EA · GW

Hi Michael!  Thanks for your comments.

  1. I think my dialectical strategy works similarly against appealing to the Very Repugnant Conclusion to support neutrality.  To avoid the intra-personal VRC (compatibly with other commonsense commitments about the harm of death), we'd need a theory that assigns suitably more weight to quality than quantity. And if you've got such a theory, you don't need neutrality for interpersonal cases either.
  2. Fair enough if you just don't share my intuitions.  I think it would be horribly evil for the present generation to extinguish all future life, merely to moderately benefit ourselves (even in not purely frivolous ways).  When considering different cases, where there are much graver costs to existing people (e.g. full-blown replacement), I share the intuition that extreme sacrifice is not required; but appeal to some form of partiality or personal prerogative seems much more appropriate to me than denying the value of the beneficiaries. (I develop a view along these lines in my paper, 'Rethinking the Asymmetry'.)  Just like the permissibility of keeping your organs inside your own body is no reason to deny the value of potential beneficiaries of organ donation.

    That last point also speaks to the putative desirability of offering a "stronger principled reason".  Protecting bodily autonomy by denying the in-principle value of people in need of organ transplants would be horrifying, not satisfying.  So I don't think that question can be adjudicated independently of the first-order question of which view is simply right on the merits.
  3. How to deal with induced or changing preferences is a real problem for preferentist theories of well-being, and IMO is a good reason to reject all such views in favour of more objective alternatives.  Neutrality about future desires helps in some cases, as you note, but is utterly disastrous in others (e.g. potentially implying that a temporarily depressed child or teenager, who momentarily loses all his desires/preferences, might as well just die, even if he'd have a happy, flourishing future).

Comment by Richard Y Chappell (RYC) on 'Psychology of Effective Altruism' course syllabus · 2022-09-07T21:42:54.806Z · EA · GW

Hi Pablo, could you update my syllabus on your list to this one?  Thanks!

Comment by Richard Y Chappell (RYC) on Bernard Williams: Ethics and the limits of impartiality · 2022-09-07T13:32:41.020Z · EA · GW

I think you mean 'moral realism' when you write 'non-naturalism'? Note that Rawlette, for example, is a naturalist moral realist. (According to her analytic hedonism, normative properties are conceptually reducible to the natural properties of pleasure and pain.)

Comment by Richard Y Chappell (RYC) on Valuing lives instrumentally leads to uncomfortable conclusions · 2022-09-05T12:56:02.226Z · EA · GW

The OP spoke of evaluative claims ("it is better to..." and "the conclusion that some lives are more valuable..."), so I think it's important to be clear that those axiological claims are not reasonably disputable, and hence not reasonably regarded as "repugnant" or whatever.

Now, it's a whole 'nother question what should be done in light of these evaluative facts. One could argue that it's "unacceptable" to act upon them; that one should ignore or disregard facts about instrumental value for the purposes of deciding which life to save.

The key question then is: why? Most naturally, I think, one may worry that acting upon such differences might reinforce historical and existing social inequalities in a way that is more detrimental on net than the first-order effects of doing more immediate good.  If that worry is empirically accurate, then even utilitarians will agree with the verdict that one should "screen off" considerations of instrumental value in one's decision procedure for saving lives (just as we ordinarily think doctors etc. should).  Saving the most (instrumentally) valuable life might not be the best thing to do, if the act itself--or the process by which it was decided--has further negative consequences.

Again, per our discussion of instrumental favoritism:

[T]here are many cases in which instrumental favoritism would seem less appropriate. We do not want emergency room doctors to pass judgment on the social value of their patients before deciding who to save, for example. And there are good utilitarian reasons for this: such judgments are apt to be unreliable, distorted by all sorts of biases regarding privilege and social status, and institutionalizing them could send a harmful stigmatizing message that undermines social solidarity. Realistically, it seems unlikely that the minor instrumental benefits to be gained from such a policy would outweigh these significant harms. So utilitarians may endorse standard rules of medical ethics that disallow medical providers from considering social value in triage or when making medical allocation decisions. But this practical point is very different from claiming that, as a matter of principle, utilitarianism's instrumental favoritism treats others as mere means [or is otherwise inherently objectionable]. There seems no good basis for that stronger claim.

Comment by Richard Y Chappell (RYC) on Valuing lives instrumentally leads to uncomfortable conclusions · 2022-09-04T23:15:44.037Z · EA · GW

(For related discussion, see the 'Instrumental Favoritism' section of our discussion of the 'Mere Means' objection.)

Comment by Richard Y Chappell (RYC) on Valuing lives instrumentally leads to uncomfortable conclusions · 2022-09-04T23:07:38.033Z · EA · GW

It's indisputable that some lives are more instrumentally valuable (to save) than others.  So if you hold that all lives are equally intrinsically valuable, it follows that some lives are all-things-considered more valuable to save than others (due to having the same intrinsic value, but more instrumental value).

To avoid that "uncomfortable"-sounding conclusion, you would need to reject the second premise (that all lives are equally intrinsically valuable).  That is, you would have to claim that some lives are intrinsically more valuable than others.  And that is surely a much more uncomfortable conclusion!

I think we should conclude from this that there's actually nothing remotely morally objectionable about saying that some lives are more valuable to save for purely instrumental reasons.  The thing to avoid is to claim that some lives are intrinsically more important.  It "sounds bad" to say "some lives are more valuable to save than others" because it sounds like you're claiming that some lives are inherently more valuable than others.  So it's important to explicitly cancel the implicature by adding the "for purely instrumental reasons" clause.

But once clarified, it's a perfectly innocuous claim.  Anyone who still thinks it sounds bad at that point needs to think more clearly.

Comment by Richard Y Chappell (RYC) on EA is about maximization, and maximization is perilous · 2022-09-03T17:51:53.529Z · EA · GW

Will's conversation with Tyler: "I say I’m not a utilitarian because — though it’s the view I’m most inclined to argue for in seminar rooms because I think it’s most underappreciated by the academy — I think we should have some degree of belief in a variety of moral views and take a compromise between them."

Comment by Richard Y Chappell (RYC) on EA is about maximization, and maximization is perilous · 2022-09-02T18:27:48.522Z · EA · GW

Thanks for writing this, Holden!  I agree that the potential for harm from the naive (mis-)application of maximizing consequentialism is a risk that's important to bear in mind, and to ward against.  It's an interesting question whether this is best done by (i) raising concerns about maximizing in principle, or (ii) stressing the instrumental reasons why maximizers should be co-operative and pluralistic.

I strongly prefer the latter strategy, myself.  It's something we take care to stress (following the example of historical utilitarians from J.S. Mill to R.M. Hare, who have always urged the importance of wise rules of thumb to temper the risks of miscalculation).  A newer move in this vicinity is to bring in moral uncertainty as an additional reason to avoid fanaticism, even if utilitarianism is correct and one could somehow be confident that violating commonsense norms was actually utility-maximizing on this occasion, unlike all the other times that following crude calculations unwittingly leads to disaster. (I'm excited that we have a guest essay in the works by a leading philosopher that will explore the moral uncertainty argument in more detail.)

One reason why I opt for option (ii) is honesty: I really think these principles are right, in principle!  We should be careful not to misapply them.  But I don't think that practical point does anything to cast doubt on the principles as a matter of principle.  (Others may disagree, of course, which is fine: route (i) might then be an available option for them!)

Another reason to favour (ii) is the risk of otherwise shoring up harmful anti-consequentialist views. I think encouraging more people to think in a more utilitarian way (at least on current margins, for most people--there could always be exceptions, of course) is on average very good.  I've even argued on this basis that non-consequentialism may be self-effacing.

That said, some sort of loosely utilitarian-leaning meta-pluralism (of the sort Will MacAskill has been endorsing in recent interviews) may well be optimal.  (It also seems more reasonable than dogmatic certainty in any one ethical approach.)

Comment by Richard Y Chappell (RYC) on What is the moral foundation for not donating nearly everything? · 2022-09-02T15:00:51.793Z · EA · GW

I think pressure towards rationalizing one's self-interest as somehow being "optimal" is not a good idea. It's better to be honest.

Singer's answer is correct.  It really would be better to give more.  We don't because we aren't perfect.  And that's fine!  Cf. my response to Caplan's Conscience Objection to Utilitarianism.

Comment by Richard Y Chappell (RYC) on Summaries are underrated · 2022-09-02T13:11:39.010Z · EA · GW

Ah, thanks for the explanation.  I'll run this by my co-editors, and get back to you if they're interested.

Comment by Richard Y Chappell (RYC) on Summaries are underrated · 2022-09-02T12:52:20.519Z · EA · GW

Can I put your honorarium as a superlinear prize?

I'm not sure what you're asking. You're welcome to refer to the honorarium (and academic qualification requirements) for commissioned articles, and encourage any interested parties to contact me with further questions.

(Or, if someone really wanted to try writing the article first, and then approach us to check whether we viewed it as of appropriate quality for us to publish, that would also be fine. But a bit of a risk on their part, since writing an academic article is a significant time investment, and there's no guarantee we would accept it!)

Comment by Richard Y Chappell (RYC) on Summaries are underrated · 2022-09-02T00:42:04.045Z · EA · GW

Strongly agreed!

I would love for our site to host high-quality summaries of major EA philosophical works (incl. Doing Good Better and The Precipice).  The challenge is finding qualified writers who have the time to spare for such a task.  (I did it myself for Singer's 'Famine, Affluence, and Morality'.)*

But if anyone reading this is (i) either a graduate student in a top philosophy program or a professional philosopher (or can otherwise make a strong case for being qualified), and (ii) interested in writing a high-quality précis of such a book for our site, please get in touch!  (I can offer a $1000 honorarium if I agree to commission your services.)

I created a bounty, funded by Superlinear, for summarizing books in Twitter threads.

Just out of curiosity: why restrict it to Twitter threads? (E.g. I can't imagine a tweet-thread on Singer's FAM being nearly as useful as my above study guide.)

* = folks who like that summary might also appreciate my summary of Parfit's entire moral philosophy in seven blog posts.

Comment by Richard Y Chappell (RYC) on On the Philosophical Foundations of EA · 2022-09-01T13:15:26.620Z · EA · GW

Nice post, thanks for writing this!  Despite being an ethical theorist myself, I actually think the central thrust of your message is mistaken, and that the precise details of ethical theory don't much affect the basic case for EA.  This is something I've written about under the banner of 'beneficentrism'.

A few quick additional thoughts:


The EA community is really invested in problems of applied consequentialist ethics such as "how should we think about low probability / high, or infinite, magnitude risks", "how should we discount future utility", "ought the magnitudes of positive and negative utility be weighed equally", etc.

These are problems of applied beneficence.  Unless your moral theory says that consequences don't matter at all (which would be, as Rawls himself noted, "crazy"), you'll need answers to these questions no matter what ethical-theory tradition you're working within.

(2) re: arguments for utilitarianism (and responses to objections), check out the respective chapters on our site.

(3) re: Harsanyi / "Each of the starting questions I've imagined clearly load the deck in terms of the kinds of answers that are conceptually viable." This seems easily avoided by instead asking which world one would rationally prefer from behind the veil of ignorance. (Whole possible worlds build in all the details, so do not artificially limit the potential for moral assessment in any way.)

(4) "Morality is at its core a guide for individuals to choose what to do." Agreed!  I'd add that noting the continuity between ethical choice and rational choice more broadly is something that strongly favours consequentialism.

Comment by Richard Y Chappell (RYC) on The Nietzschean Challenge to Effective Altruism · 2022-08-31T13:52:39.280Z · EA · GW

Thanks, this is an excellent point!

Comment by Richard Y Chappell (RYC) on Effective altruism is no longer the right name for the movement · 2022-08-31T13:38:06.502Z · EA · GW

Effective Altruists want to effectively help others. I think it makes perfect sense for this to be an umbrella movement that includes a range of different cause areas.  (And literally saving the world is obviously a legitimate area of interest for altruists!)

Cause-specific movements are great, but they aren't a replacement for EA as a cause-neutral movement to effectively do good.

Comment by Richard Y Chappell (RYC) on The Nietzschean Challenge to Effective Altruism · 2022-08-30T22:51:13.190Z · EA · GW

Hi!  Could you expand on what conclusion you find dangerous?

Comment by Richard Y Chappell (RYC) on What books/blogs/articles were most impactful on you re: EA? · 2022-08-30T16:27:36.578Z · EA · GW

Singer's 'Famine, Affluence, and Morality'. (But I joined GWWC back in 2010, so there wasn't much other EA writing around yet!)

Comment by Richard Y Chappell (RYC) on Sacred Cow: has the Effective Altruism community had its view on animal husbandry challenged? · 2022-08-29T16:25:51.164Z · EA · GW

Isn't the issue just that approximately all the meat you can actually buy at the store today comes from factory farms (which also wastes more crops) rather than regenerative farms?

Basically, if the only permissible meat is from regenerative farms, that still yields the result that one should be (almost entirely) veg*n in practice.

But sure, I'd also support policies that shift farmers away from factory farming to better ways of farming meat.  And if those policies succeeded, it's possible to imagine a future in which the moral case for veg*nism would be much weaker than it is today.

Comment by Richard Y Chappell (RYC) on Famine, Affluence and Morality [Link Post to PDF] · 2022-08-29T16:10:53.379Z · EA · GW

Thanks for sharing our study guide -- please do include the link though!

Comment by Richard Y Chappell (RYC) on Longtermism, risk, and extinction · 2022-08-29T13:15:02.549Z · EA · GW


re: (1), can you point me to a good introductory reference on this? From a quick glance at the Allais paradox, it looks like the issue is that the implicit "certainty bias" isn't any consistent form of risk aversion either, but maybe more like an aversion to the distinctive disutility of regret when you know you otherwise could have won a sure thing?  But maybe I just need to read more about these "broader" cases!

re: (2), the obvious motivation would be to avoid "overturning unanimous preferences"! It seems like a natural way to respect different people's attitudes to risk would be to allow them to choose (between fairly weighted options, that neither systematically advantage nor disadvantage them relative to others) how to weight potential costs vs benefits as applied to them personally.

On the main objection: sure, but traditional EU isn't motivated merely on grounds of being "intuitive". Insofar as that's the only thing going for REU, it seems that being counterintuitive is a much greater cost for REU specifically!

the standard response by ordinary people might reflect the fact that they're not total hedonist utilitarians more than it does the fact that they are not Buchakians.

How so? The relevant axiological claim here is just that the worst dystopian futures are at least as bad as the best utopian futures are good. You don't have to be a total hedonist utilitarian (as indeed, I am not) in order to believe that.

I mean, do you really imagine people responding, "Sure, in principle it'd totally be worth destroying the world to prevent a 1 in 10 million risk of a sufficiently dystopian long-term future, if that future was truly as bad as the more-likely utopian alternative was good; but I just don't accept the evaluative claim that the principle is conditioned on here.  A billion years of suffering for all humanity just isn't that bad!"

Seems dubious.

Comment by Richard Y Chappell (RYC) on Questioning the Foundations of EA · 2022-08-28T19:05:05.147Z · EA · GW

for what fraction of the people you know, including e.g. people you know from work, would you instinctively run to save them over the total stranger?

Uh, maybe 90 - 99%?  (With more on the higher end for people I actually know in some meaningful way, as opposed to merely recognizing their face or having chatted once or twice, which is not at all the same as knowing them as a person.)  Maybe we're just psychologically very different!  I'm totally baffled by your response here.

Comment by Richard Y Chappell (RYC) on Questioning the Foundations of EA · 2022-08-28T13:34:19.725Z · EA · GW

One tendency can always be counterbalanced by another in particular cases; I'm not trying to give the full story of "how emotions work".  I'm just talking about the undeniable datum that we do, as a general rule, care more about those we know than we do about total strangers.

(And I should stress that I don't think we can necessarily 'level-up' our emotional responses; they may be biased and limited in all kinds of ways.  I'm rather appealing to a reasoned generalization from our normative appreciation of those we know best. Much as Nagel argues that we recognize agent-neutral reasons to relieve our own pain--reasons that ideally ought to speak to anyone, even those who aren't themselves feeling the pain--so I think we implicitly recognize agent-neutral reasons to care about our loved ones. And so we can generalize to appreciate that like reasons are likely to be found in others' pains, and others' loved ones, too.) 

I don't have a strong view on which intrinsic features do the work.  Many philosophers (see, e.g., David Velleman in 'Love as a Moral Emotion') argue that bare personhood suffices for this role. But if you give a more specific answer to the question of "What makes this person awesome and worth caring about?" (when considering one of your best friends, say), that's fine too, so long as the answer isn't explicitly relational (e.g. "because they're nice to me!"). I'm open to the idea that lots of people might be awesome and worth caring about for extremely varied reasons--for possessing any of the varied traits you regard as virtues, perhaps (e.g. one may be funny, irreverent, determined, altruistic, caring, thought-provoking, brave, or...).

Comment by Richard Y Chappell (RYC) on Questioning the Foundations of EA · 2022-08-27T23:20:24.156Z · EA · GW

re: 'Shut Up and Divide', you might be interested in my post on leveling up vs down versions of impartiality, which includes some principled reasons to think the leveling up approach is better justified:

The better you get to know someone, the more you tend to (i) care about them, and (ii) appreciate the reasons to wish them well. Moreover, the reasons to wish them well don’t seem contingent on you or your relationship to them—what you discover is instead that there are intrinsic features of the other person that make them awesome and worth caring about. Those reasons predate your awareness of them. So the best explanation of our initial indifference to strangers is not that there’s truly no (or little) reason to care about them (until, perhaps, we finally get to know them). Rather, the better explanation is simply that we don’t see the reasons (sufficiently clearly), and so can’t be emotionally gripped or moved by them, until we get to know the person better. But the reasons truly were there all along.

Comment by Richard Y Chappell (RYC) on New Guest Essays on Hedonism [] · 2022-08-26T13:10:36.530Z · EA · GW

normative properties that aren't further explained physically

You've misunderstood Rawlette here.  Her view--analytic hedonism--holds that normative properties are analytically reducible to pleasure and suffering. So her suggestion here is not that we need metaphysically primitive normative properties to explain the experience. Quite the opposite!  It's rather (as I understand it) that the normativity "comes along for free" (so to speak) with the familiar felt nature of the experience.

Comment by Richard Y Chappell (RYC) on Effective altruism's billionaires aren't taxed enough. But they're trying. · 2022-08-24T18:29:11.359Z · EA · GW

It's worth flagging the obvious solution of supporting raising taxes on billionaires while allowing them to donate instead thanks to the charitable tax deduction. (I mention this in the comments to my post on Billionaire Philanthropy, which Dylan Matthews cites and draws upon for the "Given that the billionaires do exist, what else would you rather they spend money on?" argument.)

P.S. Speaking as a New Zealander, I'm pretty confident that most of my compatriots believe that American billionaires should pay more taxes!

Comment by Richard Y Chappell (RYC) on What We Owe The Future is out today · 2022-08-16T18:29:56.085Z · EA · GW

I'm really surprised by how common it is for people's thoughts to turn in this direction!  (cf. this recent twitter thread)  A few points I'd stress in reply:

(1) Pro-natalism just means being pro-fertility in general; it doesn't mean requiring reproduction every single moment, or no matter the costs.

(2) Assuming standard liberal views about the (zero) moral status of the non-conscious embryo, there's nothing special about abortion from a pro-natalist perspective. It's just like any other form of family planning--any other moment when you refrain from having a child but could have done otherwise.

(3) Violating people's bodily autonomy is a big deal; even granting that it's good to have more kids all else equal, it's hard to imagine a realistic scenario in which "forced birth" would be for the best, all things considered.  (For example, it's obviously better for people to time their reproductive choices to better fit with when they're in a position to provide well for their kids. Not to mention the Freakonomics stuff about how unwanted pregnancies, if forced to term, result in higher crime rates in subsequent decades.)

In general, we should just be really, really wary about sliding from "X is good, all else equal" to "Force everyone to do X, no matter what!"  Remember your J.S. Mill, everyone!  Utilitarians should be liberal.

Comment by Richard Y Chappell (RYC) on Review of WWOTF · 2022-08-16T15:11:58.397Z · EA · GW

oops, fixed, thanks!

Comment by Richard Y Chappell (RYC) on Why I Hope (Certain) Hedonic Utilitarians Don't Control the Long-term Future · 2022-08-08T02:20:37.342Z · EA · GW

On the first point, just wanted to add a quote that I think vibes well with the argument of the OP:

Even if [different theories of well-being] currently coincide in practice, their differences could become more practically significant as technology advances, and with it, our ability to manipulate our own minds. If we one day face the prospect of engineering our descendants so that they experience bliss in total passivity, it will be important to determine whether we would thereby be doing them a favor, or robbing them of much of what makes for a truly flourishing life.

Comment by Richard Y Chappell (RYC) on Why I Hope (Certain) Hedonic Utilitarians Don't Control the Long-term Future · 2022-08-08T02:03:57.113Z · EA · GW

I share a lot of your concerns about hedonism.  But, given their appreciation of normative uncertainty, I'm optimistic that the actual hedonist utilitarians in longtermist EA would not in fact choose to replace wonderfully rich, happy lives with a bland universe tiled with "hedonium". If that's right, then we needn't worry too much about hedonistic utilitarian*s* even if we would worry about hedonistic utilitarian*ism*.

Also, maybe worth more cleanly separating out two issues:

(i) What is the correct theory of well-being? (N.B. utilitarianism per se is completely neutral on this.  I, personally, am most drawn to pluralistic objective list theories.)

(ii) Is utility (happiness or whatever) what matters fundamentally, or does it matter just because (and insofar as) it makes individuals' lives better, and those individuals matter fundamentally? (In my paper 'Value Receptacles', I characterize this as the choice between 'utility fundamentalism' and 'welfarism', and recommend the latter.)

Comment by Richard Y Chappell (RYC) on Most* small probabilities aren't pascalian · 2022-08-07T18:22:00.464Z · EA · GW

Great post!  I like your '1 in a million' threshold as a heuristic, or perhaps a sufficient condition, for being non-Pascalian.  But I think that arbitrarily lower probabilities could also be non-Pascalian, so long as they are sufficiently "objective" or robustly grounded.

Quick argument for this conclusion: just imagine scaling up the voting example.  It seems worth voting in any election that significantly affects N people, where your chance of making a (positive) difference is inversely proportional to N (say, within an order of magnitude of 1/N, or better).  So long as scale and probability remain approximately inversely proportional, it doesn't seem to make a difference to the choice-worthiness of voting what the precise value of N is here.
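To make the scaling concrete, here's a toy sketch (the 1/N decisiveness assumption and all the numbers are mine, purely illustrative):

```python
# Toy model: if the chance of casting the decisive vote falls off as ~1/N
# while the total stakes grow as ~N, the expected value of voting is
# roughly constant, no matter how large the election gets.
def expected_value_of_voting(n_people, benefit_per_person):
    p_decisive = 1.0 / n_people                    # assumed ~1/N decisiveness
    total_benefit = n_people * benefit_per_person  # stakes scale with N
    return p_decisive * total_benefit

evs = [expected_value_of_voting(n, 100) for n in (10**4, 10**6, 10**9)]
# each entry is ~100, regardless of N
```

The same structure carries over to the asteroid case: so long as the tiny probability is well-grounded, shrinking it while scaling up the stakes leaves the expected value intact.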

Crucially, there are well-understood mechanisms and models that ground these probability assignments.  We're not just making numbers up, or offering a purely subjective credence.  Asteroid impacts seem similar.  We might have robust statistical models, based on extensive astronomical observation, that allow us to assign a 1/trillion chance of averting extinction through some new asteroid-tracking program, in which case it seems to me that we should clearly take those expected value calculations at face value and act accordingly.  However tiny the probabilities may be, if they are well-grounded, they're not "Pascalian".

Pascalian probabilities are instead (I propose) ones that lack robust epistemic support.  They're more or less made up, and could easily be "off" by many, many orders of magnitude.  Per Holden Karnofsky's argument in 'Why we can't take explicit expected value estimates literally', Bayesian adjustments would plausibly mandate massively discounting these non-robust initial estimates (roughly in proportion to their claims to massive impact), leading to low adjusted expected value after all.

I like the previous paragraph as a quick solution to "Pascal's mugging".  But even if you don't think it works, I think this distinction between robustly vs non-robustly grounded probability estimates may serve to distinguish intuitively non-Pascalian vs Pascalian tiny-probability gambles.

Conclusion: small probabilities are not Pascalian if they are either (i) not ridiculously tiny, or (ii) robustly grounded in evidence.
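The Bayesian-adjustment point can be sketched with a toy normal-normal model (my own illustration, not Karnofsky's actual calculation; all numbers are made up):

```python
# Shrink an explicit impact estimate toward a sceptical prior, with the
# weight on the estimate determined by how robust (low-variance) it is.
def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    weight = prior_var / (prior_var + estimate_var)  # weight on the estimate
    return prior_mean + weight * (estimate - prior_mean)

# A made-up estimate with enormous error bars barely moves the prior...
shaky = posterior_mean(0.0, 1.0, 10**6, 10**12)
# ...while a robustly grounded estimate is taken nearly at face value.
solid = posterior_mean(0.0, 1.0, 10**6, 0.01)
```

The shaky estimate's posterior lands near zero, while the robust one keeps almost all of its face value -- which is just the robust vs non-robust distinction in numerical form.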

Comment by Richard Y Chappell (RYC) on Longtermism, risk, and extinction · 2022-08-07T17:40:47.253Z · EA · GW

And, an admittedly more boring objection:

I'll say the long happy future (i.e. lh) is a thousand times less likely than extinction... and the long miserable future (i.e. lm) is a hundred times less likely than that

Maybe I'm unduly optimistic, but I have trouble wrapping my head around how lm could be even that likely. (E.g. it seems like suicide provides at least some protection against worst-case scenarios, unless we're somehow imagining such totalitarian control that the mistreated can't even kill themselves?  But if such control is possible, why wouldn't the controllers just bliss out their subjects?  The scenario makes no sense to me.)

How robust are the model's conclusions to large changes in the probability of lm (e.g. reducing its probability by 3 - 6 orders of magnitude)?

Comment by Richard Y Chappell (RYC) on Longtermism, risk, and extinction · 2022-08-07T17:27:35.056Z · EA · GW

Wow, what an interesting--and disturbing--paper!

My initial response is to think that it provides a powerful argument for why we should reject (Buchak's version of) risk-averse decision theory.  A couple of quick clarificatory questions before getting to my main objection:


If Sheila chooses to go to Shapwick Heath, we might say that she is risk-averse.

How do we distinguish risk-aversion from, say, assigning diminishing marginal value to pleasure?  I know you previously stipulated that Sheila has the utility function of a hedonistic utilitarian, but I'm wondering if you can really stipulate that.  If she really prefers the certainty of 49 hedons over a 50/50 chance of 100, then it seems to me that she doesn't really value 100 hedons as being more than twice as good (for her) as 49.  Intuitively, that makes more sense to me than risk aversion per se.
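To illustrate the worry with a concrete (and entirely made-up) utility function: a concave value function over hedons, with no risk-weighting of probabilities at all, already generates Sheila's preference:

```python
import math

def expected_utility(lottery, u):
    """Lottery is a list of (probability, hedons) pairs."""
    return sum(p * u(outcome) for p, outcome in lottery)

u = math.sqrt  # diminishing marginal value of pleasure (illustrative)
sure_thing = expected_utility([(1.0, 49)], u)         # sqrt(49) = 7.0
gamble = expected_utility([(0.5, 100), (0.5, 0)], u)  # 0.5 * 10 = 5.0
# The "risk-averse" choice falls out of ordinary expected utility
# theory, given this concave utility function -- no risk-weighting needed.
```

So the preference for the sure 49 hedons is captured by diminishing marginal value alone, which is why the stipulation that Sheila has a hedonistic utilitarian's utility function seems to do so much work.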


it doesn't count against the Risk Principle* or the use of risk-weighted expected utility theory for moral choice that they lead to violations of the Ex Ante Pareto Principle. Any plausible decision theory will do likewise.

Can you say a bit more about this?  In particular, what's the barrier to aggregating attitude-adjusted individual utilities, such that harms to Bob count for more, and benefits to Bob count for less, yielding a greater total moral value to outcome A than to B?  (As before, I guess I'm just really suspicious about how you're assigning utilities in these sorts of cases, and want the appropriate adjustments to be built into our axiology instead.  Are there compelling objections to this alternative approach?)

(Main objection)

It sounds like the main motivation for REU is to "capture" the responses of apparently risk-averse people.  But then it seems to me that your argument in this paper undercuts the claim that Buchak's model is adequate to this task.  Because I'm pretty confident that if you go up to an ordinary person, and ask them, "Should we destroy the world in order to avoid a 1 in 10 million risk of a dystopian long-term future, on the assumption that the future is vastly more likely to be extremely wonderful?" they would think you are insane.

So why should we give any credibility whatsoever to this model of rational choice?  If we want to capture ordinary sorts of risk aversion, there must be a better way to do so.  (Maybe discounting low-probability events, and giving extra weight to "sure things", for example -- though that sure does just seem plainly irrational.  A better approach, I suspect, would be something like Alejandro suggested in terms of properly accounting for the disutility of regret.)

Comment by Richard Y Chappell (RYC) on EA is Insufficiently Value Neutral in Practice · 2022-08-04T22:54:27.360Z · EA · GW

There is (or, at least, ought to be) a big gap between "considering" a view and "allying" with it.  If you're going to ally with any view no matter its content, there's no point in going to the trouble of actually thinking about it.  Thinking is only worthwhile if it's possible to reach conclusions that differ depending on the details of what's considered.

Of course we're fallible, but that doesn't entail radical skepticism (see: any decent intro philosophy text).  Whatever premises you think lead to the conclusion "maybe Nazism is okay after all," you should have less confidence in those philosophical premises than in the opposing conclusion that actually, genocide really is bad.  So those dubious premises can't rationally be used to defeat the more-credible opposing conclusion.

Comment by Richard Y Chappell (RYC) on EA is Insufficiently Value Neutral in Practice · 2022-08-04T21:32:13.976Z · EA · GW

Would you really want to ally with Effective Nazism?

Strict value neutrality means not caring about the difference between good and evil.  I think the "altruism" part of EA is important: it needs to be directed at ends that are genuinely good.  Of course, there's plenty of room for people to disagree about how to prioritize between different good things. We don't all need to have the exact same rank ordering of priorities. But that's a very different thing from value neutrality.

Comment by Richard Y Chappell (RYC) on What reason is there NOT to accept Pascal's Wager? · 2022-08-04T19:07:37.175Z · EA · GW

The content of the beliefs matters to their credibility, far more than sheer numbers.  I give ~zero weight to "what everyone thought", if I don't see any reason to expect their beliefs about the matter to be true. And the idea that an omnibenevolent God would punish people for being epistemically rational strikes me as outright incoherent, and so warrants ~zero credence.