↑ comment by Owen_Cotton-Barratt · 2020-09-03T01:07:21.227Z
> I almost feel cheeky responding to this as you've essentially been baited into providing a controversial view, which I am now choosing to argue against. Sorry!
That's fine! :)
In turn, an apology: my controversial view has baited you into response, and I'm now going to take your response as kind-of-volunteering for me to be critical. So I'm going to try and exhibit how it seems mistaken to me, and I'm going (in part) to use mockery as a rhetorical means to achieve this. I think this would usually be a violation of discourse norms, but here: the meta-level point is to try and exhibit more clearly what this controversial view I hold is and why; the thing I object to is a style of argument more than a conclusion; I think it's helpful for the exhibition to be able to draw attention to features of a specific instance, and you're providing what-seems-like-implicit-permission for me to do that. Sorry!
> I'd say that something doesn't have to be the most effective thing to do for it to be worth doing, even if you're an EA.
To be clear: I strongly agree with this, and this was a big part of what I was trying say above.
> So donating to a seeing eye dog charity isn't really a good thing to do.
This is non-central, but FWIW I disagree with this. Donating to the guide dog charity usually is a good thing to do (relative to important social norms where people have property rights over their money), it's just that it turns out there are fairly accessible actions which are quite a lot better.
> Choosing to follow a ve*an diet doesn't have an opportunity cost (usually). You have to eat, and you're just choosing to eat something different.
This, I'm afraid, is the type of statement that really bugs me. It's trying to collapse a complex issue onto simple dimensions, draw a simple conclusion there, and project it back to the original complex world. But in doing so it's thrown common-sense out of the window!
If I believed that choosing to follow a ve*an diet usually didn't have an opportunity cost, I would expect to see:
- People usually being willing to go ve*an for a year for some small material gain
  - In theory, if there were no opportunity cost, even something trivial like $10 should suffice; but I think many non-ve*ans would be unwilling to do this even for $1000
  - [As an aside, I think taxes on meat would probably be a good policy that might well be accessible]
- Almost everyone who goes ve*an for ethical reasons keeping it up
  - In fact, some significant proportion of people stop
> Or perhaps you just think the personal cost to you of being ve*an is substantial enough to offset the harm to the animals.
I certainly don't claim this in any utilitarian comparison of welfare. But now the argument seems almost precisely analogous to:
"You could help the poorest people in the world a tremendous amount for the cost of a cup of coffee. Since your welfare shouldn't outweigh theirs, you should forgo that cup of coffee, and every other small luxury in your life, to give more to them."
I think EA correctly rejects this argument, and that it's correct to reject its analogue as well. (I think the argument is stronger for ve*anism than giving to the poor instead of buying coffee; but I also think that there are better giving opportunities than giving directly to the poor, and that when you work it through the coffee argument ends up being stronger than the corresponding one for ve*anism.)
Again, I'm not claiming that EAs shouldn't be ve*an. I think it's a morally virtuous thing to do!
But I don't think EAs have a monopoly on virtue. I think the EA schtick is more like "we'll think things through really carefully and tell you what the most efficient ways to do good are". And so I think that if it's presented as "you want to be an EA now? great! how about ve*anism?" then the implicature is that this is a bigger deal than, say, moving from giving away 7% of your income to giving away 8%, and that this is badly misleading.
- There may be some people for whom the opportunity cost is trivial
- I think there are probably quite a few people for whom the opportunity cost is actually negative -- i.e. it's overall easier for them to be ve*an than not
- I would feel very good about encouragement to check whether people fall into one of these buckets, as in cases where they do then dietary change may be a particularly efficient way to do good
- I'd also feel very good about moral exhortation to be ve*an that was explicit that it wasn't grounded in EA thinking, like:
- "Many EAs try to be morally serious in all aspects of their lives, beyond just trying to optimise for the most good achievable. This leads us to ve*anism. You might want to consider it."
↑ comment by jackmalde · 2020-09-03T06:10:11.992Z
I'm not 100% sure, but we may be defining opportunity cost differently. I'm drawing a distinction between opportunity cost and personal cost. Opportunity cost relates to the fact that doing something may inhibit you from doing something else that is more effective. Even if going vegan didn't have any opportunity cost (which is what I'm arguing is true in most cases), people may still not want to do it due to high perceived personal cost (e.g. thinking vegan food isn't tasty). I'm not claiming there is no personal cost; that is indeed why people don't go or stay vegan, although I do think personal costs are unfortunately overblown.
Without addressing all of your points in detail, I think a useful thought experiment might be to imagine a world where we are eating humans, not animals. E.g. say there are mentally-challenged humans of a comparable intelligence/capacity to suffer to non-human animals, and we farm them in poor conditions and eat them, causing their suffering. I'd imagine most people would judge this as morally unacceptable and go vegan on consequentialist grounds (although perhaps it would actually be on deontological grounds?). If you would go vegan in the thought experiment but not in the real world, then you're probably speciesist to some degree, which I ultimately don't think can be defended.
> I think the EA schtick is more like "we'll think things through really carefully and tell you what the most efficient ways to do good are". And so I think that if it's presented as "you want to be an EA now? great! how about ve*anism?"
EA is sometimes described as doing the most good (most common definition) or I suppose is sometimes described as finding the most effective ways to do good. These can be construed as two different things. I would say under the first definition that being vegan naturally becomes part of the conversation for the reasons I have mentioned (little to no opportunity cost).
Also, we may be fundamentally disagreeing on the scale of the benefits, on consequentialist grounds, of going vegan as well — I think they are quite considerable. Indeed, "signalling caring", as you put it, can then convince others to consider veganism, in which case you can get a snowball of positive effects. But that's a whole other discussion.
P.S. I agree we can probably improve the way veganism is messaged in EA and it's possible I am part of the problem!
↑ comment by Bella_Forristal · 2020-09-06T07:17:18.870Z
Thanks for this interesting discussion; for others who read this and were interested, I thought I'd link some previous EA discussions on this topic in case it's helpful :)
One brief addition: I think the kind of conscientious omnivorism you describe ('I do try to only consume animals I think have had reasonable welfare levels') might have similar opportunity costs to veg*ism, and there's some not very conclusive psychological literature to suggest that, since it is a finer grained rule than 'eat no animals', it might even be harder to follow.
Obviously, this depends very much on what we mean by opportunity cost, and it also depends on how one goes about only trying to eat happy animals. I'm not sure what the best answer to either of those questions is.
↑ comment by Denis Drescher (Telofy) · 2020-09-05T21:33:31.601Z
I’ve thought a bit about this for personal reasons, and I found Scott Alexander’s take on it to be enlightening.
I see a tension between the following two arguments that I find plausible:
- Some people run into health issues due to a vegan diet despite correct supplementation. In most cases it's probably because of incorrect or absent supplementation, but probably not in all. This could mean that a highly productive EA doing highly important work may, with some small probability, cease to be as productive. Since they've probably been doing extremely valuable work, this decrease in output may be worse than the suffering they would've inflicted if they had [eaten some beef and had some milk](https://impartial-priorities.org/direct-suffering-caused-by-various-animal-foods.html). So they should at least eat a bit of beef and drink a bit of milk to reduce that risk. (These foods may increase other risks – but let's assume for the moment that the person can make that tradeoff correctly for themselves.)
- There is currently in our society a strong moral norm against stealing. We want to live in a society that has a strong norm against stealing. So whenever we steal – be it to donate the money to a place where it has much greater marginal utility than with its owner – we erode, in expectation, the norm against stealing a bit. People have to invest more into locks, safes, guards, and fences. People can’t just offer couchsurfing anymore. This increase in anomie (roughly, lack of trust and cohesion) may be small in expectation but has a vast expected societal effect. Hence we should be very careful about eroding valuable societal norms, and, conversely, we should also take care to foster new valuable societal norms or at least not stand in the way of them emerging.
I see a bit of a Laffer curve here (like an upside-down U) where upholding societal rules that are completely unheard of has little effect, and violating societal rules that are extremely well established has little effect again (except that you go to prison). The middle section is much more interesting, and this is where I generally advise to tread softly. (But I’m also against stealing.)
The way I resolve this tension for myself is to assess whether, in my immediate environment – among the people who are most likely to be directly influenced by me – a norm is potentially about to emerge. If that is the case, and I approve of the norm, I try to always uphold that norm to at least an above-average level.
Well, and then there are a few more random caveats:
- As the norm not to harm other animals for food becomes stronger, it’ll be less socially awkward for people (outside vegan circles) to eat vegan food. Social effects were (last time I checked) still the second most common reason for vegan recidivism.
- As the norm not to harm other animals for food becomes stronger, more effort will be put into providing properly fortified food to make supplementation automatic.
- Eroding a budding social norm because it comes at a cost to one’s own goals seems like the sort of freeriding that I think the EA community needs to be very careful about. In some cases the conflict is only due to lacking idealization of preferences or only between instrumental rather than terminal goals or the others would defect against us in any case, but we don’t know any of this to be the case here. The first comes down to unanswered questions of population ethics, the second to the exact tradeoffs between animal suffering and health risks for a particular person, and the third to how likely animal rights activists are to badmouth AI safety, priorities research, etc. – probably rarely.
- Being vegan among EAs, young, educated people, and other disproportionately antispeciesist groups may be more important than being vegan in a community of hunters.
- A possible, unusual conclusion to draw from this is to be a "private carnivore": You only eat vegan food in public, and when people ask you whether you're vegan, you tell them that you think eating meat is morally bad, a bad norm, and shameful, and so you only do it in private and as rarely as possible. No lies or pretense.
- There’s also the option of moral offsetting, which I find very appealing (despite these criticisms – I think I somewhat disagree with my five-year-old comment there now), but it doesn’t seem to quite address the core issue here.
- Another argument you mentioned to me at an EAGx was something along the lines that it’ll be harder to attract top talent to field X (say, AI safety) if they not only have to subscribe to X being super important but also have to be vegan. Friends of mine solve that by keeping those things separate. Yes, the catering may be vegan, but otherwise nothing indicates that there’s any need for them to be vegan themselves. (That conversation can happen, if at all, in a personal context separate from any ties to field X.)