Longtermism and animal advocacy
post by Tobias_Baumann
This is a link post for https://centerforreducingsuffering.org/longtermism-and-animal-advocacy/
"There is a common tendency among effective altruists to think of animal advocacy as having little value for improving the long-term future. Similarly, animal advocates often assume that longtermism has little relevance to their work. Yet this seems misguided: sufficient concern for nonhuman sentient beings is a key ingredient in how well the long-term future will go.
In this post, I will discuss whether animal advocacy – or, more generally, expanding the moral circle – should be a priority for longtermists, and outline implications of a longtermist perspective on animal advocacy. My starting point is a moral view that rejects speciesism and gives equal weight to the interests and well-being of future individuals."
Comments sorted by top scores.
comment by MichaelPlant ·
2020-11-13T12:14:49.408Z
I don't yet have a strong view on how plausible it is that animal advocacy is a priority for longtermism. However, I think it's worth noting that, if it is, there are probably quite a few other sorts of projects that would qualify using exactly the same arguments.
For instance, at the Happier Lives Institute, we spend a lot of time thinking about how best to measure well-being. There's an analogous argument that, if governments had better measures of well-being - e.g. better than GDP - and used them to make public policy decisions, that would have enormously valuable consequences over the long run. I won't do it here, but the arguments are sufficiently analogous that, in Tobias' post, you could replace "animal advocacy" with "well-being measurement", keep the rest of the text the same, and it would still make sense. So perhaps well-being measurement is a plausible longtermist priority too.
Other examples that might work include, just off the top of my head: "democratic institutions", "peace building", "education".
It's not clear to me whether the right way to update is (a) all these 'society change' interventions are plausible long-term priorities or (b) none of them are. I lean toward (a), but I'm not very confident.
↑ comment by MichaelStJules ·
2020-11-15T03:15:00.478Z
"There's an analogous argument that, if governments had better measures of well-being - e.g. better than GDP - and used them to make public policy decisions, that would have enormously valuable consequences over the long run."
For human-centric concerns, this could be true, but my impression is that this kind of thing is more likely to happen eventually anyway in most human populations, because humans are both moral patients and moral agents; they will eventually create pressure for reform in this direction. On the other hand, s-risks often involve moral patients who aren't (powerful) agents, so we need to rely on agents to take their interests seriously in order to avoid s-risks, and advocacy is one way we might hope to ensure this.
If we send out vessels with moral patients to colonize space (something that is hard to reverse), and these moral patients are not agents, then their situations may be essentially decided for them at the time they're sent off, by the concern that decision-makers had for their welfare at that time. If they are also agents (and motivated to improve their own welfare), then they can do more to improve their welfare on their own.
comment by seanrson ·
2020-11-16T02:19:28.248Z
Thanks for this post. Looking forward to more exploration on this topic.
I agree that moral circle expansion seems massively neglected. Changing institutions to enshrine (at least some) consideration for the interests of all sentient beings seems like an essential step towards creating a good future, and I think that certain kinds of animal advocacy are likely to help us get there.
As a side note, do we have any data on what proportion of EAs adhere to the sort of "equal consideration of interests" view on animals that you advocate? I also hold this view, but its rarity may explain some differences in cause prioritization. I wonder how rare this view is even within animal advocacy.
↑ comment by MichaelStJules ·
2020-11-17T22:02:56.040Z · EA(p) · GW(p)
I would guess that most of the more dedicated EAs believe in something roughly like "equal consideration of interests" ("equal consideration of equal interests", to be more specific), but many might think nonhuman animals' interests are much less strong/important than humans', on average.
↑ comment by Tobias_Baumann ·
2020-11-18T11:54:25.438Z
I'm somewhat less optimistic; even if most would say that they endorse this view, I think many "dedicated EAs" are in practice still biased against nonhumans, if only subconsciously. We should expect speciesist biases to be pervasive, and they won't go away entirely just by endorsing an abstract philosophical argument. (And I'm not sure that "most" endorse that argument to begin with.)
↑ comment by seanrson ·
2020-11-18T00:15:47.348Z
Sorry, I'm a bit confused about what you mean here. I meant to ask about the prevalence of a view giving animals the same moral status as humans. You say that many might think nonhuman animals' interests are much less strong/important than humans'. But saying they are less strong is different from saying they are less important, right? How strong they are seems more like an empirical question about capacity for welfare, etc.