Why wouldn't I? I don't believe in animal rights. Perhaps if no animal rights activists had ever condoned human rights violations against me, I might be indifferent.
Shouldn't the burden be the other way? Why should you care that it's real if it's otherwise indistinguishable? It sounds like you prefer real meat just to spite animal advocates. There are reasons to break the tie the other way:
1. Moral uncertainty. You might assign some probability to it being wrong. Are you 100% sure animals don't matter? And if you're 100% sure, or close to it, is that confidence justified? Also, you don't have to believe in animal "rights" per se to recognize that animal farming harms animals and that it's better to avoid this harm, all else equal.
2. The harm it causes other humans who care about animals because they care about animals. Imagine if we started farming children with severe intellectual disabilities and torturing them. It's horrifying for us in the same way.
3. Environmental harms.
4. Public health.
5. Injuries, PTSD and other mental health issues caused by slaughterhouse work.
6. Increased crime rates in areas with slaughterhouses. (I'm not sure how strong the causal relationship is here, though it's plausible given the mental health effects.)
I have a hard time imagining conscious, suffering non-person humans.
Infants (under one year old) and many humans who are nonverbal because of intellectual disability.
But yeah, I do believe that if you never were a person, aren't one now, and never will be one, I don't need to respect your rights. What would the point be? Human rights are a coordination tool for humans to benefit humans, and even that's not really working very well.
They still have interests, e.g. in not suffering involuntarily. If my own involuntary suffering is bad in itself, and I recognize that at least one other individual's involuntary suffering is bad in itself, then it's on me to justify treating some involuntary suffering as bad in itself and other involuntary suffering as not. If I can't do this, then I should accept either that it's always bad in itself, or that no other individual's suffering is bad in itself (and maybe not my own, either).
Are you not concerned with others' welfare for their sakes, and not just how it benefits you to be concerned with their welfare in other ways? What are the things that, at a fundamental level, make a person better or worse off? Don't those (or at least some of those) also apply to nonhuman animals?
I don't know. Alcohol is legal, and people still buy illegal marijuana.
I don't think alcohol is a good substitute for marijuana. People might still buy illegal stuff when they can buy the same products legally anyway, but there would have to be significant enough differences that make up for the risks for them to do this in large numbers.
Not in your lifetime. I think you're underestimating how culturally entrenched (animal) meat production and consumption is. It's common for vegans to incorrectly assume that other people also don't care (much) about meat.
So the relevant attachment here is to (specific) real animal products in particular, and I think that attachment will give way much more easily, especially if the substitutes end up as good and cheaper. Most people don't seem to have any special attachment to the authenticity of animal products regardless of quality or price, or specifically want to buy them to support animal farming (although this might be common among conservatives or in rural areas). And again, you don't need anywhere near 100% support for a ban, which could make it prohibitively risky to farm animals or to buy real animal products.
Check out this survey and its replication (pages 4-6), in which a third of respondents (Americans) said they supported a ban on animal farming. If and when substitutes become as good and cheaper, people will eat them by default instead of real animal products, and I think they'll become less speciesist.

edoarad on Some thoughts on deference and inside-view models
For clarity, Terry Tao argues that it is a bad strategy to work head-on on one open problem because one should skill up first, lose some naivety, and gain higher status within the community, not because it is a better problem-solving strategy.

michaelstjules on What are the leading critiques of "longtermism" and related concepts
This sounds like a misunderstanding to me. Longtermists concerned with short AI timelines are concerned with them because of AI's long-lasting influence on the far future.

randomea on What are the leading critiques of "longtermism" and related concepts
As an update, I am working on a full post that will excerpt 20 arguments against working to improve the long-term future and/or working to reduce existential risk, as well as responses to those arguments. The post itself is currently at 26,000 words, and there are six planned comments (one of which will add 10 additional arguments) that together are currently at 11,000 words. There have been various delays in my writing process, but I now think that is good because several new and important arguments have been developed in the past year. My goal is to begin circulating the draft for feedback within three months.

jeff_kaufman on Some thoughts on deference and inside-view models
Similar to what you're saying about AI alignment being preparadigmatic, a major reason why trying to prove the Riemann hypothesis head-on would be a bad idea is that people have already been trying to do that for a long time without success. I expect the first people to consider it approached it directly, and were reasonable to do so.

evelynciara on Forum update: Tags are live! Go use them!
Can you please add the tag directory to the sidebar?

timothy_liptrot on Reducing long-term risks from malevolent actors
I am skeptical of this line of reasoning because I see little reason to believe that malevolence determined the policies in question. Game-theoretic political scientists argue that different institutional structures make it rational or irrational for leaders to distribute public goods or targeted goods, practice repression, or allow political parties. For a more in-depth treatment, see The Dictator's Handbook by Bruce Bueno de Mesquita and Alastair Smith. Their core argument is that because dictators must appease a very small group of powerful interest leaders (generals for Mussolini, members of the Central Committee for Stalin, tribal and military leaders for Abdullah II), they can protect their power by rewarding only that small group at the expense of the masses.
Here is an illustrative example of a political phenomenon that is difficult to explain from the leader's personality: torture is more common in multi-party autocracies than in one-party states. If the leader's narcissism strongly influenced policies, and narcissism and sadism are strongly correlated, then we would expect torture to be more common in states that ban dissent. Suppose instead that torture is not about satisfying the personal desires of the dictator but about policing dissent. If some dissent (like resistance to a new "non-security" policy) is allowed, there must be some boundary into banned dissent. Then both the occurrence of torture in multi-party states and the rarity of torture in the most severe autocracies make sense.
Pitting personality-of-dictator explanations against ideological explanations surprised me, because it ignores the strongest explanations: the institutional structures of states and their political cultures. Possibly you emphasized ideology because your samples are older. While the early modernist dictators were authentically ideological, most modern autocrats espouse a bland, centrist, syncretic corporatism. Dictators like Chávez and Castro are the exception today (although their ideology does influence behavior). Here is an article which argues that most dictatorships are interest-driven, not ideologically driven: https://sci-hub.tw/10.1080/13510347.2017.1307823

jason-schukraft on How to Measure Capacity for Welfare and Moral Status
Thanks for your comment. Measuring and comparing welfare across species is a tremendous theoretical and practical challenge. For measuring capacity for welfare, we would want to get a rough sense of the range of physical pain and pleasure an animal can experience as well as the range of emotional pain and pleasure an animal can experience. We would also want to know the degree to which physical and emotional pain/pleasure contribute to overall welfare, and this may differ by species. (We will need to account for combination effects: among other things, "stacking" one unit of physical pain on top of one unit of emotional pain may create more or less than two units of overall suffering.) All else being equal, if two animals have the same range of possible physical pains and pleasures, but animal A has a greater range of possible emotional pains and pleasures than animal B, we would expect animal A to have a greater capacity for welfare than animal B.
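The combination effects mentioned above can be made concrete with a toy model. This is my own construction, not anything from the post: a hypothetical `combined_suffering` function with an interaction parameter `k` that makes stacking physical and emotional pain super-additive (k > 0), purely additive (k = 0), or sub-additive (k < 0).

```python
def combined_suffering(physical: float, emotional: float, k: float = 0.0) -> float:
    """Toy model of 'stacking' pain units from two sources.

    k > 0: the pains amplify each other (more than the sum of the parts);
    k = 0: purely additive;
    k < 0: the pains partially overlap or dampen each other.
    """
    return physical + emotional + k * physical * emotional

# Stacking one unit of physical pain on one unit of emotional pain:
print(combined_suffering(1, 1, k=0.5))   # super-additive: 2.5 units
print(combined_suffering(1, 1, k=0.0))   # additive: 2.0 units
print(combined_suffering(1, 1, k=-0.5))  # sub-additive: 1.5 units
```

The interaction parameter would itself plausibly differ by species, which is part of why the measurement problem described above is hard.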
One thing to keep in mind is that what ultimately matters morally is realized welfare, not capacity for welfare. In many instances, judging the effectiveness of an intervention will require looking at species-specific differences in the way welfare is realized. Two animals may have the same overall capacity for welfare, and they may be subject to the same conditions (solitary confinement, say), but species-specific differences (one is a social animal and the other is not, say) may indicate that one animal suffers much more than the other in those conditions.
Nonetheless, I do believe thinking about capacity for welfare will help increase the efficiency with which our resources are allocated across interventions, especially when applied to big-picture questions, like "What percentage of our resources should ideally go to fish or crustaceans or insects?"

zdgroff on How to Measure Capacity for Welfare and Moral Status
Great post, and I'm excited to see RP work on this. I have great confidence in your carefulness about this.
A concern I have with pretty much every approach to weighting welfare across species is that the correct weights may depend on the type of experience. For example, I could imagine the intensity of physical pain being very similar across species, but the severity of depression from not being able to move varying greatly.
Is there a way to allow for this within the approach you lay out here?

amritsidhu-brar on What is a good donor advised fund for small UK donors?
My impression from CAF's webpage on their Charity Accounts was that the 4% fee is a one-off charge when you contribute money to the account, rather than an annual fee on the balance. However, the page isn't very clear, and your interpretation definitely makes sense too. Does anyone's knowledge come from a source other than the website?