I guess that doesn't work for nihilism, but another option when there's indifference or incomparability about a given decision is to just resample until you get a theory that cares (or condition on theories that care). But you might only want to do this for nihilism, because other theories could care that their votes are being given away, if they could have otherwise used them as leverage for decisions they would care about.
I'm not committed to only illusions related to attention mattering or indicating consciousness. I suspect the illusion of body ownership is an illusion that indicates consciousness of some kind, like with the rubber hand illusion, or, in rodents, the rubber tail illusion. I can imagine illusions related to various components of experiences (e.g. redness, sound, each sense), and the ones that should matter terminally to us would be the ones related to valence and desires/preferences, basically illusions that things actually matter to the system with those illusions.
I suspect that recognizing faces doesn't require any illusion that would indicate consciousness. Still, I'm not sure what counts as an illusion, and I could imagine it being the case that there are very simple illusions everywhere.
I think illusionism is the only theory (or set of theories) that's on the right track to actually (dis)solving the hard problem, by explaining why we have the beliefs we do about consciousness, and I'm pessimistic about all other approaches.
I think you can address the violation of dominance by just allowing representatives of theories to bargain and trade. If theory A doesn't care (much) about some choice, and another theory B cares (a lot), theory B can negotiate with theory A and buy out their votes by offering something in later (or simultaneous) decisions.
Newberry and Ord, 2021 have separately developed the sortition model as a "proportional chances" voting version of the moral parliament, which also explicitly allows bargaining and trading ahead of time. Furthermore, if we consider voting on what to do with each unit of resources independently and resources are divisible into a large number of units, proportional chances voting will converge to a proportional allocation of resources, like Lloyd's property rights approach, which is currently the approach I favour.
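Here's a minimal simulation sketch of that convergence claim (my own illustration with made-up credences, not from either paper), treating each unit of resources as an independent proportional-chances vote:

```python
import random

# Hypothetical credences in three moral theories (made-up numbers).
credences = {"A": 0.6, "B": 0.3, "C": 0.1}

def proportional_chances_allocation(credences, n_units=100_000, seed=0):
    """Allocate each unit of a divisible resource by an independent
    proportional-chances vote; by the law of large numbers, the shares
    approach the credences themselves."""
    rng = random.Random(seed)
    theories = list(credences)
    weights = [credences[t] for t in theories]
    counts = dict.fromkeys(theories, 0)
    for _ in range(n_units):
        winner = rng.choices(theories, weights=weights, k=1)[0]
        counts[winner] += 1
    return {t: counts[t] / n_units for t in theories}

print(proportional_chances_allocation(credences))
# roughly {'A': 0.6, 'B': 0.3, 'C': 0.1}, i.e. a proportional allocation
```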
Also, to prevent minority views from gaining control and causing extreme harm by the lights of the majority, Newberry and Ord propose having the voters act as if the winner will be selected at random in proportion to their credences, so they act accordingly, compromising and building coalitions, while the winner is actually just chosen by plurality, i.e. whichever theory gets the most votes. This seems a bit too ad hoc to me and gives up a lot of the appeal of random voting in the first place, though, and I hope we can find a more natural response. Some ideas:
- Other theories can burn their votes/resources to eliminate one theory's votes/resources. I think this could be pretty unfair, because with say two theories competing, the one with greater credence could completely rule out the one with lower credence and gain total control. It should be more costly to do something like this.
- There can be a constraining constitution that is decided by a supermajority with a relatively high threshold (not randomly).
I agree the share of individuals who would be convinced to vote based on such an argument seems pretty small. In particular, the share of people hearing these arguments seems pretty small, although maybe if you include far future beings, the share (or influence-weighted share) could be large.
It could matter for people who are concerned with difference-making and think the probability of making a difference is too low under standard causal decision theory and assign reasonably high probability to an infinite universe. See Can an evidentialist be risk-averse? by Hayden Wilkinson. Maybe on other views, too, but not risk neutral expected value-maximizing total utilitarianism.
It could just be attention. If something would otherwise be too sweet, but some other part of it is salient (coldness, carbonation, bitterness, saltiness), those other parts will take some of your attention away from its sweetness, and it'll seem less sweet.
I think noticing your own awareness, a self-model and a model of your own attention are each logically independent of (neither necessary nor sufficient for) consciousness. I interpret AST as claiming that illusions of conscious experience, specific ways information is processed that would lead to inferences like the kind we make about consciousness (possibly when connected to appropriate inference-making systems, even if not normally connected), are what make something conscious, and, in practice in animals, these illusions happen with the attention model and are unlikely to happen elsewhere. From Graziano, 2020:
Suppose the machine has a much richer model of attention. Somehow, attention is depicted by the model as a Moray eel darting around the world. Maybe the machine already had need for a depiction of Moray eels, and it coapted that model for monitoring its own attention. Now we plug in the speech engine. Does the machine claim to have consciousness? No. It claims to have an external Moray eel.
Suppose the machine has no attention, and no attention schema either. But it does have a self-model, and the self-model richly depicts a subtle, powerful, nonphysical essence, with all the properties we humans attribute to consciousness. Now we plug in the speech engine. Does the machine claim to have consciousness? Yes. The machine knows only what it knows. It is constrained by its own internal information.
AST does not posit that having an attention schema makes one conscious. Instead, first, having an automatic self-model that depicts you as containing consciousness makes you intuitively believe that you have consciousness. Second, the reason why such a self-model evolved in the brains of complex animals, is that it serves the useful role of modeling attention.
I would also go a bit further to claim that it's "rich" illusions, not "sparse" illusions, that matter here. Shabasson, 2021 gives a nice summary of Kammerer, 2019, where this distinction is made:
According to Kammerer, the illusion of phenomenal consciousness must be a rich illusion because of its strength. It persists regardless of what an agent might come to believe about the reality (or unreality) of phenomenal consciousness. By contrast, a sparse illusion such as the headless woman illusion quickly loses its grip on us once we come to believe it is an illusion and understand how it is generated. Kammerer criticizes Dennett’s and Graziano’s theories for being sparse-illusion views (2019c: 6–8).
The example rich optical illusion given is the Müller–Lyer illusion. It doesn't matter if you just measured the lines to show they have the same length: once you look at the original illusion again (at least without extra markings or rulers to make it obvious that they are the same length), one line will still look longer than the other.
On a practical and more theory-neutral or theory-light approach, we can also distinguish between conscious and unconscious perception in humans, e.g. with blindsight and other responses to things outside awareness. Of course, it's possible the "unconscious" perception is actually conscious, just not accessible to the higher-order conscious process (conscious awareness/attention), but there doesn't seem to be much reason to believe it's conscious at all. Furthermore, generating consciousness illusions below awareness seems more costly than generating them only at the level we're aware of, because most of those illusions would be filtered out of awareness and have little impact on behaviour, so there should be evolutionary pressure against that. Then, we have little reason to believe capacities that are sometimes realized unconsciously in humans indicate consciousness in other animals.
RP's invertebrate sentience research gave little weight to capacities that (sometimes) operate unconsciously in humans. Conscious vs unconscious perception is discussed more by Birch, 2020. He proposes the facilitation hypothesis:
Phenomenally conscious perception of a stimulus facilitates, relative to unconscious perception, a cluster of cognitive abilities in relation to that stimulus.
and three candidate abilities: trace conditioning, rapid reversal learning and cross-modal learning. The idea would be to "find out whether the identified cluster of putatively consciousness-linked abilities is selectively switched on and off under masking in the same way it is in humans."
Apparently some rich optical illusions can occur unconsciously while others occur consciously, though (Chen et al., 2018). So, maybe there is some conscious but inaccessible perception, although this is confusing, and I'm not sure about the relationship between these kinds of illusions and illusionism as a theory. Furthermore, I'm still skeptical of inaccessible conscious valence in particular, since valence seems pretty holistic, context-dependent and late in any animal's processing to me. Mason and Lavery, 2022 discuss some refinements to experiments to distinguish conscious and unconscious valence.
I do concede that there could be an important line-drawing or trivial instantiation problem for what counts as having a consciousness illusion, or valence illusion, in particular.
Ya, the analyses explicitly include spillover effects on some individuals who aren't directly affected by the interventions (i.e. household family members), but ignore potentially important predictable nearterm indirect effects (those on nonhuman animals) and all of the far future effects. And it doesn't explain why.
However, ignoring effects on nonhuman animals and the far future is typical for analyses of global health and poverty interventions. And this is discussed in other places where cause prioritization is the main topic. I'd guess, based on comments elsewhere on the EA Forum and other EA-related spaces, nonhuman animal effects are ignored because the authors don't agree with giving nonhuman animals so much moral weight relative to humans, or are doing worldview diversification and they aren't confident in such high moral weights. I don't think we'd want a comment like Vasco's on many global health and poverty intervention posts, because we don't want to have the same discussion scattered and repeated this way, especially when there are better places to have it. Instead, Vasco's own posts, posts about moral weight and posts about cause prioritization would be better places.
When people bring up effects on wild fish, I often point out that they're thinking about it the wrong way (getting the supply responses wrong) and ignoring population effects. But I'm pretty sure this is something they would care about if informed, and there aren't that many posts about wild fish. I also suspect we should be more worried about animal product reduction backfiring in the near term because of wild animal effects, but I think this is more controversial and animal product reduction is covered much more on the EA Forum than fishing in particular, so passing comments on posts about diet change and substitutes doesn't seem like a good way to have this discussion.
I guess there's a question of whether a comment like Vasco's would be welcome every now and then on global health and poverty posts, but it could be a slippery slope.
What I have in mind is specifically that these random particle movements could sometimes temporarily simulate valence-generating systems by chance, even if only for a fraction of a second. I discussed this more here, and in the comments.
My impression across various animal species (mostly mammals, birds and a few insect species) is that 10-30% of neurons are in the sensory-associative structures (based on data here), and even fewer would be needed to generate conscious valence (given the right inputs, say), maybe only a fraction of the neurons that are ever involved in generating conscious valence. So it seems that simulating around 50 of the 302 neurons would be enough, and maybe even a few times fewer. Maybe this would be overgeneralizing to nematodes, though.
If it's true that individual biological neurons are like two-layer neural networks, then 302 biological neurons would be like thousands (or more?) of artificial neurons.
I did have something like this in mind, but was probably thinking something like biological neurons are 10x more expressive than artificial ones, based on the comments here. Even if that's not more likely than not, a non-tiny chance of at most around 10x could be enough, and even a tiny chance could get us a wager for panpsychism.
I suppose an artificial neuron could also be much more complex than a few particles, but I can also imagine that could not be the case. And invertebrate neuron potentials are often graded rather than spiking, which could make a difference in how many particles are needed.
Even if we had an artificial neural network that could mimic all the cognitive abilities of C. elegans, I think the biological organism would still seem more sentient because it would have a body and would interact with a real, complex environment, which would make the abstract symbol manipulations of its brain feel more grounded and meaningful. Hooking up the artificial brain to a small robot body would feel closer to matching C. elegans in terms of sentience, but by that point, it's plausible to me that the robot itself would warrant nontrivial moral concern.
I'd be willing to buy something like this. In my view, a real C. elegans brain separated from the body and receiving misleading inputs should have valence as intense as a C. elegans with a body, on the right kinds of inputs. On views other than hedonism, maybe a body makes an important difference, and all else equal, I'd expect having a body and interacting with the real world to just mean greater (more positive and less negative) welfare overall, basically for experience machine reasons.
I think the way the theories are assumed to work in that paper are all implausible accounts of consciousness, and, at least for GWT, not how GWT is intended to be interpreted. See https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we#Neural_correlate_theories_of_consciousness_____explanatory_theories_of_consciousness
I now lean towards illusionism, and something like Attention Schema Theory. I don't think illusionism rules out panpsychism, but I'd say it's much less likely under illusionism. I can share some papers that I found most convincing. Luke Muehlhauser's report on consciousness also supports illusionism.
With only 302 neurons, probably only a minority of which actually generate valenced experiences, if they're sentient at all, I might have to worry about random particle interactions in the walls generating suffering.
Nematodes also seem like very minimal RL agents that would be pretty easy to program. The fear-like behaviour seems interesting, but still plausibly easy to program.
I don't actually know much about mites or springtails, but my ignorance counts in their favour, as does them being more closely related to and sharing more brain structures (e.g. mushroom bodies) with arthropods with more complex behaviours that seem like better evidence for sentience (spiders for mites, and insects for springtails).
To clarify what I edited in, I mean that, without better evidence/argument, the effect could be arbitrarily small but still nonzero. What reason do we have to believe it's at least 1%, say, other than very subjective priors?
I agree that analysis of new evidence should help.
(EDITED)
Is this (other than 53% being corrected to 38%) from the post accurate?
Spillovers: HLI estimates that non-recipients of the program in the recipient’s household see 53% of the benefits of psychotherapy from StrongMinds and that each recipient lives in a household with 5.85 individuals.[11] This is based on three studies (Kemp et al. 2009, Mutamba et al. 2018a, and Swartz et al. 2008) of therapy programs where recipients were selected based on negative shocks to children (e.g., automobile accident, children with nodding syndrome, children with psychiatric illness).[12]
If so, a substantial discount seems reasonable to me. It's plausible these studies also say almost nothing about the spillover, because of how unrepresentative they seem. Presumably much of the content of the therapy will be about the child, so we shouldn't be surprised if it has much more impact on the child than general therapy for depression.
It's not clear any specific number away from 0 could be justified.
It seems hard to believe that the life satisfaction of bad lives can only range 0.5 points on a 0-10 life satisfaction scale, assuming a neutral point of 0.5. Or, if it is the case, then a marginal increase in measured life satisfaction should have greater returns to actual welfare (whatever that is) near the low end than elsewhere.
I'm not sure how corporate taxes work exactly, but I don't think the companies benefit overall by giving you the money if you only take advantage of promotions like this and don't also bet more often than otherwise besides with promotions. Otherwise, companies would donate a lot more and enough to never pay any taxes (or pay only the minimum required after all possible deductions).
So the net effects of taking advantage of this are that the companies earn less in net profits and pay less taxes. But they'd pay less in taxes if they were less profitable, anyway (although they're probably paying even less, relative to their net profits). Then:
- If you think it's good that these companies earn less in net profits after taxes, then that counts in favour of taking and using their promotion money.
- If you're worried about reducing public resources, then this also counts against you using charitable tax credits/deductions in order to donate more to EA charities in general. But EAs don't typically worry about this, because our charitable dollars do much more good and the rate here is still pretty favourable.
Although more controversial, you might even believe reducing public expenditures overall is good, if you believe the state uses marginal funding in net harmful ways, with harms from wars, enforcing unjust laws or overburdensome regulation.
It seems like you're overselling Shapley values here, then, unless I've misunderstood. They won't help to decide which interventions to fund, except for indirect reasons (e.g. assigning credit and funding ex post, judging track record).
You wrote "Then we walk away saying (hyperopically,) we saved a life for $5,000, ignoring every other part of the complex system enabling our donation to be effective. And that is not to say it's not an effective use of money! In fact, it's incredibly effective, even in Shapley-value terms. But we're over-allocating credit to ourselves."
But if $5000 per life saved is the wrong number to use to compare interventions, Shapley values won't help (for the right reasons, anyway). The solution here is to just model counterfactuals better. If you're maximizing the sum of Shapley values, you're acknowledging we have to model counterfactuals better anyway, and the sum is just expected utility, so you don’t need the Shapley values in the first place. Either Shapley value cost-effectiveness is the same as the usual cost-effectiveness (my interpretation 1) and redundant, or it's a predictably suboptimal theoretical target (e.g. maximizing your own Shapley value only, as in Nuno's proposal, or as another option, my interpretation 2, which requires unrealistic counterfactual assumptions).
The solution to the non-EA money problem is also to just model counterfactuals better. For example, Charity Entrepreneurship has used estimates of the counterfactual cost-effectiveness of non-EA money raised by their incubated charities if the incubated charity doesn't raise it.
Do you mean maximizing the sum of Shapley values or just your own Shapley value? I had the latter in mind. I might be mistaken about the specific perverse examples even under that interpretation, since I'm not sure how Shapley values are meant to be used. Maximizing your own Shapley value seems to bring in a bunch of counterfactuals (i.e. your counterfactual contribution to each possible coalition) and weigh them ignoring propensities to cooperate/coordinate, though.
On the other hand, the sum of Shapley values is just the value (your utility?) of the "grand" coalition, i.e. everyone together. If you're just maximizing this, you don’t need to calculate any of the Shapley values first (and in general, you need to calculate the value of the grand coalition for each Shapley value). I think the point of Shapley values would just be for assigning credit (and anything downstream of that), not deciding on the acts for which credit will need to be distributed.
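As a toy illustration of that point (my own example with a made-up characteristic function, not anything from the post), the Shapley values of a small cooperative game sum to the value of the grand coalition, so maximizing the sum is just maximizing that value:

```python
from itertools import permutations

players = ["donor", "charity", "government"]
v = {  # hypothetical value created by each coalition
    frozenset(): 0,
    frozenset({"donor"}): 0,
    frozenset({"charity"}): 1,
    frozenset({"government"}): 2,
    frozenset({"donor", "charity"}): 6,
    frozenset({"donor", "government"}): 3,
    frozenset({"charity", "government"}): 4,
    frozenset({"donor", "charity", "government"}): 10,
}

def shapley_values(players, v):
    """Shapley value = each player's marginal contribution, averaged over
    all orders in which the grand coalition could be assembled."""
    values = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            values[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: total / len(orders) for p, total in values.items()}

sv = shapley_values(players, v)
print(sv)
print(sum(sv.values()))  # equals the value of the grand coalition: 10
```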
If you're maximizing this sum, what are the options you're maximizing over?
1. On one interpretation, if you're maximizing this only over your own actions and their consequences, including on others' responses (and possibly acausal influence), it's just maximizing expected utility.
2. On another interpretation, if you're maximizing it over everyone's actions or assuming everyone else is maximizing it (and so probably that everyone is sufficiently aligned), then that would be ideal (game-theoretically optimal?), but such an assumption is often unrealistic and making it can lead to worse outcomes. For example, our contributions to a charity primarily supported (funding, volunteering, work) by non-EAs with little room for more support might displace that non-EA support towards far less cost-effective uses or even harmful uses. And in that case, it can be better to look elsewhere more neglected by non-EAs. The assumption may be fine in some scenarios (e.g. enough alignment and competence), but it can also be accommodated in 1.
So, I think the ideal target is just doing 1 carefully, including accounting for your influence over other agents and possibly acausal influence in particular.
Are you suggesting we maximize Shapley values instead of expected utility with counterfactuals? That's going to violate standard rationality axioms, and so (in idealized scenarios with "correctly identified" counterfactual distributions) is likely to lead to worse outcomes overall. It could recommend, both in practice and in theory, doing fully redundant work just for the credit and no extra value. In the paramedic example, depending on how the numbers work out, couldn't it recommend perversely pushing the paramedics out of the way to do CPR yourself and injuring the individual's spine, even knowing ahead of time you will injure them? There's a coalition - just you - where your marginal/counterfactual impact is to save the person's life, and giving that positive weight could make up for the injury you actually cause.
I think we should only think of Shapley values as a way to assign credit. The ideal is still to maximize expected utility (or avoid stochastic dominance or whatever). Maybe we need to model counterfactuals with other agents better, but I don't find the examples you gave very persuasive.
The most important thing about your decision theory is that it shouldn't predictably and in expectation leave you worse off than if you had used a different approach. My claim in the post is that we're using such an approach, and it leaves us predictably worse off in certain specific cases.
This isn't a problem with expected utility maximization (with counterfactuals), though, right? I think the use of counterfactuals is theoretically sound, but we may be incorrectly modelling counterfactuals.
This is a technically accurate definition, but I still had trouble intuiting this as equivalent to a daily experience of disabling physical pain equivalent to having your leg sliced open with a hot, sharp live wire.
Nest deprivation could be in the bottom half of the disabling pain intensity range. Ren put their tattoo experience, described as "**** me, make it stop. Like someone slicing into my leg with a hot, sharp live wire.", near the high end of disabling. Also, the latter just sounds excruciating to me personally, not merely disabling, but we discussed that here.
Besides the evidence you mention, they also mention vocalizations (gakel-calls), which seem generally indicative of frustration across contexts (dustbathing deprivation, food/water deprivation, nesting deprivation), and hens made more of them when nest deprived than when deprived of food, water or dustbathing in Zimmerman et al., 2000, although in that study, the authors discuss the possibility that nest deprivation gakel-calls are more specific and not necessarily indicative of frustration:
In the period Frustration, the number of gakel-calls was higher in treatment Nest than in the other treatments. This might mean that in this treatment the level of frustration was higher. However, this is not supported by higher levels of other behaviours indicative of frustration in treatment Nest compared to the other treatments. An alternative explanation for the higher number of gakel-calls in treatment Nest is suggested by the occurrence of the gakel-call under natural circumstances. The gakel-call is given before oviposition and probably has evolved as a signal towards the rooster (McBride et al., 1969; Thornhill, 1988). According to Meijsser and Hughes (1989), the performance of the gakel-call is related to finding a suitable nest site, also under husbandry conditions. Another explanation is offered by the motivational model proposed by Wiepkema (1987). It implies that the gakel-call under these circumstances is an emotional expression of the detection of a prolonged mismatch between the actual state ("no nest site found") and the desired state ("find a suitable nest site") and is an indication of frustration. Both oviposition and the detection of a prolonged mismatch could at the same time contribute to the occurrence of gakel-calls. The surplus of gakel-calls in treatment Nest compared to the other treatments might be the gakel-calls specifically related to oviposition.
This latter finding might account for the difference in temporal characteristics of gakel-calls between treatment Nest and the treatments Water and Dust. Gakel-calls in treatment Nest lasted longer and consisted of more notes than in the treatments Water and Dust. Schenk et al. (1983) found that the mean duration of a single gakel-call was longer when dustbathing was thwarted stronger by longer deprivation. However, from the present study, nothing decisive can be concluded about the relation between the number of gakel-calls and their temporal characteristics on the one hand, and the intensity of thwarting in the different treatments on the other.
I agree with you that that is definitely conceivable. But I think that, as Carl argued in his post (and elaborated on further in the comment thread with gwern), our default assumption should be that efficiency (and probably also intensity) of pleasure vs pain is symmetric.
I think identical distributions for efficiency is a reasonable ignorance prior, ignoring direct intuitions and evidence one way or the other, but we aren't so ignorant that we can't make any claims one way or the other. The kinds of claims Shulman made are only meant to defeat specific kinds of arguments for negative skew over symmetry, like direct intuition, not to argue for positive skew. If direct intuition could still be useful in this case, contra Shulman (and it does seem likely that it skews towards negative being more efficient), then, without arguments for positive skew that don't apply equally in favour of negative skew, we should indeed expect the negative to be more efficient.
Furthermore, based on the arguments other than direct intuition I made above, and, as far as I know, no arguments for pleasure being more efficient than pain that don't apply equally in reverse, we have more reason to believe efficiencies should skew negative.
Also similar to gwern's comment, if positive value on non-hedonistic views does depend on things like reliable perception of the outside world or interaction with other conscious beings (e.g. compared to the experience machine or just disembodied pleasure) but bads don't (e.g. suffering won't really be any less bad in an experience machine or if disembodied), then I'd expect negative value to be more efficient than positive value, possibly far more efficient, because perception and interaction require overhead and may slow down experiences.
However, similar efficiency for positive value could still be likely enough that the expected efficiencies are still similar enough and other considerations like their frequency dominate.
Ah, welfare range estimates may already be supposed to capture the probability that an animal can experience intense suffering, like excruciating pain.
Thanks for writing this!
You might be able to make some informed guesses or do some informative sensitivity analysis about net welfare in wild animals, given your pain intensity ratios. I think it's reasonable to assume that animals don't experience any goods as intensely good (as valuable per moment) as excruciating pain is intensely bad. Pleasures as intense as disabling pain may also be rare, but that could be an assumption to vary.
Based on your ratios and total utilitarian assumption, 1 second of excruciating pain outweighs 11.5 days of annoying pain or 1.15 days of hurtful pain, or 11.5 days of goods as intense as annoying pain or 1.15 days of goods as intense as hurtful pain, on average.
Just quickly Googling for the most populous groups I'm aware of, mites, springtails and nematodes live a few weeks at most and copepods up to around a year. There might be other similarly populous groups of aquatic arthropods I'm missing that you should include, but I think mites and springtails capture most of the moral weight among terrestrial arthropods. I think those animals will dominate your calculations, the way you're doing them. Their deaths could involve intense pain, and perhaps only a very small share live more than a week. However, it's not obvious these animals can experience very intense suffering at all, even conditional on their sentience, but this probability could be another sensitivity analysis parameter.
(FWIW, I'd be inclined to exclude nematodes, though. Including them feels like a mugging to me and possibly dominated by panpsychism.)
Ants may live up to a few years and are very populous, and I could imagine them having relatively good lives on symmetric ethical views, as eusocial insects investing heavily in their young. But they're orders of magnitude less populous than mites and springtails.
Although this group seems likely to be outweighed in expectation, for wild vertebrates (or at least birds and mammals?), sepsis seems to be one of the worst natural ways to die, with 2 hours of excruciating pain and further time at lower intensities in farmed chickens (https://welfarefootprint.org/research-projects/cumulative-pain-and-wild-animal-welfare-assessments/ ). With your ratios, this is the equivalent of more than 200 years of annoying pain or 20 years of hurtful pain, much longer than the vast majority of wild vertebrates (by population and perhaps species) live. I don't know how common sepsis is, though. Finding out how common sepsis is in the most populous groups of vertebrates could have high value of information for wild vertebrate welfare.
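To make that conversion explicit, here's the arithmetic (mine, not from the post, just applying the per-second equivalences above):

```python
# Rough arithmetic only; the 11.5-day and 1.15-day equivalences per second of
# excruciating pain are the figures quoted above, not re-derived here.
DAYS_PER_YEAR = 365
ANNOYING_DAYS_PER_EXCRUCIATING_SECOND = 11.5
HURTFUL_DAYS_PER_EXCRUCIATING_SECOND = 1.15

def excruciating_equivalents(excruciating_seconds):
    return {
        "annoying-pain years": excruciating_seconds * ANNOYING_DAYS_PER_EXCRUCIATING_SECOND / DAYS_PER_YEAR,
        "hurtful-pain years": excruciating_seconds * HURTFUL_DAYS_PER_EXCRUCIATING_SECOND / DAYS_PER_YEAR,
    }

print(excruciating_equivalents(1))            # 1 second: ~0.03 and ~0.003 years
print(excruciating_equivalents(2 * 60 * 60))  # 2 hours (sepsis): ~227 and ~23 years
```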
I'm not aware of high status individuals in the community justifying prioritizing humans on the mere basis of species membership. Usually I see claims about differences in capacities and interests. Are there public examples you can share?
Thanks for sharing your experiences. There's also an article here with some useful info on and others' experiences with inadequate pain relief for childbirth in the UK: https://www.vice.com/en/article/8x7mm4/childbirth-pain-relief-denied
I think there are multiple ways to be a neartermist or longtermist, but "currently existing" and "next 1 year of experiences" exclude almost all effective animal advocacy we actually do and the second would have ruled out deworming.
Are you expecting yourself (or the average EA) to be able to cause greater quantities of intense pleasure than quantities of intense suffering you (or the average EA) can prevent in the next ~30 years, possibly considering AGI? Maybe large numbers of artificially sentient beings made to experience intense pleasure, or new drugs and technologies for humans?
To me, the distinction between neartermism and longtermism is primarily based on decision theory and priors. Longtermists tend to be more willing to bet more to avoid a single specific existential catastrophe (usually extinction) even if the average longtermist is extremely unlikely to avert the catastrophe. Neartermists rely on better evidence, but seem prone to ignore what they can't measure (McNamara fallacy). It seems hard to have predictably large positive impacts past the average human lifespan other than through one-shots the average EA is very unlikely to be able to affect, or without predictably large positive effects in the nearer term, which could otherwise qualify the intervention as a good neartermist one.
I was just about to share this. I guess some of the psychedelics in their pleasure scale figure could be the easiest to use to experience intense pleasure, depending on your local laws and enforcement.
It may end up being that such intensely positive values are possible in principle and matter as much as intense pains, but they don’t matter in practice for neartermists, because they're too rare and difficult to induce. Your theory could symmetrically prioritize both extremes in principle, but end up suffering-focused in practice. I think the case for upside focus in longtermism could be stronger, though.
It's also conceivable that pleasurable states as intense as excruciating pains in particular are not possible in principle after refining our definitions of pleasure and suffering and their intensities. Pleasure and suffering seem not to be functionally symmetric. Excruciating pain makes us desperate to end it. The urgency seems inherent to its intensity, and its subjective urgency translates into moral urgency and importance when we weight individuals' subjective wellbeing. Would similarly intense pleasure make us desperate to continue/experience it? It's plausible to me that such desperation would actually just be bad or unpleasant, and so such a pleasurable state would be worse than other pleasurable ones. Or, at least, such desperation doesn't seem to me to be inherently positively tied to its intensity. Suffering is also cognitively disruptive in a way pleasure seems not to be. And pain seems more tied to motivation than pleasure does (https://link.springer.com/article/10.1007/s13164-013-0171-2 ).
(Edited to correct my math and add the example at the bottom.)
That sounds possible to me, but again I feel very unsure and would want to work through some models before deciding. It still seems possible to me that the effects on wild arthropods (aquatic and terrestrial) could outweigh, even vastly outweigh, the effects on farmed insects. It could depend on which arthropods you count, what probabilities you assign to their consciousness and their expected moral weights, though, e.g. mites, springtails and zooplankton might be too unlikely to be conscious or not matter enough overall, but I suspect the number of them affected will be far greater than the number of affected farmed insects, at least in expectation.
Globally, there are an estimated
- 10^17 to 10^19 terrestrial arthropods alive at any moment, according to Tomasik, 2009 and Bar-On, Phillips, and Milo, 2018. Tomasik, 2009 has a breakdown.
- 10^18 copepods (small crustaceans) according to Tomasik, 2009.
On the other hand, I expect the number of farmed insects to reach on the order of 10^11 to 10^12: around 500,000 tonnes in 2030 by dry weight or dried protein meal, at about 0.05 grams per dried black soldier fly larva, is around 10^13 larvae slaughtered per year, and since they live around 14 days before slaughter, I'd estimate the number alive at any time in 2030 to be roughly 10^13 × 14/365 ≈ 4 × 10^11.
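Here's the back-of-the-envelope arithmetic behind that estimate (my own reconstruction, using only the figures just given):

```python
# Rough reconstruction of the farmed black soldier fly estimate above.
tonnes_per_year = 500_000            # projected dried BSF production in 2030
grams_per_dried_larva = 0.05
days_alive_before_slaughter = 14

larvae_slaughtered_per_year = tonnes_per_year * 1_000_000 / grams_per_dried_larva
alive_at_any_time = larvae_slaughtered_per_year * days_alive_before_slaughter / 365

print(f"{larvae_slaughtered_per_year:.1e} slaughtered per year")  # ~1e13
print(f"{alive_at_any_time:.1e} alive at any time")               # ~4e11
```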
So that's at least around 1,000,000x more wild arthropods than farmed arthropods. On the other hand, we should expect a greater share of the farmed insects to be affected, possibly a far greater share, and farmed insects should matter more per individual on average and in expectation, but 1,000,000x is a lot.
For example, springtails are estimated to be "100,000 individuals per square meter of ground" (wiki). To match the number of farmed black soldier fly alive at any moment in 2030, you would need to affect 1 million square meters, or 1 square kilometer of soil (in a way that has a modest impact on their population or average welfare). When you scale that down to individual consumption choices, probably less than 1 millionth of this, it doesn't seem crazy for the springtails to be affected more, and possibly much more, in expectation. If a single farmed salmon is fed 1 kg of dried BSF larva meal*, that's something like 20,000 individual BSF larvae who would have lived around 14 days each. I think it's likely that more than 1 square meter of soil (so possibly more than 100,000 springtails per moment) is affected for more than 14 days in expectation for a few kg of other feed for one farmed salmon. So springtails could be affected far more.
*This is probably an overestimate, maybe 10x too high or more. Farmed salmon convert around 1 kg of dry feed into 1 kg of wet weight gained, and have final weights of around 3 to 6 kg, but insects would probably only make up around 5-15% of their feed by weight even for those that do include insects in their diets (similar to current fishmeal inclusion rates), and the vast majority probably still won't.
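Putting rough numbers on that comparison (my own arithmetic, using the figures above, including the probably-too-high 1 kg of BSF meal per salmon flagged in the footnote):

```python
# Farmed BSF larva-days vs wild springtail-days affected per farmed salmon,
# under the assumptions stated above.
GRAMS_PER_DRIED_LARVA = 0.05
LARVA_LIFESPAN_DAYS = 14
SPRINGTAILS_PER_M2 = 100_000

bsf_meal_grams_per_salmon = 1_000   # probably an overestimate (see footnote)
larvae_per_salmon = bsf_meal_grams_per_salmon / GRAMS_PER_DRIED_LARVA  # ~20,000
bsf_larva_days = larvae_per_salmon * LARVA_LIFESPAN_DAYS               # ~280,000

soil_m2_affected = 1                # assumed: >1 m^2 affected by the other feed
days_affected = 14                  # assumed: for >14 days
springtail_days = SPRINGTAILS_PER_M2 * soil_m2_affected * days_affected  # ~1,400,000

print(bsf_larva_days, springtail_days)
```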
You're asking about whether farmed fish dominate anyway, right? I think the specific claims I made about wild fish (including wild fish caught to feed farmed fish) still hold, but it's possible that the main effects of eating fish, whether wild-caught or farmed, are on farmed fish, because
- if you eat more wild-caught fish, others will eat more farmed fish in expectation, because they're substitute goods, and
- the weight of farmed fish produced will probably be more responsive to your diet decisions than the weight of fish caught from the wild is, because of greater price elasticity of supply, as wild fish stocks are limited and often exploited near the rate that gives maximum sustainable yield or are managed specifically to maintain stocks and catch (e.g. quotas).
Still, I think there could be on the order of 10x more wild-caught anchovies than farmed fish (e.g. see the columns for % slaughtered/bred/used annually in this table) and the population effects are probably larger than the catch effects (based on my toying with fishery models), and wild arthropods are even more numerous so could be impacted more. So, even if the effect by weight is smaller, the effect by number of individuals could be larger. Again, I still feel pretty clueless, as I haven't seen a model attempting to quantify all of these effects together. If you ignore wild arthropods (aquatic and terrestrial), I think there's a decent chance we could answer the question either way, but I'm less optimistic when you include wild arthropods.
On the other hand, eating fish could increase insect farming; aquaculture is a primary target market for farmed insects. See for example, this report.
For what it's worth, I also think wild arthropods could easily dominate all diet decisions in the near term in expectation, but if you ignore them, I'd guess eating relatively small farmed animals and their products, like shrimp, herbivorous farmed fish, chicken and eggs, is bad.
If that were the only important effect, maybe, because I'm also suffering-focused. But I'd rather say that I'm clueless about whether eating fish is good or bad. There are also other effects, like on small wild aquatic (and for farmed species, terrestrial) arthropods that could end up dominating and could go the other way, but I haven't thought enough about them. I think this is a reasonable position whether or not you're suffering-focused, as long as you're roughly utilitarian of some kind.
I think for most fisheries, the price elasticity of wild-caught fish supply, including fish caught specifically for fishmeal to feed farmed fish and shrimp, tends to be close to 0 and is sometimes even negative (when there's overfishing, which is common in developing countries). So, it's not clear you would have been responsible in expectation for many extra fishing deaths, and you might have even spared fish from fishing deaths in expectation by reducing the number that could be caught. See:
- https://forum.effectivealtruism.org/posts/snnfmepzrwpAsAoDT/why-anima-international-suspended-the-campaign-to-end-live?commentId=5o6jpg6m47iJ6L2JK
- https://reducing-suffering.org/should-fishing-opponents-be-happy-about-overfishing/
- https://reducing-suffering.org/wild-caught-fishing-affects-wild-animal-suffering/
with the last two written from a suffering-focused perspective.
Maybe it briefly reached excruciating when you had to stop, but it wasn’t excruciating most of the time or immediately excruciating when you started again and you didn’t expect it to be?
Also, you had a better (faster and more accessible) option than to take your life: just ask them to stop. I'm not sure the fact that you started again means it wasn’t excruciating, because you weren’t in (nearly as intense) pain when you asked them to continue, and you expected to be able to bear it again, at least for a while.
I think a pain of constant sensory intensity and quality can vary in how bad, urgent and tolerable it feels depending on how long you've been subjected to it. How bad it feels depends on your psychological reaction to it, e.g. whether you can distract yourself from it, but your ability to control your attention might wear out. A similar point is made here, with respect to stimulus intensity instead of sensory intensity: https://centerforreducingsuffering.org/research/clarifying-lexical-thresholds/
I'm curious why you think the most intensely painful parts of your tattoo experiences were disabling at most, and not excruciating. Is it that you still found them bearable, but just barely? The way you subjectively describe them and having to stop suggests to me that they weren't bearable, but I'm not sure.
For what it's worth, the Welfare Footprint Project has slightly refined pain intensity definitions compared to the ones you quote in this post, presumably to be applicable to nonhuman animals and possibly more general in other ways. They describe excruciating pain this way:
All conditions and events associated with extreme levels of pain that are not normally tolerated even if only for a few seconds. In humans, it would mark the threshold of pain under which many people choose to take their lives rather than endure the pain. This is the case, for example, of scalding and severe burning events. Behavioral patterns associated with experiences in this category may include loud screaming, involuntary shaking, extreme muscle tension, or extreme restlessness. Another criterion is the manifestation of behaviors that individuals would strongly refrain from displaying under normal circumstances, as they threaten body integrity (e.g. running into hazardous areas or exposing oneself to sources of danger, such as predators, as a result of pain or of attempts to alleviate it). The attribution of conditions to this level must therefore be done cautiously. Concealment of pain is not possible.
Tattoo - shoulder - after 30 minutes (thick lines, thick needle) | 12 | Disabling | Aching buzzing, like a mild headache in my shoulder. Could do a day of work, albeit at a lower capacity.
Should this say 'Hurtful' instead of 'Disabling'? The way you describe it sounds hurtful to me, and "Tattoo - shoulder - after 75 minutes (colouring)" was marked as Hurtful or Disabling but had a higher score.
Those who maximize expected utility with unbounded utility functions are Dutch-bookable in principle. Here's an example where you're forced into a stochastically dominated strategy with just one decision: you trade in a St. Petersburg lottery outcome for a new St. Petersburg lottery that's ex ante worse than the first lottery was before you saw its outcome, no matter the outcome.
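Here's a sketch of one version of the single-decision construction (my formulation, taking utility linear in money for simplicity, so the details may differ from the standard presentations). Suppose you hold a St. Petersburg lottery $X$ paying $2^k$ with probability $2^{-k}$ and you observe its outcome $2^n$. A fresh, independent St. Petersburg lottery $X'$, minus a fee of 1, still has infinite expected value:

$$\mathbb{E}[X' - 1] = \sum_{k=1}^{\infty} 2^{-k} \cdot 2^k - 1 = \infty > 2^n,$$

so you trade no matter which outcome you observed. But ex ante, the strategy "always trade" leaves you with $X' - 1$, which has the same distribution as $X - 1$ and is therefore first-order stochastically dominated by simply keeping $X$.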
There are also 100% guaranteed loss Dutch book arguments with infinitely many decisions, like McGee, 1999 and Pruss, 2022.
Any thoughts on how the FTX crisis affected who responded to the survey and the results through this? I can imagine people leaving EA because of FTX and so not responding as a result, but others (and maybe even people not close to the movement) might respond specifically to express their dissatisfaction with EA and might not have responded otherwise.
Do the estimates for black soldier flies primarily reflect adults? If we wanted to use an estimate for BSF larvae or mealworms, should we use the BSF estimates, the silkworm estimates (which presumably reflect the larvae, or else you'd call them silkmoths?), something in-between (an average?) or something else?
Also, among the proxies you've used, I'd be inclined to give almost all of my weight to a handful of hedonic proxies, namely panic-like behavior, hyperalgesia, PTSD-like behavior, prioritizes pain response in relevant context and motivational trade-off (a cognitive proxy) as indicating the extremes of welfare conditional on sentience, and roughly in that order by weight. The first three all came up "unknown" due to no studies for bees, but there were a few studies suggesting their presence (and none negative) for the fish. Giving almost all of your weight to these proxies would favor the fish over bees. That being said, I wouldn't be that surprised to find out that bees display those behaviors, too, because I also think bees are very impressive and behaviorally complex.
I might use joy-like behavior and play behavior for the other end of the welfare range, but I expect them to be overshadowed by the intense suffering indicators above, and I don't expect them to differ too much across the species. There was evidence of play behavior in all three, but only evidence for joy-like behavior in carps.
The next proxies that could make much difference that I think could matter on some models (although I don't assign them much weight) would be neuron counts and the number of just-noticeable differences, and neuron counts would also favor the fish.
“I can’t believe that bees beat salmon!”
We also find it implausible that bees have larger welfare ranges than salmon. But (a) we’re also worried about pro-vertebrate bias; (b) bees are really impressive; (c) there's a great deal of overlap in the plausible welfare ranges for these two types of animals, so we aren't claiming that their welfare ranges are significantly different; and (d) we don’t know how to adjust the scores in a non-arbitrary way. So, we’ve let the result stand. (We’d make similar points in response to: “I can’t believe that octopuses beat carp!”)
(I could believe octopuses beat carps, because octopuses seem unusually cognitively sophisticated among animals.)
I'd guess the main explanation for this (at least sentience-adjusted, if that's what's meant here), which may have biased your results against salmons and carps, is that you used the prior probability for crab sentience (43% mean, 31% median from table 3 in the doc) as the prior probability for salmon and carp sentience, and your posterior probabilities of sentience are generally very similar to the priors (compare tables 3 and 4 in the doc). Honeybees, fruit flies, crabs, crayfish, salmons and carps all ended up with similar sentience probabilities, but I'd assign higher probabilities to salmons and carps than to the others. You estimated octopuses to be about 2x as likely to be sentient as salmons and carps, according to both your priors and posteriors, with means and medians roughly between 73% and 78% for octopuses. On the other hand, your sentience-conditioned welfare ranges didn't differ too much between the fish, octopuses and bees.

It's worth pointing out that Luke Muehlhauser had significantly higher probabilities for rainbow trouts (70%, in the salmonid family like salmons) than Gazami crabs (20%) and fruit flies (25%), and you could use his prior for rainbow trouts for salmons and carps instead (or something in between). That being said, his probabilities were generated in different ways from yours, so that might introduce other biases. You could instead use your prior for octopuses (or something in between). Or, most consistent with your methodology, you could have the authors of the original estimates for RP just estimate these probabilities directly, with or without the data you gathered for salmons and carps. Any of these would be relatively small fixes.
As an aside, should we interpret this sentience probability work as not primarily refining your old estimates (since the posteriors and priors are very similar), but as adding other species and further modelling your uncertainty?
There may be some other smaller potential sources of bias that contributed here, but I don't expect them to have been that important:
- I'm guessing salmon and carp (and apparently zebrafish, which seem to often have been used when direct evidence wasn't available, maybe more for carp) are less well-studied than bees, so your conservative assumptions of assigning 0 to "unknown" for both probabilities of sentience and welfare ranges conditional on sentience may count more against them. For example, there were some studies found for "cognitive sophistication" for honeybees but not for salmons or carps, and more found for "mood state behaviors" for honeybees than salmons and carps in your new Sentience Table. For your Welfare Range Table, bees had fewer "unknowns" for cognitive proxies than salmons and carps, but more for hedonic proxies and representational proxies.
- One possible quick-ish fix would be to use a prior for the presence/absence of proxies across animals based on the ones for which there are studies (possibly just those you collected evidence for), although this may worsen other biases, like publication bias, and it requires you to decide how to weigh different animal species (but uniformly across those you collected evidence for is one way, although somewhat arbitrary).
- Another quick-ish fix could be to make more assumptions between species you gathered evidence for, e.g. if a fruit fly has some capacity, I'd expect fish to, as well, and if some mammal is missing some capacity, I'd expect salmon and carp to not have it either. This may be too strong, but you did use the crab sentience prior for the fish.
- Longer fixes could use more sophisticated missing data methods.
- You may have underestimated salmon and carp neuron counts by around 100x.
No, it's fine. I just shared in case you wanted to see older answers.
Also, because people have already answered here, I wouldn't recommend deletion.
Similar question from a few years ago:
https://www.lesswrong.com/posts/8Eo52cjzxcSP9CHea/why-do-you-reject-negative-utilitarianism
AFAIK, Benatar isn’t a negative utilitarian or a consequentialist of any kind. He is an antinatalist. He might also think death is bad.
(I'm not familiar with Perry.)
Also, some more philosophical analysis of approaches to decision-making under deep uncertainty in
Mogensen, A.L., Thorstad, D. Tough enough? Robust satisficing as a decision norm for long-term policy analysis. Synthese 200, 36 (2022). https://doi.org/10.1007/s11229-022-03566-5
This paper aims to open a dialogue between philosophers working in decision theory and operations researchers and engineers working on decision-making under deep uncertainty. Specifically, we assess the recommendation to follow a norm of robust satisficing when making decisions under deep uncertainty in the context of decision analyses that rely on the tools of Robust Decision-Making developed by Robert Lempert and colleagues at RAND. We discuss two challenges for robust satisficing: whether the norm might derive its plausibility from an implicit appeal to probabilistic representations of uncertainty of the kind that deep uncertainty is supposed to preclude; and whether there is adequate justification for adopting a satisficing norm, as opposed to an optimizing norm that is sensitive to considerations of robustness. We discuss decision-theoretic and voting-theoretic motivations for robust satisficing, and use these motivations to select among candidate formulations of the robust satisficing norm.
(and the same probabilities that not doing X leads to the opposite outcomes)
I'm not sure exactly what you mean by this, and I expect this will make it more complicated to think about than just giving utility differences with the counterfactual.
The idea of sensitivity to new information has been called credal resilience/credal fragility, but the problem I'm concerned with is having justified credences. I would often find it deeply unsatisfying (i.e. it seems unjustifiable) to represent my beliefs with a single probability distribution; I'd feel like I'm pulling numbers out of my ass, and I don't think we should base important decisions on such numbers. So, I'd often rather give ranges for my probabilities. You literally can give single distributions/precise probabilities, but it seems unjustifiable, overconfident and silly.
If you haven't already, I'd recommend reading the illustrative example here. I'd say it's not actually justifiable to assign precisely 50-50 in that case or in almost any realistic situation that actually matters, because:
- if you actually tried to build a model, it would be extraordinarily unlikely for you to get 50-50 unless you specifically pick your model parameters to get that result (which would be motivated reasoning and kind of defeat the purpose of building the model in the first place) or round the results, given that the evidence isn't symmetric and you'd have multiple continuous parameters.
- if you thought 50-50 was a good estimate before the evidential sweetening, then you can't use 50-50 after, even though it seems just as appropriate for it. Furthermore, if you would have used 50-50 if originally presented with the sweetened information, then your beliefs depend on the timing/order in which you become aware of evidence (say you just miscounted witnesses the first time), which should be irrelevant and is incompatible with Bayesian rationality (unless you have specific reasons for dependence on the timing/order).
For the same reasons, in almost any realistic situation that actually matters, Alice in your example could not justifiably get 50-50. And in general, you shouldn't get numbers with short exact decimal or fractional representations.
So, say in your example, it comes out 51.28... to 48.72..., but could have gone the other way under different reasonable parameter assignments; those are just the ones Alice happened to pick at that particular time. Maybe she also tells you it seems pretty arbitrary, and she could imagine having come up with the opposite conclusion and probabilities much further from 50-50 in each direction. And that she doesn't have a best guess, because, again, it seems too arbitrary.
How would you respond if there isn't enough time to investigate further, but you could instead support something that seems cost-effective without being so sensitive to pretty arbitrary parameter assignments, just not nearly as cost-effective as Alice's intervention or an intervention doing the opposite?
Also imagine Bob gets around 47-53, and agrees with Alice about the arbitrariness and reasonable ranges. Furthermore, you can't weigh Alice and Bob's distributions evenly, because Alice has slightly more experience as a researcher and/or a slightly better score in forecasting, so you should give her estimate more weight.
I think assigning 1/n typically depends on evidential symmetry (like simple cluelessness) or at least that the reasons all balance out precisely, so rules out complex cluelessness. Instead, we might have evidence for and against each possibility, but be unable to weigh it all without making very arbitrary assumptions, so we wouldn't be wiling to commit to the belief that A is more likely than B or vice versa or that they're equally likely. There's an illustrative example here.
Similarly, Brian Tomasik claimed, after looking into many different effects and considerations:
On balance, I'm extremely uncertain about the net impact of climate change on wild-animal suffering; my probabilities are basically 50% net good vs. 50% net bad when just considering animal suffering on Earth in the next few centuries (ignoring side effects on humanity's very long-term future).
But if he had built formal models with precise probabilities, it would almost certainly have come out with climate change bad in expectation or climate change good in expectation, rather than net neutral in expectation, and the expected impact could be (but wouldn’t necessarily be) very very large. But someone else with slightly different (but pretty arbitrary) precise probabilities could get the opposite sign and still huge expected impact. It would seem bad to bet a lot on one side if the sign and magnitude of the expected value is sensitive to arbitrarily chosen numbers.
Even if multiple people come up with different numbers and we want to weigh them, there's still a question of how exactly to weigh them given possibly different levels of relevant expertise and bias between them, so 1/n is probably wrong, but all other approaches to come up with single precise numbers are going to involve arbitrary parameters/weights.
Thanks for sharing this! I'm very much in favour of looking for more robustly positive interventions and reducing sensitivity to arbitrary beliefs or modeling decisions.
I think it’s worth pointing out that minimizing regret is maximally ambiguity averse, which seems controversial philosophically, and may mean giving up enormous upside potential or ignoring far more likely bad cases. In a sense, it’s fanatically worst case-focused. I suppose you could cut out the most unlikely worst cases (hence minimizing "plausible future regret"), but doing so is also arbitrary, and I don’t know how you would want to assess "plausibility".
I might instead make the weaker claim that we should rule out robustly dominated portfolios of interventions, but this might not rule out anything (even research and capacity building could end up being misused): https://forum.effectivealtruism.org/posts/Mig4y9Duu6pzuw3H4/hedging-against-deep-and-moral-uncertainty
CLR (https://longtermrisk.org/ ) works on multipolar scenarios, multi-agent systems and game theory, both technical problems and macrostrategy, prioritizing the reduction of conflicts that increase s-risks. The associated foundation https://www.cooperativeai.com/ supports work on similar problems.
For a technical paper on Pareto improvements, see https://link.springer.com/article/10.1007/s10458-022-09574-6
CLR and CRS (https://centerforreducingsuffering.org/ ) also have worked on risks from malevolent actors (https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors ). I'm not sure if s-risks from sadism or retributivism are being worked on, but they're discussed briefly here: https://centerforreducingsuffering.org/research/a-typology-of-s-risks/
I imagine the work often focuses on a small number of groups, but maybe it generalizes. I'm not aware of more concrete realistic models (rather than more toy models or models that aren't aiming to capture likely preferences and values), but I wouldn't be surprised if they exist. This isn't my area of focus, so I'm not that well-informed. I imagine AI safety/governance groups and especially CLR and CRS are thinking about this, but may not have built explicit models.
One interesting question this raises for me: how does this impact the cost-effectiveness of corporate chicken welfare campaigns in the EU? Would this have been likely to happen without those campaigns? If so, then the counterfactual we were trying to beat was much better than we had anticipated, so the impact of corporate campaigns could be much lower.
On the other hand, if corporate campaigns were instrumental for this, then we should probably treat all of the work and impacts (corporate and regulatory) together in assessing cost-effectiveness.
I think broilers were probably not caged at significant rates. Maybe just add "and adopting broiler reforms" or "the fastest growing broiler breeds", but that's getting long.
Maybe: EU Food Agency Recommends Chicken Welfare Reforms
The broiler reforms also seem pretty significant to me, so I probably wouldn't have singled out cages for egg-laying hens in the title. Also, my impression is that we haven't had much recorded progress with breed transitions for broilers (commitments, but not much actual follow-through), and this would force companies to transition, so this could be a bigger win for broilers than it is for egg-laying hens.
The reforms seem roughly in line with the Better Chicken Commitment, maybe somewhat faster growth rates (50g/day max vs 45-46, compared to 61 with conventional breeds): https://welfarefootprint.org/broilers/