Suffering in Animals vs. Humans 2017-08-15T00:03:26.096Z
Best way to invest with leverage? 2015-04-02T11:04:49.951Z
Can we set up a system for international donation trading? 2015-03-03T13:47:07.747Z
Counterfactual credit assignment 2013-07-24T04:00:08.000Z


Comment by Brian_Tomasik on Animal Welfare Fund: Ask us anything! · 2021-05-30T16:44:17.382Z · EA · GW

Great discussion. :)

I think one thing Brian might not have been aware of at the time is that many wild fishes are caught to feed farmed fishes, so fish farming might be good for reducing wild fish populations.

For whatever it's worth, I was aware of that at the time. :) I'm uncertain about the net impact of fish farming, but, as with most other farmed animals, I err on the side of thinking it's bad in expected value because it's bad for the farmed animals directly, and I'm fairly clueless about the indirect effects. For example, maybe reducing populations of small forage fish increases zooplankton populations. Or if the small forage fish are fished sustainably, then maybe fishing them just kills a bunch of them painfully without affecting their populations too much.

With things like crop cultivation, I'm also fairly uncertain. Some crop fields in the US Midwest have higher net primary productivity than native grassland, and in places like California, where there's a lot of irrigation, it seems pretty plausible that crop cultivation increases invertebrate populations.

That said, I tend to agree with Michael's thought that the indirect wild-animal impacts of diet may be more significant than many of the kinds of interventions that WAI could pull off because WAI-type interventions may not be focused on reducing numbers of wild animals, and without reducing numbers of wild animals, it's difficult for me to know if suffering is actually being reduced in light of cluelessness.

Comment by Brian_Tomasik on Small animals have enormous brains for their size · 2021-04-17T02:14:47.758Z · EA · GW

I think densities of mites in soil are typically in the range 10^3 to 10^5 per square meter. For example, see the Brady (1974) and Curl and Truelove (1986) numbers here.

In 2016, I used my microscope camera to look for dust mites around my own house during the summer, and I mainly only found them in areas with lots of accumulated skin flakes. Even in the flake patches, they didn't seem dramatically more densely concentrated than the mites I filmed in the soil outside my house. Of course, this is just one data point. (Also, maybe I could only see the biggest ones? But that would apply to both indoor and outdoor mites.)

Comment by Brian_Tomasik on Differences in the Intensity of Valenced Experience across Species · 2020-10-30T19:01:57.100Z · EA · GW

Thanks for these astoundingly detailed posts. :)

Just to clarify on this:

others have speculated that animals with simpler nervous systems have characteristically much more intense experiences than humans. For example in his blog post “Is Brain Size Morally Relevant?” Brian Tomasik explores the idea that “to a tiny brain, an experience activating just a few pain neurons could feel like the worst thing in the world from its point of view.”

I didn't intend to suggest that small brains have characteristically greater intensities, but just that it would take fewer pain neurons to achieve the same (subjectively relative) intensity as in a larger brain.

In my opinion, the best way to argue for giving more moral weight to larger brains is not that larger brains have more intense experiences but that we just care more about them because they're more complex. As an analogy, we might care more if a very large painting was destroyed than if a small one was, not because the large painting is more "intense" but just because there's more of it. So I would say that

intrinsic value = duration * intensity * (how much we care about the brain),

where the last factor can be based on its complexity. (BTW, I didn't read most of this post, so sorry if you already discussed such things.)
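The three-factor view above can be sketched as a toy calculation. This is a hypothetical illustration with made-up numbers; the function and variable names are my own, not from the original comment.

```python
# A minimal sketch of the proposed valuation:
#   intrinsic value = duration * intensity * (how much we care about the brain)
# The "care weight" factor stands in for how much weight we give a brain
# based on its complexity.

def intrinsic_value(duration: float, intensity: float, care_weight: float) -> float:
    """Moral value of an experience under the three-factor view."""
    return duration * intensity * care_weight

# Two experiences with the same duration and (subjectively relative)
# intensity, in brains we weight differently due to complexity:
complex_brain = intrinsic_value(duration=1.0, intensity=5.0, care_weight=10.0)
simple_brain = intrinsic_value(duration=1.0, intensity=5.0, care_weight=0.5)

print(complex_brain)  # 50.0
print(simple_brain)   # 2.5
```

Note that on this view the difference comes entirely from the care weight, not from any claimed difference in the intensity of the experiences themselves.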

Comment by Brian_Tomasik on "Disappointing Futures" Might Be As Important As Existential Risks · 2020-09-06T00:18:03.677Z · EA · GW

Ok. :) For that question, I might give slightly less than a 50% chance that human-inspired space colonization would create more suffering than happiness (where the numerical magnitudes of happiness and suffering are as judged by a typical classical utilitarian). I think the default should be around 50% because for a typical classical utilitarian, it seems unclear whether a random collection of minds contains more suffering or happiness. There are some scenarios in which a human-inspired future might either be relatively altruistic with wide moral circles or relatively egalitarian such that selfishness alone can produce a significant surplus of happiness over suffering. However, there are also many possible futures where a powerful few oppressively control a powerless many with little concern for their welfare. Such political systems were very common historically and are still widespread today. And there may also be situations analogous to animal suffering of today in which most of the sentience that exists goes largely ignored.

The expected value of human-inspired space colonization may be less symmetric than this because it may be dominated by a few low-probability scenarios in which the future is very good or very bad, with very good futures plausibly being more likely.

Comment by Brian_Tomasik on "Disappointing Futures" Might Be As Important As Existential Risks · 2020-09-04T21:44:31.374Z · EA · GW

Nice post. :) My question "Human-inspired colonization of space will cause net suffering if it happens" that I, Pablo, and you answered was worded poorly. I later rewrote it to be clearer: "Human-inspired colonization of space will cause more suffering than it prevents if it happens". As he explains in his post, Pablo (a classical utilitarian) interpreted my original wording to refer to the net balance of happiness minus suffering, while I (a negative utilitarian) meant merely the net balance of suffering. Which way did you read it?

While Pablo gave 1% probability of more suffering than happiness, he gave 99% probability that suffering itself would increase, saying: "But maybe Brian meant that colonization will cause a surplus of suffering relative to the amount present before colonization. I think this is virtually certain; I’d give it a 99% chance."

Comment by Brian_Tomasik on Physical theories of consciousness reduce to panpsychism · 2020-05-07T12:02:59.018Z · EA · GW

Cool post. :) I'm not sure if I understand the argument correctly, but what would you say to someone who cites the "fallacy of division"? For example, even though recurrent processes are made of feedforward ones, that doesn't mean the purported consciousness of the recurrent processes also applies to the feedforward parts. My guess is that you'd reply that wholes can sometimes be different from the sum of their parts, but in these cases, there's no reason to think there's a discontinuity anywhere, i.e., no reason to think there's a difference in kind rather than degree as the parts are arranged.

Consider a table made of five pieces of wood: four legs and a top. Suppose we create the table just by stacking the top on the four legs, without any nails or glue, to keep things simple. Is the difference between the table versus an individual piece of wood a difference in degree or kind? I'm personally not sure, but I think many people would call it a difference in kind.

I think an alternate route to panpsychism is to argue that the electron has not just information integration but also the other properties you mentioned. It has "recurrent processing" because it can influence something else in its environment (say, a neighboring electron), which can then influence the original electron. We can get higher-order levels by looking at one electron influencing another, which influences another, and so on. The thing about Y predicting X would apply to electrons as well as neurons.

The table analogy to this argument is to note that an individual piece of wood has many of the same properties as a table: you can put things on it, eat food from it, move it around your house as furniture, knock on it to make noise, etc.

Comment by Brian_Tomasik on How good is The Humane League compared to the Against Malaria Foundation? · 2020-05-03T19:02:58.036Z · EA · GW

Good points. :) That post of mine isn't really about the mosquitoes themselves but more about the impacts that a larger human population would have on invertebrates (assuming AMF does increase the size of the human population, which is a question I also mention briefly).

Comment by Brian_Tomasik on Should Longtermists Mostly Think About Animals? · 2020-02-08T02:50:32.217Z · EA · GW

Thanks for this detailed post!

My guess would be that Greaves and MacAskill focus on the "10 billion humans, lasting a long time" scenario just to make their argument maximally conservative, rather than because they actually think that's the right scenario to focus on? I haven't read their paper, but on brief skimming I noticed that the paragraph at the bottom of page 5 talks about ways in which they're being super conservative with that scenario.

Assuming that the goal is just to be maximally conservative while still arguing for longtermism, adding the animal component, while reasonable in itself, doesn't serve that purpose. As an analogy, imagine someone who denies that any non-humans have moral value. You might start by pointing to other primates or maybe dolphins. Someone could come along and say "Actually, chickens are also quite sentient and are far more numerous than non-human primates", which is true, but it's slightly harder to convince a skeptic that chickens matter than that chimpanzees matter.

such as human’s high brain to body mass ratio

One might also care about total brain size because in bigger brains, there's more stuff going on (and sometimes more sophisticated stuff going on). As an example, imagine that you morally value corporations, and you think the most important part of a corporation is its strategic management (rather than the on-the-ground employees). You may indeed care more about corporations that have a greater ratio of strategic managers to total employees. But you may also care about corporations that have just more total strategic managers, especially since larger companies may be able to pull off more complex analyses that smaller ones lack the resources to do.

Comment by Brian_Tomasik on How Much Leverage Should Altruists Use? · 2020-01-13T00:49:17.315Z · EA · GW

That seems to be a common view, but I haven't yet been able to find any reason why that would be the case, except insofar as rebalancing frequency affects how leveraged you are. I discussed the topic a bit here. Maybe someone who knows more about the issue can correct me.

Comment by Brian_Tomasik on How Much Leverage Should Altruists Use? · 2020-01-09T23:52:33.617Z · EA · GW

Good point. I think such a fund would want to be very clear that it's not for the faint of heart and that it's done in the spirit of trying new risky things. If that message was front and center, I expect the backlash would be less.

Comment by Brian_Tomasik on How Much Leverage Should Altruists Use? · 2020-01-09T22:57:49.810Z · EA · GW

Thanks! From my reading of the post, that critique is not really specific to leveraged ETFs? Volatility drag is inherent to leverage in general (and even to non-leveraged investing to a smaller degree).
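The volatility-drag point can be shown with a toy simulation. This is my own illustration with made-up return numbers, not anything from the post being discussed: a daily-rebalanced leveraged position multiplies each day's return by the leverage factor, so a down-then-flat round trip loses more the higher the leverage.

```python
# Volatility drag: compound a sequence of daily returns at a fixed
# leverage ratio (rebalanced daily, as leveraged ETFs do).

def compound(daily_returns, leverage=1.0):
    wealth = 1.0
    for r in daily_returns:
        wealth *= 1.0 + leverage * r
    return wealth

# The index drops 10%, then rises ~11.1%, ending exactly flat.
returns = [-0.10, 0.10 / 0.90]

print(compound(returns, leverage=1.0))  # ~1.0: unleveraged position is flat
print(compound(returns, leverage=2.0))  # < 1.0: the 2x position loses money
print(compound(returns, leverage=3.0))  # still lower: drag grows with leverage
```

The same mechanism applies to any leveraged position that maintains a constant leverage ratio, which is why the drag isn't unique to leveraged ETFs.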

He says: "In my next post, I’m going to dive into more detail on what is to distinguish between good and bad uses of leverage." So I found his next post on leverage, which coincidentally is one mentioned in the OP: "The Line Between Aggressive and Crazy". There he clarifies why he doesn't like leveraged ETFs:

From this we start to see the problem with levered ETFs as they are currently constructed: they generally use too much leverage applied to too volatile of assets. Even with the plain vanilla S&P 500 3x leverage is too much. And after accounting for the hefty transactions costs and management fees these ETFs charge, even 2x might be suboptimal (especially if you believe returns will be lower in the future than they have in recent decades). And the S&P 500 is one of the most conservative targets for these products. Take a look at the websites of levered ETF providers and you will see ways to make levered bets on particular industries like biotech or the energy sector, or on commodities like oil and gold, or for more esoteric instruments yet, almost all of which are more volatile than a broadly diversified index like the S&P 500, and thus supporting much lower Kelly leverage ratios, probably less than 2x.

So unless transaction costs are a dealbreaker, it seems like he's mainly opposed to the fact that most leveraged ETFs use too much leverage for their level of volatility (relative to the Kelly Criterion, which assumes logarithmic utility of wealth), not that the instrument itself is flawed? Of course, leveraged ETFs implement a "constant leverage" strategy, and later in that post, Davis proposes adjusting the leverage ratio dynamically (which I agree is better, though it requires more work).
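The Kelly comparison in the quote can be made concrete with the standard continuous-time approximation, f* = (mu - r) / sigma^2, which maximizes expected log wealth. The sample return and volatility figures below are illustrative assumptions of mine, not numbers from Davis's post.

```python
# Kelly-optimal leverage ratio under log utility of wealth,
# using the continuous-time approximation f* = (mu - r) / sigma^2.

def kelly_leverage(expected_return: float, risk_free_rate: float, volatility: float) -> float:
    excess = expected_return - risk_free_rate
    return excess / volatility ** 2

# A broad index: e.g., 5% excess return at ~16% annualized volatility.
print(round(kelly_leverage(0.07, 0.02, 0.16), 2))  # 1.95

# A volatile sector bet with the same excess return at ~30% volatility.
print(round(kelly_leverage(0.07, 0.02, 0.30), 2))  # 0.56
```

This matches the quote's point: doubling the volatility more than quadruples the penalty in the denominator, so the more volatile targets of many leveraged ETFs support far lower optimal leverage, often below 2x or even 1x.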

Comment by Brian_Tomasik on How Much Leverage Should Altruists Use? · 2020-01-08T00:08:20.065Z · EA · GW

Leveraged ETFs are one way to keep your leverage ratio from blowing up, without any investor effort.

Keeping all the considerations in this post in mind seems very difficult, so perhaps the ideal solution would be if there were an institution to do it for individuals, such as EA Funds or something like it. You could donate to the fund and let them adjust leverage, correlation with other donors to the same cause, and everything else on your behalf.

Comment by Brian_Tomasik on What ever happened to PETRL (People for the Ethical Treatment of Reinforcement Learners)? · 2020-01-01T18:14:15.224Z · EA · GW

PETRL was (to my knowledge) the only organization focused on the ethics of AI-qua-moral patient

There seems to be a lot of academic and popular discussion about robot rights and machine consciousness, but yeah, I can't name offhand another organization explicitly focused on this topic. (To some degree, Sentience Institute has this as a long-run goal, and many organizations care about it as part of what they work on.)

There's a spoof organization called People for Ethical Treatment of Robots.

Update: I see there's another organization: American Society for the Prevention of Cruelty to Robots. On the FAQ page they say:

Q: Are you serious?

A: The ASPCR is, and will continue to be, exactly as serious as robots are sentient.

Comment by Brian_Tomasik on Ethical offsetting is antithetical to EA · 2019-09-19T21:52:54.049Z · EA · GW

A problem is that different people have different views on what's most effective. If most people are quasi-egoists, then for them, spending money on themselves or their families is "the most effective charity" they can give to. Or even within the realm of what's normally understood to be charity, people might donate to their local church or arts center. Relative to their values, this might be the best charity to give to.

Comment by Brian_Tomasik on Ethical offsetting is antithetical to EA · 2019-09-19T21:32:24.209Z · EA · GW

I think offsetting makes sense when seen as a form of moral trade with other people (or even possibly other factions within your own brain's moral parliament).

Regarding objection #1 about reference classes, the answer can be that you can choose a reference class that's acceptable to your trading partner. For example, suppose you do something that makes the global poor slightly worse off. Suppose that a large faction of society doesn't care much about non-human animals but does care about the global poor. Then donating to an animal charity wouldn't offset this harm in their eyes, but donating to a developing-world charity would.

Regarding objection #2, trade by its nature involves spending resources on things that you think are suboptimal because someone else wants you to.

An objection to this perspective can be that in most offsetting situations, the trading partner isn't paying enough attention or caring enough to actually reciprocate with you in ways that make the trade positive-sum for both sides. (For trade within your own brain, reciprocation seems more likely.)

Comment by Brian_Tomasik on Ethical offsetting is antithetical to EA · 2019-09-19T19:25:07.196Z · EA · GW

Good point.

this is best achieved not through offsetting, but by thinking about who we will want to cooperate with and trying to help their values as cost-effectively as possible.

Couldn't one argue that offsetting harms that people outside EA care about counts as cooperating with mainstream people to some degree? In practice the way this often works is by improved public relations or general trustworthiness, rather than via explicit tit for tat. Anyway, whether this is worthwhile depends on how costly the offsets are (in terms of money and time) relative to the benefits.

Comment by Brian_Tomasik on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-08T15:43:46.603Z · EA · GW

I see here it says: "Aside from mass voting, you can vote using any other criteria you choose." Presumably some people use votes to express dislike rather than to rate the quality of the comment.

(I expect most of us are guilty of this to some extent. I don't downvote comments merely because I disagree, but I upvote more often on comments with which I do agree...)

Comment by Brian_Tomasik on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-08T15:25:23.898Z · EA · GW

I think the idea is that even a pure utilitarian should care about contractarian-style thinking for almost any practical scenario, even if there are some thought experiments where that's not the case.

Comment by Brian_Tomasik on Interview with Michael Tye about invertebrate consciousness · 2019-08-09T10:07:22.937Z · EA · GW

Congrats on all these great interviews!

There is nothing in the behavior of the nematode worm that indicates the presence of consciousness. It is a simple stimulus-response system without any flexibility in its behavior.

There are numerous papers on learning in C. elegans. Rankin (2004):

Until 1990, no one investigated the possibility that C. elegans might show behavioral plasticity and be able to learn from experience. This has changed dramatically over the last 14 years! Now, instead of asking “what can a worm learn?” it might be better to ask “what cannot a worm learn?” [...]

C. elegans has a remarkable ability to learn about its environment and to alter its behavior as a result of its experience. In every area where people have looked for plasticity they have found it.

Comment by Brian_Tomasik on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-09T23:44:12.743Z · EA · GW

Interesting. :)

giving greater weight to worse states is basically asserting that they are mistaken for not being more risk-averse.

I was thinking that it's not just a matter of risk aversion, because regular utilitarianism would also favor helping the person with a terrible life if doing so were cheap enough. The perverse behavior of the ex ante view in my example comes from the concave f function.

Comment by Brian_Tomasik on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-09T03:03:16.046Z · EA · GW

We could be wrong either way.

Good point, but I feel like ex post prioritarianism does the allocation better, by being risk-averse (even though this is what Ord criticizes about it in the 2015 paper you cited in the OP). Imagine that someone has a 1/3^^^3 probability of 3^^^^3 utility. Ex ante prioritarianism says the expected utility is so enormous that there's no need to benefit this person at all, even if doing so would be almost costless. Suppose that with probability 1 - 1/3^^^3, this person has a painful congenital disease, grows up in poverty, is captured and tortured for years on end, and dies at age 25. Ex ante prioritarianism (say with a sqrt or log function for f) says that if we could spend $0.01 to prevent all of that suffering, we needn't bother because other uses of the money would be more cost-effective, even though it's basically guaranteed that this person's life will be nothing but horrible. Ex post prioritarianism gets what I consider the right answer because the reduction of torment is not buried into nothingness by the f function: the expected-value calculation weighs two different scenarios, to each of which the f function is applied separately.

I guess an ex ante supporter could say that if someone chooses the 1/3^^^3 gamble and it doesn't work out, that's the price you pay for taking the risk. But that stance feels pretty harsh.
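The structure of this example can be shown with much smaller stand-in numbers (Brian's 3^^^3 figures are far too large to represent; everything below is my own illustrative toy model). The point is that ex ante prioritarianism applies the concave f to the expected wellbeing, so a long-shot jackpot swamps the near-certain bad outcome, while ex post prioritarianism applies f within each scenario first.

```python
import math

P_JACKPOT = 1e-9      # probability the gamble pays off
JACKPOT = 1e18        # wellbeing if it does
BAD = 1.0             # wellbeing in the near-certain bad outcome
HELPED = 4.0          # the bad outcome after a cheap intervention

def f(w):
    """A concave priority weighting; sqrt as one of the standard choices."""
    return math.sqrt(w)

def ex_ante_value(bad_outcome):
    # f applied to the expectation across scenarios
    expected_wellbeing = P_JACKPOT * JACKPOT + (1 - P_JACKPOT) * bad_outcome
    return f(expected_wellbeing)

def ex_post_value(bad_outcome):
    # expectation of f applied within each scenario
    return P_JACKPOT * f(JACKPOT) + (1 - P_JACKPOT) * f(bad_outcome)

# Ex ante: the jackpot's huge expected value makes the intervention look negligible.
print(ex_ante_value(HELPED) - ex_ante_value(BAD))   # tiny, on the order of 5e-5
# Ex post: the near-certain bad outcome is weighted on its own, so helping matters.
print(ex_post_value(HELPED) - ex_post_value(BAD))   # ~1.0
```

Under these toy numbers, the same $0.01-style intervention registers as roughly 20,000 times more valuable on the ex post calculation than on the ex ante one.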

Comment by Brian_Tomasik on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-08T19:52:40.997Z · EA · GW

I meant the subjective probabilities of the person using the ethical system ("you") applied to everyone, not using their own subjective probabilities.

I see. :) It seems like we'd still have the same problem as I mentioned. For example, I might think that currently elderly people signed up for cryonics have very high expected lifetime utility relative to those who aren't signed up because of the possibility of being revived (assuming positive revival futures outweigh negative ones), so helping currently elderly people signed up for cryonics is relatively unimportant. But then suppose it turns out that cryopreserved people are never actually revived.

(This example is overly simplistic, but the point is that you can get similar scenarios as my original one while still having "reasonable" beliefs about the world.)

Comment by Brian_Tomasik on An Argument for Why the Future May Be Good · 2019-07-08T19:41:41.722Z · EA · GW

I think maybe what I had in mind with my original comment was something like: "There's a high probability (maybe >80%?) that the future will be very alien relative to our values, and it's pretty unclear whether alien futures will be net positive or negative (say 50% for each), so there's a moderate probability that the future will be net negative: namely, at least 80% * 50%." This is a statement about P(future is positive), but probably what you had in mind was the expected value of the future, counting the IMO unlikely scenarios where human-like values persist. Relative to values of many people on this forum, that expected value does seem plausibly positive, though there are many scenarios where the future could be strongly and not just weakly negative. (Relative to my values, almost any scenario where space is colonized is likely negative.)

Comment by Brian_Tomasik on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-08T00:01:35.113Z · EA · GW

your own subjective probability distribution to be used

Would that penalize people who hold optimistic beliefs? Their expected utilities would often be pretty high, so it'd be less important to help them. As an extreme example, someone who expects to spend eternity in heaven would already be so well off that it would be pointless to help him/her, relative to helping an atheist who expects to die at age 75. That's true even if the believer in heaven gets a terminal disease at age 20 and dies with no afterlife.

Comment by Brian_Tomasik on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-06T19:31:20.118Z · EA · GW

Thought experiments like these are why I regard personal identity, and any moral theories that depend on it, as non-starters (including versions of prioritarianism that consider lifetime wellbeing collectively). I think it's best to think either in terms of empty individualism or open individualism. Empty individualism tends to favor suffering-focused views because any given moment of unbearable suffering can't be compensated by other moments of pleasure even within what we normally call the same individual, because the pleasure is actually experienced by a different individual. Open individualism tends to undercut suffering-focused intuitions by saying that torturing one person for the happiness of a billion others is no different than one person experiencing pain for later pleasure.

As others have pointed out before, it is legitimate to try to salvage some ethical concern for personal identity despite the paradoxes. By analogy, the idea of consciousness has many paradoxes, but I still try to salvage it for my ethical reasoning. Neither personal identity nor consciousness "actually exists" in any deep ontological sense, but we can still care about them. It's just that I happen not to care ethically about personal identity.

Comment by Brian_Tomasik on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-05T17:48:05.119Z · EA · GW

Interesting ideas. :)

If I understand the view correctly, it would say that a world where everyone has a 49.99% chance of experiencing pain with utility of -10^1000 and a 50.01% chance of experiencing pleasure with utility of 10^1000 is fine, but as soon as anyone's probability of the pain goes above 50%, things start to become very worrisome (assuming the prioritarian weighting function cares a lot more about negative than positive values)? This is despite the fact that in terms of realized outcomes, the difference between one person having 49.99% chance of the pain vs 50.01% is pretty minimal.

What probability distribution are the expectations taken with respect to? If you were God and knew everything that would happen, there would be no uncertainty (except maybe due to quantum randomness depending on one's view about that). If there's no randomness, I think ex ante prioritarianism collapses to regular prioritarianism.

One issue is how you decide whether a given person exists in a given history or not. For example, if I had been born with a different hair color, would I be the same person? Maybe. How about a different personality? At what point do "I" stop existing and someone else starts existing? I guess similar issues bedevil the question of whether a person stays the same person over time, though there we can also use spatiotemporal continuity to help maintain personal identity.

Comment by Brian_Tomasik on Insect herbivores, life history and wild animal welfare · 2019-07-05T15:17:16.934Z · EA · GW

That shrew thing is fascinating!

would you also claim that species with slower metabolism have less lived experience than those with faster metabolism

Yeah, as an initial hypothesis I would guess that faster brain metabolism often means that more total information processing is occurring, although this rule isn't perfect because the amount of information processing per unit of energy used can vary. Also, the sentience or "amount of experience" of a brain needn't be strictly proportional to information processing.

In 2016 I wrote some amateur speculations on this idea, citing the Healy et al. (2013) paper.

Comment by Brian_Tomasik on How Much Do Wild Animals Suffer? A Foundational Result on the Question is Wrong. · 2019-07-05T02:07:45.526Z · EA · GW

You're right that communication on this topic hasn't always been the most clear. :)

This section of my reply to Michael Plant helps explain my view on those questions. I think assessments of the intensities of pain and pleasure necessarily involve significant normative judgment calls, unless you define pain and pleasure in a sufficiently concrete way that it becomes a factual matter. (But that begs the question of what concrete definition is the right one to choose.)

I guess most people who aim to quantify pleasure and pain don't choose numbers such that unbearable suffering outweighs any amount of pleasure, so the statement you quoted could be said to be mainly about my negative-utilitarian values (though I would say that a view that pleasure can outweigh unbearable suffering is ultimately a statement about someone's non-negative-utilitarian values).

Comment by Brian_Tomasik on How Much Do Wild Animals Suffer? A Foundational Result on the Question is Wrong. · 2019-06-25T13:56:52.036Z · EA · GW

Congrats on fixing the error!

When I first discussed Ng (1995)'s mathematical proof with some friends in 2006, they said they didn't find it very convincing because it's too speculative and not very biologically realistic. Other people since then have said the same, and I agree. I've cited it on occasion, but I've never considered the mathematical result of that particular model to be more than an extremely weak argument for the predominance of suffering.

I think the intuition underlying the argument -- that most offspring die not long after birth -- is one of the reasons many people believe wild-animal suffering predominates. It certainly might be the case that this intuition is misguided, such as based on what you said: "when the probability of suffering increases, the severity of suffering should decrease." I have an article that also discusses theoretical reasons why shorter-lived animals and animals who are less likely to ever reproduce may not feel as much pain or fear as we would from the same kinds of injuries.

While I think these kinds of arguments are interesting, I give them relatively low epistemic weight because they're so theoretical. I think the best way to assess the net hedonic balance of wild animals is to watch videos and read about their lives, seeing what kinds of emotions they display, and then come up with our own subjective opinions about how much pain and pleasure they feel. This method is biased by anthropomorphism, but it's at least somewhat more anchored to reality than simple theoretical models. We could try to combat anthropomorphism a bit by learning more about how other animals make tradeoffs between combinations of bad and good things, and so on.

For me, it will always remain obvious that suffering dominates in nature because I believe extreme, unbearable suffering can't be outweighed by other organism-moments experiencing pleasure. In general, I think most of the disagreement about nature's net hedonic balance comes down to differences in moral values rather than disagreements about facts. But yes, it remains useful to improve our frameworks for thinking about this topic, as you're helping to do. :)

Comment by Brian_Tomasik on Why we have over-rated Cool Earth · 2019-06-22T04:07:09.440Z · EA · GW

Good points. I mentioned Cool Earth specifically here, with a tentative calculation suggesting that even if greenhouse-gas emissions increase wild-animal populations (and it's not clear that they do), preserving rainforest to sequester CO2 probably increases wild-animal populations even more.

Comment by Brian_Tomasik on Insect herbivores, life history and wild animal welfare · 2019-06-21T12:37:43.045Z · EA · GW

the mechanisms of diapause are quite variable even within species groups (e.g., Hand et al. 2016).

Interesting. :) When I said "I tend to assume that insects in diapause have relatively little subjective experience", I had in mind the prototypical case of diapause where metabolism dramatically decreases.

I see that Hand et al. (2016) make the point that diapause doesn't always imply reduced metabolism: "Diapause [...] may or may not involve a substantial depression of metabolism" and "Diapause [...] depending on the species, can also be accompanied by depression of metabolism, essential for conserving energy reserves."

When I was reading about diapause, most of the sources suggested that metabolism was reduced, so I assumed that was the usual case. For example: "During diapause an insect's metabolic rate drops to one tenth or less".

Comment by Brian_Tomasik on Insect herbivores, life history and wild animal welfare · 2019-06-21T11:35:36.259Z · EA · GW

Thanks for the further insights. :)

I wasn't very clear about the phrase "adult lifespan", which I was probably using incorrectly. What I had in mind was "average lifespan only counting individuals who survive to adulthood", which I think is similar if not the same as what you had in mind.

Life expectancy at birth may vary a lot, but I think it'd be interesting to see some example numbers to get a sense of the diversity, similar to how you gave lots of other sample numbers for other metrics. I assume one could compute it from survivorship curves. (This is just a general point for future work that people might do. You've already gathered a huge amount of info here, and I don't mean to request even more. :) )
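To make the survivorship-curve idea concrete, here's a minimal sketch of how one might compute life expectancy at birth from such a curve. The survivorship numbers are made up for illustration (a concave-upward curve with heavy early mortality), not taken from any real species:

```python
# Rough sketch: estimating life expectancy at birth from a survivorship
# curve. The numbers below are made up for illustration; survivorship[i]
# is the fraction of a cohort still alive at the start of day i.
survivorship = [1.0, 0.5, 0.25, 0.12, 0.06, 0.03, 0.0]

# Fraction of the cohort dying between day i and day i+1:
deaths = [survivorship[i] - survivorship[i + 1]
          for i in range(len(survivorship) - 1)]

# Assume individuals dying in interval i die at its midpoint (i + 0.5 days).
life_expectancy = sum(d * (i + 0.5) for i, d in enumerate(deaths))
print(life_expectancy)  # ~1.46 days
```

With a concave-upward curve like this, the mean age at death comes out far below the maximum lifespan, which is the point made below about most deaths occurring quite early.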

A species that lives in a cool climate does not necessarily have an average experienced daily temperature that is less than a species in a warmer climate, except for really extreme cases

My comment was partly inspired by this quote from your piece: "Species from cool temperate regions tend to have longer life cycles with about one generation per year (e.g., Danks and Foottit 1989), as do species living in areas that have a dry season. But we note that for many of these species, variable environmental conditions determine how many generations there are per year, and in addition, the overwintering generation will have a longer lifespan than growing season generations." I didn't read the source articles, but I was guessing that when species have longer lifespans due to cold or dry conditions, they presumably have to slow down metabolically during those unfavorable periods. And metabolic slowdown presumably means that activity by the nervous system slows down too.

I tried Googling about that and stumbled on Huestis et al. (2012). The authors expected mosquitoes to reduce metabolic rate during aestivation like happens for insects during winter diapause, but resting metabolic rate was actually higher during the late dry season. "The high ambient temperatures during the Sahelian dry season may prevent or limit a reduction in metabolic rate even if it would be adaptive."

Still, it does seem true that insects experiencing cooler temperatures typically slow down metabolism (with your point taken that one has to consider microclimatic temperature). So I guess my point here reduces to the previous point about how winter-diapausing insects (as well as those experiencing reduced temperatures even not in diapause) plausibly matter less per unit time, in proportion to the extent of slowdown (leaving room for lots of exceptions and diversity depending on the details).

Comment by Brian_Tomasik on Insect herbivores, life history and wild animal welfare · 2019-06-16T23:19:01.886Z · EA · GW

Do you think it would be different for detritivores compared with herbivores? Given that many plants aren't significantly consumed by animals, it seems there is often food in existence for herbivores to eat. In contrast, almost all decomposing organic matter will eventually be eaten by someone or other, so that food source could run out (or, if food doesn't run out, then maybe water does during dry periods). That said, maybe insect decomposers are still limited in number by factors like predators and parasitoids, and it's the other decomposers (bacteria, fungi, etc.) who mainly face the resource limits.

Comment by Brian_Tomasik on Insect herbivores, life history and wild animal welfare · 2019-06-15T03:52:25.233Z · EA · GW

There's tons of useful info in this piece. :)

I take it that your "Life span" section refers to adult lifespans? For example, the statement that "Overall, very short lifespans (less than 20 days) seem fairly rare" refers to reaching maturity in less than 20 days? Do you have estimates for life expectancy at birth (maybe ignoring egg mortality, assuming eggs aren't sufficiently sentient to warrant concern)? Your sections on "Predators" and "Parasitoids" gave some point estimates based on when predation and inoculation by parasitoids often occur. Maybe those are reasonable approximations for life expectancy at birth. On the other hand, isn't survivorship almost always "concave upward", with most deaths occurring quite early? This figure is one random example, showing that most of the insects are dead before the second instar. And because of the concave-upward shape, the average age of death should be pretty young.

extended longevity associated with extended or repeated diapause

I tend to assume that insects in diapause have relatively little subjective experience, such that those periods of time "don't count" very much if we're using lifespan as a measure of how long the animal experiences pleasure and pain. Of course, if the insect is minimally sentient during that time, then maybe deaths occurring during that time aren't that bad.

Extending this idea, it seems plausible that ectotherms that mature slowly in cool climates have less sentience and less hedonic experience per day than those in warm climates, because biological activity is generally slowed down in cool climates. So maybe the difference in total amount of life experiences is less than one might assume between longer-lived slow-developing insects in high latitudes vs fast-developing insects at low latitudes.

Dung beetles species had the lowest lifetime fecundity (~2 offspring), while mayflies had the largest (~4000 offspring).

If we imagine only two species of insect -- one with lifetime fecundity of 2 and one with 4000 -- and if each species has equal numbers of egg-laying mothers, then the ratio of (total offspring)/(total mothers) will still be very high: (2 + 4000)/(1 + 1) = 2001. When we make assessments about the net hedonic balance of an entire ecosystem containing multiple species, it's this average value that seems most relevant. (Of course, this number is only one heuristic. A full evaluation has to consider the sentience of each organism, the cause of death, lifespan, etc.)
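The averaging heuristic above can be sketched in a few lines, using the two example fecundity values from the quoted passage and the (illustrative) assumption of equal numbers of egg-laying mothers per species:

```python
# Sketch of the averaging heuristic: with equal numbers of mothers per
# species, the ecosystem-wide offspring-per-mother ratio is dominated
# by the most fecund species. Fecundities are the two example values
# from the quoted passage (dung beetles vs. mayflies).
fecundities = [2, 4000]   # offspring per mother, one entry per species
mothers_per_species = 1   # illustrative: equal numbers of mothers

total_offspring = sum(f * mothers_per_species for f in fecundities)
total_mothers = mothers_per_species * len(fecundities)
print(total_offspring / total_mothers)  # 2001.0
```

The design point is that the ratio is an arithmetic mean over offspring, so a single high-fecundity species swamps many low-fecundity ones.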

Comment by Brian_Tomasik on A vision for anthropocentrism to supplant wild animal suffering · 2019-06-07T23:57:15.923Z · EA · GW

Interesting info. :)

Jacy has argued that farm-animal suffering is a closer analogy to most far-future suffering than wild-animal suffering, and I largely agree with his arguments, although he and I both believe that some concern for naturogenic suffering is an important part of a "moral-circle-expansion portfolio", especially if events within some large simulations fall mainly into the "naturogenic" moral category. There could also be explicit nature simulations run for reasons of intrinsic/aesthetic value or entertainment.

I agree that terraforming and directed panspermia, if they occur at all, will be relatively brief preludes to a much larger and longer artificial future. A main reason I mention terraforming and directed panspermia at all is because they're less speculative/weird, and there's already a fair amount of discussion about them. But as I said here: "in the long run, it seems likely that most Earth-originating agents will be artificial: robots and other artificial intelligences (AIs). [...] we should expect that digital, not biological, minds will dominate in the future, barring unforeseen technical difficulties or extreme bio-nostalgic preferences on the part of the colonizers."

Then we can have a reasonable expectation that quality of life will be positive, as people will have plenty of contact and responsibility for other organisms.

...only if (1) concern for the experienced welfare (rather than, say, autonomy) of animals increases significantly from where it is now (including for invertebrates, who hold the majority of the neurons) and (2) such concern doesn't later decrease. Neither of these assumptions is obvious. Personally I find it probable that moral concern for the suffering of animal-like creatures, like most human values, will be a distant memory within 5000 years, for similar reasons as worship of the ancient-Egyptian deities is a distant memory today.

Comment by Brian_Tomasik on What is the current best estimate of the cumulative elasticity of chicken? · 2019-05-17T20:20:35.694Z · EA · GW

Are there goods that economists think do work like what my friend is describing?

Relative to Econ 101 models, that would only happen if supply is perfectly inelastic (i.e., the supply curve is vertical).

Edited to add: ...or if demand is perfectly elastic (i.e., the demand curve is horizontal). Given how much people like eating meat, this seems very implausible.
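These two limiting cases fall out of the standard Econ 101 approximation for how a marginal reduction in demand affects production. The formula and elasticity values below are a hedged sketch of that textbook approximation, not something stated in the thread:

```python
# Econ-101 approximation (textbook, not from the thread): if a consumer
# forgoes one unit of a good, production falls by roughly
# e_s / (e_s + |e_d|) units, where e_s is the elasticity of supply and
# |e_d| the absolute elasticity of demand. Elasticity values below are
# purely illustrative.

def production_drop_fraction(supply_elasticity, demand_elasticity_abs):
    return supply_elasticity / (supply_elasticity + demand_elasticity_abs)

# Typical case: both elasticities finite, so skipping one unit of meat
# reduces production by some fraction of a unit.
print(production_drop_fraction(2.0, 1.0))  # 0.666... of a unit

# Perfectly inelastic supply (vertical supply curve): production
# doesn't change at all, matching the friend's claim.
print(production_drop_fraction(0.0, 1.0))  # 0.0
```

As `|e_d|` grows without bound (perfectly elastic demand, a horizontal demand curve), the fraction also goes to zero, which is the second limiting case noted above.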

Comment by Brian_Tomasik on Thoughts on the welfare of farmed insects · 2019-05-13T14:25:41.185Z · EA · GW

I think most of the experts Max has in mind are talking about phenomenal consciousness. One example is Max's recent interview with Jon Mallatt.

My own view is that there is no sharp line separating "phenomenal consciousness" from mere cognitive abilities, though certainly some types of mental abilities tug at our moral heartstrings more than others, and it is a nontrivial question what degree of the heartstring-tugging abilities insects have.

Comment by Brian_Tomasik on Thoughts on the welfare of farmed insects · 2019-05-09T12:41:40.211Z · EA · GW

It seems like most articles on the subject claim higher efficiency

Yeah. :) I was just offering one more data point. In the Table 3 screenshot in the link I gave above, it's carp rather than chicken that are most competitive with crickets in terms of feed conversion.

the Wikipedia page seems pretty sceptical about freezing as a method of killing

I wrote that page, so it's not an independent source :) (although the citations within it are).

wouldn't make sense for the nervous system to send "avoid this" messages to the animal while the animal wasn't able to avoid the situation

It could still make sense in terms of creating a bad experience that makes the animal try harder to avoid such a situation next time (if there is a next time).

Comment by Brian_Tomasik on Thoughts on the welfare of farmed insects · 2019-05-09T05:23:49.780Z · EA · GW

Great post!

My general position is that I expect insect farming to be even worse ethically than the factory farming of larger animals.

In expectation I agree, except maybe for the farming of chickens or small fish, which might be competitive with cricket and mealworm farming in terms of (sentience per animal)*(number of animals).

most insect farming operations feed crops to insects.

Yes, except that some operations raise insects to feed to vertebrate farm animals rather than to humans. (So much for displacing other types of factory farming...)

The conversion ratio of crop to insect meat is much better than it is for [other] types of meat

Lundy and Parrella (2015) say that farmed crickets had "little or no [protein conversion efficiency] PCE improvement compared to chicken".

no good evidence that they should be a less painful way of killing these animals

I don't recall if I've ever seen someone make this argument, but my best guess would be that freezing ectotherms should be less bad than freezing endotherms because an endotherm would maintain its body temperature for a while, while an ectotherm is more likely to "give up" and let the cold temperatures come. This seems more likely to be humane for very tiny creatures that can rapidly change temperature, compared with, say, reptiles and amphibians. People say that freezing reptiles is inhumane.

There are unfortunately lots of dying bugs around my house, so I regrettably have a lot of experience freezing bugs to euthanize them. I find that a dying fly put in the freezer becomes completely motionless within ~half a minute. There are at least two possible explanations for this:

  1. The cold temperatures slow down metabolic activity so that cells (including neurons) are mostly paused.
  2. The nervous system is still active but merely chooses to stop movement, perhaps to avoid bodily injury or something.

I hope the answer is #1 rather than #2, though I agree we don't know much about this stuff. Freezing insects could be anywhere from almost painless (after the first few seconds) to extremely painful.

I avoid testing it out, but I would imagine that if you put a bug in the freezer and took it out a minute or two later, it would come back to being active again. The freezing temperatures probably just put it "on pause" rather than killing it quickly. I don't know how long it takes freezing temperatures to actually kill a bug (and it may vary a lot from one species to the next).

Comment by Brian_Tomasik on On AI and Compute · 2019-04-08T21:24:32.990Z · EA · GW

Thanks for the interesting post!

By cortical neuron count, systems like AlphaZero are at about the same level as a blackbird (albeit one that lives for 18 years)

That comparison makes me think AI algorithms need a lot of work, because blackbirds seem vastly more impressive to me than AlphaZero. Some reasons:

  1. Blackbirds can operate in the real world with a huge action space, rather than a simple toy world with a limited number of possible moves.
  2. Blackbirds don't need to play millions of rounds of games to figure things out. Indeed, they only have one shot to figure the most important things out or else they die. (One could argue that evolution has been playing millions/trillions/etc of rounds of the game over time, with most animals failing and dying, but it's questionable how much of that information can be transmitted to future generations through a limited number of genes.)
  3. Blackbirds seem to have "common sense" when solving problems, in the sense of figuring things out directly rather than stumbling upon them through huge amounts of trial and error. (This is similar to point 2.) Here's a random example of what I have in mind by common sense: "One researcher reported seeing a raven carry away a large block of frozen suet by using his beak to carve a circle around the entire chunk he wanted." Presumably the raven didn't have to randomly peck around on thousands of previous chunks of ice in order to discover how to do that.

Perhaps one could argue that if we have the hardware for it, relatively dumb trial and error can also get to AGI as long as it works, whether or not it has common sense. But this gets back to point #1: I'm skeptical that dumb trial and error of the type that works for AlphaZero would scale to a world as complex as a blackbird's. (Plus, we don't have realistic simulation environments in which to train such AIs.)

All of that said, I acknowledge there's a lot of uncertainty on these issues, and nobody really knows how long it will take to get the right algorithms.

Comment by Brian_Tomasik on Why doesn't the EA forum have curated posts or sequences? · 2019-03-22T04:25:25.941Z · EA · GW

In my opinion, organizations may do best to avoid officially endorsing anything other than the most central content that they produce in order to reduce these PR headaches, regarding both

  1. what's said in the endorsed articles and
  2. which articles were or weren't chosen to begin with (the debate over the EA Handbook comes to mind).

As an alternative, maybe individual people could create their own non-CEA-endorsed lists of recommended content, and these could be made available somewhere. Having many such lists would allow for diversity based on interests and values. (For example, "The best global poverty articles", "The best career articles", "The best articles for suffering-focused altruists", etc.)

Comment by Brian_Tomasik on Suffering of the Nonexistent · 2019-03-08T22:45:11.788Z · EA · GW

I liked the long introductory exposition, though I also agree with adding the summary.

Comment by Brian_Tomasik on [deleted post] 2018-12-31T00:49:46.871Z

Thanks for the analysis. :) As Carl mentions, effects on wild animals are also important. From my perspective, it's plausible that family planning is unfortunately net bad with respect to wild-animal suffering, since humans may reduce global wild-animal populations, although this is far from obvious.

Comment by Brian_Tomasik on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-12-28T01:08:25.481Z · EA · GW

Interesting points. :) I think there could be substantial differences in policy between 10% support and 100% support for MCE depending on the costs of appeasing this faction and how passionate it is. Or between 1% and 10% support for MCE applied to more fringe entities.

philosophically sophisticated people can still have fairly strange values by your own lights, but it seems like there's more convergence.

I'm not sure if sophistication increases convergence. :) If anything, people who think more about philosophy tend to diverge more and more from commonsense moral assumptions.

Yudkowsky and I seem to share the same metaphysics of consciousness and have both thought about the topic in depth, yet we occupy almost antipodal positions on the question of how many entities we consider moral patients. I tend to assume that one's starting points matter a lot for what views one ends up with.

Comment by Brian_Tomasik on Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply · 2018-04-30T11:03:08.695Z · EA · GW

I think the economist guesses are from Compassion, By the Pound, though I also don't have a copy of either book.

no peer-reviewed articles or analyses being quoted

Yeah. Matheny (2003) is a journal article on the same topic, though it's not an economics journal.

they could easily export whatever is not locally consumed (as some EU countries do).

Perhaps that would reduce local meat production in the destination countries.

Comment by Brian_Tomasik on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-24T16:56:26.179Z · EA · GW

You raise some good points. (The following reply doesn't necessarily reflect Jacy's views.)

I think the answers to a lot of these issues are somewhat arbitrary matters of moral intuition. (As you said, "Big part of it seems arbitrary.") However, in a sense, this makes MCE more important rather than less, because it means expanded moral circles are not an inevitable result of better understanding consciousness/etc. For example, Yudkowsky's stance on consciousness is a reasonable one that is not based on a mistaken understanding of present-day neuroscience (as far as I know), yet some feel that Yudkowsky's view about moral patienthood isn't wide enough for their moral tastes.

Another possible reply (that would sound better in a political speech than the previous reply) could be that MCE aims to spark discussion about these hard questions of what kinds of minds matter, without claiming to have all the answers. I personally maintain significant moral uncertainty regarding how much I care about what kinds of minds, and I'm happy to learn about other people's moral intuitions on these things because my own intuitions aren't settled.

E.g. we can think about the DNA based evolution as about large computational/optimization process - suddenly "wild animal suffering" has a purpose and traditional environmnet and biodiversity protection efforts make sense.

Or if we take a suffering-focused approach to these large systems, then this could provide a further argument against environmentalism. :)

If the human cognitive processes are in the priviledged position of creating meaning in this universe ... well, then they are in the priviledged postion, and there is a categorical difference between humans and other minds.

I selfishly consider my moral viewpoint to be "privileged" (in the sense that I prefer it to other people's moral viewpoints), but this viewpoint can have in its content the desire to give substantial moral weight to non-human (and human-but-not-me) minds.

Comment by Brian_Tomasik on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-22T21:23:57.496Z · EA · GW

I tend to think of moral values as being pretty contingent and pretty arbitrary, such that what values you start with makes a big difference to what values you end up with even on reflection. People may "imprint" on the values they receive from their culture to a greater or lesser degree.

I'm also skeptical that sophisticated philosophical-type reflection will have significant influence over posthuman values compared with more ordinary political/economic forces. I suppose philosophers have sometimes had big influences on human politics (religions, Marxism, the Enlightenment), though not necessarily in a clean "carefully consider lots of philosophical arguments and pick the best ones" kind of way.

Comment by Brian_Tomasik on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T13:02:23.831Z · EA · GW

I'm fairly skeptical of this personally, partly because I don't think there's a fact of the matter when it comes to whether a being is conscious.

I would guess that increasing understanding of cognitive science would generally increase people's moral circles if only because people would think more about these kinds of questions. Of course, understanding cognitive science is no guarantee that you'll conclude that animals matter, as we can see from people like Dennett, Yudkowsky, Peter Carruthers, etc.

Comment by Brian_Tomasik on Where can I donate to support insect welfare? · 2018-01-02T08:06:59.725Z · EA · GW

Or maybe do donate to AMF. :)

Comment by Brian_Tomasik on Where can I donate to support insect welfare? · 2017-12-31T13:51:09.344Z · EA · GW

Nice points. :)

it seems doubtful there are any interventions for insects that will substantially change their quality of life without also making a big difference in the total population

One exception might be identifying insecticides that are less painful than existing ones while having roughly similar effectiveness, broad/narrow-spectrum effects, etc. Other forms of humane slaughter, such as on insect farms, would also fall under this category.