Posts

Best way to invest with leverage? 2015-04-02T11:04:49.951Z · score: 5 (7 votes)
Can we set up a system for international donation trading? 2015-03-03T13:47:07.747Z · score: 10 (10 votes)
Counterfactual credit assignment 2013-07-24T04:00:08.000Z · score: 1 (1 votes)

Comments

Comment by brian_tomasik on Should Longtermists Mostly Think About Animals? · 2020-02-08T02:50:32.217Z · score: 9 (7 votes) · EA · GW

Thanks for this detailed post!

My guess would be that Greaves and MacAskill focus on the "10 billion humans, lasting a long time" scenario just to make their argument maximally conservative, rather than because they actually think that's the right scenario to focus on? I haven't read their paper, but on brief skimming I noticed that the paragraph at the bottom of page 5 talks about ways in which they're being super conservative with that scenario.

Assuming that the goal is just to be maximally conservative while still arguing for longtermism, adding the animal component, though reasonable on its own terms, doesn't serve that purpose. As an analogy, imagine someone who denies that any non-humans have moral value. You might start by pointing to other primates or maybe dolphins. Someone could come along and say "Actually, chickens are also quite sentient and are far more numerous than non-human primates", which is true, but it's slightly harder to convince a skeptic that chickens matter than that chimpanzees matter.

such as humans’ high brain-to-body mass ratio

One might also care about total brain size because in bigger brains, there's more stuff going on (and sometimes more sophisticated stuff going on). As an example, imagine that you morally value corporations, and you think the most important part of a corporation is its strategic management (rather than the on-the-ground employees). You may indeed care more about corporations that have a greater ratio of strategic managers to total employees. But you may also care about corporations that have just more total strategic managers, especially since larger companies may be able to pull off more complex analyses that smaller ones lack the resources to do.

Comment by brian_tomasik on How Much Leverage Should Altruists Use? · 2020-01-13T00:49:17.315Z · score: 1 (1 votes) · EA · GW

That seems to be a common view, but I haven't yet been able to find any reason why that would be the case, except insofar as rebalancing frequency affects how leveraged you are. I discussed the topic a bit here. Maybe someone who knows more about the issue can correct me.

Comment by brian_tomasik on How Much Leverage Should Altruists Use? · 2020-01-09T23:52:33.617Z · score: 1 (1 votes) · EA · GW

Good point. I think such a fund would want to be very clear that it's not for the faint of heart and that it's done in the spirit of trying new risky things. If that message was front and center, I expect the backlash would be less.

Comment by brian_tomasik on How Much Leverage Should Altruists Use? · 2020-01-09T22:57:49.810Z · score: 2 (2 votes) · EA · GW

Thanks! From my reading of the post, that critique is not really specific to leveraged ETFs? Volatility drag is inherent to leverage in general (and even to non-leveraged investing to a smaller degree).

He says: "In my next post, I’m going to dive into more detail on how to distinguish between good and bad uses of leverage." So I found his next post on leverage, which coincidentally is one mentioned in the OP: "The Line Between Aggressive and Crazy". There he clarifies why he doesn't like leveraged ETFs:

From this we start to see the problem with levered ETFs as they are currently constructed: they generally use too much leverage applied to too volatile of assets. Even with the plain vanilla S&P 500 3x leverage is too much. And after accounting for the hefty transactions costs and management fees these ETFs charge, even 2x might be suboptimal (especially if you believe returns will be lower in the future than they have in recent decades). And the S&P 500 is one of the most conservative targets for these products. Take a look at the websites of levered ETF providers and you will see ways to make levered bets on particular industries like biotech or the energy sector, or on commodities like oil and gold, or for more esoteric instruments yet, almost all of which are more volatile than a broadly diversified index like the S&P 500, and thus supporting much lower Kelly leverage ratios, probably less than 2x.

So unless transaction costs are a dealbreaker, it seems like he's mainly opposed to the fact that most leveraged ETFs use too much leverage for their level of volatility (relative to the Kelly Criterion, which assumes logarithmic utility of wealth), not that the instrument itself is flawed? Of course, leveraged ETFs implement a "constant leverage" strategy, and later in that post, Davis proposes adjusting the leverage ratio dynamically (which I agree is better, though it requires more work).
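
For concreteness, here's a minimal sketch (in Python, with illustrative return and volatility numbers that are my assumptions, not Davis's figures) of the Kelly-optimal leverage ratio under log utility and of how volatility drag grows with the leverage ratio for a constantly rebalanced position:

```python
# Assumed (illustrative) annual figures -- not taken from Davis's post.
mu = 0.07      # expected excess return of the index over the borrowing rate
sigma = 0.16   # annual volatility of the index

# Kelly-optimal leverage under log utility (continuous-time approximation).
kelly_leverage = mu / sigma**2
print(f"Kelly leverage: {kelly_leverage:.1f}x")  # ~2.7x with these inputs

# Volatility drag: a constantly rebalanced position at leverage L grows at
# roughly L*mu - 0.5*(L*sigma)**2, so doubling leverage doubles the
# arithmetic return but quadruples the drag term.
for L in [1, 2, 3]:
    growth = L * mu - 0.5 * (L * sigma) ** 2
    print(f"{L}x leverage: approximate growth rate {growth:.3f} per year")
```

Plugging a more volatile asset into the same formula (say sigma = 0.30 for a sector or commodity ETF) drops the Kelly leverage below 1x, which is the quantitative version of Davis's complaint about "too much leverage applied to too volatile of assets."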

Comment by brian_tomasik on How Much Leverage Should Altruists Use? · 2020-01-08T00:08:20.065Z · score: 10 (7 votes) · EA · GW

Leveraged ETFs are one way to keep your leverage ratio from blowing up, without any investor effort.

Keeping all the considerations in this post in mind seems very difficult, so perhaps the ideal solution would be if there were an institution to do it for individuals, such as EA Funds or something like it. You could donate to the fund and let them adjust leverage, correlation with other donors to the same cause, and everything else on your behalf.

Comment by brian_tomasik on What ever happened to PETRL (People for the Ethical Treatment of Reinforcement Learners)? · 2020-01-01T18:14:15.224Z · score: 3 (3 votes) · EA · GW

PETRL was (to my knowledge) the only organization focused on the ethics of AI qua moral patient

There seems to be a lot of academic and popular discussion about robot rights and machine consciousness, but yeah, I can't name offhand another organization explicitly focused on this topic. (To some degree, Sentience Institute has this as a long-run goal, and many organizations care about it as part of what they work on.)

There's a spoof organization called People for Ethical Treatment of Robots.

Update: I see there's another organization: American Society for the Prevention of Cruelty to Robots. On the FAQ page they say:

Q: Are you serious?

A: The ASPCR is, and will continue to be, exactly as serious as robots are sentient.

Comment by brian_tomasik on Ethical offsetting is antithetical to EA · 2019-09-19T21:52:54.049Z · score: 1 (1 votes) · EA · GW

A problem is that different people have different views on what's most effective. If most people are quasi-egoists, then for them, spending money on themselves or their families is "the most effective charity" they can give to. Or even within the realm of what's normally understood to be charity, people might donate to their local church or arts center. Relative to their values, this might be the best charity to give to.

Comment by brian_tomasik on Ethical offsetting is antithetical to EA · 2019-09-19T21:32:24.209Z · score: 5 (3 votes) · EA · GW

I think offsetting makes sense when seen as a form of moral trade with other people (or even possibly other factions within your own brain's moral parliament).

Regarding objection #1 about reference classes, the answer can be that you can choose a reference class that's acceptable to your trading partner. For example, suppose you do something that makes the global poor slightly worse off. Suppose that a large faction of society doesn't care much about non-human animals but does care about the global poor. Then donating to an animal charity wouldn't offset this harm in their eyes, but donating to a developing-world charity would.

Regarding objection #2, trade by its nature involves spending resources on things that you think are suboptimal because someone else wants you to.

An objection to this perspective can be that in most offsetting situations, the trading partner isn't paying enough attention or caring enough to actually reciprocate with you in ways that make the trade positive-sum for both sides. (For trade within your own brain, reciprocation seems more likely.)

Comment by brian_tomasik on Ethical offsetting is antithetical to EA · 2019-09-19T19:25:07.196Z · score: 3 (2 votes) · EA · GW

Good point.

this is best achieved not through offsetting, but by thinking about who we will want to cooperate with and trying to help their values as cost-effectively as possible.

Couldn't one argue that offsetting harms that people outside EA care about counts as cooperating with mainstream people to some degree? In practice the way this often works is by improved public relations or general trustworthiness, rather than via explicit tit for tat. Anyway, whether this is worthwhile depends on how costly the offsets are (in terms of money and time) relative to the benefits.

Comment by brian_tomasik on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-08T15:43:46.603Z · score: 1 (1 votes) · EA · GW

I see here it says: "Aside from mass voting, you can vote using any other criteria you choose." Presumably some people use votes to express dislike rather than to rate the quality of the comment.

(I expect most of us are guilty of this to some extent. I don't downvote comments merely because I disagree, but I do upvote comments I agree with more often...)

Comment by brian_tomasik on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-08T15:25:23.898Z · score: 4 (4 votes) · EA · GW

I think the idea is that even a pure utilitarian should care about contractarian-style thinking for almost any practical scenario, even if there are some thought experiments where that's not the case.

Comment by brian_tomasik on Interview with Michael Tye about invertebrate consciousness · 2019-08-09T10:07:22.937Z · score: 15 (8 votes) · EA · GW

Congrats on all these great interviews!

There is nothing in the behavior of the nematode worm that indicates the presence of consciousness. It is a simple stimulus-response system without any flexibility in its behavior.

There are numerous papers on learning in C. elegans. Rankin (2004):

Until 1990, no one investigated the possibility that C. elegans might show behavioral plasticity and be able to learn from experience. This has changed dramatically over the last 14 years! Now, instead of asking “what can a worm learn?” it might be better to ask “what cannot a worm learn?” [...]

C. elegans has a remarkable ability to learn about its environment and to alter its behavior as a result of its experience. In every area where people have looked for plasticity they have found it.

Comment by brian_tomasik on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-09T23:44:12.743Z · score: 2 (2 votes) · EA · GW

Interesting. :)

giving greater weight to worse states is basically asserting that they are mistaken for not being more risk-averse.

I was thinking that it's not just a matter of risk aversion, because regular utilitarianism would also favor helping the person with a terrible life if doing so were cheap enough. The perverse behavior of the ex ante view in my example comes from the concave f function.

Comment by brian_tomasik on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-09T03:03:16.046Z · score: 7 (4 votes) · EA · GW

We could be wrong either way.

Good point, but I feel like ex post prioritarianism does the allocation better, by being risk-averse (even though this is what Ord criticizes about it in the 2015 paper you cited in the OP). Imagine that someone has a 1/3^^^3 probability of 3^^^^3 utility. Ex ante prioritarianism says the expected utility is so enormous that there's no need to benefit this person at all, even if doing so would be almost costless. Suppose that with probability 1 - 1/3^^^3, this person has a painful congenital disease, grows up in poverty, is captured and tortured for years on end, and dies at age 25. Ex ante prioritarianism (say with a sqrt or log function for f) says that if we could spend $0.01 to prevent all of that suffering, we needn't bother because other uses of the money would be more cost-effective, even though it's basically guaranteed that this person's life will be nothing but horrible. Ex post prioritarianism gets what I consider the right answer because the reduction of torment is not buried into nothingness by the f function, since the expected-value calculation weighs the two scenarios separately, applying the f function to each one on its own.
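
Here's a toy numerical version of the same point (my own illustration, with ordinary-sized numbers in place of 3^^^3, and an assumed priority weighting f that is square-root on gains and identity on losses so that bad outcomes keep their full weight):

```python
import math

def f(u):
    # Assumed prioritarian weighting: concave (sqrt) on gains, identity on losses.
    return math.sqrt(u) if u >= 0 else u

p_jackpot = 1e-6     # tiny chance of an enormously good life
u_jackpot = 1e12
u_awful = -100.0     # near-certain terrible life

# Ex ante prioritarianism applies f to the person's expected utility;
# ex post prioritarianism takes the expectation of f over the outcomes.
expected_u = p_jackpot * u_jackpot + (1 - p_jackpot) * u_awful
ex_ante = f(expected_u)
ex_post = p_jackpot * f(u_jackpot) + (1 - p_jackpot) * f(u_awful)

# A cheap intervention that removes the suffering in the near-certain branch:
expected_u_helped = p_jackpot * u_jackpot
ex_ante_gain = f(expected_u_helped) - ex_ante
ex_post_gain = (p_jackpot * f(u_jackpot) + (1 - p_jackpot) * f(0.0)) - ex_post

print(f"ex ante gain from helping: {ex_ante_gain:.2f}")   # ~0.05
print(f"ex post gain from helping: {ex_post_gain:.2f}")   # ~100
```

With these numbers the ex ante view values relieving the near-certain suffering at only about 0.05 priority-weighted units, because the tiny jackpot chance already makes the person's expectation look enormous, while the ex post view values it at roughly the full badness of the suffering.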

I guess an ex ante supporter could say that if someone chooses the 1/3^^^3 gamble and it doesn't work out, that's the price you pay for taking the risk. But that stance feels pretty harsh.

Comment by brian_tomasik on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-08T19:52:40.997Z · score: 1 (1 votes) · EA · GW

I meant the subjective probabilities of the person using the ethical system ("you") applied to everyone, not using their own subjective probabilities.

I see. :) It seems like we'd still have the same problem as I mentioned. For example, I might think that currently elderly people signed up for cryonics have very high expected lifetime utility relative to those who aren't signed up because of the possibility of being revived (assuming positive revival futures outweigh negative ones), so helping currently elderly people signed up for cryonics is relatively unimportant. But then suppose it turns out that cryopreserved people are never actually revived.

(This example is overly simplistic, but the point is that you can get similar scenarios as my original one while still having "reasonable" beliefs about the world.)

Comment by brian_tomasik on An Argument for Why the Future May Be Good · 2019-07-08T19:41:41.722Z · score: 1 (1 votes) · EA · GW

I think maybe what I had in mind with my original comment was something like: "There's a high probability (maybe >80%?) that the future will be very alien relative to our values, and it's pretty unclear whether alien futures will be net positive or negative (say 50% for each), so there's a moderate probability that the future will be net negative: namely, at least 80% * 50%." This is a statement about P(future is positive), but probably what you had in mind was the expected value of the future, counting the IMO unlikely scenarios where human-like values persist. Relative to values of many people on this forum, that expected value does seem plausibly positive, though there are many scenarios where the future could be strongly and not just weakly negative. (Relative to my values, almost any scenario where space is colonized is likely negative.)

Comment by brian_tomasik on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-08T00:01:35.113Z · score: 1 (1 votes) · EA · GW

your own subjective probability distribution to be used

Would that penalize people who hold optimistic beliefs? Their expected utilities would often be pretty high, so it'd be less important to help them. As an extreme example, someone who expects to spend eternity in heaven would already be so well off that it would be pointless to help him/her, relative to helping an atheist who expects to die at age 75. That's true even if the believer in heaven gets a terminal disease at age 20 and dies with no afterlife.

Comment by brian_tomasik on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-06T19:31:20.118Z · score: 9 (5 votes) · EA · GW

Thought experiments like these are why I regard personal identity, and any moral theories that depend on it, as non-starters (including versions of prioritarianism that consider lifetime wellbeing collectively). I think it's best to think either in terms of empty individualism or open individualism. Empty individualism tends to favor suffering-focused views because any given moment of unbearable suffering can't be compensated by other moments of pleasure even within what we normally call the same individual, because the pleasure is actually experienced by a different individual. Open individualism tends to undercut suffering-focused intuitions by saying that torturing one person for the happiness of a billion others is no different than one person experiencing pain for later pleasure.

As others have pointed out before, it is legitimate to try to salvage some ethical concern for personal identity despite the paradoxes. By analogy, the idea of consciousness has many paradoxes, but I still try to salvage it for my ethical reasoning. Neither personal identity nor consciousness "actually exists" in any deep ontological sense, but we can still care about them. It's just that I happen not to care ethically about personal identity.

Comment by brian_tomasik on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-05T17:48:05.119Z · score: 2 (2 votes) · EA · GW

Interesting ideas. :)

If I understand the view correctly, it would say that a world where everyone has a 49.99% chance of experiencing pain with utility of -10^1000 and a 50.01% chance of experiencing pleasure with utility of 10^1000 is fine, but as soon as anyone's probability of the pain goes above 50%, things start to become very worrisome (assuming the prioritarian weighting function cares a lot more about negative than positive values)? This is despite the fact that in terms of realized outcomes, the difference between one person having 49.99% chance of the pain vs 50.01% is pretty minimal.

What probability distribution are the expectations taken with respect to? If you were God and knew everything that would happen, there would be no uncertainty (except maybe due to quantum randomness depending on one's view about that). If there's no randomness, I think ex ante prioritarianism collapses to regular prioritarianism.

One issue is how you decide whether a given person exists in a given history or not. For example, if I had been born with a different hair color, would I be the same person? Maybe. How about a different personality? At what point do "I" stop existing and someone else starts existing? I guess similar issues bedevil the question of whether a person stays the same person over time, though there we can also use spatiotemporal continuity to help maintain personal identity.

Comment by brian_tomasik on Insect herbivores, life history and wild animal welfare · 2019-07-05T15:17:16.934Z · score: 1 (1 votes) · EA · GW

That shrew thing is fascinating!

would you also claim that species with slower metabolism have less lived experience than those with faster metabolism

Yeah, as an initial hypothesis I would guess that faster brain metabolism often means that more total information processing is occurring, although this rule isn't perfect because the amount of information processing per unit of energy used can vary. Also, the sentience or "amount of experience" of a brain needn't be strictly proportional to information processing.

In 2016 I wrote some amateur speculations on this idea, citing the Healy et al. (2013) paper.

Comment by brian_tomasik on How Much Do Wild Animals Suffer? A Foundational Result on the Question is Wrong. · 2019-07-05T02:07:45.526Z · score: 4 (4 votes) · EA · GW

You're right that communication on this topic hasn't always been the most clear. :)

This section of my reply to Michael Plant helps explain my view on those questions. I think assessments of the intensities of pain and pleasure necessarily involve significant normative judgment calls, unless you define pain and pleasure in a sufficiently concrete way that it becomes a factual matter. (But that raises the question of which concrete definition is the right one to choose.)

I guess most people who aim to quantify pleasure and pain don't choose numbers such that unbearable suffering outweighs any amount of pleasure, so the statement you quoted could be said to be mainly about my negative-utilitarian values (though I would say that a view that pleasure can outweigh unbearable suffering is ultimately a statement about someone's non-negative-utilitarian values).

Comment by brian_tomasik on How Much Do Wild Animals Suffer? A Foundational Result on the Question is Wrong. · 2019-06-25T13:56:52.036Z · score: 40 (23 votes) · EA · GW

Congrats on fixing the error!

When I first discussed Ng (1995)'s mathematical proof with some friends in 2006, they said they didn't find it very convincing because it's too speculative and not very biologically realistic. Other people since then have said the same, and I agree. I've cited it on occasion, but I've never considered the mathematical result of that particular model to be more than an extremely weak argument for the predominance of suffering.

I think the intuition underlying the argument -- that most offspring die not long after birth -- is one of the reasons many people believe wild-animal suffering predominates. It certainly might be the case that this intuition is misguided, for reasons like the one you gave: "when the probability of suffering increases, the severity of suffering should decrease." I have an article that also discusses theoretical reasons why shorter-lived animals and animals who are less likely to ever reproduce may not feel as much pain or fear as we would from the same kinds of injuries.

While I think these kinds of arguments are interesting, I give them relatively low epistemic weight because they're so theoretical. I think the best way to assess the net hedonic balance of wild animals is to watch videos and read about their lives, seeing what kinds of emotions they display, and then come up with our own subjective opinions about how much pain and pleasure they feel. This method is biased by anthropomorphism, but it's at least somewhat more anchored to reality than simple theoretical models. We could try to combat anthropomorphism a bit by learning more about how other animals make tradeoffs between combinations of bad and good things, and so on.

For me, it will always remain obvious that suffering dominates in nature because I believe extreme, unbearable suffering can't be outweighed by other organism-moments experiencing pleasure. In general, I think most of the disagreement about nature's net hedonic balance comes down to differences in moral values rather than disagreements about facts. But yes, it remains useful to improve our frameworks for thinking about this topic, as you're helping to do. :)

Comment by brian_tomasik on Why we have over-rated Cool Earth · 2019-06-22T04:07:09.440Z · score: 2 (2 votes) · EA · GW

Good points. I mentioned Cool Earth specifically here, with a tentative calculation suggesting that even if greenhouse-gas emissions increase wild-animal populations (and it's not clear that they do), preserving rainforest to sequester CO2 probably increases wild-animal populations even more.

Comment by brian_tomasik on Insect herbivores, life history and wild animal welfare · 2019-06-21T12:37:43.045Z · score: 2 (2 votes) · EA · GW

the mechanisms of diapause are quite variable even within species groups (e.g., Hand et al. 2016).

Interesting. :) When I said "I tend to assume that insects in diapause have relatively little subjective experience", I had in mind the prototypical case of diapause where metabolism dramatically decreases.

I see that Hand et al. (2016) make the point that diapause doesn't always imply reduced metabolism: "Diapause [...] may or may not involve a substantial depression of metabolism" and "Diapause [...] depending on the species, can also be accompanied by depression of metabolism, essential for conserving energy reserves."

When I was reading about diapause, most of the sources suggested that metabolism was reduced, so I assumed that was the usual case. For example: "During diapause an insect's metabolic rate drops to one tenth or less".

Comment by brian_tomasik on Insect herbivores, life history and wild animal welfare · 2019-06-21T11:35:36.259Z · score: 2 (2 votes) · EA · GW

Thanks for the further insights. :)

I wasn't very clear about the phrase "adult lifespan", which I was probably using incorrectly. What I had in mind was "average lifespan only counting individuals who survive to adulthood", which I think is similar if not the same as what you had in mind.

Life expectancy at birth may vary a lot, but I think it'd be interesting to see some example numbers to get a sense of the diversity, similar to how you gave lots of other sample numbers for other metrics. I assume one could compute it from survivorship curves. (This is just a general point for future work that people might do. You've already gathered a huge amount of info here, and I don't mean to request even more. :) )

A species that lives in a cool climate does not necessarily have an average experienced daily temperature that is less than a species in a warmer climate, except for really extreme cases

My comment was partly inspired by this quote from your piece: "Species from cool temperate regions tend to have longer life cycles with about one generation per year (e.g., Danks and Foottit 1989), as do species living in areas that have a dry season. But we note that for many of these species, variable environmental conditions determine how many generations there are per year, and in addition, the overwintering generation will have a longer lifespan than growing season generations." I didn't read the source articles, but I was guessing that when species have longer lifespans due to cold or dry conditions, they presumably have to slow down metabolically during those unfavorable periods. And metabolic slowdown presumably means that activity by the nervous system slows down too.

I tried Googling about that and stumbled on Huestis et al. (2012). The authors expected mosquitoes to reduce metabolic rate during aestivation like happens for insects during winter diapause, but resting metabolic rate was actually higher during the late dry season. "The high ambient temperatures during the Sahelian dry season may prevent or limit a reduction in metabolic rate even if it would be adaptive."

Still, it does seem true that insects experiencing cooler temperatures typically slow down metabolism (with your point taken that one has to consider microclimatic temperature). So I guess my point here reduces to the previous point about how winter-diapausing insects (as well as those experiencing reduced temperatures even not in diapause) plausibly matter less per unit time, in proportion to the extent of slowdown (leaving room for lots of exceptions and diversity depending on the details).

Comment by brian_tomasik on Insect herbivores, life history and wild animal welfare · 2019-06-16T23:19:01.886Z · score: 2 (2 votes) · EA · GW

Do you think it would be different for detritivores compared with herbivores? Given that many plants aren't significantly consumed by animals, it seems there is often food in existence for herbivores to eat. In contrast, almost all decomposing organic matter will eventually be eaten by someone or other, so that food source could run out (or, if food doesn't run out, then maybe water does during dry periods). That said, maybe insect decomposers are still limited in number by factors like predators and parasitoids, and it's the other decomposers (bacteria, fungi, etc) who mainly face the resource limits.

Comment by brian_tomasik on Insect herbivores, life history and wild animal welfare · 2019-06-15T03:52:25.233Z · score: 12 (6 votes) · EA · GW

There's tons of useful info in this piece. :)

I take it that your "Life span" section refers to adult lifespans? For example, the statement that "Overall, very short lifespans (less than 20 days) seem fairly rare" refers to living less than 20 days after reaching maturity? Do you have estimates for life expectancy at birth (maybe ignoring egg mortality, assuming eggs aren't sufficiently sentient to warrant concern)? Your sections on "Predators" and "Parasitoids" gave some point estimates based on when predation and inoculation by parasitoids often occur. Maybe those are reasonable approximations for life expectancy at birth. On the other hand, isn't survivorship almost always "concave upward", with most deaths occurring quite early? This figure is one random example, showing that most of the insects are dead before the second instar. And because of the concave-upward shape, the average age of death should be pretty young.
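
For what it's worth, here's the kind of calculation I have in mind, using an invented type-III (concave-upward) survivorship schedule rather than real data:

```python
# Fraction of a cohort still alive at the start of each day of life.
# Invented numbers with the steep early drop typical of type-III survivorship.
survival = [1.0, 0.40, 0.20, 0.12, 0.08, 0.06, 0.05, 0.04, 0.03, 0.02, 0.0]

# Deaths during day t = survivors at the start of day t minus survivors at t+1.
deaths = [survival[t] - survival[t + 1] for t in range(len(survival) - 1)]

# Life expectancy at birth = average age at death, weighting each day by the
# fraction of the cohort dying on that day.
le_birth = sum(t * d for t, d in enumerate(deaths)) / sum(deaths)

# Life expectancy conditional on surviving the first day, for comparison.
later = deaths[1:]
le_survivors = sum((t + 1) * d for t, d in enumerate(later)) / sum(later)

print(f"life expectancy at birth: {le_birth:.1f} days")      # 1.0 day
print(f"given surviving day 0:    {le_survivors:.1f} days")   # 2.5 days
```

The concave-upward shape is what pulls the at-birth figure so far below the figure for individuals who survive the earliest stage.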

extended longevity associated with extended or repeated diapause

I tend to assume that insects in diapause have relatively little subjective experience, such that those periods of time "don't count" very much if we're using lifespan as a measure of how long the animal experiences pleasure and pain. Of course, if the insect is minimally sentient during that time, then maybe deaths occurring during that time aren't that bad.

Extending this idea, it seems plausible that ectotherms that mature slowly in cool climates have less sentience and less hedonic experience per day than those in warm climates, because biological activity is generally slowed down in cool climates. So maybe the difference in total amount of life experiences is less than one might assume between longer-lived slow-developing insects in high latitudes vs fast-developing insects at low latitudes.

Dung beetles species had the lowest lifetime fecundity (~2 offspring), while mayflies had the largest (~4000 offspring).

If we imagine only two species of insect -- one with lifetime fecundity of 2 and one with 4000 -- and if each species has equal numbers of egg-laying mothers, then the ratio of (total offspring)/(total mothers) will still be very high: (2 + 4000)/(1 + 1) = 2001. When we make assessments about the net hedonic balance of an entire ecosystem containing multiple species, it's this average value that seems most relevant. (Of course, this number is only one heuristic. A full evaluation has to consider the sentience of each organism, the cause of death, lifespan, etc.)
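
Spelling out the same arithmetic with (made-up) population weights for each species:

```python
# Lifetime fecundity per mother, and an assumed number of egg-laying mothers,
# for two imaginary species in one ecosystem (illustrative numbers only).
species = {
    "dung beetle": {"fecundity": 2,     "mothers": 1_000},
    "mayfly":      {"fecundity": 4_000, "mothers": 1_000},
}

total_offspring = sum(s["fecundity"] * s["mothers"] for s in species.values())
total_mothers = sum(s["mothers"] for s in species.values())
print(total_offspring / total_mothers)  # 2001.0 -- the high-fecundity species dominates
```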

Comment by brian_tomasik on A vision for anthropocentrism to supplant wild animal suffering · 2019-06-07T23:57:15.923Z · score: 5 (4 votes) · EA · GW

Interesting info. :)

Jacy has argued that farm-animal suffering is a closer analogy to most far-future suffering than wild-animal suffering, and I largely agree with his arguments, although he and I both believe that some concern for naturogenic suffering is an important part of a "moral-circle-expansion portfolio", especially if events within some large simulations fall mainly into the "naturogenic" moral category. There could also be explicit nature simulations run for reasons of intrinsic/aesthetic value or entertainment.

I agree that terraforming and directed panspermia, if they occur at all, will be relatively brief preludes to a much larger and longer artificial future. A main reason I mention terraforming and directed panspermia at all is because they're less speculative/weird, and there's already a fair amount of discussion about them. But as I said here: "in the long run, it seems likely that most Earth-originating agents will be artificial: robots and other artificial intelligences (AIs). [...] we should expect that digital, not biological, minds will dominate in the future, barring unforeseen technical difficulties or extreme bio-nostalgic preferences on the part of the colonizers."

Then we can have a reasonable expectation that quality of life will be positive, as people will have plenty of contact and responsibility for other organisms.

...only if (1) concern for the experienced welfare (rather than, say, autonomy) of animals increases significantly from where it is now (including for invertebrates, who hold the majority of the neurons) and (2) such concern doesn't later decrease. Neither of these assumptions is obvious. Personally I find it probable that moral concern for the suffering of animal-like creatures, like most human values, will be a distant memory within 5000 years, for similar reasons as worship of the ancient-Egyptian deities is a distant memory today.

Comment by brian_tomasik on What is the current best estimate of the cumulative elasticity of chicken? · 2019-05-17T20:20:35.694Z · score: 2 (2 votes) · EA · GW

Are there goods that economists think do work like what my friend is describing?

Relative to Econ 101 models, that would only happen if supply is perfectly inelastic (i.e., the supply curve is vertical).

Edited to add: ...or if demand is perfectly elastic (i.e., the demand curve is horizontal). Given how much people like eating meat, this seems very implausible.
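
In the Econ 101 picture, the standard first-order approximation is that if one consumer buys one fewer unit, equilibrium production falls by roughly supply_elasticity / (supply_elasticity + |demand_elasticity|) units. A quick sketch (with elasticity values assumed purely for illustration) shows why only the two limiting cases give the "no effect on production" result your friend describes:

```python
def production_drop_per_unit_demand_drop(supply_elasticity, demand_elasticity):
    """First-order approximation with linear supply and demand curves."""
    return supply_elasticity / (supply_elasticity + abs(demand_elasticity))

# Assumed, illustrative values: fairly elastic supply, somewhat inelastic demand.
print(production_drop_per_unit_demand_drop(2.0, -0.7))   # ~0.74 units

# The limiting cases from the comments above:
print(production_drop_per_unit_demand_drop(0.0, -0.7))   # 0.0 -- vertical supply curve
print(production_drop_per_unit_demand_drop(2.0, -1e9))   # ~0.0 -- (near-)horizontal demand curve
```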

Comment by brian_tomasik on Thoughts on the welfare of farmed insects · 2019-05-13T14:25:41.185Z · score: 6 (4 votes) · EA · GW

I think most of the experts Max has in mind are talking about phenomenal consciousness. One example is Max's recent interview with Jon Mallatt.

My own view is that there is no sharp line separating "phenomenal consciousness" from mere cognitive abilities, though certainly some types of mental abilities tug at our moral heartstrings more than others, and it is a nontrivial question to what degree insects have those heartstring-tugging abilities.

Comment by brian_tomasik on Thoughts on the welfare of farmed insects · 2019-05-09T12:41:40.211Z · score: 11 (5 votes) · EA · GW

It seems like most articles on the subject claim higher efficiency

Yeah. :) I was just offering one more data point. In the Table 3 screenshot in the link I gave above, it's carp rather than chicken that are most competitive with crickets in terms of feed conversion.

the Wikipedia page seems pretty sceptical about freezing as a method of killing

I wrote that page, so it's not an independent source :) (although the citations within it are).

wouldn't make sense for the nervous system to send "avoid this" messages to the animal while the animal wasn't able to avoid the situation

It could still make sense in terms of creating a bad experience that makes the animal try harder to avoid such a situation next time (if there is a next time).

Comment by brian_tomasik on Thoughts on the welfare of farmed insects · 2019-05-09T05:23:49.780Z · score: 14 (7 votes) · EA · GW

Great post!

My general position is that I expect insect farming to be even worse ethically than the factory farming of larger animals.

In expectation I agree, except maybe farming of chickens or small fish, which might be competitive with cricket and mealworm farming in terms of (sentience per animal)*(number of animals).

most insect farming operations feed crops to insects.

Yes, except that some operations raise insects to feed to vertebrate farm animals rather than to humans. (So much for displacing other types of factory farming...)

The conversion ratio of crop to insect meat is much better than it is for [other] types of meat

Lundy and Parrella (2015) say that farmed crickets had "little or no [protein conversion efficiency] PCE improvement compared to chicken".

no good evidence that they should be a less painful way of killing these animals

I don't recall if I've ever seen someone make this argument, but my best guess would be that freezing ectotherms should be less bad than freezing endotherms because an endotherm would maintain its body temperature for a while, whereas an ectotherm is more likely to "give up" and let the cold temperatures come. This seems more likely to be humane for very tiny creatures that can rapidly change temperature than for, say, reptiles and amphibians. People say that freezing reptiles is inhumane.

There are unfortunately lots of dying bugs around my house, so I regrettably have a lot of experience freezing bugs to euthanize them. I find that a dying fly put in the freezer becomes completely motionless within ~half a minute. There are at least two possible explanations for this:

  1. The cold temperatures slow down metabolic activity so that cells (including neurons) are mostly paused.
  2. The nervous system is still active but merely chooses to stop movement, perhaps to avoid bodily injury or something.

I hope the answer is #1 rather than #2, though I agree we don't know much about this stuff. Freezing insects could be anywhere from almost painless (after the first few seconds) to extremely painful.

I avoid testing it out, but I would imagine that if you put a bug in the freezer and took it out a minute or two later, it would come back to being active again. The freezing temperatures probably just put it "on pause" rather than killing it quickly. I don't know how long it takes freezing temperatures to actually kill a bug (and it may vary a lot from one species to the next).

Comment by brian_tomasik on On AI and Compute · 2019-04-08T21:24:32.990Z · score: 18 (5 votes) · EA · GW

Thanks for the interesting post!

By cortical neuron count, systems like AlphaZero are at about the same level as a blackbird (albeit one that lives for 18 years)

That comparison makes me think AI algorithms need a lot of work, because blackbirds seem vastly more impressive to me than AlphaZero. Some reasons:

  1. Blackbirds can operate in the real world with a huge action space, rather than a simple toy world with a limited number of possible moves.
  2. Blackbirds don't need to play millions of rounds of games to figure things out. Indeed, they only have one shot to figure the most important things out or else they die. (One could argue that evolution has been playing millions/trillions/etc of rounds of the game over time, with most animals failing and dying, but it's questionable how much of that information can be transmitted to future generations through a limited number of genes.)
  3. Blackbirds seem to have "common sense" when solving problems, in the sense of figuring things out directly rather than stumbling upon them through huge amounts of trial and error. (This is similar to point 2.) Here's a random example of what I have in mind by common sense: "One researcher reported seeing a raven carry away a large block of frozen suet by using his beak to carve a circle around the entire chunk he wanted." Presumably the raven didn't have to randomly peck around on thousands of previous chunks of ice in order to discover how to do that.

Perhaps one could argue that if we have the hardware for it, relatively dumb trial and error can also get to AGI as long as it works, whether or not it has common sense. But this gets back to point #1: I'm skeptical that dumb trial and error of the type that works for AlphaZero would scale to a world as complex as a blackbird's. (Plus, we don't have realistic simulation environments in which to train such AIs.)

All of that said, I acknowledge there's a lot of uncertainty on these issues, and nobody really knows how long it will take to get the right algorithms.

Comment by brian_tomasik on Why doesn't the EA forum have curated posts or sequences? · 2019-03-22T04:25:25.941Z · score: 14 (11 votes) · EA · GW

In my opinion, organizations may do best to avoid officially endorsing anything other than the most central content that they produce in order to reduce these PR headaches, regarding both

  1. what's said in the endorsed articles and
  2. which articles were or weren't chosen to begin with (the debate over the EA Handbook comes to mind).

As an alternative, maybe individual people could create their own non-CEA-endorsed lists of recommended content, and these could be made available somewhere. Having many such lists would allow for diversity based on interests and values. (For example, "The best global poverty articles", "The best career articles", "The best articles for suffering-focused altruists", etc.)

Comment by brian_tomasik on Suffering of the Nonexistent · 2019-03-08T22:45:11.788Z · score: 1 (1 votes) · EA · GW

I liked the long introductory exposition, though I also agree with adding the summary.

Comment by Brian_Tomasik on [deleted post] 2018-12-31T00:49:46.871Z

Thanks for the analysis. :) As Carl mentions, effects on wild animals are also important. From my perspective, it's plausible that family planning is unfortunately net bad with respect to wild-animal suffering, since humans may reduce global wild-animal populations, although this is far from obvious.

Comment by brian_tomasik on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-12-28T01:08:25.481Z · score: 3 (3 votes) · EA · GW

Interesting points. :) I think there could be substantial differences in policy between 10% support and 100% support for MCE depending on the costs of appeasing this faction and how passionate it is. Or between 1% and 10% support for MCE applied to more fringe entities.

philosophically sophisticated people can still have fairly strange values by your own lights, but it seems like there's more convergence.

I'm not sure if sophistication increases convergence. :) If anything, people who think more about philosophy tend to diverge more and more from commonsense moral assumptions.

Yudkowsky and I seem to share the same metaphysics of consciousness and have both thought about the topic in depth, yet we occupy almost antipodal positions on the question of how many entities we consider moral patients. I tend to assume that one's starting points matter a lot for what views one ends up with.

Comment by brian_tomasik on Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply · 2018-04-30T11:03:08.695Z · score: 1 (1 votes) · EA · GW

I think the economist guesses are from Compassion, By the Pound, though I also don't have a copy of either book.

no peer-reviewed articles or analyses being quoted

Yeah. Matheny (2003) is a journal article on the same topic, though it wasn't published in an economics journal.

they could easily export whatever is not locally consumed (as some EU countries do).

Perhaps that would reduce local meat production in the destination countries.

Comment by brian_tomasik on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-24T16:56:26.179Z · score: 6 (6 votes) · EA · GW

You raise some good points. (The following reply doesn't necessarily reflect Jacy's views.)

I think the answers to a lot of these issues are somewhat arbitrary matters of moral intuition. (As you said, "Big part of it seems arbitrary.") However, in a sense, this makes MCE more important rather than less, because it means expanded moral circles are not an inevitable result of better understanding consciousness/etc. For example, Yudkowsky's stance on consciousness is a reasonable one that is not based on a mistaken understanding of present-day neuroscience (as far as I know), yet some feel that Yudkowsky's view about moral patienthood isn't wide enough for their moral tastes.

Another possible reply (that would sound better in a political speech than the previous reply) could be that MCE aims to spark discussion about these hard questions of what kinds of minds matter, without claiming to have all the answers. I personally maintain significant moral uncertainty regarding how much I care about what kinds of minds, and I'm happy to learn about other people's moral intuitions on these things because my own intuitions aren't settled.

E.g. we can think about DNA-based evolution as a large computational/optimization process - suddenly "wild animal suffering" has a purpose and traditional environment and biodiversity protection efforts make sense.

Or if we take a suffering-focused approach to these large systems, then this could provide a further argument against environmentalism. :)

If the human cognitive processes are in the privileged position of creating meaning in this universe ... well, then they are in the privileged position, and there is a categorical difference between humans and other minds.

I selfishly consider my moral viewpoint to be "privileged" (in the sense that I prefer it to other people's moral viewpoints), but this viewpoint can have in its content the desire to give substantial moral weight to non-human (and human-but-not-me) minds.

Comment by brian_tomasik on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-22T21:23:57.496Z · score: 8 (8 votes) · EA · GW

I tend to think of moral values as being pretty contingent and pretty arbitrary, such that what values you start with makes a big difference to what values you end up with even on reflection. People may "imprint" on the values they receive from their culture to a greater or lesser degree.

I'm also skeptical that sophisticated philosophical-type reflection will have significant influence over posthuman values compared with more ordinary political/economic forces. I suppose philosophers have sometimes had big influences on human politics (religions, Marxism, the Enlightenment), though not necessarily in a clean "carefully consider lots of philosophical arguments and pick the best ones" kind of way.

Comment by brian_tomasik on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T13:02:23.831Z · score: 6 (6 votes) · EA · GW

I'm fairly skeptical of this personally, partly because I don't think there's a fact of the matter when it comes to whether a being is conscious.

I would guess that increasing understanding of cognitive science would generally increase people's moral circles if only because people would think more about these kinds of questions. Of course, understanding cognitive science is no guarantee that you'll conclude that animals matter, as we can see from people like Dennett, Yudkowsky, Peter Carruthers, etc.

Comment by brian_tomasik on Where can I donate to support insect welfare? · 2018-01-02T08:06:59.725Z · score: 11 (11 votes) · EA · GW

Or maybe do donate to AMF. :)

Comment by brian_tomasik on Where can I donate to support insect welfare? · 2017-12-31T13:51:09.344Z · score: 3 (5 votes) · EA · GW

Nice points. :)

it seems doubtful there are any interventions for insects that will substantially change their quality of life without also making a big difference in the total population

One exception might be identifying insecticides that are less painful than existing ones while having roughly similar effectiveness, broad/narrow-spectrum effects, etc. Other forms of humane slaughter, such as on insect farms, would also fall under this category.

Comment by brian_tomasik on Where can I donate to support insect welfare? · 2017-12-31T11:15:20.663Z · score: 2 (4 votes) · EA · GW

It's a big topic area, and I think we need articles on lots of different issues. The overview piece for invertebrate sentience was just a small first step. Philosophers, neuroscientists, etc. have written thousands of papers debating criteria for sentience, so I don't expect such issues to be resolved soon. In the meanwhile, cataloguing what abilities different invertebrate taxa have seems valuable. But yes, some awareness of the arguments in philosophy of mind and how they bear on the empirical research is useful. :)

Comment by brian_tomasik on Where can I donate to support insect welfare? · 2017-12-31T04:51:27.909Z · score: 7 (9 votes) · EA · GW

Great overview!

Yeah, Wild-Animal Suffering Research's plans include some invertebrate components, especially Georgia Ray’s topics.

If you're also concerned about reducing the suffering of small artificial minds in the far future, Foundational Research Institute may be of interest.

Comment by brian_tomasik on Why I think the Foundational Research Institute should rethink its approach · 2017-09-21T03:37:07.803Z · score: 2 (1 votes) · EA · GW

Interesting. :)

Daswani and Leike (2015) also define (p. 4) happiness as the temporal difference error (in an MDP), and for model-based agents, the definition is, in my interpretation, basically the common Internet slogan that "happiness = reality - expectations". However, the authors point out (p. 2) that pleasure = reward != happiness. This still leaves open the issue of what pleasure is.

Personally I think pleasure is more morally relevant. In Tomasik (2014), I wrote (p. 11):

After training, dopamine spikes when a cue appears signaling that a reward will arrive, not when the reward itself is consumed [Schultz et al., 1997], but we know subjectively that the main pleasure of a reward comes from consuming it, not predicting it. In other words, in equation (1), the pleasure comes from the actual reward r, not from the amount of dopamine δ.

In this post commenting on Daswani and Leike (2015), I said:

I personally don't think the definition of "happiness" that Daswani and Leike advance is the most morally relevant one, but the authors make an interesting case for their definition. I think their definition corresponds most closely with "being pleased of one's current state in a high-level sense". In contrast, I think raw pleasure/pain is most morally significant. As a simple test, ask whether you'd rather be in a state where you've been unexpectedly notified that you'll get a cookie in a few minutes or whether you'd rather be in the state where you actually eat the cookie after having been notified a few minutes earlier. Daswani and Leike's definition considers being notified about the cookie to be happiness, while I think eating the cookie has more moral relevance.
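
To make the r-versus-δ contrast concrete, here's a toy TD(0) simulation (my own sketch with assumed parameters, not code from Daswani and Leike or from Schultz et al.) of a cue that reliably predicts a reward one step later:

```python
# Two-state episode: CUE -> REWARD (terminal). The agent receives r = 1 on the
# CUE -> REWARD transition. We track the TD error (the "dopamine-like" signal)
# at cue onset and at reward delivery as TD(0) learning proceeds.
alpha, gamma, r = 0.1, 1.0, 1.0
V_baseline, V_cue = 0.0, 0.0   # values of the pre-cue state and the cue state

for episode in range(1, 101):
    # Cue appears unexpectedly: TD error = jump from baseline value to cue value.
    delta_at_cue = 0.0 + gamma * V_cue - V_baseline
    # Reward arrives one step later (the terminal state has value 0).
    delta_at_reward = r + gamma * 0.0 - V_cue
    V_cue += alpha * delta_at_reward
    # (V_baseline stays ~0 because the cue arrives at unpredictable times.)
    if episode in (1, 10, 100):
        print(f"episode {episode:>3}: delta at cue = {delta_at_cue:.2f}, "
              f"delta at reward = {delta_at_reward:.2f}, r at reward = {r}")
```

As training proceeds, the δ spike migrates from reward delivery to cue onset, while the reward r itself is always delivered at consumption time; that gap is the difference between "happiness" in Daswani and Leike's sense and the raw pleasure I'd put moral weight on.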


Dayan mentioned that liking may even be an epiphenomenon of some things that are going on in the brain when we eat food/have sex etc, similar to how the specific flavour of pleasure we get from listening to music is such an epiphenomenon.

I'm not sure I understand, but I wrote a quick thing here inspired by this comment. Do you think that's what he meant? If so, may I attribute the idea to him/you? It seems fairly plausible. :) Studying what separates red from blue might help shine light on this topic.

Comment by brian_tomasik on S-risk FAQ · 2017-09-19T00:11:11.796Z · score: 3 (3 votes) · EA · GW

the sort of thing we were pointing at in the late 90s before we started talking about x-risk

I'd be interested to hear more about that if you want to take the time.

Comment by brian_tomasik on Why I think the Foundational Research Institute should rethink its approach · 2017-08-27T23:42:29.462Z · score: 2 (1 votes) · EA · GW

So we're left with an agent that decides initially that it won't do anything at all (not even updating its beliefs) because it doesn't want to be outside of the room and then remains inactive. The question arises if that's an agent at all and if it's meaningfully different from unconsciousness.

Hm. :) Well, what if the agent did do stuff inside the room but still decided not to go out? We still wouldn't be able to tell if it was experiencing net positive, negative, or neutral welfare. Examples:

  1. It's winter. The agent is cold indoors and is trying to move to the warm parts of the room. We assume its welfare is net negative. But it doesn't go outside because it's even colder outside.

  2. The agent is indoors having a party. We assume it's experiencing net positive welfare. It doesn't want to go outside because the party is inside.

We can reproduce the behavior of these agents with reward/punishment values that are all positive numbers, all negative numbers, or a combination of the two. So if we omit the higher-level thoughts of the agents and just focus on the reward numbers at an abstract level, it doesn't seem like we can meaningfully distinguish positive or negative welfare. Hence, the sign of welfare must come from the richer context that our human-centered knowledge and evaluations bring?
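
To illustrate: in a setting where every policy runs for the same number of time steps, adding a constant to every reward shifts each policy's return by the same amount, so the agent's behavior is unchanged whether the numbers are all positive, all negative, or mixed. A small sketch with made-up rewards:

```python
from itertools import product

# Toy fixed-horizon "room" problem: at each of 3 steps the agent picks an action.
# Per-step rewards are invented for illustration.
base_rewards = {"stay_by_the_fire": 2.0, "move_to_cold_corner": -1.0}

def best_action_sequence(rewards, horizon=3):
    """Exhaustively pick the action sequence with the highest total reward."""
    return max(product(rewards, repeat=horizon),
               key=lambda seq: sum(rewards[a] for a in seq))

for shift in (0.0, -10.0, +10.0):   # mixed, all-negative, and all-positive rewards
    shifted = {a: v + shift for a, v in base_rewards.items()}
    print(shift, best_action_sequence(shifted))  # same behavior every time
```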

Of course, qualia nonrealists already knew that the sign and magnitude of an organism's welfare are things we make up. But most people can agree upon, e.g., the sign of the welfare of the person at the party. In contrast, there doesn't seem to be a principled way that most people would agree upon for us to attribute a sign of welfare to a simple RL agent that reproduces the high-level behavior of the person at the party.

Comment by brian_tomasik on Why I think the Foundational Research Institute should rethink its approach · 2017-08-26T23:24:02.745Z · score: 1 (1 votes) · EA · GW

Your explanation was clear. :)

acting vigorously doesn't say anything about whether the agent is currently happy

Yeah, I guess I meant the trivial observation that you act vigorously if you judge that doing so has higher expected total discounted reward than not doing so. But this doesn't speak to whether, after making that vigorous effort, your experiences will be net positive; they might just be less negative.

Of course, if you don't like it outside the room at all, you'll never press the lever - so there is a 'zero point' in terms of how much you like it outside.

...assuming that sticking around inside the room is neutral. This gets back to the "unwarranted assumption that the agent is at the zero-point before it presses the lever."

The theory that assumes nonexistence is the zero-point kind of does the same thing though.

Hm. :) I feel like there's a difference between (a) an agent inside the room who hasn't yet pressed the lever to get out and (b) the agent not existing at all. For (a), it seems we ought to be able to give a (qualia and morally nonrealist) answer about whether its experiences are positive or negative or neutral, while for (b), such a question seems misplaced.

If it were a human in the room, we could ask that person whether her experiences before lever pressing were net positive or negative. I guess such answers could vary a lot between people based on various cultural, psychological, etc. factors unrelated to the activity level of reward networks. If so, perhaps one position could be that the distinction between positive vs. negative welfare is a pretty anthropomorphic concept that doesn't travel well outside of a cognitive system capable of making these kinds of judgments. Intuitively, I feel like there is more to the sign of one's welfare than these high-level, potentially idiosyncratic evaluations, but it's hard to say what.

I suppose another approach could be to say that the person in the room definitely is at welfare 0 (by fiat) based on lack of reward or punishment signals, regardless of how the person evaluates her welfare verbally.

Comment by brian_tomasik on Why I think the Foundational Research Institute should rethink its approach · 2017-08-26T04:23:02.358Z · score: 1 (1 votes) · EA · GW

Thanks!! Interesting. I haven't read the linked papers, so let me know if I don't understand properly (as I probably don't).

I've always thought of simple RL agents as getting a reward at fixed time intervals no matter what they do, in which case they can't act faster or slower. For example, if they skip pressing a lever, they just get a reward of 0 for that time step. Likewise, in an actual animal, the animal's reward neurons don't fire during the time when the lever isn't being pressed, which is equivalent to a reward of 0.

Of course, animals would prefer to press the lever more often to get a positive reward rather than a reward of 0, but this would be true whether the lever gave positive reward or merely relief from punishment. For example, maybe the time between lever presses is painful, and the pressed lever is merely less painful. This could be the experience of, e.g., a person after a breakup consuming ice cream scoops at a higher rate than normal to escape her pain: even with the increased rate of ice cream intake, she may still have negative welfare, just less negative. It seems like vigor just says that what you're doing is better than not doing it?

For really simple RL agents like those living in Grid World, there is no external clock. Time is sort of defined by when the agent takes its next step. So it's again not clear if a "rate of actions" explanation can help here (but if it helps for more realistic RL agents, that's cool!).

This answer says that for a Markov Decision Process, "each action taken is done in a time step." So it seems like a time step is defined as the interval between one action and the next?