Posts

What might decorticate rats tell us about the distribution of consciousness? 2022-07-20T12:00:50.332Z
Fanatical EAs should support very weird projects 2022-06-30T12:07:06.055Z
The importance of getting digital consciousness right 2022-06-13T10:41:11.524Z
Notes on the risks and benefits of kidney donation 2021-11-26T15:02:57.540Z

Comments

Comment by Derek Shiller on Why you should consider becoming a nondirected kidney donor · 2022-08-08T16:12:29.639Z · EA · GW

you have to have substantially above average kidney functioning in order to be eligible

When I looked into this, I came away with the impression that this is not true. According to the formal requirements, you don't need to have above average kidney function to be eligible. You need to have no recorded history of kidney disease and to pass a test that weakly indicates that you are currently in the healthy range.

  • The healthy range isn't tied to your age. A young person with healthy kidney function can probably expect to have reduced kidney function in old age. The cutoff for donation is a GFR of 90 (though doctors can accept candidates with GFRs as low as 60 in some cases). If your GFR is 90 and you're 80 years old, you've got great kidney function. If your GFR is 90 and you're 30, you don't have to worry about kidney problems now, but you may have reason to be wary about the future.

  • For the purposes of donation, your kidney function is measured by creatinine clearance. This is an easy test to do, but isn't particularly accurate. Your measurement can change significantly from test to test. There may be significant racial and dietary effects that aren't considered in the process.

  • Current kidney function isn't a fantastic guide to future kidney function. Your kidneys can work at different levels. Their full capacity is a function of their nephron count, which varies hugely between individuals. If you don't have a lot of nephrons but they're working overtime, you might have a high GFR even though your prospects for the future aren't great. There is no good way to assess your nephron count.

Individual doctors have leeway to reject people who they think are at particular risk, but they have moral and financial incentives to err on the side of accepting borderline acceptable candidates and the formal requirements don't require exceptional kidney function.

Comment by Derek Shiller on What reason is there NOT to accept Pascal's Wager? · 2022-08-04T16:11:41.733Z · EA · GW

This might take care of the wager's implications for what we should try to believe ourselves, but it would probably have weird implications of its own (for fanatical EAs, at least). It might suggest that critical thinking courses have a much higher expected utility than bed nets.

Comment by Derek Shiller on What might decorticate rats tell us about the distribution of consciousness? · 2022-07-21T16:50:21.245Z · EA · GW

Do you mean people in general? Or, in EA/neuroscience/consciousness research, ...

Maybe this was unfair. I meant that these issues touch on comparative neuroanatomy, evolutionary neuroscience, animal experimentation, and the philosophy of consciousness, and few people (in academia or out) have much experience in all of them. I also think consciousness is just really hard to think about.

Could you share any resources that suggest otherwise?

The Merker piece I cite is the prime example of a denial in the contemporary literature. Merker draws inspiration from Penfield and Jasper, who had a similar view in the middle of the last century.

How does k- and r-selection relate to a species' cortex properties (and to consciousness)?

Social animals tend to have significantly larger brains, so I expect k-selection species would have larger brains, and probably larger cortices too, though I'm not entirely sure about how k-selection species compare with closely related r-selection species. Social animals may have a need for mental flexibility and empathy that helps account for the value of consciousness, but that is pretty speculative.

Only rats were studied, so conclusions about various brains and nervous systems cannot be stated. Is it that this reasoning could suggest that primates would be more conscious than other species, but not that these species would be non-conscious, because the cortex does affect rats' behavior somewhat? Also, even if the decorticate rat behaves similarly to one with a cortex, it can be that it is less conscious, for example cannot feel closeness with family as much or in specific ways?

I'm pretty confident that humans and other primates have a greater range of possible conscious experiences than rats, and that the complexity of our cortex has something to do with it. The big question is whether the cortex does something that allows for consciousness or whether it just does something that shapes conscious experiences.

Are you aware of the Cambridge Declaration on Consciousness? What do you think about it?

I think it is a great piece of marketing, but not based on great evidence. There remains a ton of disagreement on which capacities and which brain regions are responsible for consciousness in human beings. It is overly presumptive to declare that we're confident that other animals have exactly what's necessary for consciousness. The best arguments for animal consciousness come from behavior, not comparative neuroscience.

Comment by Derek Shiller on What might decorticate rats tell us about the distribution of consciousness? · 2022-07-21T11:56:44.568Z · EA · GW

For Possibility 3, I guess you mean more specifically “Decorticate rats are not conscious, and neither are intact rats”, correct?

That was what I meant when I started writing the section. When I finished, I decided I wanted to hedge my claims to not completely exclude the possibility you mention. In retrospect, I don't think that hedge makes a lot of sense in the context of my overall argument.

Are deep-RL agents conscious? Well, maybe you think they are. But I and lots of people think they aren’t. At the very least, you need to make that argument, it doesn’t go without saying. And if we can’t unthinkingly infer consciousness from behavior in deep RL, then we likewise can’t unthinkingly infer consciousness from seeing not-very-different behaviors in decorticate rats (or any other animals).

It would be a mistake to infer from such behavior to consciousness without making some assumptions about implementation. In the typical case, when people infer consciousness in animals on the basis of similar behaviors, I take it that they implicitly assume something about similarity in brain structures that would account for similarities in the behaviors. This doesn't seem to hold for RL agents who might use radically different architectures to produce the same ends. It also seems to hold only to a much lesser extent in animals with different sorts of brains like octopi (or possibly, given these studies, rats).

I'm not completely unsympathetic with the thought that the cortex is necessary for consciousness in rats.

Faculties for consciousness might exist in the cortex just to help with complex action planning; when the cortex is lost the behavioral effects are minor and revealed only by studies requiring complex actions. If it is plausible that rats have conscious experiences produced solely within their cortex, it would undermine my claim about the overall upshot of these studies.

I do think it is somewhat counterintuitive for consciousness to exist in rats but not be necessary for basic behaviors. E.g., if they feel pain but don't need to feel pain in order to be motivated to avoid noxious stimuli.

I also am a bit confused by your suggestion that decorticate-from-birth rats are wildly different from decorticate-from-birth primates. Merker 2007a argues that humans with hydranencephaly are basically decorticate-from-birth, and discusses all their behavior on p79, which very much seemed conscious to both Merker and the parents of these children, just as decorticate rats seem conscious to you.

It has been a while since I've looked through that literature. My recollection was that the case was very unconvincing, and a lot of it looked like cherry-picked examples and biased reasoning. The important point is that decorticate-from-birth humans don't have the ability to act nearly to the extent that rats do. They can orient towards interesting phenomena and have some control over muscles for things like smiling or kicking, but they can't walk into the kitchen to get themselves a snack. I also think it is important that rats exhibit these capacities even when they have lost their cortex in adulthood.

mammal cortex seems to have a lot in common with bird pallium, such that "all mammals are conscious and no birds are conscious"

I've heard the similarity claim a lot, but I've never been able to track down very convincing details. Birds are clearly very smart, and their palliums have evolved to solve the same sorts of problems as our cortices, but I would be surprised if there were strong neuroscientific grounds, independent of behavior, for thinking that if one group were conscious, the other would be too.

As for whether anyone thinks that, Brian Key and Jack Rose have denied consciousness to fish specifically because of the differences in their forebrains. I'm not sure what they would say about birds.

Comment by Derek Shiller on What might decorticate rats tell us about the distribution of consciousness? · 2022-07-21T00:03:08.195Z · EA · GW

The contrast between the apparent effects of partial and total damage is perplexing. The cortex surely does a lot of work in sensory processing and action selection, even if it isn't strictly necessary for a lot of behaviors. This sort of thing makes me somewhat wary of trusting the decortication studies too much. That said, it isn't obvious to me why they should be misleading.

The only study of nociception in decorticate rats is included here, in the learning section:

Kolb, B., & Whishaw, I. Q. (1981). Decortication of rats in infancy or adulthood produced comparable functional losses on learned and species-typical behaviors. Journal of Comparative and Physiological Psychology, 95(3), 468–483.

Decorticate rats do learn to avoid a source of shocks, apparently very quickly, but do not engage in a response common to the control rats, burying the source in sawdust. Whishaw suggests this may be the result of motor impairments.

Merker offers one way to square the difference that makes some sense to me. He points to the Sprague effect, in which some deficits caused by loss of the visual cortex of one hemisphere can to some extent be mitigated by damage to the contralateral superior colliculus. Merker suggests that what is going on is that the loss of the visual cortex in one hemisphere creates an imbalance, and that balance is partially restored by the damage to the contralateral superior colliculus.

The frontal cortex takes part in action selection with input from the ACC and insular cortex. If it becomes disinhibited by the loss of the ACC / IC, then it may exert an overriding influence on midbrain faculties for action selection that could otherwise appropriately respond to noxious stimuli. Perhaps the midbrain gets a vote in what to do, but that vote is overwhelmed by the disinhibited cortex.

Comment by Derek Shiller on What might decorticate rats tell us about the distribution of consciousness? · 2022-07-20T22:32:12.610Z · EA · GW

I only skimmed, so may have missed it, but are these conclusions based only on behaviour shortly after (within days of?) decortication, or also much longer?

There are a number of studies, and I haven't gone through them but I expect the details will differ. Whishaw is clear though that a lot of basic abilities return in the hours following surgery. It isn't as though the rats return to the helplessness of infancy for a week or so. (Though it is also clear that some part of the return to normal function is a result of re/learning to cope with their deficits.)

Could functional neuroplasticity have allowed previously unconscious brain structures to become conscious after some time?

I think that isn't crazy, even without neuroplasticity. It might be like what many people think about the case of split brain patients, where hemispheric integration disrupts separate consciousnesses that are revealed following a corpus callosotomy. This seems less likely in the case of decortication because my impression is that the midbrain structures aren't integrated with the cortex as completely as each hemisphere is with the other, but I could be wrong about that.

Comment by Derek Shiller on What might decorticate rats tell us about the distribution of consciousness? · 2022-07-20T15:59:23.179Z · EA · GW

Thanks for highlighting these concerns. This is something I fretted about before writing this, and I condensed my thoughts into footnote 1. Let me expand on them here:

1.) These sorts of studies are long out of vogue. I don't believe my engaging with them (especially on the EA forum, which confers little academic prestige) will encourage any similar experiments to be carried out in the future. I also don't think it will affect the status of the researchers or the trajectory of their careers.

2.) There are a huge number of experiments that are callously harmful to sentient creatures like rats, as you note. Decortication studies stand out because they involve harms to bodily and mental integrity, which we find particularly repulsive, but many experiments in psychology, medicine, and neuroscience routinely involve killing their test subjects. I'm hesitant to disengage from such research (or to refuse to benefit from it, or let other animals benefit from it) entirely.

3.) All sorts of work indirectly contributes to animal suffering. It is conceivable to me that more suffering is caused by poisoning rats / mice around your average university building, or by providing food for the average university conference, than was caused by these studies to the animals involved. Avoiding engaging with work that involves avoidable animal suffering is extremely difficult. I don't think it makes sense to disengage from work just because the harms it causes are more obvious.

4.) Understanding consciousness is important for cause prioritization. These sorts of studies have the potential to tell us a lot that might bear on how we think about projects aiming to benefit fish or insects. If they can help us direct funds more effectively for animals, we should pay attention to them.

5.) Animal activists have a reputation for naivete and credulity. Engaging substantively with science, which necessarily includes studies that cruelly harm animals, may help us to be taken more seriously.

Comment by Derek Shiller on Fanatical EAs should support very weird projects · 2022-07-09T15:10:15.404Z · EA · GW

Thanks for sharing this. My (quick) reading is that the idea is to treat expected value calculations not as gospel, but as if they are experiments with estimated error intervals. These experiments should then inform, but not totally supplant, our prior. That seems sensible for GiveWell's use cases, but I don't follow the application to Pascal's mugging cases or better-supported fanatical projects. The issue is that they don't have expected value calculations that make sense to regard as experiments.

Perhaps the proposal is that we should have a gut estimate and a gut confidence based on not thinking through the issues much, and another estimate based on making some guesses and plugging in the numbers, and we should reconcile those. I think this would be wrong. If anything, we should take our Bayesian prior to be our estimate after thinking through all the issues (but perhaps before plugging in all of the exact numbers). If you've thought through all the issues above, I think it is appropriate to allow an extremely high expected value for fanatical projects even before trying to make a precise calculation. Or at least it is reasonable for your prior to be radically uncertain.

Comment by Derek Shiller on Person-affecting intuitions can often be money pumped · 2022-07-07T16:45:39.718Z · EA · GW

Interesting. It reminds me of a challenge for denying countable additivity:

God runs a lottery. First, he picks two positive integers at random (each integer has an equal and zero probability of being picked, violating countable additivity). Then he shows one of the two to you at random. You know in advance that there is a 50% chance you'll see the higher one (maybe he flips a coin), but no matter what it is, after you see it you'll be convinced it is the lower one.
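
To spell out where the conflict comes from (this is just the standard way of presenting the paradox, in my own notation):

```latex
% Countable additivity fails for a 'uniform' pick of a positive integer N:
\[
1 \;=\; P\Big(\bigcup_{n=1}^{\infty} \{N = n\}\Big) \;\neq\; \sum_{n=1}^{\infty} P(N = n) \;=\; 0 .
\]
% And for any shown number x, only finitely many positive integers lie below x while
% infinitely many lie above it, so
\[
P(\text{unseen number} > x \mid \text{shown number} = x) \;=\; 1 \quad \text{for every } x,
\]
% even though, by symmetry, your prior probability of having been shown the higher
% number was 1/2.
```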

I'm inclined to think that this is a problem with infinities in general, not with unbounded utility functions per se.

Comment by Derek Shiller on Person-affecting intuitions can often be money pumped · 2022-07-07T15:28:47.720Z · EA · GW

unbounded social welfare functions can be Dutch booked/money pumped and violate the sure-thing principle

Do you have an example?

Comment by Derek Shiller on The importance of getting digital consciousness right · 2022-07-05T23:10:15.513Z · EA · GW

I think the greater potential concern is false-positives on consciousness, not false negatives

This is definitely a serious worry, but it seems much less likely to me.

One way this could happen is if we build large numbers of general purpose AI systems that we don't realize are conscious and/or can suffer. However, I think that suffering is a pretty specialized cognitive state that was designed by natural selection for a role specific to our cognitive limitations and not one we are likely to encounter by accident while building artificial systems. (It seems more likely to me that digital minds won't suffer, but will have states that are morally relevant that we don't realize are morally relevant because we're so focused on suffering.)

Another way this could happen is if we artificially simulate large numbers of biological minds in detail. However, it seems very unlikely to me that we will ever run those simulations and very unlikely that we miss the potential for accidental suffering if we do. At least in the short term, I expect most plausible digital minds will be intentionally designed to be conscious, which I think makes the risks of mistakenly believing they're conscious more of a worry.

That said, I'm wary of trying to adjudicate which is more concerning for topics that are still so speculative.

proposed "p-risk" after "p-zombies

I kinda like "z-risk", for similar reasons.

Comment by Derek Shiller on Moral weights for various species and distributions · 2022-07-03T22:55:48.182Z · EA · GW

This is an interesting exercise. I imagine that Luke's estimates were informed by his uncertainty about multiple incompatible theories / considerations and so any smooth distribution won't properly reflect the motivations that led to those estimates. Do you think these results suggest anything about what a lumpy distribution would say?

Comment by Derek Shiller on Fanatical EAs should support very weird projects · 2022-07-02T15:26:56.949Z · EA · GW

Your utility function can instead be bounded wrt the difference you make relative to some fixed default distribution of outcomes ("doing nothing", or "business as usual") or in each pairwise comparison (although I'm not sure this will be well-behaved). For example, take all the differences in welfare between the two random variable outcomes corresponding to two options, apply some bounded function of all of these differences, and finally take the expected value.
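
If I'm reading the proposal right, the evaluation of an option a would look something like the following (my own formalization, which may not capture every detail; f is just some bounded increasing function, e.g. arctan, chosen for illustration):

```latex
\[
V(a) \;=\; \mathbb{E}\!\left[\, f\big( W_a - W_d \big) \,\right],
\]
% where W_a is total welfare if option a is taken, W_d is total welfare under the
% fixed default ('doing nothing' / 'business as usual'), and f is bounded, so no
% single comparison can contribute unboundedly much.
```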

Consider the following amended thought experiment: (changes in bold)

Walking home one night from a lecture on astrophysics where you learned about the latest research establishing the massive size of the universe, you come across a child drowning in a pond. The kid is kicking and screaming trying to stay above the water. You can see the terror in his eyes and you know that it's going to get painful when the water starts filling his lungs. You see his mother, off in the distance, screaming and running. Something just tells you she'll never get over this. It will wreck her marriage and her career. **There are two buttons near you. Pressing either will trigger an event that adds really good lives to the universe. (The buttons will create the exact same lives and only function once.) The second also causes a life preserver to be tossed to the child. The second button is slightly further from you, and you'd have to strain to reach it.** And there's a real small chance that solipsism is true, in which case your whims matter much more (we're not near the bounds) and satisfying them will make a much bigger difference to total value. The altruistic thing to do is to not make the **additional effort to reach the further button**, which could be mildly unpleasant, even though it very likely means the kid will die an agonizing death and his mother will mourn for decades.

Comment by Derek Shiller on Fanatical EAs should support very weird projects · 2022-07-02T15:11:42.991Z · EA · GW

Thanks for clarifying! I think I get what you're saying. This certainly is a rabbit hole. But to bring it back to the points that I initially tried to make, I'm kind of struggling to figure out what the upshot would be. The following seem to me to be possible take-aways:

1.) While the considerations in the ballpark of what I've presented do have counterintuitive implications (if we're spawning infinite divisions every second, that must have some hefty implications for how we should and shouldn't act, mustn't it?), fanaticism per se doesn't have any weird implications for how we should be behaving because it is fairly likely that we're already producing infinite amounts of value and so long shots don't enter into it.

2.) Fanaticism per se doesn't have any weird implications for how we should be behaving, because it is fairly likely that the best ways to produce stupendous amounts of value happen to align closely with what commonsense EA suggests we should be doing anyway. (I like Michael St. Jules' approach to this, which says we should promote the long-term future of humanity so we have the chance to research possible transfinite amounts of value.)

3.) These issues are so complicated that there is no way to know what to do if we're going fanatical, so even if trying to create branches appears to have more expected utility than ordinary altruistic actions, we should stick to the ordinary altruistic actions to avoid opening up that can of worms.

Comment by Derek Shiller on Fanatical EAs should support very weird projects · 2022-07-01T22:00:05.046Z · EA · GW

Just because the universe is very big doesn't mean we are very near the bound. We'd only be very near the bound if the universe was both very big and very perfect, i.e. suffering, injustice, etc. all practically nonexistent as a fraction of things happening.

My thought was that you'd need a large universe consisting of people like us to be very near the bound, otherwise you couldn't use boundedness to get out of assigning a high expected value to the example projects I proposed. There might be ways of finessing the dimensions of boundedness to avoid this sort of concern, but I'm skeptical (though I haven't thought about it much).

I also find it methodologically dubious to adjust your value function to fit what actions you think you should do. It feels to me like your value function should be your value function, and you should adjust your decision rules if they produce a bad verdict. If your value function is bounded, so be it. But don't cut it off to make expected value maximization more palatable.

If the math checks out then I'll keep my bounded utility function but also maybe add in some nonconsequentialist-ish stuff to cover this case and cases like it.

I can see why you might do this, but it feels strange to me. The reason to save the child isn't that it's a good thing for the child not to drown, but that there's some rule that you're supposed to follow that tells you to save the kid? Do these rules happen to require you to act in ways that basically align with what a total utilitarian would do, or do they have the sort of oddities that afflict deontological views (e.g. don't lie to the murderer at the door)?

Comment by Derek Shiller on Fanatical EAs should support very weird projects · 2022-07-01T12:03:02.781Z · EA · GW

Is there any plausible path to producing (or even ) amounts of value with the standard metaphysical picture of the world we have? Or are you thinking that we may discover that it is possible and so should aim to position ourselves to make that discovery?

Comment by Derek Shiller on Fanatical EAs should support very weird projects · 2022-06-30T22:14:22.463Z · EA · GW

Big fan of your sequence!

I'm curious how you think about bounded utility functions. It's not something I've thought about much. The following sort of case seems problematic.

Walking home one night from a lecture on astrophysics where you learned about the latest research establishing the massive size of the universe, you come across a child drowning in a pond. The kid is kicking and screaming trying to stay above the water. You can see the terror in his eyes and you know that it's going to get painful when the water starts filling his lungs. You see his mother, off in the distance, screaming and running. Something just tells you she'll never get over this. It will wreck her marriage and her career. There's a life preserver in easy reach. You could save the child without much fuss. But you recall from your lecture the oodles and oodles of people living on other planets and figure that we must be very near the bound of total value for the universe, so the kid's death can't be of more than the remotest significance. And there's a real small chance that solipsism is true, in which case your whims matter much more (we're not near the bounds) and satisfying them will make a much bigger difference to total value. The altruistic thing to do is to not make the effort, which could be mildly unpleasant, even though it very likely means the kid will die an agonizing death and his mother will mourn for decades.

That seems really wrong. Much more so than thinking that fanaticism is unreasonable.

Comment by Derek Shiller on Fanatical EAs should support very weird projects · 2022-06-30T21:50:55.343Z · EA · GW

However, if you compare "MWI where branching doubles the amount of stuff that matters" to "MWI where there's an infinite sea of stuff and within that sea, there's objective reality fluid or maybe everything's subjective and something something probabilities are merely preferences over simplicity," then it's entirely unclear how to compare these two pictures. (Basically, the pictures don't even agree on what it means to exist, let alone how to have impact.)

I'm not sure I really understand the response. Is it that we shouldn't compare the outcomes between, say, a Bohmian interpretation and my simplistic MW interpretation, but between my simplistic MW interpretation and a more sophisticated and plausible MW interpretation, and those comparisons aren't straightforward?

If I've got you right, this seems to me to be a sensible response. But let me try to push back a little. While you're right that it may be difficult to compare different metaphysical pictures considered as counterfactual, I'm only asking you to compare metaphysical pictures considered as actual. You know how great it actually is to suck on a lollipop? That's how great it is to suck on a lollipop whether you're a worm navigating through branching worlds or a quantum ghost whose reality is split across different possibilities or a plain old Bohmian hunk of meat. Suppose you're a hunk of meat, how great would it be if you were instead a worm? Who knows and who cares! We don't have to make decisions for metaphysical possibilities that are definitely not real and where sucking on a lollipop isn't exactly this great.

Comment by Derek Shiller on Fanatical EAs should support very weird projects · 2022-06-30T21:36:18.358Z · EA · GW

Generally the claims here fall prey to the fallacy of unevenly applying the possibility of large consequences to some acts where you highlight them and not to others, such that you wind up neglecting more likely paths to large consequences.

Could you be more specific about the claims that I make that involve this fallacy? This sounds to me like a general critique of Pascal's mugging, which I don't think fits the case that I've made. For instance, I suggested that the simple MWI has a probability ~ and would mean that it is trivially possible if true to generate in value, where v is all the value currently in the world. The expected value of doing things that might cause 1000 successive branchings is ~ where v is all the value in the world. Do you think that there is a higher probability way to generate a similar amount of value?
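
Roughly, the structure of the calculation is this (the specific figures are elided above; the numbers below are purely illustrative placeholders of my own):

```python
# Illustrative sketch of the expected-value structure described above.
# p is a placeholder probability for the simple MWI (the actual figure is elided above),
# and v is the total value currently in the world, normalized here to 1.
p = 1e-3           # assumed for illustration only
v = 1.0            # all current value, normalized
branchings = 1000  # successive branchings, each doubling total value on the simple MWI

expected_gain = p * (2 ** branchings) * v
print(f"expected gain: about {expected_gain:.2e} times the world's current value")
```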

then making a much more advanced and stable civilization is far more promising for realizing things related to that.

I suppose your point might be something like: absurdist research is promising, and that is precisely why we need humanity to spread throughout the stars. Just think of how many zany long-shot possibilities we'll get to pursue! If so, that sounds fair to me. Maybe that is what the fanatic would want. It's not obvious that we should focus on saving humanity for now and leave the absurdist research for later. Asymmetries in time might make us much more powerful now than later, but I can see why you might think that. I find it a rather odd motivation though.

Comment by Derek Shiller on Fanatical EAs should support very weird projects · 2022-06-30T16:22:39.784Z · EA · GW

Lesswrong related to how "the number of pigs in gestation crates (at least) doubles!" is probably a confused way of thinking.

Sure, but how small is the probability that it isn't? It has to be really small to counteract the amount of value doubling would provide.

Comment by Derek Shiller on Fanatical EAs should support very weird projects · 2022-06-30T16:19:49.977Z · EA · GW

I meant to suggest that our all-things-considered assignments of probability and value should support projects like the ones I laid out. Those assignments might include napkin calculations, but if we know we overestimate those, we should adjust accordingly.

(g) extremely large and extremely small numbers should be sandboxed (e.g., capped in the influence they can have on the conclusion)

This sounds to me like it is in line with my takeaways. Perhaps we differ on the grounds for sandboxing? Expected value calculations don't involve capping the influence of component hypotheses. Do you have a take on how you would defend that?

For (ii), I mainly have in mind three claims about fanaticism: (iia) "Fanaticism is unintuitive," (iib) "Fanaticism is absurd (a la reductio ad absurdum)," and (iic) "Fanaticism breaks some utility axioms."

I don't mean to say that fanaticism is wrong. So please don't read this as a reductio. Interpreted as a claim about rationality, I largely am inclined to agree with it. What I would disagree with is a normative inference from its rationality to how we should act. Let's not focus less on animal welfare or global poverty because of farfetched high-value possibilities, even if it would be rational to do so.

Comment by Derek Shiller on Fanatical EAs should support very weird projects · 2022-06-30T16:04:28.662Z · EA · GW

That being said, it's been a long time since I last checked on the state of the matter... but the main lesson I learned about PW was that ideas should "pay rent" to be in our heads (I think Yudkowsky mentioned it while writing about a PW's scenario). So the often neglected issue with PW scenarios is that it's hard to account for their opportunity costs - and they are potentially infinite, precisely because it's so cheap to formulate them.

Pascal's wager is somewhat fraught, and what you should make of it may turn on what you think about humility, religious epistemology, and the space of plausible religions. What's so interesting about the MWI project is that it isn't like this. It isn't some theory concocted from nothing and assigned a probability. There's at least some evidence that something in the ballpark of the theory is true. And it's not easy to come up with an approximately as plausible hypothesis that suggests that the actions which might cause branchings might instead prevent them, or that we have alternative choices that might lead to massive amounts of value in other ways.

If you grant that MWI is coherent, then I think you should be open to the possibility that it isn't unique, and that there are other hypotheses suggesting possible projects that are much more likely to create massive amounts of value than to prevent it.

Comment by Derek Shiller on Fanatical EAs should support very weird projects · 2022-06-30T15:52:50.387Z · EA · GW

I don't love the EA/rationalist tendency to dismiss long shots as Pascal's muggings. Pascal's mugging raises two separate issues: (1) what should we make of long shots with high expected value? and (2) what evidence does testimony by itself provide for highly implausible hypotheses (particularly compared with other salient possibilities)? Considerations around (2) seem sufficient to be wary of Pascal's mugging, regardless of what you think of (1).

I definitely think that if you were 100% confident in the simple MWI view, that should really dominate your altruistic concern. Every time the world splits, the number of pigs in gestation crates (at least) doubles! How can you not see that as something you should really care about? It might be a lonely road, but how can you pass up such high returns? (Of course it is bad for there to be pigs in gestation crates -- I assume it is outweighed by good things, but those good things must be really good to outweigh such bads, so we should really want to double them. If they're not outweighed, we should really try to stop branchings.)

For what it's worth, I think I'd be inclined to think that the simple MWI should dominate our considerations even at a 1 in a thousand probability. Not sure about the 1 in a million range.

I think this post is the result of three motivations.

1.) I think the expected value of weird projects really is ludicrously high.

2.) I don't want to work on them, or feel like I should be working on them. I get the impression that many, even most, EAs would agree.

3.) I'd bet I'm not going to win a fight about the rationality of fanaticism with Yoaav Isaacs or Hayden Wilkinson.

Comment by Derek Shiller on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-19T20:05:00.254Z · EA · GW

then it would be a violation of the law of the conservation of expected evidence for you to update your beliefs on observing the passage of a minute without the bomb's exploding.

Interesting! I would think this sort of case just shows that the law of conservation of expected evidence is wrong, at least for this sort of application. I figure it might depend on how you think about evidence. If you think of the infinite void of non-existence as possibly constituting your evidence (albeit evidence you're not in a position to appreciate, being dead and all), then that principle wouldn't push you toward this sort of anthropic reasoning.

I am curious, what do you make of the following case?

Suppose you're touring Acme Bomb & Replica Bomb Co with your friend Eli. ABRBC makes bombs and perfect replicas of bombs, but they're sticklers for safety so they alternate days for real bombs and replicas. You're not sure which sort of day it is. You get to the point of the tour where they show off the finished product. As they pass around the latest model from the assembly line, Eli drops it, knocking the safety back and letting the bomb (replica?) land squarely on its ignition button. If it were a real bomb, it would kill everyone unless it were one of the 1-in-a-million bombs that's a dud. You hold your breath for a second but nothing happens. Whew. How much do you want to bet that it's a replica day?

Comment by Derek Shiller on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-19T17:23:38.031Z · EA · GW

Suppose you've been captured by some terrorists and you're tied up with your friend Eli. There is a device on the other side of the room that you can't quite make out. Your friend Eli says that he can tell (he's 99% sure) it is a bomb and that it is rigged to go off randomly. Every minute, he's confident there's a 50-50 chance it will explode, killing both of you. You wait a minute and it doesn't explode. You wait 10. You wait 12 hours. Nothing. He starts eying the light fixture, and says he's pretty sure there's a bomb there too. Do you believe him?

Comment by Derek Shiller on The importance of getting digital consciousness right · 2022-06-17T16:13:33.353Z · EA · GW

One can argue that AI reflects the society (e. g. in order to make good decisions or sell products), so would, at most, double the sentience in the world. Furthermore, today, many individuals (including humans not considered in decisionmaking, not profitable to reach, or without the access to electricity, and non-human animals, especially wild ones) are not considered by AI systems. Thus, any possible current and prospective AI's contribution to sentience is limited.

It is very unclear how many digital minds we should expect, but it is conceivable that in the long run they will greatly outnumber us. The reasons we have to create more human beings -- companionship, beneficence, having a legacy -- are reasons we would have to create more digital minds. We can fit a lot more digital minds on Earth than we can humans. We could more easily colonize other planets with digital minds. For these reasons, I think we should be open to the possibility that most future minds will be digital.

Unintentional creation of necessary suffering AI that would not reflect the society but perceive relatively independently is the greatest risk. For example, if AI really hates selling products in a way that in consequence and in the process reduces humans' wellness, or if it makes certain populations experience low or negative wellbeing otherwise.

It strikes me as less plausible that we will have massive numbers of digital minds that unintentionally suffer while performing cognitive labor for us. I'm skeptical that the most effective ways to produce AI will make them conscious, and even if they do, it seems like a big jump from phenomenal experience to suffering. Even if they are conscious, I don't see why we would need a number of digital minds for every person. I would think that the cognitive power of artificial intelligence means we would need rather few of them, and so the suffering they experience, unless particularly intense, wouldn't be particularly significant.

Comment by Derek Shiller on The importance of getting digital consciousness right · 2022-06-15T13:02:18.196Z · EA · GW

they've got leading advocates of two leading consciousness theories (global workspace theory and integrated information theory)

Thanks for sharing! This sounds like a promising start. I’m skeptical that things like this could fully resolve the disagreements, but they could make progress that would be helpful in evaluating AIs.

I do think that there is a tension between taking a strong view that AI is not conscious / will not be conscious for a long time and assuming that animals with very different brain structures do have conscious experience.

If animals with very different brains are conscious, then I’m sympathetic with the thought that we could probably make conscious systems if we really tried. Modern AI systems look a bit Chinese roomish, so it might still be that the incentives aren’t there to put in the effort to make really conscious systems.

Comment by Derek Shiller on The importance of getting digital consciousness right · 2022-06-13T22:53:39.641Z · EA · GW

You’re probably right. I’m not too optimistic that my suggestion would make a big difference. But it might make some.

If a company were to announce tomorrow that it had built a conscious AI and would soon have it available for sale, I expect that it would prompt a bunch of experts to express their own opinions on twitter and journalists to contact a somewhat randomly chosen group of outspoken academics to get their perspective. I don’t think that there is any mechanism for people to get a sense of what experts really think, at least in the short run. That’s dangerous because it means that what they might hear would be somewhat arbitrary, possibly reflecting the opinion of overzealous or overcautious academics, and because it might lack authority, being the opinions of only a handful of people.

In my ideal scenario, there would be some neutral body, perhaps one that did regular expert surveys, that journalists would think to talk to before publishing their pieces and that could give the sort of judgement I gestured to above. That judgement might show that most views on consciousness agree that the system is or isn't conscious, or at least that there is significant room for doubt. People might still make up their minds, but they might entertain doubts longer, and such a body might provide incentives for companies to try harder to build systems that are more likely to be conscious.

Comment by Derek Shiller on The importance of getting digital consciousness right · 2022-06-13T12:08:24.925Z · EA · GW

I was imagining that the consensus would concern conditionals. I think it is feasible to establish what sets of assumptions people might naturally make, and what views those assumptions would support. This would allow a degree of objectivity without settling on the right theory. It might also involve assigning probabilities, or ranges of probabilities, to the views themselves, or to what it is rational for other researchers to think about different views.

So we might get something like the following (when researchers evaluate gpt6):

There are three major groups of assumptions, a, b, and c.

  • Experts agree that gpt6 has a 0% probability of being conscious if a is correct.
  • Experts agree that the rational probability to assign to gpt6 being conscious if b is correct falls between 2 and 20%.
  • Experts agree that the rational probability to assign to gpt6 being conscious if c is correct falls between 30 and 80%.
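
As a toy illustration of how a reader could combine such conditional ranges with their own credences in the assumption groups (the weights below are hypothetical and mine, not something the expert body would supply):

```python
# Toy combination of the conditional expert ranges above with a reader's own
# (hypothetical) credences in the three assumption groups a, b, and c.
priors = {"a": 0.40, "b": 0.35, "c": 0.25}  # illustrative weights only

# Expert ranges for P(gpt6 conscious | assumption group), taken from the bullets above.
ranges = {"a": (0.00, 0.00), "b": (0.02, 0.20), "c": (0.30, 0.80)}

low = sum(priors[g] * ranges[g][0] for g in priors)
high = sum(priors[g] * ranges[g][1] for g in priors)
print(f"overall credence that gpt6 is conscious: {low:.1%} to {high:.1%}")
```
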
Comment by Derek Shiller on ‘Consequentialism’ is being used to mean several different things · 2022-06-11T18:11:08.845Z · EA · GW

My impression is that EAs also often talk about ethical consequentialism when they mean something somewhat different. Ethical consequentialism is traditionally a theory about what distinguishes the right ways to act from the wrong ways to act. In certain circumstances, it suggests that lying, cheating, rape, torture, and murder can be not only permissible, but downright obligatory. A lot of people find these implications implausible.

Ethical consequentialists often think what they do because they really care about value in aggregate. They don't just want to be happy and well off themselves, or have a happy and well off family. They want everyone to be happy and well off. They want value to be maximized, not distributed in their favor.

A moral theory that gets everyone to act in ways that maximize value will make the world a better place. However, it is consistent to think that consequentialism is wrong about moral action and to nonetheless care primarily about value in aggregate. I get the impression that EAs are more attached to the latter than the former. We generally care that things be as good as they can be. We have less of a stake in whether torture is a-ok if the expected utility is positive. The EA attitude is more of a 'hey, let's do some good!' and less of a 'you're not allowed to fail to maximize value!'. This seems like an important distinction.

Comment by Derek Shiller on Animal Welfare: Reviving Extinct (human) intermediate species? · 2022-05-05T13:07:27.850Z · EA · GW

That humans and non-human animals are categorically distinct seems to be based on the fairly big cognitive and communicative gap between humans and the smartest animals.

There is already a continuum between the cognitive capacities of humans and animals. Peter Singer has pointed to cognitively disabled humans in arguing for better treatment of animals.

Do you think homo erectus would add something further? People often (arbitrarily) draw the line at species, but it seems to me that they could just as easily draw it at any clade. Growing fetuses display a similar variation between single cells and normal adults, and it seems most people don't have issues carving moral categories along arbitrary lines.

Comment by Derek Shiller on Key questions about artificial sentience: an opinionated guide · 2022-04-26T11:31:32.693Z · EA · GW

Computational functionalism about sentience: for a system to have a given conscious valenced experience is for that system to be in a (possibly very complex) computational state. That assumption is why the Big Question is asked in computational (as opposed to neural or biological) terms.

I think it is a little quick to jump from functionalism to thinking that consciousness is realizable in a modern computer architecture if we program the right functional roles. There might be important differences in how the functional roles are implemented that rule out computers. We don't want to allow just any arbitrary gerrymandered states to count as an adequate implementation of consciousness's functional roles; the limits to what is adequate are underexplored.

Suppose that Palgrave Macmillan produced a 40 volume atlas of the bee brain, where each neuron is drawn on some page (in either a firing or silent state) and all connections are accounted for. Every year, they release a new edition from a momentary time slice later, updating all of the firing patterns slightly after looking at the patterns in the last edition. Over hundreds of years, a full second of bee brain activity is accounted for. Is the book conscious? My intuition is NO. There are a lot of things you might think are going wrong here -- maybe the neurons printed on each page aren't doing enough causal work in generating the next edition, maybe the editions are too spatially or temporally separated, etc. I could see some of these explanations as applying equally to contemporary computer architectures.

Comment by Derek Shiller on Consciousness, counterfactual robustness and absurdity · 2022-04-12T23:58:15.740Z · EA · GW

But there are many ordered subsets of merely trillions of interacting particles we can find, effectively signaling each other with forces and small changes to their positions.

In brains, patterns of neural activity stimulate further patterns of neural activity. We can abstract this out into a system of state changes and treat conscious episodes as patterns of state changes. Then if we can find similar causal networks of state changes in the wall, we might have reason to think they are conscious as well. Is this the idea? If so, what sort of states are you imagining to change in the wall? Is it the precise configurations of particles? I expect that a lot of the states you'd identify as fulfilling the relevant patterns will be arbitrary or gerrymandered. That might be an important difference that should make us hesitate before ascribing conscious experiences to walls.

Comment by Derek Shiller on Consciousness, counterfactual robustness and absurdity · 2022-04-12T23:44:51.753Z · EA · GW

Yes, it's literally a physical difference, but, by hypothesis, it had no influence on anything else in the brain at the time, and your behaviour and reports would be the same. Empty space (or a disconnected or differently connected neuron) could play the same non-firing neuron role in the actual sequence of events. Of course, empty space couldn't also play the firing neuron role in counterfactuals (and a differently connected neuron wouldn't play identical roles across counterfactuals), but why would what didn't happen matter?

I can get your intuition about your case. Here is another with the same logic in which I don't have the corresponding intuition:

Suppose that instead of just removing all non-firing neurons, we also remove all neurons both before they are triggered and after they trigger the next neurons in the sequence. E.g. your brain consists of neurons that magically pop into existence just in time to have the right effect on the next neurons that pop into existence in the sequence, and then they disappear back into nothing. We could also go a level down and have your brain consist only of atoms that briefly pop into existence in time to interact with the next atoms.

Your behavior and introspective reports wouldn't change -- do you think you'd still be conscious?

Comment by Derek Shiller on Consciousness, counterfactual robustness and absurdity · 2022-04-12T12:42:20.365Z · EA · GW

That seems unphysical, since we're saying that even if something made no actual physical difference, it can still make a difference for subjective experience.

The neuron is still there, so its existing-but-not-firing makes a physical difference, right? Not firing is as much a thing a neuron can do as firing. (Also, for what it's worth, my impression is that cognition is less about which neurons are firing and more about what rate they are firing at and how their firing is coordinated with that of other neurons.)

But neurons don't seem special, and if you reject counterfactual robustness, then it's hard to see how we wouldn't find consciousness everywhere, and not only that, but maybe even human-like experiences, like the feeling of being tortured, could be widespread in mundane places, like in the interactions between particles in walls.

The patterns of neural firing involved in conscious experiences are surely quite complicated. Why think that we would find similar patterns anywhere outside of brains?

Comment by Derek Shiller on The Future Fund’s Project Ideas Competition · 2022-03-05T18:16:03.877Z · EA · GW

Authoritative Statements of EA Views

Epistemic Institutions

In academia, law, and government, it would be helpful to have citeable statements of EA relevant views presented in an authoritative and unbiased manner. Having such material available lends gravitas to proposals that help address related problems and provides greater justification in taking those views for granted.

(This is a variation on 'Expert polling for everything' focused on providing authority of views to non-experts. The Cambridge Declaration on Consciousness is a good example.)

Comment by Derek Shiller on Some thoughts on vegetarianism and veganism · 2022-02-14T18:00:51.666Z · EA · GW

Insofar as we are all imperfect and have to figure out which ways of improving to prioritize, it isn't obvious that we should treat veganism as a priority. That said, I think there is an important difference between what it makes sense to do and how it makes sense to feel. It makes sense to feel horrified by factory farming and disgusted by factory farmed meat if you care about the suffering of animals. It makes sense to respond to suffering inflicted on your behalf with sadness and regret.

Effective altruists should generally be vegan, not (just) because it is the right thing to do, but because that behavior follows naturally from the right way to feel. This is not to say that you should try to change how you feel, but if you're not inclined to at least a little sadness at the sight of a dead chicken's body on your plate, something has gone wrong.

Comment by Derek Shiller on Simplify EA Pitches to "Holy Shit, X-Risk" · 2022-02-11T13:53:46.462Z · EA · GW

the probabilities are of the order of 10^-3 to 10^-8, which is far from infinitesimal

I'm not sure what the probabilities are. You're right that they are far from infinitesimal (just as every number is!): still, they may be close enough to warrant discounting on whatever basis people discount Pascal's mugger.

what is important is reducing the risk to an acceptable level

I think the risk level itself is pretty irrelevant. If we lower the risk but still go extinct, we can pat ourselves on the back for fighting the good fight, but I don't think we should assign it much value. Our effect on the risk is instrumentally valuable for what it does for the species.

Also I don't understand the comment on AI Alignment

The thought was that it is possible to make a difference to whether AI ends up pretty well or very well aligned, so we might be able to impact whether the future is good or great, and that is worth pursuing regardless of its relation to existential risk.

Comment by Derek Shiller on Simplify EA Pitches to "Holy Shit, X-Risk" · 2022-02-11T13:22:30.214Z · EA · GW

Let me clarify that I'm not opposed to paying Pascal's mugger. I think that is probably rational (though I count myself lucky to not be so rational).

But the idea here is that x-risk is all or nothing, which translates into each person having a very small chance of making a very big difference. Climate change can be mitigated, so everyone working on it can make a little difference.

Comment by Derek Shiller on Simplify EA Pitches to "Holy Shit, X-Risk" · 2022-02-11T13:19:33.257Z · EA · GW

I'm not disagreeing with the possibility of a significant impact in expectation. Paying Pascal's mugger is promising in expectation. The thought is that in order to make a marginal difference to x-risk, there needs to be some threshold for hours/money/etc. under which our species will be wiped out and over which our species will survive, and your contributions have to push us over that threshold.

X-risk, at least where the survival of the species is concerned, is an all or nothing thing. (This is different than AI alignment, where your contributions might make things a little better or a little worse.)

Comment by Derek Shiller on Simplify EA Pitches to "Holy Shit, X-Risk" · 2022-02-11T12:20:49.227Z · EA · GW

But also, we’re dealing with probabilities that are small but not infinitesimal. This saves us from objections like Pascal’s Mugging - a 1% chance of AI x-risk is not a Pascal’s Mugging.

It seems to me that the relevant probability is not the chance of AI x-risk, but the chance that your efforts could make a marginal difference. That probability is vastly lower, possibly bordering on mugging territory. For x-risk in particular, you make a difference only if your decision to work on x-risk makes a difference to whether or not the species survives. For some of us that may be plausible, but for most, it is very very unlikely.
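
To put the same point in expected-value terms (my formalization of the worry, not a precise estimate):

```latex
\[
\mathbb{E}[\text{value of my working on x-risk}] \;=\; P(\text{my work is pivotal to survival}) \times V(\text{survival}),
\]
% and it is the first factor, not P(x-risk) itself, that is plausibly small enough to
% trigger whatever response one has to Pascal's mugger.
```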

Comment by Derek Shiller on Splitting the timeline as an extinction risk intervention · 2022-02-07T17:15:25.780Z · EA · GW

Importantly (as I'm sure you're aware), no amount of world slicing is going to increase the expected value of the future (roughly all the branches from here)

What makes you think that? So long as value can change with the distribution of events across branches (as perhaps with the Mona Lisa) the expected value of the future could easily change.

Comment by Derek Shiller on Why don't governments seem to mind that companies are explicitly trying to make AGIs? · 2021-12-24T16:32:33.933Z · EA · GW

Are you sure that they don't mind? I would be surprised if intelligence agencies weren't keeping some track of the technical capabilities of foreign entities, and I'd be unsurprised if they were also keeping track of domestic entities as well. If they thought we were six months away from transformative AGI, they could nationalize it or shut it down.

Comment by Derek Shiller on Why do you find the Repugnant Conclusion repugnant? · 2021-12-18T14:45:44.458Z · EA · GW

There is a challenge here in making the thought experiment specific, conceivable, and still compelling for the majority of people. I think a marginally positive experience like sucking on a cough drop is easy to imagine (even if it is hard to really picture doing it for 40,000 years) and intuitively just slightly better than non-existence minute by minute.

Someone might disagree. There are some who think that existence is intrinsically valuable, so simply having no negative experiences might be enough to have a life well worth living. But it is hard to paint a clear picture of a life that is definitely barely worth living and involves some mix of ups and downs, because you then have to make sure that the ups and downs balance each other out, and this is more difficult to imagine and harder to gauge.

Comment by Derek Shiller on Why do you find the Repugnant Conclusion repugnant? · 2021-12-17T17:09:45.528Z · EA · GW

I find your attitude somewhat surprising. I'm much less sympathetic to trolley problems or utility monsters than to the repugnant conclusion. I can see why some people aren't moved by it, but I have a hard time seeing how someone couldn't get what is moving about it. Since it is a rather basic intuition, it's not super easy to pump. But I wonder, what do you think about this alternative, which seems to draw on similar intuitions for me:

Suppose that you could right now, at this moment, choose between continuing to live your life, with all its ups and downs and complexity, or going into a state of near-total suspended animation. In the state of suspended animation, you will have no thoughts and no feelings, except you will have a sensation of sucking on a rather disappointing but not altogether bad cough drop. You won't be able to meditate on your existence, or focus on the different aspects of the flavor. You won't feel pain or boredom. Just the cough drop. If you continue your life, you'll die in 40 years. If you go into suspended animation, it will last for 40,000 years (or 500,000, or 20 million, whatever number it takes). Is it totally obvious that the right thing to do is to opt for the suspended animation (at least, from a selfish perspective)?

Comment by Derek Shiller on Notes on the risks and benefits of kidney donation · 2021-11-27T12:53:53.704Z · EA · GW

My logic is (deferring judgment to medical professionals) just the amount of effort and money that is spent on facilitating kidney donations, despite the existence of dialysis, indicates that experts think the cost/benefit ratio is a good one. One reason I feel safe in this deference is because the field of medicine seems to have strong "loss aversion". I.e. Doctors seem strongly concerned about direct actions that cause harm, even if it is for the greater good.

The cynical story I've heard is that insurance providers cover it because it is cheaper than years of dialysis and doctors provide it because it pays well. Some doctors are hesitant about it, particularly for non-directed donors, but they aren't the ones performing it.

I do think that is overly cynical: there are clear advantages to the recipient that make transplantation very desirable. Dialysis is a pain, and not without its risks. Quality of life definitely goes up. Life expectancy probably goes up a fair bit too. If I had to make a guess, I'd guess donation produces something like 3-8 QALYs on average for the primary beneficiary, at a cost of about 0.5 QALYs for the donor. That is a pretty reasonable altruistic trade, but it isn't saving a life at the cost of a surgery and a few weeks' recovery.

Comment by Derek Shiller on Saving Average Utilitarianism from Tarsney - Self-Indication Assumption cancels solipsistic swamping. · 2021-05-18T12:20:23.185Z · EA · GW

I agree that there are challenges for each of them in the case of an infinite number of people. My impression is that total utilitarianism can handle infinite cases pretty respectably, by supplementing the standard maxim of maximizing utility with a dominance principle to the effect of 'do what's best for the finite subset of everyone that you're capable of affecting', though it isn't something I've thought about much either. I initially was thinking that average utilitarians can't make a similar move without undermining its spirit, but maybe they can. However, if they can, I suspect they can make the same move in the finite case ('just focus on the average among the population you can affect') and that will throw off your calculations. Maybe in that case, if you can only affect a small number of individuals, the threat from solipsism can't even get going.

In any case, I would hope that SIA is at least able to accommodate an infinite number of possible people, or the possibility of an infinite number of people, without becoming useless. I take it that there are an infinite number of epistemically possible people, and so this isn't just an exercise.

Comment by Derek Shiller on Saving Average Utilitarianism from Tarsney - Self-Indication Assumption cancels solipsistic swamping. · 2021-05-17T11:59:11.497Z · EA · GW

Interesting application of SIA, but I wonder if it shows too much to help average utilitarianism.

SIA seems to support metaphysical pictures in which more people actually exist. This is how you discount the probability of solipsism. But do you think you can simultaneously avoid the conclusion that there are an infinite number of people?

This would be problematic: if you're sure that there are an infinite number of people, average utilitarianism won't offer much guidance because you almost certainly won't have any ability to influence the average utility.

Comment by Derek Shiller on Thoughts on the welfare of farmed insects · 2019-05-12T17:05:28.125Z · EA · GW

Nice summary of the issues.

A couple of related thoughts:

There are some reasons to think that insects would not be especially harmed by factory farming, in the way that vertebrates are. It is plausible that the largest source of suffering in factory farms comes from the stress produced by lack of enrichment and unnatural and overcrowded conditions. Even if crickets are phenomenally conscious AND can suffer, they might not be capable of stress, or not capable of stress in the same sort of dull, overcrowded conditions as vertebrates. Given their ancient divergence in brain structures, their very different lifestyles, and their comparatively minuscule brains, it is reasonable to be skeptical that they feel environment-induced stress. Death is conceivably such a short portion of their life that even a relatively painful death won't tip the balance.

If crickets are not harmed by the conditions of factory farms, they might instead benefit from factory farming. It seems possible that the average factory farmed cricket might have a net positive balance of good experiences vs bad experiences. In that case, it might be better to raise crickets in factory farm conditions than to produce equivalent amounts of non-sentient meat alternatives. The risks are not entirely on the farming side.