Posts

Notes: Stubble Burning in India 2021-01-13T02:58:28.335Z
Research Summary: The Intensity of Valenced Experience across Species 2020-11-11T17:26:10.083Z
Differences in the Intensity of Valenced Experience across Species 2020-10-30T01:13:01.681Z
Research Summary: The Subjective Experience of Time 2020-08-10T13:28:13.181Z
Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? 2020-08-03T13:30:38.872Z
The Subjective Experience of Time: Welfare Implications 2020-07-27T13:24:41.585Z
How to Measure Capacity for Welfare and Moral Status 2020-06-01T15:01:58.437Z
Comparisons of Capacity for Welfare and Moral Status Across Species 2020-05-18T00:42:04.134Z
Intervention Profile: Ballot Initiatives 2020-01-13T15:41:06.182Z
Managed Honey Bee Welfare: Problems and Potential Interventions 2019-11-14T19:03:33.709Z
Opinion: Estimating Invertebrate Sentience 2019-11-07T02:38:06.420Z
Invertebrate Welfare Cause Profile 2019-07-09T17:28:26.735Z
Features Relevant to Invertebrate Sentience, Part 3 2019-06-12T14:49:38.772Z
Invertebrate Sentience: A Useful Empirical Resource 2019-06-12T01:15:59.645Z
Features Relevant to Invertebrate Sentience, Part 2 2019-06-11T15:23:59.244Z
Features Relevant to Invertebrate Sentience, Part 1 2019-06-10T18:00:49.478Z
EA Research Organizations Should Post Jobs on PhilJobs.org 2019-05-02T19:37:11.592Z
Detecting Morally Significant Pain in Nonhumans: Some Philosophical Difficulties 2018-12-23T17:49:00.750Z

Comments

Comment by Jason Schukraft on Open and Welcome Thread: February 2021 · 2021-02-13T16:21:45.571Z · EA · GW

I'm pretty sure the Forum uses the same karma vote-power as LessWrong.

Comment by Jason Schukraft on Notes: Stubble Burning in India · 2021-02-06T14:47:35.587Z · EA · GW

Great, thanks! Just added it.

Comment by Jason Schukraft on Money Can't (Easily) Buy Talent · 2021-01-25T20:17:31.518Z · EA · GW

I do think we have been able to acquire talent that would not have been otherwise counterfactually acquired by other organizations.

As an additional data point, I can report that I think it's very unlikely that I would currently be employed by an EA organization if Rethink Priorities didn't exist. I applied to Rethink Priorities more or less on a whim, and the extent of my involvement with the EA community in 2018 (when I was hired) was that I was subscribed to the EA newsletter (where I heard about the job) and I donated to GiveWell top charities. At the time, I had completely different career plans.

Comment by Jason Schukraft on Why "cause area" as the unit of analysis? · 2021-01-25T16:21:35.553Z · EA · GW

A lot depends on what constitutes a cause area and what counts as analysis. My own rough and tentative view is that at some level of generality (which could plausibly be called "cause area"), we can use heuristics to compare broad categories of interventions. But in terms of actual rigorous analysis, cause area is certainly not the right unit, and, furthermore, as a matter of empirical fact, there aren't really any research organizations (including Rethink Priorities, where I work) that take cause area to be the appropriate unit of analysis.

Very curious to hear the thoughts of others, as I think this is a super important question!

Comment by Jason Schukraft on Meat substitutes: outside view · 2021-01-19T14:09:23.582Z · EA · GW

If you haven't seen it yet, you might find this report on the viability of cultured meat helpful. Open Philanthropy commissioned the report.

Comment by Jason Schukraft on Notes: Stubble Burning in India · 2021-01-18T20:02:34.925Z · EA · GW

Hi David,

Thanks for the suggestions! Anyone who works on this topic in the future should probably investigate them further. My current rough impression is that, even if there were a market for the stubble, the process of baling the stubble for transport and sale would either be time- and labor-intensive or require equipment that the average farmer in the region can't afford. Because of the nature of the crop cycle, farmers are under intense pressure to clear the stubble quickly, hence the appeal of stubble burning.

Comment by Jason Schukraft on Notes: Stubble Burning in India · 2021-01-15T21:49:37.084Z · EA · GW

Hey Harrison, I think the short answer is that it's just a really messy situation and any potential solution that has a shot at improving on the status quo has to take political reality into account.

Comment by Jason Schukraft on Notes: Stubble Burning in India · 2021-01-15T02:03:29.244Z · EA · GW

Hey Harrison,

I'm also not knowledgeable about Indian politics, but it seems pretty clear that Indian farmers wield considerable political influence. (See the reaction to the introduction of three market-friendly farm laws for the most recent demonstration of this power.) I'd like to think political compromise is possible, but it's hard to know which compromises are feasible.

Fortunately, it appears that many of the potential solutions to stubble burning are essentially win-win. Although stubble burning is an effective way to deal with crop residue in the short term, the practice is pretty bad for the soil. Many of the alternatives to stubble burning would probably raise yields in the long run.

Comment by Jason Schukraft on Ask Rethink Priorities Anything (AMA) · 2020-12-21T15:15:10.696Z · EA · GW

That's fine by me!

Comment by Jason Schukraft on Ask Rethink Priorities Anything (AMA) · 2020-12-15T16:33:54.139Z · EA · GW

Hi Dan,

Thanks for your questions. I'll let Marcus and Peter answer the first two, but I feel qualified to answer the third.

Certainly, the large number of invertebrate animals is an important factor in why we think invertebrate welfare is an area that deserves attention. But I would advise against relying too heavily on numbers alone when assessing the value of promoting invertebrate welfare. There are at least two important considerations worth bearing in mind:

(1) First, among sentient animals, there may be significant differences in capacity for welfare or moral status. If these differences are large enough, they might matter more than the differences in the numbers of different types of animals.

(2) Second, at some point, Pascal's Mugging will rear its ugly head. There may be some point below which we are rationally required to ignore probabilities. It's not clear to me where that point lies. (And it's also not clear that this is the best way to address Pascal's Mugging.) There are about 440 quintillion nematodes alive at any given time, which sounds like a pretty good reason to work on nematode welfare, even if one's credence in their sentience is really low. But nematodes are nothing compared to bacteria. There are something like 5 million trillion trillion bacteria alive at any given time. At some point, it seems as if expected value calculations cease to be appropriately action-guiding, but, again, it's very uncertain where to draw the line.
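
To make the worry concrete, here's a minimal sketch of the expected-value arithmetic (the credences are hypothetical, chosen purely for illustration; the population figures are the ones quoted above):

```python
# Sketch of why raw expected-value reasoning gets strained: even a tiny
# credence in sentience, multiplied by an enormous population, yields an
# astronomical expected number of sentient individuals.

populations = {
    "nematodes": 440 * 10**18,  # ~440 quintillion alive at any given time
    "bacteria": 5 * 10**30,     # ~5 million trillion trillion
}

# Hypothetical credences in sentience, for illustration only.
credences = {
    "nematodes": 0.01,  # 1%
    "bacteria": 1e-9,   # one in a billion
}

for taxon, n in populations.items():
    expected = n * credences[taxon]
    print(f"{taxon}: expected sentient individuals ~ {expected:.2e}")
```

Even a one-in-a-billion credence in bacterial sentience yields an expected ~5 sextillion sentient individuals, which is roughly the point at which expected value calculations stop feeling action-guiding.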

Comment by Jason Schukraft on Ask Rethink Priorities Anything (AMA) · 2020-12-15T16:02:03.698Z · EA · GW

Hi Denis,

Lots of really good questions here. I’ll do my best to answer.

  1. Thinking vs reading: I think it depends on the context. Sometimes it makes sense to lean toward thinking more and sometimes it makes sense to lean toward reading more. (I wouldn’t advise focusing exclusively on one or the other.) Unjustified anchoring is certainly a worry, but I think reinventing the wheel is also a worry. One could waste two weeks groping toward a solution to a problem that could have been solved in an afternoon just by reading the right review article.

  2. Self-consciousness: Yep, I am intimately familiar with hopelessly inchoate thoughts and notes. (I’m not sure I’ve ever completed a project without passing through that stage.) For me at least, the best way to overcome this state is to talk to lots of people. One piece of advice I have for young researchers is to come to terms with sharing your work with people you respect before it’s polished. I’m very grateful to have a large network of collaborators willing to listen to and read my confused ramblings. Feedback at an early stage of a project is often much more valuable than feedback at a later stage.

  3. Is there something interesting here?: Yep, this also happens to me. Unfortunately, I don’t have any particular insight. Oftentimes the only way to know whether an idea is interesting is to put in the hard exploratory work. Of course, one shouldn’t be afraid to abandon an idea if it looks increasingly unpromising.

  4. Survival vs. exploratory mindset: Insofar as I understand the terms, an exploratory mindset is an absolute must. Not sure how to cultivate it, though.

  5. Optimal hours of work per day: I work between 4 and 8 hours a day. I don’t find any difference in my productivity within that range, though I imagine if I pushed myself to work more than 8, I would pretty quickly hit diminishing returns.

  6. Learning a new field: I can’t emphasize enough the value of just talking to existing experts. For me at least, it’s by far the most efficient way to get up-to-speed quickly. For that reason, I really value having a large network of diverse people I can contact with questions. I put a fair amount of effort into cultivating such a network.

  7. Hard problems: I’m fortunate that my work is almost always intrinsically interesting. So even if I don’t make progress on a problem, I continue to be motivated to work on it because the work itself is so very pleasant. That said, as I’ve emphasized above, when I’m stuck, I find it most helpful to talk to lots of people about the problem.

  8. Emotional motivators: When I reflect on my life as a whole, I’m happy that I’m in a career that aims to improve the world. But in terms of what gets me out of bed in the morning and excited to work, it’s almost never the impact I might have. It’s the intrinsically interesting nature of my work. I almost certainly would not be successful if I did not find my research to be so fascinating.

  9. Typing speed: No idea what my typing speed is, but it doesn’t feel particularly fast, and that doesn’t seem to handicap me. I’ve always considered myself a slow thinker, though.

  10. Obvious questions: Yeah, I think there is a general skill of “noticing the obvious.” I don’t think I’m great at it, but one thing I do pretty often is reflect on the sorts of things that appear obvious now that weren’t obvious to smart people ~200 years ago.

  11. Tiredness, focus, etc.: Regular exercise certainly helps. Haven’t tried anything else. Mostly I’ve just acclimated to getting work done even though I’m tired. (Not sure I would recommend that “solution,” though!)

  12. Meta: I’d like to see others answer questions 1, 3, 6, 7, and 10.

Comment by Jason Schukraft on Ask Rethink Priorities Anything (AMA) · 2020-12-15T15:29:35.925Z · EA · GW

Hi Roger,

There are different possible scenarios in which invertebrates turn out to be sentient. It might be the case, for instance, that panpsychism is true. So if one comes to believe that invertebrates are sentient because panpsychism is true, one should also come to believe that robots and plants are sentient. Or it could be that some form of integrated information theory is true, and invertebrates instantiate enough integration for sentience. In that case, the probability that you assign to the sentience of plants and robots will depend on your assessment of their relevant level of integration.

For what it's worth, here's how I think about the issue: sentience, like other biological properties, has an evolutionary function. I take it as a datum that mammals are sentient. If we can discern the role that sentience is playing in mammals, and it appears there is analogous behavior in other taxa, then, in the absence of defeaters, we are licensed to infer that individuals of those taxa are sentient. In the past few years I've updated toward thinking that arthropods and (coleoid) cephalopods are sentient, but the majority of these updates have been based on learning new empirical information about these animals. (Basically, arthropods and cephalopods engage in way more complex behaviors than I realized.) When we constructed our invertebrate sentience table, we also looked at plants, prokaryotes, protists, and, in an early version of the table, robots and AIs of various sorts. The individuals in these categories did not engage in the sort of behaviors that I take to be evidence of sentience, so I don't feel licensed to infer that they are sentient.

Comment by Jason Schukraft on Ask Rethink Priorities Anything (AMA) · 2020-12-15T14:43:35.524Z · EA · GW

Hey Edo,

I definitely receive valuable feedback on my work by posting it on the Forum, and the feedback is often most valuable when it comes from people outside my current network. For me, the best example of this dynamic was when Gavin Taylor left extensive comments on our series of posts about features relevant to invertebrate sentience (here, here, and here) back in June 2019. I had never interacted with Gavin before, but because of his comments, we set up a meeting, and he has become an invaluable collaborator across many different projects. My work is much improved due to his insights. I'm not sure Gavin and I would ever have met (much less collaborated) if not for his comments on the Forum.

Comment by Jason Schukraft on EA Forum Prize: Winners for October 2020 · 2020-12-11T14:26:52.449Z · EA · GW

Sometimes, they ask us to instead donate the money to a charity on their behalf, which we are also willing to do.

Oh, cool. I didn't realize this was a possibility. I've always claimed the money and then donated the same amount to Rethink Priorities (where I work). If I'm lucky enough to have the opportunity in the future, I'll do this instead.

(I basically get paid to write content for the Forum, so I'm not really comfortable accepting the prize money.)

Comment by Jason Schukraft on Research Summary: The Intensity of Valenced Experience across Species · 2020-11-13T19:30:26.529Z · EA · GW

Hey Michael,

Thanks for your comment! The point you raise is a good one. I’ve thought about related issues over the last few months, but my views still aren’t fully settled. And I’ll just reiterate for readers that my tentative conclusions are just that: tentative. More than anything, I want everyone to appreciate how much uncertainty we face here.

We can crudely ask whether motivation is tied to the relative intensity of valenced experience or the absolute intensity of valenced experience. (‘Crudely’ because the actual connection between motivation and valenced experience is likely to be a bit messy and complicated.) If it’s the relative intensity, then, all else equal, a pain at the top end of an animal’s range is going to be very motivating, even if the pain has a phenomenal feel comparable to a human experiencing a very mild muscle spasm. If it’s absolute intensity, then, all else equal, a pain like that won’t be very motivating. I’m not sure what the right view is here, but the relative view that you endorse in the comment is certainly a live option, so let’s go with that.

If it’s relative intensity that matters for motivation, then natural selection needs a reason to generate big differences in absolute intensity. (Setting aside the fact that evolution sometimes goes kinda haywire.) You suggest the fitness benefit of a fine-grained valence scale, especially for animals that face many competing pulls on their attention. I agree that the resolution of an animal’s valence scale probably matters. But it’s unclear to me how much this tells us about differences in absolute intensity.

It seems possible to be better or worse at distinguishing gradations of valenced experience. It might be the case that animals with similar intensity ranges can differ in the number of intensity levels they can distinguish. (It might also be the case that animals with different intensity ranges have a similar number of intensity levels they can distinguish.) So if there were a fitness benefit to having 100 distinguishable gradations rather than 10, evolution could either select for animals with wider ranges or select for animals with better resolutions. (Or some combination thereof.) Considerations like the Weber-Fechner law incline me toward thinking an increase in resolution would be more efficient than an increase in range (though of course there are limits to how much resolution can be increased). But at this point I’m just speculating; there’s a lot more basic research that needs to be done to get a handle on these sorts of questions.
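
The Weber-Fechner intuition can be sketched with a toy model (the Weber fractions and range ratios below are hypothetical, chosen only to illustrate the range-vs-resolution tradeoff):

```python
import math

def distinguishable_levels(range_ratio, weber_fraction):
    """Number of just-noticeable-difference steps across an intensity
    range, under a Weber-Fechner (logarithmic) model: each step is a
    fixed proportional increment of (1 + weber_fraction)."""
    return math.log(range_ratio) / math.log(1 + weber_fraction)

# Hypothetical starting point: a 10x intensity range with a 10% Weber
# fraction gives roughly 24 distinguishable gradations.
base = distinguishable_levels(10, 0.10)

# Getting 10x the gradations by widening the range alone requires
# raising the range ratio to the 10th power (10**10)...
by_range = distinguishable_levels(10**10, 0.10)

# ...whereas improving resolution only requires shrinking the Weber
# fraction by roughly a factor of 10.
by_resolution = distinguishable_levels(10, 0.0096)
```

Under this toy model, multiplying the number of distinguishable levels by 10 demands an exponential expansion of the range but only a roughly linear improvement in resolution, which is the asymmetry that makes a resolution increase look more efficient.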

Comment by Jason Schukraft on Differences in the Intensity of Valenced Experience across Species · 2020-11-05T01:31:33.272Z · EA · GW

Oh nice, that sounds really cool - definitely keep me updated!

Comment by Jason Schukraft on Differences in the Intensity of Valenced Experience across Species · 2020-11-03T20:11:24.145Z · EA · GW

Hey Peter,

Thanks for the kind words. There’s no current plan to pursue academic publication. This question comes up periodically at Rethink Priorities, and there’s a bit of disagreement about what the right strategy is here. Speaking personally, I would love to see more of my work published academically. However, thinking about strategic decisions like these is not my comparative advantage, so I’m happy to defer to others on this question, and leadership at Rethink Priorities generally isn’t keen on using researcher hours to pursue academic publication. The main reason is the time cost. According to the prevailing view at Rethink Priorities, the benefits of widening the audience and earning credibility for the organization normally don’t outweigh the time cost of pursuing publication. Of course, there are exceptions: if there are special reasons to publish academically (e.g., field-building for welfare biology), or if converting a report into an academic publication would take an unusually short time, then it might be worth it.

For now, the most plausible means by which my research will get published academically is through collaboration with others. For example, Bob Fischer recently generously offered to co-author a paper with me based on my report about differences in the subjective experience of time across species, which is now under review. He was thus able to significantly reduce the time burden on me. Naturally, I’m very open to collaboration with others in a similar vein.

Comment by Jason Schukraft on Differences in the Intensity of Valenced Experience across Species · 2020-11-02T21:54:30.146Z · EA · GW

Great, this is fantastic, thanks! Clearly there is a lot more I need to think about! I just sent you a message to arrange a chat. For anyone following this exchange, I'll try to post some more thoughts on this topic after Adam and I have talked.

Comment by Jason Schukraft on Differences in the Intensity of Valenced Experience across Species · 2020-11-02T15:36:09.984Z · EA · GW

Hey Adam,

Thanks for your comment! I agree that the distinction between the sensory and affective components of pain experience is an important one that merits more discussion. I briefly considered including such a discussion, but the report was already long and I was hoping to avoid adding another layer of complexity. My assumption was that, while it’s possible for the two components to come apart, such dissociation is rare enough that we can safely ignore it at this level of abstraction. That could be a naïve assumption, though. Even if not, you’re right that by failing to take account of the different components, I’ve introduced an ambiguity into the report. When I refer to the intensity of pain, I intend to refer to the degree of felt badness of the experience (that is, the affective component). But the sensory component can also be said to be more or less intense, and some of the literature I cite either conflates the two components or refers to sensory intensity.

I would be interested to hear more of your thoughts about the Yue article and related work. Suppose it’s true that gamma-band oscillations reliably track the sensory intensity of pain experience and that for our purposes the sensory component is morally irrelevant. If sensory intensity and affective intensity are correlated in humans, do you think it’s reasonable to assume that the components are correlated in other mammals? If so, then we can still use gamma-band oscillations as a rough proxy for the thing we care about, at least in animals neurologically similar to humans.

Basically, my main questions are:

(1) How often and under what conditions does sensory intensity come apart from affective intensity in humans?

(2) How can we use what we know about the components coming apart in humans to predict how often and under what conditions sensory intensity and affective intensity come apart in nonhuman animals?

If you’re interested, I’d love to schedule a call to talk further. This might be too big a topic to discuss easily via Forum comments.

Comment by Jason Schukraft on Differences in the Intensity of Valenced Experience across Species · 2020-11-02T14:38:35.493Z · EA · GW

Hey Michael,

I think this is an interesting idea. Unfortunately, I'm woefully ignorant about the relevant details, so it's unclear to me whether the differences between artificial neural networks and actual brains make the analogy basically useless. Still, I think it would probably be worthwhile for someone with more specialized knowledge than myself to think through the analogy roughly along the lines you've outlined and see what comes of it. I'd be happy to collaborate if anyone (including yourself) wants to take up the task.

Comment by Jason Schukraft on Differences in the Intensity of Valenced Experience across Species · 2020-10-31T14:25:02.885Z · EA · GW

There are lots of potential points of contact. The most obvious is that to determine an individual's possible intensity range of valenced experience, we have to think about the most intense (in the sense of most positive and most negative) experiences available to that individual. I don't have a view about how long-tailed the distribution of pleasures and pains is in humans, but I agree that it's a question worth investigating. And if there are differences in how long-tailed the distribution of valenced experiences is across species, that would entail differences in possible (though not necessarily characteristic) intensity range across species.

Happy to speak to something more specific if you had a particular question in mind.

Comment by Jason Schukraft on Differences in the Intensity of Valenced Experience across Species · 2020-10-31T14:13:30.836Z · EA · GW

Thanks for the clarification, Brian!

Comment by Jason Schukraft on Differences in the Intensity of Valenced Experience across Species · 2020-10-31T13:45:03.377Z · EA · GW

It’s plausible to assign split-brain patients 2x moral weight because it’s plausible that split-brain patients contain two independent morally relevant seats of consciousness. (To be clear, I’m just claiming this is a plausible view; I’m not prepared to give an all-things-considered defense of the view.) I take it to be an empirical question how much of the corpus callosum needs to be severed to generate such a split. Exploring the answer to this empirical question might help us think about the phenomenal unity of creatures with less centralized brains than humans, such as cephalopods.

Comment by Jason Schukraft on Differences in the Intensity of Valenced Experience across Species · 2020-10-31T13:44:06.188Z · EA · GW

This seems like a pretty good reason to reject a simple proportion account

To be clear, I also reject the simple proportion account. For that matter, I reject any simple account. If there’s one thing I’ve learned from thinking about differences in the intensity of valenced experience, it’s that brains are really, really complicated and messy. Perhaps that’s the reason I’m less moved by the type of thought experiments you’ve been offering in this thread. Thought experiments, by their nature, abstract away a lot of detail. But because the neurological mechanisms that govern valenced experience are so complex and so poorly understood, it’s hardly ever clear to me which details can be safely ignored. Fortunately, our tools for studying the brain are improving every year. I’m tentatively confident that the next couple decades will bring a fairly dramatic improvement in our neuroscientific understanding of conscious experience.

Comment by Jason Schukraft on Differences in the Intensity of Valenced Experience across Species · 2020-10-30T22:14:10.379Z · EA · GW

Hey Michael,

Thanks for engaging so deeply with the piece. This is a super complicated subject, and I really appreciate your perspective.

I agree that hidden qualia are possible, but I’m not sure there’s much of an argument on the table suggesting they exist. When possible, I think it’s important to try to ground these philosophical debates in empirical evidence. The split-brain case is interesting precisely because there is empirical evidence for dual seats of consciousness. From the SEP entry on the unity of consciousness:

In these operations, the corpus callosum is cut. The corpus callosum is a large strand of about 200,000,000 neurons running from one hemisphere to the other. When present, it is the chief channel of communication between the hemispheres. These operations, done mainly in the 1960s but recently reintroduced in a somewhat modified form, are a last-ditch effort to control certain kinds of severe epilepsy by stopping the spread of seizures from one lobe of the cerebral cortex to the other. For details, see Sperry (1984), Zaidel et al. (1993), or Gazzaniga (2000).

In normal life, patients show little effect of the operation. In particular, their consciousness of their world and themselves appears to remain as unified as it was prior to the operation. How this can be has puzzled a lot of people (Hurley 1998). Even more interesting for our purposes, however, is that, under certain laboratory conditions, these patients seem to behave as though two ‘centres of consciousness’ have been created in them. The original unity seems to be gone and two centres of unified consciousness seem to have replaced it, each associated with one of the two cerebral hemispheres.

Here are a couple of examples of the kinds of behaviour that prompt that assessment. The human retina is split vertically in such a way that the left half of each retina is primarily hooked up to the left hemisphere of the brain and the right half of each retina is primarily hooked up to the right hemisphere of the brain. Now suppose that we flash the word TAXABLE on a screen in front of a brain bisected patient in such a way that the letters TAX hit the left side of the retina, the letters ABLE the right side, and we put measures in place to ensure that the information hitting each half of the retina goes only to one lobe and is not fed to the other. If such a patient is asked what word is being shown, the mouth, controlled usually by the left hemisphere, will say TAX while the hand controlled by the hemisphere that does not control the mouth (usually the left hand and the right hemisphere) will write ABLE. Or, if the hemisphere that controls a hand (usually the left hand) but not speech is asked to do arithmetic in a way that does not penetrate to the hemisphere that controls speech and the hands are shielded from the eyes, the mouth will insist that it is not doing arithmetic, has not even thought of arithmetic today, and so on—while the appropriate hand is busily doing arithmetic!

So I don’t think it’s implausible to assign split-brain patients 2x moral weight.

I also think it’s possible to find empirical evidence for differences in phenomenal unity across species. There’s some really interesting work concerning octopuses. See, for example, “The Octopus and the Unity of Consciousness”. (I might write more about this topic in a few months, so stay tuned.)

As for the paper, it seems neutral between the view that the raw number of neurons firing is correlated with valence intensity (which is the view I was disputing) and the view that the proportional number of neurons firing (relative to some brain region) is correlated with valence intensity. So I’m not sure the paper really cuts any dialectical ice. (Still a super interesting paper, though, so thanks for alerting me to it!)

Comment by Jason Schukraft on Differences in the Intensity of Valenced Experience across Species · 2020-10-30T13:31:30.887Z · EA · GW

Hi Michael,

Thanks for the comment and thanks for prompting me to write about these sorts of thought experiments. I confess I’ve never felt their bite, but perhaps that’s because I’ve never understood them. I’m not sure what the crux of our disagreement is, and I worry that we might talk past each other. So I’m just going to offer some reactions, and I’ll let you tell me what is and isn’t relevant to the sort of objection you’re pursuing.

  1. Big brains are not just collections of little brains. Large brains are incredibly specialized (though somewhat plastic).

  2. At least in humans, consciousness is unified. Even if you could carve out some smallish region of a human brain and put it in a system such that it becomes a seat of consciousness, that doesn’t mean that within the human brain that region is itself a seat of consciousness. (Happy to talk in much more detail about this point if this turns out to be the crux.)

  3. Valence intensity isn’t controlled by the raw number of neurons firing. I didn’t find any neuroscience papers that suggested there might be a correlation between neuron count and valence intensity. As with all things neurological, the actual story is a lot more complicated than a simple metric like neuron count would suggest.

  4. Not sure where this fits in, but if you yoke two brains together, it seems to me you’d have two independent seats of consciousness. There’s probably some way of filling out the thought experiment such that that would not be the case, but I think the details actually matter here, so I’d have to see the filled-out thought experiment.

Comment by Jason Schukraft on Research Summary: The Subjective Experience of Time · 2020-10-25T13:51:34.970Z · EA · GW

Cool, thanks Michael, I hadn't seen that. (And thanks to Antonia as well for writing the summary!)

Comment by Jason Schukraft on Parenting: Things I wish I could tell my past self · 2020-09-17T01:13:42.135Z · EA · GW

Yes, that post is fantastic!

Comment by Jason Schukraft on Parenting: Things I wish I could tell my past self · 2020-09-16T01:19:53.870Z · EA · GW

Hey Ruth,

Unfortunately, I don't have an answer, but I just wanted to tell you that you're not alone! My wife and I both struggled with sleep deprivation for a long time. Our two kids didn't consistently sleep through the night until ~21 months. I became pretty good at stealing a 20-minute nap whenever the opportunity presented itself, but other than that, I didn't find a solution...

Comment by Jason Schukraft on Parenting: Things I wish I could tell my past self · 2020-09-15T01:36:51.315Z · EA · GW

As the parent of two young children, I was really pleased to see this post on the EA Forum.

I'll echo the bit about the importance of having support networks. Parenting is really hard in unexpected ways, and having other parents with whom to share your strange hardships is really comforting. (I have so many potty training horror stories that only other parents could possibly appreciate.)

That said, I also think it's really important to cultivate a support network of non-parent friends. It's pretty easy (at least for me, especially when I was a stay-at-home dad for 18 months) to let your kids become your whole identity. It's sometimes a relief to talk about anything but my kids, just to remind myself that I'm an independent human with his own thoughts and interests.

In addition to being full of misinformation and pseudo-science, many parenting books also give the false impression that once you reach certain milestones, parenting magically becomes super easy. I remember being convinced that as soon as my kids could sleep through the night, my job was pretty much done. In reality, parenting is a marathon, not a sprint. I don't wake up in the middle of the night anymore, but the sheer willpower that a 3-year-old can display when he doesn't want to get dressed for the day is draining in its own unique way.

Contra Michelle's experience, I did change a bit as a person, sometimes in surprising ways. (For instance, before I had kids I would watch sports for hours on the weekend, and my subjective well-being rose and fell with the fortunes of my favorite teams. For whatever reason, I've now completely lost interest in sports, and, for the life of me, I can't remember why I spent all those hours glued to the TV.)

One last thing, in case it's not obvious: parenting can be incredibly rewarding. Earlier this year my 5-year-old daughter donated, of her own volition and without pressure from me, a portion of her allowance to Evidence Action's Deworm the World Initiative. The pride I felt is pretty close to indescribable. (Obviously I helped her pick the charity, based on her goal to "help kids who aren't as lucky as I am.")

Comment by Jason Schukraft on Research Summary: The Subjective Experience of Time · 2020-08-11T01:27:24.589Z · EA · GW

Thanks, that’s a great question!

Welfare is constituted by those things that are non-instrumentally good or bad for the creature. Insofar as reflexes are unconscious, they probably are not non-instrumentally good or bad. (They are, of course, often instrumentally good; they help the creature get other things that are good for it.) Conscious experiences, on the other hand, are usually non-instrumentally good or bad. Experiences with a positive valence are non-instrumentally good; experiences with a negative valence are non-instrumentally bad. (Experiences that are perfectly neutral may not be non-instrumentally good or bad; experiences can also be instrumentally useful in a variety of ways.)

Differences in the subjective experience of time—assuming they exist—are relevant to welfare (both realized welfare and capacity for welfare) because they reflect differences in the amount of experience a creature undergoes per unit of objective time. I write about the moral importance of the subjective experience of time in this part of the first post.

You’re right that there are other aspects of temporal perception that may not be directly relevant to welfare. We already know that there are differences in temporal resolution (roughly: the rate at which a perceptual system samples information about its environment) across species. Enhanced temporal resolution may, among other things, enable faster unconscious reflexes. Naturally, the speed of a creature’s reflexes will indirectly contribute to its welfare, but those unconscious reflexes won’t be part of what constitutes the creature’s welfare. Whether or not there is a correlation between temporal resolution and the subjective experience of time is an open question, one that I explore in depth in the second post.

Hope that clarifies things a bit for you, but if not, please ask a follow-up question!

Comment by Jason Schukraft on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-06T01:32:20.993Z · EA · GW

Great, thanks Michael - that clarifies the argument for me.

Premise 1: Any observed conscious temporal resolution frequency for an individual X (within some set of possible conditions C) is a lower bound for the maximum frequency of subjective experience for X (within C).

While I think it's plausible that one's temporal resolution sets some sort of bound on one's rate of subjective experience, I just want to reiterate that I believe this is an empirical claim, not a conceptual claim. I'm open to the possibility that temporal resolution is just totally irrelevant to the subjective experience of time.

(As an aside, I think we have to be a bit careful how we (myself included) use the word 'conscious' in this context. In the post I distinguish behavioral methods for determining CFF from ERG methods for determining CFF. But even bees can be trained on the behavioral paradigm. This of course doesn't settle the question of whether they're conscious.)

Does it make sense to interpret the rate of subjective experience as a frequency, the number of subjective experiences per second? Maybe our conscious experiences are not sufficiently synchronized across our brains for such an interpretation?

This is another good question for which I don't have the answer. A related issue is whether experiences are discrete (countable) in the relevant sense. There are arguments that pull in either direction here. But, just to clarify, even if experiences are countable in the relevant sense, it would be an astounding coincidence if our experience frequency exactly matched our critical flicker-fusion frequency (i.e., 60 experiences per second).

Comment by Jason Schukraft on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-05T19:54:38.098Z · EA · GW

Hi Michael,

Thanks for the interesting argument. Before I can evaluate it, however, I'd need you to clarify your terms a bit for me. In particular, I'd need to know more about what you mean by "frequency of conscious experience." Based on my best reconstruction of the argument, it can't mean temporal resolution or rate of subjective experience.

I'll try to clarify my position a bit, in case it's helpful to you or other readers. I don't think there's an a priori connection between temporal resolution (as measured by CFF or any other method) and rate of subjective experience. If there's a correlation between the two, that's a contingent empirical fact. There is no conceptual tension between the claim that a creature consciously perceives the flicker-to-steady-glow transition at some high threshold (200 Hz vs 60 Hz for humans, say) and the claim that the creature has the same rate of subjective experience as a typical human. (Similarly, there is no conceptual tension between the claim that some creature consciously perceives the transition at the same threshold as humans but has a different rate of subjective experience.) It's tempting to think that temporal resolution is like the frame rate of a video, and that as the temporal resolution goes up or down, so too must the rate of subjective experience. But the mechanisms that govern the intake and processing of perceptual information are a lot more complicated than that, and the mechanisms that govern the subjective experience of time appear to be more complicated still.

One analogy that is sometimes helpful to me is to think of (visual) temporal resolution as a measure of motion blur. As one's temporal resolution improves, motion blur is reduced. But changes in motion blur need not have any connection to temporal experience. When I'm drunk, my motion blur greatly increases, but my rate of subjective experience doesn't change.

(Also, apologies if in elaborating my position I've missed the point of your argument. Like I said, it looks interesting, I just need to understand the terms better to evaluate it.)

Comment by Jason Schukraft on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-04T20:33:35.670Z · EA · GW

Doesn't using behavioural studies based on trained behaviour avoid this concern?

Thanks, this is a good question. The short answer is no, it doesn’t. The longer answer is a bit more complicated.

Nobody denies that differences in CFF generate differences in perceptual experience. But differences in perceptual experience are cheap. As I say in the post, the values I discuss are maximum CFF thresholds (that is, the highest CFF an individual can register in any condition). One’s actual CFF threshold is constantly shifting due to differences in things like background lighting conditions. So a light that an individual perceives as flickering in one situation may be perceived as glowing steadily in a different situation. The question is whether maximum CFF thresholds correlate with differences in subjective temporal experience.

Differences in one’s perceptual experience affect what one’s body can do unconsciously. Balancing on one foot with one’s eyes open is much easier than balancing on one foot with one’s eyes closed. The reason is that your visual system allows your body to make continual microadjustments to stay balanced.

So if differences in visual temporal resolution (as measured by CFF) confer a fitness advantage only in virtue of improvements in unconscious movements, we shouldn’t expect differences in CFF to be correlated with differences in subjective temporal experience. As I explain in the post, the temporal resolution of one’s senses doesn’t directly govern the subjective experience of time. If differences in temporal resolution correlate with differences in subjective temporal experience, it’s probably because improvements in temporal resolution make improvements in the subjective experience of time more useful (and/or vice versa).

Did the CFF estimates in your table come from behavioural studies or ERG studies, or both?

Both.

Comment by Jason Schukraft on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-04T20:10:00.336Z · EA · GW

Thanks, and apologies if the wording is unclear. To clarify, in the post I discuss both (a) situations in which it looks like a difference in CFF is not accompanied by a difference in the subjective experience of time and (b) situations in which it looks like a difference in the subjective experience of time is not accompanied by a difference in CFF.

Comment by Jason Schukraft on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-04T20:02:33.873Z · EA · GW

Yeah, sorry I define that in the previous post. Quoting from there:

I operationalize ‘characteristic and significant differences in the subjective experience of time’ as the claim that for at least half their daily waking lives, some animals maintain subjective rates of experience at least twice as fast as some other animals.

Comment by Jason Schukraft on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-04T19:59:02.496Z · EA · GW

Hi Michael,

Thanks for your comments. If temporal resolution does, under the right conditions, track the subjective experience of time, I expect it will be the temporal resolution of whichever sensory modality exerts the biggest selection pressure due to differences in the subjective experience of time. In many cases that will probably be the sensory modality with the best (fastest) temporal resolution, but that need not always be the case. As I say in the post, temporal resolution only plausibly tracks the subjective experience of time for animals in which the fitness-improving actions that the greater temporal resolution enables require conscious processing. It may be the case that for certain animals, improvements in temporal resolution in one sense enable actions that increase fitness without conscious processing, while improvements in temporal resolution in a different sense enable actions that increase fitness only with additional conscious processing.

In short, I unfortunately don’t expect there to be any simple rule that will show which measures of temporal resolution are best at tracking differences in the subjective experience of time in all circumstances. In determining whether and to what degree a particular measure of temporal resolution might track differences in the subjective experience of time within a group of species, we’ll need to pay attention to the context in which evolutionary pressures were likely to exert an influence on temporal resolution and temporal experience for the species in the group.

Comment by Jason Schukraft on The Subjective Experience of Time: Welfare Implications · 2020-07-30T01:02:02.999Z · EA · GW

Hi Matt,

Yes, the reference is to people reporting that time appears to slow down during life-threatening events, such as fighter pilots ejecting from their jets and rock climbers suffering serious falls. People on certain psychedelic drugs also sometimes report that time seems to stretch out. I discuss these reports in more detail in this section.

Comment by Jason Schukraft on The Subjective Experience of Time: Welfare Implications · 2020-07-28T15:23:54.922Z · EA · GW

Thanks Michael, good question. I think the key issue is that, as far as we can tell, there is no single brain region responsible for temporal experience. And because neuronal firing regimes differ so dramatically across brain regions, we can't assign overall neuronal firing rates and compare them across species.

Admittedly, this is also somewhat of an issue for some of the other neurological proxies I've identified. (For instance, as I mention in the post, axonal conduction velocity varies pretty significantly throughout the central nervous system.)

To be clear, this doesn't tell us how often signals are sent, just how long it takes a signal to get from one point to another, and an upper bound on how often signals can be sent and received?

Correct. But at least for mammals, we know that homologous brain regions in different animals all fire at roughly the same rate. On the other hand, interneuronal distance does vary across mammals (and even more so across vertebrates). If there are differences in temporal experience across species, I wouldn't expect mammals to have a uniform rate of subjective experience. So it seems to me that interneuronal distance is likely to be a more informative (though still very imperfect) metric than neuronal firing rate.

Comment by Jason Schukraft on The Subjective Experience of Time: Welfare Implications · 2020-07-28T01:38:23.853Z · EA · GW

This is a really good question for which I don't yet have a clear answer, despite thinking about it for a fair amount of time.

For our purposes, the morally significant differences in sensory collection, processing, and integration are those differences that affect the phenomenal duration (or quality, for that matter) of the experience.

At various points in the post I appeal to an analogy between the subjective experience of time and a movie played at various speeds. But that's not actually a good metaphor. Perceptual processing and integration is extraordinarily complicated. Our brains take in a huge range of information across our different senses, and this information comes in at different speeds. Different parts of the brain process and integrate this information in different ways, modulating the integration for differences in the speed with which different modalities deliver information, eventually presenting us with what appears to be a unified cross-sensory model of our environment. In principle at least, it seems as if the different steps in this complicated chain of events could be run at different speeds, and it's still unclear to me what the effect would be on conscious experience.

Comment by Jason Schukraft on The Subjective Experience of Time: Welfare Implications · 2020-07-28T01:23:19.409Z · EA · GW

Yeah, and just to reiterate what I say in the post: CFF is a visual measure, so comparing animals that inhabit environments of characteristically different luminances is not advised. Most (though possibly not all) of the CFF variation between the crustaceans in the spreadsheet and the insects in the spreadsheet can be explained by differences in the extent to which the different animals rely on vision to interact with the world.

Comment by Jason Schukraft on The Subjective Experience of Time: Welfare Implications · 2020-07-28T01:16:51.001Z · EA · GW

It's a weird phenomenon. If targets A and B are presented 100 ms apart, both are likely to be correctly identified. If targets A and B are presented 700 ms apart, both are likely to be correctly identified. But if targets A and B are presented ~300 ms apart, only A is likely to be correctly identified.

It's called "attentional blink" because there is a reliable duration after an initial stimulus is presented at which you likely can't focus your attention well enough to identify a new target. Targets presented before or after the blink are easier to identify than targets presented during the blink window.

A caveat: vision science is not my area of expertise, so I would defer to an expert if one offered a clearer explanation of the phenomenon.

Comment by Jason Schukraft on The Subjective Experience of Time: Welfare Implications · 2020-07-28T01:05:31.430Z · EA · GW

Hey Michael.

Thanks as always for your many thoughtful comments.

By definition, differences in the subjective experience of time can only affect diachronic welfare (that is, welfare across time).

I agree that differences in the subjective experience of time shouldn't affect moral status--that would amount to double-counting. An individual's welfare shouldn't be worth more (less) just because she has more (less) of it.

I don't find it problematic, however, to think that differences in the subjective experience of time affect (diachronic) capacity for welfare. If two species have the same lifespan as measured in objective time, but species A has a characteristically faster rate of subjective experience than species B, then, all else equal, we should prioritize lifetime welfare improvements to species A because there is more welfare at stake.

That said, if the capacity for welfare angle is confusing or conceptually unsound, I think it's fine to frame the issue solely in terms of differences to realized welfare.

Comment by Jason Schukraft on Marcus Davis: Rethink Priorities — empirical research on neglected causes · 2020-07-13T15:11:35.869Z · EA · GW

Until it's fixed, here is the appropriate link, for anyone interested.

Comment by Jason Schukraft on What was the first being on Earth to experience suffering? · 2020-07-10T15:13:08.339Z · EA · GW

I haven't investigated this question in any detail, but a natural thought is that the emergence of sentience coincided with (either as a byproduct of or causal factor in) the Cambrian Explosion, ~540 million years ago. The capacity for valenced experience probably arose either simultaneously with the capacity for general awareness or shortly thereafter. With the capacity for valenced experience comes the capacity for negative hedonic states, which under many circumstances would constitute suffering, in my view. Depending on how robustly you're defining 'desire,' desires might also have arisen around the same time (e.g., many animals probably have the basic desire to avoid negative hedonic states).

See Michael Trestman's 2013 paper "The Cambrian Explosion and the Origins of Embodied Cognition" for more on the connection between consciousness and the Cambrian explosion. Max Carpendale also wrote about this topic on the Forum last year. For the view that consciousness emerged much later (i.e., not until mammals), see Stanislas Dehaene's 2014 book Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. For general discussion of the evolutionary origins of consciousness, see Peter Godfrey-Smith's excellent 2017 book Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness.

Comment by Jason Schukraft on How to Measure Capacity for Welfare and Moral Status · 2020-07-09T21:38:46.357Z · EA · GW

Yeah, that's an interesting idea. Sounds pretty good in principle, though I imagine fairly hard to implement in practice. AI Impacts did something similar last year when they investigated the relationship between neuron count and general intelligence. They prepared anonymized descriptions of the behavior of four species (two birds and two primates). Survey participants were asked to judge which animals were more intelligent on the basis of the anonymized descriptions. (The birds scored about the same as the primates.)

Comment by Jason Schukraft on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-07-06T14:07:08.377Z · EA · GW

Hey Michael,

Yeah, these are good questions. I think objective list theories are definitely vulnerable to anthropocentric and speciesist reasoning. It's certainly open to an objective list theorist to hold that there are non-instrumental goods that are inaccessible to humans, though I'm not aware of any examples of this in the relevant literature. This sort of question is occasionally raised in the literature on "supra-personal moral status" (i.e., moral status greater than humans). (See Douglas 2013 for a representative example. Fun fact: this literature is actually hundreds of years old; theologians used to debate whether angels had a higher moral status than humans).

Arguing over non-instrumental goods is notoriously difficult. In practice, it usually involves a lot of appealing to intuitions, especially intuitions about thought experiments. Not a fantastic methodology, to be sure, but in most cases it's unclear what the alternative would be.

Comment by Jason Schukraft on How to Measure Capacity for Welfare and Moral Status · 2020-06-19T01:26:54.057Z · EA · GW

Hi Jacob,

Thanks for your comment! I’m happy to chat in more detail if you’d like to set up a call.

While capacity and moral weight are important parameters, I think there also remains significant empirical uncertainty about actual experience as well.

I agree, and I fully support more research aimed at figuring out how to measure realized welfare. For many comparisons of specific interventions, learning more about the realized welfare of a given group of animals (and how a change in conditions would affect realized welfare) is going to be much more action-relevant than information about capacity for welfare. Considerations pertaining to capacity for welfare are most pertinent to big-picture questions about how we should allocate resources across fairly distinct types of animals (e.g., chickens vs. fish vs. crustaceans vs. insects). I think some uncertainties surrounding capacity for welfare can be resolved without fully solving the problem of how to measure realized welfare in every case. Of course, measuring realized welfare and measuring capacity for welfare share many of the same conceptual and practical hurdles, so we may be able to make progress on the two in tandem.

While this is not exactly the same task as assessing capacity for welfare and moral status, it seems analogous and illustrative of the need for a hybrid approach.

Not sure how much we disagree here. I certainly think all-things-considered expert judgments have an important role to play in assessing capacity for welfare. The post emphasizes the atomistic approach because it's a lot more complicated (and thus warrants deeper explanation) and also because it's much more likely to uncover action-relevant information that our untutored all-things-considered judgments may miss. (I liken the project to RP's previous work on invertebrate sentience, which required many subjective judgment calls but whose main contribution was ultimately a compilation of hard data on 53 empirically measurable features that are relevant to assessing whether or not an animal is sentient.)

This seems very unlikely to be the correct taxa in my opinion. First, taxa above genus or family are generally arbitrary in scope. Second, relevant traits would likely be heterogeneous within such a broad group.

Yeah, I could be convinced that order is the wrong taxonomic rank. My main concern is tractability. The scale of the potential project is already so enormous, and moving from order to family could easily add another 500-1000 hours of work. My hope was that we would be able to discern some broad trends at the level of order (which could be refined in the future). But if neither time nor money were a particular concern, then, for the reasons you outline, I think family would be a much better rank at which to investigate these questions.

Again, happy to talk more if you’re interested!

Comment by Jason Schukraft on EA Forum feature suggestion thread · 2020-06-16T17:41:33.313Z · EA · GW

I'd like the Forum to support superscript and subscript.

Comment by Jason Schukraft on EA Forum feature suggestion thread · 2020-06-16T17:41:05.518Z · EA · GW

I'd like to see the experimental sequences feature rolled out to all users.