Posts

A Primer on the Symmetry Theory of Valence 2021-09-06T11:41:56.155Z
Why I think the Foundational Research Institute should rethink its approach 2017-07-20T20:46:27.298Z
A review of what affective neuroscience knows about suffering & valence. (TLDR: Affective Neuroscience is very confused about what suffering is.) 2017-01-13T02:01:11.223Z
Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk 2016-12-09T05:47:23.087Z

Comments

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-13T23:22:12.149Z · EA · GW

Generally speaking, I agree with the aphorism “You catch more flies with honey than vinegar.”

For what it’s worth, I interpreted Gregory’s critique as an attempt to blow up the conversation and steer away from the object level, which felt odd. I’m happiest speaking of my research, and fielding specific questions about claims.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-10T23:31:29.782Z · EA · GW

Gregory, I’ll invite you to join the object-level discussion between Abby and me.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-10T23:23:20.391Z · EA · GW

Welcome, thanks for the good questions.

Asymmetries in stimuli seem crucial for getting patterns through the “predictive coding gauntlet.” I.e., that which can be predicted can be ignored. We demonstrably screen out perfect harmony fairly rapidly.

The crucial context for STV, on the other hand, isn’t symmetries/asymmetries in stimuli, but rather symmetries/asymmetries in brain activity. (More specifically, as we’re currently looking at things, in global eigenmodes.)

With a nod back to the predictive coding frame, it’s quite plausible that the stimuli that create the most internal symmetry/harmony are not themselves perfectly symmetrical, but rather have asymmetries crafted to avoid top-down predictive models. I’d expect this to vary quite a bit across different senses though, and depend heavily on internal state.

The brain may also have mechanisms which introduce asymmetries in global eigenmodes, in order to prevent getting ‘trapped’ by pleasure — I think of boredom as fairly sophisticated ‘anti-wireheading technology’ — but if we set aside dynamics, the assertion is that symmetry/harmony in the brain itself is intrinsically coupled with pleasure.

Edit: With respect to the Mosers, that’s a really cool example of this stuff. I can’t say I have answers here, but as a punt, I’d suspect the “orthogonal neural coding of similar but distinct memories” is going to revolve around some pretty complex frequency regimes, and we may not yet be able to say exact things about how ‘consonant’ or ‘dissonant’ these patterns are to each other. My intuition is that this result about the golden mean being the optimal ratio for non-interaction will end up intersecting with the Mosers’ work. That said, I wonder whether STV would assert that some sorts of memories are ‘hedonically incompatible’ due to their encodings being dissonant? Basically, as memories get encoded, the oscillatory patterns they’re encoded with could subtly form a network which determines what sorts of new memories can form and/or which sorts of stimuli we enjoy and which we don’t. But this is pretty hand-wavy speculation…
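To gesture at why the golden-mean result is even plausible, here is a toy number-theoretic sketch (my own gloss, not the cited paper's method): coupled oscillators tend to mode-lock onto low-order rational frequency ratios, and the golden ratio is, in a precise sense, the ratio that resists rational approximation the hardest.

```python
# Toy sketch (not the cited paper's method): score how easily a frequency
# ratio can be approximated by a small whole-number ratio. Low scores mean
# easy mode-locking (strong interaction); high scores mean the ratio resists
# low-order resonance, which is the "non-interaction" intuition.

def worst_case_resonance(ratio, max_q=30):
    """Smallest value of q^2 * |ratio - p/q| over denominators q <= max_q."""
    best = float("inf")
    for q in range(1, max_q + 1):
        p = round(ratio * q)
        best = min(best, q * q * abs(ratio - p / q))
    return best

ratios = {
    "perfect fifth (3:2)": 1.5,
    "equal-tempered semitone (C vs C#)": 2 ** (1 / 12),
    "golden ratio (phi)": (1 + 5 ** 0.5) / 2,
}

for name, r in ratios.items():
    print(f"{name:35s} score: {worst_case_resonance(r):.3f}")

# Expected ordering: the fifth scores 0 (it IS a low-order rational), the
# semitone scores low-ish, and phi scores highest, i.e. it is the ratio
# least prone to low-order resonance.
```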

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-09T22:20:18.230Z · EA · GW

Hi Abby, I understand. We can just make the best of it.

1a. Yep, definitely. Empirically we know this is true from e.g. Kringelbach and Berridge’s work on hedonic centers of the brain; what we’d be interested in looking into would be whether these areas are special in terms of network control theory.

1c. I may be getting ahead of myself here: the basic approach to testing STV we intend is looking at dissonance in global activity. Dissonance between brain regions likely contributes to this ‘global dissonance’ metric. I’m also interested in measuring dissonance within smaller areas of the brain as I think it could help improve the metric down the line, but we definitely wouldn’t need to at this point.

1d. As a quick aside, STV says that ‘symmetry in the mathematical representation of phenomenology corresponds to pleasure’. We can think of that as ‘core STV’. We’ve then built neuroscience metrics around consonance, dissonance, and noise that we think can be useful for proxying symmetry in this representation; we can think of that as a looser layer of theory around STV, something that doesn’t have the ‘exact truth’ expectation of core STV. When I speak of dissonance corresponding to suffering, it’s part of this looser second layer.

To your question — why would STV be true? — my background is in the philosophy of science, so I’m perhaps more ready to punt to this domain. I understand this may come across as somewhat frustrating or obfuscating from the perspective of a neuroscientist asking for a neuroscientific explanation. But this is a universal thread across philosophy of science: why is such and such true? Why does gravity exist; why is the speed of light what it is? Etc. Many things we’ve figured out about reality seem like brute facts. Usually there are some hints of elegance in the structures we’re uncovering, but we’re just not yet knowledgeable enough to see some universal grand plan. Physics deals with this a lot, and I think philosophy of mind is just starting to grapple with this in terms of NCCs. Here’s something Frank Wilczek (who won the 2004 Nobel Prize in Physics for helping formalize the strong nuclear force) shared about physics:

>... the idea that there is symmetry at the root of Nature has come to dominate our understanding of physical reality. We are led to a small number of special structures from purely mathematical considerations--considerations of symmetry--and put them forward to Nature, as candidate elements for her design. ... In modern physics we have taken this lesson to heart. We have learned to work from symmetry toward truth. Instead of using experiments to infer equations, and then finding (to our delight and astonishment) that the equations have a lot of symmetry, we propose equations with enormous symmetry and then check to see whether Nature uses them. It has been an amazingly successful strategy. (A Beautiful Question, 2015)

So — why would STV be the case? “Because it would be beautiful, and would reflect and extend the flavor of beauty we’ve found to be both true and useful in physics” is probably not the sort of answer you’re looking for, but it’s the answer I have at this point. I do think all the NCC literature is going to have to address this question of ‘why’ at some point.

4. We’re ultimately opportunistic about what exact format of neuroimaging we use to test our hypotheses, but fMRI checks a lot of the boxes (though not all). As you say, fMRI is not a great paradigm for neurotech; we’re looking at e.g. headsets by Kernel and others, and also digging into the TUS (transcranial ultrasound) literature for more options.

5. Cool! I’ve seen some big reported effect sizes and I’m generally pretty bullish on neurofeedback in the long term; Adam Gazzaley‘s Neuroscape is doing some cool stuff in this area too. 

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-08T16:53:27.692Z · EA · GW

Good catch; there’s plenty that our glossary does not cover yet. This post is at 70 comments now, and I can just say I’m typing as fast as I can!

I pinged our engineer (who has taken the lead on the neuroimaging pipeline work) about details, but as the collaboration hasn’t yet been announced I’ll err on the side of caution in sharing.

To Michael — here’s my attempt to clarify the terms you highlighted:

  • Neurophysiological models of suffering try to dig into the computational utility and underlying biology of suffering

-> existing theories talk about what emotions ‘do’ for an organism, and what neurochemicals and brain regions seem to be associated with suffering

  • symmetry

Frank Wilczek calls symmetry ‘change without change’. A limited definition is that it’s a measure of the number of ways you can rotate a picture and still get the same result. You can rotate a square 90 degrees, 180 degrees, or 270 degrees and get something identical; you can rotate a circle by any amount and get something identical. Thus we’d say circles have more rotational symmetries than squares (which have more than rectangles, etc.); there’s a tiny worked example of counting these rotations at the end of this list

  • harmony

Harmony has been in our vocabulary a long time, but it’s not a ‘crisp’ word. This is why I like to talk about symmetry, rather than harmony — although they more-or-less point in the same direction

  • dissonance

The combination of multiple frequencies that have a high amount of interaction, but few common patterns. Nails on a chalkboard create a highly dissonant sound; playing the C and C# keys at the same time also creates a relatively dissonant sound

  • resonance as a proxy for characteristic activity

I’m not sure I can give a fully satisfying definition here that doesn’t just reference CSHW; I’ll think about this one more.

  • Consonance Dissonance Noise Signature

A way of mathematically calculating how much consonance, dissonance, and noise there is when we add different frequencies together. This is an algorithm developed at QRI by my co-founder, Andrés 

  • self-organizing systems

A system which isn’t designed by some intelligent person, but follows an organizing logic of its own. A beehive or anthill would be a self-organizing system; no one’s in charge, but there’s still something clever going on

  • Neural Annealing

In November 2019 I released an essay describing the brain as a self-organizing system. Basically, “when the brain is in an emotionally intense state, change is easier”, similar to how metal becomes easier to reshape once it heats up and starts to melt

  • full neuroimaging stack

All the software we need to do an analysis (and specifically, the CSHW analysis), from start to finish

  • precise physical formalism for consciousness

A perfect theory of consciousness, which could be applied to anything. Basically a “consciousness meter”

  • STV gives us a rich set of threads to follow for clear neurofeedback targets, which should allow for much more effective closed-loop systems, and I am personally extraordinarily excited about the creation of technologies that allow people to “update toward wholesome”,

Ah yes this is a litttttle bit dense. Basically, one big thing holding back neurotech is we don’t have good biomarkers for well-being. If we design these biomarkers, we can design neurofeedback systems which work better (not sure how familiar you are with neurofeedback)
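Since the ‘symmetry’ entry above is easier to show than to tell, here’s a tiny toy script (my own illustration, nothing QRI-specific) that counts the rotational symmetries described there:

```python
# Toy example: count how many whole-degree rotations map a shape's vertex set
# onto itself -- Wilczek's "change without change".
import math

def rotational_symmetries(vertices):
    """Count rotations in [0, 360) degrees that leave the vertex set unchanged."""
    original = {(round(x, 6), round(y, 6)) for (x, y) in vertices}
    count = 0
    for deg in range(360):
        a = math.radians(deg)
        rotated = {(round(x * math.cos(a) - y * math.sin(a), 6),
                    round(x * math.sin(a) + y * math.cos(a), 6))
                   for (x, y) in vertices}
        if rotated == original:
            count += 1
    return count

square    = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
rectangle = [(2, 1), (-2, 1), (-2, -1), (2, -1)]

print("square:   ", rotational_symmetries(square))     # 4  (0, 90, 180, 270 degrees)
print("rectangle:", rotational_symmetries(rectangle))  # 2  (0 and 180 degrees)
# A circle is unchanged by *any* rotation, so it has the most symmetry of all.
```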

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-08T16:16:51.593Z · EA · GW

Hi Abby, thanks for the questions. I have direct answers to 2,3,4, and indirect answers to 1 and 5.

1a. Speaking of the general case, we expect network control theory to be a useful frame for approaching questions of why certain sorts of activity in certain regions of the brain are particularly relevant for valence. (A simple story: hedonic centers of the brain act as ‘tuning knobs’ toward or away from global harmony. This would imply they don’t intrinsically create pleasure and suffering, merely facilitate these states.) This paper from the Bassett lab is the best intro I know of to this.

1b. Speaking again of the general case, asynchronous firing isn’t exactly identical to the sort of dissonance we’d identify as giving rise to suffering: asynchronous firing could be framed as uncorrelated firing, or ‘non-interacting frequency regimes’. There’s a really cool paper asserting that the golden mean is the optimal frequency ratio for non-interaction, and some applications to EEG work, in case you’re curious. What we’re more interested in is frequency combinations that are highly interacting, and lacking a common basis set. An example would be playing the C and C# keys on a piano. This lens borrows more from music theory and acoustics (e.g. Helmholtz, Sethares) than traditional neuroscience, although it lines up with some work by e.g. Buzsáki (Rhythms of the Brain); Friston has also done some cool work here on frequencies, communication, and birdsong, although I’d have to find the reference.

1c. Speaking again of the general case, naively I’d expect dissonance somewhere in the brain to induce dissonance elsewhere in the brain. I’d have to think about what reference I could point to here, as I don’t know if you’ll share this intuition, but a simple analogy would be many people walking in a line: if someone trips, more people might trip; chaos begets chaos.

1d. Speaking, finally, of the specific case, I admit I have only a general sense of the structure of the brain networks in question and I’m hesitant to put my foot in my mouth by giving you an answer I have little confidence in. I’d probably punt to the general case, and say if there’s dissonance between these two regions, depending on the network control theory involved, it could be caused by dissonance elsewhere in the brain, and/or it could spread to elsewhere in the brain: i.e. it could be both cause and effect.

2&3. The harmonic analysis we’re most interested in depends on accurately modeling the active harmonics (eigenmodes) of the brain. EEG doesn’t directly model eigenmodes; to infer eigenmodes we’d need fairly accurate source localization. It could be that there are alternative ways to test STV without modeling brain eigenmodes, ways that EEG could support. I hope that’s the case, and I hope we find it, since EEG is certainly a lot easier to work with than fMRI.

I.e. we’re definitely not intrinsically tied to source localization, but currently we just don’t see a way to get clean enough abstractions upon which we could compute consonance/dissonance/noise without source localization.

4. Usually we can, and usually it’s much better than trying to measure it with some brain scanner! The rationale for pursuing this line of research is that existing biomarkers for mood and well-being are pretty coarse. If we can design a better biomarker, it’ll be useful for e.g. neurotech wearables. If your iPhone can directly measure how happy you are, you can chart that, correlate it with behaviors and interventions, and so on. “What you can measure, you can manage.” It could also lead to novel therapies and other technologies, and that’s probably what I’m most viscerally excited about. There are also more ‘sci-fi’ applications such as using this to infer the experience of artificial sentience.

5. This question is definitely above my pay grade; I take my special edge here to be helping build a formal theory and more accurate biomarkers for suffering, rather than public policy (e.g. Michael D. Plant’s turf). I do suspect however that some of the knowledge gained from better biomarkers could help inform emotional wellness best practices, and these best practices could be used by everyone, not just people getting scanned. I also think some therapies that might arise out of having better biomarkers could heal some sorts of trauma more-or-less permanently, so the scanning would just need to be a one-time thing, not continuous. But this gets into the weeds of implementation pretty quickly.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-08T14:53:50.266Z · EA · GW

Hi Samuel, I think it’s a good thought experiment. One prediction I’ve made is that one could make an agent such as that, but it would be deeply computationally suboptimal: it would be a system that maximizes disharmony/dissonance internally, but seeks out consonant patterns externally. Possible to make but definitely an AI-complete problem.

Just as an idle question, what do you suppose the natural kinds of phenomenology are? I think this can be a generative place to think about qualia in general.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-08T13:39:09.031Z · EA · GW

Hi Abby,

I feel we’ve been in some sense talking past each other from the start. I think I bear some of the responsibility for that, based on how my post was written (originally for my blog, and more as a summary than an explanation).

I’m sorry for your frustration. I can only say I’m not intentionally trying to frustrate you, but that we appear to have very different styles of thinking and writing and this may have caused some friction, and I have been answering object-level questions from the community as best I can.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-08T13:30:06.613Z · EA · GW

I really appreciate you putting it like this, and endorse everything you wrote. 

I think sometimes researchers can get too close to their topics and collapse many premises and steps together; they sometimes sort of ‘throw away the ladder’ that got them where they are, to paraphrase Wittgenstein. This can make it difficult to communicate to some audiences. My experience on the forum this week suggests this may have happened to me on this topic. I’m grateful for the help the community is offering on filling in the gaps.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-08T13:19:35.704Z · EA · GW

Hi Samuel,

I’d say there’s at least some diversity of views on these topics within QRI. When I introduced STV in PQ, I very intentionally did not frame it as a moral hypothesis. If we’re doing research, best to keep the descriptive and the normative as separate as possible. If STV is true it may make certain normative frames easier to formulate, but STV itself is not a theory of morality or ethics.

One way to put this is that when I wear my philosopher’s hat, I’m most concerned about understanding what the ‘natural kinds’ (in Plato’s terms) of qualia are. If valence is a natural kind (similar to how a photon or electromagnetism are natural kinds), that’s important knowledge about the structure of reality. My sense is that ‘understanding what reality’s natural kinds are’ is prior to ethics: first figure out what is real, and then everything else (such as ethics and metaethics) becomes easier.

In terms of specific ethical frames, we do count among QRI some deeply committed hedonistic utilitarians. I see deep value in that frame although I would categorize myself as closer to a virtue ethicist.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-08T07:59:33.178Z · EA · GW

Hi all, I messaged some with Holly a bit about this, and what she shared was very helpful. I think a core part of what happened was a mismatch of expectations: I originally wrote this content for my blog and QRI’s website, and the tone and terminology was geared toward “home team content”, not “away team content”. Some people found both the confidence and somewhat dense terminology offputting, and I think that’s reasonable of them to raise questions. As a takeaway, I‘ve updated that crossposting involves some pitfalls and intend to do things differently next time.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-08T06:01:18.066Z · EA · GW

I take Andrés’s point to be that there’s a decently broad set of people who took a while to see merit in STV, but eventually did. One can say it’s an acquired taste, something that feels strange and likely wrong at first, but is surprisingly parsimonious across a wide set of puzzles. Some of our advisors approached STV with significant initial skepticism, and it took some time for them to come around. That there are at least a few distinguished scientists who like STV isn’t proof it’s correct, but may suggest withholding some forms of judgment.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-08T05:31:55.412Z · EA · GW

Andrés’s STV presentation to Imperial College London’s psychedelics research group is probably the best public resource I can point to on this right now. I can say after these interactions it’s much more clear that people hearing these claims are less interested in the detailed structure of the philosophical argument, and more in the evidence, and in a certain form of evidence. I think this is very reasonable and it’s something we’re finally in a position to work on directly: we spent the last ~year building the technical capacity to do the sorts of studies we believe will either falsify or directly support STV.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-08T05:21:00.038Z · EA · GW

Hi Holly, I’d say the format of my argument there would be enumeration of claims, not e.g. trying to create a syllogism. I’ll try to expand and restate those claims here:

A very important piece of this is assuming there exists a formal structure (formalism) to consciousness. If this is true, STV becomes a lot more probable. If it isn’t, STV can’t be the case.

Integrated Information Theory (IIT) is the most famous framework for determining the formal structure of an experience. It does so by looking at the causal relationships between components of a system; the more a system’s parts demonstrate ‘integration’ (a technical, mathematical term that tries to capture how much a system’s parts interact with each other), the more conscious the system is.

I didn’t make IIT, I don’t know if it’s true, and I actually suspect it might not be true (I devoted a section of Principia Qualia to explaining IIT, and another section to critiques of IIT). But it’s a great example of an attempt to formalize phenomenology, and I think the project or overall frame of IIT (the idea of consciousness being the sort of thing that one can apply formal mathematics to) is correct even if its implementation (integration) isn’t.

You can think of IIT as a program. Put in the details of how a system (such as a brain) is put together, and it gives you some math that tells you what the system is feeling. 

You can think of STV as a way to analyze this math. STV makes a big jump in that it assumes the symmetry of this mathematical object corresponds to how pleasurable the experience it represents is. This is a huge, huge, huge jump, and cannot be arrived at by deduction; none of my premises forces this conclusion. We can call it an educated guess. But it is my best educated guess after thinking about this topic for about 7 years before posting my theory. I can say I’m fully confident the problem is super important, and I’m optimistic this guess is correct, for many reasons, though many of these reasons are difficult to put into words. My co-founder Andrés also believes in STV; his way of describing things is often helpfully different from mine, and he recently posted his own description of this, so I also encourage you to read his comment.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-08T01:04:04.092Z · EA · GW

Just a quick comment in terms of comment flow: there’s been a large amount of editing of the top comment, and some of the replies that have been posted may not seem to follow the logic of the comment they‘re attached to. If there are edits to a comment that you wish me to address, I’d be glad if you made a new comment. (If you don’t, I don’t fault you but I may not address the edit.)

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-08T00:43:52.520Z · EA · GW

Hi Charles, I think several people (myself, Abby, and now Greg) were put in some pretty uncomfortable positions across these replies. By posting, I open myself to replies, but I was pretty surprised by some of the energy of the initial comments (as apparently were others; both Abby and I edited some of our comments to be less confrontational, and I’m happy with and appreciate that).

Happy to answer any object level questions you have that haven’t been covered in other replies, but this remark seems rather strange to me.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-07T23:39:48.606Z · EA · GW

Hi Michael, I appreciate the kind effortpost, as per usual. I’ll do my best to answer.

  1. This is a very important question. To restate it in several ways: what kind of thing is suffering? What kind of question is ‘what is suffering’? What would a philosophically satisfying definition of suffering look like? How would we know if we saw it? Why does QRI think existing theories of suffering are lacking? Is an answer to this question a matter of defining some essence, or defining causal conditions, or something else?

Our intent is to define phenomenological valence in a fully formal way, with the template being physics: we wish to develop our models such that we can speak of pain and pleasure with all the clarity, precision, and rigor with which we currently describe photons and quarks and fields.

This may sound odd, but physics is a grand success story of formalization, and we essentially wish to apply the things that worked in physics, to phenomenology. Importantly, physics has a strong tradition of using symmetry considerations to inform theory. STV borrows squarely from this tradition (see e.g. my write up on Emmy Noether).

Valence is subjective as you note, but that doesn’t mean it’s arbitrary; there are deep patterns in which conditions and sensations feel good, and which feel bad. We think it’s possible to create a formal system for the subjective. Valence and STV are essentially the pilot project for this system. Others such as James and Husserl have tried to make phenomenological systems, but we believe they didn’t have all the pieces of the puzzle. I’d offer  our lineages page for what we identify as ‘the pieces of the puzzle’; these are the shoulders we’re standing on to build our framework.

2. I see the question. Also, thank you for your work on the Happier Lives Institute; we may not interact frequently but I really like what you’re doing.

The significance of a fully rigorous theory of valence might not be fully apparent, even to the people working on it. Faraday and Maxwell formalized electromagnetism; they likely did not foresee their theory being used to build the iPhone. However, I suspect that they had deep intuitions that there’s something deeply useful in understanding the structure of nature, and perhaps they wouldn’t be as surprised as their contemporaries. We also hold intuitions as to the applications of a full theory of valence.

The simplest would be, it would unlock novel psychological and psychiatric diagnostics. If there is some difficult-to-diagnose nerve pain, or long covid type bodily suffering, or some emotional disturbance that is difficult to verbalize, well, this is directly measurable in principle with STV. This wouldn’t replace economics and psychology, as you say, but it would augment them.

Longer term, I’m reminded of the (adapted) phrase, “what you can measure, you can manage.” If you can reliably measure suffering, you can better design novel interventions for reducing it. I could see a validated STV as the heart of a revolution in psychiatry, and some of our work (Neural Annealing, Wireheading Done Right) is aimed at possible shapes this might take.

3. Aha, an easy question :) I’d point you toward our web glossary

To your question, “Finally, and perhaps most importantly, I really not sure what it could even mean to represent consciousness/ valence as a mathematic shape“ — this is perhaps an overly-fancy way of saying that we believe consciousness is precisely formalizable. The speed of light is precisely formalizable; the UK tax rate is precisely formalizable; the waveform of an mp3 is precisely formalizable, and all of these formalizations can be said to be different ‘mathematical shapes’. To say something does not have a ‘mathematical shape’ is to say it defies formal analysis. 

Thanks again for your clear and helpful questions.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-07T22:48:56.759Z · EA · GW

Hi Seb, I appreciate the honest feedback and kind frame.

I can say that it’s difficult to write a short piece that will please a diverse audience, but saying so would be ducking my responsibility as the writer.

You might be interested in my reply to Linch which notes that STV may be useful even if false; I would be surprised if it were false but it wouldn’t be an end to qualia research, merely a new interesting chapter.

I spoke with the team today about data, and we just got a new batch this week we’re optimistic has exactly the properties we’re looking for (meditative cessations, all 8 jhanas in various orders, DTI along with the fMRI). We have a lot of people on our team page but to this point QRI has mostly been fueled by volunteer work (I paid myself my first paycheck this month, after nearly five years) so we don’t always have the resources to do everything we want to do as fast as we want to do it, but I’m optimistic we’ll have something to at least circulate privately within a few months.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-07T22:37:22.206Z · EA · GW

Hi Linch, that’s very well put. I would also add a third possibility (c), which is “STV is false but generative.” — I explore this a little here, with the core thesis summarized in this graphic:

I.e., STV could be false in a metaphysical sense, but insofar as the brain is a harmonic computer (a strong reframe of CSHW), it could be performing harmonic gradient descent. Fully expanded, there would be four cases:

STV true, STHR true

STV true, STHR false

STV false, STHR true

STV false, STHR false

Of course, ‘true and false’ are easier to navigate if we can speak of absolutes; STHR is a model, and ‘all models are wrong; some are useful.’

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-07T14:23:33.371Z · EA · GW

This is in fact the claim of STV, loosely speaking; that there is an identity relationship here. I can see how it would feel like an aggressive claim, but I’d also suggest that positing identity relationships is a very positive thing, as they generally offer clear falsification criteria. Happy to discuss object-level arguments as presented in the linked video.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-07T12:24:22.737Z · EA · GW

Thanks for adjusting your language to be nicer. I wouldn’t say we’re overwhelmingly confident in our claims, but I am overwhelmingly confident in the value of exploring these topics from first principles, and although I wish I had knockout evidence for STV to share with you today, that would be Nobel Prize tier and I think we’ll have to wait and see what the data brings. For the data we would identify as provisional support, this video is likely the best public resource at this point: 

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-07T10:22:33.746Z · EA · GW

I’d say that’s a fair assessment — one wrinkle that isn’t a critique of what you wrote, but seems worth mentioning, is that it’s an open question if these are the metrics we should be optimizing for. If we were part of academia, citations would be the de facto target, but we have different incentives (we’re not trying to impress tenure committees). That said, the more citations the better of course.

As you say, if STV is true, it would essentially introduce an entirely new subfield. It would also have implications for items like AI safety and those may outweigh its academic impact. The question we’re looking at is how to navigate questions of support, utility, and impact here: do we put our (unfortunately rather small) resources toward academic writing and will that get us to the next step of support, or do we put more visceral real-world impact first (can we substantially improve peoples’ lives? How much and how many?), or do we go all out towards AI safety?

It’s of course possible to be wrong; I’m also understanding it’s possible to be right, but take the wrong strategic path and run out of gas. Basically I’m a little worried that racking up academic metrics like citations is less a panacea than it might appear, and we’re looking to hedge our bets here.

For what it’s worth, we’ve been interfacing with various groups working on emotional wellness neurotech and one internal metric I’m tracking is how useful a framework STV is to these groups; here’s Jay Sanguinetti explaining STV to Shinzen Young (first part of the interview):

https://open.spotify.com/episode/6cI9pZHzT9sV1tVwoxncWP?si=S1RgPs_CTYuYQ4D-adzNnA&dl_branch=1

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-07T09:49:14.371Z · EA · GW

Thanks valence. I do think the ‘hits-based giving’ frame is important to develop, although I understand it doesn’t have universal support as some of the implications may be difficult to navigate.

And thanks for appreciating the problem; it’s sometimes hard for me to describe how important the topic feels and all the reasons for working on it.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-07T09:36:54.086Z · EA · GW

Hi Linch, cool idea.

I’d suggest that 100 citations can be a rather large number for papers, depending on what reference class you put us in, 3000 larger still; here’s an overview of the top-cited papers in neuroscience for what it’s worth: https://www.frontiersin.org/articles/10.3389/fnhum.2017.00363/full

Methods papers tend to be among the most highly cited, and e.g. Selen Atasoy’s original work on CSHW has been cited 208 times, according to Google Scholar. Some more recent papers are at significantly less than 100, though this may climb over time.

Anyway my sense is (1) is possible but depends on future direction, (2) is unlikely, (3) is likely, (4) is unlikely (high confidence).

Perhaps a better measure of success could be expert buy-in. I.e., does QRI get endorsements from distinguished scientists who themselves fit criteria (1) and/or (2)? Likewise, technological usefulness, e.g. has STV directly inspired the creation of some technical device that is available to buy or is used in academic research labs? I’m much more optimistic about these criteria than citation counts, and by some measures we’re already there.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-07T08:29:10.978Z · EA · GW

Hi Abby, to give a little more color on the data: we’re very interested in CSHW as it gives us a way to infer harmonic structure from fMRI, which we’re optimistic is a significant factor in brain self-organization. (This is still a live hypothesis, not established fact; Atasoy is still proving her paradigm, but we really like it.)

We expect this structure to be highly correlated with global valence, and to show strong signatures of symmetry/harmony during high-valence states. The question we’ve been struggling with as we’ve been building this hypothesis is “what is a signature of symmetry/harmony?” — there’s a bit of research from Stanford (Chon) here on quantifying consonance in complex waveforms and some cool music theory based on Helmholtz’s work, but this appears to be an unsolved problem. Our “CDNS” approach basically looks at pairwise relationships between harmonics to quantify the degree to which they’re in consonance or dissonance with each other. We’re at the stage here where we have the algorithm, but need to validate it on audio samples first before applying it too confidently to the brain.
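For concreteness, here is a toy sketch of what “pairwise consonance/dissonance over a set of frequency components” can look like. To be clear, this is my own illustration built on a standard Plomp-Levelt-style roughness curve, not Andrés’s actual CDNS algorithm; the function names and constants are mine.

```python
# Toy sketch only -- NOT QRI's actual CDNS algorithm. It illustrates the
# general shape of the idea: extract dominant frequency components of a
# signal, score every pair for roughness with a Plomp-Levelt-style curve,
# and summarize how consonant vs dissonant the overall spectrum is.
import numpy as np

def pl_roughness(f1, f2, a1=1.0, a2=1.0):
    """Plomp-Levelt-style roughness of two partials (0 = smooth, ~1 = maximally rough)."""
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * f_lo + 19.0)           # critical-bandwidth scaling
    x = s * (f_hi - f_lo)
    return a1 * a2 * (np.exp(-3.5 * x) - np.exp(-5.75 * x)) * 5.56  # peak ~1

def spectrum_peaks(signal, sr, n_peaks=6):
    """Return the n_peaks strongest frequency components as (freq, amplitude)."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    idx = np.argsort(spec)[-n_peaks:]
    amps = spec[idx] / spec[idx].max()
    return list(zip(freqs[idx], amps))

def mean_pairwise_roughness(signal, sr):
    peaks = spectrum_peaks(signal, sr)
    scores = [pl_roughness(f1, f2, a1, a2)
              for i, (f1, a1) in enumerate(peaks)
              for (f2, a2) in peaks[i + 1:]]
    return float(np.mean(scores))

# Sanity check on audio, as described above: a major triad should score as
# less rough than a tight semitone cluster.
sr, t = 44100, np.linspace(0, 1, 44100, endpoint=False)
tone = lambda f: np.sin(2 * np.pi * f * t)
triad   = tone(261.6) + tone(329.6) + tone(392.0)   # C, E, G
cluster = tone(261.6) + tone(277.2) + tone(293.7)   # C, C#, D
print("major triad roughness:     ", round(mean_pairwise_roughness(triad, sr), 3))
print("semitone cluster roughness:", round(mean_pairwise_roughness(cluster, sr), 3))
```

The idea would be to validate something along these lines on audio first, then swap the audio spectrum for brain eigenmode amplitudes, which is exactly where the validation question above comes in.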

There’s also a question of what datasets are ideal for the sort of thing we’re interested in. Extreme valence datasets are probably the most promising, states of extreme pleasure or extreme suffering. We prefer datasets involving extreme pleasure, for two reasons:

(1) We viscerally feel better analyzing this sort of data than states of extreme suffering;

(2) fMRI’s time resolution is such that the best results will come from mental states with high structural stability. We expect this structural stability to be much higher during pleasure than suffering.

As such we’ve been focusing on collecting data from meditative jhana states, and from MDMA states. There might be other states that involve reliable good emotion that we can study, but these are the best we’ve found conceptually so far.

Lastly, there’s been the issue of neuroimaging pipelines and CSHW. Atasoy‘s work is not open source, so we had to reimplement her core logic (big thanks to Patrick here) and we ended up collaborating with an external group on a project to combine this core logic with a neuroimaging packaging system. I can’t share all the details here as our partner doesn’t want to be public about their involvement yet but this is thankfully wrapping up soon.

I wish we had a bunch of deeply analyzed data we could send you in direct support of STV! And I agree with you that is the ideal and you’re correct to ask for it. Sadly we don’t at this point, but I’m glad to say a lot of the preliminaries have now been taken care of and things are moving. I hope my various comments here haven’t come across as disrespectful (and I sincerely apologize if they have; not my intention, but if that’s been your interpretation I accept it, sorry!); there’s just a lot of high-context stuff here that’s hard to package up into something that’s neat and tidy, and overall what clarity we’ve been able to find on this topic has been very hard-won.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-07T08:02:09.702Z · EA · GW

Hi Gregory, I’ll own that emoticon. My intent was not to belittle, but to show I’m not upset and I‘m actually enjoying the interaction. To be crystal clear, I have no doubt Hoskin is a sharp scientist and cast no aspersions on her work. Text can be a pretty difficult medium for conveying emotions (things can easily come across as either flat or aggressive).

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-07T07:48:17.020Z · EA · GW

Hi Abby, to be honest the parallel between free-energy-minimizing systems and dissonance-minimizing systems is a novel idea we’re playing with (or at least I believe it’s novel - my colleague Andrés coined it, to my knowledge) and I’m not at full liberty to share all the details before we publish it. I think it’s reasonable to doubt this intuition, and we’ll hopefully be assembling more support for it soon.

To the larger question of neural synchrony and STV, a good collection of our argument and some available evidence would be our talk to Robin Carhart-Harris’ lab: 

(I realize an hour-long presentation is a big ‘ask’; don’t feel like you need to watch it, but I think this shares what we can share publicly at this time)

>I agree neuroimaging is extremely messy and discouraging, but you’re the one posting about successfully  building an fmri analysis pipeline to run this specific analysis to support your theory. I am very annoyed that your response to my multiple requests for any empirical data to support your theory is you basically saying “science is hard”, as opposed to "no experiment, dataset, or analysis is perfect, but here is some empirical evidence that is at least consistent with my theory."

One of my takeaways from our research is that neuroimaging tooling is in fairly bad shape overall. I’m frankly surprised we had to reimplement an fMRI analysis pipeline in order to start really digging into this question, and I wonder how typical our experience here is.

One of the other takeaways from our work is that it’s really hard to find data that’s suitable for fundamental research into valence; we just got some MDMA fMRI+DTI data that appears very high quality, so we may have more to report soon. I’m happy to talk about what sorts of data are, vs are not, suitable for our research and why; my hands are a bit tied with provisional data at this point (sorry about that, wish I had more to share)

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-07T07:24:11.915Z · EA · GW

Hi Harrison, that’s very helpful. I think it’s a challenge to package fairly technical and novel research into something that’s both precise and intuitive. Definitely agree that “harmony” is an ambiguous concept.

One of the interesting aspects of this work is it does directly touch on issues of metaphysics and ontology: what are the natural kinds of reality? What concepts ‘carve reality at the joints’? Most sorts of research can avoid dealing with these questions directly, and just speak about observables and predictions. But since part of what we’re doing is to establish valence as a phenomenological natural kind, we have to make certain moves, and these moves may raise certain yellow flags, as you note, since often when these moves are made there’s some philosophical shenanigans going on. That said, I’m happy with the overall direction of our work, which has been steadily more and more empirical.

One takeaway that I do hope I can offer is the deeply philosophically unsatisfactory nature of existing answers in this space. Put simply, no one knows what pleasure and suffering are, or at least no one has definitions that are coherent across all the domains in which they’d like to define them. This is an increasing problem as we tackle e.g. problems of digital sentience and fundamental questions of AI alignment. I’m confident in our research program, but even more confident that the questions we’re trying to grapple with are important to try to address directly, and that there’s no good ‘default hypothesis’ at present.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-06T23:30:56.623Z · EA · GW

I’m glad to hear you feel good about your background and are filled with confidence in yourself and your field. I think the best work often comes from people who don’t at first see all the challenges involved in doing something, because often those are the only people who even try. 

At first I was a little taken aback by your tone, but to be honest I’m a little amused by the whole interaction now.

The core problem with EEG is that the most sophisticated analyses depend on source localization (holographic reconstruction of brain activity), and accurate source localization from EEG remains an unsolved problem, at least at the resolution and confidence we’d need. In particular we’ve looked at various measures of coherence as applied to EEG and found them all wanting in various ways. I notice some backtracking on your criticism of CSHW. ;) It’s a cool method, not without downsides, but it occupies a cool niche. I have no idea what your research is about, but it might be useful for you to learn about for some purposes.

I’m glad you‘re reading more of our ‘back issues’ as it were. We have some talks on our YouTube channel as well (including the NA presentation to Friston), although not all of our work on STV is public yet.

If you share what your research is about, and any published work, I think it’d help me understand where your critiques are coming from a little better. Totally up to you though.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-06T21:55:30.899Z · EA · GW

Hi Abby, thanks for the clear questions. In order:

  1. In brief, asynchrony levies a complexity and homeostatic cost that harmony doesn’t. A simple story here is that dissonant systems shake themselves apart; we can draw a parallel between dissonance in the harmonic frame and free energy in the predictive coding frame. 
  2. We work with all the high-quality data we can get our hands on. We do have hd-EEG data of jhana meditation, but EEG data as you may(?) know is very noisy and ‘NCC-style’ research with EEG is a methodological minefield.
  3. We know and like Graziano. I’ll share the idea of using Princeton facilities with the team.

To be direct, years ago I felt as you did about the simplicity of the scientific method in relation to neuroscience; “Just put people in an fMRI, have them do things, analyze the data; how hard can it be?” — experience has cured me of this frame, however. I’ve learned that neuroimaging data pipelines are often held together by proverbial duct tape, neuroimaging is noisy, the neural correlates of consciousness frame is suspect and existing philosophy of mind is rather bonkers, and to even say One True Thing about the connection between brain and mind is very hard (and expensive) indeed. I would say I expect you to be surprised by certain realities of neuroscience as you complete your PhD, and I hope you can turn that into determination to refactor the system towards elegance, rather than being progressively discouraged by all the hidden mess.

:)

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-06T21:34:04.659Z · EA · GW

Edit: probably an unhelpful comment

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-06T20:30:17.223Z · EA · GW

Hi Jpmos, really appreciate the comments. To address the question of evidence, this is a fairly difficult epistemological situation but we’re working with high-valence datasets from Daniel Ingram & Harvard, and Imperial College London (jhana data, and MDMA data, respectively) and looking for signatures of high harmony. 

Neuroimaging is a pretty messy thing; there are no shortcuts to denoising data, and we are highly funding-constrained, so I’m afraid we don’t have any peer-reviewed work published on this yet. I can say that initial results seem fairly promising and we hope to have something under review in 6 months. There is a wide range of tacit evidence that stimulation patterns with higher internal harmony produce higher valence than dissonant patterns (basically: music feels good, nails on a chalkboard feel bad), but this is in a sense ‘obvious’ and only circumstantial evidence for STV.

Happy to ‘talk shop’ if you want to dig into details here.

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-06T20:21:04.910Z · EA · GW

Hi Harrison, appreciate the remarks. My response would be more-or-less an open-ended question: do you feel this is a valid scientific mystery? And, what do you feel an answer would/should look like? I.e., correct answers to long-unsolved mysteries might tend to be on the weird side, but there’s “useful generative clever weird” and “bad wrong crazy timecube weird”. How would you tell the difference?

Comment by MikeJohnson on A Primer on the Symmetry Theory of Valence · 2021-09-06T20:16:50.583Z · EA · GW

Hi Abby, I’m happy to entertain well-meaning criticism, but it feels as though your comment rests fairly heavily on credentialism and does not seem to offer any positive information, nor does it feel like high-level criticism (“their actual theory is also bad”). If your background is as you claim, I’m sure you understand the nuances of “proving” an idea in neuroscience, especially with regard to NCCs (neural correlates of consciousness) — neuroscience is also large enough that “I published a peer-reviewed fMRI paper in a mainstream journal” isn’t a particularly ringing endorsement of domain knowledge in affective neuroscience. If you do have domain knowledge sufficient to take a crack at the question of valence I’d be glad to hear your ideas.

For a bit of background to theories of valence in neuroscience I’d recommend my forum post here - it goes significantly deeper into the literature than this primer.

Again, I’m not certain you read my piece closely, but as mentioned in my summary, most of our collaboration with British universities has been with Imperial (Robin Carhart-Harris’s lab, though he recently moved to UCSF) rather than Oxford, although Kringelbach has a great research center there and Atasoy (creator of the CSHW reference implementation, which we independently reimplemented) does her research there, so we’re familiar with the scene.

Comment by MikeJohnson on All Possible Views About Humanity's Future Are Wild · 2021-07-14T01:17:30.265Z · EA · GW

I like this theme a lot! 

In looking at longest-term scenarios, I suspect there might be useful structure and constraints available if we take seriously the idea that consciousness is a likely optimization target of sufficiently intelligent civilizations. I offered the following on Robin Hanson's blog:

Premise 1: Eventually, civilizations progress until they can engage in megascale engineering: Dyson spheres, etc.

Premise 2: Consciousness is the home of value: Disneyland with no children is valueless. 
Premise 2.1: Over the long term we should expect at least some civilizations to fall into the attractor of treating consciousness as their intrinsic optimization target.

Premise 3: There will be convergence on the view that some qualia are intrinsically valuable, and on which sorts of qualia those are.

Conjecture: A key piece of evidence for discerning the presence of advanced alien civilizations will be megascale objects which optimize for the production of intrinsically valuable qualia.

--

Essentially: I think formal consciousness research could generate a new heuristic for both how to parse cosmological data for intelligent civilizations, and what longest-term future humanity may choose for itself.

Physicalism seems plausible, and the formulation of physicalism I most believe in (dual-aspect monism) has physics and phenomenology as two sides of the same coin. As Tegmark notes, "humans ... aren't the optimal solution to any well-defined physics problem." Similarly, humans aren't the optimal solution to any well-defined phenomenological problem.

I can't say I know for sure we'll settle on filling the universe with such an "optimal solution", nor would I advocate anything at this point, but if we're looking for starting threads for how to conceptualize the longest-term optimization targets of humanity, a little consciousness research might go a long way.

More: 

https://opentheory.net/2019/09/whats-out-there/

https://opentheory.net/2019/06/taking-monism-seriously/

https://opentheory.net/2019/02/simulation-argument/

Comment by MikeJohnson on Qualia Research Institute: History & 2021 Strategy · 2021-01-28T07:13:17.622Z · EA · GW

Hi Daniel,

Thanks for the reply! I am a bit surprised at this:

Getting more clarity on emotional valence does not seem particularly high-leverage to me. What's the argument that it is?

The quippy version is that, if we’re EAs trying to maximize utility, and we don’t have a good understanding of what utility is, more clarity on such concepts seems obviously insanely high-leverage. I’ve written about the specific relevance to FAI here: https://opentheory.net/2015/09/fai_and_valence/

Relevance to building a better QALY here: https://opentheory.net/2015/06/effective-altruism-and-building-a-better-qaly/

And I discuss object-level considerations on how better understanding of emotional valence could lead to novel therapies for well-being here: https://opentheory.net/2018/08/a-future-for-neuroscience/ and https://opentheory.net/2019/11/neural-annealing-toward-a-neural-theory-of-everything/

(On mobile, pardon the formatting.)

Your points about sufficiently advanced AIs obsoleting human philosophers are well-taken, though I would touch back on my concern that we won’t have particular clarity on philosophical path-dependencies in AI development without doing some of the initial work ourselves, and these questions could end up being incredibly significant for our long-term trajectory — I gave a talk about this for MCS that I’ll try to get transcribed (in the meantime I can share my slides if you’re interested). I’d also be curious to flip your criticism and ping your models for a positive model for directing EA donations — is the implication that there are no good places to donate to, or that narrow-sense AI safety is the only useful place for donations? What do you think the highest-leverage questions to work on are? And how big are your ‘metaphysical uncertainty error bars’? What sorts of work would shrink these bars?

Comment by MikeJohnson on Qualia Research Institute: History & 2021 Strategy · 2021-01-26T09:06:50.061Z · EA · GW

Hi Daniel,

Thanks for the remarks! Prioritization reasoning can get complicated, but to your first concern:

Is emotional valence a particularly confused and particularly high-leverage topic, and one that might plausibly be particularly conducive to getting clarity on? I think it would be hard to argue in the negative on the first two questions. Resolving the third question might be harder, but I’d point to our outputs and increasing momentum. I.e. one can levy your skepticism on literally any cause, and I think we hold up excellently in a relative sense. We may have to jump to the object-level to say more.

To your second concern, I think a lot about AI and ‘order of operations’. Could we postulate that some future superintelligence might be better equipped to research consciousness than we mere mortals? Certainly. But might there be path-dependencies here such that the best futures happen if we gain more clarity on consciousness, emotional valence, the human nervous system, the nature of human preferences, and so on, before we reach certain critical thresholds in superintelligence development and capacity? Also — certainly.

Widening the lens a bit, qualia research is many things, and one of these things is an investment in the human-improvement ecosystem, which I think is a lot harder to invest effectively in (yet also arguably more default-safe) than the AI improvement ecosystem. Another ‘thing’ qualia research can be thought of as being is an investment in Schelling point exploration, and this is a particularly valuable thing for AI coordination.

Even if we grant that the majority of humanity's future trajectory will be determined by AGI trajectory — which seems plausible to me — I think it’s reasonable to argue that qualia research is one of the highest-leverage areas for positively influencing AGI trajectory and/or the overall AGI safety landscape.

Comment by MikeJohnson on New book — "Suffering-Focused Ethics: Defense and Implications" · 2020-06-01T10:05:04.864Z · EA · GW

Congratulations on the book! I think long works are surprisingly difficult and valuable (both to author and reader) and I'm really happy to see this.

My intuition on why there's little discussion of core values is a combination of "a certain value system [is] tacitly assumed" and "we avoid discussing it because ... discussing values is considered uncooperative." To wit, most people in this sphere are computationalists, and the people here who have thought the most about this realize that computationalism inherently denies the possibility of any 'satisfyingly objective' definition of core values (and suffering). Thus it's seen as a bit of a faux pas to dig at this -- the tacit assumption is, the more digging that is done, the less ground for cooperation there will be. (I believe this stance is unnecessarily cynical about the possibility of a formalism.)

I look forward to digging into the book. From a skim, I would just say I strongly agree about the badness of extreme suffering; when times are good we often forget just how bad things can be. A couple quick questions in the meantime:

  • If you could change peoples' minds on one thing, what would it be? I.e. what do you find the most frustrating/pernicious/widespread mistake on this topic?
  • One intuition pump I like to use is: 'if you were given 10 billion dollars and 10 years to move your field forward, how precisely would you allocate it, and what do you think you could achieve at the end?'

Comment by MikeJohnson on Reducing long-term risks from malevolent actors · 2020-05-05T09:40:11.625Z · EA · GW

A core 'hole' here is metrics for malevolence (and related traits) visible to present-day or near-future neuroimaging.

Briefly -- Qualia Research Institute's work around connectome-specific harmonic waves (CSHW) suggests a couple angles:

(1) proxying malevolence via the degree to which the consonance/harmony in your brain is correlated with the dissonance in nearby brains;
(2) proxying empathy (lack of psychopathy) by the degree to which your CSHWs show integration/coupling with the CSHWs around you.

Both of these analyses could be done today, given sufficient resource investment. We have all the algorithms and in-house expertise.
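As a minimal sketch of what proxy (1) might look like computationally, assuming per-subject consonance/dissonance time series have already been extracted from a CSHW-style analysis (all names here are illustrative, not an existing QRI API):

```python
# Minimal sketch of proxy (1): given a consonance time series for subject A
# and a dissonance time series for each nearby subject B, ask whether A's
# internal harmony tends to rise when others' dissonance rises. Assumes the
# per-subject time series were already extracted upstream; variable and
# function names are illustrative only.
import numpy as np

def malevolence_proxy(own_consonance, others_dissonance):
    """Mean Pearson correlation between one subject's consonance time series
    and each nearby subject's dissonance time series. Positive values mean
    this subject's internal harmony co-occurs with others' dissonance."""
    own = np.asarray(own_consonance, dtype=float)
    corrs = []
    for other in others_dissonance:
        other = np.asarray(other, dtype=float)
        n = min(len(own), len(other))
        corrs.append(np.corrcoef(own[:n], other[:n])[0, 1])
    return float(np.mean(corrs))

# Toy usage with synthetic data (two "nearby" subjects, 200 time points):
rng = np.random.default_rng(0)
own = rng.normal(size=200)
others = [0.6 * own + 0.4 * rng.normal(size=200),   # dissonance tracking A's harmony
          rng.normal(size=200)]                      # unrelated subject
print("malevolence proxy:", round(malevolence_proxy(own, others), 2))
```

Proxy (2) would run the same way in reverse, correlating (or measuring coupling between) harmonic time series across subjects rather than harmony-against-dissonance.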

Background about the paradigm: https://opentheory.net/2018/08/a-future-for-neuroscience/

Comment by MikeJohnson on Intro to Consciousness + QRI Reading List · 2020-04-09T01:58:09.504Z · EA · GW

Very important topic! I touch on McCabe's work in Against Functionalism (EA forum discussion); I hope this thread gets more airtime in EA, since it seems like a crucial consideration for long-term planning.

Comment by MikeJohnson on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-30T01:54:50.987Z · EA · GW

Hey Pablo! I think Andres has a few up on Metaculus; I just posted QRI's latest piece of neuroscience here, which has a bunch of predictions (though I haven't separated them out from the text):

https://opentheory.net/2019/11/neural-annealing-toward-a-neural-theory-of-everything/

Comment by MikeJohnson on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T03:50:43.830Z · EA · GW

We’ve looked for someone from the community to do a solid ‘adversarial review’ of our work, but we haven’t found anyone who feels qualified to do so and whom we trust to do a good job, aside from Scott, and he's not available at this time. If anyone comes to mind, do let me know!

Comment by MikeJohnson on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T01:53:48.064Z · EA · GW

I think this is a great description. "What happens if we seek out symmetry gradients in brain networks, but STV isn't true?" is something we've considered, and determining ground-truth is definitely tricky. I refer to this scenario as the "Symmetry Theory of Homeostatic Regulation" - https://opentheory.net/2017/05/why-we-seek-out-pleasure-the-symmetry-theory-of-homeostatic-regulation/ (mostly worth looking at the title image, no need to read the post)

I'm (hopefully) about a week away from releasing an update to some of the things we discussed in Boston, basically a unification of Friston/Carhart-Harris's work on FEP/REBUS with Atasoy's work on CSHW -- will be glad to get your thoughts when it's posted.

Comment by MikeJohnson on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T01:37:14.931Z · EA · GW

I think we actually mostly agree: QRI doesn't 'need' you to believe that qualia are real, that symmetry in some formalism of qualia corresponds to pleasure, or that there is any formalism of qualia to be found at all. If we find some cool predictions, you can strip out any mention of qualia from them and use them within the functionalist frame. As you say, the existence of some cool predictions won't force you to update your metaphysics (your understanding of which things are ontologically 'first-class objects').

But you won't be able to copy our generator by doing that -- the thing that created those novel predictions -- and I think that's significant; it gets into questions of elegance metrics and philosophy of science.

I actually think the electromagnetism analogy is a good one: skepticism is always defensible, and in 1600, 1700, 1800, 1862, and 2018, people could be skeptical of whether there's 'deep unifying structure' behind these things we call static, lightning, magnetism, shocks, and so on. But it was much more reasonable to be skeptical in 1600 than in 1862 (the year Maxwell's Equations were published), and more reasonable in 1862 than it was in 2018 (the era of the iPhone).

Whether there is 'deep structure' in qualia is of course an open question in 2019. I might suggest STV is analogous to a very early draft of Maxwell's Equations: not a full systematization of qualia, but something that can be tested and built on in order to get there -- and one that potentially ties together many disparate observations into a unified frame and offers novel, falsifiable predictions (which seem incredibly worth trying to falsify!).

I'd definitely push back on the frame of dualism, although this might be a terminology nitpick: my preferred frame here is monism: https://opentheory.net/2019/06/taking-monism-seriously/ - and perhaps this somewhat addresses your objection that 'QRI posits the existence of too many things'.

Comment by MikeJohnson on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T00:19:45.334Z · EA · GW

Thanks Matthew! I agree issues of epistemology and metaphysics get very sticky very quickly when speaking of consciousness.

My basic approach is 'never argue metaphysics when you can argue physics' -- the core strategy we have for 'proving' we can mathematically model qualia is to make better and more elegant predictions using our frameworks, with predicting pain/pleasure from fMRI data as the pilot project.
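
To sketch what that pilot could look like in outline (hypothetical variable names and a generic model choice -- not our actual analysis code), the comparison of interest is how much cross-validated variance in self-reported valence a single symmetry/consonance summary per scan explains, versus a standard regional-activity baseline:

```python
# Hedged outline of the "predict valence from fMRI" pilot comparison; variable
# names and the ridge-regression choice are illustrative assumptions.
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Hypothetical inputs, one row per scan:
#   regional_activity: (n_scans, n_regions) mean regional activations
#   consonance_score:  (n_scans, 1) symmetry/consonance summary per scan
#   valence_reports:   (n_scans,)  self-reported pleasantness
def compare_models(regional_activity, consonance_score, valence_reports):
    baseline_r2 = cross_val_score(Ridge(), regional_activity, valence_reports,
                                  cv=5, scoring="r2").mean()
    symmetry_r2 = cross_val_score(Ridge(), consonance_score, valence_reports,
                                  cv=5, scoring="r2").mean()
    return {"baseline_r2": baseline_r2, "symmetry_r2": symmetry_r2}
```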

One way to frame this strategy is that at various points in time, it was completely reasonable to be skeptical that things like lightning, static, and magnetic lodestones could be modeled mathematically. This was true to an extent even after Faraday and Maxwell formalized things. But over time, with more and more unusual predictions and fantastic inventions built around electromagnetic theory, that skepticism became less and less reasonable.

My metaphysical arguments are in my 'Against Functionalism' piece, and to date I don't believe any commenters have addressed my core claims:

https://forum.effectivealtruism.org/posts/FfJ4rMTJAB3tnY5De/why-i-think-the-foundational-research-institute-should#6Lrwqcdx86DJ9sXmw

But I think metaphysical arguments change distressingly few people's minds. Experiments, and especially technology, change people's minds. So that's what our limited optimization energy is pointed at right now.

Comment by MikeJohnson on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T00:09:15.313Z · EA · GW

QRI is tackling a very difficult problem, as is MIRI. It took many, many years for MIRI to gather external markers of legitimacy. My inside view is that QRI is on the path to gaining those markers; for people paying attention to what we're doing, I think there's enough of a trajectory right now to judge us positively. I think these markers will be obvious from the 'outside view' within a short number of years.

But even without these markers, I'd poke at your position from a couple angles:

I. Object-level criticism is best

First, I don't see evidence you've engaged with our work beyond very simple pattern-matching. You note that "I also think that I'm somewhat qualified to assess QRI's work (as someone who's spent ~100 paid hours thinking about philosophy of mind in the last few years), and when I look at it, I think it looks pretty crankish and wrong." But *what* looks wrong? Doing something new will inevitably pattern-match to crankish, regardless of whether it is crankish, so in terms of your rationale as stated, I don't put too much stock in your pattern detection (and perhaps you shouldn't either). If we want to avoid falling into (1) 'negative-sum status attack' interactions and/or (2) hypercriticism of anything fundamentally new -- neither of which is good for QRI, for MIRI, or for community epistemology -- then object-level criticisms (and a calibrated distaste for low-information criticisms) seem pretty necessary.

Also, we do a lot more than just philosophy, and we try to keep our assumptions about the Symmetry Theory of Valence separate from our neuroscience -- STV can be wrong and our neuroscience can still be correct/useful. That said, empirically the neuroscience often does 'lead back to' STV.

Some things I'd offer for critique:

https://opentheory.net/2018/08/a-future-for-neuroscience/#

https://opentheory.net/2018/12/the-neuroscience-of-meditation/

https://www.qualiaresearchinstitute.org/research-lineages

(you can also watch our introductory video for context, and perhaps a 'marker of legitimacy', although it makes very few claims https://www.youtube.com/watch?v=HetKzjOJoy8 )

I'd also suggest that the current state of philosophy, and especially philosophy of mind and ethics, is very dismal. I give my causal reasons for this here: https://opentheory.net/2017/10/rescuing-philosophy/ -- I'm not sure whether or not you're anchored to the assumption that existing theories in philosophy of mind are reasonable.


II. What's the alternative?

If there's one piece I would suggest engaging with, it's my post arguing against functionalism. I think your comments presuppose that functionalism is reasonable and/or the only possible approach, and that the efforts QRI is putting into building an alternative are therefore wasted. I strongly disagree with this; as I noted in my Facebook reply,

>Philosophically speaking, people put forth analytic functionalism as a theory of consciousness (and implicitly a theory of valence?), but I don't think it works *qua* a theory of consciousness (or ethics or value or valence), as I lay out here: https://forum.effectivealtruism.org/.../why-i-think-the... -- This is more-or-less an answer to some of Brian Tomasik's (very courageous) work, and to sum up my understanding, I don't think anyone has made or seems likely to make 'near mode' progress, e.g. especially of the sort that would be helpful for AI safety, under the assumption of analytic functionalism.

https://forum.effectivealtruism.org/posts/FfJ4rMTJAB3tnY5De/why-i-think-the-foundational-research-institute-should#6Lrwqcdx86DJ9sXmw

----------

I always find in-person interactions more amicable & high-bandwidth -- I'll be back in the Bay early December, so if you want to give this piece a careful read and sit down to discuss it I'd be glad to join you. I think it could have significant implications for some of MIRI's work.

Comment by MikeJohnson on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T15:50:29.065Z · EA · GW

Thanks, added.

Comment by MikeJohnson on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T08:52:55.780Z · EA · GW

Buck -- for an internal counterpoint you may want to discuss QRI's research with Vaniver. We had a good chat about what we're doing at the Boston SSC meetup, and Romeo attended a MIRI retreat earlier in the summer and had some good conversations with him there also.

To put a finer point on this, I find the "crank philosophy" frame a bit questionable if you're using only a thin-slice outside view and not following what we're doing. One could probably use similar heuristics to pattern-match MIRI as "crank philosophy" too (and many people have likely done exactly this to MIRI, unfortunately).

Comment by MikeJohnson on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T08:35:56.588Z · EA · GW

We're pretty up-front about our empirical predictions; if critics would like to publicly bet against us we'd welcome this, as long as it doesn't take much time away from our research. If you figure out a bet we'll decide whether to accept it or reject it, and if we reject it we'll aim to concisely explain why.

Comment by MikeJohnson on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T08:22:29.867Z · EA · GW

For fuller context, here is my reply to Buck's skepticism about the 80% number during our back-and-forth on Facebook. As a specific comment, the number is loosely held, more of a conversation-starter than anything else. As a general comment, I'm skeptical of publicly passing judgment on my judgment based on one offhand (and unanswered -- it was not engaged with) comment on Facebook. Happy to discuss details in a context where we'll actually talk to each other. :)

--------------my reply from the Facebook thread a few weeks back--------------

I think the probability question is an interesting one -- one frame is to ask: what is the leading alternative to STV?

At its core, STV assumes that if we have a mathematical representation of an experience, the symmetry of this object will correspond to how pleasant the experience is. The latest addition to this (what we're calling 'CDNS') assumes that consonance under Selen Atasoy's harmonic analysis of brain activity (connectome-specific harmonic waves, CSHW) is a good proxy for this in humans. This makes relatively clear predictions across all human states and could fairly easily be extended to non-human animals, including insects (anything we can infer a connectome for, and the energy distribution for the harmonics of the connectome). So generally speaking we should be able to gather a clear signal as to whether the evidence points this way or not (pending resources to gather this data -- we're on a shoestring budget).
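
To give a flavor of what 'consonance under a harmonic decomposition' could mean operationally, here's a toy sketch -- the scoring function below is an illustrative assumption rather than the exact CDNS computation -- which scores a brain state by energy-weighted pairwise consonance of its connectome harmonics, treating pairs near simple frequency ratios as more consonant:

```python
# Toy operationalization of "energy-weighted consonance" over connectome
# harmonics; the consonance function is an illustrative assumption, not the
# published CDNS algorithm.
import numpy as np
from fractions import Fraction

def pair_consonance(f1, f2, max_denominator=8):
    """Crude consonance score: higher when f1/f2 is close to a simple ratio.

    Assumes positive frequencies.
    """
    ratio = max(f1, f2) / min(f1, f2)
    simple = Fraction(ratio).limit_denominator(max_denominator)
    deviation = abs(ratio - float(simple))
    # The closer to a simple ratio (and the simpler that ratio), the higher the score.
    return 1.0 / ((1.0 + deviation) * simple.denominator)

def state_consonance(freqs, energies):
    """Energy-weighted mean pairwise consonance over a state's harmonics."""
    freqs, energies = np.asarray(freqs, float), np.asarray(energies, float)
    total, weight = 0.0, 0.0
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            w = energies[i] * energies[j]
            total += w * pair_consonance(freqs[i], freqs[j])
            weight += w
    return total / weight if weight else 0.0
```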

Empirically speaking, the competition doesn't seem very strong. As I understand it, currently the gold standard for estimating self-reports of emotional valence via fMRI uses regional activity correlations, and explains ~16% of the variance. Based on informal internal estimations looking at coherence within EEG bands during peak states, I'd expect us to do muuuuch better.
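
As a small illustration of the kind of EEG measure I have in mind (a simplified sketch, not the exact analysis we ran internally): mean magnitude-squared coherence between two channels, restricted to a single frequency band.

```python
# Simplified sketch: mean coherence between two EEG channels within one band.
from scipy.signal import coherence

def band_coherence(x, y, fs, band=(8.0, 12.0)):
    """Mean magnitude-squared coherence between x and y within `band` (Hz)."""
    f, cxy = coherence(x, y, fs=fs, nperseg=int(fs * 2))
    mask = (f >= band[0]) & (f <= band[1])
    return float(cxy[mask].mean())
```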

Philosophically speaking, people put forth analytic functionalism as a theory of consciousness (and implicitly a theory of valence?), but I don't think it works *qua* a theory of consciousness (or ethics or value or valence), as I lay out here: https://forum.effectivealtruism.org/.../why-i-think-the... -- This is more-or-less an answer to some of Brian Tomasik's (very courageous) work, and to sum up my understanding, I don't think anyone has made or seems likely to make 'near mode' progress, e.g. especially of the sort that would be helpful for AI safety, under the assumption of analytic functionalism.

So in short, I think STV is perhaps the only option that is well-enough laid out, philosophically and empirically, to even be tested, to even be falsifiable. That doesn't mean it's true, but my prior is it's ridiculously worthwhile to try to falsify, and it seems to me a massive failure of the EA and x-risk scene that resources are not being shifted toward this sort of inquiry. The 80% I gave was perhaps a bit glib, but to dig a little, I'd say I'd give at least an 80% chance of 'Qualia Formalism' being true, and given that, a 95% chance of STV being true, and a 70% chance of CDNS+CSHW being a good proxy for the mathematical symmetry of human experiences.
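
To make the conjunction explicit: 0.80 × 0.95 ≈ 0.76 for STV being true at all, and 0.80 × 0.95 × 0.70 ≈ 0.53 for the full chain including CDNS+CSHW as a good proxy.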

An obvious thing we're lacking is resources; a non-obvious thing we're lacking is good critics. If you find me too confident I'd be glad to hear why. :)

Resources:
Principia Qualia: https://opentheory.net/PrincipiaQualia.pdf
(exploratory arguments for formalism and STV laid out)
Against Functionalism: https://forum.effectivealtruism.org/.../why-i-think-the...
(an evaluation of what analytic functionalism actually gives us)
Quantifying Bliss: https://qualiacomputing.com/.../quantifying-bliss-talk.../
(Andres Gomez Emilsson's combination of STV plus Selen Atasoy's CSHW, which forms the new synthesis we're working from)
A Future for Neuroscience: https://opentheory.net/2018/08/a-future-for-neuroscience/#
(more on CSHW)

Happy to chat more in-depth about details.