## Posts

Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering? 2020-06-22T16:41:58.831Z · score: 8 (9 votes)
Physical theories of consciousness reduce to panpsychism 2020-05-07T05:04:39.502Z · score: 27 (17 votes)
Replaceability with differing priorities 2020-03-08T06:59:09.710Z · score: 17 (9 votes)
Biases in our estimates of Scale, Neglectedness and Solvability? 2020-02-24T18:39:13.760Z · score: 91 (43 votes)
[Link] Assessing and Respecting Sentience After Brexit 2020-02-19T07:19:32.545Z · score: 16 (5 votes)
Changes in conditions are a priori bad for average animal welfare 2020-02-09T22:22:21.856Z · score: 18 (11 votes)
Please take the Reducing Wild-Animal Suffering Community Survey! 2020-02-03T18:53:06.309Z · score: 24 (13 votes)
What are the challenges and problems with programming law-breaking constraints into AGI? 2020-02-02T20:53:04.259Z · score: 6 (2 votes)
Should and do EA orgs consider the comparative advantages of applicants in hiring decisions? 2020-01-11T19:09:00.931Z · score: 15 (6 votes)
Should animal advocates donate now or later? A few considerations and a request for more. 2019-11-13T07:30:50.554Z · score: 21 (8 votes)
MichaelStJules's Shortform 2019-10-24T06:08:48.038Z · score: 7 (4 votes)
Conditional interests, asymmetries and EA priorities 2019-10-21T06:13:04.041Z · score: 14 (15 votes)
What are the best arguments for an exclusively hedonistic view of value? 2019-10-19T04:11:23.702Z · score: 7 (4 votes)
Defending the Procreation Asymmetry with Conditional Interests 2019-10-13T18:49:15.586Z · score: 18 (14 votes)
Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests 2019-07-04T23:56:44.330Z · score: 10 (9 votes)

Comment by michaelstjules on The problem with person-affecting views · 2020-08-05T22:40:31.611Z · score: 2 (1 votes) · EA · GW

Also, in my view, a symmetric total view applied to preference consequentialism is the worst way to do preference consequentialism (well, other than obviously absurd approaches). I think a negative view/antifrustrationism or some mixture with a "preference-affecting view" is more plausible.

The reason is that, rather than satisfying your existing preferences, it can be better to create new preferences in you and satisfy them, against your wishes. This undermines the appeal of autonomy and subjectivity that preference consequentialism had in the first place. If, on the other hand, new preferences don't add positive value, then they can't compensate for the violation of preferences, including the violation of preferences to not have your preferences manipulated in certain ways.

I discuss these views a bit here.

Comment by michaelstjules on The problem with person-affecting views · 2020-08-05T22:18:34.241Z · score: 2 (1 votes) · EA · GW

I think giving up IIA seems more plausible if you allow that value might be essentially comparative, and not something you can just measure in a given universe in isolation. Arrow's impossibility theorem can also be avoided by giving it up. And standard intuitions when facing the repugnant conclusion itself (and hence similar impossibility theorems) seem best captured by an argument incompatible with IIA, i.e. whether or not it's permissible to add the extra people depends on whether or not the more equal distribution of low welfare is an option.

It seems like most consequentialists assume IIA without even making this explicit, and I have yet to see a good argument for IIA. At least with transitivity, there are Dutch books/money pump arguments to show that you can be exploited if you reject it. Maybe there was some decisive argument in the past that led to consensus on IIA and no one talks about it anymore, except when they want to reject it?

Another option to avoid the very repugnant conclusion but not the repugnant conclusion is to give (weak or strong) lexical priority to very bad lives or intense suffering. Center for Reducing Suffering has a few articles on lexicality. I've written a bit about how lexicality could look mathematically here without effectively ignoring everything that isn't lexically dominating, and there's also rank-discounted utilitarianism: see point 2 in this comment, this thread, or papers on "rank-discounted utilitarianism".

Comment by michaelstjules on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-05T21:38:11.173Z · score: 2 (1 votes) · EA · GW

Thanks for the interesting argument. Before I can evaluate it, however, I'd need you to clarify your terms a bit for me. In particular, I'd need to know more about what you mean by "frequency of conscious experience." Based on my best reconstruction of the argument, it can't mean temporal resolution or rate of subjective experience.

My intention was rate of subjective experience. I can rephrase Premise 1:

Premise 1: Any observed conscious temporal resolution frequency for an individual X (within some set of possible conditions C) is a lower bound for the maximum frequency of subjective experience for X (within C).

Does it make sense to interpret the rate of subjective experience as a frequency, the number of subjective experiences per second? Maybe our conscious experiences are not sufficiently synchronized across our brains for such an interpretation?

Even if it does make sense, Premise 1 could still be false. Or, even if Premise 1 is true, it could be that the actual maximum frequency of subjective experience isn't well correlated with the observed maximum temporal resolution frequency (say as measured by CFF). Maybe the gap is huge, and our max frequency of subjective experiences is millions of times faster than our max temporal resolution frequency.

It's tempting to think that temporal resolution is like the frame rate of a video, and as the temporal resolution goes up or down, so too must the rate of subjective experience. But the mechanisms that govern the intake and processing of perceptual information are a lot more complicated than that, and the mechanisms that govern the subjective experience of time appear to be more complicated still.

Premise 1 depends on interpreting temporal resolution as a lower bound on the frame rate of the "video" that is our subjective experience, although it isn't committed to any claim of correlation between temporal resolution and the rate of subjective experience.

There is no conceptual tension between the claim that a creature consciously perceives the flicker-to-steady-glow transition at some high threshold (200 Hz vs 60 Hz for humans, say) and the claim that the creature has the same rate of subjective experience as a typical human. (Similarly, there is no conceptual tension between the claim that some creature consciously perceives the transition at the same threshold as humans but has a different rate of subjective experience.)

How this could look: the 60 Hz max CFF for humans is a loose lower bound for our frequency of subjective experience, which is actually much faster; but to match an individual with a CFF of 200 Hz, our maximum frequency of subjective experience would have to be at least 200 Hz.

Comment by michaelstjules on The problem with person-affecting views · 2020-08-05T20:24:39.650Z · score: 4 (3 votes) · EA · GW

I don't assign much credence to neutrality, because I think adding bad lives is in fact bad. I prefer the procreation asymmetry, which might be stated this way:

Additional lives never make things go better, all else equal, and additional bad lives make things go worse, all else equal.

Also, you can give up the independence of irrelevant alternatives instead of transitivity. This would mean that which of two options is better might depend on what alternatives there are available to you, i.e. the ranking of outcomes depends on available options. I actually find this a fairly intuitive way to avoid the repugnant conclusion.

A few papers taking this approach to the procreation asymmetry and which avoid the repugnant conclusion:

I also have a few short arguments for asymmetry here and here in my shortform.

Comment by michaelstjules on Should we create an EA index? · 2020-08-05T01:28:49.134Z · score: 4 (2 votes) · EA · GW

Paul Christiano on divestment

Hauke Hillebrandt on mission hedging (and 7 minute talk here)

Comment by michaelstjules on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-05T00:48:46.287Z · score: 2 (1 votes) · EA · GW

It seems like there's a simple and plausible argument for a lower bound on the maximum frequency of conscious experience that doesn't depend so much on evolutionary pressures, although maybe only having one bound isn't good enough for comparisons between species.

Premise 1: If an individual X can consciously recognize that a stimulus is changing when it's changing at some frequency f, then X can (sometimes) have experiences at a frequency of at least f. In other words, X's maximum frequency of conscious experience F_X satisfies F_X ≥ f.

Premise 2: Individual X can consciously recognize that some stimulus is changing at frequency f_1, that some stimulus (possibly the same, or even in a different modality) is changing at frequency f_2, ..., and that some stimulus is changing at frequency f_n.

Conclusion: X's maximum frequency of conscious experience F_X satisfies F_X ≥ f_i for each i, and hence F_X ≥ max(f_1, ..., f_n).
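As a toy sketch of the argument's structure (all frequencies below are hypothetical, not measured values), the conclusion is just a max over the observed lower bounds:

```python
# Each consciously recognized change at frequency f_i gives a lower bound
# on X's maximum frequency of conscious experience F_X, so the best
# available lower bound is the max over all observed f_i.

observed_frequencies_hz = {
    "visual flicker": 60.0,       # hypothetical CFF-style threshold
    "auditory gap detection": 45.0,   # hypothetical
    "tactile vibration": 250.0,   # hypothetical
}

lower_bound_hz = max(observed_frequencies_hz.values())
print(f"F_X >= {lower_bound_hz} Hz")
```

Note that this only ever tightens the bound from below; no observation of this kind can give an upper bound on F_X.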

I think that the truth or falsity of Premise 1 for a given individual X shouldn't depend so finely on the evolutionary pressures their species faced, and it should be the same across species for which conscious perception works the same way in their brains, but possibly with different emphases/prioritization, i.e. differences in degree, not kind. So, if the brains of individuals X and Y differ only in number of neurons doing similar jobs or the frequency with which their neurons fire, Premise 1 should be true for both, or false for both. Enough functional and structural homology (together) would preserve the truth value of Premise 1.

For example, I'd expect Premise 1 to be true for both moles and hedgehogs, or false for both, even though moles have poor vision. I'd even guess that Premise 1 is true for (almost?) all mammals or false for (almost?) all mammals, because high-level structures like the cerebral cortex, occipital lobe, visual cortex, etc. are common across mammals. This is not a particularly informed guess, though.

Is Premise 1 too strong? I think one reason to doubt it is that if you subsample a periodic signal at some regular period, it's very rare that the subsample will be constant. For example, a periodic discrete subsample of a sinusoid with period T will only be constant if the sampling period is an integer multiple of T (apart from special phases).
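A quick numerical sketch of that subsampling point (the specific period, phase, and sampling intervals here are arbitrary choices for illustration):

```python
import math

# Sample a sinusoid of period T at a regular interval dt.
# The samples are constant essentially only when dt is an
# integer multiple of T.

def samples(T, dt, phase=0.3, n=50):
    return [math.sin(2 * math.pi * (k * dt) / T + phase) for k in range(n)]

def is_constant(xs, tol=1e-9):
    return max(xs) - min(xs) < tol

T = 1.0
print(is_constant(samples(T, dt=1.0)))   # dt = T: constant
print(is_constant(samples(T, dt=3.0)))   # dt = 3T: constant
print(is_constant(samples(T, dt=0.37)))  # generic dt: varies
```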

Is it the case that CFF and similar measures often can't be used for the frequencies in Premise 2, say, because the recognition often isn't conscious?

Comment by michaelstjules on Should we create an EA index? · 2020-08-04T16:15:45.262Z · score: 4 (2 votes) · EA · GW

I doubt that the positive impacts of investing in publicly traded companies would be measurable or significant, and it would generally cost us returns, and that loss could be better spent. Besides alexrjl's links, here's a good video on responsible/sustainable investing. It's worth taking a look at some of the comments, too.

However, there can still be benefits to having an EA index (or ETF): it could bring attention to issues, e.g. plant-based meat, AI, etc., and EAs may have different risk aversions, credences in extreme events, discount rates and time horizons from the overall market (although these probably differ by cause area). There are also other tweaks that can be made to a standard market-cap weighted index to improve risk-adjusted returns, e.g. towards the other factors in this model, like value, but there are also already indices and ETFs for these factors (although usually not handling all of them together at the same time; it's up to you to combine them).

Comment by michaelstjules on The one-minute EA Forum feedback survey · 2020-08-04T07:14:51.116Z · score: 2 (1 votes) · EA · GW

It seems like there have been a lot more posts recently, or at least the past week. It might be better if the Frontpage Posts section on the front page could fit more posts in it without having to click "Load More".

Or, maybe a small number of high-level aggregate tags could be used to organize posts into different sections on the front page, more like a traditional forum with subforums? Or users could choose their own tags to define the sections.

Comment by michaelstjules on The Subjective Experience of Time: Welfare Implications · 2020-08-04T06:19:08.076Z · score: 2 (1 votes) · EA · GW

I don't find this plausible since I'm pretty committed to unitarianism (rejecting degrees of moral status), but here's an interesting thought:

I find slower animals like turtles, sloths and snails pity-inducing (whether or not their movement speed reflects their subjective experience of time), I suppose because they seem more vulnerable and less capable. If vulnerability is an important factor in moral status, it could be the case that animals with slower subjective experiences of time deserve more weight, not less, all else equal.

Of course, being slower does have welfare implications besides how much an animal experiences; it has implications for what they experience. It's riskier to be a turtle crossing a road or a snail crossing a walking path than a squirrel doing the same. Being slower also makes animals more vulnerable to predators.

Comment by michaelstjules on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-04T05:49:02.323Z · score: 2 (1 votes) · EA · GW

On the other hand, animals as putatively similar as trout (~27 Hz) and salmon (~72 Hz), geckos (~20 Hz) and iguanas (~80 Hz), and guinea pigs (~50 Hz) and ground squirrels (~120 Hz), have drastically different CFF thresholds.

To add to this, it's surprising that the difference between the fastest and slowest rodents in the table is even greater than our own CFF: 120 − 39 = 81 Hz vs. our 60 Hz, with 39 Hz for the brown rat and 120 Hz for the golden-mantled ground squirrel.

Comment by michaelstjules on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-04T05:44:59.971Z · score: 2 (1 votes) · EA · GW

It's kind of funny because it confirms stereotypes that the leatherback sea turtle's CFF is on the low end (15 Hz vs 60 Hz for humans). I wonder what it is for sloths and snails. :P

Comment by michaelstjules on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-04T05:38:06.692Z · score: 4 (2 votes) · EA · GW

What credence would you assign to the max (fastest) of CFF-like measures over the different sensory modalities correlating with the subjective experience of time? This could account for differences in the importance of different senses.

Or maybe some other aggregate than the max?

Comment by michaelstjules on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-04T05:33:18.025Z · score: 2 (1 votes) · EA · GW

There are characteristic differences in the subjective experience of time across species

What do you mean by "characteristic" here? Significant/large? Larger than what threshold?

I would assign credence ~1 to the proposition that there are differences in subjective experience of time across species (and indeed individuals within species), since it's basically a continuous measure, because, for example, brain size and distance between neurons are continuous, and subjective experience of time should depend on those. The probability that a real number sampled from a given continuous distribution matches a specific real number is exactly 0.

Comment by michaelstjules on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-04T05:21:46.210Z · score: 2 (1 votes) · EA · GW

Moreover, other experimental evidence paints a somewhat different picture. Hagura et al. 2012 describe “a novel type of time distortion that occurs during the motor preparatory period before execution of a ballistic reaching movement. Visual stimuli presented during this period were perceived to be prolonged, relative to a control condition without reaching, and their flicker rate was perceived as slower. Moreover, the speed of visual information processing became faster, resulting in a higher detection rate of rapidly presented letters. These findings indicate that the visual processing during motor preparation is accelerated, with direct effects on perception of time” (Hagura et al. 2012: 4404). The researchers conclude that because “the time dilation, slowing down of perceived flicker frequency and the increase in letter-detection rate all occurs at the same action preparatory period, we believe that these effects are related to each other” (Hagura et al. 2012: 4405). More experiments of this type could potentially shed light on the relationship between CFF and the subjective experience of time.[40]

I'd be interested in knowing if other senses (sound, especially) are processed faster at the same time. It could be that for a reaching movement, our attention is focused primarily visually, and we only process vision faster. On the other hand, I guess CFF tasks all require attention, so maybe this can't explain it?

This also seems like an easy, cheap and ethical experiment to do.

Comment by michaelstjules on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-04T05:00:43.659Z · score: 2 (1 votes) · EA · GW

(1) differences in CFF could fail to reflect differences in the subjective experience of time, and (2) differences in the subjective experience of time could fail to be reflected in differences in CFF.

This wording here is confusing to me. With C = differences in CFF, and S = differences in subjective experience of time, this is

(1) C could fail to reflect S

(2) S could fail to be reflected in C

But these mean the same thing to me. In both cases, we're asking about differences in CFF doing the reflecting, and differences in subjective experience of time being reflected.

Did you mean that C could fail to reflect S, and S could fail to reflect C?

Also, it might be helpful for readability to separate the discussion of the two points in separate subsections.

Comment by michaelstjules on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-04T04:58:46.559Z · score: 2 (1 votes) · EA · GW

The pitch of the fly’s buzzing doesn’t change, for instance.

In situations where subjective experience of time is changed, do people actually perceive a difference in pitch? If not, maybe rather than being sensitive to each vibration, we're only sensitive to the (real-time) frequency of vibration? We don't hear individual vibrations in a constant tone; the tone sounds constant.

Comment by michaelstjules on Does Critical Flicker-Fusion Frequency Track the Subjective Experience of Time? · 2020-08-04T04:06:35.688Z · score: 2 (1 votes) · EA · GW

The degree to which CFF values give us evidence about the subjective experience of time may also depend on the behavioral plasticity of the animal in question.

(...)

Much of the sensory information that animals (including humans) absorb is processed unconsciously. Differences in the speed of unconscious reactions don’t reveal anything about subjective experience.

Doesn't using behavioural studies based on trained behaviour avoid this concern? I suppose trained behaviour might still be unconscious, but that seems less likely, especially for this task (it's not pure reaction; the goal isn't to answer faster, just to give the right answer), assuming the animal is conscious at all. Then again, even reinforced behaviours in humans may be unconscious/reflexive, e.g. flinching near someone who scares, tickles or hits you often. There's also muscle memory, which might be largely unconscious, e.g. balance, riding a bike.

Did the CFF estimates in your table come from behavioural studies or ERG studies, or both?

EDIT: In one of your footnotes:

See Ros & Biewener 2016 and Ibbotson 2017 for more on hummingbird flight stabilization. Humans also have a sensory-motor system that governs balance, and this system operates below our conscious awareness. (People don’t typically realize how many microadjustments one’s body continually makes to successfully carry a load of laundry up a flight of stairs without falling over.) In birds, flight stabilization mechanisms are governed by a homologous brain region.

Comment by michaelstjules on EA reading list: EA motivations and psychology · 2020-08-04T03:06:10.657Z · score: 3 (2 votes) · EA · GW

We are in triage every second of every day by Holly Elmore

Fear and Loathing at Effective Altruism Global 2017 by Scott Alexander

And I'm sure you can find plenty by Peter Singer, including full books. Here are a few short reads:

The Drowning Child and the Expanding Circle

Famine, affluence and morality

https://www.philosophyexperiments.com/singer/ (a questionnaire based on Singer's drowning child thought experiment)

https://schwitzsplinters.blogspot.com/2020/06/contest-winner-philosophical-argument.html

There's also his introductory TED talk.

Comment by michaelstjules on EA reading list: suffering-focused ethics · 2020-08-04T02:52:30.355Z · score: 8 (4 votes) · EA · GW

Another reading list here by Center for Reducing Suffering.

Comment by michaelstjules on EA reading list: longtermism and existential risks · 2020-08-04T02:45:28.214Z · score: 2 (3 votes) · EA · GW

Maybe a few on s-risks, which are not only of concern for those with suffering-focused views? These might be good places to start:

https://longtermrisk.org/risks-of-astronomical-future-suffering/

https://longtermrisk.org/reducing-risks-of-astronomical-suffering-a-neglected-priority/

https://longtermrisk.org/altruists-should-prioritize-artificial-intelligence/

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-07-31T20:05:41.916Z · score: 2 (1 votes) · EA · GW

Maybe an alternative to moral status for capturing "speciesist" intuitions is to give more weight to more intense experiences than the ratio scale would suggest, applied to both suffering and pleasure (whereas prioritarianism or negative-leaning utilitarianism might apply it only to suffering, or to the overall quality of a life). Some people might not trade away their peak experiences for any number of mild pleasures. This could reduce the repugnance of the repugnant conclusion (and the very repugnant conclusion, too), or even avoid it altogether if taken far enough (with lexicality, weak or strong). This isn't the same as Mill's higher and lower pleasures; we're only distinguishing them by intensity, not quality, and there need not be any kind of discontinuity.

That being said, I've come to believe that there's no fact of the matter about the degree to which one experience is better than another (for the same individual or across individuals). I was already a moral antirealist, but I'm more confident that welfare in different experiences can in principle (though not in practice) be compared ordinally as better/worse, even between species, than I am in cardinal comparisons of degree. Simon Knutsson has written about this here and here.

Comment by michaelstjules on Are we neglecting education? Philosophy in schools as a longtermist area · 2020-07-31T18:56:04.779Z · score: 2 (1 votes) · EA · GW
Comment by michaelstjules on Are we neglecting education? Philosophy in schools as a longtermist area · 2020-07-31T00:18:37.343Z · score: 11 (5 votes) · EA · GW

Another recent study on the (short-term) impacts of a classroom intervention on animal product consumption, mostly on middle school and high school students:

Educated Choices Program: An Impact Evaluation of a Classroom Intervention to Reduce Animal Product Consumption, by Christopher Bryant and Courtney Dillard

They followed up on a subset to check consumption 3-30 months after the presentation.

Comment by michaelstjules on Lukas_Gloor's Shortform · 2020-07-29T07:23:49.454Z · score: 4 (2 votes) · EA · GW
Another argument that points to "pleasure is good" is that people and many animals are drawn to things that gives them pleasure

It's worth pointing out that this association isn't perfect. See [1] and [2] for some discussion. Tranquilism allows that if someone is in some moment neither drawn to (craving) (more) pleasurable experiences nor experiencing pleasure (or as much as they could be), this isn't worse than if they were experiencing (more) pleasure. If more pleasure is always better, then contentment is never good enough, but to be content is to be satisfied, to feel that it is good enough or not feel that it isn't good enough. Of course, this is in the moment, and not necessarily a reflective judgement.

I also approach pleasure vs suffering in a kind of conditional way, like an asymmetric person-affecting view, or "preference-affecting view":

I would say that something only matters if it matters (or will matter) to someone, and an absence of pleasure doesn't necessarily matter to someone who isn't experiencing pleasure, and certainly doesn't matter to someone who does not and will not exist, and so we have no inherent reason to promote pleasure. On the other hand, there's no suffering unless someone is experiencing it, and according to some definitions of suffering, it necessarily matters to the sufferer. (A bit more on this argument here, but applied to good and bad lives.)

Comment by michaelstjules on Can High-Yield Investing Be the Most Good You Can Do? · 2020-07-27T22:29:39.456Z · score: 2 (1 votes) · EA · GW
To get a sense of how fast these annual returns compound, Cornell has concluded that $100 invested in 1988 would be worth almost $400 million in 2018. This fund has posted annual returns of 98% after fees in 2000 during the dot-com crash and 82% after fees in 2008 during the Great Recession, which indicates that it actually hedges against risk, unlike most other high-returning investment vehicles.

I think 2000 and 2008 were actually its best years, see this graph, and the post I got it from.

Abstract from the Cornell paper:

The performance of Renaissance Technologies’ Medallion fund provides the ultimate counterexample to the hypothesis of market efficiency. Over the period from the start of trading in 1988 to 2018, $100 invested in Medallion would have grown to $398.7 million, representing a compound return of 63.3%. Returns of this magnitude over such an extended period far outstrip anything reported in the academic literature. Furthermore, during the entire 31-year period, Medallion never had a negative return despite the dot.com crash and the financial crisis. Despite this remarkable performance, the fund’s market beta and factor loadings were all negative, so that Medallion’s performance cannot be interpreted as a premium for risk bearing. To date, there is no adequate rational market explanation for this performance.
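A quick sanity check of the abstract's arithmetic (63.3% compounded over the 31 years from 1988 to 2018):

```python
# $100 compounding at 63.3% per year for 31 years.
initial = 100.0
annual_return = 0.633
years = 31

final = initial * (1 + annual_return) ** years
print(f"${final:,.0f}")  # on the order of $400 million, consistent with the ~$398.7M figure
```

The small gap from the quoted $398.7M presumably reflects rounding of the 63.3% figure and year-to-year variation in returns.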
Comment by michaelstjules on Can High-Yield Investing Be the Most Good You Can Do? · 2020-07-27T21:51:22.184Z · score: 5 (3 votes) · EA · GW

What are the next best options after the Medallion Fund?

Comment by michaelstjules on Can High-Yield Investing Be the Most Good You Can Do? · 2020-07-27T21:50:55.698Z · score: 3 (2 votes) · EA · GW
First, the EA community could reach out to Renaissance Technology employees like REG reaches out to prestigious poker players. Perhaps the employees could lobby to increase the size of the fund (currently $10 billion) to accommodate EA money from Founder’s Pledge or Open Phil.

Could there be significant charitable tax incentives for them to do this? This might help make the case. There's also the more general ask, which is to allow registered charities to invest in the Medallion Fund.

Also, interestingly, some executives' political donations are almost exclusively to liberal political candidates, and others almost exclusively to conservative ones. Maybe we can convince them to cooperate/moral trade and donate to EA orgs instead of competing with each other?

https://en.wikipedia.org/wiki/Renaissance_Technologies#Campaign_contributions

Comment by michaelstjules on The Subjective Experience of Time: Welfare Implications · 2020-07-27T20:12:01.798Z · score: 5 (3 votes) · EA · GW
Critical flicker-fusion frequency (CFF) is a well-studied measure of visual temporal resolution. I have compiled a spreadsheet comparing CFF values across 70 species and 33 orders of animals.

It's surprising that the CFFs are mostly higher for the insects in the table than humans, but lower for the crustaceans and spiders. I suppose it's naive to treat invertebrates too uniformly given how large and varied a group that is.

EDIT: And, of course, as you point out in "How Considering the Subjective Experience of Time Could Influence Resource Allocation", it's naive to treat animals uniformly within each of the groupings of insects, fishes, crustaceans, etc.

For instance, swordfish have a CFF of 22 Hz; tuna have a CFF of 80 Hz. Cockroaches have a CFF of 42 Hz; honey bees have a CFF of 200 Hz.
Comment by michaelstjules on The Subjective Experience of Time: Welfare Implications · 2020-07-27T19:55:12.472Z · score: 2 (1 votes) · EA · GW
In humans, if the targets are presented in rapid succession, roughly 100 ms apart, they are both consciously processed and are likely to be correctly identified. If the targets are separated by a duration of more than ~700 ms, they are also likely to be correctly identified. However, targets presented roughly 300 ms apart, the second target is much harder to identify.

Is there a typo here or have I misunderstood? Both correctly identified at 100 ms apart, but the second harder to identify at 300 ms apart? Or are both still likely to be correctly identified at > 100 ms, just that it's much harder at 300 ms than 700 ms?

Comment by michaelstjules on The Subjective Experience of Time: Welfare Implications · 2020-07-27T19:43:21.283Z · score: 2 (1 votes) · EA · GW
The speed at which an animal’s central nervous system can send and receive signals depends on four main factors: (1) interneuronal distance, (2) transsynaptic transmission time, (3) axon diameter, and (4) axon myelination (Roth & Dicke 2017: 142).

To be clear, this doesn't tell us how often signals are sent, just how long it takes a signal to get from one point to another, and an upper bound on how often signals can be sent and received?

Another metric that might be investigated is neuronal firing rates. However, this is probably not a good proxy for the subjective experience of time.[50] Different parts of the brain fire at different rates and with different regularity. Among mammals, homologous brain regions appear to exhibit similar firing regimes despite differences in brain size (Mochizuki et al. 2016).

I'm surprised you don't think it's a good proxy (if looked at in the right parts of the brain), or at least better than speed at which signals can be sent and received, since the latter doesn't tell us how often they're actually sent and received.

Comment by michaelstjules on The Subjective Experience of Time: Welfare Implications · 2020-07-27T19:33:40.602Z · score: 8 (5 votes) · EA · GW

Do you expect that subjective experience of time differs significantly between regions of an animal's brain or between modalities/senses?

You might think that humans are doing a lot of extra computational work that slows us down and contributes to our experiences, but a lot of what matters might be happening faster at a lower level or just different part of the brain.

And, as you point out, CFF is only a visual measure.

Also, if I experience vision at a rate of X per second, and sound at Y per second, then, ignoring other senses, for my overall experience of time, should we use the max of X and Y, the sum, or something else? Maybe this doesn't matter, because the welfare-relevant measures aren't based on basic senses, but rather the pleasantness or unpleasantness of an experience (under hedonism), which might be more unified, although I'm not sure.

One argument for the sum: the experiences across modalities won't necessarily line up temporally or be fully integrated, or they might matter at a level before full integration.

One argument for the max: if they are fully integrated, and it's only what happens after full integration that matters, then each modality's rate is a lower bound on the actual rate after integration, since we know the whole brain can process at least that fast. Or, rather than the max, we should just use the range or a mean.
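To make the candidate aggregation rules concrete, here's a toy Python sketch; the per-modality rates are invented numbers, purely illustrative:

```python
# Toy illustration: candidate ways to aggregate per-modality "subjective
# rates" into one overall rate. The rate values are made up.
vision_rate = 60.0   # hypothetical visual rate, per second
sound_rate = 100.0   # hypothetical auditory rate, per second

rates = [vision_rate, sound_rate]

# Max: if full integration runs at least as fast as each modality.
aggregate_max = max(rates)
# Sum: if modalities matter pre-integration, independently.
aggregate_sum = sum(rates)
# Mean: a compromise within the [min, max] range.
aggregate_mean = sum(rates) / len(rates)

print(aggregate_max, aggregate_sum, aggregate_mean)  # 100.0 160.0 80.0
```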

Comment by michaelstjules on The Subjective Experience of Time: Welfare Implications · 2020-07-27T19:04:39.013Z · score: 3 (2 votes) · EA · GW

(In reference to Why the Subjective Experience of Time Matters)

Should welfare capacity, at least when used for moral weight, be understood as "instantaneous" rather than duration-based? I can't really imagine giving more inherent moral weight to certain animals (or humans) just because they live longer or experience more subjective time, all else equal, on top of also accumulating their welfare over their lives, since that definitely seems like double counting. (Using welfare capacity at all might already be double counting, as you've suggested in the previous post and comments, but I think it's even worse with subjective time.)

Rather, at a fundamental level, objective time should just be replaced with subjective time, and that's the only thing that should be changed in the typical ethical calculus.

I think only instantaneous welfare capacity/moral weight makes sense with empty individualism, too.

Comment by michaelstjules on Lukas_Gloor's Shortform · 2020-07-27T18:25:04.714Z · score: 3 (2 votes) · EA · GW

I think if you concede that some moral facts exist, it might be more accurate to call yourself a moral realist. The indeterminacy of morality could be a fundamental feature, allowing for many more acts to be ethically permissible (or no worse than other acts) than with a linear (complete) ranking. I think consequentialists are unusually prone to try to rank outcomes linearly.

I read this recently, which describes how moral indeterminacy can be accommodated within moral realism, although it was kind of long for what it had to say. I think expert agreement (or ideal observers/judges) could converge on moral indeterminacy; they could agree that we can't know how to rank certain options and further that there's no fact of the matter.

Comment by michaelstjules on Should we think more about EA dating? · 2020-07-25T18:01:17.186Z · score: 11 (6 votes) · EA · GW

The gender distribution is 71% male, 27% female and 2% other, according to the most recent EA survey.

Comment by michaelstjules on Utilitarianism with and without expected utility · 2020-07-25T17:56:53.715Z · score: 4 (2 votes) · EA · GW

It's worth pointing out that their theorems 2.2 and 3.5 are compatible with Rawls' difference principle/leximin/maximin (infinite risk-aversion), so their results generalize both Harsanyi's and Rawls' approaches, rather than defend utilitarianism against Rawls. They don't require continuity or cardinal welfare for these theorems, and as far as I know, continuity is not actually an axiom justifiable with Dutch books or money pumps, so I'm not sure what reason we have to believe it other than pure intuition, which is especially suspect in extreme tradeoffs (e.g. involving torture) and because of time-inconsistency in our preferences.

Continuity would of course also fail under utilitarianism with stochastic separability and infinite stakes, i.e. Pascalian problems, although I suppose one defence might be that the physical differences in outcomes are also infinite in these cases, so we might only have continuity starting from finite physical differences and extend it from there.

I don't think continuity deserves to be called a rationality axiom, and without it and cardinal welfare, the case for utilitarianism as normally conceived falls apart.

Comment by michaelstjules on What posts do you want someone to write? · 2020-07-25T06:59:14.004Z · score: 6 (3 votes) · EA · GW

On 2, see this post (a link post for this).

I also left some comments on the EA Forum post pulling out the first two theorems and the definitions to state them in a way that's hopefully a bit more accessible, skipping some unnecessary jargon and introducing notation only just before it's used, rather than all at the start so that you have to jump back. They're still pretty technical, though. Upon reflection, it probably took me more time to write the comments than it'll save people to read my comments instead of reading the parts of the paper where they're found. :/

There are also several other theorems in that paper.

Comment by michaelstjules on Utilitarianism with and without expected utility · 2020-07-25T06:47:08.604Z · score: 2 (1 votes) · EA · GW

For the variable population case, they

• add an extra welfare state $\Omega$ to represent nonexistence, without saying how it compares to other welfare states at all (e.g. totalism or person-affecting views). Prospects can include nonexistence, so you (may be able to) compare prospects with different probabilities of nonexistence.
• replace the finite constant population with an infinite set $I$ of all possible individuals and assign welfare $\Omega$ (nonexistence) to individuals who don't exist in a given welfare distribution.
• generalize the Anteriority, Reduction to Prospects and Two-Stage Anonymity conditions. Only Reduction to Prospects looks different, since rather than defining lotteries for everyone in $I$ as a whole, you require it to hold for every finite non-empty subset of $I$.
• define Omega Independence.
• generalize Theorem 2.2 for Theorem 3.5.

For a given welfare state $w$, let $\delta_w$ denote the prospect with definite welfare state $w$, with probability 1. In particular, $\delta_\Omega$ denotes definite nonexistence.

Omega Independence: For any two prospects $P$ and $Q$, and any rational probability $p \in (0, 1]$, $P \succsim Q$ if and only if $pP + (1-p)\delta_\Omega \succsim pQ + (1-p)\delta_\Omega$.

Then Theorem 3.5 is basically the same as Theorem 2.2, with the corresponding definitions, but the social preorder only exists at all if Omega Independence is satisfied, and the veil of ignorance comparisons are applied only to pairs of lotteries from a common finite subset $J$ of $I$ (which may have any individuals assigned nonexistence $\Omega$):

Theorem 3.5: Given an arbitrary individual preorder, there is at most one social preorder satisfying Anteriority, Reduction to Prospects, and Two-Stage Anonymity. When it exists, it is given by $L \succsim L'$ if and only if

$$\frac{1}{|J|} \sum_{i \in J} L_i \succsim \frac{1}{|J|} \sum_{i \in J} L'_i$$

according to the individual preorder for any finite non-empty $J \subseteq I$ such that $L$ and $L'$ are lotteries over welfare distributions on $J$ (with everyone outside $J$ assigned $\Omega$). The social preorder exists if and only if the individual preorder satisfies Omega Independence.

Personally, I like the procreation asymmetry, so I might say that $\Omega$ is strictly better than some welfare states (hence defined as negative), but either incomparable to or at least as good as all other states.

Comment by michaelstjules on Utilitarianism with and without expected utility · 2020-07-25T06:02:32.611Z · score: 2 (1 votes) · EA · GW

Abstract:

We give two social aggregation theorems under conditions of risk, one for constant population cases, the other an extension to variable populations. Intra and interpersonal welfare comparisons are encoded in a single ‘individual preorder’. The theorems give axioms that uniquely determine a social preorder in terms of this individual preorder. The social preorders described by these theorems have features that may be considered characteristic of Harsanyi-style utilitarianism, such as indifference to ex ante and ex post equality. However, the theorems are also consistent with the rejection of all of the expected utility axioms, completeness, continuity, and independence, at both the individual and social levels. In that sense, expected utility is inessential to Harsanyi-style utilitarianism. In fact, the variable population theorem imposes only a mild constraint on the individual preorder, while the constant population theorem imposes no constraint at all. We then derive further results under the assumption of our basic axioms. First, the individual preorder satisfies the main expected utility axiom of strong independence if and only if the social preorder has a vector-valued expected total utility representation, covering Harsanyi’s utilitarian theorem as a special case. Second, stronger utilitarian-friendly assumptions, like Pareto or strong separability, are essentially equivalent to strong independence. Third, if the individual preorder satisfies a ‘local expected utility’ condition popular in non-expected utility theory, then the social preorder has a ‘local expected total utility’ representation. Fourth, a wide range of non-expected utility theories nevertheless lead to social preorders of outcomes that have been seen as canonically egalitarian, such as rank-dependent social preorders. Although our aggregation theorems are stated under conditions of risk, they are valid in more general frameworks for representing uncertainty or ambiguity.
Comment by michaelstjules on Utilitarianism with and without expected utility · 2020-07-25T05:47:48.611Z · score: 2 (1 votes) · EA · GW

Teruji Thomas, one of the authors, wrote a paper for GPI with a similar theorem, called the Supervenience Theorem. There's an EA Forum post about it here. There's an EA Forum post on Harsanyi's original utilitarian theorem here, too.

I pulled out the definitions and put them together to be able to state the first theorem more compactly, introducing the notation as it's needed and skipping some unnecessary notation and jargon.

The first theorem is for the constant population case, with a finite set of individuals $I$. Welfare states come from some set $W$, and "a distribution is an assignment of welfare states to individuals", or an element of $W^I$, the set of vectors of welfare states indexed by individuals in $I$. Then,

A ‘lottery’ is a probability measure (or probability distribution or random variable) over distributions. A ‘prospect’ is a probability measure (or probability distribution or random variable) over welfare states. Each lottery determines a prospect for each individual. The ‘social preorder’ expresses a view about how good lotteries are from an impartial perspective, while the ‘individual preorder’ expresses a view about how good prospects are for individuals, allowing interpersonal comparisons. The central question for us is how the social preorder should depend upon the individual preorder.

That there's only one individual preorder that's used for everyone allows interpersonal comparisons. Welfare states can be arbitrary otherwise, even allowing incomparability between welfare states and between prospects. A preorder is just a ranking that allows incomparability; it's a transitive and reflexive relation $\succsim$ (at least as good as), and we write $P \sim Q$ if both $P \succsim Q$ and $Q \succsim P$.

• Reflexivity: $P \succsim P$ for all $P$.
• Transitivity: if $P \succsim Q$ and $Q \succsim R$, then $P \succsim R$.

For a given lottery $L$ and individual $i$, let $L_i$ denote the prospect that $i$ faces in $L$.

Anteriority: Given lotteries $L$ and $L'$, if for each individual $i$, $L_i$ and $L'_i$ are identically distributed (equal up to shuffling the outcomes randomly), then according to the social preorder, $L \sim L'$.

In other words, "the social preorder only depends on which prospect each individual faces", and not how their actual outcomes may be statistically dependent upon one another, ruling out concern for "ex post equality", according to which it would be better if prospects are correlated than anticorrelated or independent. For example, if A and B have equal chances of being happy or miserable, Anteriority implies it doesn't matter if they'd be happy or miserable together with equal chances (correlated), or if one would be happy if and only if the other would be miserable (anticorrelated).
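To make the correlated/anticorrelated example concrete, here's a small Python sketch (my own illustration, not from the paper) of two joint distributions with identical per-person prospects:

```python
# Two equiprobable welfare states per person: 'happy' or 'miserable'.
# Each joint distribution maps an (A, B) outcome pair to its probability.
correlated = {('happy', 'happy'): 0.5, ('miserable', 'miserable'): 0.5}
anticorrelated = {('happy', 'miserable'): 0.5, ('miserable', 'happy'): 0.5}

def marginal(joint, person):  # person 0 = A, person 1 = B
    """The prospect that one individual faces: their marginal distribution."""
    dist = {}
    for outcome, p in joint.items():
        dist[outcome[person]] = dist.get(outcome[person], 0.0) + p
    return dist

# Both individuals face the same prospect in both lotteries...
assert marginal(correlated, 0) == marginal(anticorrelated, 0)
assert marginal(correlated, 1) == marginal(anticorrelated, 1)
# ...so Anteriority says the social preorder is indifferent between them,
# even though the joint distributions themselves differ.
assert correlated != anticorrelated
```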

Let $P^I$ denote the lottery in which everyone faces prospect $P$ and a single welfare state is drawn from $P$ for everyone at once, so that $(P^I)_i = P$ for each individual $i$, and "it is certain that all individuals will have the same welfare" as each other.

Reduction to Prospects: If $P \succsim Q$ according to the individual preorder, then $P^I \succsim Q^I$ according to the social preorder.

Or, "for lotteries that guarantee perfect equality, social welfare matches individual welfare." That is, perfect equality in welfare between everyone, but not necessarily any guarantee at what welfare level, as there may still be uncertainty involved. Again, if for an individual, some prospect $P$ is at least as good as prospect $Q$, then the lottery with everyone facing $P$, $P^I$, is at least as good as the one with everyone facing $Q$, $Q^I$.

For a permutation (bijection) $\pi$ of identities and a lottery $L$, we write the permuted lottery as $\pi L$. This is just swapping people's identities. The permutation is applied uniformly, so that if $\pi(i) = j$, then in $\pi L$, individual $j$ faces the prospect that $i$ faces in $L$.

Anonymity: Given a permutation of identities $\pi$ and a lottery $L$, the social preorder is indifferent between $L$ and the permuted lottery $\pi L$: $L \sim \pi L$.

One important operation on lotteries is "probabilistic mixture". Given two lotteries $L$ and $L'$, and a probability $p$, $0 \le p \le 1$, we can define a compound lottery $pL + (1-p)L'$, which, for a binary random variable $X$ that's $1$ with probability $p$ and $0$ with probability $1-p$ (like a biased coin, and independent of the randomness in $L$ and $L'$), conditional on $X = 1$, is identical to $L$, not just identically distributed, but also resolves to a given welfare distribution if and only if the compound lottery, conditional on $X = 1$, does, too, and conditional on $X = 0$, it's identical to $L'$. Hence, $pL + (1-p)L' = L$ given $X = 1$, and $pL + (1-p)L' = L'$ given $X = 0$.

We can also do this with more than two lotteries and use summation notation, $\sum_k p_k L_k$, for it.
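To make the mixture operation concrete, here's a small Python sketch (my own, with made-up two-person lotteries) where a single shared coin selects which lottery the compound lottery is identical to:

```python
import random

# Each lottery samples a welfare distribution (a tuple of welfare values,
# one per individual). These particular lotteries are made up.
def lottery_L():
    return (1, 1) if random.random() < 0.5 else (0, 0)

def lottery_M():
    return (1, 0)

def mixture(p, L, M):
    """Compound lottery pL + (1-p)M: one biased coin, independent of the
    internal randomness of L and M, selects which lottery to resolve."""
    return L() if random.random() < p else M()

# Conditional on the coin, the result is a draw from L itself (identical,
# not merely identically distributed), else a draw from M.
sample = mixture(0.3, lottery_L, lottery_M)
print(sample)
```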

Anonymity is then strengthened:

Two-Stage Anonymity: Given two lotteries $L$ and $L'$, a rational probability $p$ (between $0$ and $1$, inclusive), and a permutation of identities $\pi$, then according to the social preorder, we have the equivalence:

$$pL + (1-p)L' \sim p(\pi L) + (1-p)L'$$

So, you can permute individuals conditionally on the binary random variable that mixes the two lotteries while maintaining equivalence. This rules out concern for "ex ante equality", according to which it would be better if people had fairer chances or equal opportunities. So, if I can benefit one of two people the same with the same initial welfare, it doesn't matter if I just choose one, or flip a coin to choose, giving each a fair chance.

Let $n$ denote the number of individuals in $I$. For a given lottery $L$, $\frac{1}{n}\sum_{i \in I} L_i$ is the prospect given by Harsanyi's veil of ignorance, where with equal probability $\frac{1}{n}$, "you" will be one of the individuals $i \in I$, and then face their prospect $L_i$.

And now we can state their first theorem:

Theorem 2.2: Given an arbitrary preorder on the set of prospects (lotteries for single individuals), if the social preorder satisfies Anteriority, Reduction to Prospects and Two-Stage Anonymity, then $L \succsim L'$ according to the social preorder if and only if

$$\frac{1}{n}\sum_{i \in I} L_i \succsim \frac{1}{n}\sum_{i \in I} L'_i$$

according to the individual preorder.


So the social preorder is just the one obtained by imagining yourself in the place of each individual with equal probability and applying the individual preorder, as in Harsanyi's veil of ignorance.
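As an illustration (my own toy example, using real-valued welfare and an expected-welfare individual preorder purely for concreteness; the theorem itself requires neither), here's the veil-of-ignorance construction in Python:

```python
from fractions import Fraction

# A lottery: probabilities over welfare distributions for individuals (A, B).
# The numbers are made up.
L1 = {(2, 0): Fraction(1, 2), (0, 2): Fraction(1, 2)}
L2 = {(1, 1): Fraction(1, 1)}

def veil_prospect(lottery, n_individuals):
    """Uniform mixture of each individual's prospect: with probability 1/n,
    'you' are individual i and face their prospect."""
    prospect = {}
    for dist, p in lottery.items():
        for w in dist:  # each individual's welfare in this distribution
            prospect[w] = prospect.get(w, Fraction(0)) + p / n_individuals
    return prospect

def expected_welfare(prospect):  # one possible individual preorder
    return sum(w * p for w, p in prospect.items())

p1 = veil_prospect(L1, 2)  # welfare 2 or 0, each with probability 1/2
p2 = veil_prospect(L2, 2)  # welfare 1 with certainty
# Under the expected-welfare individual preorder, the social preorder of
# Theorem 2.2 is indifferent between L1 and L2.
assert expected_welfare(p1) == expected_welfare(p2) == 1
```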

1. The statement uses equality notation instead of identical distributions, but equality for each individual forces the lotteries to be literally the same, $L = L'$, not just equivalent, and the definition is trivially satisfied.

Comment by michaelstjules on How do i know a charity is actually effective · 2020-07-18T04:30:52.586Z · score: 9 (3 votes) · EA · GW

It might help to read through GiveWell's research. There's a lot, but consider reading some background, and pick one charity/intervention to do a deep dive into.

For background:

For research on specific charities and interventions:

Comment by michaelstjules on Can you have an egoistic preference about your own birth? · 2020-07-17T23:07:24.374Z · score: 2 (1 votes) · EA · GW

What does it mean for it to have a preference if it's never been run/conscious? Is it functionality/potential, so that if it were run in a certain way, that preference would become conscious? In what ways are we allowed to run it for this? I'd imagine you'd want to exclude destroying or changing connections before it runs, but how do we draw lines non-arbitrarily? Do drugs, brain stimulation, dreams or hallucinations count?

It seems that we'd all have many preferences we've never been conscious of, because our brains haven't been run in the right ways to make them conscious.

I wouldn't care about the preferences that won't become conscious, so if the mind is never run, nothing will matter to them. If the mind is run, then some things might matter, but not every preference it could but won't experience.

I think there are some similarities with the ethics of abortion. I think there's no harm to a fetus if aborted before consciousness, but, conditional on becoming conscious, there are ways to harm the future person the fetus is expected to become, e.g. drinking during pregnancy.

Comment by michaelstjules on Objections to Value-Alignment between Effective Altruists · 2020-07-15T22:56:31.434Z · score: 11 (6 votes) · EA · GW

One issue with moral uncertainty is that I think it means much less for moral antirealists. As a moral antirealist myself, I still use moral uncertainty, but in reference to views I personally am attracted to (based on argument, intuition, etc.) and that I think I could endorse with further reflection, but currently have a hard time deciding between. This way I can assign little weight to views I personally don't find attractive, whereas someone who is a moral realist has to defend their intuitions (both make positive arguments for and address counterarguments) and refute intuitions they don't have (but others do), a much higher bar, or else they're just pretending their own intuitions track the moral truth while others' do not. And most likely, they'll still give undue weight to their own intuitions.

I don't know what EA's split is on moral realism/antirealism, though.

Of course, none of this says we shouldn't try to cooperate with those who hold views we disagree with.

Comment by michaelstjules on Objections to Value-Alignment between Effective Altruists · 2020-07-15T22:25:39.859Z · score: 21 (9 votes) · EA · GW
Advocates for traditional diversity metrics such as race, gender and class do so precisely because they track different ways of thinking.

I don't think that's the only reason, and I'm not sure (either way) it's the main reason. I suspect demographic homogeneity is self-reinforcing, and may limit movement growth and the pool of candidates for EA positions more specifically. So, we could just be missing out on greater contributions to EA, whether or not their ways of thinking are different.

Comment by michaelstjules on Objections to Value-Alignment between Effective Altruists · 2020-07-15T22:17:42.405Z · score: 8 (5 votes) · EA · GW
But that mechanism for belief transmission within EA, i.e. object-level persuasion, doesn't run afoul of your concerns about echochamberism, I don't think.

Getting too little exposure to opposing arguments is a problem. Most arguments are informal, so they aren't necessarily even valid, and even for the ones that are, we can still doubt their premises, because there may be other sets of premises that conflict with them but are at least as plausible. If you disproportionately hear arguments from a given community, you're more likely than otherwise to be biased towards the views of that community.

Comment by michaelstjules on Objections to Value-Alignment between Effective Altruists · 2020-07-15T21:40:57.311Z · score: 2 (1 votes) · EA · GW
Greg put it crisply in his post on epistemic humility

This link didn't work for me.

Comment by michaelstjules on Objections to Value-Alignment between Effective Altruists · 2020-07-15T21:36:28.694Z · score: 20 (13 votes) · EA · GW
I think there is however no record of the actual IQ of these people.

FWIW, I think IQ isn't what we actually care about here; it's the quality, cleverness and originality of their work and insights. A high IQ that produces nothing of value won't get much reverence, and rightfully so. People aren't usually referring to IQ when they call someone intelligent, even if IQ is a measure of intelligence that correlates with our informal usage of the word.

Comment by michaelstjules on Postponing research can sometimes be the optimal decision · 2020-07-09T18:13:35.138Z · score: 7 (4 votes) · EA · GW

Another consideration: if you expect this research technology to be developed but hadn't taken that into account when estimating your impact, you may be underestimating the likelihood that someone else would have done the same research anyway, since if the research becomes easier, others are more likely to do it. You could be overestimating your counterfactual impact.

Comment by michaelstjules on How to Measure Capacity for Welfare and Moral Status · 2020-07-09T18:06:07.514Z · score: 2 (1 votes) · EA · GW

Have you considered a (semi-)blind approach? Collect data on each of the species/taxa of interest into a table, but hide the species (except possibly humans, as the reference?) and make moral weight judgements based on that (the judges can do this without any formal or precise weighting of features if they prefer). You could also separate the people who do the research and prepare the table from those who make the judgements, to reduce the identifiability of the species/taxa from the data, although this risk won't really go away.

Comment by michaelstjules on MichaelStJules's Shortform · 2020-07-08T00:43:20.601Z · score: 2 (1 votes) · EA · GW

Here's a way to capture lexical threshold utilitarianism with a separable theory and while avoiding Pascalian fanaticism, with a negative threshold $t_- < 0$ and a positive threshold $t_+ > 0$:

$$f\Big(\sum_i u_i\Big) + \sum_i \mathbb{1}[u_i \geq t_+] - \sum_i \mathbb{1}[u_i \leq t_-]$$

• The first term is just standard utilitarianism, but squashed with a function $f$ into an interval of length at most 1.
• The second/middle sum is the number of individuals (or experiences or person-moments) with welfare at least $t_+$, which we add to the first term. Any change in number past this threshold dominates the first term.
• The third/last sum is the number of individuals with welfare at most $t_-$, which we subtract from the rest. Any change in number past this threshold dominates the first term.

Either of the second or third term can be omitted.

We could require for all , although this isn't necessary.

More thresholds could be used, as in this comment: we would apply $f$ (or another squashing function) to the whole expression above, and then add new terms like the second and/or the third, with more extreme thresholds $t_{-2} < t_-$ and $t_{+2} > t_+$, and repeat as necessary.

Comment by michaelstjules on MichaelStJules's Shortform · 2020-07-07T22:58:38.924Z · score: 3 (2 votes) · EA · GW

This nesting approach with $f$ above also allows us to "fix" maximin/leximin under conditions of uncertainty to avoid Pascalian fanaticism, given a finite discretization of welfare levels or finite number of lexical thresholds. Let the welfare levels be $w_1 < w_2 < \dots < w_n$, and define:

$$N_k = \sum_i \mathbb{1}[u_i \leq w_k]$$

i.e. $N_k$ is the number of individuals with welfare level at most $w_k$, where $u_i$ is the welfare of individual $i$, and $\mathbb{1}[u_i \leq w_k]$ is 1 if $u_i \leq w_k$ and 0 otherwise. Alternatively, we could use strict inequalities, $\mathbb{1}[u_i < w_k]$.

In situations without uncertainty, this requires us to first choose among options that minimize the number of individuals with welfare at most $w_1$, because $N_1$ takes priority over $N_k$, for all $k > 1$, and then, having done that, choose among those that minimize the number of individuals with welfare at most $w_2$, since $N_2$ takes priority over $N_k$, for all $k > 2$, and then choose among those that minimize the number of individuals with welfare at most $w_3$, and so on, until $w_n$.

This particular social welfare function assigns negative value to new existences when there are no impacts on others, which leximin/maximin need not do in general, although it typically does in practice, anyway.

This approach does not require welfare to be cardinal, i.e. adding and dividing welfare levels need not be defined. It also dodges representation theorems like this one (or the stronger one in Lemma 1 here, see the discussion here), because continuity is not satisfied (and welfare need not have any topological structure at all, let alone be real-valued). Yet, it still satisfies anonymity/symmetry/impartiality, monotonicity/Pareto, and separability/independence. Separability means that whether one outcome is better or worse than another does not depend on individuals unaffected by the choice between the two.
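As a sketch (my own, with made-up welfare levels), the count vectors and the lexicographic comparison they induce can be written in Python as:

```python
# Discrete welfare levels, worst first; the values are made up.
levels = [-2, -1, 0, 1]

def counts(welfares):
    """N_k = number of individuals with welfare at most levels[k].
    Comparing these tuples lexicographically (smaller is better, with
    N_1 first) implements the leximin-style ranking described above."""
    return tuple(sum(1 for u in welfares if u <= w) for w in levels)

A = [-2, 1, 1]    # one individual at the worst level
B = [-1, -1, -1]  # everyone at the second-worst level

# A has someone at welfare -2 and B has no one, so B is better regardless
# of how many individuals B has at -1: N_1 takes priority over N_2.
assert counts(A) > counts(B)  # a larger counts tuple is worse
```

Note that adding a new individual weakly increases every count, which is how this particular social welfare function assigns negative value to new existences, as mentioned above.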