Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests 2019-07-04T23:56:44.330Z · score: 6 (8 votes)


Comment by michaelstjules on Do Long-Lived Scientists Hold Back Their Disciplines? · 2019-08-12T20:33:18.794Z · score: 6 (3 votes) · EA · GW

Is intellectual healthspan the problem? Would increasing neuroplasticity help?

People develop biases over their lives which will affect their work. You might call some of these biases wisdom or expertise or crystallized intelligence. Researchers develop tools and intuitions that will come to serve them well, so they'll learn to rely on them. And then they start to rely on them too much. Is this a failure of neuroplasticity, or just something that happens when people work in a given field for a long time?

Comment by michaelstjules on Extreme uncertainty in wild animal welfare requires resilient model-building · 2019-08-09T02:40:04.528Z · score: 2 (2 votes) · EA · GW

What does it mean for suffering/pleasure to be measured on a linear scale, or any other scale?

What does it mean for one pain to be twice as intense as another? Maybe, from a baseline neutral state, you would be indifferent between experiencing the less intense pain for twice as long as the more intense one?
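As a sketch only, the duration-tradeoff idea above could be operationalized like this (the function name and numbers are illustrative assumptions, not anything from the linked post):

```python
def intensity_ratio(duration_more_intense, duration_less_intense):
    """If, from a neutral baseline, you are indifferent between the more
    intense pain for duration_more_intense and the less intense pain for
    duration_less_intense, treat intensity as inversely proportional to
    duration at the point of indifference (illustrative assumption)."""
    return duration_less_intense / duration_more_intense

# Indifferent between 1 hour of pain A and 2 hours of pain B:
# A is, by this stipulation, twice as intense as B.
assert intensity_ratio(1, 2) == 2
```

Whether such elicited indifference points track anything real about experience is exactly the worry raised below about trusting preferences under a hedonistic view.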

And, if we're using a hedonistic view of utility rather than a preference-based one, we're already skeptical of preferences, so can we justify trusting our preferences in these hypotheticals?

More on the issue here:

Comment by michaelstjules on Boundaries of Empathy and Their Consequences · 2019-08-02T20:29:22.185Z · score: 1 (1 votes) · EA · GW

I agree, but I think it goes a bit further: if preference satisfaction and subjective wellbeing (including suffering and happiness/pleasure) don't matter in themselves for a particular nonhuman animal with the capacity for either, how can they matter in themselves for anyone at all, including any human? I think a theory that does not promote preference satisfaction or subjective wellbeing as an end in itself for the individual is far too implausible.

I suppose this is a statement of a special case of the equal consideration of equal interests.

Comment by michaelstjules on Four practices where EAs ought to course-correct · 2019-08-02T13:37:12.839Z · score: 8 (3 votes) · EA · GW
However, in the linked post I took the numbers displayed by ACE in 2019 and scaled them back a few times to be conservative, so it would be tough to argue that they are over-optimistic. I also used conservative cost-effectiveness estimates for climate change charities to offset the climate impacts, and toyed with using climate change charities to offset animal suffering via the fungible welfare estimates (I didn't post that part, but it's easy to replicate).

With a skeptical prior, multiplying by factors like this might not be enough. A charity could be hundreds of times (or literally any number of times) less cost-effective than its EV computed without such a prior would suggest, if the evidence is weak; and if there are negative effects with more robust evidence than the positive ones, these might come to dominate and turn your positive EV negative. From "Why we can’t take expected value estimates literally (even when they’re unbiased)":

I have seen some using the EEV framework who can tell that their estimates seem too optimistic, so they make various “downward adjustments,” multiplying their EEV by apparently ad hoc figures (1%, 10%, 20%). What isn’t clear is whether the size of the adjustment they’re making has the correct relationship to (a) the weakness of the estimate itself (b) the strength of the prior (c) distance of the estimate from the prior. An example of how this approach can go astray can be seen in the “Pascal’s Mugging” analysis above: assigning one’s framework a 99.99% chance of being totally wrong may seem to be amply conservative, but in fact the proper Bayesian adjustment is much larger and leads to a completely different conclusion.

On the other hand, the more direct effects of abstaining from specific animal products rely largely on estimates of elasticities, which are much more robust.

Comment by michaelstjules on Four practices where EAs ought to course-correct · 2019-08-02T07:10:38.543Z · score: 5 (3 votes) · EA · GW

Is veganism a foot in the door towards effective animal advocacy (EAA) and donation to EAA charities? Maybe it's an easier sell than getting people to donate while remaining omnivores, because it's easier to rationalize indifference to farmed animals if you're still eating them.

Maybe veganism is also closer to a small daily and often public protest than turning off the lights, and as such is more likely to lead to further action later than be used as an excuse to accomplish less overall.

Of course, this doesn't mean we should push for EAs to go vegan. However, if we want the support (e.g. donations) of the wider animal protection movement, it might be better to respect their norms and go veg, especially or only if you work at an EA or EAA org or are fairly prominent in the movement. (And, the norm itself against unnecessary harm is probably actually valuable to promote in the long-term.)

Finally, in trying to promote donating to animal charities face-to-face, will people take you more or less seriously if you aren't yourself vegan? I can see arguments each way. If you're not vegan, then this might reduce their fear of becoming or being perceived as a hypocrite if they donate to animal charities but aren't vegan, so they could be more likely to donate. On the other hand, they might see you as a hypocrite, and feel that if you don't take your views seriously enough to abstain from animal products, then they don't have to take your views seriously either.

Comment by michaelstjules on Boundaries of Empathy and Their Consequences · 2019-08-01T18:23:33.294Z · score: 2 (2 votes) · EA · GW

I think if you decide what we should promote in a human for its own sake (and there could be multiple such values), then you'd need to explain why it isn't worth promoting in nonhumans. For example, if preference satisfaction matters in itself for a human, then why does the presence or absence of a given property in another animal imply that it does not matter for that animal? For example, why would the absence of personhood, however you want to define it, mean the preferences of an animal don't matter, if they still have preferences? In what way is personhood relevant and nonarbitrary where say skin colour is not? Like "preferences matter, but only if X". The "but only if X" needs to be justified, or else it's arbitrary, and anyone can put anything there.

I see personhood as binary in whether you have it, but graded in the qualities that underlie it: you can be a person or not, and if you are one, you may have the qualities that define personhood to a greater or lesser degree.

If you're interested in some more reading defending the case for the consideration of the interests of animals along similar lines, here are a few papers:

Comment by michaelstjules on Four practices where EAs ought to course-correct · 2019-08-01T04:10:31.424Z · score: 2 (2 votes) · EA · GW

(I'm not disagreeing with your overall point about the emphasis on the vegan diet)

You can of course supplement, but at the cost of extra time and money - and that's assuming that you remember to supplement. For some people who are simply bad at keeping habits - me, at least - supplementing for an important nutrient just isn't a reliable option; I can set my mind to do it but I predictably fail to keep up with it.

One way to make this easier could be to keep your supplements next to your toothbrush, and take them around the first time you brush your teeth in a day.

I actually have most of my supplements (capsules/pills) on my desk in front of or next to my laptop. I also keep my toothbrush and toothpaste next to my desk in my room.

I would usually put creatine powder in my breakfast, but I've been eating breakfast at work more often lately, so I haven't been consistent. Switching to capsules/pills would probably be a good idea.

I think you could keep your supplements under $2 a day. Some of these supplements you might want to take anyway, veg or not. So I don't think you'd necessarily be spending more on a vegan diet than an omnivorous one, if you're very concerned with cost, since plant proteins and fats are often cheaper than animal products. If you're not that concerned with cost in the first place, then you don't need to be that concerned with the cost of supplements.

There's a lot that we don't understand, including chemicals that may play a valuable health role but haven't been properly identified as such. Therefore, in the absence of clear guidance it's wise to defer to eating (a) a wide variety of foods, which is enhanced by including animal products, and (b) foods that we evolved to eat, which has usually included at least a small amount of meat.

You could also be bivalvegan/ostrovegan, and you don't need to eat bivalves every day; just use them to fill in any missing unknowns in your diet, so the daily cost can be reduced even if they aren't cheap near you. Bivalves also tend to have relatively low mercury concentrations among sea animals, and some are good sources of iron or omega-3.

Here's a potentially useful meta-analysis of studies on food groups and all-cause mortality, but the weaknesses you've already pointed out still apply, of course. See Table 1, especially, and, of course, the discussions of the limitations and strength of the evidence. They also looked at processed meats separately, but I don't think they looked at unprocessed meats separately.

Another issue with applying this meta-analysis to compare vegan and nonvegan diets, though, is that the average diet with 0 servings of beef probably has chicken in it, and possibly more than the average diet with some beef in it. Or maybe they adjusted for these kinds of effects; I haven't looked at the methodology that closely.

unhealthy foods such as store-bought bread (with so many preservatives, flavorings etc)

Do you think it's better to not eat any store-bought whole grain bread at all? I think there's a lot of research to support their benefits. See also the meta-analysis I already mentioned; even a few servings of refined grains per day were associated with reduced mortality. (Of course, you need to ask what people were eating less of when they ate more refined grains.)

How bad are preservatives and flavourings?

Comment by michaelstjules on Four practices where EAs ought to course-correct · 2019-08-01T03:00:59.629Z · score: 1 (1 votes) · EA · GW

On being ruthless, do you think we should focus on framing EA as a moral obligation instead of a mere opportunity? What about using a little shaming, like this? I think the existence of the Giving Pledge with its prominent members, and the fact that most people aren't rich (although people in the developed world are in relative terms) could prevent this light shaming from backfiring too much.

Comment by michaelstjules on Boundaries of Empathy and Their Consequences · 2019-07-31T19:39:39.543Z · score: 2 (2 votes) · EA · GW

I think the best explanation for the moral significance of humans is consciousness. Conscious individuals (and those who have been and can again be conscious) matter because what happens to them matters to them. They have preferences and positive and negative experiences.

On the other hand, (1) something that is intelligent (or has any other property) but could never be conscious doesn't matter in itself, while (2) a human who is conscious but not intelligent (or any other property) would still matter in themself. I think most would agree with (2) here (but probably not (1)), and we can use it to defend the moral significance of nonhuman animals, because the category "human" is not in itself morally relevant.

Are you familiar with the argument from species overlap?

Comment by michaelstjules on Boundaries of Empathy and Their Consequences · 2019-07-30T05:22:47.749Z · score: 4 (4 votes) · EA · GW

Unless the post has been edited, I don't see this as necessarily question begging, although I can also see why you might think that. My reading is that the claim is assumed to be true, and the post is about how to best convince people of it (or to become more empathetic) in practice, which need not be through a logical argument. It's not about proving the claim.

It could be that making it easier for people to avoid animal products is a way to convince them (or the next generation) of the claim. Another way might be getting them to interact with or learn more about animals and their personalities.

Comment by michaelstjules on Defining Effective Altruism · 2019-07-24T15:24:08.566Z · score: 2 (2 votes) · EA · GW

I think it would be a good idea to be more explicit that other considerations besides those from (i) can inform how we do (ii), since otherwise we're committed to consequentialism.

Also, I'm being a bit pedantic, but if the maximizing course(s) of action are ruled out for nonconsequentialist reasons, then since (i) only cares about maximization, we won't necessarily have information ranking the options that aren't (near) maximal (we might devote most of our research to decisions we suspect might be maximal, to the neglect of others), so (ii) won't necessarily be informed by the value of outcomes.

Comment by michaelstjules on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-16T05:29:21.228Z · score: 1 (1 votes) · EA · GW

I won't say I'm convinced by my own responses here, but I'll offer them anyway.

I think B could reasonably claim that Lottery 1 is less fair to them than Lottery 2, while A could not claim that Lottery 2 is less fair to them than Lottery 1 (it benefits them less in expectation, but this is not a matter of fairness). This seems a bit clearer with the understanding that von Neumann-Morgenstern rational agents maximize expected (ex ante) utility, so an individual's ex ante utility could matter to that individual in itself, and an ex ante view respects this. (And I think the claim that ex post prioritarianism is Pareto-suboptimal may only be meaningful in the context of vNM-rational agents; the universe doesn't give us a way to make tradeoffs between happiness and suffering (or other values) except through individual preferences. If we're hedonistic consequentialists, then we can't refer to preferences or the veil of ignorance to justify classical utilitarianism over hedonistic prioritarianism.)

Furthermore, if you would imagine repeating the same lottery with the same individuals and independent probabilities over and over, you'd find in the long run, either in Lottery 1, A would benefit by 100 on average and B would benefit by 0 on average, or with Lottery 2, A would benefit by 20 on average and B would benefit by 20 on average. On these grounds, a prioritarian could reasonably prefer Lottery 2 to Lottery 1. Of course, an ex post prioritarian would come to the same conclusion if they're allowed to consider the whole sequence of independent lotteries and aggregate each individual's own utilities within each individual before aggregating over individuals.

(On the other hand, if you repeat Lottery 1, but swap the positions of A and B each time, then Lottery 1 benefits A by 50 on average and B by 50 on average, and this is better than Lottery 2. The utilitarian, ex ante prioritarian and ex post prioritarian would all agree.)

A similar problem is illustrated in "Decide As You Would With Full Information! An Argument Against Ex Ante Pareto" by Marc Fleurbaey & Alex Voorhoeve (I read parts of this after I wrote the post). You can check Table 1 on p.6 and the surrounding discussion. I'm changing the numbers here. EDIT: I suppose the examples can be used to illustrate the same thing (except the utilitarian preference for Lottery 1): Ex post you prefer Lottery 1 and would realize you'd made a mistake, and if you find out ahead of time exactly which outcome Lottery 2 would have given, you'd also prefer Lottery 1 and want to change your mind.

Suppose there are two diseases, SEVERE and MILD. An individual with SEVERE will have utility 10, while an individual with MILD will have utility 100. If SEVERE is treated, it will instead have utility 20, a gain of 10. If MILD is treated, it will instead have utility 120, a gain of 20.
Now, suppose there are two individuals, A and B. One will have SEVERE, and the other will have MILD. You can treat either SEVERE or MILD, but not both. Which should you treat?
1. If you know who will have SEVERE with certainty, then with a sufficiently prioritarian view, you should treat SEVERE. To see why, suppose you know A has SEVERE. Then, by treating SEVERE, the utilities would be (20, 100) for A and B, respectively, but by treating MILD, they would be (10, 120). (20, 100) is better than (10, 120) if you're sufficiently prioritarian. Symmetrically, if you know B has SEVERE, you get (100, 20) for treating SEVERE or (120, 10) for treating MILD, and again it's better to treat SEVERE.
2. If you think each will have SEVERE or MILD with probability 0.5 each (and one will have SEVERE and the other, MILD), then you should treat MILD. This is because the expected utility if you treat MILD is (10+120)*0.5 = 65 for each individual, while the expected utility if you treat SEVERE is (20+100)*0.5 = 60 for each individual. Treating MILD is ex ante better than treating SEVERE for each of A and B. If neither of them knows who has which, they'd both want you to treat MILD.
What's the difference from your point of view between 1 and 2? Extra information in 1. In 1, whether you find out that A will have SEVERE or B will have SEVERE, it's better to treat SEVERE. So, no matter which you learn is the case in reality, it's better to treat SEVERE. But if you don't know, it's better to treat MILD.
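The arithmetic in cases 1 and 2 above can be checked with a short sketch (the square-root transform is just one illustrative choice of a "sufficiently prioritarian" weighting, not anything from the paper):

```python
from math import sqrt

# Utilities: SEVERE is 10 untreated, 20 treated; MILD is 100 untreated, 120 treated.

# Case 2: each of A and B has SEVERE with probability 0.5.
# Ex ante expected utility per person:
eu_treat_mild = 0.5 * 10 + 0.5 * 120    # 65
eu_treat_severe = 0.5 * 20 + 0.5 * 100  # 60
assert eu_treat_mild > eu_treat_severe  # ex ante, treating MILD is better for each

# Case 1: identities known, say A has SEVERE. With a sufficiently concave
# prioritarian transform f (sqrt is an illustrative choice),
# (20, 100) beats (10, 120):
f = sqrt
value_treat_severe = f(20) + f(100)
value_treat_mild = f(10) + f(120)
assert value_treat_severe > value_treat_mild
```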

So, in your ignorance, you would treat MILD, but if you found out who had SEVERE and who had MILD, no matter which way it goes, you'd realize you had made a mistake. You also know that seeking out this information of who has which ahead of time, no matter which way it goes, will cause you to change your mind about which disease to treat. EDIT: I suppose both of these statements are true of your example. Ex post you prefer Lottery 1 and would realize you'd made a mistake, and if you find out ahead of time exactly which outcome Lottery 2 would have given, you'd also prefer Lottery 1.

Comment by michaelstjules on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-09T05:14:02.993Z · score: 2 (2 votes) · EA · GW

I agree that this feels too harsh. My first reaction to the extreme numbers would be to claim that expected values are actually not the right way to deal with uncertainty (without offering a better alternative). I think you could use a probability of 0.1 for an amazing life (even infinitely good), and I would arrive at the same conclusion: giving them little weight is too harsh. Because this remains true in my view no matter how great the value of the amazing life, I do think this is still a problem for expected values, or at least expected values applied directly to affective wellbeing.

I also do lean towards a preference-based account of wellbeing, which allows individuals to be risk-averse. Some people are just not that risk-averse, and (if something like closed individualism were true and their preferences never changed), giving greater weight to worse states is basically asserting that they are mistaken for not being more risk-averse. However, I also suspect most people wouldn't value anything at values 3^^^^3 (or -3^^^^3, for that matter) if they were vNM-rational, and most of them are probably risk-averse to some degree.

Maybe ex ante prioritarianism makes more sense with a preference-based account of wellbeing?

Also, FWIW, it's possible to blend ex ante and ex post views. An individual's actual utility (treated as a random variable) and their expected utility could be combined in some way (a weighted average, the minimum of the two, etc.) before aggregating across individuals and taking the expected value. This seems very ad hoc, though.
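As a sketch of what one such ad hoc blend might look like (the function names, the square-root transform and the weighting are all illustrative assumptions), with alpha = 1 recovering the ex post view and alpha = 0 the ex ante view:

```python
from math import sqrt

f = sqrt  # illustrative concave prioritarian transform (assumption)

def blended_value(outcomes, alpha=0.5):
    """outcomes: list of (probability, [utility for each individual]).
    Blend each individual's realized utility with their ex ante expected
    utility, apply f, aggregate across individuals, then take the
    expectation over outcomes."""
    n = len(outcomes[0][1])
    # each individual's ex ante expected utility
    eu = [sum(p * us[i] for p, us in outcomes) for i in range(n)]
    return sum(
        p * sum(f(alpha * us[i] + (1 - alpha) * eu[i]) for i in range(n))
        for p, us in outcomes
    )

# A fair coin gives one of two individuals 100 and the other 0.
lottery = [(0.5, [100, 0]), (0.5, [0, 100])]
# The ex ante view (alpha=0) rates this higher than the ex post view (alpha=1),
# since each individual's expected utility is an even 50.
assert blended_value(lottery, alpha=0) > blended_value(lottery, alpha=1)
```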

Comment by michaelstjules on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-08T22:37:22.903Z · score: 2 (2 votes) · EA · GW

Tbh, I find this fairly intuitive (under the assumption that something like closed individualism is true and cryonics would preserve identity). You can think about it like decreasing marginal value of expected utility (analogous to the decreasing marginal value of income/wealth): people whose lives have higher EU are given (slightly) less weight.

If they do eventually get revived, and we had spent significant resources on them, this could mean we prioritized the wrong people. We could be wrong either way.

Comment by michaelstjules on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-08T04:13:30.742Z · score: 2 (2 votes) · EA · GW

Sorry, that was unclear: I meant the subjective probabilities of the person applying the ethical system ("you"), used for everyone, not each individual's own subjective probabilities.

Allowing each individual to use their own subjective probabilities would be interesting, and have problems like you point out. It could respect individual autonomy further, especially for von Neumann-Morgenstern rational agents with vNM utility as our measure of wellbeing; we would rank choices for them (ignoring other individuals) exactly as they would rank these choices for themselves. However, I'm doubtful that this would make up for the issues, like the one you point out. Furthermore, many individuals don't have subjective probabilities about most things that would be important for ethical deliberation in practical cases, including, I suspect, most people and all nonhuman animals.

Another problematic example would be healthcare professionals (policy makers, doctors, etc.) using the subjective probabilities of patients instead of subjective probabilities informed by actual research (or even their own experience as professionals).

Comment by michaelstjules on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-06T17:55:50.829Z · score: 2 (2 votes) · EA · GW
One issue is how you decide whether a given person exists in a given history or not. For example, if I had been born with a different hair color, would I be the same person? Maybe. How about a different personality? At what point do "I" stop existing and someone else starts existing? I guess similar issues bedevil the question of whether a person stays the same person over time, though there we can also use spatiotemporal continuity to help maintain personal identity.

Here are some interesting examples I thought of. If I rearranged someone's brain cells (and maybe atoms) to basically make a (possibly) completely different brain structure with (possibly) completely different memories and personality, should we consider these different individuals? Consider the following cases:

1. What if all brain function stops, I rearrange their brain, and then brain function starts again?

2. What if all brain function stops, I rearrange their brain to have the same structure (and memories and personality), but with each atom/cell in a completely different area from where it started, and then brain function starts again?

3. What if all brain function stops, the cells and atoms move or change as they naturally would without my intervention, and then brain function starts again?

To me, 1 clearly brings about a completely different individual, and unless we're willing to say that two physically separate people with the same brain structure, memories and personality are actually one individual, I think 2 should also bring about a completely different individual. 3 differs from 1 and 2 only by degree of change, so I think it should bring about a completely different individual, too.

What this tells me is that if we're going to use some kind of continuity to track identity at all, it should also include continuity of conscious experiences. Then we have to ask:

Are there frequent (e.g. daily) discontinuities or breaks in a person's conscious experiences?

Whether there are or not, should our theory of identity even depend on this fact? If it happened to be the case that sleep involved such discontinuities/breaks and people woke up as completely different individuals, would our theory of identity be satisfactory?

Maybe a way around this is to claim that there are continuous degrees of identification between a person at different moments in their life, e.g. me now and me in a week are only 99% the same individual. I'm not sure how we could do ethical calculus with this, though.
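Purely as an illustration of one ad hoc way such a calculus might go (the discount rate and function name are arbitrary assumptions), one could weight each future person-stage's utility by its degree of identification with the present self:

```python
def identity_weighted_welfare(weekly_utilities, weekly_identity=0.99):
    """Weight utility t weeks from now by weekly_identity ** t, the
    cumulative degree of identification with the present self
    (an arbitrary illustrative assumption)."""
    return sum(u * weekly_identity ** t
               for t, u in enumerate(weekly_utilities))

# Ten weeks of constant utility 1 count for slightly less than 10
# from the present self's point of view:
total = identity_weighted_welfare([1] * 10)
assert 9.5 < total < 10
```

Whether discounting others' welfare by degree of identification is defensible at all is, of course, the open question.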

Comment by michaelstjules on Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests · 2019-07-06T17:10:14.179Z · score: 2 (2 votes) · EA · GW
If I understand the view correctly, it would say that a world where everyone has a 49.99% chance of experiencing pain with utility of -10^1000 and a 50.01% chance of experiencing pleasure with utility of 10^1000 is fine, but as soon as anyone's probability of the pain goes above 50%, things start to become very worrisome (assuming the prioritarian weighting function cares a lot more about negative than positive values)?

Yes, although it's possible that even a single individual having a 100% probability of pain might not outweigh the pleasure of the others, if the number of other individuals is large enough and the social welfare function is sufficiently continuous and "additive", e.g. it takes the form ∑ᵢ f(uᵢ) for some f that is strictly increasing everywhere.
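For instance (a minimal sketch; the weight k and the numbers are illustrative assumptions, not from the post):

```python
def f(u, k=100.0):
    # strictly increasing everywhere, but weighting negative wellbeing
    # k times more heavily (a negative-leaning prioritarian transform)
    return u if u >= 0 else k * u

def social_welfare(expected_utils):
    # "additive" form: sum of f over each individual's (expected) utility
    return sum(f(u) for u in expected_utils)

# One individual certain to suffer (EU -1) alongside n others with EU +1:
assert social_welfare([-1] + [1] * 1000) > 0  # enough others outweigh the one
assert social_welfare([-1] + [1] * 50) < 0    # too few do not
```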

What probability distribution are the expectations taken with respect to? If you were God and knew everything that would happen, there would be no uncertainty (except maybe due to quantum randomness depending on one's view about that). If there's no randomness, I think ex ante prioritarianism collapses to regular prioritarianism.

I intended for your own subjective probability distribution to be used, but what you say here leads to some more weird examples (besides collapsing to regular prioritarianism, possibly aggregating each individual's actual utilities first before aggregating across individuals):

I've played a board game where the player who gets to go first is the one who has the pointiest ears. The value of this outcome would be different if you knew ahead of time who this would be compared to if you didn't. In particular, if there were a morally significant tradeoff between utilities, then this rule could be better or worse than a more (subjectively) random choice, depending on whether the worse off players are expected to benefit more or less. Of course, for utilitarians too, a random selection could be better or worse than one whose actual outcome you know in advance, but there are some differences.

For ex ante prioritarianism, this is also the case before and after you realize the outcome of the rolls of dice or coin flips; once you know the outcome of the random selection, it's no longer random, and the value of following through with it changes. In particular, if each person had the same wellbeing before the dice were rolled and stood to gain or lose the same amount if they won (regardless of the selection process), then random selection would be optimal and better than any fixed selection whose outcome you know in advance. But once you know the outcome of the random selection process, before you apply it, it reduces to using a particular rule whose outcome you know in advance.

One issue is how you decide whether a given person exists in a given history or not. For example, if I had been born with a different hair color, would I be the same person? Maybe. How about a different personality? At what point do "I" stop existing and someone else starts existing? I guess similar issues bedevil the question of whether a person stays the same person over time, though there we can also use spatiotemporal continuity to help maintain personal identity.

Yes, I think it's basically the same issue. If we can use something like spatiotemporal continuity (I am doubtful that this can be made precise and coherent enough in a way that's very plausible), then we could start before a person is even conceived. Right before conception, the sperm cells and ova could be used to determine the identities of the potential future people. Before the sperm cell used in conception even exists, you could imagine two sperm cells with different physical (spatiotemporal) origins in different outcomes that happen to carry the same genetic information, and you might consider the outcomes in which one is used to have a different person than the outcomes in which the other is. Of course, you might have to divide up these two groups of outcomes further still. For example, you wouldn't want to treat identical twins as a single individual, even if they originated from some common group of cells.

Comment by michaelstjules on How Much Do Wild Animals Suffer? A Foundational Result on the Question is Wrong. · 2019-07-01T23:27:14.335Z · score: 2 (4 votes) · EA · GW

I think this is a good place to start, although not written by Brian:

There’s ongoing sickening cruelty: violent child pornography, chickens are boiled alive, and so on. We should help these victims and prevent such suffering, rather than focus on ensuring that many individuals come into existence in the future. When spending resources on increasing the number of beings instead of preventing extreme suffering, one is essentially saying to the victims: “I could have helped you, but I didn’t, because I think it’s more important that individuals are brought into existence. Sorry.” (See this essay for a longer case for suffering-focused ethics.)
Comment by michaelstjules on Announcing the launch of the Happier Lives Institute · 2019-07-01T23:13:53.504Z · score: 1 (1 votes) · EA · GW
What's more, stipulating that preferences can/must be laundered is also borderline inconsistent with subjectivism: if you tell me that some of my preferences don't count towards my well-being because they are 'irrational', you don't seem to be respecting the view that my well-being consists in whatever I say it does.

I don't think this need be the case, since we can have preferences that are mutually exclusive in their satisfaction, and having such preferences means we can't be maximally satisfied. So, if the mathematician's preference upon reflection is to not count blades of grass (and do something else) but they have the urge to do so, at least one of these two preferences will go unsatisfied, which detracts from their wellbeing.

However, this on its own wouldn't tell us the mathematician is better off not counting blades of grass, and if we did always prioritize rational preferences over irrational ones, or preferences about preferences over the preferences to which they refer, then it would be as if the irrational/lower preferences count for nothing, as you suggest.

On the experience machine, this only helps preference satisfactionists, not life satisfactionists: I could plug you into the experience machine such that you judged yourself to be maximally satisfied with your life. If well-being just consists in judging that one's life is going well, it doesn't matter how you come to that judgement.

I agree, although it also doesn't help preference satisfactionists who only count preference satisfaction/frustration when it's experienced consciously, and it might also not help them if we're allowed to change your preferences, since having easier preferences to satisfy might outweigh the preference frustration that would result from having your old preferences replaced by and ignored for the new preferences.

I think the involuntary experience machine and wireheading are problems for all the consequentialist theories with which I'm familiar (at least under the assumption of something like closed individualism, which I actually find to be unlikely).

Comment by michaelstjules on Announcing the launch of the Happier Lives Institute · 2019-06-21T16:22:40.894Z · score: 7 (3 votes) · EA · GW
Consider John Rawls' grass-counter case: imagine a brilliant Harvard mathematician, fully informed about the options available to her, who develops an overriding desire to count the blades of grass on the lawns. Suppose this person then does spend their time counting blades of grass and is miserable while doing so. On the subjectivist view, this person's life is going well for them. I think this person's life is going poorly for them because they are unhappy.

I think the example might seem absurd because we can't imagine finding satisfaction in counting blades of grass; it seems like a meaningless pursuit. But is it any more meaningful in any objective sense than doing mathematics (in isolation, assuming no one else would ever benefit)? The objectivist might say that this is exactly the point, but the subjectivist could just respond that it doesn't matter as long as the individual is (more) satisfied.

Furthermore, I think life satisfaction and preference satisfaction are slightly different. If we're talking about life satisfaction rather than preference satisfaction, it's not an overriding desire (which sounds like addiction), but, upon reflection, (greater) satisfaction with the choices they make and their preferences for those choices. If we are talking about preference satisfaction, people can also have preferences over their preferences. A drug addict might be compelled to use drugs, but prefer not to be. In this case, does the mathematician prefer to have different preferences? If they don't, then the example might not be so counterintuitive after all. If they do, then the subjectivist can object in a way that's compatible with their subjectivist intuitions.

Also, a standard objection to hedonistic (or more broadly experiential) views is wireheading or the experience machine, of which I'm sure you're aware, but I'd like to point them out to everyone else here. People don't want to sacrifice the pursuits they find meaningful to be put into an artificial state of continuous pleasure, and they certainly don't want that choice to be made for them. Of course, you could wirehead people or put them in experience machines that make their preferences satisfied (by changing these preferences or simulating things that satisfy their preferences), and people will also object to that.

Comment by michaelstjules on Invertebrate Sentience Table · 2019-06-18T00:06:00.743Z · score: 12 (7 votes) · EA · GW

There's some criticism here:

Is the report by Cammaerts and Cammaerts (2015) positive evidence of self-recognition in ants? Our answer is an emphatic no. Too many crucial methodological details are not given. No formal period between marking the subjects and then exposing them to the mirror was included; the reader is simply asked to accept that no self-cleaning movements occurred before marked ants first saw themselves in the mirror and that marked ants without any mirror did not do so. There is no clear mention of how these data were collected. Were the ants recorded on video? Were they observed directly? In other studies of ant behavior some means of magnification are used, but Cammaerts and Cammaerts provide no information about this, and it is not even clear if any attempt to assess inter-observer reliability was made.
It also remains a possibility that responses to the mirror on the mark test were confounded by chemical cues from the ant’s antennae and chemoreceptors on the mandibles. For instance, if the blue dye was chemically different from the brown dye, chemoreception could explain why ants marked with blue dye were more likely to be attacked by other ants. It is also important to note that the ants must have sensed that they had the marks on themselves through these and other olfactory channels prior to being exposed to the mirror, which would invalidate the mark test.
Notwithstanding the absence of evidence for vision-based individual facial recognition in ants, it would be astonishing if such poorly sighted, small-brained insects − especially those without any mirror experience − could immediately use their reflection to try to remove a freshly applied foreign mark that was only visible in the mirror.
Comment by michaelstjules on Invertebrate Sentience Table · 2019-06-15T22:20:20.083Z · score: 5 (4 votes) · EA · GW

Some ideas for the presentation of the table to make it more digestible:

1. Is the table downloadable? Can it be made downloadable?

2. Can the table cell/font sizes and table height be made adjustable? It would be nice to be able to fit more of it (ideally all of it) on my screen at once. Just zooming out in my browser doesn't work, since the table shrinks, too, and the same cells are displayed.

3. What about description boxes that pop up when you click on (or hover over) a cell (description/motivation of the feature itself, a box with the footnotes/text/sources when you click on the given cell)? Could also stick to informal recognizable names (cows, ants) where possible and put the taxon in a popup to save on space.

4. Different colour cells for "Likely No", "Lean No", "Unknown", "Lean Yes", "Likely Yes" (e.g. red, pink, grey, light green, green).

Comment by michaelstjules on Invertebrate Sentience Table · 2019-06-15T21:58:18.601Z · score: 2 (2 votes) · EA · GW

Was the mirror test experiment with ants missed or was it intentionally excluded? If the latter, why? It seems the journal it was published in is not very reputable, and the results have not been replicated independently.

Comment by michaelstjules on Invertebrate Sentience Table · 2019-06-15T21:04:27.848Z · score: 4 (4 votes) · EA · GW

What are the plans for maintaining/expanding this database? Would you consider making a wiki or open source version and allowing contribution from others (possibly through some formal approval process)?

I imagine it could be a useful resource not just for guiding our beliefs about the consciousness of invertebrates, but also the consciousness of other forms of life (and AI in the future).

One suggestion: I think it could be useful to have a column for the age at which each feature is first observable in humans on average (or include these in the entries for humans, as applicable).

Comment by michaelstjules on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-27T21:38:05.877Z · score: 1 (1 votes) · EA · GW

tl;dr: even using priors, with more options and hazier probabilities, you tend to have more options whose estimates are overly sensitive to supporting information (or just optimistically biased due to your priors), and these options look disproportionately good. This is still an optimizer's curse in practice.

This is an issue of the models and priors. If your models and priors are not right... then you should update over your priors and use better models. Of course they can still be wrong... but that's true of all beliefs, all reasoning, etc.
If you assume from the outside (unbeknownst to the agent) that they are all fair, then you're not showing a problem with the agent's reasoning, you're just using relevant information which they lack.

In practice, your models and priors will almost always be wrong, because you lack information; there's some truth of the matter of which you aren't aware. It's unrealistic to expect us to have good guesses for the priors in all cases, especially with little information or precedent as in hazy probabilities, a major point of the OP.

You'd hope that more information would tend to allow you to make better predictions and bring you closer to the truth, but when optimizing, even with correctly specified likelihoods and after updating over priors as you said should be done, the predictions for the selected coin can be more biased in expectation with more information (results of coin flips). On the other hand, the predictions for any fixed coin will not be any more biased in expectation over the new information, and if the prior's EV hadn't matched the true mean, the predictions would tend to be less biased.

More information (flips) per option (coin) would reduce the bias of the selection on average, but, as I showed, more options (coins) would increase it, too, because you get more chances to be unusually lucky.
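That last point can be illustrated with a small simulation (Python standard library only; the coin counts, flip counts, and trial counts are arbitrary numbers chosen for illustration): every coin is actually fair, but an agent with a uniform prior over each coin's heads rate selects the coin with the highest posterior mean.

```python
import random

def selection_bias(num_coins, flips_per_coin, trials=1000, seed=0):
    """Average gap between the posterior EV of the selected coin and its
    true heads rate (0.5), when every coin is actually fair. Each coin
    gets a uniform prior on its heads rate, so its posterior mean after
    h heads in n flips is (h + 1) / (n + 2), and we select the maximum."""
    rng = random.Random(seed)
    total_gap = 0.0
    for _ in range(trials):
        best_ev = max(
            (sum(rng.random() < 0.5 for _ in range(flips_per_coin)) + 1)
            / (flips_per_coin + 2)
            for _ in range(num_coins)
        )
        total_gap += best_ev - 0.5
    return total_gap / trials

# The overestimate grows with the number of options under consideration:
for num_coins in (1, 10, 100):
    print(num_coins, round(selection_bias(num_coins, flips_per_coin=20), 3))
```

With a single coin there is no selection, so the gap is roughly zero; with 10 or 100 coins the selected coin's posterior EV overshoots the truth by an increasing amount, exactly because more options mean more chances for one to be unusually lucky.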

My prior would not be uniform, it would be 0.5! What else could "unbiased coins" mean?

The intent here again is that you don't know the coins are fair.

Bayesian EV estimation doesn't do hypothesis testing with p-value cutoffs. This is the same problem popping up in a different framework, yes it will require a different solution in that context, but they are separate.

Fair enough.

The proposed solution applies here too, just do (simplistic, informal) posterior EV correction for your (simplistic, informal) estimates.

How would you do this in practice? Specifically, how would you get an idea of the magnitude for the correction you should make?

Maybe you could test your own (or your group's) prediction calibration and bias, but it's not clear how exactly you should incorporate this information, and it's likely these tests won't be very representative when you're considering the kinds of problems with hazy probabilities mentioned in the OP.

Comment by michaelstjules on Why does EA use QALYs instead of experience sampling? · 2019-04-24T03:51:16.229Z · score: 3 (2 votes) · EA · GW

I suspect experience sampling is much more costly and time-consuming to get data on than alternatives, and there's probably much less data. Life satisfaction or other simple survey questions about subjective wellbeing might be good enough proxies, and there's already a lot of available data out there.

Here's a pretty comprehensive post on using subjective wellbeing:

A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare by Michael Plant

Another good place to read more about this is

Comment by michaelstjules on Reasons to eat meat · 2019-04-24T03:38:01.392Z · score: 10 (5 votes) · EA · GW

Deliberately offsetting a harm through a "similar" opposite benefit means deliberately restricting that donation to a charity from a restricted subset of possible charities, and it may be less effective than the ones you've ruled out.

Offsetting could also justify murder, because there are life-saving charities.

Also related:

Comment by michaelstjules on Reasons to eat meat · 2019-04-24T03:26:31.450Z · score: 11 (8 votes) · EA · GW

I know the post is satirical, but I think it's worth pointing out that ego depletion, the idea that self-control or willpower draws upon a limited pool of mental resources that can be used up, is on shaky ground, i.e. the effect was not replicated in a few meta-analyses, although an older meta-analysis did replicate it.

Comment by michaelstjules on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-21T00:00:43.866Z · score: 6 (3 votes) · EA · GW

This paper (Schuyler, J. R., & Nieman, T. (2007, January 1). Optimizer's Curse: Removing the Effect of this Bias in Portfolio Planning. Society of Petroleum Engineers. doi:10.2118/107852-MS; earlier version) has some simple recommendations for dealing with the Optimizer's Curse:

The impacts of the OC will be evident for any decisions involving ranking and selection among alternatives and projects. As described in Smith and Winkler, the effects increase when the true values of alternatives are more comparable and when the uncertainty in value estimations is higher. This makes intuitive sense: We expect a higher likelihood of making incorrect decisions when there is little true difference between alternatives and where there is significant uncertainty in our ability to assess value.
(...) Good decision-analysis practice suggests applying additional effort when we face closely competing alternatives with large uncertainty. In these cases, we typically conduct sensitivity analyses and value-of-information assessments to evaluate whether to acquire additional information. Incremental information must provide sufficient additional discrimination between alternatives to justify the cost of acquiring the additional information. New information will typically reduce the uncertainty in our values estimates, with the additional benefit of reducing the magnitude of OC.

The paper's focus is actually on a more concrete Bayesian approach, based on modelling the population from which potential projects are sampled.

Comment by michaelstjules on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-20T22:32:02.652Z · score: 1 (1 votes) · EA · GW

I made a long top-level comment that I hope will clarify some problems with the solution proposed in the original paper.

I ask the same question I asked of OP: give me some guidance that applies for estimating the impact of maximizing actions that doesn't apply for estimating the impact of randomly selected actions.

This is a good point. Somehow, I think you’d want to adjust your posterior downward based on the set or the number of options under consideration, and on how unlikely the data that makes the intervention look good is. This is not really useful, since I don't know how much you should adjust these. Maybe there's a way to model this explicitly, but it seems like you'd be trying to model your selection process itself before you've defined it, and then look for a selection process which satisfies some properties.

You might also want to spend more effort looking for arguments and evidence against each option the more options you're considering.

When considering a larger number of options, you could use some randomness in your selection process or spread funding further (although the latter will be vulnerable to the satisficer's curse if you're using cutoffs).

What do you mean by "the priors"?

If I haven’t decided on a prior, and multiple different priors (even an infinite set of them) seem equally reasonable to me.

Comment by michaelstjules on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-20T22:00:14.348Z · score: 2 (2 votes) · EA · GW

I’m going to try to clarify further why I think the Bayesian solution in the original paper on the Optimizer’s Curse is inadequate.

The Optimizer's Curse is defined by Proposition 1: informally, the expectation of the estimated value of your chosen intervention overestimates the expectation of its true value when you select the intervention with the maximum estimate.

The proposed solution is to instead maximize the posterior expected value of the variable being estimated (conditional on your estimates, the data, etc.), with a prior distribution for this variable, and this is purported to be justified by Proposition 2.

However, Proposition 2 holds no matter which priors and models you use; there are no restrictions at all in its statement (or proof). It doesn’t actually tell you that your posterior distributions will tend to better predict values you will later measure in the real world (e.g. by checking if they fall in your 95% credence intervals), because there need not be any connection between your models or priors and the real world. It only tells you that your maximum posterior EV equals your corresponding prior’s EV (taking both conditional on the data, or neither, although the posterior EV is already conditional on the data).

Something I would still call an “optimizer’s curse” can remain even with this solution when we are concerned with the values of future measurements rather than just the expected values of our posterior distributions based on our subjective priors. I’ll give 4 examples, the first just to illustrate, and the other 3 real-world examples:

1. Suppose you have different fair coins, but you aren’t 100% sure they’re all fair, so you have a prior distribution over the future frequency of heads (it could be symmetric in heads and tails, so the expected value would be 1/2 for each), and you use the same prior for each coin. You want to choose the coin which has the maximum future frequency of landing heads, based on information about the results of finitely many new coin flips from each coin. If you select the one with the maximum expected posterior, and repeat this trial many times (flip each coin multiple times, select the one with the max posterior EV, and then repeat), you will tend to find the posterior EV of your chosen coin to be greater than 1/2, but since the coins are actually fair, your estimate will be too high more than half of the time on average. I would still call this an “optimizer’s curse”, even though it followed the recommendations of the original paper. Of course, in this scenario, it doesn’t matter which coin is chosen.

Now, suppose all the coins are as before except for one which is actually biased towards heads, and you have a prior for it which will give a lower posterior EV conditional on all heads and no tails than the other coins would (e.g. you’ve flipped it many times before with particular results to achieve this; or maybe you already know its bias with certainty). You will record the results of n coin flips for each coin. With enough coins, and depending on the actual probabilities involved, you could be less likely to select the biased coin (on average, over repeated trials) based on maximum posterior EV than by choosing a coin randomly; you'll do worse than chance.

(Math to demonstrate the possibility of the posteriors working this way for k heads out of n flips: you could have a uniform prior on the true future long-run average frequency of heads for the unbiased coins, i.e. uniform over p in the interval [0, 1]; then the posterior after k heads in n flips is a Beta(k + 1, n − k + 1) distribution, and its EV is (k + 1)/(n + 2), which goes to 1 as n goes to infinity when k = n. You could have a prior which gives certainty to your biased coin having any true average frequency q < 1, so any of the unbiased coins which lands heads n out of n times will beat it for n large enough.)

If you flip each coin n times, there’s a number of coins, N, so that the true probability (not your modelled probability) of at least one of the N unbiased coins landing all n heads is strictly greater than 1/2, i.e. 1 − (1 − 1/2^n)^N > 1/2, which holds for N > 2^n · ln(2), roughly (for n = 10, you need N ≥ 710, and for n = 20, you need N of about 727,000, so the required N grows pretty fast as a function of n). This means, with probability strictly greater than 1/2, you won’t select the biased coin, so with probability strictly less than 1/2, you will select the biased coin. So, you actually do worse than random choice, because of how many different coins you have and how likely one of them is to get very lucky. You would have even been better off on average ignoring all of the new coin flips and sticking to your priors, if you already suspected the biased coin was better (if you had a prior with mean greater than 1/2 for it).

2. A common practice in machine learning is to select the model with the greatest accuracy on a validation set among multiple candidates. Suppose that the validation and test sets are a random split of a common dataset for each problem. You will find that under repeated trials (not necessarily identical; they could be over different datasets/problems, with different models) that by choosing the model with the greatest validation accuracy, this value will tend to be greater than its accuracy on the test set. If you build enough models each trial, you might find the models you select are actually overfitting to the validation set (memorizing it), sometimes to the point that the models with highest validation accuracy will tend to have worse test accuracy than models with validation accuracy in a lower interval. This depends on the particular dataset and machine learning models being used. Part of this problem is just that we aren’t accounting for the possibility of overfitting in our model of the accuracies, but fixing this on its own wouldn’t solve the extra bias introduced by having more models to choose from.

3. Due to the related satisficer’s curse, when doing multiple hypothesis tests, you should adjust your p-values upward or your p-value cutoffs (false positive rate, significance level threshold) downward in specific ways to better predict replicability. There are corrections for the cutoff that account for the number of tests being performed; a simple one (the Bonferroni correction) is that if you want a false positive rate of α, and you’re doing m tests, you could instead use a cutoff of α/m.

4. The satisficer’s curse also guarantees that empirical study publication based on p-value cutoffs will cause published studies to replicate less often than their p-values alone would suggest. I think this is basically the same problem as 3.
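To make the biased-coin scenario in example 1 concrete, here is a minimal Python sketch (standard library only; the 0.7 bias, 10 flips per coin, and coin counts are hypothetical numbers chosen for illustration): one coin's heads rate is known with certainty to be 0.7, the rest are fair but get a uniform prior, and we select on maximum posterior EV.

```python
import random

def p_select_best(num_fair, flips=10, p_biased=0.7, trials=5000, seed=1):
    """Probability that max-posterior-EV selection picks the one coin that
    is genuinely better. The biased coin's heads rate (0.7) is known with
    certainty, so its posterior EV is always 0.7; each fair coin gets a
    uniform prior, so its posterior mean after h heads in `flips` flips
    is (h + 1) / (flips + 2)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        best_fair_ev = max(
            (sum(rng.random() < 0.5 for _ in range(flips)) + 1) / (flips + 2)
            for _ in range(num_fair)
        )
        if best_fair_ev < p_biased:  # no fair coin got lucky enough to beat it
            wins += 1
    return wins / trials

# With enough fair competitors, a lucky fair coin usually wins, and
# posterior-EV selection finds the truly better coin less often than
# picking one of the (num_fair + 1) coins uniformly at random would:
for num_fair in (10, 100):
    print(num_fair, p_select_best(num_fair), 1 / (num_fair + 1))
```

With 10 fair coins the posterior-EV rule still beats random selection, but with 100 fair coins it does worse than random: some fair coin almost always lands enough heads to out-score the known-better coin.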
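Example 2 can be sketched the same way, under the deliberately extreme assumption (for illustration only) that every candidate model is pure noise, so its true accuracy is 0.5 on any split:

```python
import random

def avg_val_vs_test(num_models, trials=100, n=100, seed=2):
    """Each candidate 'model' guesses binary labels at random, so its true
    accuracy is 0.5 on any split. Pick the model with the best validation
    accuracy, record that model's accuracy on a held-out test set, and
    average both numbers over repeated trials."""
    rng = random.Random(seed)
    sum_val = sum_test = 0.0
    for _ in range(trials):
        best_val, best_test = -1.0, 0.0
        for _ in range(num_models):
            val_acc = sum(rng.random() < 0.5 for _ in range(n)) / n
            test_acc = sum(rng.random() < 0.5 for _ in range(n)) / n
            if val_acc > best_val:
                best_val, best_test = val_acc, test_acc
        sum_val += best_val
        sum_test += best_test
    return sum_val / trials, sum_test / trials

val_acc, test_acc = avg_val_vs_test(num_models=100)
print(val_acc, test_acc)  # validation accuracy well above 0.5; test accuracy near 0.5
```

The selected model's validation accuracy looks well above chance purely because it was selected, while its test accuracy stays near 0.5; the gap is the selection bias, and it grows with the number of candidate models.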
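For examples 3 and 4, the simple cutoff correction (the Bonferroni correction α/m, assuming independent tests) can be checked directly:

```python
def fwer(per_test_alpha, m):
    """Chance of at least one false positive among m independent tests
    where every null hypothesis is true."""
    return 1 - (1 - per_test_alpha) ** m

alpha, m = 0.05, 20
print(round(fwer(alpha, m), 3))      # ≈ 0.642: naive per-test cutoff
print(round(fwer(alpha / m, m), 3))  # ≈ 0.049: Bonferroni-corrected cutoff
```

At the naive cutoff you are more likely than not to get at least one spurious "discovery" across 20 null tests; dividing the cutoff by the number of tests keeps the family-wise false positive rate at or below the intended level.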

Now, if you treat your priors as posteriors that are conditional on a sample of random observations and arguments you’ve been exposed to or thought of yourself, you’d similarly find a bias towards interventions with “lucky” observations and arguments. For the intervention you do select compared to an intervention chosen at random, you’re more likely to have been convinced by poor arguments that support it and less likely to have seen good arguments against it, regardless of the intervention’s actual merits, and this bias increases the more interventions you consider. The solution supported by Proposition 2 doesn’t correct for the number of interventions under consideration.

Comment by michaelstjules on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-16T06:42:19.149Z · score: 3 (2 votes) · EA · GW
You seem to be using "people all agree" as a stand-in for "the optimizer's curse has been addressed". I don't get this. Addressing the optimizer's curse has been mathematically demonstrated. Different people can disagree about the specific inputs, so people will disagree, but that doesn't mean they haven't addressed the optimizer's curse.

Maybe we're thinking about the optimizer's curse in different ways.

The proposed solution of using priors just pushes the problem to selecting good priors. It's also only a solution in the sense that it reduces the likelihood of mistakes happening (discovered in hindsight, and under the assumption of good priors), but not provably to its minimum, since it does not eliminate the impacts of noise. (I don't think there's any complete solution to the optimizer's curse, since, as long as estimates are at least somewhat sensitive to noise, "lucky" estimates will tend to be favoured, and you can't tell in principle between "lucky" and "better" interventions.)

If you're presented with multiple priors, and they all seem similarly reasonable to you, but depending on which ones you choose, different actions will be favoured, how would you choose how to act? It's not just a matter of different people disagreeing on priors, it's also a matter of committing to particular priors in the first place.

If one action is preferred with almost all of the priors (perhaps rare in practice), isn't that a reason (perhaps insufficient) to prefer it? To me, using this could be an improvement over just using priors, because I suspect it will further reduce the impacts of noise, and if it is an improvement, then just using priors never fully solved the problem in practice in the first place.

I agree with the rest of your comment. I think something like that would be useful.

Comment by michaelstjules on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-14T23:09:51.766Z · score: 3 (4 votes) · EA · GW
What do you mean by "a good position"?
I'm getting a little confused about what sorts of concrete conclusions we are supposed to take away from here.

I'm not saying we shouldn't use priors or that they'll never help. What I am saying is that they don't address the optimizer's curse just by including them, and I suspect they won't help at all on their own in some cases.

Maybe checking sensitivity to priors and further promoting interventions whose value depends less on them (among some set of "reasonable" priors) would help. You could see this as a special case of Chris's suggestion to "Entertain multiple models".

Perhaps you could even use an explicit model to combine the estimates or posteriors from multiple models into a single one in a way that either penalizes sensitivity to priors or gives less weight to more extreme estimates, but a simpler decision rule might be more transparent or otherwise preferable. From my understanding, GiveWell already uses medians of its analysts' estimates this way.
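As a toy illustration of why taking a median over estimates gives less weight to an extreme one than taking a mean would (the numbers are hypothetical):

```python
from statistics import mean, median

# Hypothetical cost-effectiveness estimates from five analysts,
# one of whom is wildly optimistic:
estimates = [3.0, 3.5, 4.0, 4.2, 40.0]

print(mean(estimates))    # → 10.94, dragged up by the single extreme estimate
print(median(estimates))  # → 4.0, robust to it
```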

Ah, I guess we'll have to switch to a system of epistemology which doesn't bottom out in unproven assumptions. Hey hold on a minute, there is none.

I get your point, but the snark isn't helpful.

Comment by michaelstjules on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-14T17:57:45.144Z · score: 3 (4 votes) · EA · GW
Yes, but it's very hard to attack any particular prior as well.

I don't think this leaves you in a good position if your estimates and rankings are very sensitive to the choice of "reasonable" priors. Chris illustrated this in his post at the end of part 2 (with the atheist example), and in part 3.

You could try to choose some compromise between these priors, but there are multiple "reasonable" ways to compromise. You could introduce a prior on these priors, but you could run into the same problem with multiple "reasonable" choices for this new prior.

Comment by michaelstjules on Existential risk as common cause · 2019-02-24T19:54:01.114Z · score: 2 (2 votes) · EA · GW

I think even more people have things in the bads set, and there will be more agreement on these values, too, e.g. suffering, cruelty and injustice. The question is then a matter of weight.

Most people (and probably most EAs) aren't antinatalists, so you would expect, for them, the total good to outweigh the total bad. Or, they haven't actually thought about it enough.

Comment by michaelstjules on Cause profile: mental health · 2018-12-31T20:08:56.093Z · score: 3 (3 votes) · EA · GW

OTOH, while current mental health issues may prevent altruism, prior experiences of suffering may lead to increased empathy and compassion.

Comment by michaelstjules on What’s the Use In Physics? · 2018-12-30T22:40:11.719Z · score: 4 (4 votes) · EA · GW

A few more: energy (nuclear fusion, green tech, energy storage), medical physics, quantum computing (and its medical applications), risks from space and preparedness for worst case scenarios (like ALLFED).

Comment by michaelstjules on How High Contraceptive Use Can Help Animals? · 2018-12-30T18:31:07.557Z · score: 1 (1 votes) · EA · GW
By preventing one pregnancy in Vietnam, we save approximately 30 mammals, 850 chickens, and 1,395 fish from being produced in factory-farmed conditions (or 35,626 welfare points).

Is this only from the animal products the child would have eaten themself? Should the consumption from that child's descendants be included?

None of the GiveWell/ACE top or standout charities are working in these areas.

FWIW, TLYCS recommends PSI and DMI, and DMI is one of GiveWell's standout charities, and both do family planning work.

Comment by michaelstjules on How High Contraceptive Use Can Help Animals? · 2018-12-30T10:10:03.112Z · score: 2 (2 votes) · EA · GW

FWIW, this is aimed at developing countries.

Couldn't you say the same about GiveWell's evaluation of AMF, TLYCS's evaluation of PSI or the evaluation of any other charity or intervention that would predictably affect population sizes? ACE doesn't consider impacts on wild animals for most of the charities/interventions it looks into, either, despite the effects of agriculture on wild animals.

My impression is that Charity Science/Entrepreneurship prioritizes global health/poverty and animal welfare, so we shouldn't expect them to consider the effects on technological advancement or GCRs any more than we should expect GiveWell, TLYCS or ACE to.

They have worked on evaluating animal welfare, though, so it would be nice to see this work applied here for wild animals.

EDIT: Oh, is the concern that they're looking at a more biased subset of possible effects (by focusing primarily on effects that seem positive)?

Comment by michaelstjules on Detecting Morally Significant Pain in Nonhumans: Some Philosophical Difficulties · 2018-12-29T01:07:15.354Z · score: 9 (3 votes) · EA · GW

For the Rethink Priorities project, why not also look into consciousness in plant species (e.g. mimosa and some carnivorous plants), AI (especially reinforcement learning) and animal/brain simulations (e.g. OpenWorm)? Whether or not they're conscious (or conscious in a way that's morally significant), they can at least provide some more data to adjust our credences in the consciousness of different animal species; they can still be useful for comparisons.

I understand that there will be little research to use here, but I expect this to mean proportionately less time will be spent on them.

Comment by michaelstjules on The harm of preventing extinction · 2018-12-26T06:19:06.450Z · score: 5 (3 votes) · EA · GW
My rough answer to this is: If someone wants to die (after thinking about it for a long time and having time to reflect on it), let them die.

Some people don't have the choice to die, because they're prevented from it, like victims of abuse/torture or certain freak accidents.

I don't see how the atrocities that are experienced by humans outweigh the benefits, given that the vast majority of humans seem to have a pretty decent will to live.

I think this is a problem with the idea of "outweigh". Utilitarian interpersonal tradeoffs can be extremely cruel and unfair. If you think the happiness can aggregate to outweigh the worst instances of suffering:

1. How many additional happy people would need to be born to justify subjecting a child to a lifetime of abuse and torture?

2. How many extra years of happy life for yourself would you need to justify subjecting a child to a lifetime of abuse and torture?

The framings might invoke very different immediate reactions (2 seems much more accusatory because the person benefitting from another's abuse and torture is the one making the decision to subject them to it), but for someone just aggregating by summation, like a classical utilitarian, they're basically the same.

I think it's put pretty well here, too:

There’s ongoing sickening cruelty: violent child pornography, chickens are boiled alive, and so on. We should help these victims and prevent such suffering, rather than focus on ensuring that many individuals come into existence in the future. When spending resources on increasing the number of beings instead of preventing extreme suffering, one is essentially saying to the victims: “I could have helped you, but I didn’t, because I think it’s more important that individuals are brought into existence. Sorry.”
Comment by michaelstjules on The expected value of extinction risk reduction is positive · 2018-12-23T20:54:48.327Z · score: 1 (1 votes) · EA · GW

Isn't it equally justified to assume that their welfare in the conditions they were originally optimized/designed for is 0 in expectation? If anything, it makes more sense to me to make assumptions about this setting first, since it's easier to understand their motivations and experiences in this setting based on their value for the optimization process.

Apart from that, I am not sure if the two assumptions listed as bullet points above will actually hold for the majority of "sentient tools".

We can ignore any set of tools that has zero total wellbeing in expectation; what's left could still dominate the expected value of the future. We can look at sets of sentient tools that we might think could be biased towards positive or negative average welfare:

1. the set of sentient tools used in harsher conditions,

2. the set used in better conditions,

3. the set optimized for pleasure, and

4. the set optimized for pain.

Of course, there are many other sets of interest, and they aren't all mutually exclusive.

The expected value of the future could be extremely sensitive to beliefs about these sets (their sizes and average welfares). (And this could be a reason to prioritize moral circle expansion instead.)

Comment by michaelstjules on The expected value of extinction risk reduction is positive · 2018-12-18T21:05:13.242Z · score: 2 (2 votes) · EA · GW
Assuming that future agents are mostly indifferent towards the welfare of their “tools”, their actions would affect powerless beings only via (in expectation random) side-effects. It is thus relevant to know the “default” level of welfare of powerless beings.

By "in expectation random", do you mean 0 in expectation? I think there are reasons to expect the effect to be negative (individually), based on our treatment of nonhuman animals. Our indifference to chicken welfare has led to severe deprivation in confinement, more cannibalism in open but densely packed systems, the spread of diseases, artificial selection causing chronic pain and other health issues, and live boiling. I expect chickens' wild counterparts (red junglefowl) to have greater expected utility, individually, and plausibly positive EU (from a classical hedonistic perspective, although I'm not sure either way). Optimizing for productivity usually seems to come at the cost of individual welfare.

Even for digital sentience, if designed with the capacity to suffer -- regardless of our intentions and their "default" level of welfare, and especially if we mistakenly believe them not to be sentient -- we might expect their welfare to decline as we demand more from them, since there's too little instrumental value for us in recalibrating their affective responses or redesigning them for higher welfare. The conditions in which they're used may become significantly harsher than those they were initially designed for.

It's also very plausible that many of our digital sentiences will be designed through evolutionary/genetic algorithms or other search algorithms that optimize for some performance ("fitness") metric, and because these approaches are so computationally expensive, we may be likely to reuse the digital sentiences, with only minor adjustments, outside the environments for which they were optimized. This is already being done with deep neural networks today.

Similarly, we might expect more human suffering (individually) from AGI with goals orthogonal to our welfare, an argument against positive expected human welfare.

Comment by michaelstjules on Existential risk as common cause · 2018-12-09T19:25:07.687Z · score: 6 (6 votes) · EA · GW

You can get similar value-independence in favour of extinction by using "bads" instead of "goods". Many of the values in Oesterheld's list have opposites which could reasonably be interpreted as "bads", and some of them are already "bads", e.g. suffering, pain and racism.

Comment by michaelstjules on Existential risk as common cause · 2018-12-09T19:04:01.825Z · score: 7 (4 votes) · EA · GW

Besides the person-affecting views and disvalue of life covered here, if an individual has an Epicurean view of life and death (another kind of person-affecting view), i.e. death is not bad, then improving wellbeing should probably take priority. And while Epicureanism assigns 0 disvalue to death (ignoring effects on others), one could assign values arbitrarily close to 0.

There are also issues with infinities that make utilitarianism non-action-guiding (it doesn't tell us what to do in most practical cases); you could probably throw these in with nihilism. E.g. if the universe is unbounded ("infinite") in space or time, then we can't change the total sum of utility, and that sum isn't even well-defined (not even +infinity or -infinity) under the usual definitions of convergence in the real numbers. If you assign any nonzero probability to an infinite universe, you end up with the same problem, and it's actually pretty likely that the universe is spatially unbounded. There are several attempted solutions, but all of them have pretty major flaws, AFAIK.
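To see concretely why an infinite total isn't well-defined, here's a quick sketch (my own toy illustration, not from any of the linked discussions): if the universe contains infinitely many positive and negative welfare terms, the "total" you get depends on the order in which you add them up, by Riemann's rearrangement theorem.

```python
# The same infinite multiset of welfare terms, summed in two different
# orders, converges to two different totals.
import math

def alternating_harmonic(n):
    # 1 - 1/2 + 1/3 - 1/4 + ... -> ln(2) in the natural order
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def rearranged(n):
    # Same terms, reordered: one positive term, then two negative terms
    # (1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...) -> ln(2)/2
    total = 0.0
    pos, neg = 1, 2  # next odd (positive) and even (negative) denominators
    for _ in range(n):
        total += 1 / pos
        pos += 2
        total -= 1 / neg + 1 / (neg + 2)
        neg += 4
    return total

print(alternating_harmonic(10**5))  # ~ 0.6931 (ln 2)
print(rearranged(10**5))            # ~ 0.3466 (ln(2)/2)
```

Since there's no privileged order in which to sum welfare across an unbounded universe, "the total sum of utility" has no canonical value.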

Some person-affecting views can help, i.e. using a Pareto principle, but then it's not clear how to deal with individuals whose exact identities depend on your decisions (or maybe we just ignore them; many won't like that solution), and there are still many cases that can't be handled. There's discussion in this podcast, with some links for more reading (ctrl-F "Pareto" after expanding the transcript):

Rounding sufficiently small probabilities to 0 and considering only parts of the universe we're extremely confident we can affect can help, too. This proposed solution and a few others are discussed here:

You could also have a bounded vNM utility function, but this means assigning decreasing marginal value to saving lives, and how you divide decisions/events matters, e.g. "saving 1 life and then saving 1 life" > "saving 2 lives and then saving 0 lives".
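To make that inequality concrete (with an assumed bounded utility function of my own choosing, purely for illustration): take u(n) = 1 - 2^(-n) over lives saved per decision, which is bounded above by 1.

```python
# With a bounded per-decision utility function, two decisions that each
# save one life beat one decision saving two lives plus one saving none,
# even though both pairs save two lives in total.
def u(lives_saved):
    return 1 - 2 ** (-lives_saved)  # bounded above by 1

split = u(1) + u(1)    # 0.5 + 0.5  = 1.0
lumped = u(2) + u(0)   # 0.75 + 0.0 = 0.75
assert split > lumped
```

So how you carve up events into separate "decisions" changes the verdict, which is the problem.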

For the unbounded time case (assuming we can handle or avoid issues with unbounded space, and people might prefer not to treat time and space differently):

Comment by michaelstjules on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-11-04T06:05:08.670Z · score: 1 (1 votes) · EA · GW

Regarding interpersonal comparisons,

So long as these differences are randomly distributed, they will wash out as ‘noise’ across large numbers of people

I think this is a crucial assumption that may not hold when comparing groups, i.e. there could be group differences (which could involve the same people, but before and after some event) in interpretations of the scales, due to differences in experiences between the two groups, e.g. a disability.
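As a toy illustration of why this matters (all numbers made up): if two groups have identical latent wellbeing but one group systematically uses the 0-10 scale one point higher, the gap in reported means is systematic bias, not noise, so it doesn't wash out no matter how many people you survey.

```python
# Identical latent wellbeing; group B reports one point higher on the
# 0-10 scale (a different interpretation of the labels, not noise).
latent = [i / 10 for i in range(0, 101)]  # same latent wellbeing for both groups

report_a = [min(10, int(x)) for x in latent]       # group A's scale use
report_b = [min(10, int(x) + 1) for x in latent]   # group B shifts up by one point

def mean(xs):
    return sum(xs) / len(xs)

print(mean(report_a), mean(report_b))  # reported means differ by ~1 point
```

Averaging over more respondents only makes the spurious group difference more statistically "significant".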

That immigrants to Canada seem to use the scales similarly to Canadians doesn't mean they weren't using the scales differently before they came to Canada. I think we actually discussed scale issues with life satisfaction on Facebook before (prompted by you?), and differences after adjusting for item responses seem to suggest different interpretations of the scale (or the items), or different relationships between the items. Two examples (cited in one of the papers in your reading list):

But there's an obvious response here: we should use IRT to adjust for different scale interpretations.

Comment by michaelstjules on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-11-04T04:14:34.947Z · score: 1 (1 votes) · EA · GW

I think that the evidence you present in section 4, e.g. that people interpret scales as equal-interval and that immigrants have similar SWB, could be a good response to this paper, though, because it suggests that we can interpret the discrete life satisfaction scale as cardinal and just aggregate it instead.

Comment by michaelstjules on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-11-01T04:18:13.480Z · score: 1 (1 votes) · EA · GW

The theoretical results don't depend on the scale being 3-point. Their argument deals directly with the assumed underlying normal distributions and transforms them into log-normal distributions with the order of the expected values reversed, so it doesn't matter how you've estimated the parameters of the normal distributions or if you've even done it at all.
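A sketch of the reversal with illustrative parameters (my numbers, not the paper's): suppose reported scores reflect latent variables X_A ~ Normal(mu_A, sigma_A) and X_B ~ Normal(mu_B, sigma_B), but true happiness is exp(X), i.e. log-normal, with mean exp(mu + sigma^2/2). A higher-variance group can then have lower latent mean but higher true mean.

```python
# Mean of a lognormal: E[exp(X)] = exp(mu + sigma**2 / 2) for X ~ N(mu, sigma).
import math

def lognormal_mean(mu, sigma):
    return math.exp(mu + sigma ** 2 / 2)

mu_a, sigma_a = 1.0, 0.5  # group A: higher latent mean, lower variance
mu_b, sigma_b = 0.8, 1.5  # group B: lower latent mean, higher variance

print(mu_a > mu_b)  # A ranks higher on the normal (latent) scale
print(lognormal_mean(mu_a, sigma_a) < lognormal_mean(mu_b, sigma_b))  # order reversed
```

Since the data alone can't tell us whether the reported scale or the exponentiated one is the "true" cardinalization, the ranking of group means is underdetermined.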

In the case of life satisfaction scales, is there any empirical evidence we could use to decide the form of the underlying continuous distribution?

They do suggest that you could "use objective measures to calibrate cardinalizations of happiness", e.g. with incidence of mental illness, or frequencies of moods, as the authors have done something similar here

Comment by michaelstjules on EA Hotel with free accommodation and board for two years · 2018-06-22T02:57:15.747Z · score: 3 (3 votes) · EA · GW

V*ganism shows very high 'recidivism' rates in the general population. Most people who try to stop eating meat/animal products usually end up returning to eat these things before long.

FWIW, based on Faunalytics surveys, the recidivism rate seems to be about 50% for vegans motivated by animal protection specifically:

Comment by michaelstjules on EA Hotel with free accommodation and board for two years · 2018-06-22T02:34:19.145Z · score: 3 (3 votes) · EA · GW

There are protected characteristics, like race and gender, and the only way I can see EA/non-EA being covered is through beliefs. The first link only mentions religion specifically, but the second includes philosophical beliefs more generally:

More here:

I would guess that nonprofits that serve only people with a certain protected characteristic, e.g. women's shelters, can also be legal. Maybe it could fall under Services and public functions, Premises or Associations: