Why I made a career switch

2019-03-12T09:00:03.902Z · score: 17 (10 votes)

The most serious moral illusion: arbitrary group selection

2019-02-21T22:45:43.560Z · score: 10 (5 votes)

Higher and more equal: a case for optimism

2018-12-26T22:10:55.362Z · score: 11 (8 votes)

Speciesism, arbitrariness and moral illusions

2018-12-17T22:30:59.940Z · score: 8 (3 votes)
Comment by stijnbruers on Reducing existential risks or wild animal suffering? · 2018-11-07T20:17:41.974Z · score: 0 (0 votes) · EA · GW

Thanks for the paper! Concerning the moral patients and mice: they indeed lack the capability to determine their reference values (critical levels) and to express their utility functions (perhaps we can derive them from their revealed preferences). This means those mice have no preference for a critical level or for a population ethical theory: they have no preference for total utilitarianism or negative utilitarianism or whatever. That could mean that we can choose a critical level for them, and hence the population ethical implications, and those mice cannot complain about our choices, because they are indifferent. If we strongly want total utilitarianism and hence a zero critical level, fine: then we can say that those mice also have a zero critical level. But if we want to avoid the sadistic repugnant conclusion in the example with the mice, fine: then we can set the critical levels of those mice higher, such that we choose the situation where those quadrillions of mice don't exist. Even the mice who do exist cannot complain about our choice for the non-existence of those extra quadrillion mice, because they are indifferent about that choice.

Comment by stijnbruers on Reducing existential risks or wild animal suffering? · 2018-11-06T13:37:24.505Z · score: 0 (2 votes) · EA · GW

Perhaps there is more of importance than merely welfare. Concerning the repugnant sadistic conclusion I can say two things. First, I am not willing to put myself and all my friends in extreme misery merely for the extra existence of quadrillions of people who have nothing but the small positive experience of tasting an apple. Second, if I were one of those extra people, living for a minute and tasting an apple, knowing that my existence involved the extreme suffering of billions of people who could otherwise have been very happy, I would rather not exist. That means that even if my welfare from briefly tasting the apple (a nice juicy Pink Lady) is positive, I still prefer the other situation, where I don't exist, so my preference (relative utility) in the situation where I exist is negative. So in the second situation, where the extra people exist, whether I am one of the suffering people or one of the extra, apple-eating people, in both cases I have a negative preference for that situation. Or stated differently: in the first situation, where only the billion happy people exist, no one can complain (the non-existing people are not able to complain about their non-existence and their forgone happiness of tasting an apple). In the second situation, where those billion people are in extreme misery, they could complain. The axiom that we should minimize the sum of complaints is as reasonable as the axiom that we should maximize the sum of welfare.
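As a minimal sketch (the numbers are made up for illustration, not taken from the comment), here is how the two axioms come apart on these two situations:

```python
# Minimal sketch with made-up numbers: "maximize total welfare" versus
# "minimize total complaints" on the two situations described above.
BILLION, QUADRILLION = 10**9, 10**15

# Situation 1: a billion very happy people (welfare 100 each).
# Situation 2: the same billion in extreme misery (-100 each), plus a
# quadrillion extra people, each with a tiny positive welfare (0.001).
welfare_1 = BILLION * 100
welfare_2 = BILLION * -100 + QUADRILLION * 0.001

# Total welfare prefers situation 2: the sadistic repugnant conclusion.
print(welfare_2 > welfare_1)  # True

# A complaint is the welfare an existing person forgoes relative to their
# best available situation; people who don't exist in a situation cannot
# complain about it. Only situation 2 generates complaints.
complaints_1 = 0                    # everyone is at their best outcome
complaints_2 = BILLION * 200        # each miserable person forgoes 200
print(complaints_1 < complaints_2)  # True: minimizing complaints picks situation 1
```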

Comment by stijnbruers on Reducing existential risks or wild animal suffering? · 2018-11-06T06:53:34.941Z · score: 0 (0 votes) · EA · GW

I don't see why the A-Z comparison is unreliable, based on your example. Why would the intuitions behind the repugnant conclusion be less reliable than the intuitions behind our choice of axioms? And we're not merely talking about the repugnant conclusion, but about the sadistic repugnant conclusion, which is intuitively even more repugnant. So suppose we have to choose between two situations. In the first situation, there is only one next future human generation after us (say a few billion people), all with very long and extremely happy lives. In the second situation, there are quadrillions of future human generations of billions of people each, but they live for only one minute, in which they can experience the joy of taking a bite from an apple. The exception is the first of those future generations, who will suffer extremely for many years. So in order to have many future generations, the first of those generations will have to live lives of extreme misery, and all the other future lives are nothing more than tasting an apple. Can the joy of quadrillions of people tasting an apple trump the extreme misery of billions of people for many years?

Comment by stijnbruers on Reducing existential risks or wild animal suffering? · 2018-11-04T13:04:05.708Z · score: 1 (1 votes) · EA · GW

I very much agree with these points you make. About choice dependence: I'll leave that up to each person. For example, if everyone strongly believes that the critical levels should be choice set independent, then fine, they can choose independent critical levels for themselves. But the critical levels indeed also reflect moral preferences, and can include moral uncertainty. So for example someone with a strong credence in total utilitarianism might lower his or her critical level and make it choice set independent.

About the extreme preferences: I suggest people can choose a normalization procedure, such as variance normalization (cf. Owen Cotton-Barratt: http://users.ox.ac.uk/~ball1714/Variance%20normalisation.pdf, and here: https://stijnbruers.wordpress.com/2018/06/06/why-i-became-a-utilitarian/).
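A minimal sketch of what variance normalization does (a simplified reading of the linked paper, not its exact procedure): rescale each person's utilities over the choice set to mean 0 and variance 1 before aggregating, so that reporting extreme utilities no longer buys extra weight.

```python
import numpy as np

def variance_normalize(utilities: np.ndarray) -> np.ndarray:
    """Rescale one person's utilities over the choice set to mean 0, variance 1."""
    centered = utilities - utilities.mean()
    sd = centered.std()
    return centered / sd if sd > 0 else centered

# Rows are persons, columns are the options in the choice set (made-up numbers).
# Person 2 states extreme utilities, which would dominate a naive sum...
raw = np.array([[1.0, 2.0, 3.0],
                [1000.0, -1000.0, 0.0]])
print(raw.sum(axis=0))  # person 2 decides the ranking

# ...but after normalization both persons carry the same weight.
normalized = np.vstack([variance_normalize(row) for row in raw])
print(normalized.sum(axis=0))
```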

"It's worth noting that the resulting theory won't avoid the sadistic repugnant conclusion unless every agent has very very strong moral preferences to avoid it. But I think you're OK with that. I get the impression that you're willing to accept it in increasingly strong forms, as the proportion of agents who are willing to accept it increases." Indeed!

Comment by stijnbruers on Reducing existential risks or wild animal suffering? · 2018-11-04T08:12:12.487Z · score: 0 (0 votes) · EA · GW

But the critical level c is variable, and can depend on the choice set. So suppose the choice set consists of two situations. In the first, I exist and I have a positive welfare (or utility) w > 0. In the second, I don't exist and there is another person with a negative utility u < 0; his relative utility will also be u' < 0. For any positive welfare w I can pick a critical level c with 0 < c < w - u', such that my relative utility w - c > u', which means it would be better if I exist. So you turned it around: instead of saying "for any critical level c there is a welfare w...", we should say "for any welfare w there is a critical level c...".
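A small numerical check of this argument (illustrative numbers only):

```python
# Illustrative check: for any welfare w > 0 there is a critical level c
# with 0 < c < w - u' that makes my existence the better choice.
w = 10.0        # my welfare if I exist (positive)
u_prime = -4.0  # the other person's relative utility if I don't exist (negative)

c = (w - u_prime) / 2       # any value strictly between 0 and w - u' will do
assert 0 < c < w - u_prime
assert w - c > u_prime      # my relative utility beats u': my existence is better
print(c, w - c)             # 7.0 3.0
```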

Comment by stijnbruers on Reducing existential risks or wild animal suffering? · 2018-11-03T21:41:12.101Z · score: 0 (0 votes) · EA · GW

"If you think the idea of people with negative utility being created to prevent your happy existence is even more counterintuitive than people having negative welfare to produce your happy existence, it would seem your view would demand that you set a critical value of 0 for yourself." No, my view demands that we should not set the critical level too high. A strictly positive critical level that is low enough such that it would not result in the choice for that counter-intuitive situation, is still posiible.

"A situation where you don't exist but uncounted trillions of others are made maximally happy is going to be better in utilitarian terms (normal, critical-level, variable, whatever), regardless of your critical level (or theirs, for that matter)." That can be true, but still I prefer my non-existence in that case, so something must be negative. I call that thing relative utility. My relative utility is not about overall betterness, but about my own preference. A can be better than B in utilitarian terms, but still I could prefer B over A.

Comment by stijnbruers on Reducing existential risks or wild animal suffering? · 2018-11-03T11:46:41.192Z · score: 0 (0 votes) · EA · GW

“If individuals are allowed to select their own critical levels to respect their autonomy and preferences in any meaningful sense, that seems to imply respecting those people who value their existence and so would set a low critical level; then you get an approximately total view with regards to those sorts of creatures, and so a future populated with such beings can still be astronomically great.” Indeed: if everyone in the future (except me) were a total utilitarian, willing to bite the bullet and accept the repugnant sadistic conclusion, setting a very low critical level for themselves, I would accept their choices and we would end up with a variable critical level utilitarianism that is very, very close to total utilitarianism (it is not exactly total utilitarianism, because I would be the only one with a higher critical level). So the question is: how many people in the future are willing to accept the repugnant sadistic conclusion?

“The treatment of zero levels seems inconsistent: if it is contradictory to set a critical level below the level one would prefer to exist, it seems likewise nonsensical to set it above that level.” Utility measures a preference for a certain situation, independently of other possible situations. The critical level, and hence the relative utility, however, does take other possible situations into account. For example: I have a happy life with a positive utility. But if one could choose another situation where I did not exist and everyone else was maximally happy and satisfied, I would prefer (if that were still an option) that second situation, even though I don’t exist in it. That means my relative utility could be negative, if that second situation were eligible. So in a sense, in a particular choice set (i.e. when the second situation is available), I prefer my non-existence. Preferring my non-existence, even if my utility is positive, means I choose a critical level that is higher than my utility.

“You suggest that people set their critical levels based on their personal preferences about their own lives, but then you make claims about their choices based on your intuitions about global properties like the Repugnant Conclusion, with no link between the two.” I do not make claims about their choices based on my intuitions. All I can say is that if people really want to avoid the repugnant sadistic conclusion, they can do so by setting a high critical level. But to be altruistic, I have to accept the choices of everyone else. So if you all choose a critical level of zero, I will accept that, even if that means accepting the repugnant sadistic conclusion, which is very counter-intuitive to me.

“The article makes much about avoiding repugnant sadistic conclusion, but the view you seem to endorse at the end would support creating arbitrary numbers of lives consisting of nothing but intense suffering to prevent the existence of happy people with no suffering who set their critical level to an even higher level than the actual one.” This objection applies to fixed critical level utilitarianism and can easily be avoided with variable critical level utilitarianism. Suppose there is someone with a positive utility (a very happy person) who sets his critical level so high that a situation should be chosen where he does not exist and where extra people with negative utilities exist. Why would he set such a high critical level? He cannot want that: it is even more counter-intuitive than the repugnant sadistic conclusion. With fixed critical level utilitarianism, such a counter-intuitive conclusion can occur, because everyone would have to accept the high critical level. But variable critical level utilitarianism can easily avoid it by taking lower critical levels.

Comment by stijnbruers on Reducing existential risks or wild animal suffering? · 2018-11-03T09:13:20.376Z · score: 0 (0 votes) · EA · GW

I honestly don't yet see how setting a high critical level to avoid the repugnant sadistic conclusion would automatically result in counter-intuitive problems with the lexicality of a quasi-negative utilitarianism. Why would striking a compromise be less preferable than going all the way to a sadistic conclusion? (For me your example and calculations are still unclear: what is the choice set? What is the distribution of utilities in each possible situation?)

With rigidity I indeed mean having strong requirements on critical levels. Allowing critical levels to depend on the choice set is one way of introducing much more flexibility. But again, I'll leave it up to everyone to decide for themselves how rigidly they prefer to choose their own critical levels. If you find the choice set dependence of critical levels and relative utilities undesirable, you are allowed to pick your critical level independently of the choice set. That's fine, but we should accept the freedom of others not to do so.

Comment by stijnbruers on Reducing existential risks or wild animal suffering? · 2018-11-02T12:25:10.598Z · score: 0 (0 votes) · EA · GW

I guess your argument fails because it still contains too much rigidity. For example, the choice of critical level can depend on the choice set: the set of all situations that we can choose. I have added a section to my original blog post, which I copy here:

"Suppose we can choose a situation S1 in which an extra person i exists with a positive utility U(i,S1) > 0. However, suppose another situation S2 is available to us (i.e. we can choose situation S2), in which that person i will not exist, but everyone else is maximally happy, with maximum positive utilities. Although person i in situation S1 will have a positive utility, that person can still prefer the situation where he or she does not exist and everyone else is maximally happy. It is as if that person is a bit altruistic and prefers his or her non-existence in order to improve the well-being of others. That means his or her critical level C(i,S1) can be higher than the utility U(i,S1), such that his or her relative utility becomes negative in situation S1. In that case, it is better to choose situation S2 and not let the extra person be born. If instead of situation S2 another situation S2’ becomes available, where the extra person does not exist and everyone else has the same utility levels as in situation S1, then the extra person in situation S1 could prefer situation S1 above S2’, which means that his or her new critical level C(i,S1)’ remains lower than the utility U(i,S1). In other words: the choice of the critical level can depend on the possible situations that are eligible or available to the people who must make the choice about who will exist. If situations S1 and S2 are available, the chosen critical level will be C(i,S1), but if situations S1 and S2’ are available, the critical level can change into another value C(i,S1)’. Each person is free to decide whether or not his or her own critical level depends on the choice set."

So suppose we can choose between two situations. In situation A, one person has utility 0 and another person has utility 30. In situation Bq, the first person has utility -10, and instead of a second person there is now a huge number q of persons with very low but still positive utilities (i.e. a low positive level k). If the extra people think that preferring Bq is sadistic/repugnant, they can choose higher critical levels such that, in this choice set between A and Bq, situation A should be chosen. If instead of situation A we can choose situations B or C, the critical levels may change again. In the end, what this means is something like: present to all (potential) people the choice set of all possible (electable) situations that we can choose, let them choose their preferred situation, and then let them determine their own critical levels so as to obtain that preferred situation given that choice set.
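A small numerical sketch of the A-versus-Bq choice (the values for q and k are made up for illustration):

```python
# Illustrative numbers: situation A has one person at utility 0 and one at 30;
# situation Bq has the first person at -10 plus q extra people at utility k.
q, k = 10**6, 0.001

def total_relative_utility(utilities, critical_levels):
    return sum(u - c for u, c in zip(utilities, critical_levels))

# With all critical levels at 0 (total utilitarianism), Bq wins once q*k > 40:
A_total  = total_relative_utility([0, 30], [0, 0])
Bq_total = total_relative_utility([-10] + [k] * q, [0] * (q + 1))
print(Bq_total > A_total)   # True: the sadistic repugnant conclusion

# If the extra people raise their critical levels above k, their relative
# utilities turn negative and situation A should be chosen instead:
Bq_raised = total_relative_utility([-10] + [k] * q, [0] + [k + 0.01] * q)
print(Bq_raised < A_total)  # True: variable critical levels avoid the conclusion
```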

Comment by stijnbruers on Reducing existential risks or wild animal suffering? · 2018-11-02T12:15:49.958Z · score: 0 (0 votes) · EA · GW

I would say the utility of a person in a situation S measures how strongly that person prefers the given situation, independently of other possible situations that we could have chosen. But in the end the thing that matters is someone’s relative utility, which can be written as the utility minus a personal critical level. This indeed reframes the discussion into one about where the zero point of utility should lie. In particular, when it comes to interpersonal comparisons of utility or well-being, the utilities are only defined up to a positive affine transformation, i.e. up to multiplication with a positive scalar and addition of a constant term. The possible addition of a term is what sets the zero point of the utility level. I have written more about it here: https://stijnbruers.wordpress.com/2018/07/03/on-the-interpersonal-comparability-of-well-being/
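As a one-line illustration of the role of the additive term (assuming the critical level is expressed on the same utility scale):

```python
# Utilities are only defined up to a positive affine transformation
# u -> a*u + b. The additive term b moves the zero point; if the critical
# level c is shifted by the same b, the relative utility u - c is
# unchanged up to the positive rescaling a.
a, b = 2.0, 7.0
u, c = 12.0, 5.0
print(u - c, ((a * u + b) - (a * c + b)) / a)  # 7.0 7.0
```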

Comment by stijnbruers on Reducing existential risks or wild animal suffering? · 2018-10-30T16:36:20.197Z · score: 1 (1 votes) · EA · GW

Thanks for your comments!

“In particular, you'll have a very hard time convincing anyone who takes morality to be mind-independent to accept this view. I would find the view much more plausible if the critical level were determined for each person by some other means.” But choosing a mind-independent critical level seems difficult. By what other means could we determine a critical level? And why should that critical level be the same for everyone and the same in all possible situations? If we can’t find an objective rule to select a universal and constant critical level, picking a critical level introduces arbitrariness. This arbitrariness can be avoided by letting everyone choose their own critical level for themselves. If I choose 5 as my critical level and you choose 10 as yours, these choices are in a sense also arbitrary (e.g. why 5 and not 4?), but at least they respect our autonomy. Furthermore, I argued elsewhere that there is no predetermined universal critical level: https://stijnbruers.wordpress.com/2018/07/03/on-the-interpersonal-comparability-of-well-being/

“If you don't allow any, then I am free to choose a low negative critical level and live a very painful life, and this could be morally good. But that's more absurd than the sadistic repugnant conclusion, so you need some constraints.” I don’t think you are free to choose a negative critical level, because that would mean you would be OK with having a negative utility, and by definition that is something you cannot want. If your brain doesn’t like pain, you are not free to choose that from now on you like pain. And if your brain doesn’t want to be altered such that it likes pain, you are not free to choose to alter your brain. Neither are you free to invert your utility function, for example.

“You seem to want to allow people the autonomy to choose their own critical level but also require that everyone chooses a level that is infinitesimally less than their welfare level in order to avoid the sadistic repugnant conclusion.” That requirement is merely logical: if people want to avoid the sadistic repugnant conclusion, they will have to choose a high critical level (e.g. the maximum preferred level, to be safe). But there may be some total utilitarians who are willing to bite the bullet and accept the sadistic repugnant conclusion. I wonder how many total utilitarians there are.

“But also, I don't see how you can use the need to avoid the sadistic repugnant conclusion as a constraint for choosing critical levels without being really ad hoc.” What is ad hoc about it? If people want to avoid this sadistic conclusion, that doesn’t seem to be ad hoc to me. And if in order to avoid that conclusion they choose a maximum preferred critical level, that doesn’t seem ad hoc either.

“you might claim that all positive welfare is only of infinitesimal moral value but that (at least some) suffering is of non-infinitesimal moral disvalue.” As you mention, that also generates some counter-intuitive implications. Variable critical level utilitarianism (including quasi-negative utilitarianism) can avoid the counter-intuitive implications that result from such lexicalities with infinitesimals. For example, suppose we can bring two people into existence. The first one will have a negative utility of -10, and suppose that person chooses 5 as his critical level, so his relative utility will be -15. The second person will have a utility of +30. In order to allow his existence, that person can select a critical level infinitesimally below 15 (which is his maximally preferred critical level). Bringing those two people into existence then becomes infinitesimally good. And the second person will have a relative utility of 15, which is not infinitesimal (hence no lexicality issues here).
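A quick check of the arithmetic, with a small epsilon standing in for "infinitesimally below":

```python
eps = 1e-9               # stand-in for "infinitesimally below"

u1, c1 = -10, 5          # first person: relative utility -15
u2 = 30
c2 = 15 - eps            # just below the maximally preferred critical level of 15

r1 = u1 - c1             # -15
r2 = u2 - c2             # 15 + eps: not itself infinitesimal
print(r1, r2, r1 + r2)   # total is +eps: infinitesimally good, so both may exist
```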

“If the expected value of working on x-risk according to CU is many times greater than the expected value of working on WAS according to QNU (which is plausible), then all else being equal, you need your credence in QNU to be many times greater than your credence in CU. We could easily be looking at a factor of 1000 here, which would require something like a credence < 0.1 in CU, but that's surely way too low, despite the sadistic repugnant conclusion.” I agree with this line of reasoning, and the ‘maximise expected choice-worthiness’ idea is reasonable. Personally, I consider this sadistic repugnant conclusion to be so extremely counter-intuitive that I give total utilitarianism a very, very, very low credence. But if, say, a majority of people are willing to bite the bullet and really are total utilitarians, my credence in this theory can strongly increase. In the end I am a variable critical level utilitarian, so people can decide for themselves their critical levels and hence their preferred population ethical theory. If more than, say, 0.1% of people are total utilitarians (i.e. choose 0 as their critical level), reducing X-risks becomes dominant.
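A minimal sketch of the 'maximise expected choice-worthiness' comparison (the expected values are hypothetical stand-ins for the factor-of-1000 case in the quote):

```python
# Hypothetical expected values: x-risk work under classical/total
# utilitarianism (CU) is worth ~1000x WAS work under quasi-negative
# utilitarianism (QNU), as in the quoted factor-of-1000 case.
ev = {"x-risk": {"CU": 1000.0, "QNU": 0.0},
      "WAS":    {"CU": 0.0,    "QNU": 1.0}}

def expected_choiceworthiness(option, credences):
    return sum(p * ev[option][theory] for theory, p in credences.items())

# With credence 0.001 (0.1%) in CU, x-risk work already edges out WAS work:
credences = {"CU": 0.001, "QNU": 0.999}
print(expected_choiceworthiness("x-risk", credences))  # 1.0
print(expected_choiceworthiness("WAS", credences))     # 0.999
```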

“I imagine we'd be better off working on large scale s-risks directly.” I agree with the concerns about s-risks and the level of priority of s-risk reduction, but I consider continued wild animal suffering for millions of years to be the most concrete example of an s-risk that we have so far.

Reducing existential risks or wild animal suffering?

2018-10-28T09:51:47.419Z · score: 1 (13 votes)
Comment by stijnbruers on Reducing Wild Animal Suffering Literature Library: Introductory Materials, Philosophical & Empirical Foundations · 2018-05-06T21:52:50.139Z · score: 1 (1 votes) · EA · GW

just to add for completeness (or as a way of self-promotion): the "psychological" foundations of WAS (in particular the cognitive biases that lead to an anti-intervention attitude): https://stijnbruers.wordpress.com/2016/07/20/moral-illusions-and-wild-animal-suffering-neglect/ https://stijnbruers.wordpress.com/2016/09/12/on-intervention-in-nature-human-arrogance-and-moral-blind-spots/

Comment by stijnbruers on Descriptive Population Ethics and Its Relevance for Cause Prioritization · 2018-04-15T19:26:40.739Z · score: 1 (1 votes) · EA · GW

This project seems a bit similar to an idea that I have. I start with the population ethical view of variable critical level utilitarianism: https://stijnbruers.wordpress.com/2018/02/24/variable-critical-level-utilitarianism-as-the-solution-to-population-ethics/. On this view, everyone can choose his or her own preferred critical level of utility. Most people seem to aggregate around two values: 1) the totalists prefer a critical level of 0, which corresponds with total utilitarianism (the totalist view), and 2) the personalists or negativists prefer a conditionally maximum critical level (for example the utility of the most preferred state), which is close to negative utilitarianism and the person-affecting view. (I will not go into the conditionality part here.)

When we create new people, they can be either totalists or personalists (or something else, but that seems to be a minority). Or they can be in a morally uncertain, undecided superposition between totalist and personalist, in which case we are allowed to choose their critical levels for them. If we make a choice for a situation where a totalist with a positive utility (well-being) is created, that positive utility counts as a benefit or gratitude regarding our choice. If we caused the existence of a personalist (or negativist), we did not create a benefit; and if that personalist complains about our choice because he or she prefers another situation, we actually harmed that person. Now we have to add up all benefits and harms (all gratitudes and complaints) for everyone who will exist under the choice that we make.

Concerning the far future and existential risks, we need to know how many totalists and personalists there will be in the future. Studying the current distribution of totalists and personalists can give us a good estimate. This might be related to the N-ratios of people: totalists have low N-ratios, personalists/negativists have high N-ratios.
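A minimal sketch of this aggregation idea (the people, views and numbers are illustrative, not from the post):

```python
# Illustrative sketch: totalists use critical level 0; personalists use the
# utility of their most preferred available situation. "max_preferred" and
# the numbers below are made up for illustration.

def relative_utility(utility, view, max_preferred):
    critical = 0.0 if view == "totalist" else max_preferred
    return utility - critical

# Three people created by some candidate choice:
people = [
    {"utility": 10, "view": "totalist",    "max_preferred": 10},  # benefit (gratitude): +10
    {"utility": 10, "view": "personalist", "max_preferred": 10},  # no benefit, no harm: 0
    {"utility": 10, "view": "personalist", "max_preferred": 25},  # complaint (harm): -15
]
score = sum(relative_utility(p["utility"], p["view"], p["max_preferred"]) for p in people)
print(score)  # -5: the complaints outweigh the gratitude in this example
```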

Comment by stijnbruers on The person-affecting value of existential risk reduction · 2018-04-15T19:04:07.774Z · score: 1 (1 votes) · EA · GW

Perhaps interesting in this context: my current population ethical view of variable critical level utilitarianism https://stijnbruers.wordpress.com/2018/02/24/variable-critical-level-utilitarianism-as-the-solution-to-population-ethics/

Comment by stijnbruers on [deleted post] 2018-04-15T19:01:19.124Z

I suggest leaving it up to the persons themselves to decide whether they are benefitted. For example: I have a happy, positive life, so I claim that my parents benefitted me when they caused my existence. So there does exist someone (me, now, in this situation) who claims to be benefitted by the choice of someone else (my parents, 38 years ago), even if in the counterfactual I do not exist. So my parents made a choice for a situation where a bit more benefit is added to the total benefit. If you disagree, in the sense that you don't think you were benefitted by your parents when they chose your existence (even if you are as happy as I am), then that means your parents did not create an extra bit of benefit and you were not benefitted. More on this here: https://stijnbruers.wordpress.com/2018/02/24/variable-critical-level-utilitarianism-as-the-solution-to-population-ethics/

Comment by stijnbruers on Don't sweat diet? · 2017-01-07T01:08:30.246Z · score: 0 (0 votes) · EA · GW

I think there is a mistake in the calculation. The price (in THL donations) of veganism = 1/45 × 71.1/3.4, with the 3.4 measured in lives (instead of years) per dollar, = $0.46/life. Assuming most of those lives are chickens, who live on average 1/10 of a year (5-6 weeks), we get about $5 per vegan-year. This estimate is in line with ACE (https://animalcharityevaluators.org/research/donation-impact/) and Counting Animals (http://www.countinganimals.com/how-many-animals-does-a-vegetarian-save/): 400 lives per vegan-year divided by 76,000 lives per $1,000 ≈ $5 per vegan-year. The latter is the marginal impact, which is slightly better than the average in the US: $50,000,000 per year in donations to vegan and farm animal organisations divided by 5,000,000 vegans = $10 per vegan-year. The offset price can increase in the future if it becomes more difficult to convert people to veganism (once the low-hanging fruit of meat eaters has already been converted).
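The arithmetic above, written out (numbers taken as given in the comment):

```python
# The corrected offset arithmetic, with the numbers as given in the comment.
price_per_life = 1 / 45 * 71.1 / 3.4        # ~$0.46 per animal life spared
years_per_life = 1 / 10                     # a broiler chicken lives ~5-6 weeks
print(price_per_life / years_per_life)      # ~$4.65, i.e. about $5 per vegan-year

# Cross-checks from the comment:
marginal = 400 / (76_000 / 1_000)           # 400 lives/vegan-year at 76,000 lives/$1,000
average = 50_000_000 / 5_000_000            # $50M/year in donations over 5M vegans
print(marginal, average)                    # ~$5.26 and $10.00 per vegan-year
```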

Comment by stijnbruers on Why the Open Philanthropy Project Should Prioritize Wild Animal Suffering · 2016-09-22T21:25:04.126Z · score: 2 (2 votes) · EA · GW

I recently wrote a few articles on intervention in nature to decrease WAS (https://stijnbruers.wordpress.com/2016/07/20/moral-illusions-and-wild-animal-suffering-neglect/ and https://stijnbruers.wordpress.com/2016/09/12/on-intervention-in-nature-human-arrogance-and-moral-blind-spots/), based on a presentation I gave at the animal rights conference (https://www.youtube.com/watch?v=VtjKP42MkWY&list=PLqPXWQAGKrla8wXF-Axy74rGvHLIlqnvF&index=10).

Comment by stijnbruers on New climate change report from Giving What We Can · 2016-08-11T18:44:29.278Z · score: 0 (0 votes) · EA · GW

Another estimate of DALY/ton CO2 has been made: http://www.leidenuniv.nl/cml/ssp/publications/recipe_characterisation.pdf (Goedkoop M. et al. (2009). ReCiPe 2008. A life cycle impact assessment method which comprises harmonised category indicators at the midpoint and the endpoint level. Report I: Characterisation. Ministry of Housing, Spatial Planning and Environment, the Netherlands.) The result is about 0.0014 DALY/ton CO2 (page 31, table 3.7), which is 10 times higher than the WHO estimate and Hayden's Giving What We Can estimate. What explains this difference?