Posts

Should animal advocates donate now or later? A few considerations and a request for more. 2019-11-13T07:30:50.554Z · score: 19 (7 votes)
MichaelStJules's Shortform 2019-10-24T06:08:48.038Z · score: 5 (2 votes)
Conditional interests, asymmetries and EA priorities 2019-10-21T06:13:04.041Z · score: 13 (14 votes)
What are the best arguments for an exclusively hedonistic view of value? 2019-10-19T04:11:23.702Z · score: 7 (4 votes)
Defending the Procreation Asymmetry with Conditional Interests 2019-10-13T18:49:15.586Z · score: 18 (14 votes)
Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests 2019-07-04T23:56:44.330Z · score: 9 (9 votes)

Comments

Comment by michaelstjules on We're Rethink Priorities. AMA. · 2019-12-14T01:06:03.212Z · score: 1 (1 votes) · EA · GW

Ah, my mistake. This is an interesting consideration.

Comment by michaelstjules on We're Rethink Priorities. AMA. · 2019-12-14T00:33:06.778Z · score: 4 (4 votes) · EA · GW

I think formal academic peer-review by experts in the relevant fields could potentially improve the accuracy and overall quality of your work (not that I think it's of low quality; I've been pretty impressed overall, but I'm also no expert myself). You might also just be able to reach out to academics for review, if you're not already doing that. There are several academics and researchers at other organizations (e.g. in animal behaviour/cognition, economics, philosophy) who I'd imagine are sympathetic, and could be open to reviewing work.

Comment by michaelstjules on We're Rethink Priorities. AMA. · 2019-12-13T15:01:51.885Z · score: 1 (1 votes) · EA · GW

What are the most significant ways you've changed your mind recently in relation to EA and EA priorities, philosophy and ethics?

Comment by michaelstjules on We're Rethink Priorities. AMA. · 2019-12-13T14:36:55.891Z · score: 11 (6 votes) · EA · GW

How did you first get involved in effective altruism? What are the main factors and events that drove you to it, and what keeps you working on it now?

Comment by michaelstjules on We're Rethink Priorities. AMA. · 2019-12-13T14:35:12.658Z · score: 7 (3 votes) · EA · GW

What are your ethical and metaethical views?

Do you see altruism as an ethical obligation?

Whatever you're comfortable sharing.

Comment by michaelstjules on We're Rethink Priorities. AMA. · 2019-12-13T14:27:25.949Z · score: 3 (2 votes) · EA · GW

Also, behaviour generally, like being more careful while walking outside, or in dealing with insects and spiders in your homes?

Comment by michaelstjules on We're Rethink Priorities. AMA. · 2019-12-13T14:25:53.037Z · score: 7 (3 votes) · EA · GW

Do you see your animal welfare work as mostly focused on the West? Do you have any plans to look at emerging economies and non-Western countries?

Comment by michaelstjules on We're Rethink Priorities. AMA. · 2019-12-13T14:06:52.255Z · score: 2 (2 votes) · EA · GW

Maybe an interesting (but probably too ambitious) ask would be to extend the Preventing Animal Cruelty and Torture Act to farmed animals (and imports) at the state-level or even more locally, or otherwise remove exceptions. I think such a survey could tell us a lot about attitudes towards farmed animals, at least.

Comment by michaelstjules on We're Rethink Priorities. AMA. · 2019-12-13T14:02:50.048Z · score: 7 (3 votes) · EA · GW

What are your plans and hopes for RP for the next 5, 10 years and beyond?

Comment by michaelstjules on We're Rethink Priorities. AMA. · 2019-12-13T03:04:21.748Z · score: 14 (6 votes) · EA · GW

I think their invertebrate sentience research could become a publication, similar to

Sneddon, L. U., Elwood, R. W., Adamo, S. A., & Leach, M. C. (2014). Defining and assessing animal pain. Animal behaviour, 97, 201-212. https://animalstudiesrepository.org/acwp_arte/69/

Comment by michaelstjules on Where are you donating this year and why – in 2019? Open thread for discussion. · 2019-12-13T01:37:40.631Z · score: 10 (5 votes) · EA · GW

This is my first year donating. I welcome feedback.

My general plan is to support animal welfare, specifically intervention and (sub-)cause prioritization research, international movement growth and the current best-looking interventions, filtered through the judgment of full-time researchers/grantmakers.

I donated $7K (Canadian) to the EA Animal Welfare Fund about a month ago. I think they're the best-positioned to identify otherwise neglected animal welfare funding opportunities when evidence is relatively scarce, given members working at different animal protection orgs, and Lewis Bollard's years of experience in grantmaking.

I'm looking at donating another $30-40K (Canadian) to be split primarily between the following groups, roughly in decreasing order of proportion of funding, although I haven't decided on the exact amounts:

1. ACE's Recommended Charity Fund. I think the EAA community's research supporting corporate campaigns and ACE's research specifically have improved considerably in the past while, so I'm pretty confident in their choices working on these. I'm also happy to see expansion to countries previously neglected by EAA funding and support for further research.

2. Rethink Priorities. I've been consistently impressed by their research for animals so far, and I'm keen to see further research, especially on ballot initiatives, for which I'm pretty optimistic. Also, it looks like they've got a lot of room for funding, and it would be pretty cool if they hired Peter Hurford full-time. Btw, they have an AMA going on now.

3. Charity Entrepreneurship. Also very impressed by their research for animals so far, both exploratory and in-depth, including a cluster-thinking approach. I hope to see more of it, and any new animal welfare charities they might start.

4. Possibly the EA Animal Welfare Fund again.

5. RC Forward. Both for my own donations and as a public good for EAs in Canada, since they allow Canadians to get tax credits for donations to EA charities. More here and here.


It's worth noting that Rethink Priorities and Charity Entrepreneurship have each received funding from Open Philanthropy Project (Farm Animal Welfare) and EA Funds recently; RP from the Animal Welfare Fund and CE most recently from the Meta Fund (and previously from the Animal Welfare Fund).


I have a few other research orgs in mind, and I might also donate to Sentience Politics, for their campaign to support the referendum to end factory farming in Switzerland (some discussion here on Facebook). I'm also wondering about Veganuary, but I'm not in a good position to judge their counterfactual impact from the numbers they present.

Comment by michaelstjules on We're Rethink Priorities. AMA. · 2019-12-13T00:25:20.275Z · score: 4 (3 votes) · EA · GW

I wonder if a diet consisting primarily of farmed bivalves would be the most ethical, ignoring cost. I still think they're very unlikely to be sentient, and much less likely to be sentient than the insects routinely killed with pesticides.

This could depend substantially on the effects on wild animals and your beliefs about the welfare of wild animals. What are the effects of agriculture on the populations of insects, for example, and how would insects live and die otherwise?

Comment by michaelstjules on We're Rethink Priorities. AMA. · 2019-12-13T00:01:18.534Z · score: 1 (1 votes) · EA · GW

Minor correction: Kim and Jason each have a PhD. Daniela is also working on one.

Comment by michaelstjules on We're Rethink Priorities. AMA. · 2019-12-12T23:52:55.717Z · score: 10 (5 votes) · EA · GW

Can you tell us a bit more about which animal welfare ballot initiatives and other policies are on your radar to research?

It's worth mentioning that Switzerland has a referendum coming up to ban factory farming, with the campaign headed by Sentience Politics. Basically, it will require all farmed animals (and animal product imports) to be farmed according to organic standards. Some discussion here; based on the polls described here, it looks pretty promising. (The Swiss did reject a dehorning ban earlier, though.)

It seems like it's worth looking into bigger changes like this. EDIT: just came across this on my Facebook feed:

A new poll, commissioned by Johns Hopkins University’s Center for a Livable Future and released Tuesday, surveys voter sentiment on banning CAFO construction for the first time. Polling group Greenberg Quinlan Rosner surveyed 1,000 registered Democrats, Republicans, and Independents from across the nation and found that 43 percent of respondents favored a national ban on the creation of new factory farms, as opposed to 38 percent of respondents who oppose a ban. But in Iowa, where more than 400 additional registered voters were asked the same questions, sentiment flipped: more people opposed a ban than favored one.
(...)
Interestingly, the divisions intensified once poll respondents were given additional information from both sides of the issue. They were read factual statements in support of a moratorium, which cited concerns about health and environmental impact, and statements arguing for preservation of the status quo, which emphasized the benefits of a plentiful supply of cheap meat. After reading those statements, nationwide support for a ban rose from 43 percent to 49 percent, but opposition rose almost as much, from 38 percent to 42 percent.

Surprisingly, no mention of animal welfare on that page.

Also, will Charity Entrepreneurship's research influence which interventions you look into? They've done both a lot of exploratory and in-depth research. What orgs have research that you expect to most influence your research direction?

Comment by michaelstjules on We're Rethink Priorities. AMA. · 2019-12-12T23:16:04.012Z · score: 12 (10 votes) · EA · GW

What's your funding gap to hire Peter full-time?

Let's make this happen!

Comment by michaelstjules on We're Rethink Priorities. AMA. · 2019-12-12T23:15:12.947Z · score: 14 (7 votes) · EA · GW

(I have no formal ties to RP.)

FWIW,

Currently, our funding gap through the end of 2021 is $1.79M overall. This consists of gaps of $1.27M for animal research, $337k for longtermism research, and $177k for meta / other research respectively. We do accept and track restricted funds by cause area if that is of interest.

Even if you gave unrestricted funding, most of it (~70%, if they allocate funding proportionally to the gaps above) would likely end up in research for animals anyway. So, with unrestricted funding, if you think donating to the other orgs is at most ~70% as cost-effective as RP's animal research, it's still worth donating to RP, but I also don't think a factor of 0.7 should really sway your donations, given how much uncertainty we should have anyway.
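To check where that ~70% comes from (it's just the ratio of the stated gaps):

```python
# Share of RP's stated funding gap that is for animal research, i.e. roughly where
# a proportionally allocated unrestricted dollar would go (figures from the quote above).
animal_gap = 1_270_000
total_gap = 1_790_000
print(f"{animal_gap / total_gap:.0%}")  # ~71%
```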

Comment by michaelstjules on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-12-08T17:55:12.190Z · score: 1 (1 votes) · EA · GW

I think you might have replied to the wrong comment.

Comment by michaelstjules on What is EA's story? · 2019-11-30T23:43:49.728Z · score: 5 (3 votes) · EA · GW

Reading about Julia and Jeff was part of the reason I got into EA in the first place (I don't remember if those were the particular articles). It wasn't just the fact that they were donating a substantial fraction of their income, but also that they saw it as an obligation. At the time, I was going through an existential crisis; I felt guilt and shame for living selfishly while others suffered (and these feelings are still important motivators for me). EA was the solution I found, and I decided to try earning to give.

Comment by michaelstjules on What is EA's story? · 2019-11-30T22:02:09.703Z · score: 7 (5 votes) · EA · GW

Some accounts:

http://bostonreview.net/books-ideas-mccoy-family-center-ethics-society-stanford-university/lives-moral-saints

https://slatestarcodex.com/2017/08/16/fear-and-loathing-at-effective-altruism-global-2017/

https://www.theguardian.com/world/2015/sep/22/extreme-altruism-should-you-care-for-strangers-as-much-as-family

https://www.theguardian.com/money/2019/nov/09/i-give-away-half-to-three-quarters-of-my-income-every-year

https://forum.effectivealtruism.org/posts/FA794RppcqrNcEgTC/why-are-you-here-an-origin-stories-thread

https://forum.effectivealtruism.org/posts/69wvx9vBmfpaovWTs/a-taxonomy-of-ea-origin-stories

Comment by michaelstjules on Opinion: Estimating Invertebrate Sentience · 2019-11-15T05:27:59.451Z · score: 2 (2 votes) · EA · GW

I'm a little surprised that the estimates for chickens and cows aren't higher. Personally, I find evidence of complex and varied emotions to be very compelling, especially social emotions, e.g. play behaviour, emotional empathy/contagion, affection and social attachments to particular individuals (companionship), helping behaviour (altruism), parenting generally, separation anxiety and perhaps even something like grief. Also, possible emotional reactions of cattle to learning. :P

I would be comfortable using the word 'love' to describe the attachments chickens and cows often have towards others, although it may of course be quite different from an adult human's experience of love; perhaps it's not that different from an infant's or toddler's, though. It's hard for me to imagine an individual capable of love like this not being sentient.

I suppose I also give weight to anecdotes and videos of individual animals, though.

Comment by michaelstjules on Opinion: Estimating Invertebrate Sentience · 2019-11-15T02:19:11.099Z · score: 2 (2 votes) · EA · GW

How should we interpret ranges of probabilities here?

We can talk about confidence (credence) intervals for frequencies for the population we're sampling from for polls and surveys. For species (or individuals) with characteristics of interest (possibly a feature or its absence), we could describe our probability distribution over the fraction of them that are sentient.

Another approach might be to try to quantify the sensitivity to new information, e.g. if we also observed another given capacity (or its absence), how much would our estimate change? If we model the probability that a species (or individual) will have a set of characteristics of interest given a fixed set of observed characteristics, we could compute a credence interval for our posterior probability of sentience, taken over the distribution of those characteristics conditional on the observed characteristics.

Are either of these what some of you had in mind, roughly (even if you didn't actually calculate anything)? Or something else?
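Here's a toy sketch of the second approach with a single further capacity and made-up numbers, just to illustrate what I mean:

```python
# Sensitivity of a sentience estimate to one more (currently unobserved) capacity.
# All probabilities are hypothetical placeholders, not estimates.

p_capacity = 0.4  # P(species has the further capacity | characteristics observed so far)

# P(sentient | observed characteristics, capacity present/absent)
p_sentient_given_capacity = {True: 0.6, False: 0.2}

# Current estimate: marginalize over the unobserved capacity
p_sentient = (p_capacity * p_sentient_given_capacity[True]
              + (1 - p_capacity) * p_sentient_given_capacity[False])

# The range the estimate could move to after actually observing the capacity
low, high = p_sentient_given_capacity[False], p_sentient_given_capacity[True]
print(f"point estimate: {p_sentient:.2f}, range after one more observation: [{low}, {high}]")
```

With more unobserved characteristics, you'd get a whole distribution over possible posteriors instead of just two endpoints, and you could report an interval from that.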

Comment by michaelstjules on Should animal advocates donate now or later? A few considerations and a request for more. · 2019-11-14T06:25:04.850Z · score: 2 (2 votes) · EA · GW

I suppose many of the reasons I outline might be special cases of more generic reasons (especially for investing or donating to research), but it is worth pointing out what they look like in animal protection, since it helps us weigh them more accurately. Some generic reasons might not apply to specific causes at all, and others might apply especially strongly to certain causes.

I think 1-3 and 5 under giving now towards interventions are pretty specific to animal advocacy, although 5 applies to moral advocacy generally. I guess you could say 1 and 6 are special cases of the problem being solved eventually regardless, and 4 could be a consideration whenever there are incremental improvements.

I also just added a few more reasons which are fairly specific to animal protection in favour of giving later.

Comment by michaelstjules on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-11-13T18:53:20.827Z · score: 16 (9 votes) · EA · GW

What criteria were used to decide which orgs/individuals should be invited? Should we consider leaders at EA-recommended orgs or orgs doing cost-effective work in EA cause areas, but not specifically EA-aligned (e.g. Gates Foundation?), too? (This was a concern raised about representativeness of the EA handbook. https://forum.effectivealtruism.org/posts/MQWAsdm8MSzYNkD9X/announcing-the-effective-altruism-handbook-2nd-edition#KR2uKZqSmno7ANTQJ)

Because of this, I don't think it really makes sense to aggregate data over all cause areas. The inclusion criteria are likely to draw pretty arbitrary lines, and respondents will obviously tend to want to see more resources go to the causes they're working on, and will differ in other ways significantly by cause area. If the proportions of people working in a given cause don't match the proportion of EA funding people would like to see go to that cause, that is interesting, though we still can't take much away from it.

It seems weird to me that DeepMind and the Good Food Institute are on this list, but not, say, the Against Malaria Foundation, GiveDirectly, Giving What We Can, J-PAL, IPA, or the Humane League.

As stated, some orgs are small and so were not named, but still responded. Maybe a breakdown by the cause area for all the respondents would be more useful with the data you have already?

Comment by michaelstjules on Be the Match: a volunteer list for bone marrow donation · 2019-11-06T03:47:39.780Z · score: 4 (3 votes) · EA · GW

The post was in the negative for a bit, I think the day that it was posted or maybe the next day.

Comment by michaelstjules on Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term' · 2019-11-06T03:40:16.170Z · score: 5 (2 votes) · EA · GW

The Supervenience Theorem is quite strong and interesting, but perhaps too strong for many with egalitarian or prioritarian intuitions. Indeed, this is discussed with respect to the conditions for the theorem. In its proof, it's shown that we should treat any problem like the original position behind the veil of ignorance (the one-person scenario; for n individuals, we treat ourselves as having probability 1/n of being any of those individuals, and we consider only our own interests in that case), so that every interpersonal tradeoff is the same as a personal tradeoff. This is something that I'm personally quite skeptical of. In fact, if each individual ought to maximize their own expected utility in a way that is transitive and independent of irrelevant alternatives when only their own interests are at stake, then fixed-population Expected Totalism follows (for a fixed population, we should maximize the unweighted total expected utility). The Supervenience Theorem is something like a generalization of Harsanyi's Utilitarian Theorem this way. EDIT: Ah, it seems like this link is made indirectly through this paper, which is cited.
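Spelled out, the veil-of-ignorance step is roughly this: if you have probability 1/n of being each of the n individuals and only your own interests count, then your expected utility is

$$\mathbb{E}[u] \;=\; \sum_{i=1}^{n} \frac{1}{n}\,\mathbb{E}[u_i] \;=\; \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[u_i],$$

so maximizing your own expected utility behind the veil is just maximizing the unweighted total expected utility for the fixed population.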

That being said, the theorem could also be seen as an argument for Expected Totalism, if each of its conditions can be defended, or at least to whoever leans towards accepting them.

If we've already given up the independence of irrelevant alternatives (whether A or B is better should not depend on what other outcomes are available), it doesn't seem like much of an extra step to give up separability (whether A or B is better should only depend on what's not common to A and B) or Scale Invariance, which is implied by separability. There are different ways to care about the distribution of welfares, and prioritarians and egalitarians might be happy to reject Scale Invariance this way.

Prioritarians and egalitarians can also care about ex ante priority/equality, e.g. everyone deserves a fair chance ahead of time, and this would be at odds with Statewise Supervenience. For example, given H=heads and T=tails, each with probability 0.5, they might prefer the second of these two options, since it looks fairer to Adam ahead of time, as he actually gets a chance at a better life. Statewise Supervenience says these should be equivalent:

Option 1: whether H or T, Adam gets the worse life and Eve gets the better one.

Option 2: if H, Adam gets the better life and Eve the worse one; if T, Adam gets the worse life and Eve the better one.


If someone cares about ex post equality, e.g. the final outcome should be fair to everyone in it, they might reject Personwise Supervenience, because personwise-equivalent scenarios can be unfair in their final outcomes. For example:

Option 1: if H, Adam gets the worse life and Eve the better one; if T, Adam gets the better life and Eve the worse one.

Option 2: if H, both get the better life; if T, both get the worse life.

The first option here looks unfair to Adam if H happens (ex post), and unfair to Eve if T happens (ex post), but there's no such unfairness in the second option. Personwise Supervenience says we should be indifferent, because from Adam's point of view, ignoring Eve, there's no difference between these two choices, and similarly from Eve's point of view. Note that maximin, which is a limit of prioritarian views, is ruled out.

There are, of course, objections to giving these up. Giving up Personwise Supervenience seems paternalistic, or to override individual interests if we think individuals ought to maximize their own expected utilities. Giving up Statewise Supervenience also has its problems, as discussed in the paper. See also "Decide As You Would With Full Information! An Argument Against Ex Ante Pareto" by Marc Fleurbaey and Alex Voorhoeve, as well as one of my posts which fleshes out ex ante prioritarianism (ignoring the problem of personal identity) and the discussion there.

Comment by michaelstjules on Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term' · 2019-11-06T01:34:55.444Z · score: 13 (4 votes) · EA · GW

There's also a video in which the author presents this work. Here's the direct link.

Comment by michaelstjules on Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term' · 2019-11-06T01:31:48.693Z · score: 1 (1 votes) · EA · GW

Regarding the definition of the Asymmetry,

2. If the additional people would certainly have good lives, it is permissible but not required to create them

is this second part usually stated so strongly, even in a straight choice between two options? Normally I only see "not required", not also "permissible", but then again, I don't normally see it as a comparison of two choices only. This rules out average utilitarianism, critical-level utilitarianism, negative utilitarianism, maximin and many other theories which may say that it's sometimes bad to create people with overall good lives, all else equal. Actually, basically any value-monistic consequentialist theory which is complete, transitive and satisfies the independence of irrelevant alternatives and non-antiegalitarianism, and avoids the repugnant conclusion is ruled out.

Comment by michaelstjules on Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term' · 2019-11-06T01:31:19.206Z · score: 1 (1 votes) · EA · GW

Interesting!

What if we redefine rationality to be relative to choice sets? We might not have to depart too far from vNM-rationality this way.

The axioms of vNM-rationality are justified by Dutch books/money pumps and stochastic dominance, but the latter can be weakened, too, since many outcomes are indeed irrelevant, so there's no need to compare to them all. For example, there's no Dutch book or money pump that only involves changing the probabilities for the size of the universe, and there isn't one that only involves changing the probabilities for logical statements in standard mathematics (ZFC); it doesn't make sense to ask me to pay you to change the probability that the universe is finite. We don't need to consider such lotteries. So, if we can generalize stochastic dominance to be relative to a set of possible choices, then we just need to make sure we never choose an option which is stochastically dominated by another, relative to that choice set. That would be our new definition of rationality.

Here's a first attempt:

Let A be a set of choices or probabilistic lotteries over outcomes (random variables), and let O be the set of all possible outcomes which have nonzero probability in some choice from A (or something more general to accommodate general probability measures). Then for X, Y in A, we say X stochastically dominates Y with respect to A if:

P(X ⪰ o) ≥ P(Y ⪰ o) for all o in O, and the inequality is strict for some o in O.

This can lift comparisons using ⪰, a relation between elements of O, to random variables over the elements of O. ⪰ need not even be complete over O or transitive, but stochastic dominance thus defined will be transitive (perhaps at the cost of losing some comparisons). ⪰ could also actually be specific to A, not just to O.

We could play around with the definition of O here.
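As a rough sketch of how this could be operationalized (a toy implementation of mine, with outcomes and a relation chosen purely for illustration):

```python
# Stochastic dominance relative to a choice set: lotteries are dicts mapping outcomes
# to probabilities, and `better(a, b)` is a (possibly incomplete, possibly intransitive)
# relation saying outcome a is at least as good as outcome b.
from itertools import chain

def dominates(X, Y, choices, better):
    """True if X stochastically dominates Y w.r.t. the outcomes appearing in `choices`."""
    outcomes = set(chain.from_iterable(choices))  # O: outcomes with nonzero probability in some choice
    def p_at_least(lottery, o):
        return sum(p for a, p in lottery.items() if better(a, o))
    weak = all(p_at_least(X, o) >= p_at_least(Y, o) for o in outcomes)
    strict = any(p_at_least(X, o) > p_at_least(Y, o) for o in outcomes)
    return weak and strict

# Hypothetical usage with numeric outcomes, ordered by >=:
X = {10: 0.5, 0: 0.5}
Y = {5: 0.5, 0: 0.5}
print(dominates(X, Y, [X, Y], lambda a, b: a >= b))  # True
```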

When we consider choices to make now, we need to model the future and consider what new choices we will have to make, and this is how we would avoid Dutch books and money pumps. Perhaps this would be better done in terms of decision policies rather than a single decision at a time, though.

(This approach is based in part on "Exceeding Expectations: Stochastic Dominance as a General Decision Theory" by Christian Tarsney, which also helps to deal with Pascal's wager and Pascal's mugging.)

Comment by michaelstjules on Does 80,000 Hours focus too much on AI risk? · 2019-11-03T16:46:23.134Z · score: 3 (5 votes) · EA · GW
Saying terrifying things can be costly, both socially and reputationally (and there's also the possible side effect of, well, making people terrified).

Is this the case in the AI safety community? If the reasoning for their views isn't obviously bad, I would guess that it's "cool" to say unpopular or scary but not unacceptable things, because the rationality community has been built in part on this.

Comment by michaelstjules on The illusion of science in comparative cognition · 2019-11-03T08:10:11.149Z · score: 2 (2 votes) · EA · GW

I'm not sure how important Krogh's Principle is in animal cognition research of the kind we're interested in; my impression is that research concentrates on a relatively small set of already well-studied animals, like fruit flies, bees, mice, rats, cats, dogs, farmed animals and the stereotypically smart ones (corvids, parrots, elephants, cetaceans, primates), and the animals EAs are interested in fall into these groups. When I want to know about chicken cognition, I just look for studies on chickens. It's worth mentioning that Rethink Priorities stuck to relatively narrow taxons in their report.

I do agree that this research is likely to be biased overall to produce more positive results than could be reproduced or generalized. However, I also think that the priors are already very skeptical (e.g. Morgan's canon over Occam's razor and despite common descent), so scientists are also likely to attribute fewer and less complex mental states to animals than I think best explains the evidence, and it's pretty clear that we've systematically underestimated their capacities, so it's likely the current state of research underestimates them overall, too.

Or, rather, researchers aren't using Bayesian reasoning in the first place, so they aren't really using priors at all in interpreting evidence; I think Morgan's canon is more like a p-value threshold than a prior.

Of course, we can just use our own priors in interpreting the evidence, and in doing so, we should take into account biases towards positive results in research.

Comment by michaelstjules on Does 80,000 Hours focus too much on AI risk? · 2019-11-03T06:28:51.008Z · score: 1 (1 votes) · EA · GW

Good points.

Also, I think that at least some researchers are less likely to discuss their estimates publicly if they're leaning towards shorter timelines and a discontinuous takeoff, which subjects the public discourse on the topic to a selection bias.

Why do you think this?


EDIT: Ah, Matthew got to it first.

Comment by michaelstjules on Does 80,000 Hours focus too much on AI risk? · 2019-11-03T06:23:59.157Z · score: 6 (3 votes) · EA · GW

I think another large part of the focus comes from their views on population ethics. For example, in the article, you can "save" people by ensuring they're born in the first place:

Let’s explore some hypothetical numbers to illustrate the general concept. If there’s a 5% chance that civilisation lasts for ten million years, then in expectation, there are 5000 future generations. If thousands of people making a concerted effort could, with a 55% probability, reduce the risk of premature extinction by 1 percentage point, then these efforts would in expectation save 28 future generations. If each generation contains ten billion people, that would be 280 billion lives saved. If there’s a chance civilisation lasts longer than ten million years, or that there are more than ten billion people in each future generation, then the argument is strengthened even further.

(bold mine)

I discuss this further in my section "Implications for EA priorities" in this post of mine. I recommend trying this tool of theirs.

Comment by michaelstjules on Probability estimate for wild animal welfare prioritization · 2019-11-02T19:27:30.778Z · score: 2 (2 votes) · EA · GW

When you say "we do not invest in _ research", do you mean EAs specifically, or all humans? It's worth noting some people not associated with EA will probably do research in each area regardless.

The probability that if we do not invest in X-risk reduction research (but we invest in wild animal suffering reduction research instead), humans will go extinct and animals will not go extinct, and if we do invest in that X-risk research, humans will not go extinct, is p.

I'm having trouble understanding this probability. I don't think it can be interpreted as a single event (even conditionally), unless you're thinking of probabilities over probabilities or probabilities over statements, not actual events that can happen at specific times and places (or over intervals of time, regions in space).

Letting

H = humans go extinct,

A = non-human animals go extinct,

X = we invest in X-risk reduction research (or work, in general),

W = we invest in WAS research (or work, in general),

then the probability of "if we do not invest in X-risk reduction research (but we invest in wild animal suffering reduction research instead), humans will go extinct and animals will not go extinct" looks like

P(H and not A | not X, W),

while the probability of "if we do invest in that X-risk research, humans will not go extinct" looks like

P(not H | X).

The events being conditioned on between these two probabilities are not compatible, since the first has "not X", while the second has "X". So, I'm not sure taking their product would be meaningful either. I think it would make more sense to multiply these two probabilities by the expected value of their corresponding events and just compare them. In general, you would calculate:

E[V | x, w],

where V is the value, x is now the level of investment in X-risk work, w is now the level of investment in WAS work and E[V | x, w] is the aggregate expected value. Then you would compare this for different values of x and w, i.e. different levels of investment (or compare the partial derivatives with respect to each of x and w, at a given level of x and w; this would tell you the marginal expected value of extra resources going to each of X-risk work and WAS work).

With 1_H being 1 if humans go extinct and 0 otherwise (the indicator function), 1_A being 1 if non-human animals go extinct and 0 otherwise, and V depending on them, that expected value could further be broken down to get

E[V | x, w] = Σ_{h, a in {0, 1}} P(1_H = h, 1_A = a | x, w) E[V | 1_H = h, 1_A = a, x, w].
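To make the comparison concrete, here's a toy version with made-up probabilities and values (not estimates of anything):

```python
# Compare two investment allocations by expected value, decomposed over the four
# (humans extinct?, animals extinct?) outcomes. All numbers are hypothetical.

def expected_value(p_outcomes, values):
    """p_outcomes and values are dicts keyed by (humans_extinct, animals_extinct)."""
    return sum(p_outcomes[k] * values[k] for k in p_outcomes)

# P(outcome | allocation), for two allocations of the same resources
p_more_xrisk = {(True, True): 0.02, (True, False): 0.03, (False, True): 0.01, (False, False): 0.94}
p_more_was   = {(True, True): 0.03, (True, False): 0.05, (False, True): 0.01, (False, False): 0.91}

# E[V | outcome, allocation]; WAS work might raise value conditional on survival, say
v_more_xrisk = {(True, True): 0, (True, False): 20, (False, True): 60, (False, False): 100}
v_more_was   = {(True, True): 0, (True, False): 30, (False, True): 70, (False, False): 110}

print(expected_value(p_more_xrisk, v_more_xrisk), expected_value(p_more_was, v_more_was))
```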

You specify further that

This probability is the product of the probability that there will be a potential extinction event (e.g. 10%), the probability that, given such an event, the extra research in X-risk reduction (with the resources that would otherwise have gone to wild animal suffering research) to avoid that extinction event is both necessary and sufficient to avoid human extinction (e.g. 1%) and the probability that animals will survive the extinction event even if humans do not (e.g. 1%).

But you're holding the probability of a potential extinction event fixed, as if X-risk reduction research has no effect on it and only affects the probability of actual human extinction given that event; X-risk research aims to address both.

The probability that the extra X-risk research is "both necessary and sufficient" for avoiding human extinction is also a bit difficult to think about. One way might be to take the product of the probability that humans would go extinct without the extra research and the probability that they would not go extinct with it, but I think this would be difficult to work with, too.


Comment by michaelstjules on Against value drift · 2019-10-31T03:18:10.207Z · score: 2 (2 votes) · EA · GW

I think it's plausible that changing incentives and "better" options coming along might explain a lot of the drift. However, rather than "Power. Survival. Prestige. Odds of procreation.", I think they'll be less selfish things like family, or just things they end up finding more interesting; maybe they'll just get bored with EA.

However, I think you underestimate how deeply motivated many people are to help others for their own sake, out of a sense of duty or compassion. Sure, this probably isn't most people, and maybe not even most EAs, although I wouldn't be surprised if it were.

https://slatestarcodex.com/2017/08/16/fear-and-loathing-at-effective-altruism-global-2017/

https://www.theguardian.com/world/2015/sep/22/extreme-altruism-should-you-care-for-strangers-as-much-as-family

http://bostonreview.net/books-ideas-mccoy-family-center-ethics-society-stanford-university/lives-moral-saints

https://forum.effectivealtruism.org/posts/4gKqaGdDLtxm6NKnZ/figuring-good-out-january

https://forum.effectivealtruism.org/posts/FA794RppcqrNcEgTC/why-are-you-here-an-origin-stories-thread

Comment by michaelstjules on Attempt at understanding the role of moral philosophy in moral progress · 2019-10-28T18:09:08.553Z · score: 3 (3 votes) · EA · GW

Have the concept of speciesism and the argument from species overlap/marginal cases been important for animal protection? I'd attribute them largely to philosophers.

I think we should also look at the influence the EA community has and where its ideas come from. What would EA look like without a given idea from philosophy?

Comment by michaelstjules on Be the Match: a volunteer list for bone marrow donation · 2019-10-28T01:59:37.090Z · score: 1 (1 votes) · EA · GW

I'm not sure these costs (the $170,000+) should be included, or if they are, they may need to be weighted down significantly. We're not looking at the cost-effectiveness of implementing such a program; we're looking at the cost-effectiveness of an extra person signing up and possibly an extra donation (with small probability). If you want to include the $170,000+, you should ask what else would have been done with that money, because it's not ours to use.

The organization gets its funding from the government and donations.

Comment by michaelstjules on Be the Match: a volunteer list for bone marrow donation · 2019-10-27T18:45:11.223Z · score: 1 (1 votes) · EA · GW

If each of those (on average) 800 matches were only 1% likely to donate if chosen, then the probability that none of them would donate would be very small, ~0.03%. The counterfactual expected impact would be much lower than just 1/800; it would be scaled down by a factor of 0.0003.

However, if each were only 0.1% likely to donate if chosen, then you're close to a 50% probability of saving an extra life (or helping an extra person) than would have been saved (helped) otherwise, which would actually be very cost-effective.
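For a quick check of those two figures, assuming each of the other matches decides independently:

```python
# Probability that none of the ~800 other matches would donate, for two per-person
# donation probabilities (illustrative, following the figures above).
n_matches = 800
for p_donate in (0.01, 0.001):
    p_no_one_else = (1 - p_donate) ** n_matches
    print(f"p = {p_donate}: P(no one else would donate) ≈ {p_no_one_else:.4f}")
# ≈ 0.0003 at 1% each, and ≈ 0.45 at 0.1% each.
```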

Comment by michaelstjules on Be the Match: a volunteer list for bone marrow donation · 2019-10-27T18:31:14.509Z · score: 1 (1 votes) · EA · GW

If you're chosen to donate, someone else is not, so they save time, although overall, more time is used if you sign up, because the organization will have to deal with an extra potential donor. Roughly one more person will be signed up in expectation if you sign up, unless you think you have an important effect on others. You're not wasting the time of anyone else who signed up if you sign up, since they were already going to pay that cost.

If you're not chosen to donate, you'll waste your own time, and the time of the organization.

Overall, though, if you're EA-minded, your time is probably worth much more than the average person who signed up, and I think this should dominate time considerations.

Comment by michaelstjules on Be the Match: a volunteer list for bone marrow donation · 2019-10-27T18:23:31.954Z · score: 14 (7 votes) · EA · GW

I don't get all the downvotes. Even if this turns out to not be very cost-effective (although maybe the post should be edited to highlight the initially missed considerations), it is helpful for us to see why, as something to learn from.

It also makes the forum seem unfriendly to newcomers.

Comment by michaelstjules on Be the Match: a volunteer list for bone marrow donation · 2019-10-27T17:01:59.571Z · score: 1 (1 votes) · EA · GW

Ah, did you mean the time the person who would have donated in your place saves if you donate in their place? That's only $25-$50 per hour for 20-30 hours (+ recovery) for one person, not 799. The other 798 won't save any time, since nothing changes for them.

Comment by michaelstjules on Be the Match: a volunteer list for bone marrow donation · 2019-10-27T15:58:22.601Z · score: 1 (1 votes) · EA · GW

Unless we think they'd be doing something particularly valuable with that lost time, I probably wouldn't. That these people are signing up at all is a sign that they do care about others significantly, but it's not a sign that they care about cost-effectiveness.

EDIT: Also, you have approximately 0 effect on whether the others choose to sign up, so you have no control over these opportunity costs.

Comment by michaelstjules on Be the Match: a volunteer list for bone marrow donation · 2019-10-27T06:16:26.159Z · score: 4 (4 votes) · EA · GW

Yes, $1500 would be pretty competitive with GiveWell's top charities, and possibly better.

A few comments:

  • Maybe some workplaces would consider giving you extra paid time off to do this. If not, you can use sick/personal days. If you're earning to give or otherwise don't do direct work, then the lost time might not be much of a loss at all.
  • Does the time estimate include recovery time in bed?
  • The estimate doesn't really consider the counterfactual. You'd want to know the probability that the person you would have helped (or another person in their place) would have died had you not donated. That probability is not 1, because they might have been saved by someone else instead, and you would want to adjust the cost-effectiveness based on this probability. Maybe another way of putting it is this: what percentage of people who sign up for this actually end up donating? This won't give you the right probability I mentioned just before, but it will tell you something about it: if almost everyone who signs up ends up donating, then the probability that an extra person gets a donation if you donate is also close to 1.
Comment by michaelstjules on MichaelStJules's Shortform · 2019-10-27T03:47:41.826Z · score: 2 (2 votes) · EA · GW

If welfare is real-valued (specifically from an interval), then Maximin (maximize the welfare of the worst off individual) and theories which assign negative value to the addition of individuals with non-maximal welfare satisfy the properties above.

Furthermore, if the following two properties also hold:

1. Extended Continuity, a modest definition of continuity for a theory comparing populations with real-valued welfares which must be satisfied by any order representable by a real-valued function that is continuous with respect to the welfares of the individuals in each population, and

2. Strong Pareto (according to one equivalent definition, under transitivity and the independence of irrelevant alternatives): if two outcomes with the same individuals in their populations differ only by the welfare of one individual, then the outcome in which that individual is better off is strictly better than the other,

then the theory must assign negative value to the addition of individuals with non-maximal welfare (and no positive value to the addition of individuals with maximal welfare) as long as any individual in the initial population has non-maximal welfare. In other words, the theory must be antinatalist in principle, although not necessarily in practice, since all else is rarely equal.


Proof: Suppose X is any population with an individual i who has some non-maximal welfare u_i, and consider adding an individual j who would also have some non-maximal welfare u_j. Denote, for any ε > 0 small enough,

X_ε : the population X, but where individual i has welfare u_i + ε (which exists for all sufficiently small ε > 0, since u_i is non-maximal, and welfare comes from an interval).

Also denote

Y : the population containing only j, with non-maximal welfare u_j, and

Y' : the population containing only j, but with some welfare u' > u_j (u_j is non-maximal, so there must be some greater welfare level; in particular, u' can be the maximal welfare).

Then

X_ε ≻ X ∪ Y' ≻ X ∪ Y,

where the first inequality follows from the hypothesis that it's better to improve the welfare of an existing individual than to add any others, and the second inequality follows from Strong Pareto, because the only difference is j's welfare.

Then, by Extended Continuity and the first inequality for all (sufficiently small) ε > 0, we can take the limit (infimum) of X_ε as ε → 0 to get

X ⪰ X ∪ Y',

so it's no better to add j even if they would have maximal welfare, and by transitivity (and the independence of irrelevant alternatives),

X ≻ X ∪ Y,

so it's strictly worse to add j with non-maximal welfare. This completes the proof.

Comment by michaelstjules on MichaelStJules's Shortform · 2019-10-26T15:01:08.363Z · score: 2 (2 votes) · EA · GW

I also think this argument isn't specific to preferences, but could be extended to any interests, values or normative standards that are necessarily held by individuals (or other objects), including basically everything people value (see here for a non-exhaustive list). See Johann Frick’s paper and thesis which defend the procreation asymmetry, and my other post here.

Comment by michaelstjules on MichaelStJules's Shortform · 2019-10-25T04:19:43.867Z · score: 3 (3 votes) · EA · GW

If we think

1. it's always better to improve the welfare of an existing person (or someone who would exist anyway) than to bring others into existence, all else equal, and

2. two outcomes are (comparable and) equivalent if they have the same distribution of welfare levels (but possibly different identities) (this is often called Anonymity),

then not only would we reject Mere Addition (the claim that adding good lives, even those which are barely worth living but still worth living, is never bad), but the following would be true:

Given any two nonempty populations X and Y, if any individual in Y is worse off than any individual in X, then X ∪ Y is worse than X. Specifically, we shouldn't add to a population any individual who isn't at least as well off as the best off in the population, all else equal.

To see why, suppose x, a member of X with welfare u_x, is better off than y, a member of Y with welfare u_y, so u_x > u_y. Then consider

X' : which is X, but has y instead of x, with welfare u_x.

Y' : which is Y, but has x instead of y, with welfare u_y.

Then, X is better than X' ∪ Y', by the first hypothesis, because the latter has all the same individuals from X (and extras from Y) with exactly the same welfare levels, except for x (from X and Y') who is worse off with welfare u_y instead of u_x. So X ≻ X' ∪ Y'.

And X' ∪ Y' is equivalent to X ∪ Y, by the second hypothesis, because the only difference is that we've swapped the welfare levels of x and y. So X' ∪ Y' ∼ X ∪ Y.

So, by transitivity (and the independence of irrelevant alternatives),

X ≻ X ∪ Y.

Comment by michaelstjules on Probability estimate for wild animal welfare prioritization · 2019-10-25T02:53:50.387Z · score: 1 (1 votes) · EA · GW

To someone who already rejects Mere Addition, the Sadistic Conclusion is only a small cost, since if it's bad to add some lives with (seemingly) positive welfare, then it's a small step to accept that adding them can sometimes be worse than adding lives with negative welfare. The Very Sadistic Conclusion can be avoided by being very prioritarian, but not necessarily lexically prioritarian (at the cost of separability/independence without lexicality).

Comment by michaelstjules on Probability estimate for wild animal welfare prioritization · 2019-10-25T02:35:27.340Z · score: 1 (1 votes) · EA · GW

I think the tools to avoid all three of the Repugnant Conclusion, the Very Repugnant Conclusion and the Very Sadistic Conclusion (or the similar conclusion you described here) left available to someone who accepts Mere Addition (or Dominance Addition) are worse than those available to someone who rejects it.

Using lexicality as you describe seems much worse than the way a suffering-focused view would use it, since it means rejecting Non-Elitism, so that you would prioritize the interests of a better off individual over a worse off one in a one-on-one comparison. Some degree of prioritarianism is widely viewed as plausible, and I'd imagine almost no one would find rejecting Non-Elitism acceptable. Rejecting Non-Elitism without using lexicality (like Geometrism) isn't much better, either. You can avoid this by giving up General Non-Extreme Priority (with or without lexicality) instead, and I wouldn't count this against such a view compared to a suffering-focused one.

However, under a total order over populations, to avoid the RC, someone who accepts Mere Addition must reject Non-Antiegalitarianism and Minimal Inequality Aversion (or Egalitarian Dominance, which is even harder to reject). Rejecting them isn't as bad as rejecting Non-Elitism, although I'm not yet aware of any theory which rejects them but accepts Non-Elitism. From this paper:

As mentioned above, Sider's theory violates this principle. Sider rejects his own theory, however, just because it favours unequal distributions of welfare. See Sider (1991, p. 270, fn 10). Ng states that 'Non-Antiegalitarianism is extremely compelling'. See Ng (1989, p. 239, fn 4). Blackorby, Bossert and Donaldson (1997, p. 210), hold that 'weak inequality aversion is satisfied by all ethically attractive . . . principles'. Fehige (1998, p. 12), asks rhetorically '. . . if one world has more utility than the other and distributes it equally, whereas the other doesn't, then how can it fail to be better?'. In personal communication, Parfit suggests that the Non-Anti-Egalitarianism Principle might not be convincing in cases where the quality of the good things in life are much worse in the perfectly equal population. We might assume, however, that the good things in life are of the same quality in the compared populations, but that in the perfectly equal population these things are equally distributed. Cf. the discussion of appeals to non-welfarist values in the last section.

And the general Non-Sadism condition is so close to Mere Addition itself that rejecting it (and accepting the Sadistic Conclusion) is not that great a cost to someone who already rejects Mere Addition, since they've already accepted that adding lives with what might be understood as positive welfare can be bad, and if it is bad, it's a small step to accept that it can sometimes be worse than adding a smaller number of lives of negative welfare.

Comment by michaelstjules on MichaelStJules's Shortform · 2019-10-24T06:08:48.141Z · score: 3 (3 votes) · EA · GW

Fehige defends the asymmetry between preference satisfaction and frustration on rationality grounds. This is my take:

Let's consider a given preference from the point of view of a given outcome after choosing it, in which the preference either exists or does not, by cases:

1. The preference exists:

a. If there's an outcome in which the preference exists and is more satisfied, and all else is equal, it would have been irrational to have chosen this one (over it, and at all).

b. If there's an outcome in which the preference exists and is less satisfied, and all else is equal, it would have been irrational to have chosen the other outcome (over this one, and at all).

c. If there's an outcome in which the preference does not exist, and all else is equal, the preference itself does not tell us if either would have been irrational to have chosen.


2. The preference doesn't exist:

a. If there's an outcome in which the preference exists, regardless of its degree of satisfaction, and all else equal, the preference itself does not tell us if either would have been irrational to have chosen.


So, all else equal besides the existence or degree of satisfaction of the given preference, it's always rational to choose an outcome in which the preference does not exist, but it's irrational to choose an outcome in which the preference exists but is less satisfied than in another outcome.

(I made the same argument here, but this is a cleaner statement.)

Comment by michaelstjules on Conditional interests, asymmetries and EA priorities · 2019-10-24T06:06:02.429Z · score: 1 (1 votes) · EA · GW

Fehige defends the asymmetry between preference satisfaction and frustration on rationality grounds. This is my take:

Let's consider a given preference from the point of view of a given outcome after choosing it, in which the preference either exists or does not:

1. The preference exists:

a. If there's an outcome in which the preference exists and is more satisfied, and all else is equal, it would have been irrational to have chosen this one (over it, and at all).

b. If there's an outcome in which the preference exists and is less satisfied, and all else is equal, it would have been irrational to have chosen the other outcome (over this one, and at all).

c. If there's an outcome in which the preference does not exist, and all else is equal, the preference itself does not tell us if either would be irrational to have chosen.

2. The preference doesn't exist:

a. If there's an outcome in which the preference exists, regardless of its degree of satisfaction, and all else equal, the preference itself does not tell us if either would have been irrational to have chosen.

So, all else equal besides the existence or degree of satisfaction of the given preference, it's always rational to choose an outcome in which the preference does not exist, but it's irrational to choose an outcome in which the preference exists but is less satisfied than in another outcome.

(I made the same argument here, but this is a cleaner statement.)

Comment by michaelstjules on Conditional interests, asymmetries and EA priorities · 2019-10-23T04:09:49.011Z · score: 1 (1 votes) · EA · GW

I don't think the cases between asymmetric and symmetric views will necessarily turn out to be so ... symmetric (:P), since, to start, they each have different requirements to satisfy to earn the names asymmetric and symmetric, and how bad a conclusion will look can depend on whether we're dealing with negative or positive utilities or both.

Dropping continuity looks bad for everyone, in my view, so I won't argue further on that one.

However, what are the most plausible symmetric theories which avoid the Very Repugnant Conclusion and are still continuous? To be symmetric, it should still accept Mere Addition, right? Arrhenius has an impossibility theorem for the VRC. It seems to me the only plausible option is to give up General Non-Extreme Priority. Does such a symmetric theory exist, without also violating Non-Elitism (like Sider's Geometrism does)?

EDIT: I think I've thought of such a social welfare function. Do Geometrism or Moderate Trade-off Theory for the negative utilities (or whatever an asymmetric view might have done to prioritize the worst off), and then add a term for the rest given by a function of the remaining utilities that is continuous, strictly increasing and bounded above.

Similarly, 'value receptacle'-style critiques seem a red herring, as even if they are decisive for preference views over hedonic ones in general, they do not rule between 'only thwarted preferences count' and 'satisfied preferences count too' in particular.

Why are value receptacle objections stronger for preferences vs hedonism than for thwarted only vs satisfied too?

If it's sometimes better to create new individuals than to help existing ones, then we are, at least in part, reduced to receptacles, because creating value by creating individuals instead of helping individuals puts value before individuals. It should matter that you have your preferences satisfied because you matter, but as value receptacles, it seems we're just saying that it matters that there are more satisfied preferences. You might object that I'm saying that it matters that there are fewer satisfied preferences, but this is a consequence, not where I'm starting from; I start by rejecting the treatment of interest holders as value receptacles, through Only Actual Interests (and No Transfer).

Is it good to give someone a new preference just so that it can be satisfied, even at the cost of the preferences they would have had otherwise? How is convincing someone to really want a hotdog and then giving them one doing them a service if they had no desire for one in the first place (and it would satisfy no other interests of theirs)? Is it better for them even in the case where they don't sacrifice other interests? Rather than doing what people want or we think they would want anyway, we would make them want things and do those for them instead. If preference satisfaction always counts in itself, then we're paternalists. If it doesn't always count but sometimes does, then we should look for other reasons, which is exactly what Only Actual Interests claims.

Of course, there's the symmetric question: does preference thwarting (to whatever degree) always count against the existence of those preferences, and if it doesn't, should we look for other reasons, too? I don't find either answer implausible. For example, is a child worse off for having big but unrealistic dreams? I don't think so, necessarily, but we might be able to explain this by referring to their other interests: dreaming big promotes optimism and wellbeing and prevents boredom, preventing the thwarting of more important interests. When we imagine the child dreaming vs not dreaming, we have not made all else equal. Could the same be true of not quite fully satisfied interests? I don't rule out the possibility that the existence and satisfaction of some interests can promote the satisfaction of other interests. But if they don't get anything else out of their unsatisfied preferences, it's not implausible that this would actually be worse as a rule, as long as we have reasonable explanations for when it wouldn't be.