Welfare Footprint Project - a blueprint for quantifying animal pain 2021-06-26T20:05:10.191Z
Voting open for Project for Awesome 2021! 2021-02-12T02:22:20.339Z
Project for Awesome 2021: Video signup and resources 2021-01-31T01:57:59.188Z
Project for Awesome 2021: Early coordination 2021-01-27T19:11:00.600Z
Even Allocation Strategy under High Model Ambiguity 2020-12-31T09:10:09.048Z
[Summary] Impacts of Animal Well‐Being and Welfare Media on Meat Demand 2020-11-05T09:11:38.138Z
Hedging against deep and moral uncertainty 2020-09-12T23:44:02.379Z
Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering? 2020-06-22T16:41:58.831Z
Physical theories of consciousness reduce to panpsychism 2020-05-07T05:04:39.502Z
Replaceability with differing priorities 2020-03-08T06:59:09.710Z
Biases in our estimates of Scale, Neglectedness and Solvability? 2020-02-24T18:39:13.760Z
[Link] Assessing and Respecting Sentience After Brexit 2020-02-19T07:19:32.545Z
Changes in conditions are a priori bad for average animal welfare 2020-02-09T22:22:21.856Z
Please take the Reducing Wild-Animal Suffering Community Survey! 2020-02-03T18:53:06.309Z
What are the challenges and problems with programming law-breaking constraints into AGI? 2020-02-02T20:53:04.259Z
Should and do EA orgs consider the comparative advantages of applicants in hiring decisions? 2020-01-11T19:09:00.931Z
Should animal advocates donate now or later? A few considerations and a request for more. 2019-11-13T07:30:50.554Z
MichaelStJules's Shortform 2019-10-24T06:08:48.038Z
Conditional interests, asymmetries and EA priorities 2019-10-21T06:13:04.041Z
What are the best arguments for an exclusively hedonistic view of value? 2019-10-19T04:11:23.702Z
Defending the Procreation Asymmetry with Conditional Interests 2019-10-13T18:49:15.586Z
Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests 2019-07-04T23:56:44.330Z


Comment by MichaelStJules on EA's abstract moral epistemology · 2021-08-02T04:53:25.801Z · EA · GW

Discussed again here:

Comment by MichaelStJules on Alice Crary's philosophical-institutional critique of EA: "Why one should not be an effective altruist" · 2021-08-02T04:52:54.605Z · EA · GW

Discussed again here:

Comment by MichaelStJules on DeepMind: Generally capable agents emerge from open-ended play · 2021-07-30T01:54:24.274Z · EA · GW

It seems like this could extend naturally to cooperative inverse reinforcement learning.  Basically, the real world is a new game the AI has to play, and humans decide the reward subjectively (rather than with some explicit rule). The AI has developed some general competence beforehand by playing games, but it has to learn the new rules in the real world, which are not explicit.

Comment by MichaelStJules on EA Forum feature suggestion thread · 2021-07-28T17:17:45.567Z · EA · GW

You can strong-downvote an "open listing" tag to try to get it removed from a post, and then just add a "closed listing" tag. I think once the tag score drops to 0, it gets removed.

Comment by MichaelStJules on DeepMind: Generally capable agents emerge from open-ended play · 2021-07-28T05:00:38.329Z · EA · GW

For what it's worth, I've mostly not been interested in AI safety/alignment (and am still mostly not), but this also seems like a pretty big deal to me. I haven't actually read the details, but this is basically not "narrow" AI anymore, right?

I guess the expressions "narrow" and "general" are a bit unfortunate, since I don't really want to call this either. I would want to reserve the term AGI for AI that can do at least this, but can also reason generally and abstractly, and excels at one-shot learning (although there are specific networks designed for one-shot learning, like Siamese networks. Actually, why aren't similar networks used more often, even as subnetworks?).
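To spell out the Siamese idea: a single shared embedding is applied to both inputs, and one-shot classification reduces to comparing distances in embedding space. A toy sketch (the fixed linear "embedding" here stands in for a trained network; all names and data are illustrative):

```python
# Minimal sketch of the Siamese-network idea for one-shot learning.
# The same embedding weights are applied to every input (the "Siamese" part);
# a query is classified by its nearest support example in embedding space.

def embed(x, weights):
    # shared embedding: a toy linear map, standing in for a trained network
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def distance(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def one_shot_classify(query, support, weights):
    # support: list of (label, example), one example per class
    q = embed(query, weights)
    return min(support, key=lambda s: distance(q, embed(s[1], weights)))[0]

W = [[1.0, 0.0], [0.0, 1.0]]  # toy identity embedding
support = [("cat", [0.0, 1.0]), ("dog", [1.0, 0.0])]
print(one_shot_classify([0.1, 0.9], support, W))  # cat
```

Training would then adjust the shared weights so that same-class pairs land close together and different-class pairs far apart (e.g. with a contrastive or triplet loss).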

Comment by MichaelStJules on Can a Vegan Diet Be Healthy? A Literature Review · 2021-07-25T06:40:36.088Z · EA · GW

Across all studies, there was no evidence to support a causal relation between the consumption or avoidance of meat and any psychological outcomes. However, three studies provided evidence suggesting (contradictory) temporal relations between meat-abstention and depression and anxiety. Michalak, Zhang, and Jacobi (2012) demonstrated that the mean age at the adoption of meat-abstention (30.58 years) was substantially older than the mean age of the onset of mental disorder (24.69 years). These authors posited that mental disorders may lead to the adoption of a meat-less diet. The authors stated that individuals with mental disorders may “choose a vegetarian diet as a form of safety or self-protective behavior” (Michalak, Zhang, and Jacobi 2012, 6) due to the perception that plant-based diets are more healthful or because individuals with mental disorders may be “more aware of suffering of animals” (Michalak, Zhang, and Jacobi 2012, 2). Interestingly, these investigators also found that people with a lifetime diagnosis of psychological disorders consumed less fish and fast food. While these results conflict with previous research on fast food and mental health (Crawford et al. 2011), they support Matta et al.’s results and hypothesis that the exclusion of any food group, and especially meat and poultry, is associated with increased odds of having symptoms of psychological disorders (Matta et al. 2018).

Conversely, in their longitudinal analysis, Lavallee et al. (2019) found that meat-abstention was linked to “slight increases over time” (Lavallee et al. 2019, 153) in depression and anxiety in Chinese students. One important caveat when considering these disparate results on temporal relations may be differences in the factors that led to meat-abstention (e.g., religious practices, health and ethical considerations, or socio-economic status). For example, economically disadvantaged individuals who do not consume meat due to its relative cost may be at risk for ill-health for myriad reasons independent of their lack of meat consumption. Thus, future research examining temporal relations should establish clear distinctions between individuals and populations that abstain from meat consumption due to ethical, religious, and health-related perceptions, or those who do not consume meat for economic reasons.



In 2012, Beezhold and Johnston (2012) conducted an RCT in which 39 self-characterized omnivores (82% female) were assigned to one of three groups: lacto-vegetarian (i.e., avoided all animal foods except dairy), ovo-pescatarian (i.e., avoided meat and poultry but consumed fish and eggs), or omnivore (i.e., consumed meat and/or poultry at least once daily). Their results suggested that restricting meat, fish, and poultry improved some domains of short-term mood states. As detailed in our discussion, this study had major design flaws (e.g., potential observer-expectancy effects) and errors in interpretation and communication (e.g., nonequivalent groups at baseline, failure to recognize regression to the mean).

Comment by MichaelStJules on Can a Vegan Diet Be Healthy? A Literature Review · 2021-07-25T06:23:17.419Z · EA · GW

Dobersek, U., Wy, G., Adkins, J., Altmeyer, S., Krout, K., Lavie, C. J., & Archer, E. (2021). Meat and mental health: a systematic review of meat abstention and depression, anxiety, and related phenomena. Critical reviews in food science and nutrition, 61(4), 622-635.


Objective: To examine the relation between the consumption or avoidance of meat and psychological health and well-being.

Methods: A systematic search of online databases (PubMed, PsycINFO, CINAHL Plus, Medline, and Cochrane Library) was conducted for primary research examining psychological health in meat-consumers and meat-abstainers. Inclusion criteria were the provision of a clear distinction between meat-consumers and meat-abstainers, and data on factors related to psychological health. Studies examining meat consumption as a continuous or multi-level variable were excluded. Summary data were compiled, and qualitative analyses of methodologic rigor were conducted. The main outcome was the disparity in the prevalence of depression, anxiety, and related conditions in meat-consumers versus meat-abstainers. Secondary outcomes included mood and self-harm behaviors.

Results: Eighteen studies met the inclusion/exclusion criteria; representing 160,257 participants (85,843 females and 73,232 males) with 149,559 meat-consumers and 8584 meat-abstainers (11 to 96 years) from multiple geographic regions. Analysis of methodologic rigor revealed that the studies ranged from low to severe risk of bias with high to very low confidence in results. Eleven of the 18 studies demonstrated that meat-abstention was associated with poorer psychological health, four studies were equivocal, and three showed that meat-abstainers had better outcomes. The most rigorous studies demonstrated that the prevalence or risk of depression and/or anxiety were significantly greater in participants who avoided meat consumption.

Conclusion: Studies examining the relation between the consumption or avoidance of meat and psychological health varied substantially in methodologic rigor, validity of interpretation, and confidence in results. The majority of studies, and especially the higher quality studies, showed that those who avoided meat consumption had significantly higher rates or risk of depression, anxiety, and/or self-harm behaviors. There was mixed evidence for temporal relations, but study designs and a lack of rigor precluded inferences of causal relations. Our study does not support meat avoidance as a strategy to benefit psychological health.

Comment by MichaelStJules on Can a Vegan Diet Be Healthy? A Literature Review · 2021-07-25T05:39:46.978Z · EA · GW

Isabel Iguacel, Inge Huybrechts, Luis A Moreno, Nathalie Michels, Vegetarianism and veganism compared with mental health and cognitive outcomes: a systematic review and meta-analysis, Nutrition Reviews, Volume 79, Issue 4, April 2021, Pages 361–381,




Vegetarian and vegan diets are increasing in popularity. Although they provide beneficial health effects, they may also lead to nutritional deficiencies. Cognitive impairment and mental health disorders have a high economic burden.


A meta-analysis was conducted to examine the relationship between vegan or vegetarian diets and cognitive and mental health.

Data Sources

PubMed, Scopus, ScienceDirect, and Proquest databases were examined from inception to July 2018.

Study Selection

Original observational or interventional human studies of vegan/vegetarian diets were selected independently by 2 authors.

Data Extraction

Raw means and standard deviations were used as continuous outcomes, while numbers of events were used as categorical outcomes.


Results

Of 1249 publications identified, 13 were included, with 17 809 individuals in total. No significant association was found between diet and the continuous depression score, stress, well-being, or cognitive impairment. Vegans/vegetarians were at increased risk for depression (odds ratio = 2.142; 95%CI, 1.105–4.148) and had lower anxiety scores (mean difference = −0.847; 95%CI, −1.677 to −0.018). Heterogeneity was large, and thus subgroup analyses showed numerous differences.


Conclusion

Vegan or vegetarian diets were related to a higher risk of depression and lower anxiety scores, but no differences for other outcomes were found. Subgroup analyses of anxiety showed a higher risk of anxiety, mainly in participants under 26 years of age and in studies with a higher quality. More studies with better overall quality are needed to make clear positive or negative associations.


Some specific important points about the methodology from the paper:

"When a study offered information about matched and nonmatched data, the matched data were used for analysis." but it looks like only one old study had matching.

"Only raw data (unadjusted) were used to perform the meta-analyses, as only 2 publications in the present meta-analysis included adjusted data." However, "Nevertheless, adjustment for confounders did not drastically change results in these 2 studies."

Comment by MichaelStJules on A Sequence Against Strong Longtermism · 2021-07-23T19:17:11.026Z · EA · GW

I don't think there's a consensus on whether physics is continuous or discrete, but I expect that what matters ethically is describable in discrete terms. Things like wavefunctions (or the motions of physical objects) could depend continuously on time or space. I don't think we know that there are finitely many configurations of a finite set of atoms, but maybe there are only finitely many functionally distinct ones, and the rest are effectively equivalent.

I think we've also probed scales smaller than Planck by observing gamma ray bursts, but I might be misinterpreting, and these were specific claims about specific theories of quantum gravity.

Also, a good Bayesian should grant the hypothesis of continuity nonzero credence.

FWIW, though, I don't think dealing with infinitely many possibilities is as much of a problem as it's made out to be here. We can use (mixed-)continuous measures, and we can decide what resolutions are relevant and useful as a practical matter.

Comment by MichaelStJules on Mogensen & MacAskill, 'The paralysis argument' · 2021-07-19T20:39:02.763Z · EA · GW

Are these constraints on doing harm actually standard among non-consequentialists? I suspect they would go primarily for constraints on ex ante/foreseeable effects per person (already or in response to the paralysis argument), so that

  1. for each person and each extent of harm, the probability that you harm them to at least the given extent must be below some threshold, or
  2. for each person, the expected harm is below some threshold, or
  3. for each person, their expected value from the act is nonnegative (or close enough to 0), so only acts which leave no one in particular worse off in expectation than "doing nothing", i.e. weak ex ante Pareto improvements, maybe with a little bit of room.

The thresholds could also be soft and depend on benefits or act as a penalty to a consequentialist calculus, if you want to allow for much more significant benefits to outweigh lesser harms.
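To make the three options concrete, a rough formalization (the symbols here are mine, not the paper's):

```latex
% h_i(a): harm to person i under act a;  u_i(a): person i's utility under act a
% t, \epsilon(h): thresholds (illustrative)
\text{(1)}\quad \Pr[\,h_i(a) \ge h\,] \le \epsilon(h) \quad \text{for each person } i \text{ and each harm level } h
\text{(2)}\quad \mathbb{E}[\,h_i(a)\,] \le t \quad \text{for each person } i
\text{(3)}\quad \mathbb{E}[\,u_i(a)\,] \ge \mathbb{E}[\,u_i(\text{do nothing})\,] \quad \text{for each person } i
```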

It might get tricky with possible future people, or maybe the constraints only really apply in a person-affecting way. Building off 3 above, you could sum expected harms (already including probabilities of existence which can vary between acts, or taking the difference of conditional expectations and weighting) across all actual and possible people, and use a threshold constraint that depends on the expected number of actual people. Where  represents the individual utilities in the world in which you choose a given action and  represents the utilities for "doing nothing",

  1. , or
  2. , for some (or all) values of  such that .

This could handle things like contributing too much to climate change (many possible people are ex ante worse off according to transworld identity) and preventing bad lives. With counterparts, extending transworld identity, you might be able to handle the nonidentity problem, too.

Some constraints might also be only on intentional or reckless/negligent acts, although we would be owed a precise definition for reckless/negligent.

Comment by MichaelStJules on What would you do if you had half a million dollars? · 2021-07-18T19:29:39.635Z · EA · GW

Besides others mentioned, consider also getting in touch with

  2. (I think this is s-risk-focused, given the team running it)
Comment by MichaelStJules on What would you do if you had half a million dollars? · 2021-07-18T19:19:25.900Z · EA · GW

I think robustness (or ambiguity aversion) favours reducing extinction risks without increasing s-risks and reducing s-risks without increasing extinction risks, or overall reducing both, perhaps with a portfolio of interventions. I think this would favour AI safety, especially that focused on cooperation, possibly other work on governance and conflict, and most other work to reduce s-risks (since it does not increase extinction risks), at least if we believe CRS and/or CLR that these do in fact reduce s-risks. I think Brian Tomasik comes to an overall positive view of MIRI in his recommendations page, and Raising for Effective Giving, also a project by the Effective Altruism Foundation like CLR, recommends MIRI in part because "MIRI's work has the ability to prevent vast amounts of future suffering."

Some work to reduce extinction risks seems reasonably likely to me on its own to increase s-risks, like biosecurity and nuclear risk reduction work, although there may also be arguments in favour related to improving cooperation, but I'm skeptical.

For what it's worth, I'm not personally convinced any particular AI safety work reduces s-risks overall, because it's not clear it reduces s-risks directly more than it increases them by reducing extinction risks, although I would expect CLR and CRS to be better donation opportunities for this given their priorities. I haven't spent a lot of time thinking about this, though.

Comment by MichaelStJules on What would you do if you had half a million dollars? · 2021-07-18T18:06:03.843Z · EA · GW

Of course life for farmed animals has got worse...but I think people believe we should successfully render factory farming redundant on account of cultivated meat.

I think there's recently more skepticism about cultured meat (see here, although I still expect factory farming to be phased out eventually, regardless), but either way, it's not clear a similar argument would work for artificial sentience, used as tools, used in simulations or even intentionally tortured. There's also some risk that nonhuman animals themselves will be used in space colonization, but that may not be where most of the risk is.

Also, considering extinction specifically, Will MacAskill has made the argument that we should avert human extinction based on option value even if we think extinction might be best. Basically even if we avert extinction now, we can in theory go extinct later on if we judge that to be the best option.

It seems unlikely to me that we would go extinct, even conditional on "us" deciding it would be best. Who are "we"? There will probably be very divergent views (especially after space colonization, within and between colonies, and these colonies may be spatially distant and self-sufficient, so influencing them becomes much more difficult). You would need to get a sufficiently large coalition to agree and force the rest to go extinct, but both are unlikely, even conditional on "our" judgement that extinction would be better, and actively attempting to force groups into extinction may itself be an s-risk. In this way, an option value argument may go the other way, too: once TAI is here in a scenario with multiple powers or space colonization goes sufficiently far, going extinct effectively stops being an option.

Comment by MichaelStJules on What would you do if you had half a million dollars? · 2021-07-18T17:00:29.585Z · EA · GW

Note that s-risks are existential risks (or at least some s-risks are, depending on the definition). Extinction risks are specific existential risks, too.

Comment by MichaelStJules on All Possible Views About Humanity's Future Are Wild · 2021-07-14T20:33:55.880Z · EA · GW

I mostly agree with your assessments.

On the skeptical side, I think the most likely way(s) space colonization doesn't happen are that the costs would be too high for people to ever be able to afford or want to do it on a large scale (given opportunity costs), or at least before we go extinct for some other reason. Furthermore, if there's little interest in or active opposition to allowing AIs (or artificial sentience) to colonize space on their own, then costs may increase significantly to feed biological humans and ensure there's enough oxygen for them, although it could still be more AIs than humans going.

I don't think assigning probability > 50% to these possibilities together is unreasonable, nor is assigning probabilities < 50%. If you forced me to choose a single number, I'd probably choose something close to 50-50 on whether large scale space colonization happens at all, because of how uncertain I am now. Something like only a 10% chance on each side would be about the limit for what I would consider for decision-making (except to illustrate) if I wanted to provide a range of cost-effectiveness estimates for any intervention related to this. I'm not sure I'd say anything outside this 10-90% range is unreasonable, but just outside what I'd consider worth entertaining for myself. I would want to see a really strong argument to entertain something as extreme as 1% on either side.

Comment by MichaelStJules on Pascal's Mugging and abandoning credences · 2021-07-09T23:13:03.296Z · EA · GW

I think timidity, as described in your first link, e.g. with a bounded social welfare function, is basically okay, but it's a matter of intuition (similarly, discomfort with Pascalian problems is a matter of intuition). However, it does mean giving up separability in probabilistic cases, and it may instead support x-risks reduction (depending on the details).

I would also recommend

Also, questions of fanaticism may be relevant for these x-risks, since it's not the probability of the risks that matter, but the difference you can make. There's also ambiguity, since it's possible to do more harm than good, by increasing the risk instead or increasing other risks (e.g. reducing extinction risks may increase s-risks, and you may be morally uncertain about how to weigh these).

Comment by MichaelStJules on A longtermist critique of “The expected value of extinction risk reduction is positive” · 2021-07-07T03:04:39.426Z · EA · GW

Each of the five mutually inconsistent principles in the Third Impossibility Theorem of Arrhenius (2000) is, in isolation, very hard to deny.


This post/paper points out that lexical total utilitarianism already satisfies all of Arrhenius's principles in his impossibility theorems (there are other background assumptions):

However, it’s recently been pointed out that each of Arrhenius’s theorems depends on a dubious assumption: Finite Fine-Grainedness. This assumption states, roughly, that you can get from a very positive welfare level to a very negative welfare level via a finite number of slight decreases in welfare. Lexical population axiologies deny Finite Fine-Grainedness, and so can satisfy all of Arrhenius’s plausible adequacy conditions. These lexical views have other advantages as well. They cohere nicely with most people’s intuitions in cases like Haydn and the Oyster, and they offer a neat way of avoiding the Repugnant Conclusion.


Also, for what it's worth, the conditions in these theorems often require a kind of uniformity that may only be intuitive if you're already assuming separability/additivity/totalism in the first place. For example: (a) there exists some subpopulation A that satisfies a given condition for any possible disjoint unaffected common subpopulation C (i.e. the subpopulation C exists in both worlds, and the welfares in C are the same across the two worlds), rather than (b) for each possible disjoint unaffected common subpopulation C, there exists a subpopulation A that satisfies the condition (possibly a different A for a different C). The definition of separability is just that a disjoint unaffected common subpopulation C doesn't make a difference to any comparisons.

So, if you reject separability/additivity/totalism or are at least sympathetic to the possibility that it's wrong, then it is feasible to deny the uniformity requirements in the principles and accept weaker non-uniform versions instead. Of course, rejecting separability/additivity/totalism has other costs, though.
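The separability condition can be stated as a sketch (my notation):

```latex
% Separability: an unaffected common subpopulation makes no difference to
% comparisons. For all subpopulations A, B and all disjoint unaffected
% common subpopulations C and C':
A \cup C \succeq B \cup C \iff A \cup C' \succeq B \cup C'
```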

Comment by MichaelStJules on A longtermist critique of “The expected value of extinction risk reduction is positive” · 2021-07-07T00:37:33.135Z · EA · GW

I might have missed it in your post, but descendants of humans encountering a grabby alien civilization is itself an (agential) s-risk. If they are optimizing for spread and unaligned ethically with us, then we will be in the way, and they will have no moral qualms with using morally atrocious tactics, including spreading torture on an astronomical scale to threaten our values to get access to more space and resources, or we may be at war with them. If our descendants are also motivated to expand, and we encounter grabby aliens, how long would conflict between us go on for?

Comment by MichaelStJules on A longtermist critique of “The expected value of extinction risk reduction is positive” · 2021-07-06T23:45:42.567Z · EA · GW

Perfection Dominance Principle. Any world A in which no sentient beings experience disvalue, and all sentient beings experience arbitrarily great value, is no worse than any world B containing arbitrarily many sentient beings experiencing only arbitrarily great disvalue (possibly among other beings).[15]

I'm confused by the use of quantifiers here. Which of the following is what's intended?

  1. If A has only beings experiencing positive value and B has beings experiencing disvalue, then A is no worse than B? (I'm guessing not; that's basically just the procreation asymmetry.)
  2. For some level of value v, some level of disvalue d, and some positive integer N, if A has only beings experiencing value at least v, and B has at least N beings experiencing disvalue d or worse (and possibly other beings), then A is no worse than B.
  3. Something else similar to 2? Can v and/or d depend on A?
  4. Something else entirely?
Comment by MichaelStJules on Key Lessons From Social Movement History · 2021-07-02T18:16:17.071Z · EA · GW

Interesting. A few comments on this:

  1. I think ACE updated their page since you cited it, so the cumulative elasticity estimates are gone. Faunalytics has estimates: (EA Forum post). See the Elasticities tab here.
  2. A price elasticity of demand of -0.81 may be "inelastic" according to the comparison to -1, but it's actually surprisingly elastic to me. It's more elastic than demand for almost all animal products according to the estimates Faunalytics compiled.
  3. It would be good to briefly define the "cumulative elasticity factor", e.g. as the ratio of the percentage change in quantity over the percentage change in a demand shock/shift.
  4. There are at least two kinds of support to consider: the number of people getting abortions, and the percentage of people (or politicians) in favour of more liberal or restrictive abortion laws/policies. The latter probably has little or even the opposite dependence on price, i.e. as abortion policy becomes more restrictive and prices increase, support for more liberal policies increases, so these restrictions may not be stable, especially where the left wing is close to having or has majority power.
    1. Maybe this could be the case with animal products, too, but we have alternatives to animal products that are looking increasingly attractive (and people can switch to other animal products), whereas the alternatives to abortion for someone who wants an abortion are not very good, so it seems less likely to be the case for animal products.
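For illustration, under simple linear supply-and-demand assumptions near equilibrium, the fraction of a marginal demand reduction that translates into reduced production is e_s / (e_s + |e_d|). The supply elasticity of 1.0 below is purely an illustrative assumption; -0.81 is the demand elasticity quoted above:

```python
# Toy equilibrium-displacement calculation: what fraction of a one-unit
# demand reduction shows up as reduced production in the long run?
# Supply elasticity (1.0 here) is an assumed illustrative value.

def cumulative_elasticity(supply_elasticity, demand_elasticity):
    # For linear supply/demand near equilibrium:
    # d(quantity) / d(demand shift) = e_s / (e_s + |e_d|)
    return supply_elasticity / (supply_elasticity + abs(demand_elasticity))

print(round(cumulative_elasticity(1.0, -0.81), 2))  # 0.55
```

So under these (assumed) numbers, a bit over half of a marginal demand reduction would persist as reduced quantity once prices adjust.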
Comment by MichaelStJules on Key Lessons From Social Movement History · 2021-07-02T15:51:28.348Z · EA · GW

Hmm, I guess with at least 40 correlations (before excluding some?), making this kind of adjustment will very likely leave you with no statistically significant correlations, unless you had some extremely small p-values (it looks like they were > 0.01, so not that small), but you could also take that as a sign that this kind of analysis is unlikely to be informative without retesting the same hypotheses separately on new data. EDIT: Actually, how many correlations did you test?

I think it's worth noting explicitly in writing the much higher risk that these are chance correlations, due to the number of tests you did. It may also be worth reporting the original p-values and adjusted significance level (or adjusted p-values; I assume you can make the inverse p-value adjustments instead, but haven't checked if anyone does this).
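As a sketch of what adjusted p-values could look like with ~40 tests (the p-values below are made up for illustration; this is Holm's step-down method, valid under the same conditions as Bonferroni but slightly more powerful):

```python
# Holm-Bonferroni adjusted p-values for m hypothesis tests.
# With ~40 tests, even a raw p = 0.01 is no longer significant
# at the 0.05 family-wise level.

def holm_adjust(pvals):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = min(1.0, (m - rank) * pvals[i])
        running_max = max(running_max, adj)  # enforce monotonicity
        adjusted[i] = running_max
    return adjusted

pvals = [0.01, 0.02, 0.04] + [0.5] * 37  # 40 tests, three "significant" raw
adj = holm_adjust(pvals)
print(adj[:3])  # approx. [0.4, 0.78, 1.0] -- none below 0.05
```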

It might also be worth reporting the number and proportion of statistically significant correlations you found (before and/or after the exclusions). If the tests were independent (they aren't), you'd expect around 5% if the null were true in all cases, just by chance. Just reasoning from my understanding of hypothesis tests, a higher proportion than 5% would increase your confidence in the claim that some of the statistically significant relationships you identified are likely non-chance relationships (or that the significant ones are dependent), and a similar or lower proportion would suggest they are chance (or that your study is underpowered, or that the insignificant ones are dependent).

I was going to suggest ANOVA F-tests with linear models for dependent variables of interest to get around the independent tests assumption, but unless you cut down the number of independent variables to less than the number of movements, the model will probably overfit using extreme coefficients and perfectly predict the dependent variable, and this wouldn't be informative. You could constrain their values to try to prevent this, but then this gets much messier and there's still no guarantee this will address the problem.

EDIT: Also, I'm not sure what kinds of tests you used, but with small sample sizes, my understanding is that tests based on resampling (permutation, bootstrapping, jackknife) tend to be more accurate than tests using asymptotic distributions (e.g. a normally distributed test statistic is often not a good approximation for a small sample), but this is a separate concern from adjusting for multiple tests. I'm also not sure how much this actually matters.
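To illustrate the resampling point, a permutation test sidesteps the asymptotic approximation entirely by recomputing the statistic under shuffles of one variable. A minimal sketch with made-up data (the variable names are illustrative):

```python
import random

# Permutation test for a correlation with a small sample.
# Rather than relying on the asymptotic null distribution of r,
# we shuffle one variable and recompute r many times.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def permutation_pvalue(x, y, n_perm=10000, seed=0):
    rng = random.Random(seed)
    observed = abs(pearson_r(x, y))
    y = list(y)  # work on a copy
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y)
        if abs(pearson_r(x, y)) >= observed:
            hits += 1
    # add-one smoothing so the p-value is never exactly 0
    return (hits + 1) / (n_perm + 1)

x = [1, 2, 3, 4, 5, 6, 7, 8]  # e.g. a movement-level tactic score
y = [2, 1, 4, 3, 6, 5, 8, 7]  # e.g. a movement-level success score
print(permutation_pvalue(x, y))
```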

Comment by MichaelStJules on [Summary] Impacts of Animal Well‐Being and Welfare Media on Meat Demand · 2021-07-02T06:51:16.320Z · EA · GW

I didn't realize ACE had already covered this study a while back. See here.

Comment by MichaelStJules on Key Lessons From Social Movement History · 2021-07-02T06:32:04.462Z · EA · GW

Your recommendation on reducing focus on issue salience is consistent with Ezra Klein's opinion on the subject, which he discussed on the 80,000 Hours podcast here (and was surprising to me when I first heard it). Basically, the more attention an issue gets, the more polarized it gets.

Apparently Dylan Matthews tweeted a study on this; if anyone finds it, please share! :P

Comment by MichaelStJules on Key Lessons From Social Movement History · 2021-07-02T06:23:28.712Z · EA · GW

However, I found among the case studies that successful social change was negatively correlated with the use of corporate campaigns and negotiations. For example, the antislavery and children’s rights movements — among the most successful — spent little on corporate campaigns, while the less successful anti-abortion and Fair Trade movements spent relatively more.[26]


Does this correlation also go with when the movements were active and a general increase in corporate tactics over time? At least for your examples, antislavery and children's rights seem older than anti-abortion and Fair Trade.

I would guess the political and corporate situations today are quite different, with corporate influence stronger, perhaps especially in the US. Getting corporations to commit first should reduce their attempts to undermine legal reform or even get them to join in support to force their competitors to compete on a level playing field. Going through the corporate route first might also reduce the risk that the issue gets split between the political left and right, by preventing the industry from trying to build partisan support against it. This could also therefore reduce issue salience.

Comment by MichaelStJules on Key Lessons From Social Movement History · 2021-07-02T05:57:07.977Z · EA · GW

Are you suggesting that this particular disanalogy substantially weakens any of the specific claims or recommendations I make here?


Maybe not substantially, since I think I agree overall with these two recommendations, but it might weaken concerns about negative consequences from incremental tactics and the case for diversifying beyond corporate campaigns (provided we keep supporting animal product substitutes, which should of course go beyond corporate campaigns, and provided doing so actually makes much difference).


(Less important, but with respect to this particular point, I think there's a similar effect from Targeted Regulation of Abortion Providers and other incremental anti-abortion legislation, which makes abortion more difficult or expensive. Restrictions on lethal injection and other methods of capital punishment similarly raise the price of capital punishment relative to other options, albeit only by a small amount, and capital punishment is already more expensive. And more tentatively,  there's an analogy with the welfare reforms implemented by some slaveowners. So I don't actually think that this disanalogy is as strong as many of the others we could point to.)

These are good points. Still, I think that if substitutes are approaching parity, many people won't see any good reason to stick with conventional animal products. The risks are that substitutes won't reach parity without increasing the costs of conventional animal products, and that getting to price parity without increasing those costs will take much longer (although corporate campaigns are not the only way, and maybe not the best way, to raise their prices). I would guess that support for and opposition to capital punishment and abortion are not very sensitive to cost, being driven primarily by ethical views (or political identity), whereas a large percentage of the population would switch to substitutes as they approach parity, since price is the main or only thing holding them back now, not some separate preference for conventional animal products or for animals to be farmed (although many people have that preference, too).

Slavery could be a good analogy, since we're talking about the public's consumption habits and many people making their livings off of it (with animal agriculture, I'd guess there are proportionally far fewer animal farmers, but the industry has significant political influence anyway, and there's a lot of support for animal farmers from non-farmers).

Comment by MichaelStJules on Key Lessons From Social Movement History · 2021-07-02T05:10:53.509Z · EA · GW

I then estimated Spearman’s correlations between the variables and tested for statistical significance (p < 0.05), though there are many limitations to this sort of correlational historical evidence and to statistical tests with small sample sizes.[2]

[2] I generated scores for 43 different metrics and have not tested for significant correlations between all possible pairings. Additionally, I identified some correlations as significant but chose to exclude them if I believed them to be especially misleading, given known confounding factors or methodological difficulties. Given the high rate of Type II error, I only report significant correlations in the discussion below, rather than treating nonsignificant correlations as providing meaningful evidence that there is no relationship between two variables. This statistical analysis was only one input to help me clarify my thinking, rather than the main criterion for deciding the key lessons from social movement history. I used Spearman’s correlation rather than Pearson’s correlation because it depends on fewer assumptions such as continuous data and is less sensitive to outliers.


Did you adjust for multiple tests? It looks like you didn't adjust the significance level (0.05) down, so did you adjust the p-values up instead?
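To make the multiple-testing worry concrete: with 43 metrics there are 43×42/2 = 903 possible pairwise tests, so an unadjusted p < 0.05 threshold would be expected to yield dozens of spurious "significant" correlations by chance alone. Here's a minimal sketch with made-up data (the metrics and scores below are hypothetical, not from the post), using a simple Bonferroni adjustment, one of several standard corrections:

```python
# Illustrative sketch, not the post's actual analysis: pairwise Spearman
# correlations with a Bonferroni adjustment (multiply each p-value by the
# number of tests, capped at 1) to control the family-wise error rate.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
data = rng.normal(size=(12, 5))  # e.g. 12 movements scored on 5 hypothetical metrics

results = []
n_metrics = data.shape[1]
n_tests = n_metrics * (n_metrics - 1) // 2  # all pairwise tests
for i in range(n_metrics):
    for j in range(i + 1, n_metrics):
        rho, p = spearmanr(data[:, i], data[:, j])
        p_adj = min(1.0, p * n_tests)  # Bonferroni correction
        results.append((i, j, rho, p, p_adj))

for i, j, rho, p, p_adj in results:
    print(f"metrics {i},{j}: rho={rho:+.2f}, raw p={p:.3f}, adjusted p={p_adj:.3f}")
```

With null data like this, some raw p-values can dip below 0.05 by chance, while the adjusted ones generally don't, which is the point of correcting.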

Comment by MichaelStJules on Key Lessons From Social Movement History · 2021-07-02T02:06:00.757Z · EA · GW

It helps to distinguish possible goals (ending factory farming vs ending animal agriculture), since a given intervention might be incremental towards one but a large step towards another, and you draw conclusions about incremental reforms vs achieving the goal. If factory farming is far worse than other animal agriculture, then local factory farming bans are probably far better to pursue than animal agriculture bans, since they get most of the value, and are much more feasible. Even if it turned out that local factory farming bans were counterproductive towards animal agriculture bans, they would be worth pursuing anyway.

From a longtermist perspective, maybe it's not the case that factory farming is far worse, though. Plausibly it is if we're expecting some attractor state/lock-in event soon, and we need to take what we can get now, but if we're aiming for a wider moral circle later, maybe we should try to go straight for animal agriculture bans. We might shift away from welfare reforms to animal product substitutes, to get enough support for full animal agriculture bans in some regions. (Although the incremental welfare reforms might turn out to be valuable anyway, for momentum and reducing the gap for price parity.)

Comment by MichaelStJules on Key Lessons From Social Movement History · 2021-07-01T00:38:17.494Z · EA · GW

One potentially important disanalogy between the animal advocacy movement and others with respect to incremental reform is that incremental reforms often make animal products more expensive for consumers, and this can help cultured and plant-based substitutes achieve price parity, which may be an important catalyst, and it can generally make vegan foods more attractive.

Comment by MichaelStJules on Key Lessons From Social Movement History · 2021-07-01T00:25:57.268Z · EA · GW

Given your pessimism about individual outreach, the low (but surprisingly high) public support for banning animal agriculture, and the pushback against incremental reforms, what paths do you see towards wide-scale bans on a) factory farming and b) animal agriculture generally?

Mainly through substitutes? Or is there a series of institutional reforms that are tractable and could get us the support to get there?

Factory farming bans specifically seem plausibly in reach in some regions.

Comment by MichaelStJules on Key Lessons From Social Movement History · 2021-07-01T00:16:21.273Z · EA · GW

Would regional factory farming bans (much lower stocking density limits, no intensive confinement like cages or crates; not bans of animal agriculture in general) count as incremental?

They would be incremental towards the goal of abolition, but not towards a large scale ban on factory farming?

Comment by MichaelStJules on [Meta] Is it legitimate to ask people to upvote posts on this forum? · 2021-06-29T19:13:57.717Z · EA · GW

I think it's okay to post in EA groups specific to the topic and to share specifically with EAs who are likely to be interested; that's likely to have a similar but smaller effect to explicitly asking people to upvote.

Comment by MichaelStJules on Can money buy happiness? A review of new data · 2021-06-28T17:01:27.777Z · EA · GW

I think being wealthy can detract from welfare in other ways. Wealthy people may be more likely to have shallow relationships, face more scrutiny, and trust others less because of their wealth. So it's possible welfare would peak at some income, but I guess $75K seems low for that.

Comment by MichaelStJules on Shallow evaluations of longtermist organizations · 2021-06-28T01:26:41.745Z · EA · GW

For instance, requiring psychopathy tests for politicians, or psychological evaluation, seems very unrealistic.

Seems like you could do polling and start a ballot initiative where it looks promising, if anywhere. Starting small can get the momentum rolling and bring more attention to the issue, and then it could pick up support elsewhere.

Is there any particular reason you think it would be too unpopular or not work well? People might not like it in case it becomes a weapon used by the state to shut out political opponents, but maybe there are ways to prevent this, with bipartisan testers, or letting the subject choose at least one of the testers (who must have appropriate credentials). It could be like jury selection, with subjects allowed to challenge/strike potential testers (see strike for cause, peremptory challenge).

Also, we wouldn't need to require them to pass these tests; we could just publish the results so the public can be informed.

Maybe, in the US, it wouldn't be very effective other than in primaries, given how partisan things are.

Or do you think no useful tests could be made?

Comment by MichaelStJules on Welfare Footprint Project - a blueprint for quantifying animal pain · 2021-06-26T23:37:22.253Z · EA · GW

Most surprising to me is that nest deprivation is assigned the same intensity of suffering ("disabling pain") as the worst keel bone fractures. This also seems to be one of the main advantages of cage-free systems over caged ones, with foraging deprivation (at the intensity of "hurtful pain") being another, based on their charts. The evidence for both is discussed in chapter 6 of their book. They define hurtful and disabling pain here as follows:

Hurtful: experiences in this category disrupt the ability of individuals to function optimally. Different from Annoying pain, the ability to draw attention away from the sensation of pain is reduced: awareness of pain is likely to be present most of the time, interspersed by brief periods during which pain can be ignored depending on the level of distraction provided by other activities. Individuals can still conduct routine activities that are important in the short-term (e.g. eating, foraging) and perform cognitively demanding tasks, but an impairment in their ability or motivation to do so is likely to be observed. Although animals may still engage in behaviors they are strongly motivated to perform (i.e., exploratory, comfort, sexual, and maintenance behaviors), their frequency or duration is likely to be reduced [55]. Engagement in positive activities with no immediate benefits (e.g., play in piglets, dustbathing in chickens) is not expected. Reduced alertness and inattention to ongoing stimuli may be present. The effect of (effective) drugs (e.g., analgesics if pain is physical, psychotropic drugs in the case of psychological pain) in the alleviation of symptoms is expected.


Disabling: pain at this level takes priority over most bids for behavioral execution, and prevents all forms of enjoyment or positive welfare. Pain is continuously distressing. Individuals affected by harms in this category often change their activity levels drastically (the degree of disruption in the ability of an organism to function optimally should not be confused with the overt expression of pain behaviors, which is less likely in prey species). Inattention and unresponsiveness to ongoing stimuli and surroundings is likely to be observed. Relief often requires higher drug dosages or more powerful drugs.

Comment by MichaelStJules on Issues with Using Willingness-to-Pay as a Primary Tool for Welfare Analysis · 2021-06-26T18:41:51.828Z · EA · GW

Ideally we'd move onto measures of subjective well-being, like life satisfaction and just use them directly, but I expect data to be much harder to obtain (at least I'd guess there's much less data now, and I expect trying to estimate the effects of various goods on life satisfaction would require large samples of subjects or experiments to detect effects). Your solution 1, using weights based on subjective well-being like you describe, seems like a good approach.

Comment by MichaelStJules on The case for strong longtermism - June 2021 update · 2021-06-22T21:14:29.976Z · EA · GW

Ya, maybe your representor should be a convex set, so that for any two functions in it, any probabilistic mixture of them is also in your representor. This way, if you have one function with expected value x and another with expected value y, you should have functions with every expected value in between. So, if you have positive and negative EVs in your representor, you would also have 0 EV.
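The convexity point can be made concrete with hypothetical numbers (the x and y below are made up): since expectation is linear in the mixture weight, mixing two probability functions with expected values x and y sweeps out every expected value between them, including 0 whenever x < 0 < y.

```python
# Illustrative sketch: the expected value of the mixture lam*P + (1-lam)*Q
# is lam*x + (1-lam)*y, so varying lam from 0 to 1 covers all of [x, y].
x, y = -5.0, 12.0  # hypothetical EVs of two functions in the representor

def mixture_ev(lam, x, y):
    return lam * x + (1 - lam) * y

# The mixture weight that gives expected value exactly 0:
lam0 = y / (y - x)
print(mixture_ev(lam0, x, y))  # ~0 (up to float rounding)

# Sweeping lam from 0 to 1 covers the whole interval [x, y]:
evs = [mixture_ev(k / 10, x, y) for k in range(11)]
print(min(evs), max(evs))
```

So a convex representor with both positive and negative EVs necessarily contains a function with EV exactly 0.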

Do you mean negative EV is slightly extreme or ruling out negative EV is slightly extreme?

I think neglecting to look into and address ways something could be negative (e.g. a probability difference, EV) often leads us to unjustifiably assuming a positive lower bound, and I think this is an easy mistake to make or miss. Combining a positive lower bound with astronomical stakes would make the argument appear very compelling.

Comment by MichaelStJules on The case for strong longtermism - June 2021 update · 2021-06-22T06:04:40.082Z · EA · GW

I think complex cluelessness is essentially covered by the other subsections in the Cluelessness section. It's an issue of assigning numbers arbitrarily to the point that what you should do depends on your arbitrary beliefs. I don't think they succeed in addressing the issue, though, since they don't sufficiently discuss and address ways each of their proposed interventions could backfire despite our best intentions (they do discuss some in section 4, though). The bar is pretty high to satisfy any "reasonable" person.

Comment by MichaelStJules on The case for strong longtermism - June 2021 update · 2021-06-22T05:53:16.037Z · EA · GW

If, for instance, one had credences such that the expected number of future people was only 10^14, the status quo probability of catastrophe from AI was only 0.001%, and the proportion by which $1 billion of careful spending would reduce this risk was also only 0.001%, then one would judge spending on AI safety equivalent to saving only 0.001 lives per $100 – less than the near-future benefits of bednets. But this constellation of conditions seems unreasonable.


For example, we don’t think any reasonable representor even contains a probability function according to which efforts to mitigate AI risk save only 0.001 lives per $100 in expectation.

This isn't central so they don't elaborate much, but they are assuming here that we will not do more harm than good in expectation if we spend "carefully", and that seems arbitrary and unreasonable to me. See some discussion here.

Comment by MichaelStJules on What are some key numbers that (almost) every EA should know? · 2021-06-18T04:45:02.597Z · EA · GW

Maybe aggregate EA wealth (wealth held by EAs, or wealth intended for EA?), dominated by Open Phil and maybe a few other billionaires.

Comment by MichaelStJules on What are some key numbers that (almost) every EA should know? · 2021-06-18T04:40:33.321Z · EA · GW
  • Chicken deaths in a year: ~10^10

I think that's about right in the US, ~50 billion worldwide, but ~20 billion alive at any moment worldwide.

Comment by MichaelStJules on Non-consequentialist longtermism · 2021-06-07T08:09:49.400Z · EA · GW

Magnus Vinding defends suffering-focused ethics based on various non-consequentialist views in sections 6.6-6.12 of his book, Suffering-Focused Ethics: Defense and Implications, and argues for reducing s-risks in the same book as a consequence of suffering-focused ethics. I don't think he argues directly for reducing s-risks specifically based on these views (rather than generally reducing suffering), though, and I'm not sure these other views would recommend reducing s-risks over other ways to prevent suffering; it would depend on the specifics.

Comment by MichaelStJules on Non-consequentialist longtermism · 2021-06-07T07:43:51.299Z · EA · GW

From SEP:

Another emerging debate is whether contractualism can deliver plausible verdicts in cases involving risks of human extinction. The prima facie problem for contractualism here is that, because the outcome where we fail to avoid imminent extinction contains no future people at all, there is no (particular or representative) future person who has the standing to reasonably reject principles instructing present people to ignore extinction risks and focus entirely on meeting present needs. (Finneron-Burns 2017, Frick 2017.)

I'm not sure it follows that a contractualist should focus on present needs, though, since I think some contractualists would accept the procreation asymmetry, and so preventing futures with very bad lives could be important.

Rawls was a contractualist and argued for saving for future generations (assuming they will exist) based on the veil of ignorance; see 4.5 Rawls’s Just Savings Principle in the SEP article Intergenerational Justice:

Thus the correct principle is that which the members of any generation (and so all generations) would adopt as the one their generation is to follow and as the principle they would want preceding generations to have followed (and later generations to follow), no matter how far back (or forward) in time. (Rawls 1993: 274; Rawls 2001: 160)

Still, this seems to me to be a basically consequentialist argument, since, from my understanding, Rawls' treatment of the original position behind the veil of ignorance is basically consequentialist.

The article also discusses rights-based approaches and other reasons to care for future generations.

Apparently contractualists are basically Kantian deontologists, though. On the other hand, contractarianism attempts to motivate ethical behaviour through rational self-interest without assuming concern for acting morally or taking the interests of others into account. See the SEP article on contractarianism, which contrasts the two in its introduction and in a few other places in the article.

Comment by MichaelStJules on Non-consequentialist longtermism · 2021-06-07T06:14:32.211Z · EA · GW

Maybe a deontological antinatalist ethics? Some may be interested in particular in (voluntary) human extinction, which would probably have very long term effects. Bringing someone into existence may be seen as a serious harm, exploitation or at least being reckless with the life of another, and so impermissible. However, the reasons to convince others to stop having kids may be essentially consequentialist, unless you have positive duties to others.

A proposal I've heard in contractualist and deontological theories is that to choose between two actions, you should prioritize the individual(s) with the strongest claim or who would be harmed the most (not necessarily the worst off, to contrast with Rawls' Difference Principle/maximin).  This is the "Greater Burden Principle" by the contractualist Scanlon. Tom Regan, the deontologist animal rights theorist, also endorsed it, as the "harm principle".

This principle might lend itself to longtermist thinking, but I'm not sure anyone has made a serious attempt to advocate for longtermism under such a view.

You might think that, unless you promote extinction, there is likely to be someone in the distant future who would be harmed far more than anyone in the short-term future would be harmed by promoting extinction, given the huge number of chances for things to go very badly for some individual among a huge number of future individuals, or given intentional optimization for suffering with advanced technology. Contractualist and deontological views generally take additional people to be at best neutral in themselves, but if you allowed extra lives to be good in themselves, the individuals harmed most in the choice between extinction and non-extinction may be individuals in the distant future who would have lives with more value than any life so far, and failing to ensure they exist may cause the greatest individual harm.

Furthermore, it has been argued that, according to contractualism, helping more people is better than helping fewer, when the individual harms are of the same magnitude, e.g. based on a tie-break argument or a veil of ignorance. See Suikkanen for some discussion.

There have also been recent attempts to adapt the Greater Burden Principle for cases with risk/uncertainty, since that has apparently been a problem. See Frick, for example. I think the handling of risk could be important for whether or not a theory endorses longtermism.

Comment by MichaelStJules on Non-consequentialist longtermism · 2021-06-05T02:39:32.257Z · EA · GW

Not a full theory, but Frick argues that humanity may have value in itself that's worth preserving:

Comment by MichaelStJules on Differences in the Intensity of Valenced Experience across Species · 2021-06-03T19:16:26.223Z · EA · GW

Setting aside unconscious processing and reflexive behaviour and assuming all neural paths from input to output go through conscious experience (they don't), there would be two ways to fix this and get back the original one-brain behaviour in response to the same inputs, while holding the size of the two brains constant: 

  1. reduce the intensity of the experiences across the two brains, and
  2. reduce the output response relative to intensity of experience across the two brains.


1 could also be divided into further steps for physical stimuli, for example noting that sensory pain perception and the affective response to pain are distinct:

  1. reduce the intensity of sensory perception across the two brains for a given stimulus intensity
  2. reduce the intensity of the affective response across the two brains for a given sensory perception intensity
  3. reduce the output response across the two brains for a given affective intensity.

And repeating the argument in the comment I'm replying to, the prior could be  for physical stimuli. Of course, this illustrates dependence on some pretty arbitrary and empirically ungrounded assumptions about how to divide up a brain.

I wouldn't be surprised if the average insect neuron fired more often than the average neuron in a larger brain for similar behavioural responses to events, since larger brains could have a lot more room for redundancy. Maybe this can help prevent overfitting in a big brain, like the "dropout" used while training deep artificial neural networks. This seems worth checking by comparing actual animal brains. The number of neurons (in the relevant parts of the brain) firing per second seems to matter more than the number of neurons alone, and the two may not scale linearly with each other in practice.
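The dropout mechanism I'm alluding to can be sketched in a few lines (an illustrative toy, not a claim about real brains): during training, each unit is silenced with probability p, so the network can't rely on any single neuron and is pushed toward redundant representations.

```python
# Minimal sketch of "inverted" dropout as used in deep learning (illustrative).
# Each unit is zeroed with probability p; kept units are scaled by 1/(1-p) so
# the expected activation is unchanged between training and inference.
import numpy as np

def dropout(activations, p, rng, train=True):
    """Zero each unit with probability p, rescaling survivors by 1/(1-p)."""
    if not train or p == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

rng = np.random.default_rng(42)
acts = np.ones(100_000)
dropped = dropout(acts, p=0.5, rng=rng)
# Roughly half the units are silenced, but the mean activation stays near 1.
print(dropped.mean())
```

The loose analogy: a system trained under random unit failure learns redundant codes, so a bigger brain with more redundancy might need fewer firings per neuron for the same behavioural output.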

Comment by MichaelStJules on Christian Tarsney on future bias and a possible solution to moral fanaticism · 2021-06-01T06:05:23.869Z · EA · GW

Because people in the far future can't benefit us, save for immortality/revival scenarios, would contractualism give us much reason to ensure they come to exist, i.e. to continue to procreate and prevent extinction? Also, do contractualist theories tend to imply the procreation asymmetry, or even antinatalism?

It seems like contractualism and risk are tricky to reconcile, according to Frick, but he makes an attempt in his paper, Contractualism and Social Risk, discussed more briefly in section 1. Ethics of Risk here.

Comment by MichaelStJules on Animal Welfare Fund: Ask us anything! · 2021-05-31T00:34:57.689Z · EA · GW

Oh ya, you would probably have been aware of fishes caught for feed, but a recent estimate for their numbers is surprisingly huge (to me), to the extent that fish farming's welfare effects could pretty plausibly be dominated by the effects on wild fishes (and other wild aquatic animals). From the Aquatic Life Institute:

● Approximately 1.2 trillion aquatic animals are fed to other aquatic animals each year. This is approximately one-third to one-half of all animals fished.

● In order to produce the billions of fish that end up on the human plate, trillions of fish are processed, or fed live, as fish feed.

● Many of the fish we feed Salmon have similar welfare needs, thus creating a ‘welfare pyramid’ effect, as each farmed salmon must eat the biomass equivalent to 9 herring, or 120 anchovies, to be brought to harvest weight.

I think ALI is going ahead with recommending the replacement of fish feed, but this seems plausibly a bad thing to do (and more so the more weight you give to fishes than invertebrates), although I'm not sure either way.
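As a back-of-envelope illustration of the quoted "welfare pyramid" ratios (the one-million-salmon harvest below is a made-up number, and real feed conversion varies widely by species and farm):

```python
# Illustrative arithmetic only, using the feed ratios quoted above from ALI:
# each farmed salmon eats the biomass equivalent of 9 herring or 120 anchovies.
feed_fish_per_salmon = {"herring": 9, "anchovies": 120}

farmed_salmon = 1_000_000  # hypothetical harvest
for species, ratio in feed_fish_per_salmon.items():
    print(f"{species}: ~{farmed_salmon * ratio:,} feed fish for {farmed_salmon:,} salmon")
```

Even at the low-end ratio, feed fish outnumber the farmed fish by roughly an order of magnitude, which is why effects on wild-caught feed fish could dominate.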

That said, I tend to agree with Michael's thought that the indirect wild-animal impacts of diet may be more significant than many of the kinds of interventions that WAI could pull off because WAI-type interventions may not be focused on reducing numbers of wild animals, and without reducing numbers of wild animals, it's difficult for me to know if suffering is actually being reduced in light of cluelessness.

I do think WAI could come up with interventions that we could agree net reduce expected suffering while keeping populations roughly constant by reducing causes of suffering or death, paired with (more) humane population control (wildlife contraceptives, sterilization, or CRISPR to manage fertility rates, or more humane methods to cull or euthanize animals). However, these seem much harder to implement and scale to me, due to the costs, complexity and public disinterest or opposition. Humane insecticides in particular seem promising, though.

Comment by MichaelStJules on How do other EAs keep themselves motivated? · 2021-05-28T20:09:47.089Z · EA · GW

Although I don't go on Facebook as much lately, it's mostly EA and animal welfare stuff now.

I have some close EA friends I talk to and hang out with often.

Perhaps most importantly for my daily work, I prefer more structured work environments where there's some continuous social pressure to be productive, even just from working in an open office setting or coworking online. It's better for me if there's a scheduled meeting every day to start my day (maybe just 5 minutes, to discuss my plans for the day), so that I owe it to others to start work by a certain time.

Comment by MichaelStJules on Please take the Reducing Wild-Animal Suffering Community Survey! · 2021-05-27T17:35:22.153Z · EA · GW

We shared the anonymized data and the contact info people wanted shared with the EA wild animal orgs (Animal Ethics, Wild Animal Initiative, Rethink Priorities), and the anonymized data with others in a wild animal welfare Slack channel (I can add you to it). Since it was a Google Form, a decent amount of basic analysis was already done automatically. The people working on the survey didn't have the time to analyze the data or write anything up (I think due to some unforeseen circumstances among the others; I personally was starting an internship and have found splitting my focus to be very difficult while working remotely), and we agreed to hand it off. I'm not aware of the orgs or anyone else having plans to publish anything.

Comment by MichaelStJules on Animal Welfare Fund: Ask us anything! · 2021-05-22T19:31:17.305Z · EA · GW

I guess it seems hard for me to understand thinking both: 

A) Diet change has more negative effects on wild animals than positive effects on farmed animals. 

And B) Diet changes’ negative effects on wild animals are in expectation greater than the positive effects from further work on wild animal welfare (e.g., of the sort WAI completes). 

But maybe I am misunderstanding. Do you think both of those?


In short, I think 

  1. A is reasonably likely to be true.
  2. If A is true, then B is very likely to be true, too (I'm less sure about the reverse implication).
  3. A's probability itself seems really uncertain to me, and I'm not comfortable picking one number before seeing models.  Picking 50% seems wrong, since I don't have evidential symmetry as in simple cluelessness; this is a case of complex cluelessness.

On 1, the main reasons diet change would be bad for wild animals would be through wild fishes and wild invertebrates (and Brian Tomasik's writing is where I'd start). Because of the number of animals involved (far more fishes and invertebrates than chickens, and there may be generational population effects since you prevent descendants, too, but maybe what matters most is carrying capacity), it seems pretty plausible these negative effects could heavily outweigh the positives for farmed animals. I think one thing Brian might not have been aware of at the time is that many wild fishes are caught to feed farmed fishes, so fish farming might be good for reducing wild fish populations. There's also all the plastic pollution from fishing that plausibly reduces populations, and not just fish populations. On the other hand, maybe the wild fishes get replaced with more populous r-selected species, and that's bad.

I think 2 is true, because 

  • I already think the number of wild animals affected will be larger from diet change, since this is a major ecosystem change whereas wild animal welfare interventions will be more targeted.
  • A implies the negative effects of diet change are quite large (enough to make up for the benefits to farmed chickens and farmed aquatic animals), and the worlds in which A is true but B is not are the (in my view) unlikely ones in which we're radically interfering in nature to help wild animals through population control or genetic interventions, because I'd guess that's what it takes to have a similarly large effect.

So for the -1000 to 900 effect on wild animals from diet change, something towards the low end seems more likely (through increasing populations through rewilding or not increasing populations as much by not increasing land use for agriculture as much) than something towards the high end (through a small increase in the probability of radical intervention in nature to help wild animals).

Then impacts specifically on wild animals cause the estimate to shift somewhat downward. Impacts on wild animals may be, say, [-1000, 900]. Say, mu=-50, sigma=~450 

The [-1000, 900] wasn't intended to be a confidence interval. These are the expected values of different models, and I have a lot of model uncertainty that's too hard to quantify to put everything together in one big model and get a single expected value out. I don't have just one expected value I'm willing to run with; it seems too arbitrary to pick knowing so little. Still, as I said in my previous paragraph, something near the low end seems more likely than something near the high end.

I also have deep uncertainty about the effects of climate change on wild animals, and diet change mitigates climate change.