Even Allocation Strategy under High Model Ambiguity 2020-12-31T09:10:09.048Z
[Summary] Impacts of Animal Well‐Being and Welfare Media on Meat Demand 2020-11-05T09:11:38.138Z
Hedging against deep and moral uncertainty 2020-09-12T23:44:02.379Z
Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering? 2020-06-22T16:41:58.831Z
Physical theories of consciousness reduce to panpsychism 2020-05-07T05:04:39.502Z
Replaceability with differing priorities 2020-03-08T06:59:09.710Z
Biases in our estimates of Scale, Neglectedness and Solvability? 2020-02-24T18:39:13.760Z
[Link] Assessing and Respecting Sentience After Brexit 2020-02-19T07:19:32.545Z
Changes in conditions are a priori bad for average animal welfare 2020-02-09T22:22:21.856Z
Please take the Reducing Wild-Animal Suffering Community Survey! 2020-02-03T18:53:06.309Z
What are the challenges and problems with programming law-breaking constraints into AGI? 2020-02-02T20:53:04.259Z
Should and do EA orgs consider the comparative advantages of applicants in hiring decisions? 2020-01-11T19:09:00.931Z
Should animal advocates donate now or later? A few considerations and a request for more. 2019-11-13T07:30:50.554Z
MichaelStJules's Shortform 2019-10-24T06:08:48.038Z
Conditional interests, asymmetries and EA priorities 2019-10-21T06:13:04.041Z
What are the best arguments for an exclusively hedonistic view of value? 2019-10-19T04:11:23.702Z
Defending the Procreation Asymmetry with Conditional Interests 2019-10-13T18:49:15.586Z
Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests 2019-07-04T23:56:44.330Z


Comment by michaelstjules on Is EA just about population growth? · 2021-01-18T07:41:05.988Z · EA · GW

The procreation asymmetry is one of my strongest intuitions. Essentially, it's never worse for an individual to never be born (for their own sake, since if they're not born, nothing can matter to them), but it is worse if they are born and have a bad/miserable life. Furthermore, I don't think additional good lives can make up for bad lives, so I believe in a hard asymmetry, and am an antinatalist. Thomas's paper discusses soft asymmetries, according to which good lives can make up for bad lives, but there's no point in adding more people (for their own sake, ignoring their effects on others) if the total welfare is guaranteed to be positive (or 0).

I'm also not sure that death is bad for the person who dies, since nothing can matter to them after they die, although, like with the procreation asymmetry, I think death can be better.

I've written about my views in my shortform, here, here and here. I'm roughly a negative prioritarian, close to a negative utilitarian, so I aim to minimize involuntary suffering.

Comment by michaelstjules on Is EA just about population growth? · 2021-01-17T20:05:48.524Z · EA · GW

There are many different person-affecting views that can avoid treating ensuring people are born like saving lives, and are compatible with statements like "This is for our grandchildren", although they may bring in their own counterintuitive issues. I would recommend the paper "The Asymmetry, Uncertainty, and the Long Term" by Teruji Thomas, in particular (although some parts are pretty technical, so maybe just watch the talk). Maybe also check out "Population Axiology" by Hilary Greaves for an overview of different theories, including person-affecting ones.

Also, if you think of saving lives as reducing the number of life years lost (to death), then preventing births saves lives. Minimizing total disability-adjusted life-years would be similar. This leads to an antinatalist position, though.

Comment by michaelstjules on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-01-16T19:35:58.760Z · EA · GW

I'm pretty suspicious about approaches which rely on personal identity across counterfactual worlds; it seems pretty clear that either there's no fact of the matter here, or else almost everything you can do leads to different people being born (e.g. by changing which sperm leads to their conception).

These approaches don't need to rely on personal identity across worlds; either they already "work" even without this (i.e. solve the nonidentity problem) or (I think) you can modify them into wide person-affecting views, using partial injections like the counterpart relations in this paper/EA Forum summary (but dropping the personal identity preservation condition, and using pairwise mappings between all pairs of options instead of for all available options at once).

And secondly, this leads us to the conclusion that unless we quickly reach a utopia where everyone has positive lives forever, then the best thing to do is end the world as soon as possible.

I don't see how this follows for the particular views I've mentioned, and I think it contradicts what I said about soft asymmetry, which does not rely on personal identity and which is satisfied by some of the views described in Thomas's paper and by my attempt to generalize the view in my post (I'm not sure about Dasgupta's approach). These views don't satisfy the independence of irrelevant alternatives (most person-affecting views don't), and the option of ensuring everyone has positive lives forever is not practically available to us (except as an unlikely fluke, which an approach dealing with uncertainty appropriately should handle, like in Thomas's paper), so we can't use it to rule out other options.

Which I don't see a good reason to accept.

Even if they did imply this (I don't think they do), the plausibility of the views would be at least a reason to accept the conclusion, right? Even if you have stronger reasons to reject it.

Comment by michaelstjules on Scope-sensitive ethics: capturing the core intuition motivating utilitarianism · 2021-01-16T06:41:16.981Z · EA · GW

On more modest person-affecting views you might not be familiar with, I'd point you to  

  1. The Asymmetry, Uncertainty, and the Long Term by Teruji Thomas, and
  2. Dasgupta's approach discussed here:

I also wrote this post defending the asymmetry, and when I tried to generalize the approach to choosing among more than two options and multiple individuals involved*, I ended up with a soft asymmetry: considering only the interests of possible future people, it would never be worse if they aren't born, but it wouldn't be better either, unless the aggregate welfare were negative.

*using something like the beatpath method discussed in Thomas's paper to get a transitive but incomplete order on the option set

And I looked into something like modelling ethics as a graph traversal problem: you go from option A to option B if the individuals who would exist in A have more interest in B than in A (or the moral reasons, from the point of view of A, in favour of B outweigh those in favour of A). Then you either pick the option you visit the most asymptotically, or accumulate scores on the options as you traverse, according to the differences in interest between options, and pick the option which dominates asymptotically (also checking multiple starting points).
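Here's a quick toy sketch in Python of the "pick the option you visit the most" version (the greedy move rule, the option names, and the preference numbers are my own simplifications, not a worked-out proposal):

```python
from collections import Counter

def most_visited(options, pref, steps=1000):
    """Walk over the option set: from the current option, move to the
    option its own population most prefers (largest positive
    pref(current, other)); stay put when nothing is preferred.
    Run from every starting point and return the most-visited option."""
    visits = Counter()
    for start in options:
        cur = start
        for _ in range(steps):
            visits[cur] += 1
            gains = {o: pref(cur, o) for o in options if o != cur}
            best = max(gains, key=gains.get)
            if gains[best] > 0:
                cur = best
    return visits.most_common(1)[0][0]

# Toy preferences: A's population prefers B, B's prefers C, C is stable,
# so the walk gets absorbed at C from every starting point.
pref = lambda a, b: {("A", "B"): 1, ("B", "C"): 1}.get((a, b), 0)
winner = most_visited(["A", "B", "C"], pref)
```

A walk like this is one way to get a choice out of pairwise, possibly cyclic preferences, since it doesn't require a transitive ranking of the options up front.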

Comment by michaelstjules on Why I'm focusing on invertebrate sentience · 2021-01-15T03:47:55.879Z · EA · GW

I agree, but I'm not sure how available this info has been, maybe until recently. This might be a useful approximation:

Number of synapses could also be relevant, but I'd assume this data is even harder to find.

Comment by michaelstjules on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-14T20:15:02.793Z · EA · GW

(EDIT: Chris Meacham came up with a similar example here. I missed that comment before writing this one.)

On the Addendum, here's an example with three options, with four individuals with welfares 1 through 4 split across the first two worlds:

  1. World 1: two extra people exist, with welfares 1 and 4.
  2. World 2: two extra people exist, with welfares 2 and 3.
  3. World 3: no extra people exist.

In world 1, the individual with welfare 4 will be at their peak under any counterpart relation, and the individual with welfare 1 will not be at their peak under any counterpart relation, since their counterpart will have higher welfare (2 or 3 > 1) in world 2. In world 2, the individuals with welfares 2 and 3 can't both be at their peaks simultaneously, since one will have a counterpart with higher welfare (4 > 2, 3) in world 1. Therefore, both world 1 and world 2 cause harm, while world 3 is harmless, so only world 3 is permissible.

(EDIT: the following would also work, by the same argument:

  1. No extra people exist.)

The same conclusion follows with any identity constraints, since this just rules out some mappings.

In this way, I see the view as very perfectionist. The view is after all essentially that anything less than a maximally good life is bad (counts against that life), with some specification of how exactly we should maximize. This is similar to minimizing total DALYs, but DALYs use a common reference for peak welfare for everyone, 80-90 years at perfect health (and age discounting?).

Comment by michaelstjules on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-14T19:39:53.703Z · EA · GW

One might also consider condition (iii) of HMV (that is, in worlds where a subject doesn’t exist, we treat her welfare as if it is equal to 0) to be ad hoc. So we treat her as having welfare 0, but only for the purposes of comparing it to her welfare in other worlds. But we don’t actually think she has welfare 0 at that world, because she doesn’t exist. It feels a bit tailor made.


You might think people who exist can compare their own lives to nonexistence (or there are comparisons to be made on their behalf, since they have interests), but an individual who doesn't exist can't make any such comparisons (and there are no comparisons to make on their behalf, since they have no interests). From her point of view in the worlds where she exists, she does have welfare 0 in the worlds where she doesn't exist, but in the worlds where she doesn't exist, she has no point of view and is not a moral patient.

Or, existing people can have nonexisting counterparts, but nonexisting people do not get counterparts at all, since they're not moral patients.

Comment by michaelstjules on Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations" · 2021-01-14T19:24:03.690Z · EA · GW

Condition (4) in the definition of a saturating counterpart relation (that is, there is no other mapping that satisfies the first 3 conditions but which results in W1 having lower harm when combined with HMV) seems to be a bit ad hoc and designed to get him out of various situations, like the absurd conclusion, without having independent appeal.


One way to motivate this is that it's a generalization of symmetry. Counterparts are chosen so that their welfares match as closely as possible (after any personal identity-preservation constraints, which could be dropped), where the distance between two worlds is roughly measured in additive terms (rather than, say, by minimizing the maximum harm), which matches our additive aggregation for calculating harm.

If you took one world, and replaced all the identities with a disjoint set of identities of the same numbers, while preserving the distribution of welfare, adding condition (4) to the other conditions makes these worlds morally equivalent. If you switched the identities and changed exactly one welfare, then the mapping of identities would be one of the permissible mappings under condition (4). It picks out the intuitively correct mappings in these cases. Maybe condition (4) is unjustifiably strong for this, though.

Another way to look at it is that the harm in a given world is the minimum harm under all mappings satisfying conditions 1-3. Mappings which satisfy the minimum in some sense make the worlds most similar (under the other constraints and definition of harm).
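As a toy illustration of "minimum harm over all mappings", here's a brute-force sketch (with my own simplifications: equal population sizes, no identity constraints, and harm measured as each person's shortfall below their counterpart's welfare, which is not exactly the view's definition of harm):

```python
from itertools import permutations

def min_harm(world, other):
    """Harm of `world` relative to `other`, minimized over counterpart
    mappings: pair each person in `world` with a counterpart in `other`
    and sum each person's welfare shortfall below their counterpart's.
    Simplified: equal populations, no identity constraints."""
    best = float("inf")
    for perm in permutations(other):
        harm = sum(max(0, c - w) for w, c in zip(world, perm))
        best = min(best, harm)
    return best

# Welfares {1, 4} in one world and {2, 3} in another, plus an empty world:
harm1 = min_harm([1, 4], [2, 3])  # best mapping pairs 1<->2 and 4<->3
harm2 = min_harm([2, 3], [1, 4])  # best mapping pairs 2<->1 and 3<->4
harm3 = min_harm([], [])          # an empty world harms no one
```

Both populated worlds come out with positive minimal harm here, while the empty world is harmless, matching the argument in my other comment.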

Furthermore, if you were doing infinite ethics and didn't have any other way to match identities between worlds (e.g. locations) or had people left over who weren't (yet) matched (after meeting identity constraints), you could do something like this, too.  Pairwise, you could look at mappings of individual identities between the two worlds, and choose the mappings that lead to the minimum (infimum) absolute aggregate of differences in welfares, where the differences are taken between the mapped counterparts. So, this is choosing the mappings which make the two worlds look as similar as possible in terms of welfare distributions. The infimum might not actually be attained, but we're more interested in the number than the mappings, anyway. If, within some distance of the infimum (possibly 0, so an attained minimum), all the mappings lead to the same sign for the aggregate (assuming the aggregate isn't 0), then we could say one world is better than the other.

Comment by michaelstjules on RISC at UChicago is Seeking Ideas for Improving Animal Welfare · 2021-01-12T22:26:31.321Z · EA · GW

I'm not sure if this would be advocating cheating or just using the research that's already out there, but people should check out Charity Entrepreneurship's and Rethink Priorities' research on different new proposals.

Also check out ACE's and Founders Pledge's research on existing work, as well as what's getting funded by the CEA Animal Welfare Fund, Open Philanthropy Project and ACE Movement Grants.

Comment by michaelstjules on How much (physical) suffering is there? Part I: Humans · 2021-01-12T22:11:01.868Z · EA · GW

Hmm, this one has Deaths, YLDs and DALYs (among others in the advanced settings), so you could just use YLDs.

Comment by michaelstjules on How much (physical) suffering is there? Part I: Humans · 2021-01-11T03:29:16.485Z · EA · GW

One big issue with using DALYs as a proxy for suffering is that they count years of life lost due to death (up to some reference, I think the average of the longest life expectancy of any country, so 80-90 years), but you do not suffer after you are dead. I think you only want the YLDs, if you're just trying to estimate suffering. I think some datasets will give you both DALYs and YLLs, so you can just take the difference: YLDs = DALYs - YLLs.
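In code, the bookkeeping is just the following (the figures are made up for illustration, not real GBD numbers):

```python
def ylds(dalys, ylls):
    """Years Lived with Disability: DALYs minus Years of Life Lost,
    since DALYs = YLLs + YLDs by definition."""
    return dalys - ylls

# Hypothetical condition with 1.5M DALYs, 0.9M of which come from
# premature death: only the remaining 0.6M are years lived in ill health.
example = ylds(1_500_000, 900_000)
```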

You might find some other useful posts with the Subjective Well-Being tag. Or, specifically, see the research by the Happier Lives Institute and Rethink Priorities on this topic.

Comment by michaelstjules on How much (physical) suffering is there? Part II: Animals · 2021-01-11T03:19:08.669Z · EA · GW

This might be of interest:

Some whales/dolphins have more neurons in their cortices than humans.

That being said, I'd be reluctant to rely too much on raw counts to decide moral weight. There are many other considerations. Check out Jason Schukraft's work for Rethink Priorities.

Comment by michaelstjules on Two Nice Experiments on Democracy and Altruism · 2021-01-03T19:12:37.039Z · EA · GW

The constitution and supreme courts are also important. For example, the first few Muslim bans by Trump were found unconstitutional: these decisions represented the interests of non-voting foreigners.

On the other hand, a woman in Switzerland was denied citizenship through a vote:

Comment by michaelstjules on Even Allocation Strategy under High Model Ambiguity · 2021-01-01T21:01:56.916Z · EA · GW

So for the maximin we are minimizing over all joint distributions that are ε-close to our initial guess?


Yes. That's more accurate than what I said (originally), since you use a single joint distribution for all of the options, basically a distribution over ℝ^N for N options, and you look at distributions ε-close to that joint distribution.


If I can't tell the options apart any more, how is the 1/n strategy better than just investing everything into a random option? Is it just about variance reduction? Or is the distance metric designed such that shifting the distributions into "bad territories" for more than one of the options requires more movement? 

Hmm, good point. I was just thinking about this, too. It's worth noting that in Proposition 3, they aren't just saying that the 1/N distribution is optimal, but actually that in the limit as ε → ∞, it's the only distribution that's optimal.

I think it might be variance reduction, and it might require risk-aversion, since they require the risk functionals/measures to be convex (I assume strictly), and one of the two examples they use of risk measures explicitly penalizes the variance of the allocation (and I think it's the case for the other). When you increase ε, the radius of the neighbourhood around the joint distribution, you can end up with options which are less correlated or even inversely correlated with one another, and diversification is more useful in those cases. They allow negative allocations, too, so because the optimal allocation is positive for each, I expect that it's primarily because of variance reduction from diversification across (roughly) uncorrelated options. I made some edits.

For donations, maybe decreasing marginal returns could replace risk-aversion for those who aren't actually risk-averse with respect to states of the world, but I don't think it will follow from their result, which assumes constant marginal returns.
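To make the variance-reduction intuition concrete, here's a minimal sketch (not the paper's model; the equal-variance, common-correlation setup is my own simplification):

```python
def portfolio_variance(weights, var, corr=0.0):
    """Variance of an allocation over options that each have variance
    `var` and share a common pairwise correlation `corr`."""
    n = len(weights)
    total = 0.0
    for i in range(n):
        for j in range(n):
            cov = var if i == j else corr * var
            total += weights[i] * weights[j] * cov
    return total

n = 4
even = [1.0 / n] * n              # the 1/N allocation
concentrated = [1.0] + [0.0] * (n - 1)
v_even = portfolio_variance(even, 1.0)          # var/n with corr = 0
v_conc = portfolio_variance(concentrated, 1.0)  # var, no diversification
v_corr1 = portfolio_variance(even, 1.0, 1.0)    # perfectly correlated
```

With uncorrelated options, the even allocation's variance is var/N versus var for concentrating on one option; as the correlation rises toward 1, diversification stops helping, which fits the point about ε allowing less correlated distributions.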

Comment by michaelstjules on Two Nice Experiments on Democracy and Altruism · 2020-12-31T09:44:41.008Z · EA · GW

Great post! The results aren't super surprising, but it's nice to see them anyway. Has there been much empirical evidence on this before?

Milton Friedman argued for welfare on similar grounds:

Friedman’s argument comes in chapter 9 of his Capitalism and Freedom , and is based on the idea that private attempts at relieving poverty involve what he called “neighborhood effects” or positive externalities. Such externalities, Friedman argues, mean that private charity will be undersupplied by voluntary action.

[W]e might all of us be willing to contribute to the relief of poverty, provided everyone else did. We might not be willing to contribute the same amount without such assurance.

Paul Christiano also wrote a blog post titled Moral public goods.


For what it's worth, we might also want to consider harms of democracy. With democracy that better represents voters, we might expect higher taxes and therefore less money for the Gates Foundation and the Open Philanthropy Project, and it may or may not be worth that loss.

Also, you need to be signed into Google to open the links; it might be better to replace them with the original links.

Comment by michaelstjules on A case against strong longtermism · 2020-12-31T09:22:59.518Z · EA · GW

This might also be of interest: 

The Sequential Dominance Argument for the Independence Axiom of Expected Utility Theory by Johan E. Gustafsson, which argues for the Independence Axiom with stochastic dominance, a minimal rationality requirement, and also against the Allais paradox and Ellsberg paradox (ambiguity aversion). 

However, I think a weakness in the argument is that it assumes the probabilities exist and are constant throughout, but they aren't defined by assumption in the Ellsberg paradox. In particular, looking at the figure for case 1, the argument assumes p is the same when you start at the first random node as it is looking forward when you're at one of the two choice nodes, 1 or 2. In some sense, this is true, since the colours of the balls don't change in between, but you don't have a subjective estimate of p by assumption, and "unknown probability" is a contradiction in terms for a Bayesian. (These are notes I took when I read the paper a while ago, so I hope they make sense! :P.)

Another weakness is that I think these kinds of sequential lotteries are usually only relevant in choices where an agent is working against you or trying to get something from you (e.g. money for their charity!), which also happen to be the cases where ambiguity aversion is most useful. You can't set up such a sequential lottery for something like the degree of insect consciousness, P vs NP,  or whether the sun will rise tomorrow.

See my discussion with Owen Cotton-Barratt.

Comment by michaelstjules on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-30T02:37:42.263Z · EA · GW

There are non-measurable sets (unless you discard the axiom of choice, but then you'll run into some significant problems). Indeed, the existence of non-measurable sets is the reason for so much of the measure-theoretic formalism.

This depends on the space. 

It's at least true for real-valued intervals with continuous measures, of course, but I think you're never going to ask for the measure of a non-measurable set in real-world applications, precisely because they require the axiom of choice to construct (at least for the real numbers, and I'd assume, by extension, any subset of any ℝ^n), and no natural set you'll be interested in that comes up in an application will require the axiom of choice (more than dependent choice) to construct. I don't think the existence of non-measurable sets is viewed as a serious issue for applications.

It is not true in a countable measure space (or, at least, you could always extend the measure to get this to hold), since assuming each singleton (like {n}) is measurable, every union of countably many singletons is measurable, and hence every subset is measurable (any subset S is a countable union of singletons, since S is countable). In particular, if you're just interested in the number of future people, assuming there are at most countably infinitely many (so setting aside the many-worlds interpretation of quantum mechanics for now), then your space is just the set of non-negative integers, which is countable.

using infinite sets (which clearly one would have to do if reasoning about all possible futures)

You could group outcomes to represent them with finite sets. Bayesians get to choose the measure spaces/propositions they're interested in. But again, I don't think dealing with infinite sets is so bad in applications.

Comment by michaelstjules on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-29T16:54:53.608Z · EA · GW

I'd say they mean you can effectively ignore the differences in terminal value in the short term, e.g. the welfare of individuals in the short term only really matters for informing long-term consequences and effectively not in itself, since it's insignificant compared to differences in long-term value.

In other words, short-term welfare is effectively not an end in itself.

Comment by michaelstjules on What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? · 2020-12-29T02:37:02.820Z · EA · GW

Maybe there could be groups of countries that agree to coordinate on this, and isolate themselves physically from countries outside their own groups?

I guess there might be ways to deliver disease through the air or via wildlife (wild animals), or just sneaking into a country. The solution to that is domes. :P

Comment by michaelstjules on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-28T23:57:00.223Z · EA · GW

Either way, the problems to work on would be chosen based on their longterm potential. It's not clear that say global health and poverty would be among those chosen. Institutional decision-making and improving the scientific process might be better candidates.

Comment by michaelstjules on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-28T23:44:30.561Z · EA · GW

You refuse to commit to a belief about x, but commit to one about y and that's inconsistent.

I would rephrase as "You say you refuse to commit to a belief about x, but seem to act as if you've committed to a belief about x". Specifically, you say you have no idea about the number of future people, but it seems like you're saying we should act as if we believe it's not huge (in expectation). The argument for strong longtermism you're trying to undermine (assuming we get the chance of success and sign roughly accurate, which to me is more doubtful) goes through for a wide range of numbers. It seems that you're committed to the belief that the expected number is well below the astronomical figures the argument posits, since you write in response "This paragraph illustrates one of the central pillars of longtermism. Without positing such large numbers of future people, the argument would not get off the ground".

Maybe I'm misunderstanding. How would you act differently if you were confident the number was far smaller in expectation, say 10^12 (about 100 times the current population), rather than have no idea?

I don't think I agree - would you commit to a belief about what Genghis Khan was thinking on his 17th birthday?


... but they'd be arbitrary, so by definition don't tell us anything about the world? 

There are certainly things I would commit to believing he was not thinking about, like modern digital computers (probability close to 1), and I'd guess he thought about food/eating at some point during the day (probability > 0.5). Basically, either he ate that day (more likely than not) and thought about food before or while eating, or he didn't eat and thought about food because he was hungry. Picking precise numbers would indeed be fairly arbitrary and even my precise bounds are pretty arbitrary, but I think these bounds are useful enough to make decisions based on if I had to, possibly after a sensitivity analysis.

If I were forced to bet on whether Genghis Khan thought about food on a randomly selected day during his life (randomly selected to avoid asymmetric information), I would bet yes.

We have theories of neurophysiology, and while none of them conclusively tells us that animals definitely feel pain, I think that's the best explanation of our current observations.

I agree, but also none of these theories tell us how much a chicken can suffer relative to humans, as far as I know, or really anything about this, which is important in deciding how much to prioritize them, if at all. There are different suggestions for how the amount of suffering scales with brain size within the EA community, and there are arguments for these, but they're a priori and fairly weak. This is one of the most recent discussions.

Comment by michaelstjules on What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? · 2020-12-27T23:45:51.288Z · EA · GW

You don't think regulating the sale and use of the technologies necessary to engineer diseases would be enough?

Comment by michaelstjules on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T21:30:54.702Z · EA · GW

There are definitely well-defined measures on any set (e.g. pick one atomic outcome to have probability 1 and the rest 0); there's just not only one, and picking exactly one would be arbitrary. But the same is true for any set of outcomes with at least two outcomes, including finite ones (or it's at least often arbitrary when there's not enough symmetry for equiprobability).

For the question of how many people will exist in the future, you could use a Poisson distribution. That's well-defined, whether or not it's a reasonable distribution to use.
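For instance (a sketch; λ = 5 is an arbitrary illustrative choice, not a serious prior over future populations):

```python
import math

def poisson_pmf(k, lam):
    """P(N = k) for N ~ Poisson(lam): a well-defined probability
    distribution over the non-negative integers."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 5.0
# The probabilities sum to (essentially) 1, so this is a coherent
# distribution over a countable outcome space, whether or not it's
# a reasonable one for the number of future people.
total = sum(poisson_pmf(k, lam) for k in range(100))
```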

Of course, trying to make your space more and more specific will run into feasibility issues.

Comment by michaelstjules on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T20:59:08.859Z · EA · GW

To me this seems like you're making a rough model with a bunch of assumptions like that past use, threats and protocols increase the risks, but not saying by how much or putting confidences or estimates on anything (even ranges). Why not think the risks are too low to matter despite past use, threats and protocols?

Comment by michaelstjules on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T20:35:52.818Z · EA · GW

I think you should specify a time period (e.g. the next 100 years) or feasibly preventable existential catastrophes. Could the heat death of the universe be an existential catastrophe? If so, I think the future population might be infinite, since anything less might be considered an existential catastrophe.

I'm not the author of this post, but I don't have only one probability distribution for this, and I don't think there's any good way to justify any particular one (although you might rule some out for being less reasonable).

Comment by michaelstjules on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T20:22:45.451Z · EA · GW

This is a very strange criticism - he says the proposition is provably false but also has nonzero probability.

He said it has zero probability but is still useful, not nonzero probability.

"It  relies on the provably false probabilistic induction". Popper was a scientific irrationalist because he denied the rationality of induction. If you deny the rationality of induction, then you must be sceptical about all scientific theories that purport to be confirmed by observational evidence. Inductive sceptics must hold that if you jumped out of a tenth floor balcony, you would be just as likely to float upwards as fall downwards. Equally, do you think that smoking causes lung cancer? Do you think that scientific knowledge has increased over the last 200 years? If you do, then you're not an inductive sceptic. Inductive scepticism can't be used to ground a criticism that distinguishes uncertain long-termist probability estimates from probability estimates based on "hard data".

I think you're overinterpreting the claim (or Ben's claim is misleading, based on what's cited). You don't have to give equal weight to all hypotheses. You might not even define their weights. The proof cited shows that the ratio of probabilities between two hypotheses doesn't change in light of new evidence that would be implied by both theories. Some theories are ruled out or made less likely in light of incompatible evidence. Of course, there are always "contrived" theories that survive, but it's further evidence in the future, Occam's razor, or priors that we use to rule them out.
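A minimal sketch of that point about ratios (the priors and likelihoods are my own toy numbers):

```python
def posterior_ratio(prior1, prior2, like1, like2):
    """Ratio P(H1|E)/P(H2|E) by Bayes' rule; the marginal P(E)
    cancels, so only priors and likelihoods matter."""
    return (prior1 * like1) / (prior2 * like2)

# If both hypotheses predict the evidence equally (here, with
# certainty), updating leaves their relative probabilities untouched:
ratio = posterior_ratio(0.2, 0.1, 1.0, 1.0)
```

Since P(E) cancels in the ratio, evidence that both hypotheses imply does nothing to separate them; evidence one of them makes less likely does.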

It also assigns very low probability to some hypotheses that are not logically or analytically false but have little to no observational support, such as "smoking does not increase the risk of lung cancer". If 'reject' means "assigns <0.001% probability to", then Bayesianism obviously does reject some hypotheses.

This depends on your priors, which may be arbitrarily skeptical of causal effects.

Comment by michaelstjules on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T19:40:54.481Z · EA · GW

"Assuming that in  2100 the world looks the same as it did during the time of past nuclear near misses, and nuclear misses are distributionally similar to actual  nuclear strikes, and [a bunch of other assumptions], then the probability of a nuclear war before 2100 is x". 

We can debate the merits of such a model, but I think it's clear  that it would be of limited use.  

But we also have to make similar (although less strong) assumptions and have generalization error even with RCTs. Doesn't GiveWell make similar assumptions about the impacts of most of their recommended charities? As far as I know, there are recent studies of GiveDirectly's effects, but the "recent" studies of the effects of the interventions of the other charities have probably had their samples chosen years ago, so their effects might not generalize to new locations. Where's the cutoff for your skepticism? Should we boycott the GiveWell-recommended charities whose ongoing intervention impacts of terminal value  (lives saved, quality of life improvements) are not being measured rigorously in their new target areas, in favour of GiveDirectly?

To illustrate the issue of generalization, GiveWell did a pretty arbitrary adjustment for El Niño for deworming, although I think this is the most suspect assumption I've seen them make.

See Eva Vivalt's research on generalization (in the Causal Inference section) or her talk here.

Comment by michaelstjules on Prabhat Soni's Shortform · 2020-12-27T19:01:40.317Z · EA · GW

Some things that are extinction risks are also s-risks, or at least risk causing a lot of suffering, e.g. AI risk and large-scale conflict. See Common ground for longtermists by Tobias Baumann for the Center for Reducing Suffering.

But ya, downside-focused value systems typically accept the procreation asymmetry, so future people not existing is not bad in itself.

Comment by michaelstjules on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T18:26:43.791Z · EA · GW

None of this is to say that we shouldn't be working on nuclear threat, of course. There are good arguments for why this is a big problem that have nothing to do with probability and subjective credences.

Can you give some examples? I expect that someone could respond "That could be too unlikely to matter enough" to each of them, since we won't have good enough data.

Comment by michaelstjules on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T09:52:47.700Z · EA · GW

One response might be that if there are unintended negative consequences, we can address those later or separately. Occasionally, optimizing for some positive effect will also maximize a negative effect, but usually the two won't coincide. So the most cost-effective ways to save lives won't be the ways that maximize the negative effects of population growth (those same negative effects will be cheaper to obtain through something other than population growth), and we can probably find more cost-effective ways to offset those effects. I wrote a post about hedging like this.

Comment by michaelstjules on What's a good reference for finding (more) ethical animal products? · 2020-12-27T09:22:44.538Z · EA · GW


I think a lot of EAs think dairy does not cause that much harm by quantity, and lacto-vegetarianism seems popular (although so is veganism). Similarly, people buy clothing much less frequently than food, so I'd expect it to matter less, too (although I'd guess mink fur is pretty bad, since the animals are so small and farmed in pretty horrific conditions).


For what it's worth, part of the reason I went vegan initially was because it didn't seem worth the effort or cost to get animal products I'd be confident were sufficiently ethical.

Comment by michaelstjules on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T08:51:32.698Z · EA · GW

I share your concerns with using arbitrary numbers and skepticism of longtermism, but I wonder if your argument here proves too much. It seems like you're acting as if you're confident that the number of people in the future is not huge, or that the interventions are otherwise not so impactful (or they do more harm than good), but I'm not sure you actually believe this. Do you? 

It sounds like you're skeptical of AI safety work, and what you seem to be proposing is that we should refuse to commit to beliefs on some questions (like the number of people in the future) and deprioritize longtermism as a result. But, again, doing so means acting as if we're committed to beliefs that would make us pessimistic about longtermism.

I think it's more fair to think that we don't have enough reason to believe longtermist work does much good at all, or more good than harm (and generally be much more skeptical of causal effects with little evidence), than it is to be extremely confident that the future won't be huge.

I think you do need to entertain arbitrary probabilities, even if you're not a longtermist, although I don't think you should commit to a single joint probability distribution. You can do a sensitivity analysis.

Here's an example: how do we decide between human-focused charities and animal charities, given the pretty arbitrary nature of assigning consciousness probabilities to nonhuman animals and the very arbitrary nature of assigning intensities of suffering to nonhuman animals?

I think the analogous response to your rejection of longtermism here would be to ignore your effects on animals, not just with donations or your career, but in your everyday life, too. But, based on this conclusion, we could reverse engineer what kinds of credences you would have to commit to if you were a Bayesian to arrive at such a conclusion (and there could be multiple compatible joint distributions). And then it would turn out you're acting as if you're confident that factory farmed chickens suffer very little (assuming you're confident in the causal effects of certain interventions/actions), and you're suggesting everyone else should act as if factory farmed chickens suffer very little.

Comment by michaelstjules on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T07:47:50.143Z · EA · GW

What do you think about using ranges of probabilities instead of single (and seemingly arbitrary) sharp probabilities and doing sensitivity analysis? I suppose when there's no hard data, there might be no good bounds for the ranges, too, although Scott Alexander has argued against using arbitrarily small probabilities.
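To give a sense of what I mean, here's a minimal sketch of such a sensitivity analysis (all the payoffs, probability bounds, and names below are made up for illustration, not taken from the post):

```python
import numpy as np

# Made-up payoffs (arbitrary value units): the intervention succeeds with
# some probability p that the evidence can't pin down sharply.
VALUE_IF_SUCCESS = 1000.0
VALUE_IF_FAILURE = -10.0  # small cost of wasted effort

def expected_value(p):
    """Expected value of acting, for a sharp success probability p."""
    return p * VALUE_IF_SUCCESS + (1 - p) * VALUE_IF_FAILURE

# Instead of committing to one sharp p, sweep a whole range of them.
p_grid = np.linspace(1e-6, 1e-2, 1000)
evs = expected_value(p_grid)

# If the EV has the same sign across the whole range, the decision is
# robust to the arbitrariness of p; if the sign flips, it isn't.
robust = (evs.min() > 0) or (evs.max() < 0)
print(f"EV range: [{evs.min():.3f}, {evs.max():.3f}], robust: {robust}")
```

With these particular numbers the sign flips across the range, so the decision isn't robust, and you'd either need to narrow the range or use a rule for decision-making under deep uncertainty.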

Comment by michaelstjules on A case against strong longtermism · 2020-12-23T18:47:40.344Z · EA · GW

Greaves and MacAskill do discuss risk aversion, uncertainty/ambiguity aversion and the issue of seemingly arbitrary probabilities in sections 4.2 and 4.5. They admit that risk aversion with respect to the difference one makes does undermine strong longtermism (and I think ambiguity aversion with respect to the difference one makes would, too, although it might also lead you to doing as little as possible to avoid backfiring), although they cite (Snowden, 2015), which claims that aversion with respect to the difference one makes is too agent-relative and therefore incompatible with impartiality.

Apparently they're working on another paper with Mogensen on these issues.

They also point out that organizations like GiveWell deal with cluelessness by effectively assuming it away, and you haven't really addressed this point. However, I think the steelman for GiveWell is that they're extremely skeptical about causal effects (or optimistic about the speculative long-term causal effects of their charities' interventions) and possibly uncertainty/ambiguity-averse with respect to the difference one makes (EDIT: although it's not clear that this justifies ignoring speculative future effects; rather it might mean assuming worst cases).

See also the following posts and the discussion:

Greaves and MacAskill, in my view, don't adequately address concerns about skepticism of causal effects and the value of their specific proposals. I discuss this in this thread and this thread.

Comment by michaelstjules on A case against strong longtermism · 2020-12-23T17:55:25.789Z · EA · GW

On the expected value argument, are you referring to this?

The answer I think lies in an oft-overlooked fact about expected values: that while probabilities are random variables, expectations are not. Therefore there are no uncertainties associated with predictions made in expectation. Adding the magic words “in expectation” allows longtermists to make predictions about the future confidently and with absolute certainty.

Based on the link to the wiki page for random variables, I think Vaden didn't mean that the probabilities themselves follow some distributions, but was rather just identifying probability distributions with the random variables they represent, i.e., given any probability distribution, there's a random variable distributed according to it.

However, I do think his point does lead us to want to entertain multiple probability distributions.

If you did have probabilities over your outcome probabilities or aggregate utilities, I'd think you could just take iterated expectations. If X is the aggregate utility and P is your (random) distribution over outcomes, you'd just take the expected value of X with respect to P first, and calculate:

E[X] = E[ E[X | P] ]
If the dependence is more complicated (you talk about correlations), you might use (something similar to) the law of total expectation.
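As a toy numeric illustration of iterated expectations (all numbers hypothetical): suppose you're unsure whether the true success probability is 0.2 or 0.6, with 50/50 credence in each, and the outcome pays 1 on success and 0 on failure. Then the two levels of uncertainty just collapse:

```python
# Credences over the unknown success probability p (hypothetical numbers).
credences = {0.2: 0.5, 0.6: 0.5}  # maps p -> P(true probability is p)

# E[X | p] = p here (payoff 1 on success, 0 on failure), so
# E[X] = E[ E[X | p] ] = sum over p of P(p) * p.
ev = sum(weight * p for p, weight in credences.items())
print(ev)  # same as acting with a single sharp probability of 0.4
```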

And you'd use Gilboa and Schmeidler's maxmin expected value approach if you don't even have a joint probability distribution over all of the probabilities.

A more recent alternative to maxmin is the maximality rule, which is to rule out any choices whose expected utilities are weakly dominated by the expected utilities of another specific choice.

Mogensen comes out against this rule in the end for being too permissive, though. However, I'm not convinced that's true, since that depends on your particular probabilities. I think you can get further with hedging.
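To make the difference between the two rules concrete, here's a toy sketch (the options, priors and utilities are all made up for illustration, not from any of the papers mentioned):

```python
# Choosing among options when you only have a *set* of plausible
# probability distributions, not a single one.

# The probability of the "good" outcome under each prior you can't rule out.
priors = [0.9, 0.5, 0.1]

# Utilities of each available choice, as (utility if good, utility if bad).
choices = {
    "safe bet":   (10.0, 8.0),
    "long shot":  (100.0, -50.0),
    "do nothing": (0.0, 0.0),
}

def ev(option, p):
    good, bad = choices[option]
    return p * good + (1 - p) * bad

# Gilboa-Schmeidler maxmin: rank each option by its worst-case expected
# utility across the set of priors, then pick the best worst case.
worst_case = {name: min(ev(name, p) for p in priors) for name in choices}
maxmin_choice = max(worst_case, key=worst_case.get)

# Maximality: rule out an option only if some other option beats it
# under *every* prior in the set.
def dominated(name):
    return any(
        all(ev(other, p) > ev(name, p) for p in priors)
        for other in choices if other != name
    )

maximal = [name for name in choices if not dominated(name)]
```

In this example maxmin uniquely picks the safe bet, while maximality only rules out doing nothing and leaves both the safe bet and the long shot on the table. That's the permissiveness at issue: maximality can leave many mutually incompatible options undefeated.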

Comment by michaelstjules on Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure · 2020-12-20T18:45:08.168Z · EA · GW

Ah, in the quote I took, I thought you were comparing s-risks to x-risks where the good is lost when giving non-negligible credence to non-negative views, but you're comparing s-risks to far worse s-risks (x-risk-scale s-risks). I misread; my mistake.

Comment by michaelstjules on Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure · 2020-12-20T14:10:58.302Z · EA · GW

You might be interested in Teruji Thomas, "The Asymmetry, Uncertainty, and the Long Term", which covers different kinds of procreation asymmetries and concludes with the section "6 Extinction Risk Revisited". Some of the paper is pretty technical, although the conclusion isn't. You could read section 6, watch the talk (25 minutes), and then read section 6 again.

Comment by michaelstjules on Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure · 2020-12-20T13:53:07.883Z · EA · GW

Fair enough on the definitions. I had this talk in mind, but Max Daniel made a similar point about the definition in parentheses. I'm not sure people have cases like astronomical numbers of (not extremely severe) headaches in mind, but I suppose without any kind of lexicality, there might not be any good way to distinguish. I would think something more like your hellish example + billions of times more happy people would be more illustrative. Some EAs working on s-risks do hold lexical views.

EDIT: below was based on a misreading.

With even a tiny weight on views valuing good parts of future civilization the former could be an extremely good world, while the latter would be a disaster by any reasonable mixture of views. Even with a fanatical restriction to only consider suffering and not any other moral concerns, the badness  of the former should be almost completely ignored relative to the latter if there is non-negligible credence assigned to both.

This to me requires pretty specific assumptions about how to deal with moral uncertainty. It sounds like you're assuming a common scale between the theories (maximizing expected choice-worthiness), but that too could lead to fanaticism if you give any credence to lexicality. While I think there's an intuitive case for it when comparing certain theories (e.g. suffering should be valued roughly the same regardless of the theory), assuming a common scale also seems like the most restrictive approach to moral uncertainty among those discussed in the literature, and I'm not aware of any other approach that would lead to your conclusion. If you gave equal weight to negative utilitarianism and classical utilitarianism, for example, and used any other approach to moral uncertainty, it's plausible to me that s-risks would come out ahead of x-risks (although there's some overlap in causes, so you might work on both).

You could even go up a level and use a method for moral uncertainty for your uncertainty over which approach to moral uncertainty to use on normative theories, and as long as you don't put most of your credence in a common-scale approach, I don't think your conclusion would follow.

Comment by michaelstjules on Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure · 2020-12-19T17:42:31.013Z · EA · GW

Just a clarification: s-risks (risks of astronomical suffering) are existential risks. I think you may be thinking of extinction risks, specifically. Some existential risks, taken broadly enough, are both extinction risks and s-risks, e.g. AI risks, although the focus of work may be different depending on the more specific kind of AI risk.

EDIT: I stand corrected. See Carl Shulman's reply.

Comment by michaelstjules on [Summary] Impacts of Animal Well‐Being and Welfare Media on Meat Demand · 2020-12-19T06:57:49.200Z · EA · GW

Also, Ezra Klein is leaving Vox for the New York Times. I'm not sure if he'll write about animal welfare there, but if he does, this would be huge. There's also a risk that Vox won't be as good without him.

Comment by michaelstjules on A case against strong longtermism · 2020-12-18T12:58:13.275Z · EA · GW

I think the probability of these events regardless of our influence is not what matters; it's our causal effect that does. Longtermism rests on the claim that we can predictably affect the longterm future positively. You say that it would be overconfident to assign probabilities too low in certain cases, but that argument also applies to the risk of well-intentioned longtermist interventions backfiring, e.g. by accelerating AI development faster than we align it, an intervention leading to a false sense of security and complacency, or the possibility that the future could be worse if we don't go extinct. Any intervention can backfire. Most will accomplish little. With longtermist interventions, we may never know, since the feedback is not good enough.

I also disagree that we should have sharp probabilities, since this means making fairly arbitrary but potentially hugely influential commitments. That's what sensitivity analysis and robust decision-making under deep uncertainty are for. The requirement that we should have sharp probabilities doesn't rule out the possibility that we could come to vastly different conclusions based on exactly the same evidence, just because we have different priors or weight the evidence differently.

Comment by michaelstjules on Ask Rethink Priorities Anything (AMA) · 2020-12-16T03:24:39.576Z · EA · GW

Maybe Aquatic Life Institute or Fish Welfare Initiative would work on this. I'm not sure if they're already aware. I think it would be closer to ALI's work.

Comment by michaelstjules on [Summary] Impacts of Animal Well‐Being and Welfare Media on Meat Demand · 2020-12-13T08:47:24.048Z · EA · GW

There are also other things the animal protection movement does that often get attention in major outlets, like

  1. Undercover investigations: Animal Equality, Mercy for Animals, others
  2. Lawsuits: The Nonhuman Rights Project, Animal Legal Defense Fund, Richman Law Group, The Albert Schweitzer Foundation; see the ACE filter
  3. Ballot initiatives: Sentience Politics in Switzerland, many groups got involved in state initiatives in the US (including OpenPhil). See this study on the effect of ads on egg consumption surrounding Proposition 2.
  4. Protests.

Comment by michaelstjules on What are the most common objections to “multiplier” organizations that raise funds for other effective charities? · 2020-12-11T20:29:30.370Z · EA · GW

No, I agree with that.

Comment by michaelstjules on What are the most common objections to “multiplier” organizations that raise funds for other effective charities? · 2020-12-09T21:17:47.729Z · EA · GW

Open Phil and the CEA Global Health and Development Fund have each made a grant to One for the World before, Open Phil has made grants to Founders Pledge, and the EA Infrastructure Fund has made grants to TLYCS, One for the World, RC Forward, Raising for Effective Giving, Founders Pledge, a tax-deductible status project run by Effective Altruism Netherlands, Generation Pledge, Lucius Caviola and Joshua Greene, EA Giving Tuesday and Effektiv Spenden.

Of the 4 EA funds, the EA Infrastructure Fund has paid out the least to date, though, and it looks like they all started paying out in 2017.

Comment by michaelstjules on What are the most common objections to “multiplier” organizations that raise funds for other effective charities? · 2020-12-09T20:53:31.597Z · EA · GW

The multiplier for the single organization you want to support within the mix could still be > 1. From this:

According to their own calculations, for every $1 spent

  • Raising For Effective Giving (REG) raised $8 to various effective charities ($3.21 for MIRI, $1.37 for AMF, $0.81 for animal charities) (2015 report)
  • The Life You Can Save (TLYCS) raised $2.27 to AMF and $3.17 to their other top charities (2015 report)
  • Giving What We Can (GWWC) raised $6 to their top charities, estimated $104 if future donations were included[1] (2009-2014 report)

These are old numbers, though.

Comment by michaelstjules on What are the most common objections to “multiplier” organizations that raise funds for other effective charities? · 2020-12-09T20:48:11.125Z · EA · GW

The talk also argues for focusing on the largest donors, which in EA usually means Open Phil. But that’s less of an option for multiplier organizations as Open Phil’s EA “program does not fund organizations focused primarily on raising money for effective charities or organizations primarily focused on animal welfare or global poverty (though organizations in these categories might qualify for support under another focus area, e.g. farm animal welfare).”

You wouldn't necessarily approach large donors to fund TLYCS itself (although you could), you could approach them to directly fund the charities TLYCS supports. I think that's what Stefan had in mind.

Also, they could fund TLYCS through their global health and poverty program instead. They've funded One for the World. The EA Infrastructure Fund has also funded TLYCS, among many other multiplier orgs.

Comment by michaelstjules on What are the most common objections to “multiplier” organizations that raise funds for other effective charities? · 2020-12-09T20:31:05.450Z · EA · GW

That's fair, but if the fundraising org (e.g. TLYCS) was independent of a charity evaluator (e.g. GiveWell) and took all of its recommendations from them, then this seems like it would be okay. I know TLYCS supports more than just GiveWell-recommended charities, though.

Comment by michaelstjules on What are the most common objections to “multiplier” organizations that raise funds for other effective charities? · 2020-12-09T17:32:49.534Z · EA · GW

There's also the question of whether the multiplier charities are just replacing some of the fundraising that the beneficiary charities would do in their absence, and whether or not they're better at it.

Could the beneficiary charities coordinate to fund the multiplier charity? Why don't they? Is it because they think their own fundraising is better, or that their regular donors wouldn't like that, or something else?

Comment by michaelstjules on What are the most common objections to “multiplier” organizations that raise funds for other effective charities? · 2020-12-09T15:56:39.481Z · EA · GW

I really don’t think my argument is about risk aversion at all. I think it’s about risk-neutral expected (counterfactual) value. The fact that it is extraordinarily difficult to imagine my donations to a multiplier charity having any counterfactual impact informs my belief about the likely probability of my donations to such an organization having a counterfactual impact, which is an input in my expected value calculus.

If, as Jon suggests, the average impact scales well (even if historically not smoothly), then unless you confirm that your particular donation won't make a difference, the expected value can still look good: most donations make little difference, but in the unlikely event that yours pushes the organization past such a threshold, it makes a huge difference, enough to make up for the rest. It's similar to this argument for veg*nism.

It seems odd to suggest that being counterfactually responsible for an operational expansion into a new region is among the most plausible ways that a small-dollar gift to the AMF, for instance, has an impact. Clearly, such a gift allows the AMF to distribute more nets where they are currently operating, even if no such expansion into a new region is presently on the table.

Have you confirmed this about AMF? In the case of GiveDirectly, they give to whole villages at a time, so maybe the recipients in the "marginal" village will get more or less, but I imagine there's a cutoff below which they won't bother and will just wait for more donations instead. Similarly, the school-based deworming charities might wait until they have enough to deworm another whole school. Of course, these villages and schools might be really small, so it might not matter too much unless you're making very small donations.