## Posts

Hedging against deep and moral uncertainty 2020-09-12T23:44:02.379Z · score: 32 (16 votes)
Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering? 2020-06-22T16:41:58.831Z · score: 9 (10 votes)
Physical theories of consciousness reduce to panpsychism 2020-05-07T05:04:39.502Z · score: 29 (18 votes)
Replaceability with differing priorities 2020-03-08T06:59:09.710Z · score: 17 (9 votes)
Biases in our estimates of Scale, Neglectedness and Solvability? 2020-02-24T18:39:13.760Z · score: 93 (44 votes)
[Link] Assessing and Respecting Sentience After Brexit 2020-02-19T07:19:32.545Z · score: 16 (5 votes)
Changes in conditions are a priori bad for average animal welfare 2020-02-09T22:22:21.856Z · score: 18 (11 votes)
Please take the Reducing Wild-Animal Suffering Community Survey! 2020-02-03T18:53:06.309Z · score: 24 (13 votes)
What are the challenges and problems with programming law-breaking constraints into AGI? 2020-02-02T20:53:04.259Z · score: 6 (2 votes)
Should and do EA orgs consider the comparative advantages of applicants in hiring decisions? 2020-01-11T19:09:00.931Z · score: 15 (6 votes)
Should animal advocates donate now or later? A few considerations and a request for more. 2019-11-13T07:30:50.554Z · score: 21 (8 votes)
MichaelStJules's Shortform 2019-10-24T06:08:48.038Z · score: 7 (4 votes)
Conditional interests, asymmetries and EA priorities 2019-10-21T06:13:04.041Z · score: 19 (17 votes)
What are the best arguments for an exclusively hedonistic view of value? 2019-10-19T04:11:23.702Z · score: 7 (4 votes)
Defending the Procreation Asymmetry with Conditional Interests 2019-10-13T18:49:15.586Z · score: 24 (16 votes)
Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests 2019-07-04T23:56:44.330Z · score: 10 (9 votes)

Comment by michaelstjules on Differences in the Intensity of Valenced Experience across Species · 2020-10-31T07:34:08.093Z · score: 2 (1 votes) · EA · GW

> As for the paper, it seems neutral between the view that the raw number of neurons firing is correlated with valence intensity (which is the view I was disputing) and the view that the proportional number of neurons firing (relative to some brain region) is correlated with valence intensity. So I’m not sure the paper really cuts any dialectical ice. (Still a super interesting paper, though, so thanks for alerting me to it!)

One argument against proportion mattering (or at least against it mattering in a straightforward way):

1. Suppose a brain responds to some stimuli and you record its pattern of neuron firings.
2. Then, suppose you could repeat exactly the same pattern of neuron firings, but before doing so, you remove all the neurons that wouldn't have fired anyway. By doing so, you have increased the proportion of neurons that fire compared to 1.

I think 1 and 2 should result in the exact same experiences (and hence the same intensity), since the difference is just some neurons that didn't do anything or interact with the rest of the brain, even though 2 has a greater proportion of neurons firing. To me, the claim that their presence/absence makes a difference seems unphysical, because they didn't do anything in 1, where they were present. Or it's a claim that what's experienced in 1 depends on what could have happened instead, which also seems unphysical, since these counterfactuals shouldn't change what actually happened. Number of firing neurons, on the other hand, only tracks actual physical events/interactions.
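As a toy sketch of the argument (neuron labels and counts are made up purely for illustration): removing the silent neurons changes the *proportion* of neurons firing but leaves the set of physical firing events untouched.

```python
# Toy model of steps 1 and 2 above: a "brain" maps neurons to whether
# they fired in response to the stimuli.
brain = {"n1": True, "n2": True, "n3": False, "n4": False}

# Step 2: remove the neurons that wouldn't have fired anyway.
pruned = {n: fired for n, fired in brain.items() if fired}

count_before = sum(brain.values())        # firing events in step 1: 2
count_after = sum(pruned.values())        # firing events in step 2: still 2
prop_before = count_before / len(brain)   # proportion firing: 0.5
prop_after = count_after / len(pruned)    # proportion firing: 1.0

# Identical physical events, different proportions.
print(count_before == count_after, prop_before, prop_after)  # True 0.5 1.0
```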

I had a similar discussion here, although there was pushback against my views.

This seems like a pretty good reason to reject a simple proportion account, so it does seem like it's really the number of neurons firing that matters, whether in a given brain or in the same brain with neurons removed (or, more generally, something like graph minors, also allowing contractions of paths). This suggests that if one brain A can be embedded into another brain B, so that we can get A from B by removing neurons and/or connections, then B has more intense experiences than A, ignoring the effects of extra neurons in B that may actually decrease intensity, like inhibition (and competition?).

Comment by michaelstjules on Differences in the Intensity of Valenced Experience across Species · 2020-10-31T01:08:48.241Z · score: 2 (1 votes) · EA · GW

All fair points.

> So I don’t think it’s implausible to assign split-brain patients 2x moral weight.

What if we only destroyed 1%, 50% or 99% of their corpus callosum? Would that mean increasing degrees of moral weight from ~1x to ~2x? What is it about cutting these connections that increases moral weight? Is it the increased independence?

Maybe this is an inherently normative question, and there's no fact of the matter about which has "more" experience? Or we can't answer this through empirical research? Or we're just nowhere near doing so?

Comment by michaelstjules on Differences in the Intensity of Valenced Experience across Species · 2020-10-30T16:36:37.479Z · score: 9 (3 votes) · EA · GW

I agree with 1. I think it weakens the force of the argument, but I'm not sure it defeats it.

2 might be a crux. I might say that unity is largely illusory and integration comes in degrees (so it's misleading to count consciousnesses with integers), since we can imagine cutting connections between two regions of a brain one at a time (e.g. between our two hemispheres). And even if you took distinct conscious brains and integrated/unified them, we might think the unified brain would matter at least as much as the separate brains (this is Shulman's thought experiment).

Also related: https://www.nickbostrom.com/papers/experience.pdf

There could also be hidden qualia. There may be roughly insect brains in your brain, but "you" are only connected to a small subset of their neurons (or only get feedforward connections from them). Similarly, you could imagine connecting your brain to someone else's only partially so that their experiences remain mostly hidden to you.

Maybe a better real-world argument would be split-brain patients? Is it accurate to say there are distinct/separate consciousnesses in each hemisphere after splitting, and if that's the case, shouldn't we expect their full unsplit brain to have at least roughly the same moral weight as the two split brains, even though it's more unified (regardless of any lateralization of valence)? If not, we're suggesting that splitting the brains actually increases moral weight; this isn't a priori implausible, but I lean against this conclusion.

On 3, at least within brains, there seems to be a link between intensity and number of responsive neurons, e.g.: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3179932/

In particular, this suggests (assuming a causal role, with more neurons that are responsive causing more intense experiences) that if we got rid of neurons in these regions and so had fewer of them (and therefore fewer of them to be responsive), we would decrease the intensities of the experiences (and valence).

Comment by michaelstjules on Differences in the Intensity of Valenced Experience across Species · 2020-10-30T03:16:15.538Z · score: 10 (3 votes) · EA · GW

Excited to read through this! Thanks!

I apologize if you addressed this and I missed it, since I'm still reading.

In response to the section Decision-Making, my impression is that brain parallelism/duplication thought experiments (e.g. Shulman's, Tomasik's) are a reason to expect greater intensity in larger brains, and evolution would have to tune overall motivation, behaviour and attention to be less sensitive to the intensity of valence, compared to smaller brains, in order to achieve adaptive behaviour.

If you took a person, duplicated their brain and connected the copy to the same inputs and outputs, the system with two brains would experience twice as much valence (assuming the strength of the signal is maintained when it's split to get to each brain). Its outputs would get twice the signal, too, so the system would overreact compared to if there had just been one brain. Setting aside unconscious processing and reflexive behaviour and assuming all neural paths from input to output go through conscious experience (they don't), there would be two ways to fix this and get back the original one-brain behaviour in response to the same inputs, while holding the size of the two brains constant:

1. reduce the intensity of the experiences across the two brains, and
2. reduce the output response relative to intensity of experience across the two brains.

I think we should expect both to happen if we reoptimized this system (holding brain size constant and requiring the original single-brain final behaviour), and I'd expect the system to have 1x to 2x the intensity of experience of the original one brain, and its outputs to be 1x to 2x less responsive per unit intensity of experience. In general, making N copies of the same brain (so N times larger) would give 1x to Nx the intensity. This range is not so helpful, though, since it allows us, at the extremes, to weight brain size linearly, or not at all!
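The bookkeeping here can be sketched as a toy calculation under the stated assumptions (the factor names `f` and `g` are mine, not the original's):

```python
import math

# N identical brain copies. To restore the original single-brain behaviour,
# scale each copy's intensity down by f and the output response per unit
# intensity down by g, with f * g == N.
def total_intensity(N, f):
    # N copies, each at 1/f of the original intensity
    return N / f

N = 2
# Extreme 1: correct only the response (f = 1): N times the intensity.
assert total_intensity(N, 1) == N
# Extreme 2: correct only the intensity (f = N): same intensity as one brain.
assert total_intensity(N, N) == 1
# Symmetric choice f = g = sqrt(N): sqrt(N) times the intensity.
print(total_intensity(N, math.sqrt(N)))  # ~1.414
```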

I think √N is a natural choice for the factor by which the intensity is increased and the response is decreased, as the mean (or mode?) of a prior distribution, since we use the same factor increase/decrease for each. But this relies on a very speculative symmetry. The factors could also depend on the intensity of the experience instead of being uniform across experiences. On the other hand, Shulman supports at least N times the moral weight, but his argument doesn't involve reoptimizing:

> I'd guess the collective mind would be at least on the same order of consciousness and impartial moral weight as the separated minds

Some remarks:

1. This isn't to say we'd weight whole brains, since much of what happens in larger brains is not relevant to intensity of valence.
2. Evolution may be too unlike this thought experiment, so we shouldn't have much confidence in the argument.
3. This assumes an additive calculus and no integration between the two brains. I'd expect the N brains to each have less intense valence than the original, so if we were sufficiently prioritarian, we might actually prioritize a single brain over the N after fixing the N. Or maybe this is a reductio of prioritarianism if we think integration doesn't actually matter.
4. The N-brain system has a lot of redundancy. It could repurpose N-1 of the brains for something else, and just keep the one to preserve the original one-brain behaviour (or behaviour that's at least as adaptive). The extra N-1 brains worth of processing could or could not involve extra valence. I think this is a good response to undermine the whole argument, although we'd have to believe none of the extra total processing is used for extra valence (or that there's less valence in the larger brain, which seems unlikely).
5. Maybe some redundancy is useful, too, but how much? Does it give us finer discrimination (more just noticeable differences) or more robust/less noisy discrimination (taking the "consensus" of the activations of more neurons)? It also matters whether this happens in conscious or unconscious processing, but (I assume) human brains are larger than almost all other animals' in similar brain regions, including those related to valence.
6. Maybe there are genes that contribute to brain size kind of generally (with separate genes for how the extra neurons are used), or for both regions necessary for valence and others that aren't, so intensity was increased as a side-effect of some other useful adaptation, and motivation had to decrease in response.

Comment by michaelstjules on A new strategy for broadening the appeal of effective giving (GivingMultiplier.org) · 2020-10-30T01:21:54.735Z · score: 2 (1 votes) · EA · GW

Some potential drawbacks are:

1. Greater risk of running out of matching funds for specific EA charities.
2. Meta/community EA charities are kind of a public good in EA, so may go underfunded. You could require all matchers to match such charities (or at least one of them), although it would be good to ensure there's consensus on what counts as a meta/community charity. E.g. 80,000 Hours is the closest to one of those listed now (and none of the others seem like meta/community EA charities), but they're also pretty explicitly a longtermist organization and have their own cause priorities.

Comment by michaelstjules on A new strategy for broadening the appeal of effective giving (GivingMultiplier.org) · 2020-10-29T07:20:11.240Z · score: 10 (4 votes) · EA · GW

Are you considering registering as a charity in other countries, too?

Or, maybe you can partner with RC Forward for Canada, the EA Foundation for Switzerland and Germany, and CEA for the UK and Netherlands to extend tax credits/deductions to those countries.

Comment by michaelstjules on A new strategy for broadening the appeal of effective giving (GivingMultiplier.org) · 2020-10-29T07:01:59.476Z · score: 5 (4 votes) · EA · GW

Are you considering cause-specific matching funders? Or allowing matchers to choose which charities they'll match (as long as it's at least 2, to ensure regular donors still have some counterfactual impact)?

This could allow the individual CEA Funds, GiveWell, ACE, etc. to put matching funds there, too.

I also worry that if you don't allow cause-specific matching funders, you'll get fewer of them. I'd personally prefer to earmark my donations to specific causes.

Comment by michaelstjules on Donation Match for ACE Movement Grants · 2020-10-28T06:51:01.103Z · score: 6 (3 votes) · EA · GW

> This is a non-illusory match per ACE’s marketing policy.

In some sense, yes, but we also don't know what that donor would do with their money otherwise. Maybe they'll donate it to another EAA charity anyway.

Double Up Drive actually commits the funds to a group of charities, and you can estimate how much you're moving away from other charities towards your preferred one(s).

Comment by michaelstjules on The Vegan Value Asymmetry and its Consequences · 2020-10-28T06:42:23.878Z · score: 2 (1 votes) · EA · GW

I think these are relevant:

https://fakenous.net/?p=1529

https://onlinelibrary.wiley.com/doi/full/10.1111/nous.12210

https://philpapers.org/rec/TARMUF

However, I think deontologists reject this kind of interpretation of their views. For one, trying to fit their views into an expected value or decision-theoretic framework basically assumes consequentialism from the outset, which they reject.

Comment by michaelstjules on The Vegan Value Asymmetry and its Consequences · 2020-10-28T06:38:34.508Z · score: 2 (1 votes) · EA · GW

> Therefore the asymmetry might be useful to remind us that if we care about animal suffering, we might also need to care about animal flourishing [4]. Perhaps this involves conservation, or other interventions; I’m not sure.

You can do good by preventing more harm (e.g. suffering) than you cause, and I think this would be the typical vegan EA response.

Comment by michaelstjules on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T18:44:40.722Z · score: 2 (1 votes) · EA · GW

Woops, ya, you're right.

Comment by michaelstjules on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T17:55:07.438Z · score: 6 (3 votes) · EA · GW

Also, for what it's worth, even if negative utilitarians happened to dominate the wild animal welfare orgs (which apparently they don't, see Abraham's and Will's answers), for cooperative and strategic reasons, I think advocacy for wiping out animals would probably be counterproductive. Trying to wipe out animals sneakily would also be high-risk (in case it's found out), and we should support transparency/honesty as EAs.

Some related discussion with further links here.

Comment by michaelstjules on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T17:39:06.711Z · score: 2 (1 votes) · EA · GW

I think Abraham suggested there were at least 8: he co-founded 2 and worked at another 6.

Comment by michaelstjules on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T17:35:51.720Z · score: 4 (2 votes) · EA · GW

Weren't that paper and Brian's work pretty much the only EA-aligned (welfarist/consequentialist) writings on the topic until recently? And Towards Welfare Biology also covers more than just that one result.

Comment by michaelstjules on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T17:23:10.822Z · score: 2 (1 votes) · EA · GW

> I co-founded 2 of and have worked at another of the 6 organizations that have worked on wild animal welfare with an EA lens.

I don't recall there being this many EA-aligned orgs working on wild animal welfare! :O Which ones were they?

I know Utility Farm and Wild Animal Suffering Research merged into Wild Animal Initiative. There's Animal Ethics and Rethink Priorities. Were the other orgs sub-projects of these?

Comment by michaelstjules on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T17:08:05.843Z · score: 2 (1 votes) · EA · GW

Isn't it only one thing in the space of possible solutions that makes you nervous: wiping out animals?

Comment by michaelstjules on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T17:04:04.953Z · score: 4 (2 votes) · EA · GW

> For people who think wild-animal lives are net positive, there are many things that contain even more sentient value than rainforest.

Doesn't this lead to replacement anyway for welfarists/consequentialists, like discussed here? I.e. we should replace rainforest with things that produce more value.

> At the same time, I feel like the discourse on this topic can be a bit disingenuous sometimes, where people whose actions otherwise don't indicate much concern for the moral importance of the action-omission distinction (esp. when it comes to non-persons) suddenly employ rhetorical tactics that make it sound like "wrongly thinking animal lives are negative" is a worse mistake than "wrongly thinking they are positive".

It may be intuitions about reversibility. It's harder to bring a species back than it is to eliminate it. Or, not only welfare matters to them. Or, maybe they really shouldn't consider themselves consequentialists.

Comment by michaelstjules on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T16:46:32.664Z · score: 10 (6 votes) · EA · GW

Also Simon from Wild Animal Initiative has written about the importance of reversibility (and persistence) in wild animal interventions. Talk here. Wiping out animals is not very reversible.

Comment by michaelstjules on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T08:57:08.567Z · score: 3 (2 votes) · EA · GW

That being said, there are some other useful insights from that work and surrounding discussion besides correcting the error, e.g. "when the probability of suffering increases, the severity of suffering should decrease", and this can be applied to animals who are likely to die shortly after being born, which have been part of the focus of wild animal welfare in EA.

Comment by michaelstjules on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T08:33:37.581Z · score: 2 (1 votes) · EA · GW

I agree. I'm not sure when I first heard about it; it might actually have been Zach pointing out that it was wrong on Facebook, but even if the proof had been correct, it still seemed like it was proving too much, so I think I'd have assumed the assumptions were too strong.

Then again, this might be hindsight bias.

Comment by michaelstjules on Hedging against deep and moral uncertainty · 2020-10-26T08:27:41.372Z · score: 2 (1 votes) · EA · GW

> Thinking about it, in general, it seems to me that the ranges of possible effects of interventions could be unbounded, so then you'd have to accept some chance of having a negative impact in the corresponding cause areas. Perhaps this is something your general framework could be augmented to take into account e.g. could one set a maximum allowed probability of having a negative effect in one cause area, or would it be sufficient to have a positive expected effect in each area?

So, it's worth distinguishing between

1. quantified uncertainty, or risk, when you can put a single probability on something, and
2. unquantified uncertainty, when you can't decide among multiple probabilities.

If there's a quantified risk of negative, but your expected value is positive under all of the worldviews you find plausible enough to consider anyway (e.g. for all cause areas), then you're still okay under the framework I propose in this post. I am effectively suggesting that it's sufficient to have a positive expected effect in each area (although there may be important considerations that go beyond cause areas).

However, you might have enough cluelessness that you can't find any portfolio whose expected value is positive under all plausible worldviews like this. That would suck, but I would normally accept continuing to look for robustly positive expected value portfolios as a good option (whether or not that option is itself robustly positive).
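A minimal sketch of this robustness check: a portfolio passes if its expected value is positive under every plausible worldview. The worldview names, charity names, and per-dollar values below are entirely made up for illustration.

```python
# Expected value (per dollar) of each option under each plausible worldview.
worldviews = {
    "humans_first": {"charity_A": 10.0, "charity_B": -1.0},
    "animals_first": {"charity_A": -2.0, "charity_B": 8.0},
}

def robustly_positive(portfolio):
    """True if the portfolio's expected value is positive under every worldview."""
    return all(
        sum(weight * values[c] for c, weight in portfolio.items()) > 0
        for values in worldviews.values()
    )

print(robustly_positive({"charity_A": 1.0, "charity_B": 0.0}))  # False: negative under one view
print(robustly_positive({"charity_A": 0.4, "charity_B": 0.6}))  # True: hedged portfolio
```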

Comment by michaelstjules on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T08:03:46.616Z · score: 15 (6 votes) · EA · GW

Some work pushing back on the view that net welfare in the wild is negative:

1. How Much Do Wild Animals Suffer? A Foundational Result on the Question is Wrong. by Zach Freitas-Groff
2. Life History Classification and Insect herbivores, life history and wild animal welfare by Kim Cuddington (Rethink Priorities)
3. The Unproven (And Unprovable) Case For Net Wild Animal Suffering. A Reply To Tomasik by Michael Plant

Comment by michaelstjules on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T07:43:42.978Z · score: 6 (3 votes) · EA · GW

I think organizations working on wild animal welfare are trying to distance themselves from negative utilitarian views and any impression that they'll support the destruction of ecosystems or wiping out animals, and at least some people working at them have symmetric views. I don't know that most have negative views. Well, this is my impression of Wild Animal Initiative and Rethink Priorities. I suspect Animal Ethics might be more negative-leaning than them, but I'm not sure.

I say this as a negative consequentialist myself. I don't think good lives, even if they're possible (I'm doubtful), can make up for bad lives. The procreation asymmetry is one of my strongest intuitions, is actually a pretty common intuition generally, and I think there are few ways to apply it as a welfarist consequentialist without ending up at a principled antinatalism (although there may be instrumental reasons to reject antinatalism in practice for someone with such asymmetric views). They all require giving up the independence of irrelevant alternatives, e.g. this paper, this paper, and Dasgupta's approach discussed here (although I think this is not an unreasonable thing to do).

Comment by michaelstjules on Research Summary: The Subjective Experience of Time · 2020-10-25T02:48:46.473Z · score: 4 (2 votes) · EA · GW

Antonia Shann also wrote a summary of one of your posts for Faunalytics:

https://faunalytics.org/songbirds-honeybees-reprioritizing-welfare-based-on-the-subjective-experience-of-time/

Comment by michaelstjules on Hedging against deep and moral uncertainty · 2020-10-25T01:08:08.199Z · score: 4 (2 votes) · EA · GW

I think the overall approach you've taken is good, and it's cool to see you've worked through this. This is also the kind of example I had in mind, although I didn't bother to work with estimates.

I do think it would be better to use some projections for animal product consumption and fertility rates in the regions MC works in (I expect consumption per capita to increase and fertility to decrease) to include the effects of descendants and changing consumption habits, since these could plausibly end up dominating the effects of MC, at least for animals (and you also have to decide on your population ethics: does the happiness of the additional descendants contribute to the good compared to if they were never born?). Then, there are also timelines for alternative proteins (e.g. here), but these are much more speculative to me.

I also personally worry that cage-free campaigns could be net negative in expectation (at least in the short term, without further improvements), mostly since on-farm mortality rates are higher in cage-free systems. See some context and further discussion here. I believe that corporate campaigns work, though, so I think we could come up with a target for a corporate campaign that we'd expect to be robustly positive for animals. I think work for more humane slaughter is robustly positive. Family planning interventions might be the most promising; see this new charity incubated by Charity Entrepreneurship and their supporting report, including their estimated cost-effectiveness of:

1. "$144 per unintended birth averted", and 2. "377 welfare points gained per dollar spent" for farmed animals. (I don't know off-hand if they're including descendants or projected changes in consumption in this figure.) However, this new charity doesn't have any track record yet, so it's in some ways more speculative than GiveWell charities or THL. CE does use success probabilities in their models, but this is a parameter that you might want to do a sensitivity analysis to. (Disclosure: I'm an animal welfare research intern for Charity Entrepreneurship.) Finally, Founders Pledge did a direct comparison between THL and AMF, including sensitivity analysis to moral weights, that might be useful. Comment by michaelstjules on Use resilience, instead of imprecision, to communicate uncertainty · 2020-10-24T00:29:29.581Z · score: 2 (1 votes) · EA · GW I do think everything eventually starts from your ass. Often you make some assumptions, collect evidence (and iterate between these first two) and then apply a model, so the numbers don't directly come from your ass. If I said that the probability of human extinction in the next 10 seconds was 50% based on a uniform prior, you would have a sense that this is worse than a number you could come up with based on assumptions and observations, and it feels like it came more directly from the ass. (And it would be extremely suspicious, since you could ask the same for 5 seconds, 20 seconds, and a million years. Why did 10 seconds get the uniform prior?) I'd rather my choices of actions be in some sense robust to assumptions (and priors, e.g. the reference class problem) that I feel are most unjustified, e.g. using a sensitivity analysis, as I'm often not willing to commit to putting a prior over those assumptions, precisely because it's way too arbitrary and unjustified. I might be willing to put ranges of probabilities. I'm not sure there's been a satisfactory formal characterization of robustness, though. 
(This is basically cluster thinking.) Each time you make an assumption, you're pulling something out of your ass, but if you check competing assumptions, that's less arbitrary to me. Comment by michaelstjules on Use resilience, instead of imprecision, to communicate uncertainty · 2020-10-23T20:36:07.792Z · score: 0 (2 votes) · EA · GW I mentioned this deeper in this thread, but I think precise probabilities are epistemically unjustifiable. Why not 1% higher or 1% lower? If you can't answer that question, then you're kind of pulling numbers out of your ass. In general, at some point, you have to make a 100% commitment to a given model (even a complex one with submodels) to have sharpe probabilities, and then there's a burden of proof to justify exactly that model. Eg if you have X% credence in a theory that produces 30% and Y% credence in a theory that produces 50%, then your actual probability is just a weighted sum. Then you have to justify X% and Y% exactly, which seems impossible; you need to go further up the chain until you hit an unjustified commitment, or until you hit a universal prior, and there are actually multiple possible universal priors and no way to justify the choice of one specific one. If you try all universal priors from a justified set of them, you'll get ranges of probabilities. (This isn't based on my own reading of the literature; I'm not that familiar with it, so maybe this is wrong.) Comment by michaelstjules on Use resilience, instead of imprecision, to communicate uncertainty · 2020-10-23T20:22:15.674Z · score: 4 (2 votes) · EA · GW The discussion here might be related, and specifically this paper that was shared. However, you can use a credible interval without any theoretical commitments, only practical ones. 
From this post: Give an expected error/CI relative to some better estimator - either a counterpart of yours ("I think there's a 12% chance of a famine in South Sudan this year, but if I spent another 5 hours on this I'd expect to move by 6%"); or a hypothetical one ("12%, but my 95% CI for what a superforecaster median would be is [0%-45%]"). This works better when one does not expect to get access to the 'true value' ("What was the 'right' ex ante probability Trump wins the 2016 election?") This way, you can say that your probabilities are actually sharp at any moment, but more or less prone to change given new information. That being said, I think people are doing something unjustified by having precise probabilities ("Why not 1% higher or lower?"), and I endorse something that looks like the maximality rule in Maximal Cluelessness for decision theory, although I think we need to aim for more structure somehow, since as discussed in the paper, it makes cluelessness really bad. I discuss this a little in this post (in the summary), and in this thread. This is related to ambiguity aversion and deep uncertainty. Comment by michaelstjules on The Risk of Concentrating Wealth in a Single Asset · 2020-10-22T23:14:52.111Z · score: 2 (1 votes) · EA · GW My thinking is that donating during drawdowns might be particularly bad, both personally and for your longer term donation strategy, since you're selling low and "locking in" large losses in your portfolio. So minimizing drawdown allows you to better plan your budget and donations, and allows you more flexibility in timing your donations. You might find a particularly good donation opportunity during a drawdown period that will only be available during that period, but it'll be extra costly (personally and to future donations) to donate then, so avoiding such drawdowns seems like an especially good thing to do. Also, Sharpe penalizes extreme upside compared to Sortino, which seems weird to me. 
Is it actually the Sharpe ratio that should be maximized with isoelastic utility (assuming log-normal returns, was it?)? But broadly speaking, if you use the ulcer index as your measure of risk, concentrating in a small number of assets looks even worse than if you use standard deviation, so the case for diversification is even stronger. Makes sense. Comment by michaelstjules on MichaelStJules's Shortform · 2020-10-22T05:03:39.121Z · score: 2 (1 votes) · EA · GW I think my argument builds off the following from "The value of existence" by Gustaf Arrhenius and Wlodek Rabinowicz (2016): Consequently, even if it is better for p to exist than not to exist, assuming she has a life worth living, it doesn’t follow that it would have been worse for p if she did not exist, since one of the relata, p, would then have been absent. What does follow is only that non-existence is worse for her than existence (since ‘worse’ is just the converse of ‘better’), but not that it would have been worse if she didn’t exist. The footnote that expands on this: Rabinowicz suggested this argument already back in 2000 in personal conversation with Arrhenius, Broome, Bykvist, and Erik Carlson at a workshop in Leipzig; and he has briefly presented it in Rabinowicz (2003), fn. 29, and in more detail in Rabinowicz (2009a), fn. 2. For a similar argument, see Arrhenius (1999), p. 158, who suggests that an affirmative answer to the existential question “only involves a claim that if a person exists, then she can compare the value of her life to her non-existence. A person that will never exist cannot, of course, compare “her” non-existence with her existence. Consequently, one can claim that it is better … for a person to exist … than … not to exist without implying any absurdities.” Cf. also Holtug (2001), p. 374f. 
In fact, even though he accepted the negative answer to the existential question (and instead went for the view that it can be good but not better for a person to exist than not to exist), Parfit (1984) came very close to making the same point as we are making when he observed that there is nothing problematic in the claim that one can benefit a person by causing her to exist: “In judging that some person’s life is worth living, or better than nothing, we need not be implying that it would have been worse for this person if he had never existed. --- Since this person does exist, we can refer to this person when describing the alternative [i.e. the world in which she wouldn’t have existed]. We know who it is who, in this possible alternative, would never have existed” (pp. 487-8, emphasis in original; cf. fn. 9 above). See also Holtug (2001), Bykvist (2007) and Johansson (2010).

Comment by michaelstjules on Recommendations for prioritizing political engagement in the 2020 US elections · 2020-10-22T00:18:51.679Z · score: 3 (2 votes) · EA · GW

Is Future Forward on your radar? It has support from a couple of people associated with EA, including Moskovitz. Although maybe they'll hit diminishing returns with such large donors?

https://www.futureforwardusa.org/
https://www.vox.com/recode/2020/10/20/21523492/future-forward-super-pac-dustin-moskovitz-silicon-valley
https://www.nytimes.com/2020/10/20/us/politics/future-forward-super-pac.html

Also:

Other significant Moskovitz bets this cycle have included millions to the Voter Participation Center, a voter-turnout organization that has been supercharged by tech money over the last two years, and Vote Tripling, a “relational organizing” approach to encourage friends to vote.

Comment by michaelstjules on Life Satisfaction and its Discontents · 2020-10-22T00:12:30.585Z · score: 4 (2 votes) · EA · GW

Related: optimism and pessimism bias. Even honeybees.
Comment by michaelstjules on Open and Welcome Thread: October 2020 · 2020-10-19T04:02:55.176Z · score: 2 (1 votes) · EA · GW

IIRC, Charity Navigator had some plans to look into cost-effectiveness/impact for a while, so maybe this was an easy way to expand their work into this? Interesting to see that this was supported by the Gates Foundation.

More discussion in this EA Forum post.

Comment by michaelstjules on The Risk of Concentrating Wealth in a Single Asset · 2020-10-18T23:47:33.831Z · score: 2 (1 votes) · EA · GW

What are your thoughts on using max drawdown instead of volatility, or the Sortino ratio instead of Sharpe? Personally, I'm more partial to both of them, maybe in part because it makes planning for the future easier, but maybe it's also giving in to the endowment effect? Portfolio Visualizer allows you to minimize max (historical) drawdown for a given target return.

Comment by michaelstjules on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-17T00:01:27.181Z · score: 3 (2 votes) · EA · GW

I think it's plausible many EAs would not want to interact with a Trump supporter regularly, and while I doubt it would cost them their job or get them banned from EA Global, I do wonder if it would count against them in trying to get a job at EA orgs. I think this is more likely in the effective animal advocacy space, which is influenced by the broader animal advocacy/activism space and so seems further left than EA on average.

Comment by michaelstjules on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-10-16T22:50:23.562Z · score: 6 (3 votes) · EA · GW

He has in the past used evidence-based reasoning on other EA-related issues, particularly in the animal space, which is his focus. Well, only one example comes to mind specifically, surrounding the debate with Open Phil on cage-free campaigns. See here, here and here.
I'm personally skeptical of the disruption tactics DxE has used (under his lead). There was another debate on that, starting here, which suggested their disruption tactics might do more harm than good (DxE's official response was taken down, but you can find it here; Wayne didn't write it). I'm more supportive of their open rescue work, but I think the evidence there is also lacking.

EDIT: I would also see the other comments here about DxE being cult-like under his leadership, though, and other criticism in the piece Dale shared.

Comment by michaelstjules on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-16T01:06:42.492Z · score: 7 (2 votes) · EA · GW

On the object-level question of a decision process for supporting/opposing particular candidates/parties, I think we should look for fairly strong consensus in favour of doing so for a given decision, if it's expected to reflect back on the EA community generally. If

1. <X% more EAs (weighted by engagement or selecting only among engaged EAs, and maybe weighted/selecting based on knowledge of the relevant issues and discussion) are in favour of it than are against getting involved, or
2. at least Y% of EAs (weighted again) are against getting involved,

then we should not get involved. I'd guess X% > 20% and 10% < Y% < 30%. If you think getting involved is wasteful, i.e. worse than the counterfactual use of those resources, you should vote against getting involved.

Comment by michaelstjules on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-16T00:38:30.218Z · score: 6 (3 votes) · EA · GW

I think it's worth further distinguishing between political engagement generally and supporting/opposing political candidates or parties, since parties come with a lot of baggage that EA doesn't want to commit/associate itself to, and this is more zero-sum. Animal welfare initiatives and the Zurich initiative are political, but they

1. are in line with EA cause prioritization and don't commit/associate us to anything more than we are already committed/associated to (e.g. views on other controversial topics);
2. don't touch the usual culture war issues politics is getting very polarized over, which EAs might find unimportant or are themselves divided on;
3. aren't so zero-sum within EA because of the narrow focus.

While many EAs don't prioritize those causes and find them wasteful, I think far fewer find them (very) actively harmful (except insofar as they take resources away from more important things). When you support or oppose a politician, there are many ways in which they could be good or bad according to a given EA, and you're more likely to actually do harm according to some other EA's values.

Comment by michaelstjules on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T18:55:18.619Z · score: 2 (1 votes) · EA · GW

No it's not! I'm assuming you're referring to my analogy with protecting the president, rather than my claim "Avoiding the threat in the first place to avoid its costs is a reason to cancel the event", which seems obvious given the risk that they will follow through on the threat (although you may have stronger reasons in the opposite direction). Protecting the president has costs and is avoiding the action of letting the president go unprotected, which you would prefer if there were no threats or risks of threats. How does "Avoiding the action because you know you'll be threatened until you change course is the same as submitting to the threat" apply to cancelling but not this? I guess you can look at bodyguards as both preventative and retaliatory (they'll kill attackers), but armoured vehicles seem purely preventative.
EDIT: One possible difference from purely strategic threats is that the people threatening to cancel you (get you fired, ruin your reputation, etc., which you don't have much control over) might actually value both making and following through on their threats to cancel as good things, rather than seeing following through as a necessary but unfortunate cost to make their future threats more persuasive. What do they want more: to cancel problematic people (to serve justice and/or signal virtue), or for there to be fewer problematic people? If the former, they may just be looking for appropriate targets to cancel and excuses to cancel them, so you'd mark yourself as a target by appearing problematic to them. I'm not sure this is that different from protecting the president, though, since some also just value causing harm to the president and the country.

Comment by michaelstjules on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T16:55:26.255Z · score: 3 (2 votes) · EA · GW

(When I write "explicit threat(s)" below, I'm mostly thinking of demands from outsiders to cancel the event, risks of EA Munich or its organizers being cancelled, or explicit threats from outsiders to cancel EA Munich without necessarily following through.)

Abstractly, sure, the game theory is similar, since cancelling is also a cost, but I think the actual payoffs/costs can be very different, as you may be exposing yourself to more risk, and being explicitly threatened at all can incur additional (net) costs beyond the cost of cancellation. Also, if we were talking about not planning the event in the first place (that's another way to avoid the action, although that's not what happened here), it'll go unnoticed, so you wouldn't become known as someone who submits to threats and make yourself a target for more. A group is unlikely to become known for not inviting certain controversial speakers in the first place.
I think in this case, we can say the game theory is pretty different due to asymmetric information. Cancelling early can also reduce the perception of submission to others who would make threats, compared to cancelling after explicit threats, since explicit threats bring attention with them. As I wrote, there are costs that come from being threatened, additional to (the costs of) cancelling the event, that you can avoid if you're never explicitly threatened in the first place. It's easier to avoid negative perceptions (like being known as “the group that invited Peter Singer”, as Julia mentioned) if you didn't plan the event in the first place or cancelled early, before any threat was made (and even if no explicit threat was made at all). Once a threat is actually made, negative perceptions are more likely to result even if you submit, since threats bring negative perceptions with them. Cancelling after being threatened might seem like giving an apology after being caught, so it might not appear genuine, or the cancellation will just be less memorable than the threats and what led to them (the association with particular figures).

Comment by michaelstjules on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T04:31:10.318Z · score: -3 (4 votes) · EA · GW

In this case, AFAIK, no one in particular was making a threat yet. So, instead, not cancelling the event is exposing yourself to a potential threat and the loss (whether you submit or not, or even retaliate) that would result. Avoiding the threat in the first place to avoid its costs is a reason to cancel the event. Cancelling is like hiring bodyguards for the president and transporting them in an armoured vehicle, instead of leaving them exposed to attacks and then retaliating afterwards if they are attacked.
Comment by michaelstjules on Getting money out of politics and into charity · 2020-10-10T11:01:02.618Z · score: 2 (1 votes) · EA · GW

I would guess there are laws preventing this kind of thing.

Comment by michaelstjules on Getting money out of politics and into charity · 2020-10-10T10:57:30.366Z · score: 6 (3 votes) · EA · GW

Could this be abused by people who donate to charity anyway? Suppose I'm a Democrat and was planning on donating $100 to charity and $100 to the Democrats. I first put $100 on the platform.

1. If the Republicans end up donating more, my platform donation counterfactually goes to charity and the Republicans get $100 less for their campaign. And now I can donate $100 directly to the Democrats. So, instead of $100 to charity + $100 to Democrats as originally planned, I can now do $200 to charity ($100 from me, $100 from Republicans), $100 to Democrats and -$100 to Republicans.

2. If the Democrats donate more, my platform donation counterfactually goes to the Democrats. And I can donate again to charity. (There's no gain here.)
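The arithmetic of the two cases above can be put in a toy model (my own sketch; the `settle` function and its mechanics are invented for illustration, not the platform's actual rules):

```python
def settle(my_platform_donation, my_party, leading_party):
    """Toy model of the matching platform: if my side trails, my money is
    matched against the other side and both matched amounts go to charity;
    if my side leads, my money passes through to my own party."""
    if my_party == leading_party:
        return {"charity": 0, my_party: my_platform_donation}
    return {"charity": my_platform_donation, my_party: 0}

# Case 1: Republicans donate more. My $100 goes to charity and offsets
# $100 of Republican money; I then give $100 directly to the Democrats.
print(settle(100, "D", leading_party="R"))  # {'charity': 100, 'D': 0}

# Case 2: Democrats donate more. My $100 passes through to the Democrats,
# and I donate my planned $100 to charity directly. No net gain.
print(settle(100, "D", leading_party="D"))  # {'charity': 0, 'D': 100}
```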

Comment by michaelstjules on Getting money out of politics and into charity · 2020-10-10T10:39:30.169Z · score: 2 (1 votes) · EA · GW

I agree that the first thing you point out is a problem, but let me just point out: in the event that it becomes a problem, that means that our platform is already a wild success.

Alternatively, people will predict this and then refuse to use it in the first place in those cases.

Comment by michaelstjules on Timeline Utilitarianism · 2020-10-10T10:31:01.401Z · score: 6 (2 votes) · EA · GW

Interesting idea!

In light of the relativity of simultaneity (whether A happens before or after B can depend on your reference frame), you might have to just choose a reference frame or somehow aggregate over multiple reference frames (and there may be no principled way to do so). If you just choose your own reference frame, your theory becomes agent-relative, and it may lead to disagreement about what's right between people with the same moral and empirical beliefs, simply because they're choosing different reference frames.
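A minimal numerical illustration of that point (my own sketch, in units where c = 1, with hypothetical events): for two spacelike-separated events, a boosted observer sees their time order reversed.

```python
import math

def lorentz_t(t, x, v):
    """Time coordinate of event (t, x) in a frame moving at velocity v
    along x (units where c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x)

# Two spacelike-separated events: A at (t=0, x=0) and B at (t=1, x=10).
# In the rest frame, A happens before B.
dt_rest = 1.0 - 0.0
print(dt_rest > 0)  # True: A before B

# In a frame moving at v = 0.5, the order reverses.
dt_moving = lorentz_t(1, 10, 0.5) - lorentz_t(0, 0, 0.5)
print(dt_moving > 0)  # False: B before A in this frame
```

So any aggregation rule that sums welfare "up to time t" really does depend on which frame supplies the t.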

Maybe the point was mostly illustrative, but I'd lean against using any kind of average (including mean, median, etc.) without special care for negative cases. If the average is negative, you can improve it by adding negative lives, as long as they're better than the average.
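A tiny numerical illustration of that last point (numbers invented for the example):

```python
def average(welfares):
    """Average welfare of a population."""
    return sum(welfares) / len(welfares)

population = [-10.0]
print(average(population))  # -10.0: the average is negative

# Adding a life that is negative, but above the current average,
# "improves" the average, which seems like the wrong verdict.
population.append(-5.0)
print(average(population))  # -7.5, higher than before
```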

Comment by michaelstjules on MichaelStJules's Shortform · 2020-10-05T21:43:38.749Z · score: 4 (2 votes) · EA · GW

The "cardinal hedonist" might object that X (e.g. introspective judgement of intensity) could be identical to our hedonistic experiences, or does track their cardinality closely enough.

I think, as a matter of fact, X will necessarily involve extra (neural) machinery that can distort our judgements, as I illustrate with the reinforcement learning case. It could be that our judgements are still approximately correct despite this, though.

Most importantly, the accuracy of our judgements depends on there being something fundamental that they're tracking in the first place, so I think hedonists who use cardinal judgements of intensity owe us a good explanation of where this supposed cardinality comes from, which I expect is not possible with our current understanding of neuroscience, and I'm skeptical that it will ever be possible. I think there's a great deal of unavoidable arbitrariness in our understanding of consciousness.

Comment by michaelstjules on Expected value theory is fanatical, but that's a good thing · 2020-10-02T01:41:18.770Z · score: 2 (1 votes) · EA · GW

Oh, also, you wrote "… is better than …" in the definition of Minimal Tradeoffs, but I think you meant the reverse?

But there is a worry that if you don't make it a fixed r then you could have an infinite sequence of decreasing rs but they don't go arbitrarily low. (e.g., 1, 3/4, 5/8, 9/16, 17/32, 33/64, ...)

Isn't the problem if the r's approach 1? Specifically, for each lottery, get the infimum of the r's that work, and then take the supremum of those over each lottery. Your definition requires that this supremum is < 1.
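The example sequence quoted above appears to follow the closed form r_n = 1/2 + 1/2^n (my inference, not stated in the thread): it is strictly decreasing but bounded below by 1/2, so the r's never go arbitrarily low.

```python
from fractions import Fraction

def r(n):
    # Conjectured closed form for 1, 3/4, 5/8, 9/16, 17/32, 33/64, ...
    return Fraction(1, 2) + Fraction(1, 2 ** n)

terms = [r(n) for n in range(1, 7)]
print(terms)

# Strictly decreasing, yet the infimum is 1/2, not 0.
assert all(a > b for a, b in zip(terms, terms[1:]))
assert all(t > Fraction(1, 2) for t in terms)
```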

Yep, that utility function is bounded, so using it and EU theory will avoid Fanaticism and bring on this problem. So much the worse for that utility function, I reckon.

And, in a sense, we're not just comparing lotteries here. L_risky + B is two independent lotteries summed together, and we know in advance that you're not going to affect B at all. In fact, it seems like B is the sort of thing you shouldn't have to worry about at all in your decision-making. (After all, it's a bunch of events off in ancient India or in far distant space, outside your lightcone.) In the moral setting we're dealing with, it seems entirely appropriate to cancel B from both sides of the comparison and just look at L_risky and L_safe, or to conditionalise the comparison on whatever B will actually turn out as: some b. That's roughly what's going on there.

Hmm, I think this kind of stochastic separability assumption implies risk-neutrality (under the assumption of independence of irrelevant alternatives?), since it will force your rankings to be shift-invariant. If you do maximize the expected value of some function of the total utilitarian sum (you're a vNM-rational utilitarian), then I think it should rule out non-linear functions of that sum.

However, what if we maximize the expected value of some function of the difference we make (e.g. compared to a "business as usual" option, subtracting the value of that option)? This way, we have to ignore the independent background B since it gets cancelled, and we can use a bounded vNM utility function on what's left. One argument I've heard against this (from section 4.2 here) is that it's too agent-relative, but the intuition for stochastic separability itself seems kind of agent-relative, too. I suppose there are slightly different ways of framing stochastic separability, "What I can't affect shouldn't change what I should do" vs "What isn't affected shouldn't change what's best", with only the former agent-relative, although also more plausible given agent-relative ethics. If I reject agent-relative ethics, neither seems so obvious.

Comment by michaelstjules on MichaelStJules's Shortform · 2020-10-01T03:44:41.997Z · score: 2 (1 votes) · EA · GW

Here's an illustration with math. Let's consider two kinds of hedonic experiences, $A$ and $B$, with at least three different (signed) intensities each, $a_1, a_2, a_3$ and $b_1, b_2, b_3$, respectively, with $a_1 < a_2 < a_3$ and $b_1 < b_2 < b_3$. These intensities are at least ordered, but not necessarily cardinal like real numbers or integers, and we can't necessarily compare the $a_i$ and the $b_j$. For example, $A$ and $B$ might be pleasure and suffering generally (with suffering negatively signed), or more specific experiences of these.

Then, what X does is map these intensities to numbers through some function

$$f: \{a_1, a_2, a_3\} \cup \{b_1, b_2, b_3\} \to \mathbb{R}$$

satisfying $f(a_1) < f(a_2) < f(a_3)$ and $f(b_1) < f(b_2) < f(b_3)$. We might even let $A$ and $B$ be some ordered continuous intervals, isomorphic to a real-valued interval, and have $f$ be continuous and increasing on each of $A$ and $B$, but again, it's $f$ that's introducing the cardinalization and commensurability (or a different cardinalization and commensurability from the real one, if any); these aren't inherent to $A$ and $B$.
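As a hypothetical numerical sketch of this point (the intensity values and maps below are invented for illustration): two cardinalizations that agree on every within-kind ordering can still disagree about cross-kind tradeoffs.

```python
# Ordinal intensity labels for two kinds of experience; only the order
# within each kind is meaningful.
a = [1, 2, 3]  # e.g. pleasure intensities, lowest to highest
b = [1, 2, 3]  # e.g. suffering intensities, lowest to highest

# Two candidate cardinalizations. Both are increasing within each kind,
# so they agree on all within-kind comparisons...
f = {**{("a", i): float(i) for i in a}, **{("b", i): 2.0 * i for i in b}}
g = {**{("a", i): float(i) for i in a}, **{("b", i): 0.4 * i for i in b}}

# ...but they disagree about a cross-kind tradeoff: does the second
# suffering intensity outweigh the third pleasure intensity?
print(f[("a", 3)] < f[("b", 2)])  # True under f:  3.0 < 4.0
print(g[("a", 3)] < g[("b", 2)])  # False under g: 3.0 > 0.8
```

Nothing in the ordinal data favors one map over the other, which is the sense in which the cross-kind tradeoff comes from the map rather than from the experiences.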

Comment by michaelstjules on MichaelStJules's Shortform · 2020-10-01T03:31:17.552Z · score: 3 (2 votes) · EA · GW

This is an argument against hedonic utility being cardinal and against widespread commensurability between hedonic experiences of different kinds. It seems that our tradeoffs, however we arrive at them, don't track the moral value of hedonic experiences.

Let X be some method or system by which we think we can establish the cardinality and/or commensurability of our hedonic experiences, and rough tradeoff rates. For example, X = the reinforcement learning system in our brains, our actual choices, or our judgements of value.

If X is not identical to our hedonic experiences, then it may be the case that X is itself what's forcing the observed cardinality and/or commensurability onto our hedonic experiences. But if it's X that's doing this, and it's the hedonic experiences themselves that are of moral value, then that cardinality and/or commensurability are properties of X, not our hedonic experiences themselves. So the observed cardinality and/or commensurability is a moral illusion.

Here's a more specific illustration of this argument:

Do our reinforcement systems have access to our whole experiences (or the whole hedonic component), or only to some subset of the neurons whose firing is responsible for them? And what if they're more strongly connected to parts of the brain for certain kinds of experiences than others? It seems like there's a continuum of ways our reinforcement systems could be off or even badly off, so it would be surprising to me if they tracked true moral tradeoffs perfectly. Change (or add or remove) one connection between a neuron in the hedonic system and one in the reinforcement system, and now the tradeoffs made will be different, without affecting the moral value of the hedonic states. If the link between hedonic intensity and reinforcement strength is so fragile, what are the chances the reinforcement system has got it exactly right in the first place? The probability should be 0 (assuming my model is right).

At least for similar hedonic experiences of different intensities, if they're actually cardinal, we might expect the reinforcement system to capture some continuous monotonic transformation and not a linear transformation. But then it could be applying different monotonic transformations to different kinds of hedonic experiences. So why should we trust the tradeoffs between these different kinds of hedonic experiences?

Comment by michaelstjules on [Linkpost] Some Thoughts on Effective Altruism · 2020-09-29T21:52:53.530Z · score: 4 (2 votes) · EA · GW

I also disagree with the idea that "capitalism"(just to pick one example) is the joint root cause for most of the world's ills.

A. This is obviously wrong compared to something like evolution.

B. Global poverty predates capitalism and so does wild animal suffering, pandemic risk, asteroid risk, etc. (Also other problems commonly talked about like racism, sexism, biodiversity loss)

C. No obvious reason why non-capitalist individual states (in an anarchic world order) would not still have major coordination problems around man-made existential risks and other issues.

D. Indeed, we have empirical experience of the bickering and rising tensions between Communist states in the mid-late 1900s.

A leftist might not claim capitalism is the only joint root cause. But to respond to each:

A. Can't change the past, so not useful.

B. This isn't a counterfactual claim about what would happen if we replaced capitalism with some specific different system. Capitalism permits these problems, while another system might not, so in counterfactual terms, capitalism can still be a cause. (But socialist countries were often racist and homophobic. So socialism doesn't solve the issue; but again, many of today's (Western?) leftists aren't only concerned with capitalism, but also with oppression and hierarchy generally, and may have different specific systems in mind.) I don't know to what extent leftists think of causes in such counterfactual terms instead of historical terms, though.

C. Leftists might think certain systems would be better than capitalist ones on these issues, and have reasons for those beliefs. For what it's worth, systems also shape people's attitudes or attitudes would covary with the system, so if greed is a major cause of these issues and it's suppressed under a specific non-capitalist system, this might partially address these issues. Also, some leftists want to reform the global world order, too. Socialist world government? Leftists disagree on how much should be top-down vs decentralized, though.

D. Those aren't the systems they have in mind anymore. I think a lot of (most?) (Western?) leftists have moved on to some kind of social democracy (technically still capitalist), democratic socialism or anarchism.