Posts

How good is The Humane League compared to the Against Malaria Foundation? 2020-04-29T13:40:38.361Z · score: 62 (26 votes)
Founders Pledge Charity Recommendation: Action for Happiness 2020-03-05T11:27:46.602Z · score: 35 (15 votes)

Comments

Comment by aidangoth on What are some low-information priors that you find practically useful for thinking about the world? · 2020-08-28T13:06:35.479Z · score: 1 (1 votes) · EA · GW

Thanks for the clarification - I see your concern more clearly now. You're right, my model does assume that all balls were coloured using the same procedure, in some sense - I'm assuming they're independently and identically distributed.

Your case is another reasonable way to apply the maximum entropy principle, and I think it points to another problem with it, but I'd frame it slightly differently. I don't think the maximum entropy principle is directly problematic in the case you describe. If we assume that all balls are coloured by completely different procedures (i.e. so that the colour of one ball tells us nothing about the colours of the other balls), then seeing 99 red balls tells us nothing about the final ball. In that case, I think it's reasonable (even required!) to have a 50% credence that it's red and unreasonable to have a 99% credence, if your prior was 50%. If you find that result counterintuitive, then I think that's more of a challenge to the independence assumption (that learning the colour of some balls tells you nothing about the colour of the others) than a challenge to the maximum entropy principle. (I appreciate you want to assume nothing about the colouring processes rather than explicitly making that independence assumption, but in setting up your model this way, I think you're assuming it implicitly.)

Perhaps another way to see this: if you don't follow the maximum entropy principle and instead have a prior of 30% that the final ball is red, then after drawing 99 red balls in your scenario you should maintain a 30% credence (if you don't, you've assumed something about the colouring process that makes the balls not independent). If you find that counterintuitive, the issue is with the independence assumption, not with the maximum entropy principle, because we haven't used that principle in this case.

I think this actually points to a different problem with the maximum entropy principle in practice: we rarely come from a position of complete ignorance (or complete ignorance besides a given mean, variance, etc.), so it's actually rarely applicable. Following the principle sometimes gives counterintuitive or unreasonable results because we actually know a lot more than we realise, and we lose much of that information when we apply the maximum entropy principle.

Comment by aidangoth on What are some low-information priors that you find practically useful for thinking about the world? · 2020-08-26T23:16:31.203Z · score: 4 (3 votes) · EA · GW

The maximum entropy principle does give implausible results if applied carelessly but the above reasoning seems very strange to me. The normal way to model this kind of scenario with the maximum entropy prior would be via Laplace's Rule of Succession, as in Max's comment below. We start with a prior for the probability that a randomly drawn ball is red and can then update on 99 red balls. This gives a 100/101 chance that the final ball is red (about 99%!). Or am I missing your point here?

Somewhat more formally, we're looking at a Bernoulli trial - for each ball, there's a probability p that it's red. We start with the maximum entropy prior for p, which is the uniform distribution on the interval [0,1] (= beta(1,1)). We update on 99 red balls, which gives a posterior for p of beta(100,1), which has mean 100/101 (this is a standard result, see e.g. conjugate priors - the beta distribution is a conjugate prior for a Bernoulli likelihood).
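
To make the update concrete, here's a minimal sketch (my own illustration, not from the original comment) of the conjugate update just described:

```python
# A quick check of the Beta-Bernoulli conjugate update described above.
# Prior: p ~ beta(1, 1), i.e. uniform on [0, 1]. Data: 99 red balls, 0 non-red.

prior_alpha, prior_beta = 1, 1
red, non_red = 99, 0

# Conjugacy: the posterior is beta(prior_alpha + red, prior_beta + non_red).
posterior_alpha = prior_alpha + red      # 100
posterior_beta = prior_beta + non_red    # 1

# The posterior mean is the probability that the next (100th) ball is red.
print(posterior_alpha / (posterior_alpha + posterior_beta))  # 100/101 ≈ 0.990
```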

The more common objection to the maximum entropy principle comes when we try to reparametrise. A nice, simple example is van Fraassen's cube factory: a factory manufactures cubes with side length up to 2 feet; what's the probability that a randomly selected cube has side length less than 1 foot? If we apply the maximum entropy principle (MEP), we say 1/2, because side length ranges from 0 to 2 feet and MEP implies that each length is equally likely. But we could equivalently have asked: what's the probability that a randomly selected cube has face area less than 1 square foot? Face area ranges from 0 to 4 square feet, so MEP implies a probability of 1/4. All and only those cubes with side length less than 1 foot have face area less than 1 square foot, so these are precisely the same event, yet MEP gave us different answers for its probability! We could do the same in terms of volume and get a different answer again. This inconsistency is the kind of implausible result most commonly pointed to.
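
Here's a rough numerical sketch of the inconsistency (my addition): a uniform prior over side length and a uniform prior over face area assign different probabilities to the very same event.

```python
# Monte Carlo sketch of the reparametrisation problem.
import random

random.seed(0)
n = 100_000

# Maximum entropy over side length: length ~ uniform(0, 2) feet.
lengths = [random.uniform(0, 2) for _ in range(n)]
print(sum(l < 1 for l in lengths) / n)       # P(side length < 1)   ≈ 0.5
print(sum(l ** 2 < 1 for l in lengths) / n)  # same event via area  ≈ 0.5

# Maximum entropy over face area instead: area ~ uniform(0, 4) square feet.
areas = [random.uniform(0, 4) for _ in range(n)]
print(sum(a < 1 for a in areas) / n)         # P(face area < 1)     ≈ 0.25
```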

Comment by aidangoth on The 80,000 Hours job board is the skeleton of effective altruism stripped of all misleading ideologies · 2020-08-08T09:42:54.793Z · score: 15 (6 votes) · EA · GW

An important difference between overall budgets and job boards is that budgets tell you how all the resources are spent whereas job boards just tell you how (some of) the resources are spent on the margin. EA could spend a lot of money on some area and/or employ lots of people to work in that area without actively hiring new people. We'd miss that by just looking at the job board.

I think this is a nice suggestion for getting a rough idea of EA priorities, but because of this, and Habryka's observation that the 80k job board isn't representative of new jobs in and around EA, I'd caution against putting much weight on it.

Comment by aidangoth on What are some low-information priors that you find practically useful for thinking about the world? · 2020-08-07T10:22:12.803Z · score: 1 (1 votes) · EA · GW

The LaTeX isn't displaying well (for me at least!), which makes this really hard to read. You just need to press 'ctrl'/'cmd' and '4' for inline LaTeX and 'ctrl'/'cmd' and 'M' for block :)

Comment by aidangoth on What are some low-information priors that you find practically useful for thinking about the world? · 2020-08-07T10:19:13.781Z · score: 15 (7 votes) · EA · GW

I found the answers to this question on stats.stackexchange useful for thinking about and getting a rough overview of "uninformative" priors, though they're mostly a bit too technical to apply easily in practice. The discussion is aimed at formal Bayesian inference rather than more general forecasting.

In information theory, entropy is a measure of (lack of) information: high-entropy distributions carry little information. That's why the principle of maximum entropy, as Max suggested, can be useful.
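
For reference (my addition, not part of the original comment), the standard definition for a discrete distribution, which is maximised by the uniform distribution:

```latex
% Shannon entropy of a discrete distribution p = (p_1, ..., p_n),
% maximised by the uniform distribution p_i = 1/n (giving H = \log n):
H(p) = -\sum_{i=1}^{n} p_i \log p_i
```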

Another meta answer is to use the Jeffreys prior. This has the property that it's invariant under reparametrisation (a change of coordinates). This isn't the case for maximum entropy priors in general and is a source of inconsistency (see e.g. the partition problem for the principle of indifference, which is just a special case of the principle of maximum entropy). Jeffreys priors are often unwieldy, but one important exception is for the interval [0,1] (e.g. for a probability), for which the Jeffreys prior is the beta(1/2,1/2) distribution. See the red line in the graph at the top of the beta distribution Wikipedia page - the density is pushed towards the edges, close to 0 and 1.

This relates to Max's comment about Laplace's Rule of Succession: taking N_v = 2, M_v = 1 corresponds to the uniform distribution on [0,1] (which is just beta(1,1)). This is the maximum entropy distribution on [0,1]. But as Max mentioned, we can vary N_v and M_v. Using the Jeffreys prior would be like setting N_v = 1 and M_v = 1/2, which doesn't have as nice an interpretation (half a success?) but has nice theoretical features. It's especially useful if you want to put more of the density near 0 and 1 but still have mean 1/2.
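
For illustration, here's a small sketch (my addition) of how the two choices behave as predictive probabilities, using the standard fact that a beta(a, b) prior with s successes in n trials gives posterior mean (a + s) / (a + b + n):

```python
# Uniform prior = beta(1, 1) (Laplace's rule); Jeffreys prior = beta(1/2, 1/2).

def posterior_mean(a, b, successes, trials):
    """Mean of the beta posterior, i.e. the predictive probability of success."""
    return (a + successes) / (a + b + trials)

for s, n in [(0, 0), (1, 1), (3, 10), (99, 99)]:
    laplace = posterior_mean(1, 1, s, n)       # N_v = 2, M_v = 1
    jeffreys = posterior_mean(0.5, 0.5, s, n)  # N_v = 1, M_v = 1/2
    print(f"{s}/{n}: Laplace {laplace:.3f}, Jeffreys {jeffreys:.3f}")
```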

There's a bit more discussion of Laplace's Rule of Succession and the Jeffreys prior in an EA context in Toby Ord's comment in response to Will MacAskill's "Are we living at the most influential time in history?"

Finally, a bit of a cop-out, but I think worth mentioning, is the suggestion of imprecise credences in one of the answers to the stats.stackexchange question linked above. By selecting a range of priors and seeing how much the resulting posteriors converge, you might find that the choice of prior doesn't matter very much; when it does matter, I expect this could be useful for identifying your largest uncertainties.

Comment by aidangoth on [Stats4EA] Uncertain Probabilities · 2020-05-28T00:00:21.354Z · score: 2 (2 votes) · EA · GW

Reflecting on this example and your x-risk questions: in the beta(0.1,0.1) case, we're either very likely fine or really screwed, whereas the beta(20,20) case is similar to a fair coin toss. So it feels easier to me to get motivated to work on mitigating the second one. I don't think that says much about which is higher priority to work on, though, because reducing the risk in the first case could be super valuable. That said, the value of information from narrowing the uncertainty in the first case seems much higher.

Comment by aidangoth on [Stats4EA] Uncertain Probabilities · 2020-05-27T23:50:56.589Z · score: 5 (3 votes) · EA · GW

Nice post! Here's an illustrative example in which the distribution of p matters for expected utility.

Say you and your friend are deciding whether to meet up but there's a risk that you have a nasty, transmissible disease. For each of you, there's the same probability p that you have the disease. Assume that whether you have the disease is independent of whether your friend has it. You're not sure whether p has a beta(0.1,0.1) distribution or a beta(20,20) distribution, but you know that the expected value of p is 0.5.

If you meet up, you get +1 utility. If you meet up and one of you has the disease, you'll transmit it to the other person, and you get -3 utility. (If you both have the disease, then there's no counterfactual transmission, so meeting up is just worth +1.) If you don't meet up, you get 0 utility.

It makes a difference which distribution p has. Here's an intuitive explanation. In the first case, it's really unlikely that one of you has it but not the other. Most likely, either (i) you both have it, so meeting up will do no additional harm or (ii) neither of you has it, so meeting up is harmless. In the second case, it's relatively likely that one of you has the disease but not the other, so you're more likely to end up with the bad outcome.

If you crunch the numbers, you can see that it's worth meeting up in the first case, but not in the second. For this to be true, we have to assume conditional independence: that you and your friend having the disease are independent events, conditional on the probability of an arbitrary person having the disease being p. It doesn't work if we assume unconditional independence, but I think conditional independence makes more sense.

The calculation is a bit long-winded to write up here, but I'm happy to if anyone is interested in seeing/checking it. The gist is to write the probability of a state obtaining as the integral, with respect to p, of the probability of that state obtaining conditional on p, multiplied by the pdf of p (i.e. P(state) = ∫ P(state|p) f(p) dp). Separate the states via conditional independence (e.g. P(you both have it|p) = P(you have it|p) × P(your friend has it|p)), plug in values (e.g. P(you have it|p) = p) and integrate. For example, the probability that you both have it is ∫ p^2 f(p) dp = E[p^2], which comes to about 0.46 assuming the beta(0.1,0.1) distribution. Then calculate the expected utility of meeting up as normal, with the utilities above and the probabilities calculated in this way. If I haven't messed up, you should find that the expected utility is positive in the beta(0.1,0.1) case (i.e. better to meet up) and negative in the beta(20,20) case (i.e. better not to meet up).
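
Here's a minimal sketch of that calculation in code (my own reconstruction; I'm assuming the "exactly one of you has it" outcome is worth -3 overall, and the sign of the conclusion is the same if you instead read it as +1 - 3 = -2):

```python
def meetup_expected_utility(a, b):
    """Expected utility of meeting up when p ~ beta(a, b) and the two people
    have the disease independently, conditional on p."""
    mean_p = a / (a + b)                              # E[p]
    mean_p2 = a * (a + 1) / ((a + b) * (a + b + 1))   # E[p^2]

    p_both = mean_p2                         # ∫ p^2 f(p) dp
    p_exactly_one = 2 * (mean_p - mean_p2)   # ∫ 2p(1-p) f(p) dp
    p_neither = 1 - 2 * mean_p + mean_p2     # ∫ (1-p)^2 f(p) dp

    # +1 if both or neither have it; -3 if exactly one has it (transmission).
    return 1 * (p_both + p_neither) + (-3) * p_exactly_one

print(meetup_expected_utility(0.1, 0.1))  # ≈ +0.67: worth meeting up
print(meetup_expected_utility(20, 20))    # ≈ -0.95: better not to meet up
```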

Comment by aidangoth on How good is The Humane League compared to the Against Malaria Foundation? · 2020-05-12T16:23:12.962Z · score: 1 (1 votes) · EA · GW

Thanks, this is a good criticism. I think I agree with the main thrust of your comment but in a bit of a roundabout way.

I agree that focusing on expected value is important and that ideally we should communicate how arguments and results affect expected values. I think it's helpful to distinguish between (1) expected value estimates that our models output and (2) the overall expected value of an action/intervention, which is informed by our models and arguments etc. The guesstimate model is so speculative that it doesn't actually do that much work in my overall expected value, so I don't want to overemphasise it. Perhaps we under-emphasised it though.

The non-probabilistic model is also speculative of course, but I think this offers stronger evidence about the relative cost-effectiveness than the output of the guesstimate model. It doesn't offer a precise number in the same way that the guesstimate model does but the guesstimate model only does that by making arbitrary distributional assumptions, so I don't think it adds much information. I think that the non-probabilistic model offers evidence of greater cost-effectiveness of THL relative to AMF (given hedonism, anti-speciesism) because THL tends to come out better and sometimes comes out much, much better. I also think this isn't super strong evidence but that you're right that our summary is overly agnostic, in light of this.

In case it's helpful, here's a possible explanation for why we communicated the findings in this way. We actually came into this project expecting THL to be much more cost-effective, given a wide range of assumptions about the parameters of our model (and assuming hedonism, anti-speciesism) and we were surprised to see that AMF could plausibly be more cost-effective. So for me, this project gave an update slightly in favour of AMF in terms of expected cost-effectiveness (though I was probably previously overconfident in THL). For many priors, this project should update the other way and for even more priors, this project should leave you expecting THL to be more cost-effective. I expect we were a bit torn in communicating how we updated and what the project showed and didn't have the time to think this through and write this down explicitly, given other projects competing for our time and energy. It's been helpful to clarify a few things through this discussion though :)

Comment by aidangoth on How good is The Humane League compared to the Against Malaria Foundation? · 2020-05-05T13:33:19.734Z · score: 9 (7 votes) · EA · GW

Thanks for raising this. It's a fair question but I think I disagree that the numbers you quote should be in the top level summary.

I'm wary of overemphasising precise numbers. We're really uncertain about many parts of this question and we arrived at these numbers by making many strong assumptions, so these numbers don't represent our all-things-considered view and it might be misleading to state them without a lot of context. In particular, the numbers you quote came from the Guesstimate model, which isn't where the bulk of the work on this project was focused (though we could have acknowledged that more). To my mind, the upshot of this investigation is better described by this bullet in the summary than by the numbers you quote:

  • In this model, in most of the most plausible scenarios, THL appears better than AMF. The difference in cost-effectiveness is usually within 1 or 2 orders of magnitude. Under some sets of reasonable assumptions, AMF looks better than THL. Because we have so much uncertainty, one could reasonably believe that AMF is more cost-effective than THL or one could reasonably believe that THL is more cost-effective than AMF.

Comment by aidangoth on How good is The Humane League compared to the Against Malaria Foundation? · 2020-05-01T14:08:52.499Z · score: 7 (4 votes) · EA · GW

Thanks for this. I think this stems from the same issue as your nitpick about AMF bringing about outcomes as good as saving lives of children under 5. The Founders Pledge Animal Welfare Report estimates that THL historically brought about outcomes as good as moving 10 hen-years from battery cages to aviaries per dollar, so we took this as our starting point and that's why this is framed in terms of moving hens from battery cages to aviaries. We should have been clearer about this though, to avoid suggesting that the only outcomes of THL are shifts from battery cages to aviaries.

Comment by aidangoth on How good is The Humane League compared to the Against Malaria Foundation? · 2020-05-01T13:57:26.056Z · score: 5 (3 votes) · EA · GW

Thanks for this comment, you raise a number of important points. I agree with everything you've written about QALYs and DALYs. We decided to frame this in terms of DALYs for simplicity and familiarity. This was probably just a bit confusing though, especially as we wanted to consider values of well-being (much) less than 0 and, in principle, greater than 1. So maybe a generic unit of hedonistic well-being would have been better. I think you're right that this doesn't matter a huge amount because we're uncertain over many orders of magnitude for other variables, such as the moral weight of chickens.

The trade-off problem is really tricky. I share your scepticism about people's actual preferences tracking hedonistic value. We just took it for granted that there is a single, privileged way to make such trade-offs but I agree that it's far from obvious that this is true. I had in mind something like "a given experience has well-being -1 if an idealised agent/an agent with the experiencer's idealised preferences would be indifferent between non-existence and a life consisting of that experience as well as an experience of well-being 1". There are a number of problems with this conception, including the issue that there might not be a single idealised set of preferences for these trade-offs, as you suggest. I think we needed to make some kind of assumption like this to get this project off the ground but I'd be really interested to hear thoughts/see future discussion on this topic!

Comment by aidangoth on Founders Pledge Charity Recommendation: Action for Happiness · 2020-03-20T19:56:15.774Z · score: 2 (2 votes) · EA · GW

Yes, feeling much better now fortunately! Thanks for these thoughts and studies, Derek.

Given our time constraints, we did make some judgements relatively quickly but in a way that seemed reasonable for the purposes of deciding whether to recommend AfH. So this can certainly be improved and I expect your suggestions to be helpful in doing so. This conversation has also made me think it would be good to explore six-monthly/quarterly/monthly retention rates rather than annual ones - thanks for that. :)

Our retention rates for StrongMinds were also based partly on this study, but I wasn't involved in that analysis so I'm not sure on the details of the retention rates there.

Comment by aidangoth on Founders Pledge Charity Recommendation: Action for Happiness · 2020-03-16T14:29:44.999Z · score: 2 (2 votes) · EA · GW

Yes, we had physical health problems in mind here. I appreciate this isn't clear though - thanks for pointing it out. Indeed, we're aware that the badness of mental health problems tends to be underestimated and we aim to take this into account in future research in the subjective well-being space.

Comment by aidangoth on Founders Pledge Charity Recommendation: Action for Happiness · 2020-03-16T14:26:48.478Z · score: 2 (2 votes) · EA · GW

Thanks very much for this thoughtful comment and for taking the time to read and provide feedback on the report. Sorry about the delay in replying - I was ill for most of last week.

1. Yes, you're absolutely right. The current bounds are very wide and they represent extreme, unlikely scenarios. We're keen to develop probabilistic models in future cost-effectiveness analyses to produce e.g. 90% confidence intervals and carry out sensitivity analyses, probably using Guesstimate or R. We didn't have time to do so for this project but this is high on our list of methodological improvements.

2. Estimating the retention rates is challenging so it's helpful for us to know that you think our values are too high. We based this primarily on our retention rate for StrongMinds, but adjusted downwards. It's possible we anchored on this too much. However, it's not clear to me that our values are too high. In particular, if our best-guess retention rate for AfH is too high, then this is probably also true for StrongMinds. Since we're using StrongMinds as a benchmark, this might not change our conclusions very much.

The total benefits are calculated somewhat confusingly and I appreciate you haven't had the chance to look at the CEA in detail. If E is the effect directly post-treatment and r is the retention rate, we calculated the total benefits as

E × (0.5 + r + r^2 + r^3 + ...)

That is, we assume half a year of full effect, and then discount each year that follows by r each time (there's a short numerical sketch of this after the list below). We calculated it in this way because for StrongMinds, we had 6 month follow-up data. However, it's not clear that this approach is best in this case. It might have been better to:

  • Assume 0.15 years at full effect
    • Since the study has only an 8 week follow-up, as you mention
  • Assume somewhere in between 0.15 and 0.5 years at full effect
    • Since the effects still looked very good at 8 week follow-up (albeit with no control) and evidence from interventions such as StrongMinds that suggest longer-lasting effects still seems somewhat relevant
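
As mentioned above, here's a small illustration (my addition) of the total-benefits formula as I've reconstructed it, with made-up values for the effect and retention rate rather than the ones in the actual CEA:

```python
def total_benefit(effect, retention, years=50, initial_fraction=0.5):
    """initial_fraction of a year at full effect, then the effect decays
    geometrically by the retention rate each subsequent year."""
    return effect * (initial_fraction
                     + sum(retention ** t for t in range(1, years + 1)))

print(total_benefit(effect=1.0, retention=0.7))  # ≈ 2.83 effect-years
print(total_benefit(effect=1.0, retention=0.3))  # ≈ 0.93 effect-years
```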

Finally, I think there are good reasons to prefer AfH over CBT in high-income countries, even if our CEA suggests they are similarly cost-effective in terms of depression. (Though these reasons might not be strong enough to convince you that AfH and e.g. StrongMinds are similarly cost-effective.)

  • AfH aims to improve well-being broadly, not just by treating mental health problems.
    • Although much -- perhaps most -- of the benefits of AfH's courses come from reduction in depression, some of the benefits to e.g. happiness, life satisfaction and pro-social behaviour aren't captured by measuring depression
  • Our CEA is very conservative in some respects
    • The effect sizes we used (after our Bayesian analysis) are about 30% as large as reported in the study
      • If CBT effects aren't held to similar levels of scrutiny, then we can't compare cost-effectiveness fairly
    • We think that the wider benefits of AfH's scale-up could be very large
      • We focused just on the scale-up of the Exploring What Matters courses because this is easiest to measure
      • The happiness movement that AfH is leading and growing could be very beneficial, e.g. widely sharing materials on AfH's website, bringing (relatively small) benefits to a large number of people

That said, I think it's worth reconsidering our retention rates when we review this funding opportunity. Thanks for your input.

3. This is correct. We did not account for the opportunity cost of facilitators' or participants' time. As always, there are many factors and given time constraints, we couldn't account for all of them. We thought that these costs would be small compared to the benefits of the course so we didn't prioritise their inclusion. I don't think we explicitly mentioned the opportunity cost of time in the report though, so thanks for pointing this out.

Comment by aidangoth on Does anyone have any recommendations for landmine charities, or know of impact assessments? · 2020-02-04T10:14:53.249Z · score: 3 (2 votes) · EA · GW

Here's another option - an organisation that detects landmines with rats: https://www.apopo.org/en

I can't comment on its cost-effectiveness compared to other similar organisations, but it won a Skoll Award for Social Entrepreneurship in 2009: http://skoll.org/organization/apopo/ http://skoll.org/about/skoll-awards/ https://en.m.wikipedia.org/wiki/Skoll_Foundation#The_Skoll_Awards_for_Social_Entrepreneurship

Comment by aidangoth on What should Founders Pledge research? · 2019-09-10T17:00:38.356Z · score: 10 (3 votes) · EA · GW

Scott Aaronson and Giulio Tononi (the main advocate of IIT) and others had an interesting exchange on IIT which goes into the details more than Muehlhauser's report does. (Some of it is cited and discussed in the footnotes of Muehlhauser's report, so you may well be aware of it already.) Here, here and here.

Comment by aidangoth on Reducing existential risks or wild animal suffering? · 2018-11-04T14:56:42.566Z · score: 1 (1 votes) · EA · GW

Great -- I'm glad you agree!

I do have some reservations about (variance) normalisation, but it seems like a reasonable approach to consider. I haven't thought about this loads though, so this opinion is not super robust.

Just to tie it back to the original question, whether we prioritise x-risk or WAS will depend on the agents who exist, obviously. Because x-risk mitigation is plausibly much more valuable on totalism than WAS mitigation is on other plausible views, I think you need almost everyone to have a very, very low (in my opinion, unjustifiably low) credence in totalism for your conclusion to go through. In the actual world, I think x-risk still wins. As I suggested before, it could be the case that the value of x-risk mitigation is not that high, or even negative, due to s-risks (this might be your best line of argument for your conclusion), but this suggests prioritising large-scale s-risks. You rightly pointed out that a million years of WAS is the most concrete example of s-risk we currently have. It seems plausible that other, larger s-risks could arise in the future (e.g. large-scale sentient simulations), which, though admittedly speculative, could be really big in scale. I tend to think general foundational research aimed at improving the trajectory of the future is more valuable to do today than WAS mitigation. What I mean by 'general foundational research' is not entirely clear, but thinking about and clarifying that, for instance, seems more important than WAS mitigation.

Comment by aidangoth on Reducing existential risks or wild animal suffering? · 2018-11-04T12:11:56.798Z · score: 1 (1 votes) · EA · GW

I'm making a fresh comment to make some different points. I think our earlier thread has reached the limit of productive discussion.

I think your theory is best seen as a metanormative theory for aggregating both the well-being and the moral preferences of existing agents. There are two distinct types of value that we should consider:

prudential value: how good a state of affairs is for an agent (e.g. their level of well-being, according to utilitarianism; their priority-weighted well-being, according to prioritarianism).

moral value: how good a state of affairs is, morally speaking (e.g. the sum of total well-being, according to totalism; or the sum of total priority-weighted well-being, according to prioritarianism).

The aim of a population axiology is to determine the moral value of a state of affairs in terms of the prudential value of the agents who exist in that state of affairs. Each agent can have a preference order over population axiologies, expressing their moral preferences.

We could see your theory as looking at the prudential value of all the agents in a state of affairs (their level of well-being) and their moral preferences (how good they think the state of affairs is compared to other states of affairs in the choice set). The moral preferences, at least in part, determine the critical level (because you take into account moral intuitions, e.g. that the sadistic repugnant conclusion is very bad, when setting critical levels). So the critical level of an agent (on your view) expresses the moral preferences of that agent. You then aggregate the well-being and moral preferences of agents to determine overall moral value -- you're aggregating not just well-being, but also moral preferences, which is why I think this is best seen as a metanormative theory.

Because the critical level is used to express moral preferences (as opposed to purely discounting well-being), I think it's misleading and the source of a lot of confusion to call this a critical level theory -- it can incorporate critical level theories if agents have moral preferences for critical level theories -- but the theory is, or should be, much more general. In particular, in determining the moral preferences of agents, one could (and, I think, should) take normative uncertainty into account, so that the 'critical level' of an agent represents their moral preferences after moral uncertainty. Aggregating these moral preferences means that your theory is actually a two-level metanormative theory: it can (and should) take standard normative uncertainty into account in determining the moral preferences of each agent, and then aggregates moral preferences across agents.

Hopefully, you agree with this characterisation of your view. I think there are now some things you need to say about determining the moral preferences of agents and how they should be aggregated. If I understand you correctly, each agent in a state of affairs looks at some choice set of states of affairs (states of affairs that could obtain in the future, given certain choices?) and comes up with a number representing how good or bad the state of affairs that they are in is. In particular, this number could be negative or positive. I think it's best just to aggregate moral preferences directly, rather than pretending to use critical levels that we subtract from levels of well-being, and then aggregate 'relative utility', but that's not an important point.

I think the choice-set dependence of moral preferences is not ideal, but I imagine you'll disagree with me here. In any case, I think a similar theory could be specified that doesn't rely on this choice-set dependence, though I imagine it might be harder to avoid the conclusions you aim to avoid, given choice-set independence. I haven't thought about this much.

You might want to think more about whether summing up moral preferences is the best way to aggregate them. This form of aggregation seems vulnerable to extreme preferences that could dominate lots of mild preferences. I haven't thought much about this and don't know of any literature on this directly, but I imagine voting theory is very relevant here. In particular, the theory I've described looks just like a score voting method. Perhaps, you could place bounds on scores/moral preferences somehow to avoid the dominance of very strong preferences, but it's not immediately clear to me how this could be done justifiably.

It's worth noting that the resulting theory won't avoid the sadistic repugnant conclusion unless every agent has very very strong moral preferences to avoid it. But I think you're OK with that. I get the impression that you're willing to accept it in increasingly strong forms, as the proportion of agents who are willing to accept it increases.

Comment by aidangoth on Reducing existential risks or wild animal suffering? · 2018-11-02T15:57:51.736Z · score: 0 (0 votes) · EA · GW

I'm not entirely sure what you mean by 'rigidity', but if it's something like 'having strong requirements on critical levels', then I don't think my argument is very rigid at all. I'm allowing for agents to choose a wide range of critical levels. The point is though, that given the well-being of all agents and critical levels of all agents except one, there is a unique critical level that the last agent has to choose, if they want to avoid the sadistic repugnant conclusion (or something very similar). At any point in my argument, feel free to let agents choose a different critical level to the one I have suggested, but note that doing so leaves you open to the sadistic repugnant conclusion. That is, I have suggested the critical levels that agents would choose, given the same choice set and given that they have preferences to avoid the sadistic repugnant conclusion.

Sure, if k is very low, you can claim that A is better than Bq, even if q is really, really big. But, keeping q fixed, there's a k (e.g. 10^10^10) such that Bq is better than A (feel free to deny this, but then your theory is lexical). Then at some point (assuming something like continuity), there's a k such that A and Bq are equally good. Call this k'. If k' is very low, then you get the sadistic repugnant conclusion. If k' is very high, you face the same problems as lexical theories. If k' is not too high or low, you strike a compromise that makes the conclusions of each less bad, but you face both of them, so it's not clear this is preferable. I should note that I thought of and wrote up my argument fairly quickly and quite late last night, so it could be wrong and is worth checking carefully, but I don't see how what you've said so far refutes it.

My earlier points relate to the strangeness of the choice set dependence of relative utility. We agree that well-being should be choice set independent. But by letting the critical level be choice set dependent, you make relative utility choice set dependent. I guess you're OK with that, but I find that undesirable.

Comment by aidangoth on Reducing existential risks or wild animal suffering? · 2018-11-02T01:30:47.302Z · score: 0 (0 votes) · EA · GW

Thanks for the reply!

I agree that it's difficult to see how to pick a non-zero critical level non-arbitrarily -- that's one of the reasons I think it should be zero. I also agree that, given critical level utilitarianism, it's plausible that the critical level can vary across people (and across the same person at different times). But I do think that whatever the critical level for a person in some situation is, it should be independent of other people's well-being and critical levels. Imagine two scenarios consisting of the same group of people: in each, you have the exact same life/experiences and level of well-being, say, 5; you're causally isolated from everyone else; the other people have different levels of well-being and different critical levels in each scenario such that in the first scenario, the aggregate of their moral value (sum of well-being minus critical level for each person) is 1, and in the second this quantity is 7. If I've understood you correctly, in the first case, you should set your critical level to 6 - a, and in the second you should set it to 12 - a, where a is an infinitesimal, so that the total moral value in each case is a and you avoid the sadistic repugnant conclusion. Why have a different level in each case? You aren't affected by anyone else -- if you were, you would be in a different situation/live a different life so could maybe justify a different critical level. But I don't see how you can justify that here.

This relates to my point on it seeming ad hoc. You're selecting your critical level to be the number such that when you aggregate moral value, you get an infinitesimal so that you avoid the sadistic repugnant conclusion, without other justification for setting the critical level at that level. That strikes me as ad hoc.

I think you introduce another element of arbitrariness too. Why set your critical level to 12 - a, when the others could set theirs to something else such that you need only set yours to 10 - a? There are multiple different critical levels you could set yours to, if others change theirs too, that give you the result you want. Why pick one solution over any other?

Finally, I don't think you really avoid the problems facing lexical value theories, at least not without entailing the sadistic repugnant conclusion. This is a bit technical. I've edited it to make it as clear as I can, but I think I need to stop now; I hope it makes sense. The main idea is to highlight a trade-off you have to make between avoiding the repugnant conclusion and avoiding the counter-intuitive implications of lexical value theories.

Let's go with your example: 1 person at well-being -10, critical level 5; 1 person at well-being 30, so they set their critical level to 15 - a, so that the overall moral value is a. Now suppose:

(A) We can improve the first person's well-being to 0 and leave the second person at 30, or (B) We can improve the second person's well-being to 300,000 and leave the first person at -10.

Assume the first person keeps their critical level at 5 in each case. If I've understood you correctly, in the first case, the second person should set their critical level to 25 - b, so that the total moral value is an infinitesimal, b; and in the second case, they should set it to 299,985 - c, so that again, the total moral value is an infinitesimal, c. If b > c or b = c, we get the problems facing lexical theories. So let's say we choose b and c such that c > b. But if we also consider:

(C) We can improve the second person's well-being to 31 and leave the first person at -10

We choose critical level 16 - d. I assume you want b > d because I assume you want to say that (C) is worse than (A). So if x(n) is the infinitesimal used when we can increase the second person's well-being to n, we have x(300,000) > b > x(31). At some point, we'll have m such that x(m+1) > b > x(m) (assuming some continuity, which I think is very plausible), but for simplicity, let's say there's an m such that x(m) = b. For concreteness, let's say m = 50, so that we're indifferent between increasing the second person's well-being to 50 and increasing the first person's to 0.

Now for a positive integer q, consider:

(Bq) We have q people at positive well-being level k, and the first person at well-being level -10.

Repeating the above procedure (for fixed q, letting k vary), there's a well-being level k(q) such that we're indifferent between (A) and (Bq). We can do this for each q. Then let's say k(2) = 20, k(4) = 10, k(10) = 4, k(20) = 2, k(40) = 1 and so on... (This just gives the same ordering as totalism in these cases; I just chose factors of 40 in that sequence to make the arithmetic nice.) This means we're indifferent between (A) and 40 people at well-being 1 with one person at -10, so we'd rather have 41 people at 1 and one person at -10 than (A). Increasing 41 allows us to get the same result with well-being levels even lower than 1 -- so this is just the sadistic repugnant conclusion. You can make it less bad by discounting positive well-being, but then you'll inherit the problems facing lexical theories. Say you discount so that as q (the number of people) tends to infinity, the well-being level at which you're indifferent with (A) tends to some positive number -- say 10. Then 300,000 people at level 10 and one person at level -10 is worse than (A). But that means you face the same problem as lexical theories because you've traded vast amounts of positive well-being for a relatively small reduction in negative well-being. The lower you let this limit be, the closer you get to the sadistic repugnant conclusion, and the higher you let it be, the more your theory looks like lexical negative utilitarianism. You might try to get round this by appealing to something like vagueness/indeterminacy or incommensurability, but these approaches also have counter-intuitive results.

Your theory is an interesting way to avoid the repugnant conclusions, and in some sense, it strikes a nice balance between totalism and lexical negative utilitarianism, but it also inherits the weaknesses of at least one of them. And I must admit, I find the complete subjectiveness of the critical levels bizarre and very hard to stomach. Why not just drop the messy and counter-intuitive subjectively set variable critical level utilitarianism and prefer quasi-negative utilitarianism based on lexical value? As we've both noted, that view is problematic, but I don't think it's more problematic than what you're proposing and I don't think its problems are absolutely devastating.

Comment by aidangoth on Reducing existential risks or wild animal suffering? · 2018-10-30T13:28:12.494Z · score: 2 (2 votes) · EA · GW

Nice post! I enjoyed reading this but I must admit that I'm a bit sceptical.

I find your variable critical level utilitarianism troubling. Having a variable critical level seems OK in principle, but I find it quite bizarre that moral patients can choose what their critical value is, i.e. they can choose how morally valuable their life is. How morally good or bad a life is doesn't seem to be a matter of choice and preferences. That's not to say people can't disagree about where the critical level should be, but I don't see why this disagreement should reflect a difference in individuals' own critical levels -- plausibly these disagreements are about other people's as well. In particular, you'll have a very hard time convincing anyone who takes morality to be mind-independent to accept this view. I would find the view much more plausible if the critical level were determined for each person by some other means.

I'd be interested to hear what kind of constraints you'd suggest on choosing levels. If you don't allow any, then I am free to choose a very low, negative critical level and live a very painful life, and this could be morally good. But that's more absurd than the sadistic repugnant conclusion, so you need some constraints. You seem to want to allow people the autonomy to choose their own critical level but also require that everyone chooses a level that is infinitesimally less than their welfare level in order to avoid the sadistic repugnant conclusion -- there's a tension here that needs to be resolved. But also, I don't see how you can use the need to avoid the sadistic repugnant conclusion as a constraint for choosing critical levels without being really ad hoc.

I think you'd be better off arguing for quasi-negative utilitarianism directly or in some other way: you might claim that all positive welfare is only of infinitesimal moral value but that (at least some) suffering is of non-infinitesimal moral disvalue. It's really difficult to get this to work though, because you're introducing value lexicality, i.e. some suffering is infinitely worse than any amount of happiness. This implies that you would prefer to relieve a tiny amount of non-infinitesimal suffering over experiencing any finite amount of happiness. And plausibly you'd prefer to avoid a tiny but non-infinitesimal chance of a tiny amount of non-infinitesimal suffering over a guaranteed experience of any finite amount of happiness. This seems more troubling than the sadistic repugnant conclusion to me. I think you can sweeten the pill though by setting the bar for non-infinitesimal suffering quite high, e.g. being eaten alive. This would allow trade-offs between most suffering and happiness as usual (allowing the sadistic repugnant conclusion concerning happiness and the 'lesser' forms of suffering) but still granting lexical superiority to extreme suffering. This strikes me as the most plausible view in this region of population ethical theories; I'd be interested to hear what you think.

Even if you get a plausible version of quasi-negative utilitarianism (QNU) that favours WAS over x-risk, I don't think the conclusion you want will follow easily when moral uncertainty is taken into account. How do you propose to decide what to do under normative uncertainty? Even if you find QNU more plausible than classical utilitarianism (CU), it doesn't follow that we should prioritise WAS unless you take something like the 'my favourite theory' approach to normative uncertainty, which is deeply unsatisfying. The most plausible approaches to normative uncertainty (e.g. 'maximise expected choice-worthiness') take both credences in the relevant theories and the value the theories assign to outcomes into account. If the expected value of working on x-risk according to CU is many times greater than the expected value of working on WAS according to QNU (which is plausible), then all else being equal, you need your credence in QNU to be many times greater than your credence in CU. We could easily be looking at a factor of 1000 here, which would require something like a credence of less than roughly 0.1% in CU, but that's surely way too low, despite the sadistic repugnant conclusion.

A response you might make is that the expected value of preventing x-risk according to CU is actually not that high (or maybe even negative), due to increased chances of s-risks, given that we don't go extinct. But if this is the case, we're probably better off focusing on those s-risks rather than WAS, since they'd have to be really really big to bring x-risk mitigation down to WAS level on CU. It's possible that working on WAS today is a good way to gain information and improve our chances of good s-risk mitigation in the future, especially since we don't know very much about large s-risks and don't have experience mitigating them. But I think it would be suspiciously convenient if working on WAS now turned out to be the best thing for future s-risk mitigation (even on subjective expected value terms given our current evidence). I imagine we'd be better off working on large scale s-risks directly.