Exploring a Logarithmic Tolerance of Suffering

post by David Reber (derber) · 2021-04-12 · EA Forum
(This post arose out of confusion I had when considering "neutrality against making happy people". Consider this an initial foray into exploring suffering-happiness tradeoffs, which is not my background; I'd gladly welcome pointers to related work if it sounds like this direction has already been considered.)
There are two approaches for how to trade off suffering and happiness which don't sit right with me when considering astronomical scenarios.
- Linear Tolerance: Set some (possibly large) constant $c > 0$. Then amount $s$ of suffering is offset by amount $h$ of happiness so long as $h \geq c \cdot s$.
My impression is that Linear Tolerance is pretty common among EAers (and please correct me if I'm wrong). For example, this is my understanding of most usages of "net benefit", "net positive", and so on: it's a linear tolerance of $c = 1$. This seems okay for the quantities of suffering/happiness we encounter in the present and near future, but in my opinion it becomes unpalatable at astronomical quantities.
- No Significant Tolerance: There exists some threshold $T$ of suffering such that no amount of happiness can offset a quantity of suffering $s$ if $s > T$.
This is almost verbatim "Torture-level suffering cannot be counterbalanced", and perhaps the practical motivation behind "Neutrality against making happy people" (creating a person who has a 99% chance of being happy and otherwise experiences intense suffering isn't worth the risk; or, creating a person who experiences 1 unit of intense suffering for any $n$ units of happiness isn't worth it). However, this seems to either A. claim infrequent-but-intense suffering is worse than frequent-but-low suffering, or B. accept frequent-but-low suffering as equally bad, and prefer to kill off even almost-entirely happy lifeforms as soon as the threshold is exceeded. Since my own life falls short of almost-entirely happy yet I find it worth living, I am unsatisfied with this approach.
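The two rules can be made concrete in a short sketch (Python; the unit-valued quantities and the threshold of 100 are illustrative assumptions of mine, not part of the post):

```python
def linear_tolerance(s, h, c=1.0):
    """Suffering s is offset by happiness h iff h >= c * s."""
    return h >= c * s

def no_significant_tolerance(s, h, threshold=100.0):
    """Below the threshold, fall back to a linear trade-off;
    above it, no amount of happiness offsets the suffering."""
    return s <= threshold and h >= s

# Linear tolerance permits offsetting arbitrarily large suffering
# with proportionally large happiness...
assert linear_tolerance(s=1e9, h=1e9, c=1.0)
# ...while a hard threshold forbids offsetting, no matter the happiness.
assert not no_significant_tolerance(s=101.0, h=float("inf"))
```

The sketch makes the tension visible: the first rule scales without limit, while the second is insensitive to happiness entirely once the threshold is crossed.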
Toward Logarithmic Tradeoffs
I think the primary intuitions Linear Tolerance and No Significant Tolerance are trying to tap into are:
- it seems like small amounts of suffering can be offset by large amounts of happiness
- but once suffering gets large enough, the amount of happiness needed to offset it seems unimaginable (to the point of being impossible)
I don't think these need to contradict each other:
- Log Tolerance: Set coefficients $a, b > 0$. Then amount $s$ of suffering is offset by amount $h$ of happiness so long as $s \leq a \cdot \log(b \cdot h)$.
Log Tolerance is stricter than Linear Tolerance: the marginal tradeoff rate of $\frac{d}{dh} a \log(b \cdot h) = \frac{a}{h}$ will eventually drop below any linear tradeoff rate $\frac{1}{c}$. Furthermore, in the limit the cumulative "effective" linear tradeoff rate of $\frac{a \log(b \cdot h)}{h}$ goes to zero.
Meanwhile, Log Tolerance also requires nigh-impossible amounts of happiness to offset intense suffering: while $\log(h)$ technically goes to infinity, nobody has ever observed it to do so. Consequently any astronomically expanding sentience/civilization would need to get better and better at reducing suffering. On the other hand, because $\log$ is monotonically increasing, the addition of almost-entirely happy life is always permissible, which I suspect fits better with the intuitions of most longtermists.
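As a numerical sanity check on these asymptotic claims, here is a small sketch (Python; the coefficients $a = b = 1$ and the linear rate $c = 2$ are arbitrary choices of mine):

```python
import math

def log_tolerance(h, a=1.0, b=1.0):
    """Maximum suffering offsettable by happiness h under Log Tolerance."""
    return a * math.log(b * h)

def marginal_rate(h, a=1.0):
    """d/dh of a*log(b*h): the marginal trade-off rate a/h."""
    return a / h

c = 2.0  # an arbitrary Linear Tolerance coefficient (tolerated rate 1/c)
# The marginal rate a/h eventually drops below any linear rate 1/c...
assert marginal_rate(10.0) < 1 / c
# ...and the cumulative "effective" rate log(h)/h shrinks toward zero.
rates = [log_tolerance(h) / h for h in (1e2, 1e4, 1e8)]
assert rates[0] > rates[1] > rates[2]
```

For any fixed $c$, the crossover happens once $h > a \cdot c$; past that point Log Tolerance is the stricter of the two rules.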
The practical impact Log Tolerance would have on how longtermists analyze risks is to shift the question from "does this produce more happiness than suffering?" to "does this produce mechanisms by which happiness can grow exponentially relative to the growth of suffering?"
For example, one way we could stay below a log upper bound is if some fixed percentage of future resources are committed to reducing future s-risk as much as possible.
- Are there any messy ethical implications of log tolerance?
- I think any sublinear, monotonically nondecreasing function $f$ satisfying $\lim_{h \to \infty} f(h) = \infty$ would have the same nice properties. Perhaps another function would allow for more/less suffering, or model the marginal tradeoff rate as decreasing at a different rate, etc.
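To illustrate the point about alternative functions, $f(h) = \sqrt{h}$ is one such sublinear, nondecreasing candidate; a quick comparison (Python, with illustrative values I chose) shows it tolerates more suffering than a logarithm at large $h$ while still falling ever further behind any linear rule:

```python
import math

def log_tol(h):   return math.log(h)    # Log Tolerance with a = b = 1
def sqrt_tol(h):  return math.sqrt(h)   # an alternative sublinear tolerance

h = 1e6
# sqrt tolerates more suffering than log at large h...
assert sqrt_tol(h) > log_tol(h)
# ...but both stay sublinear: their effective rates f(h)/h shrink as h grows.
assert sqrt_tol(h) / h < sqrt_tol(100.0) / 100.0
assert log_tol(h) / h < log_tol(100.0) / 100.0
```

So the choice of $f$ tunes how quickly the marginal tradeoff rate decays, without giving up the qualitative properties above.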
My analysis assumes the existence of measures on happiness and suffering. Perhaps this limits it to utilitarian views of value? ↩︎
where the suffering measure $s$ may have been implicitly scaled to reflect how much worse suffering is than happiness ↩︎
by almost-entirely happy, I mean only experiencing infinitesimal suffering ↩︎
Without loss of generality we can assume $\log$ is just the natural logarithm ↩︎
I came up with this while focused on asymptotic behavior, so I'm only considering the nonnegative support of the tolerance function. I don't know how to interpret a negative tolerance, and suspect it's not useful. ↩︎