Posts

Exploring a Logarithmic Tolerance of Suffering 2021-04-12T01:39:18.679Z
Confusion about implications of "Neutrality against Creating Happy Lives" 2021-04-11T15:54:34.503Z
derber's Shortform 2021-04-11T06:46:05.617Z

Comments

Comment by David Reber (derber) on Exploring a Logarithmic Tolerance of Suffering · 2021-04-12T19:06:29.144Z · EA · GW

Here I'm using $x$ and $y$ to denote amounts of suffering/happiness, whether constrained to one individual or spread among many (or even distributed among some non-individualistic sentience).

Using an exponentially-scaled linear tolerance seems mathematically equivalent. If anything, it highlights to me that how you define the measures for happiness and suffering is quite impactful and needs to be carefully considered.
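To spell out the equivalence I have in mind (a sketch, reusing the $x, y, a, b, c$ notation from the shortform version below): a Log Tolerance in $y$ is just a Linear Tolerance applied to a log-rescaled happiness measure, since $a + b\log(y) > x \iff \log(y) > \frac{x-a}{b}$, which is a Linear Tolerance with constant $c = \frac{1}{b}$ on the rescaled happiness $\tilde{y} = \log(y)$ and shifted suffering $\tilde{x} = x - a$.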

Comment by David Reber (derber) on derber's Shortform · 2021-04-11T23:55:03.639Z · EA · GW

# Logarithmic Tolerance of Suffering

There are two approaches to trading off suffering and happiness which don't sit right with me when considering astronomical scenarios.
1. Linear Tolerance: Set some (possibly large) constant $c$. Then $x$ amount of suffering is offset by $y$ amount of happiness so long as $y>cx$.

My impression is that Linear Tolerance is pretty common among EAers (and *please* correct this if I'm wrong). For example, this is my understanding of most usages of "net benefit", "net positive", and so on: it's a Linear Tolerance with $c=1$ (where the suffering measure may have been implicitly scaled to reflect how much worse it is than happiness). This seems ok for the quantities of suffering/happiness we encounter in the present and near future, but in my opinion becomes unpalatable at astronomical quantities.

2. No Significant Tolerance: There exists some threshold $t$ of suffering such that no amount of happiness $y$ can offset an amount of suffering $x$ once $x>t$.

This is almost verbatim "Torture-level suffering cannot be counterbalanced", and perhaps the practical motivation behind "Neutrality against making happy people" (creating a person who has a 99% chance of being happy and otherwise experiences intense suffering isn't worth the risk; or, creating a person who experiences 1 unit of intense suffering for any $y$ units of happiness isn't worth it). However, this seems to either A. claim infrequent-and-intense suffering is worse than frequent-but-low suffering, or B. accept frequent-but-low suffering as equally bad, and so prefer to kill off even almost-entirely happy lifeforms as soon as the threshold $t$ is exceeded (where by almost-entirely happy, I mean experiencing only infinitesimal suffering). Since my life falls short of almost-entirely happy yet I find it worth living, I am unsatisfied with this approach.

## Toward Logarithmic Tradeoffs
I think the primary intuitions Linear Tolerance and No Significant Tolerance are trying to tap into are:
* it seems like small amounts of suffering can be offset by large amounts of happiness
* but once suffering gets large enough, the amount of happiness needed to offset it seems unimaginable (to the point of being impossible)

I don't think these need to contradict each other:

3. Log Tolerance: Set coefficients $a$ and $b>0$. Then $x$ amount of suffering is offset by $y$ amount of happiness so long as $a+b\log(y)>x$.

Log Tolerance is stricter than Linear Tolerance: the marginal amount of suffering offset by each additional unit of happiness, $\frac{d}{dy}\left(a+b\log(y)\right)=\frac{b}{y}$, eventually drops below the constant marginal rate $\frac{1}{c}$ of any Linear Tolerance. Furthermore, the cumulative "effective" linear tradeoff rate $\frac{a+b\log(y)}{y}$ goes to zero in the limit.
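Equivalently, viewed from the happiness side (a quick derivation from the definitions above, assuming $x>a$ so the condition is binding): the happiness required to offset $x$ units of suffering is $y > e^{(x-a)/b}$ under Log Tolerance, versus $y > cx$ under Linear Tolerance, so the required happiness grows exponentially rather than linearly in the suffering to be offset.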

Meanwhile, Log Tolerance also requires nigh-impossible amounts of happiness to offset intense suffering: while $\log$ *technically* goes to infinity, nobody has ever observed it to do so. Consequently any astronomically expanding sentience/civilization would need to get better and better at reducing suffering. On the other hand, because $\log$ is monotonically increasing, the addition of almost-entirely happy life is always permissible, which I suspect fits better with the intuitions of most longtermists.

One way we could stay below a log upper bound would be to commit some fixed percentage of future resources to reducing future s-risk as much as possible.
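As a toy illustration of what "staying below a log upper bound" could mean (my own simplification, not a claim about feasibility): if total happiness grows roughly exponentially over time, say $y_t \approx e^{rt}$, then Log Tolerance permits suffering to grow at most roughly linearly, $x_t < a + brt$; the fixed commitment of resources would be doing its job so long as it keeps suffering on such a sub-exponential trajectory.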

The practical impact Log Tolerance would have on how longtermists analyze risks is to shift from "does this produce more happiness than suffering?" to "does this produce mechanisms by which happiness can grow exponentially relative to the growth of suffering?"
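As a very rough sketch of what such a check could look like (purely illustrative: the trajectory numbers, the coefficients $a, b$, and the function name are placeholders of mine, not a proposed methodology):

```python
import math

def offsets(suffering: float, happiness: float, a: float = 0.0, b: float = 1.0) -> bool:
    """Log Tolerance criterion: y offsets x iff a + b*log(y) > x (happiness must be positive)."""
    return happiness > 0 and a + b * math.log(happiness) > suffering

# A hypothetical projected trajectory of (suffering, happiness) at successive times.
# The question shifts from "is happiness - suffering positive?" to
# "does happiness keep growing exponentially relative to suffering?"
trajectory = [(1.0, 10.0), (2.0, 150.0), (3.0, 2_500.0), (4.0, 40_000.0)]

for t, (x, y) in enumerate(trajectory):
    print(f"t={t}: offset under Log Tolerance? {offsets(x, y)}")
```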

## Notes
* Without loss of generality we can assume $\log$ is just the natural logarithm, since a change of base only rescales $b$
* I came up with this while focused on asymptotic behavior, so I'm only considering the region where the tolerance function is nonnegative. I don't know how to interpret a negative tolerance, and suspect it's not useful.


## Open Questions
* Are there any messy ethical implications of log tolerance?
* I think any sublinear, monotonically nondecreasing function $f$ satisfying $\lim_{x\to\infty}\frac{f(x)}{x}=0$ would have the same nice properties; but may allow for more/less suffering, or model the marginal tradeoff rate as decreasing at different rates, etc.
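For instance (my own example, following the generalization in the last bullet): taking $f(y)=\sqrt{y}$ in place of $\log(y)$ still satisfies $\lim_{y\to\infty}\frac{f(y)}{y}=0$, but the happiness required to offset $x$ units of suffering, $y > \left(\frac{x-a}{b}\right)^2$, grows only polynomially rather than exponentially, so it tolerates considerably more suffering than Log Tolerance while remaining stricter than any Linear Tolerance in the limit.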

Comment by David Reber (derber) on derber's Shortform · 2021-04-11T00:51:44.104Z · EA · GW

As I understand it, the following two positions are largely accepted in the EA community:

  1. Temporal position should not impact ethics (hence longtermism)
  2. Neutrality against creating happy lives

But if we are time-agnostic, then neutrality against making happy lives seems to imply a preference for extinction over any future where even a tiny amount of suffering exists: if added happy lives count for nothing while added suffering counts against, then any non-empty future scores at most zero, and strictly below zero once it contains any suffering.

So am I missing something here? (Perhaps "neutrality against creating happy lives" can't be expressed in a way that's temporally agnostic?)