[Link] The option value of civilization

post by technicalities · 2019-01-06T09:58:17.919Z · score: 0 (3 votes) · EA · GW · 2 comments

Linkpost for this letter by "CK", shared on Marginal Revolution:

I think discounting is the wrong financial metaphor to use when discussing the moral worth of the present vs. the future. Instead, we should look to option pricing theory...

The key idea is that the total moral worth of the universe has some positively skewed distribution: there are more ways for things to be good than there are for them to be bad. Let’s take this as a given for now... [like] the payout profile of a call option...

there’s a fundamental difference between the value of the option, and the value of the underlying. Translated to moral terms, we should distinguish between the value of the present, and the ultimate moral worth of the universe...
Let’s start with the question of the value of the present vs. the value of the future. In my view, that language is confused. The value of the future is unknowable and can’t be affected directly. We should stop talking as if we can. We can only affect things like the value of the present and the volatility and overall trajectory of the historical process... In moral terms, delta is interpreted as the derivative of the moral worth of the universe with respect to the value of the present. “How much should we care about the present?” can be restated as “What is the delta [of] the option?”

...If you think the potential value of the future is vastly greater than the value of the present (i.e. if you think our option is only slightly in-the-money) you should care less about the value of the present. But if the option is deep in-the-money — if civilization is secure and of great value — we should care more about increasing its value.

...as volatility increases, delta decreases. In moral terms: the greater the range of historical outcomes, the less we should care about the precise moment we’re in now. If we think history is highly dynamic, that the space of potential outcomes is very large, and that the far future can be vastly more valuable than the present, we should care less about the specific value of the present. Conversely, if we think we’re close to the end of history, we should focus on incremental tweaks to improve the value of the present.
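The delta claims in the excerpts above can be checked numerically. Below is a minimal sketch using the standard Black-Scholes call delta, N(d1); the spot, strike, and volatility figures are purely illustrative stand-ins for "value of the present", "threshold for a good future", and "historical volatility". Note the volatility claim holds for in-the-money options, which is the regime the letter has in mind:

```python
from math import log, sqrt, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_delta(spot: float, strike: float, vol: float, t: float, r: float = 0.0) -> float:
    """Black-Scholes delta of a European call: N(d1)."""
    d1 = (log(spot / strike) + (r + 0.5 * vol**2) * t) / (vol * sqrt(t))
    return norm_cdf(d1)

# Deeper in-the-money -> higher delta (care more about the present):
print(call_delta(105, 100, 0.2, 1.0))  # slightly ITM: delta well below 1
print(call_delta(200, 100, 0.2, 1.0))  # deep ITM: delta near 1

# For an in-the-money option, raising volatility lowers delta:
print(call_delta(120, 100, 0.1, 1.0))  # low vol: higher delta
print(call_delta(120, 100, 0.3, 1.0))  # high vol: lower delta
```

(One caveat the letter glosses over: for out-of-the-money options, higher volatility *raises* delta, so the direction of the volatility effect depends on how in-the-money you think civilization currently is.)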

There's even a note on balancing s-risks within the options framework:

...I think it’s vastly more likely for civilization and value to simply be wiped out, than it is for a monstrously evil future to occur. But if you disagree, you can account for it in the option framework. The more likely an evil future, the more symmetric our payout profile. You can think of humanity as owning some combination of a long call and a short put. If our portfolio contains equal positions in each, our total delta is 1 — implying that the value of our options position is identical to the value of the underlying. Translated into moral terms: the more symmetric we think future outcomes are, the more we should care about the present.
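The delta-1 claim in that excerpt follows from put-call parity: a long call plus a short put at the same strike replicates a forward on the underlying. A quick sketch (illustrative parameters, same Black-Scholes delta as before):

```python
from math import log, sqrt, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_delta(spot: float, strike: float, vol: float, t: float, r: float = 0.0) -> float:
    """Black-Scholes delta of a European call: N(d1)."""
    d1 = (log(spot / strike) + (r + 0.5 * vol**2) * t) / (vol * sqrt(t))
    return norm_cdf(d1)

def put_delta(spot: float, strike: float, vol: float, t: float, r: float = 0.0) -> float:
    """Put-call parity: put delta = call delta - 1."""
    return call_delta(spot, strike, vol, t, r) - 1.0

# Long one call + short one put at the same strike:
spot, strike, vol, t = 110, 100, 0.25, 1.0
total_delta = call_delta(spot, strike, vol, t) - put_delta(spot, strike, vol, t)
print(total_delta)  # 1.0: the position moves one-for-one with the underlying
```

In the letter's moral translation: a fully symmetric payout profile makes the value of our "position" track the value of the present exactly.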

...I’m sure someone in the Effective Altruism community has kicked these ideas around; I’m just not aware of it. If you know of any related work, I’d love to be pointed in the right direction.

I know Toby Ord has been playing with a homologous hazard model - but we'll have to wait for the book(?).

2 comments

comment by Kit · 2019-01-09T09:03:29.598Z · score: 3 (2 votes) · EA · GW

I think the author has confused one type of payoff diagram (the probability density function of a variable) with another (the payoff of an option plotted against the value of the underlying variable). This results in a number of the claims in the piece being reversed. There seems to be confusion between other parameters too.

In finance, an option is the right to do something (typically some variant of 'buy asset A for price P'). The most surprising thing is that the piece doesn't establish where the optionality comes from. I think it just draws an analogy between the distribution of future outcomes and the payoff of an option, then treats the value of the future like an option. As above, I think this is incorrect.

One way the conclusions would carry is if one asserts that the future is net positive in expectation largely independently of size, and thus one should make the future bigger (a particular version of more variance). This argument is coherent but does not involve options.

A plausible source of optionality is that future generations might have some control over whether the world continues.* (Finance jargon would call this a put option on the future of the world with a low strike.)

Under this interpretation:

  • Standard options pricing theory applies when you can trade the thing you have an option on. Here that isn't the case: one cannot buy and sell the future to hedge the option delta.
  • One should instead use more straightforward expected utility calculations taking into account the expected actions of future actors. E.g., a crude simplification: P(world good)×(how good)×P(future actors let the world continue | good) − P(world bad)×(how bad)×P(future actors let the world continue | bad). Standard financial options pricing would give a very different formula.
  • The volatility point does hold, but for the above reasons, not the analogy drawn in the piece. One should simply be willing to trade extra potential upside (before future generations intervene) for extra potential downside (before intervention) in proportion to how much future generations have the power to stop bad outcomes closer to the time.
  • The claim that we are long a call and short a put seems false -- I think this is just drawing an incorrect analogy as noted in the first paragraph. I think the situation is more like owning an X% put on the future, where X% is the chance that future altruists have control over whether the long-term future exists. (You could alternatively see the overall situation as long X calls and long (1-X) futures.) This weakens but does not nullify the amount that a balanced upside and downside leads to focusing more on the present.
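Kit's crude simplification in the second bullet above can be written out as a function. A sketch with hypothetical numbers (the probabilities and magnitudes are made up for illustration, not drawn from the comment):

```python
def expected_value(p_good: float, v_good: float, p_continue_given_good: float,
                   p_bad: float, v_bad: float, p_continue_given_bad: float) -> float:
    """Kit's crude simplification: weight each outcome by the chance
    future actors let the world continue in that state."""
    return (p_good * v_good * p_continue_given_good
            - p_bad * v_bad * p_continue_given_bad)

# Hypothetical: good and bad futures equally likely and equally large,
# but future actors stop most bad worlds and keep most good ones.
print(expected_value(0.5, 100, 0.95, 0.5, 100, 0.1))
```

The asymmetry in the result comes entirely from the conditional continuation probabilities, which is Kit's point: the relevant lever is future actors' ability to shut down bad outcomes, not a hedging argument from options pricing.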

*Seems unclear but worth considering.

In conclusion, I got some interesting thinking out of reading this piece, but disagree with most of it.

comment by G Gordon Worley III (gworley3) · 2019-01-07T18:45:06.119Z · score: 1 (1 votes) · EA · GW

It seems I don't know enough about options for this to give me a useful additional way to think about the moral weight of future patients relative to present ones. But I expect that if I did, this would feel like a useful model for applying intuitions about options to a question in ethics, similar to the way preference theory often helps make sense of some questions related to values.