A case against strong longtermism

post by vadmas · 2020-12-15T20:56:40.521Z · EA · GW · 82 comments

Hello! My name is Vaden Masrani and I'm a grad student at UBC in machine learning. I'm a friend of the community and have been very impressed with all the excellent work done here, but I've become very worried about the new longtermist trend developing recently.

I've written a critical review of longtermism here in hopes that bringing an 'outsider's' perspective might help stimulate some new conversation in this space. I'm posting the piece in the forum hoping that William MacAskill and Hilary Greaves might see and respond to it. There's also a little reddit discussion forming that might be of interest to some.

Cheers!

82 comments

Comments sorted by top scores.

comment by Owen_Cotton-Barratt · 2020-12-15T22:04:48.499Z · EA(p) · GW(p)

Thanks! I think that there's quite a lot of good content in your critical review, including some issues that really should be discussed more. In my view there are a number of things to be careful of, but ultimately not enough to undermine the longtermist position. (I'm not an author on the piece you're critiquing, but I agree with enough of its content to want to respond to you.)

Overall I feel like a lot of your critique is not engaging directly with the case for strong longtermism; rather you're pointing out apparently unpalatable implications. I think this is a useful type of criticism, but one that often leads me to suspect that neither side is simply incorrect, and that what's needed is a synthesis position which accommodates all of the important points. (Your argument against expected value is a direct rebuttal of the argument for, but in my eyes this is one of your weaker criticisms.)

The point I most appreciate you making is that it seems like strong longtermism could be used to justify ignoring all sorts of pressing present problems. I think that this is justifiably concerning, and deserves attention. However my view is more like "beware naive longtermism" (rather like "beware naive utilitarianism") rather than thinking that the entire framework is lost.

To expand on that:

  • I think that a properly interpreted version of strong longtermism would not recommend that the world ignores present-day issues
    • Indeed, building towards the version of the future which is most likely to produce really good longterm outcomes will mean both removing acute pains and issues for the world, and broadly fostering good decision-making (which would lead to people solving urgent and tractable issues)
    • Of course there are a lot of things wrong with the world and we're not very close to optimal global allocation of resources, so I think it's acceptable as a form of triage to say "right now while these extremely pressing global issues (existential risk etc.) are so severely neglected, we'd prefer to devote marginal resources there than to solving immediate suffering"
  • I think that "strong longtermism" (as analysed by philosophers) won't end up being the best version of action-guiding advice to spread (even on longtermist grounds), because there will be too much scope for naive interpretation; rather we'll end up building up a deeper repertoire of things to communicate

(I'll address a few other points in replies to this comment, for better threading and because they seem less centrally important to me.)

Replies from: Owen_Cotton-Barratt, Owen_Cotton-Barratt, vadmas
comment by Owen_Cotton-Barratt · 2020-12-15T22:56:23.021Z · EA(p) · GW(p)

In response to the plea at the end (and quoting of Popper) to focus on the now over the utopian future: I find myself sceptical and ultimately wanting to disagree with the literal content, and yet feeling that there is a good deal of helpful practical advice there:

  • I don't think that we must focus on the suffering now over thinking about how to help the further-removed future
    • I do think that if all people across time were united in working for the good, then our comparative advantage as the only people who can address current issues (which have both intrinsic and instrumental value) would mean that a large share of our effort would be allocated to this work
  • I do think that attempts to focus on hard-to-envision futures risk coming to nothing (or worse) because of poor feedback loops
    • In contrast tackling issues that are within our foresight horizon allows us to develop experience and better judgement about how to address important issues (while also providing value along the way!)
    • I don't think this means we should never attempt such work; rather we should do so carefully, and in connection with what we can learn from wrestling with more imminent challenges
comment by Owen_Cotton-Barratt · 2020-12-15T22:20:12.924Z · EA(p) · GW(p)

Regarding the point about the expectation of the future being undefined: I think this is correct and there are a number of unresolved issues around exactly when we should apply expectations, how we should treat them, etc.

Nonetheless I think that we can say that they're a useful tool on lots of scales, and many of the arguments about the future being large seem to bite without relying on getting far out into the tails of our hypothesis space. I would welcome more work on understanding the limits of this kind of reasoning, but I'm wary of throwing the baby out with the bathwater if we say we must throw our hands up rather than reason at all about things affecting the future.

To see more discussion of this topic, I particularly recommend Daniel Kokotajlo's series of posts on tiny probabilities of vast utilities [? · GW].

Replies from: Owen_Cotton-Barratt
comment by Owen_Cotton-Barratt · 2020-12-15T22:36:30.759Z · EA(p) · GW(p)

As a minor point, I don't think that discounting the future really saves you from undefined expectations, as you're implying. I think that on simple models of future growth -- such as are often used in practice -- it does, but if you give some credence to wild futures with crazy growth rates, then it's easy to make the entire thing undefined even with a positive discount rate for pure time preference.
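[Editor's sketch of the point above, with entirely made-up numbers: if we put even tiny credence on scenarios whose growth rate exceeds the discount rate, the partial sums of the discounted expectation keep growing as we admit wilder scenarios, so the full expectation is not finite.]

```python
# Sketch: a positive discount rate need not give a finite expected value
# if we put some credence on futures with sufficiently fast growth.
# All numbers here are illustrative, not empirical estimates.

discount = 0.02  # 2% pure time preference per year

# Suppose we give probability 2^-n to a scenario in which welfare grows
# like (1 + g_n)^t per year, with g_n = 0.05 * n. The discounted value of
# scenario n behaves like ((1 + g_n) / (1 + discount))^t, which blows up
# in n, so the expectation over scenarios diverges.

def discounted_partial_expectation(n_scenarios, horizon):
    total = 0.0
    for n in range(1, n_scenarios + 1):
        prob = 2.0 ** -n
        growth = 0.05 * n
        # discounted welfare in scenario n, summed to the horizon
        value = sum(((1 + growth) / (1 + discount)) ** t for t in range(horizon))
        total += prob * value
    return total

# The partial expectations keep growing as we admit wilder scenarios:
for k in [5, 10, 15, 20]:
    print(k, discounted_partial_expectation(k, horizon=200))
```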

comment by vadmas · 2020-12-16T22:01:47.500Z · EA(p) · GW(p)

Hey Owen - thanks for your feedback! Just to respond to a few points - 

>Your argument against expected value is a direct rebuttal of the argument for, but in my eyes this is one of your weaker criticisms.

Would you be able to elaborate a bit on where the weaknesses are? I see in the thread you agree the argument is correct (and from googling your name I see you have a pure math background! Glad it passes your sniff-test :) ). If we agree EVs are undefined over possible futures, then in the Shivani example, this is like comparing 3 lives to NaN. Does this not refute at least one of the two assumptions longtermism needs to 'get off the ground'?

> Overall I feel like a lot of your critique is not engaging directly with the case for strong longtermism; rather you're pointing out apparently unpalatable implications.

Just to comment here - yup I intentionally didn't address the philosophical arguments in favor of longtermism, just because I felt that criticizing the incorrect use of expected values was a "deeper" critique and one which I hadn't seen made on the forum before.  What would the argument for strong longtermism look like without the expected value calculus? It's my impression that EVs are central to the claim that we can and should concern ourselves with the future 1 billion years from now. 

Also my hope was that this would highlight a methodological error (equating made up numbers to real data) that could be rectified, whether or not you buy my other arguments about longtermism.  I'd be a lot more sympathetic with longtermism in general if the proponents were careful to adhere to the methodological rule of only ever comparing subjective probabilities with other subjective probabilities  (and not subjective probabilities with objective ones, derived from data). 

> I would welcome more work on understanding the limits of this kind of reasoning, but I'm wary of throwing the baby out with the bathwater if we say we must throw our hands up rather than reason at all about things affecting the future.

Yup totally - if you permit me a shameless self plug, I wrote about an alternative way to reason here.

> As a minor point, I don't think that discounting the future really saves you from undefined expectations, as you're implying.

Oops sorry no wasn't implying that - two orthogonal arguments.

>I do think that if all people across time were united in working for the good

People are united across time working for the good! Each generation does what it can to make the world a little bit better for its descendants, and in this way we are all united. 

 

Replies from: Owen_Cotton-Barratt, Owen_Cotton-Barratt, Owen_Cotton-Barratt
comment by Owen_Cotton-Barratt · 2020-12-16T23:28:06.884Z · EA(p) · GW(p)

>Your argument against expected value is a direct rebuttal of the argument for, but in my eyes this is one of your weaker criticisms.

> Would you be able to elaborate a bit on where the weaknesses are? I see in the thread you agree the argument is correct (and from googling your name I see you have a pure math background! Glad it passes your sniff-test :) ).

I think it proves both too little and too much.

Too little, in the sense that it's contingent on things which don't seem that related to the heart of the objections you're making. If we were certain that the accessible universe were finite (as is suggested by (my lay understanding of) current physical theories), and we had certainty in some finite time horizon (however large), then all of the EVs would become defined again and this technical objection would disappear.

In that world, would you be happy to drop your complaints? I don't really think you should, so it would be good to understand what the real heart of the issue is.

Too much, in the sense that if we apply the argument naively then it appears to rule out using EVs as a decision-making tool in many practical situations (where subjective probabilities are fed into the process), including many where we have practical experience of it and it has a good track record.

Overall, my take is something like:

  • This is a technical obstruction around use of EVs, and one which might turn out to be important
  • We know that EVs seem like a really important/useful tool in a wide range of domains
    • Including:
      • ones with small probabilities (e.g. seatbelts)
      • ones based on subjective probabilities (e.g. talk to traders about their use of them)
  • Since EVs seem useful at least for reasoning about finite-horizon worlds, it would be way premature to discard them
    • Instead let's keep on using them and see where it gets
    • Let's remain cautious, particularly in cases which most risk brushing up against pathologies
    • Let's give the technical obstruction a bit of attention, and see if we can come up with anything better (see e.g. Tarsney's work on stochastic dominance)
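[Editor's sketch of the seatbelt point above, with entirely invented probabilities and disvalues: expected value over a small subjective probability still gives clear and sensible action guidance.]

```python
# Toy seatbelt example: expected value with a small subjective probability.
# All probabilities and costs are made up purely for illustration.

p_crash = 1e-4             # subjective probability of a serious crash per trip
harm_unbelted = 1_000_000  # disvalue of a serious crash without a seatbelt
harm_belted = 200_000      # disvalue of a serious crash with a seatbelt
cost_of_buckling = 1       # minor inconvenience per trip

ev_no_belt = p_crash * harm_unbelted
ev_belt = p_crash * harm_belted + cost_of_buckling

# Buckling wins in expectation despite the tiny probability involved.
print(ev_no_belt, ev_belt)
```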

> If we agree EVs are undefined over possible futures, then in the Shivani example, this is like comparing 3 lives to NaN.

[Mostly an aside] I think the example has been artificially simplified to make the point cleaner for an audience of academic philosophers, and if you take account of indirect effects from giving to AMF then properly we should be comparing NaN to NaN. But I agree that we should not be trying to make any longtermist decisions by literally taking expectations of the number of future lives saved.

> Does this not refute at least 1 / 2 of the assumptions longtermism needs to 'get off the ground'?

Not in my view. I don't think we should be using expectations over future lives as a fundamental decision-making tool, but I do think that thinking in terms of expectations can be helpful for understanding possible future paths. I think it's a moderately robust point that the long-term impacts of our actions are predictably a bigger deal than the short-term impacts -- and this point would survive for example artificially capping the size of possible futures we could reach.

(I think it's a super important question how longtermists should make decisions; I'll write up some more of my thoughts on this sometime.)

Replies from: ben_chugg
comment by ben_chugg · 2020-12-17T20:39:57.172Z · EA(p) · GW(p)

Hi Owen! Really appreciate you engaging with this post. (In the interest of full disclosure, I should say that I'm the Ben acknowledged in the piece, and I'm in no way unbiased. Also, unrelatedly, your story of switching from pure maths to EA-related areas has had a big influence over my current trajectory, so thank you for that :) ) 

I'm confused about the claim 

> I don't think they're saying (and I certainly don't think) that we can ignore the effects of our actions over the next century; rather I think those effects matter much more for their instrumental value than intrinsic value.

This seems in direct opposition to what the authors say (and what Vaden quoted above), namely that:

> The idea, then, is that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years

I understand that they may not feel this way, but it is what they argued for and is, consequently, the idea that deserves to be criticized. Next, you write that if

> we had certainty in some finite time horizon (however large), then all of the EVs would become defined again and this technical objection would disappear.

I don't think so. The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. Let me turn the question around on you: supposing we knew that the time-horizon of the universe was finite, can you write out the sample space, $\sigma$-algebra, and measure which allow us to compute over possible futures?

Finally, I'm not sure what to make of 

> e.g. if someone tried the reasoning from the Shivani example in earnest rather than as a toy example in a philosophy paper I think it would rightly get a lot of criticism

When reading their paper, I honestly did not read it as a toy example. And I don't believe the authors state it as such.  When discussing Shivani's options they write:

> Our remaining task, then, is to show that there does indeed exist at least one option available to Shivani with the property that its far-future expected value (over BAU) is significantly greater than the best available short-term expected value (again relative to BAU). That is the task of the remainder of this section.

and when discussing AI risk in particular:

> There is also a wide consensus among diverse leading thinkers (both within and outside the AI Research community) to the effect that the risks we have just hinted at are indeed very serious ones, and that much more should be done to mitigate them.

Considering that the Open Philanthropy Project has poured millions into AI Safety, that it's listed as a top cause by 80K, and that EA's far-future fund makes payouts to AI safety work, if Shivani's reasoning isn't to be taken seriously then now is probably a good time to make that abundantly clear. Apologies for the harshness in tone here, but for an august institute like GPI to make normative suggestions in its research and then expect no one to act on them is irresponsible.

Anyway, I'm a huge fan of 95% of EA's work, but really think it has gone down the wrong path with longtermism. Sorry for the sass -- much love to all :) 

Replies from: Max_Daniel, Owen_Cotton-Barratt, Flodorner, djbinder
comment by Max_Daniel · 2020-12-18T19:20:58.135Z · EA(p) · GW(p)

> The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. Let me turn the question around on you: Suppose we knew that the time-horizon of the universe was finite, can you write out the sample space, $\sigma$-algebra, and measure which allows us to compute over possible futures?

I can see two possible types of arguments here, which are importantly different.

  1. Arguments aiming to show that there can be no probability measure - or at least no "non-trivial" one - on some relevant set such as the set of all possible futures.
  2. Arguments aiming to show that, among the many probability measures that can be defined on some relevant set, there is no, or no non-arbitrary way to identify a particular one.

[ETA: In this comment [EA(p) · GW(p)], which I hadn't seen before writing mine, Vaden seems to confirm that they were trying to make an argument of the second rather than the first kind.]

In this comment I'll explain why I think both types of arguments would prove too much and thus are non-starters. In other comments I'll make some more technical points about type 1 [EA(p) · GW(p)] and type 2 [EA(p) · GW(p)] arguments, respectively.

(I split my points between comments so the discussion can be organized better and people can use up-/downvotes in a more fine-grained way.)

I'm doing this largely because I'm worried that to some readers the technical language in Vaden's post and your comment will suggest that longtermism specifically faces some deep challenges that are rooted in advanced mathematics. But in fact I think that characterization would be seriously mistaken (at least regarding the issues you point to). Instead, I think that the challenges either have little to do with the technical results you mention or that the challenges are technical but not specific to longtermism. 

[After writing I realized that the below has a lot of overlap with what Owen [EA(p) · GW(p)] and Elliot [EA(p) · GW(p)] have written earlier. I'm still posting it because there are slight differences and there is no harm in doing so, but people who read the previous discussions may not want to read this.]

Both types of arguments prove too much because they (at least based on the justifications you've given in the post and discussion here) are not specific to longtermism at all. They would e.g. imply that I can't have a probability distribution over how many guests will come to my Christmas party tomorrow, which is absurd.

To see this, note that everything you say would apply in a world that ends in two weeks, or to deliberations that ignore any effects after that time. In particular, it is still true that the set of these possible 'short futures' is infinite (my housemate could enter the room any minute and shout any natural number), and that the set of possible futures contains things that, like your example of a sequence of black and white balls, have no unique 'natural' structure or measure (e.g. the collection of atoms in a certain part of my table, or the types of possible items on that table).

So these arguments seem to show that we can never meaningfully talk about the probability of any future event, whether it happens in a minute or in a trillion years. Clearly, this is absurd.

Now, there is a defence against this argument, but I think this defence is just as available to the longtermist as it is to (e.g.) me when thinking about the number of guests at my Christmas party next week. 

This defence is that for any instance of probabilistic reasoning about the future we can simply ignore most possible futures, and in fact only need to reason over specific properties of the future. For instance, when thinking about the number of guests to my Christmas party, I can ignore people shouting natural numbers or the collection of objects on my table - I don't need to reason about anything close to a complete or "low-level" (e.g. in terms of physics) description of the future. All I care about is a single natural number - the number of guests - and each number corresponds to a huge set of futures at the level of physics.

But this works for many if not all longtermist cases as well! The number of people in one trillions years is a natural number, as is the year in which transformative AI is being developed, etc. Whether or not identifying the relevant properties, or the probability measure we're adopting, is harder than for typical short-term cases - and maybe prohibitively hard - is an interesting and important question. But it's an empirical question, not one we should expect to answer by appealing to mathematical considerations around the cardinality or measurability of certain sets.
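[Editor's sketch of the party-guest point above, with invented guest names and attendance probabilities: we get a perfectly well-defined distribution over the high-level property (the guest count) even though each outcome lumps together astronomically many fine-grained physical futures.]

```python
# Sketch: reasoning about a high-level property of the future (number of
# party guests) without ever needing a measure over complete fine-grained
# futures. Names and probabilities are purely illustrative.

import itertools

guests = ["Alice", "Bob", "Carol"]
p_attend = {"Alice": 0.9, "Bob": 0.5, "Carol": 0.3}

# Outcomes: which subset of invitees shows up (each such outcome still
# corresponds to a huge set of futures at the level of physics -- and
# that's fine, we never need to represent those).
dist_over_counts = {}
for attendance in itertools.product([0, 1], repeat=len(guests)):
    prob = 1.0
    for g, a in zip(guests, attendance):
        prob *= p_attend[g] if a else 1 - p_attend[g]
    count = sum(attendance)  # the high-level property we care about
    dist_over_counts[count] = dist_over_counts.get(count, 0.0) + prob

# A well-defined probability distribution over the guest count:
print(dist_over_counts)
```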

Separately, there may be an interesting question about how I'm able to identify the high-level properties I'm reasoning about - whether that high-level property is the number of people coming to my party or the number of people living in a trillion years. How do I know I "should pay attention" only to the number of party guests and not which natural numbers they may be shouting? And how am I able to "bridge" between more low-level descriptions of futures (e.g. a list of specific people coming to the party, or a video of the party, or even a set of initial conditions plus laws of motion for all relevant elementary particles)? There may be interesting questions here, but I think they are questions for philosophy or psychology, and in my view they aren't particularly illuminated by referring to concepts from measure theory. (And again, they aren't specific to longtermism.)

Replies from: Max_Daniel, Max_Daniel
comment by Max_Daniel · 2020-12-18T19:26:56.879Z · EA(p) · GW(p)

Technical comments on type-1 arguments (those aiming to show there can be no probability measure). [Refer to the parent comment [EA(p) · GW(p)] for the distinction between type 1 and type 2 arguments.]

I basically don't see how such an argument could work. Apologies if that's totally clear to you and you were just trying to make a type-2 argument. However, I worry that some readers might come away with the impression that there is a viable argument of type 1 since Vaden and you mention issues of measurability and infinite cardinality. These relate to actual mathematical results showing that for certain sets, measures with certain properties can't exist at all.

However, I don't think this is relevant to the case you describe. And I also don't think it can be salvaged for an argument against longtermism. 

First, in what sense can sets be "immeasurable"? The issue can arise in the following situation. Suppose we have some set (in this context "sample space" - think of the elements as all possible instances of things that can happen at the most fine-grained level), and some measure (in this context "probability" - but it could also refer to something we'd intuitively call length or volume) we would like to assign to some subsets (the subsets in this context are "events" - e.g. the event that Santa Claus enters my room now is represented by the subset containing all instances with that property).

In this situation, it can happen that there is no way to extend this measure to all subsets. 

The classic example here is the real line as base set. We would like a measure that assigns measure $|b - a|$ to each interval $[a, b]$ (the set of real numbers from $a$ to $b$), thus corresponding to our intuitive notion of length. E.g. the interval $[-1, 3]$ should have length 4.

However, it turns out that there is no measure that assigns each interval its length and 'works' for all subsets of the real numbers. I.e. each way of extending the assignment to all subsets of the real line would violate one of the properties we want measures to have (e.g. the measure of an at most countable disjoint union of sets should be the sum of the measures of the individual sets).

Thus we have to limit ourselves to assigning a measure to only some subsets. (In technical terms: we have to use a $\sigma$-algebra that's strictly smaller than the full set of all subsets.) In other words, there are some subsets the measure of which we have to leave undefined. Those are immeasurable sets.

Second, why don't I think this will be a problem in this context?

  • At the highest level, note that even if we are in a context with immeasurable sets this does not mean that we get no (probability) measure at all. It just means that the measure won't "work" for all subsets/events. So for this to be an objection to longtermism, we would need a further argument for why specific events we care about are immeasurable - or in other words, why we can't simply limit ourselves to the set of measurable events.
    • Note that immeasurable sets, to the extent that we can describe them concretely at all, are usually highly 'weird'. If you try to google for pictures of standard examples like Vitali sets you won't find a single one because we essentially can't visualize them. Indeed, by design every set that we can construct from intervals by countably many standard operations like intersections and unions is measurable. So at least in the case of the real numbers, we arguably won't encounter immeasurable sets "in practice".
    • Note also that the phenomenon of immeasurable sets enables a number of counterintuitive results, such as the Banach-Tarski theorem. Loosely speaking this theorem says we can cut up a ball into pieces, and then by moving around those pieces and reassembling them get a ball that has twice the volume of the original ball; so for example "a pea can be chopped up and reassembled into the Sun".
      • But usually the conclusion we draw from this is not that it's meaningless to use numbers to refer to the coordinates of objects in space, or that our notion of volume is meaningless and that "we cannot measure the volume of objects" (and to the extent there is a problem it doesn't exclusively apply to particularly large objects - just as any problem relevant to predicting the future wouldn't specifically apply to longtermism). At most, we might wonder whether our model of space as continuous in real-number coordinates "breaks down" in certain edge cases, but we don't think that this invalidates pragmatic uses of this model that never use its full power (in terms of logical implications).
  • Immeasurable subsets are a phenomenon intimately tied to uncountable sets - i.e. ones that are even "larger" than the natural numbers (for instance, the real numbers are uncountable, but the rational numbers are not). This is roughly because the relevant concepts like $\sigma$-algebras and measures are defined in terms of countably many operations like unions or sums; and if you "fix" the measure of some sets in a way that's consistent at all, then you can uniquely extend this to all sets you can get from those by taking complements and countable intersections and unions. In particular, if in a countable set you fix the measure of all singleton sets containing just one element, then this defines a unique measure on the set of all subsets.
    • Your examples of possible futures where people shout different natural numbers involve only countable sets. So it's hard to see how we'd get any problem with immeasurable sets there.
      • You might be tempted to modify the example to argue that the set of possible futures is uncountably infinite because it contains people shouting all real numbers. However, (i) it's not clear if it's possible for people to shout any real number, (ii) even if it is then all my other remarks still apply, so I think this wouldn't be a problem, certainly none specific to longtermism.
        • Regarding (i), the problem is that there is no general way to refer to an arbitrary real number within a finite window of time. In particular, I cannot "shout" an infinite and non-periodic decimal expansion; nor can I "shout" a sequence of rational numbers that converges to the real number I want to refer to (except maybe in a few cases where the sequence is a closed-form function of n).
          • More generally, if utterances are individuated by the finite sequence of words I'm using, then (assuming a finite alphabet) there are only countably many possible utterances I can make. If that's right then I cannot refer to an arbitrary real number precisely because there are "too many" of them.
      • Similarly, the set of all sequences of black or white balls is uncountable, but it's unclear whether we should think that it's contained in the set of all possible futures.
    • More importantly: if there were serious problems due to immeasurable sets - whether with longtermism or elsewhere - we could retreat to reasoning about a countable subset. For instance, if I'm worried that predicting the development of transformative AI is problematic because "time from now" is measured in real numbers, I could simply limit myself to only reasoning about rational numbers of (e.g.) seconds from now.
      • There may be legitimate arguments for this response being 'ad hoc' or otherwise problematic. (E.g. perhaps I would want to use properties of rational numbers that can only be proven by using real numbers "within the proof".) But especially given the large practical utility of reasoning about e.g. volumes of space or probabilities of future events, I think it at least shows that immeasurability can't ground a decisive knock-down argument.
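[Editor's sketch of the singleton-measure point above: on a countable sample space, fixing the probability of each singleton determines the probability of every subset, so there is no room for immeasurable sets. The geometric distribution here is an arbitrary illustrative choice.]

```python
# Sketch: on a countable sample space (here the naturals), singleton
# probabilities determine the measure of *every* subset by summation.

def p_singleton(n, q=0.5):
    """P({n}) for a geometric distribution on n = 0, 1, 2, ..."""
    return (1 - q) * q ** n  # with q = 0.5 this is 2^-(n+1)

def p_event(predicate, q=0.5, tol=1e-12, max_n=10_000):
    """Probability of the event {n : predicate(n)}, by summing singletons."""
    total = 0.0
    for n in range(max_n):
        if predicate(n):
            total += p_singleton(n, q)
        if q ** n < tol:  # remaining tail is negligible
            break
    return total

# Any subset gets a probability, e.g. "an even number is shouted":
print(p_event(lambda n: n % 2 == 0))  # close to 2/3 for q = 0.5
```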
Replies from: Max_Daniel
comment by Max_Daniel · 2020-12-19T13:37:21.464Z · EA(p) · GW(p)

As even more of an aside, type 1 arguments would also be vulnerable to a variant of Owen's objection that they "prove too little" [EA(p) · GW(p)].

However, rather than the argument depending too much on contingent properties of the world (e.g. whether it's spatially infinite), the issue here is that they would depend on the axiomatization of mathematics.

The situation is roughly as follows: There are two different axiomatizations of mathematics with the following properties: 

  • In both of them all maths that any of us are likely to ever "use in practice" works basically the same way.
  • For parallel situations (i.e. assignments of measure to some subsets of some set, which we'd like to extend to a measure on all subsets) there are immeasurable subsets in exactly one of the axiomatizations.

Specifically, for example, for our intuitive notion of "length" there are immeasurable subsets of the real numbers in the standard axiomatization of mathematics (called ZFC here). However, if we omit a single axiom - the axiom of choice - and replace it with an axiom that loosely says that there are weirdly large sets then every subset of the real numbers is measurable. [ETA: Actually it's a bit more complicated, but I don't think in a way that matters here. It doesn't follow directly from these other axioms that everything is measurable, but using these axioms it's possible to construct a "model of mathematics" in which that holds. Even less importantly, we don't totally omit the axiom of choice but replace it with a weaker version.]

I think it would be pretty strange if the viability of longtermism depended on such considerations. E.g. imagine writing a letter to people in 1 million years explaining why you didn't choose to try to help more rather than fewer of them. Or imagine getting such a letter from the distant past. I think I'd be pretty annoyed if I read "we considered helping you, but then we couldn't decide between the axiom of choice and inaccessible cardinals ...".

comment by Max_Daniel · 2020-12-19T13:18:26.119Z · EA(p) · GW(p)

Technical comments on type-2 arguments (i.e. those that aim to show there is no, or no non-arbitrary way for us to identify a particular probability measure.) [Refer to the parent comment [EA(p) · GW(p)] for the distinction between type 1 and type 2 arguments.]

I think this is closer to the argument Vaden was aiming to make despite the somewhat nonstandard use of "measurable" (cf. my comment on type 1 arguments [EA(p) · GW(p)] for what measurable vs. immeasurable usually refers to in maths), largely because of this part (emphasis mine) [ETA: Vaden also confirms this in this comment [EA(p) · GW(p)], which I hadn't seen before writing my comments]:

But don’t we apply probabilities to infinite sets all the time? Yes - to measurable sets. A measure provides a unique method of relating proportions of infinite sets to parts of itself, and this non-arbitrariness is what gives meaning to the notion of probability. While the interval between 0 and 1 has infinitely many real numbers, we know how these relate to each other, and to the real numbers between 1 and 2.

Some comments:

  • Yes, we need to be more careful when reasoning about infinite sets since some of our intuitions only apply to finite sets. Vaden's ball reshuffling example and the "Hilbert's hotel" thought experiment they mention are two good examples for this.
  • However, the ball example only shows that one way of specifying a measure no longer works for infinite sample spaces: we can no longer get a measure by counting how many instances a subset (think "event") consists of and dividing this by the number of all possible samples because doing so might amount to dividing infinity by infinity.
    • (We can still get a measure by simply setting the measure of any infinite subset to infinity, which is permitted for general measures, and treating something finite divided by infinity as 0. However, that way the full infinite sample space has measure infinity rather than 1, and thus we can't interpret this measure as probability.)
    • But this need not be problematic. There are a lot of other ways for specifying measures, for both finite and infinite sets. In particular, we don't have to rely on some 'mathematical structure' on the set we're considering (as in the examples of real numbers that Vaden is giving) or other a priori considerations; when using probabilities for practical purposes, our reasons for using a particular measure will often be tied to empirical information.
      • For example, suppose I have a coin in my pocket, and I have empirical reasons (perhaps based on past observations, or perhaps I've seen how the coin was made) to think that a flip of that coin results in heads with probability 60% and tails with probability 40%. When reasoning about this formally, I might write down {H, T} as sample space, the set of all subsets as σ-algebra, and the unique measure P with P({H}) = 0.6 and P({T}) = 0.4.
        • But this is not because there is any general sense in which the set {H, T} is more "measurable" than the set of all sequences of black or white balls. Without additional (e.g. empirical) context, there is no non-arbitrary way to specify a measure on either set. And with suitable context, there will often be a 'natural' or 'unique' measure for either because the arbitrariness is defeated by the context.
      • This works just as well when I have no "objective" empirical data. I might simply have a gut feeling that the probability of heads is 60%, and be willing to e.g. accept bets corresponding to that belief. Someone might think that that's foolish if I don't have any objective data and thus bet against me. But it would be a pretty strange objection to say that me giving a probability of 60% is meaningless, or that I'm somehow not able or not allowed to enter such bets.
      • This works just as well for infinite sample spaces. For example, I might have a single radioactive atom in front of me, and ask myself when it will decay. For instance, I might want to know the probability that this atom will decay within the next 10 minutes. I won't be deterred by the observation that I can't get this probability by counting the number of "points in time" in the next 10 minutes and divide them by the total number of points in time. (Nor should I use 'length' as derived from the structure of the real numbers, and divide 10 by infinity to conclude that the probability is zero.) I will use an exponential distribution - a probability distribution on the real numbers which, in this context, is non-arbitrary: I have good reasons to use it and not some other distribution.
        • Note that even if we could get the probability by counting it would be the wrong one, because the probability that the atom decays isn't uniform. Similarly, if I have reasons to think that my coin is biased, I shouldn't calculate probabilities by naive counting using the set {H, T}. Overall, I struggle to see how the availability of a counting measure is important to the question of whether we can identify a "natural" or "unique" measure.
    • More generally, we manage to identify particular probability measures to use on both finite and infinite sample spaces all the time, basically any time we use statistics for real-world applications. And this is not because we're dealing with particularly "measurable" or otherwise mathematically special sample spaces, and despite the fact that there are lots of possible probability measures that we could use.
      • Again, I do think there may be interesting questions here: How do we manage to do this? But again, I think these are questions for psychology or philosophy that don't have to do with the cardinality or measurability of sets.
    • Similarly, I think that looking at statistical practice suggests that your challenge of "can you write down the measure space?" is a distraction rather than pointing to a substantial problem. In practice we often treat particular probability distributions as fundamental (e.g. we're assuming that something is normally distributed with certain parameters) without "looking under the hood" at the set-theoretic description of random variables. For any given application where we want to use a particular distribution, there are arbitrarily many ways to write down a measure space and a random variable having that distribution; but usually we only care about the distribution and not these more fundamental details, and so aren't worried by any "non-uniqueness" problem.
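A minimal sketch of the two examples above, the biased coin and the decaying atom, in Python. The decay rate `lam` is hypothetical, chosen purely for illustration; the point is only that both the finite and the infinite sample space get a perfectly well-defined probability measure from empirical context:

```python
import math

# Finite sample space: a biased coin. The measure is fixed by empirical
# context (past flips, how the coin was made), not by any special
# "measurability" of the two-element set {H, T}.
P_coin = {"H": 0.6, "T": 0.4}
assert abs(sum(P_coin.values()) - 1.0) < 1e-12

# Infinite sample space: the decay time of a single atom, modelled with an
# exponential distribution. The rate lam is a hypothetical empirical input;
# the probability of decay within t minutes is 1 - exp(-lam * t), which is
# well-defined despite there being uncountably many "points in time".
def prob_decay_within(t_minutes, lam=0.05):
    return 1.0 - math.exp(-lam * t_minutes)

p10 = prob_decay_within(10)
```

Neither calculation involves counting outcomes and dividing; the measure comes from the (here assumed) empirical model, which is the point being made about "natural" measures.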

The most viable anti-longtermist argument I could see in the vicinity would be roughly as follows:

  • Argue that there is some relevant contingent (rather than e.g. mathematical) difference between longtermist and garden-variety cases.
    • Probably one would try to appeal to something like the longtermist cases being more "complex" relative to our reasoning and computational capabilities.
    • One could also try an "argument from disagreement": perhaps our use of probabilities when e.g. forecasting the number of guests to my Christmas party is justified simply by the fact that ~everyone agrees how to do this. By contrast, in longtermist cases, maybe we can't get such agreement.
  • Argue that this difference makes a difference for whether we're justified to use subjective probabilities or expected values, or whatever the target of the criticism is supposed to be.

But crucially, I think mathematical features of the objects we're dealing with when talking about common practices in a formal language are not where we can hope to find support for such an argument. This is because the longtermist and garden-variety cases don't actually differ relevantly regarding these features.

Instead, I think the part we'd need to understand is not why there might be a challenge, but how and why in garden-variety cases we're able to overcome that challenge. Only then can we assess whether these - or other - "methods" are also available to the longtermist.

Replies from: brekels
comment by brekels · 2020-12-24T20:44:14.533Z · EA(p) · GW(p)

Hi Max! Again, I agree the longtermist and garden-variety cases may not actually differ regarding the measure-theoretic features in Vaden's post, but some additional comments here.

But it would be a pretty strange objection to say that me giving a probability of 60% is meaningless, or that I'm somehow not able or not allowed to enter such bets.

Although "probability of 60%" may be less meaningful than we'd like or expect, you are certainly allowed to enter such bets. In fact, someone willing to take the other side suggests that he/she disagrees. This highlights the difficulty of converging on objective probabilities for future outcomes which aren't directly subject to domain-specific science (e.g. laws of planetary motion). Closer in time, we might converge reasonably closely on an unambiguous measure, or an appropriate parametric statistical model.

Regarding the "60% probability" for future outcomes, a useful thought experiment for me was how I might reason about the risk profile of bets made on open-ended future outcomes. I quickly become less convinced I'm estimating meaningful risk the further out I go. Further, we only run the future once, so it's hard to actually confirm our probability is meaningful (as we can for repeated coin flips). We could make longtermist bets by transferring money between our far-future offspring, but we can't tell who comes out on top "in expectation" beyond simple arbitrages.

This defence is that for any instance of probabilistic reasoning about the future we can simply ignore most possible futures

Honest question, being new to EA: is it not problematic to restrict our attention to possible futures, or aspects of futures, which are relevant to a single issue at a time? Shouldn't we calculate expected utility over billion-year futures for all current interventions, and set our relative propensity for actions = exp{α * EU} / normalizer?

For example, the downstream effects of donating to Anti-Malaria would be difficult to reason about, but we are clueless as to whether its EU would be dwarfed by AI safety on the billion-year timescale: e.g. bringing the entire world out of poverty might limit the political risks that lead to totalitarian government.
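The exp{α * EU} / normalizer rule suggested above is a softmax over expected utilities. A minimal sketch, with purely hypothetical EU numbers for the interventions named:

```python
import math

def action_propensities(expected_utilities, alpha=1.0):
    """Softmax over expected utilities: propensity_i = exp(alpha * EU_i) / Z.
    Subtracting the max EU before exponentiating avoids overflow without
    changing the resulting distribution."""
    m = max(expected_utilities.values())
    weights = {a: math.exp(alpha * (eu - m)) for a, eu in expected_utilities.items()}
    z = sum(weights.values())
    return {a: w / z for a, w in weights.items()}

# Hypothetical expected utilities, for illustration only.
props = action_propensities({"anti_malaria": 1.0, "ai_safety": 1.2, "do_nothing": 0.0})
```

As α grows, the propensities concentrate on the single highest-EU action, recovering pure EU-maximization as a limiting case.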

Replies from: Max_Daniel
comment by Max_Daniel · 2020-12-27T20:49:12.188Z · EA(p) · GW(p)

Honest question, being new to EA: is it not problematic to restrict our attention to possible futures, or aspects of futures, which are relevant to a single issue at a time? Shouldn't we calculate expected utility over billion-year futures for all current interventions, and set our relative propensity for actions = exp{α * EU} / normalizer?

Yes, I agree that it's problematic. We "should" do the full calculation if we could, but in fact we can't because of our limited capacity for computation/thinking.

But note that in principle this situation is familiar. E.g. a CEO might try to maximize the long-run profits of her company, or a member of government might try to design a healthcare policy that maximizes wellbeing. In none of these cases are we able to do the "full calculation", albeit by a less dramatic margin than for longtermism.

And we don't think that the CEO's or the politician's effort are meaningless or doomed or anything like that. We know that they'll use heuristics, simplified models, or other computational shortcuts; we might disagree with them which heuristics and models to use, and if repeatedly queried with "why?" both they and we would come to a place where we'd struggle to justify some judgment call or choice of prior or whatever. But that's life - a familiar situation and one we can't get out of.

comment by Owen_Cotton-Barratt · 2020-12-17T23:55:53.992Z · EA(p) · GW(p)

Anyway, I'm a huge fan of 95% of EA's work, but really think it has gone down the wrong path with longtermism. Sorry for the sass -- much love to all :) 

It's all good! Seriously, I really appreciate the engagement from you and Vaden: it's obvious that you both care a lot and are offering the criticism precisely because of that. I currently think you're mistaken about some of the substance, but this kind of dialogue is the type of thing which can help to keep EA intellectually healthy.

I'm confused about the claim 

>I don't think they're saying (and I certainly don't think) that we can ignore the effects of our actions over the next century; rather I think those effects matter much more for their instrumental value than intrinsic value.

This seems in direct opposition to what the authors say (and what Vaden quoted above), that 

>The idea, then, is that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years

I understand that they may not feel this way, but it is what they argued for and is, consequently, the idea that deserves to be criticized.

So my interpretation had been that they were using a technical sense of "evaluating actions", meaning something like "if we had access to full information about consequences, how would we decide which ones were actually good".

However, on a close read I see that they're talking about ex ante effects. This makes me think that this is at least confusingly explained, and perhaps confused. It now seems most probable to me that they mean something like "we can ignore the effects of the actions contained in the first 100 years, except insofar as those feed into our understanding of the longer-run effects". But the "except insofar ..." clause would be concealing a lot, since 100 years is so long that almost all of our understanding of the longer-run effects must go via guesses about the long-term goodness of the shorter-run effects.

[As an aside, I've been planning to write a post about some related issues; maybe I'll move it up my priority stack.]

The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. Let me turn the question around on you: Suppose we knew that the time-horizon of the universe was finite, can you write out the sample space, $\sigma$-algebra, and measure which allows us to compute over possible futures? 

I like the question; I think this may be getting at something deep, and I want to think more about it.

Nonetheless, my first response was: while I can't write this down, if we helped ourselves to some cast-iron guarantees about the size and future lifespan of the universe (and made some assumptions about quantization) then we'd know that the set of possible futures was smaller than a particular finite number (since there would only be a finite number of time steps and a finite number of ways of arranging all particles at each time step). Then even if I can't write it down, in principle someone could write it down, and the mathematical worries about undefined expectations go away.

The reason I want to think more about it is that I think there's something interesting about the interplay between objective and subjective probabilities here. How much should it help me as a boundedly rational actor to know that in theory a fully rational actor could put a measure on things, if it's practically immeasurable for me?

Considering that the Open Philanthropy Project has poured millions into AI Safety, that it's listed as a top cause by 80K, and that EA's far-future-fund makes payouts to AI safety work, if Shivani's reasoning isn't to be taken seriously then now is probably a good time to make that abundantly clear. Apologies for the harshness in tone here, but for an august institute like GPI to make normative suggestions in its research and then expect no one to act on them is irresponsible.

Sorry, I made an error here in just reading Vaden's quotation of Shivani's reasoning rather than looking at it in full context.

In the construction of the argument in the paper Shivani is explicitly trying to compare the long-term effects of action A to the short-term effects of action B (which was selected to have particularly good short-term effects). The paper argues that there are several cases where the former is larger than the latter. It doesn't follow that A is overall better than B, because the long-term effects of B are unexamined.

The comparison of AMF to AI safety that was quoted felt like a toy example to me, because it obviously wasn't trying to be a full comparison between the two, but was rather being used to illustrate a particular point. (I think maybe the word "toy" is not quite right.)

In any case I consider it a minor fault of the paper that one could read just the section quoted and reasonably come away with the impression that comparing the short-term number of lives saved by AMF with the long-term number of lives expected to be saved by investing in AI safety was the right way to compare between those two opportunities. (Indeed one could come away with the impression that the AMF price to save a life was the long-run price, but in the structure of the argument being used they need it to be just the short-term price.)

Note that I do think AI safety is very important, and I endorse the actions of the various organisations you mention. But I don't think that comparing some long-term expectation on one side with a short-term expectation on the other is the right argument for justifying this (particularly versions which make the ratio-of-goodness scale directly with estimates of the size of the future), and that was the part I was objecting to. (I think this argument is sometimes seen in earnest "in the wild", and arguably on account of that the paper should take extra steps to make it clear that it is not the argument being made.)

comment by Flodorner · 2020-12-18T19:44:18.303Z · EA(p) · GW(p)

"The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. "

This claim seems confused, as every nonempty set allows for the definition of a probability measure on it, and measures on function spaces exist (https://en.wikipedia.org/wiki/Dirac_measure, https://encyclopediaofmath.org/wiki/Wiener_measure). To obtain non-existence, further properties of the measure, such as translation-invariance, need to be required (https://aalexan3.math.ncsu.edu/articles/infdim_meas.pdf), and it is not obvious to me that we would necessarily require such properties.

Replies from: vadmas
comment by vadmas · 2020-12-18T23:03:20.792Z · EA(p) · GW(p)

See discussion below w/ Flodorner on this point :) 

You are Flodorner! 

comment by djbinder · 2020-12-18T00:12:54.461Z · EA(p) · GW(p)

I don't think so. The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. Let me turn the question around on you: Suppose we knew that the time-horizon of the universe was finite, can you write out the sample space, $\sigma$-algebra, and measure which allows us to compute over possible futures?  

 

It is certainly not obvious that the universe is infinite in the sense that you suggest. Certainly nothing is "provably infinite" with our current knowledge. Furthermore, although we may not be certain about the properties of our own universe, we can easily imagine worlds rich enough to contain moral agents yet which remain completely finite. For instance, you could imagine a cellular automaton with a finite grid size which only lasted for a finite duration.

However, perhaps the more important consideration is the in-principle set of possible futures that we must consider when doing EV calculations, rather than the universe we actually inhabit, since even if our universe is finite we would never be able to convince ourselves of this with certainty. Is it this set of possible futures that you think suffers from "immeasurability"?

Replies from: vadmas
comment by vadmas · 2020-12-18T03:25:25.741Z · EA(p) · GW(p)

if we helped ourselves to some cast-iron guarantees about the size and future lifespan of the universe (and made some assumptions about quantization) then we'd know that the set of possible futures was smaller than a particular finite number (since there would only be a finite number of time steps and a finite number of ways of arranging all particles at each time step). Then even if I can't write it down, in principle someone could write it down, and the mathematical worries about undefined expectations go away.

 

It is certainly not obvious that the universe is infinite in the sense that you suggest. Certainly nothing is "provably infinite" with our current knowledge. Furthermore, although we may not be certain about the properties of our own universe, we can easily imagine worlds rich enough to contain moral agents yet which remain completely finite. For instance, you could imagine a cellular automaton with a finite grid size which only lasted for a finite duration.

Aarrrgggggg was trying to resist weighing in again ... but I think there's some misunderstanding of my argument here. I wrote:

The set of all possible futures is infinite, regardless of whether we consider the life of the universe to be infinite. Why is this? Add to any finite set of possible futures a future where someone spontaneously shouts “1!”, and a future where someone spontaneously shouts “2!”, and a future where someone spontaneously shouts “3!” (italics added)

A few comments:

  • We're talking about possible universes, not actual ones, so cast-iron guarantees about the size and future lifespan of the universe are irrelevant (and impossible anyway).
  • I intentionally framed it as someone shouting a natural number in order to circumvent any counterargument based on physical limits of the universe. If someone can think it, they can shout it.
  • The set of possible futures is provably infinite because the "shouting a natural number" argument established a one-to-one correspondence between the set of possible (triple emphasis on the word *possible*) futures and the set of natural numbers, which are provably infinite (see proof here).
  • I'm not using fancy or exotic mathematics here, as Owen can verify. Putting sets in one-to-one correspondence with the natural numbers is the standard way one proves a set is countably infinite. (See https://en.wikipedia.org/wiki/Countable_set).
  • Physical limitations regarding the largest number that can be physically instantiated are irrelevant to answering the question "is this set finite or infinite"? Mathematicians do not say the set of natural numbers are finite because there are a finite number of particles in the universe. We're approaching numerology territory here...

Okay this will hopefully be my last comment, because I'm really not trying to be a troll in the forum or anything. But please represent my argument accurately!

Replies from: Alex HT, Isaac_Dunn
comment by Alex HT · 2020-12-18T12:08:34.413Z · EA(p) · GW(p)

You really don't seem like a troll! I think the discussion in the comments on this post is a very valuable conversation and I've been following it closely. I think it would be helpful for quite a few people for you to keep responding to comments

Of course, it's probably a lot of effort to keep replying carefully to things, so understandable if you don't have time :)

comment by Isaac_Dunn · 2020-12-18T12:59:15.203Z · EA(p) · GW(p)

I second what Alex has said about this discussion being very valuable pushback against ideas that have got some traction - at the moment I think that strong longtermism seems right, but it's important to know if I'm mistaken! So thank you for writing the post & taking some time to engage in the comments.

On this specific question, I have either misunderstood your argument or think it might be mistaken. I think your argument is "even if we assume that the life of the universe is finite, there are still infinitely many possible futures - for example, the infinite different possible universes where someone shouts a different natural number".

But I think this is mistaken, because the universe will end before you finish shouting most natural numbers. In fact, there would only be finitely many natural numbers you could finish shouting before the universe ends, so this doesn't show there are infinitely many possible universes. (Of course, there might be other arguments for infinite possible futures.)

More generally, I think I agree with Owen's point that if we make the (strong) assumption the universe is finite in duration and finite in possible states, and can quantise time, then it follows that there are only finite possible universes, so we can in principle compute expected value.

So I'd be especially interested if you have any thoughts on whether expected value is in practice an inappropriate tool to use (e.g. with subjective probabilities) even assuming in principle it is computable. For example, I'd love to hear when (if at all) you think we should use expected value reasoning, and how we should make decisions when we shouldn't.

Replies from: vadmas
comment by vadmas · 2020-12-18T20:21:22.855Z · EA(p) · GW(p)

Hey Issac,

On this specific question, I have either misunderstood your argument or think it might be mistaken. I think your argument is "even if we assume that the life of the universe is finite, there are still infinitely many possible futures - for example, the infinite different possible universes where someone shouts a different natural number".

But I think this is mistaken, because the universe will end before you finish shouting most natural numbers. In fact, there would only be finitely many natural numbers you could finish shouting before the universe ends, so this doesn't show there are infinitely many possible universes.

Yup, you've misunderstood the argument. When we talk about the set of all future possibilities, we don't line up all the possible futures and iterate through them sequentially. For example, if we say it's possible tomorrow might either rain, snow, or hail, we *aren't* saying that it will first rain, then snow, then hail. Only one of them will actually happen.

Rather we are discussing the set of possibilities {rain, snow, hail}, which has no intrinsic order, and in this case has a cardinality of 3.

Similarly with the set of all possible futures. If we let f_i represent a possible future where someone shouts the number i, then the set of all possible futures is {f_1, f_2, f_3, ...}, which has cardinality ℵ₀ and again no intrinsic ordering. We aren't saying here that a single person will shout all the numbers from 1 to infinity, because as with the weather example, we're talking about what might possibly happen, not what actually happens.

More generally, I think I agree with Owen's point that if we make the (strong) assumption the universe is finite in duration and finite in possible states, and can quantise time, then it follows that there are only finite possible universes, so we can in principle compute expected value.

No, this is wrong. We don't consider physical constraints when constructing the set of future possibilities - physical constraints come into the picture later. So in the weather example, we could include in our set of future possibilities something absurd, which violates known laws of physics. For example we are free to construct a set like {rain, snow, hail, it rains cats and dogs}.

Then we factor in physical constraints by assigning probability 0 to the absurd scenario. For example our probabilities might be {0.5, 0.3, 0.2, 0}.

But no laws of physics are being violated with the scenario "someone shouts the natural number i". This is why this establishes a one-to-one correspondence between the set of future possibilities and the natural numbers, and why we can say the set of future possibilities is (at least) countably infinite. (You could establish that the set of future possibilities is uncountably infinite as well by having someone shout a single digit in Cantor's diagonal argument, but that's beyond what is necessary to show that EVs are undefined.)
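The diagonal construction mentioned in passing can be sketched concretely. Given any claimed enumeration of infinite 0/1 sequences (shouted digits, black/white balls), one can always build a sequence the enumeration misses, by flipping the i-th digit of the i-th sequence. A toy enumeration is used below purely for illustration:

```python
def diagonal(enumeration, n):
    """Given an enumeration (a function i -> infinite 0/1 sequence, itself
    represented as a function j -> digit), return the first n digits of the
    diagonal sequence, which differs from sequence i at position i and so
    appears nowhere in the enumeration."""
    return [1 - enumeration(i)(i) for i in range(n)]

# Toy enumeration: sequence i is all zeros except a 1 at position i.
enum = lambda i: (lambda j: 1 if j == i else 0)
prefix = diagonal(enum, 5)  # differs from sequence i at digit i, for every i
```

Since this works for *any* enumeration, no list indexed by the natural numbers can exhaust the set of such sequences, which is the sense in which that set is uncountable.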

For example, I'd love to hear when (if at all) you think we should use expected value reasoning, and how we should make decisions when we shouldn't. 

Yes I think that the EV style-reasoning popular on this forum should be dropped entirely because it leads to absurd conclusions, and basically forces people to think along a single dimension. 

So for example I'll produce some ridiculous future scenario (Vaden's x-risk: in the year 254 012 412 there will be a war over blueberries in the Qualon region of the delta quadrant, which causes an unfathomable amount of infinite suffering) and then say: great, you're free to set your credence about this scenario as high or as low as you like.

But now I've trapped you! Because I've forced you to think about the scenario only in terms of a single 1 dimensional credence-slider. Your only move is to set your credence-slider really really small, and I'll set my suffering-slider really really high, and then using EVs, get you to dedicate your income and the rest of your life to Blueberry-Safety research.
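The trap can be put in numbers. A minimal sketch, with entirely made-up values: however small you set the credence slider, the stakes slider can always be set high enough that the speculative expected value swamps a well-understood alternative:

```python
# All numbers here are hypothetical, chosen only to illustrate the dynamic.
p_blueberry_war = 1e-15        # your credence slider: "as low as you like"
u_blueberry_war = 1e30         # my suffering slider: "as high as I like"
ev_speculative = p_blueberry_war * u_blueberry_war   # dominates regardless

ev_bednets = 0.999 * 1e3       # near-certain probability, modest benefit
dominates = ev_speculative > ev_bednets
```

Any fixed nonzero credence loses this game, because the stakes are an unconstrained free parameter; this is the one-dimensionality being objected to.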

Note also that EV style reasoning is only really popular in this community. No other community of researchers reasons in this way, and they're able to make decisions just fine. How would any other community reason about my scenario? They would reject it as absurd and be done with it. Not think along a single axis (low credence/high credence). 

That's the informal answer, anyway. Realizing that other communities don't reason in this way and are able to make decisions just fine should at least be a clue that dropping EV style arguments isn't going to result in decision-paralysis.

The more formal answer is to consider using an entirely different epistemology, which doesn't deal with EVs at all. This is what my vague comments about the 'framework' were alluding to in the piece. Specifically, I have in mind Karl Popper's critical rationalism, which is at the foundation of modern science. CR is about much more than that, however. I discuss what a CR approach to decision making would look like in this piece if you want some longer thoughts on it.

But anyway, I digress... I don't expect people to jettison their entire worldview just because some random dude on the internet tells them to. But for anyone reading who might be curious to know where I'm getting a lot of these ideas from (few are original to me), I'd recommend Conjectures and Refutations. If you want to know what an alternative to EV-style reasoning looks like, the answers are in that book.

(Note: This is a book many people haven't read because they think they already know the gist. "Oh, C&R! That's the book about falsification, right?" It's about much, much more than that :) )

Replies from: Mauricio
comment by Mauricio · 2020-12-20T20:51:15.299Z · EA(p) · GW(p)

Hi Vaden, thanks again for posting this! Great to see this discussion. I wanted to get further along C&R before replying, but:

no laws of physics are being violated with the scenario "someone shouts the natural number i".  This is why this establishes a one-to-one correspondence between the set of future possibilities and the natural numbers

If we're assuming that time is finite and quantized, then wouldn't these assumptions (or, alternatively, finite time + the speed of light) imply a finite upper bound on how many syllables someone can shout before the end of the universe (and therefore a finite upper bound on the size of the set of shoutable numbers)? I thought Isaac was making this point; not that it's physically impossible to shout all natural numbers sequentially, but that it's physically impossible to shout any of the natural numbers (except for a finite subset).

(Although this may not be crucial, since I think you can still validly make the point that Bayesians don't have the option of, say, totally ruling out faster-than-light number-pronunciation as absurd.)

Note also that EV style reasoning is only really popular in this community. No other community of researchers reasons in this way, and they're able to make decisions just fine.

Are they? I had the impression that most communities of researchers are more interested in finding interesting truths than in making decisions, while most communities of decision makers severely neglect large-scale problems (e.g. pre-2020 pandemic preparedness, farmed animal welfare). (Maybe there's better ways to account for scope than EV, but I'd hesitate to look for them in conventional decision making.)

comment by Owen_Cotton-Barratt · 2020-12-16T23:49:31.554Z · EA(p) · GW(p)

People are united across time working for the good! Each generation does what it can to make the world a little bit better for its descendants, and in this way we are all united. 

I meant if everyone were actively engaged in this project. (I think there are plenty of people in the world who are just getting on with their thing, and some of them make the world a bit worse rather than a bit better.)

Overall though I think that longtermism is going to end up with practical advice which looks quite a lot like "it is the duty of each generation to do what it can to make the world a little bit better for its descendants"; there will be some interesting content in which dimensions of betterness we pay most attention to (e.g. I think that the longtermist lens on things makes some dimension like "how much does the world have its act together on dealing with possible world-ending catastrophes?" seem really important).

Replies from: vadmas
comment by vadmas · 2020-12-17T05:04:15.325Z · EA(p) · GW(p)

Overall though I think that longtermism is going to end up with practical advice which looks quite a lot like "it is the duty of each generation to do what it can to make the world a little bit better for its descendants."

Goodness, I really hope so. As it stands, Greaves and MacAskill are telling people that they can “simply ignore all the effects [of their actions] contained in the first 100 (or even 1000) years”, which seems rather far from the practical advice both you and I hope they arrive at.

Anyway, I appreciate all your thoughtful feedback - it seems like we agree much more than we disagree, so I’m going to leave it here :)

Replies from: Owen_Cotton-Barratt
comment by Owen_Cotton-Barratt · 2020-12-17T10:24:31.013Z · EA(p) · GW(p)

I think the crucial point of outstanding disagreement is that I agree with Greaves and MacAskill that by far the most important effects of our actions are likely to be temporally distant. 

I don't think they're saying (and I certainly don't think) that we can ignore the effects of our actions over the next century; rather I think those effects matter much more for their instrumental value than intrinsic value. Of course, there are also important instrumental reasons to attend to the intrinsic value of various effects, so I don't think intrinsic value should be ignored either.

Replies from: AGB, Adam Binks
comment by AGB · 2020-12-18T13:21:05.435Z · EA(p) · GW(p)

In their article vadmas writes:

Strong longtermism goes beyond its weaker counterpart in a significant way. While longtermism says we should be thinking primarily about the far-future consequences of our actions (which is generally taken to be on the scale of millions or billions of years), strong longtermism says this is the only thing we should think about.

Some of your comments, including this one, seem to me to be defending simple or weak longtermism ('by far the most important effects are likely to be temporally distant'), rather than strong longtermism as defined above. I can imagine a few reasons for this:

  1. You don't actually agree with strong longtermism
  2. You do agree with strong longtermism, but I (and presumably vadmas) am misunderstanding what you/MacAskill/Greaves mean by strong longtermism; the above quote is, presumably unintentionally, misrepresenting their views. In this case I think it would be good to hear what you think the 'strong' in 'strong longtermism' actually means. 
  3. You think the above quote is compatible with what you've written above.

At the moment, I don't have a great sense of which one is the case, and think clarity on this point would be useful. I could also have missed another way to reconcile these. 

Replies from: Owen_Cotton-Barratt
comment by Owen_Cotton-Barratt · 2020-12-18T16:26:50.842Z · EA(p) · GW(p)

I think it's a combination of a couple of things.

  1. I'm not fully bought into strong longtermism (nor, I suspect, are Greaves or MacAskill), but on my inside view it seems probably-correct.

When I said "likely", that was covering the fact that I'm not fully bought in.

  2. I'm taking "strong longtermism" to be a concept in the vicinity of what they said (and meaningfully distinct from "weak longtermism", for which I would not have said "by far") that I think is a natural category they are imperfectly gesturing at. I don't agree with a literal reading of their quote, because it's missing two qualifiers: (i) it's overwhelmingly what matters, rather than the only thing; and (ii) of course we need to think about shorter-term consequences in order to make the best decisions for the long term.

Both (i) and (ii) are arguably technicalities (and I guess that the authors would cede the points to me), but (ii) in particular feels very important.

comment by Adam Binks · 2020-12-19T00:51:08.014Z · EA(p) · GW(p)

I think this is a good point, I'm really enjoying all your comments in this thread:)

It strikes me that one way that the next century effects of our actions might be instrumentally useful is that they might give some (weak) evidence as to what the longer term effects might be.

All else equal, if some action causes a stable, steady positive effect each year for the next century, then I think that action is more likely to have a positive long term effect than some other action which has a negative effect in the next century. However this might be easily outweighed by specific reasons to think that the action's longer run effects will differ.

comment by Owen_Cotton-Barratt · 2020-12-16T23:43:31.503Z · EA(p) · GW(p)

Also my hope was that this would highlight a methodological error (equating made up numbers to real data) that could be rectified, whether or not you buy my other arguments about longtermism.  I'd be a lot more sympathetic with longtermism in general if the proponents were careful to adhere to the methodological rule of only ever comparing subjective probabilities with other subjective probabilities  (and not subjective probabilities with objective ones, derived from data). 

I'm sympathetic to something in the vicinity of your complaint here, striving to compare like with like, and being cognizant of the weaknesses of the comparison when that's impossible (e.g. if someone tried the reasoning from the Shivani example in earnest rather than as a toy example in a philosophy paper I think it would rightly get a lot of criticism).

(I don't think that "subjective" and "objective" are quite the right categories here, btw; e.g. even the GiveWell estimates of cost-to-save-a-life include some subjective components.)

In terms of your general sympathy with longtermism -- it makes sense to me that the behaviour of its proponents should affect your sympathy with those proponents.  And if you're thinking of the position as a political stance (who you're allying yourself etc.) then it makes sense that it could affect your sympathy with the position. But if you're engaged in the business of truth-seeking, why does it matter what the proponents do? You should ignore the bad arguments and pay attention to the best ones you can see -- whether or not anyone actually made them. (Of course I'm expressing a super idealistic position here, and there are practical reasons not to be all the way there, but I still think it's worth thinking about.) 

Replies from: AGB
comment by AGB · 2020-12-18T14:00:25.015Z · EA(p) · GW(p)

But if you're engaged in the business of truth-seeking, why does it matter what the proponents do? You should ignore the bad arguments and pay attention to the best ones you can see

If someone who I have trusted with working out the answer to a complicated question makes an error that I can see and verify, I should also downgrade my assessment of all their work which might be much harder for me to see and verify. 

Related: Gell-Mann Amnesia
(Edit: Also related, Epistemic Learned Helplessness)

Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them.

In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.

The correct default response to this effect, in my view, mostly does not look like 'ignoring the bad arguments and paying attention to the best ones'. That's almost exactly the approach the above quote describes and (imo correctly) mocks; ignoring the show business article because your expertise lets you see the arguments are bad and taking the Palestine article seriously because the arguments appear to be good.  

I think the correct default response is something closer to 'focus on your areas of expertise, and see how the proponents conduct themselves within that area. Then use that as your starting point for guessing at their accuracy in areas which you know less well'.

Of course I'm expressing a super idealistic position here, and there are practical reasons not to be all the way there 

I appreciate stuff like the above is part of why you wrote this. I still wanted to register that I think this framing is backwards; I don't think you should evaluate the strength of arguments across all domains as they come and then adjust for trustworthiness of the person making them; in general I think it's much better (measured by believing more true things) to assess the trustworthiness of the person in some domain you understand well and only then adjust to a limited extent based on the apparent strength of the arguments made in other domains. 

It's plausible that this boils down to a question of 'how good are humans at assessing the strength of arguments in areas they know little about'. In the ideal, we are perfect. In reality, I think I am pretty terrible at it, in pretty much exactly the way the Gell-Mann quote describes, and so want to put minimal weight on those feelings of strength; they just don't have enough predictive power to justify moving my priors all that much. YMMV. 

Replies from: Owen_Cotton-Barratt
comment by Owen_Cotton-Barratt · 2020-12-18T16:08:16.207Z · EA(p) · GW(p)

I appreciate the points here. I think I might be slightly less pessimistic than you about the ability to evaluate arguments in foreign domains. But the thrust of my point was this: for pushing out the boundaries of collective knowledge, I think it's roughly correct to adopt the idealistic stance I was recommending; and I think Vaden is engaging in earnest and noticing enough important things that there's a nontrivial chance they could contribute to pushing such boundaries (and that this is valuable enough to be encouraged, rather than just encouraging activity likely to lead to the most-correct beliefs within the convex hull of things people already understand).

Replies from: AGB
comment by AGB · 2020-12-19T06:19:19.082Z · EA(p) · GW(p)

Ah, gotcha. I agree that the process of scientific enquiry/discovery works best when people do as you said.

I think it’s worth distinguishing between that case where taking the less accurate path in the short-term has longer-term benefits, and more typical decisions like ‘what should I work on’, or even just truth-seeking that doesn’t have a decision directly attached but you want to get the right answer. There are definitely people who still believe what you wrote literally in those cases and ironically I think it’s a good example of an argument that sounds compelling but is largely incorrect, for reasons above.

Replies from: MichaelA
comment by MichaelA · 2021-03-09T06:32:47.680Z · EA(p) · GW(p)

Just wanted to quickly hop in to say that I think this little sub-thread contains interesting points on both sides, and that people who stumble upon it later may also be interested in Forum posts tagged “epistemic humility” [? · GW].

comment by zdgroff · 2020-12-23T00:04:46.897Z · EA(p) · GW(p)

Thanks for writing this. I think it's very valuable to be having this discussion. Longtermism is a novel, strange, and highly demanding idea, so it merits a great deal of scrutiny. That said, I agree with the thesis and don't currently find your objections against longtermism persuasive (although in one case I think they suggest a specific set of approaches to longtermism).

I'll start with the expected value argument, specifically the note that probabilities here are uncertain and therefore random variables, whereas in traditional EU they're constant. To me a charitable version of Greaves and MacAskill's argument is that, taking the expectation over the probabilities times the outcomes, you have a large future in expectation. (What you need for the randomness of probabilities to sink longtermism is for the probabilities to correlate inversely and strongly with the size of the future.) I don't think they'd claim the probabilities are certain.

Maybe the claim you want to make, then, is that we should treat random probabilities differently from certain probabilities, i.e. you should not "take expectations" over probabilities in the way I've described. The problem with this is that (a) alternatives to taking expectations over probabilities have been explored in the literature, and they have a lot of undesirable features; and (b) alternatives to taking expectations over probabilities do not necessarily reject longtermism. I'll discuss (b), since it involves providing an example for (a).

(b) In economics at least, Gilboa and Schmeidler (1989) propose what's probably the best-known alternative to EU when the probabilities are uncertain, which involves maximizing expected utility for the prior according to which utility is the lowest, sort of a meta-level risk aversion. They prove that this is the optimal decision rule according to some remarkably weak assumptions. If you take this approach, it's far from clear you'll reject longtermism: more likely, you end up with a sort of longtermism focused on averting long-term suffering, i.e. focused on maximizing expected value according to the most pessimistic probabilities. There's a bunch of other approaches, but they tend to have similar flavors. So alternatives on EU may agree on longtermism and just disagree on the flavor of it.
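To make the Gilboa–Schmeidler idea concrete, here's a minimal sketch (entirely made-up numbers and a hypothetical two-outcome setup, not anything from their paper): each action is scored by its worst-case expected utility across a set of candidate priors, and we pick the action whose worst case is best.

```python
def expected_utility(prior, utilities):
    return sum(p * u for p, u in zip(prior, utilities))

def maxmin_choice(actions, priors):
    # Score each action by its worst-case expected utility over the
    # candidate priors, then pick the action whose worst case is best.
    scores = {
        name: min(expected_utility(prior, utils) for prior in priors)
        for name, utils in actions.items()
    }
    return max(scores, key=scores.get), scores

# Hypothetical actions over two outcomes: (future is short, future is long).
actions = {
    "short-termist": (10, 10),   # helps a little regardless of the future
    "longtermist":   (0, 100),   # pays off only if the future is long
}
# Deep uncertainty: several candidate priors over the two outcomes.
priors = [(0.8, 0.2), (0.5, 0.5), (0.2, 0.8)]

best, scores = maxmin_choice(actions, priors)
```

Even under the most pessimistic prior here the longtermist action retains some value, so maxmin still selects it; with different (also defensible) numbers it wouldn't, which is the sense in which the approach changes the flavour rather than automatically rejecting longtermism.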

(a) Moving away from EU leads to a lot of problems. As I'm sure you know given your technical background, EU derives from a really nice set of axioms (The Savage Axioms). Things go awry when you leave it. Al-Najjar and Weinstein (2009) offer a persuasive discussion of this (H/T Phil Trammell). For example, non-EU models imply information aversion. Now, a certain sort of information aversion might make sense in the context of longtermism. In line with your Popper quote, it might make sense to avoid information about the feasibility of highly-specific future scenarios. But that's not really the sort of information non-EU models imply aversion to. Instead, they imply aversion to info that would shift you toward the option that currently has a lot of ambiguity about it because you dislike it based on its current ambiguity.

So I don't think we can leave behind EU for another approach to evaluating outcomes. The problems, to me, seem to lie elsewhere. I think there are problems with the way we're arriving at probabilities (inventing subjective ones that invite biases and failing to adequately stick to base rates, for example). I also think there might be a point to be made about having priors on unlikely conclusions so that, for example, the conclusion of strong longtermism is so strange that we should be disinclined to buy into it based on the uncertainty about probabilities feeding into the claim. But the approach itself seems right to me. I honestly spent some time looking for alternative approaches because of these last two concerns I mentioned and came away thinking that EU is the best we've got.

I'd note, finally, that I take the utopianism point well and would like to see more discussion of this. Utopian movements have a sordid history, and Popper is spot-on. Longtermism doesn't have to be utopian, though. Avoiding really bad outcomes, or striving for a middling outcome, is not utopian. This seems to me to dovetail with my proposal in the last paragraph to improve our probability estimates. Sticking carefully to base rates and things we have some idea about seems to be a good way to avoid utopianism and its pitfalls. So I'd suggest a form of longtermism that is humble about what we know and strives to get the least-bad empirical data possible, but I still think longtermism comes out on top.
 

Replies from: MichaelStJules, MichaelStJules, Mauricio
comment by MichaelStJules · 2020-12-31T09:22:59.518Z · EA(p) · GW(p)

This might also be of interest: 

The Sequential Dominance Argument for the Independence Axiom of Expected Utility Theory by Johan E. Gustafsson, which argues for the Independence Axiom with stochastic dominance, a minimal rationality requirement, and also against the Allais paradox and Ellsberg paradox (ambiguity aversion). 

However, I think a weakness in the argument is that it assumes the probabilities exist and are constant throughout, but they aren't defined by assumption in the Ellsberg paradox. In particular, looking at the figure for case 1, the argument assumes p is the same when you start at the first random node as it is looking forward when you're at one of the two choice nodes, 1 or 2. In some sense, this is true, since the colours of the balls don't change between, but you don't have a subjective estimate of p by assumption and "unknown probability" is a contradiction in terms for a Bayesian. (These are notes I took when I read the paper a while ago, so I hope they make sense! :P.)

Another weakness is that I think these kinds of sequential lotteries are usually only relevant in choices where an agent is working against you or trying to get something from you (e.g. money for their charity!), which also happen to be the cases where ambiguity aversion is most useful. You can't set up such a sequential lottery for something like the degree of insect consciousness, P vs NP,  or whether the sun will rise tomorrow.

See my discussion with Owen Cotton-Barratt [EA(p) · GW(p)].

comment by MichaelStJules · 2020-12-23T17:55:25.789Z · EA(p) · GW(p)

On the expected value argument, are you referring to this?

The answer I think lies in an oft-overlooked fact about expected values: that while probabilities are random variables, expectations are not. Therefore there are no uncertainties associated with predictions made in expectation. Adding the magic words “in expectation” allows longtermists to make predictions about the future confidently and with absolute certainty.

Based on the link to the wiki page for random variables, I think Vaden didn't mean that the probabilities themselves follow some distributions, but was rather just identifying probability distributions with the random variables they represent, i.e., given any probability distribution, there's a random variable distributed according to it.

However, I do think his point does lead us to want to entertain multiple probability distributions.

If you did have probabilities over your outcome probabilities or aggregate utilities, I'd think you could just take iterated expectations. If \(U\) is the aggregate utility and \(p\) is the (uncertain) vector of outcome probabilities, then you'd just take the expected value of \(U\) with respect to \(p\) first, and calculate:

\(\mathbb{E}[U] = \mathbb{E}_p\big[\mathbb{E}[U \mid p]\big]\)

If the dependence is more complicated (you talk about correlations), you might use (something similar to) the law of total expectation.
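As a toy numerical version of the iterated-expectation move (hypothetical numbers, just to show the mechanics):

```python
# We're unsure about the probability p of a good long-run outcome,
# but we have a distribution over p itself.
dist_over_p = {0.2: 0.5, 0.8: 0.5}   # p is 0.2 or 0.8, equally likely
u_good, u_bad = 100.0, 0.0

# Iterated expectation: E[U] = E_p[ E[U | p] ]
overall = sum(
    w * (p * u_good + (1 - p) * u_bad)
    for p, w in dist_over_p.items()
)
# Because E[U | p] is linear in p, this equals plugging in the mean
# probability E[p] = 0.5 directly.
```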

And you'd use Gilboa and Schmeidler's maxmin expected value approach if you don't even have a joint probability distribution over all of the probabilities.

A more recent alternative to maxmin is the maximality rule, which is to rule out any choices whose expected utilities are weakly dominated by the expected utilities of another specific choice.

https://academic.oup.com/pq/article-abstract/71/1/141/5828678

https://globalprioritiesinstitute.org/andreas-mogensen-maximal-cluelessness/

https://forum.effectivealtruism.org/posts/WSytm4XG9DrxCYEwg/andreas-mogensen-s-maximal-cluelessness [EA · GW]

Mogensen comes out against this rule in the end for being too permissive, though. However, I'm not convinced that's true, since that depends on your particular probabilities. I think you can get further with hedging [EA · GW].
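To show how the maximality rule differs from maxmin, here's a rough sketch (hypothetical options and candidate distributions): an option is ruled out only if some other option has at least as high expected utility under every distribution you entertain, and strictly higher under at least one.

```python
def eu(utilities, dist):
    return sum(p * u for p, u in zip(dist, utilities))

def maximal_options(options, dists):
    def dominates(b, a):
        # b dominates a: b's EU is >= a's under every candidate
        # distribution, and strictly greater under at least one.
        evs_a = [eu(options[a], d) for d in dists]
        evs_b = [eu(options[b], d) for d in dists]
        return (all(y >= x for x, y in zip(evs_a, evs_b))
                and any(y > x for x, y in zip(evs_a, evs_b)))
    return [a for a in options
            if not any(dominates(b, a) for b in options if b != a)]

# Hypothetical options over two outcomes, with two candidate distributions.
options = {
    "A": (30, 30),    # safe
    "B": (0, 100),    # pays off only under outcome 2
    "C": (0, 90),     # like B but strictly worse under every distribution
}
dists = [(0.8, 0.2), (0.2, 0.8)]

permissible = maximal_options(options, dists)   # C is ruled out; A and B remain
```

Note how permissive this is: both A and B survive, which is exactly the worry Mogensen raises about the rule.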

Replies from: zdgroff
comment by zdgroff · 2020-12-24T20:23:47.780Z · EA(p) · GW(p)

Yeah, that's the part I'm referring to. I take his comment that expectations are not random variables to be criticizing taking expectations over expected utility with respect to uncertain probabilities.

I think the critical review of ambiguity aversion I linked to is sufficiently general that any alternatives to taking expectations with respect to uncertain probabilities will have seriously undesirable features.

comment by Mauricio · 2020-12-26T19:22:42.569Z · EA(p) · GW(p)

Hi Zach, thanks for this!

I have two doubts about the Al-Najjar and Weinstein paper--I'd be curious to hear your (or others') thoughts on these.

First, I'm having trouble seeing where the information aversion comes in. A simpler example than the one used in the paper seems to be enough to communicate what I'm confused about: let's say an urn has 100 balls that are each red or yellow, and you don't know their distribution. Someone averse to ambiguity would (I think) be willing to pay up to $1 for a bet that pays off $1 if a randomly selected ball is red or yellow. But if they're offered that bet as two separate decisions (first betting on a ball being red, and then betting on the same ball being yellow), then they'd be willing to pay less than $0.50 for each bet. So it looks like preference inconsistency comes from the choice being spread out over time, rather than from information (which would mean there's no incentive to avoid information). What am I missing here?

(Maybe the following is how the authors were thinking about this? If you (as a hypothetical ambiguity-averse person) know that you'll get a chance to take both bets separately, then you'll take them both as long as you're not immediately informed of the outcome of the first bet, because you evaluate acts, not by their own uncertainty, but by the uncertainty of your sequence of acts as a whole (considering all acts whose outcomes you remain unaware of). This seems like an odd interpretation, so I don't think this is it.)

[edit: I now think the previous paragraph's interpretation was correct, because otherwise agents would have no way to make ambiguity averse choices that are spread out over time and consistent, in situations like the ones presented in the paper. The 'oddness' of the interpretation seems to reflect the oddness of ambiguity aversion: rather than only paying attention to what might happen differently if you choose one action or another, ambiguity aversion involves paying attention to possible outcomes that will not be affected by your action, since they might influence the uncertainty of your action.]

Second, assuming that ambiguity aversion does lead to information aversion, what do you think of the response that "this phenomenon simply reflects a [rational] trade-off between the intrinsic value of information, which is positive even in the presence of ambiguity, and the value of commitment"?

Replies from: zdgroff
comment by zdgroff · 2020-12-26T23:41:50.356Z · EA(p) · GW(p)

Thanks! Helpful follow-ups.

On the first point, I think your intuition does capture the information aversion here, but I still think information aversion is an accurate description. Offered a bet that pays $X if I pick a color and then see if a random ball matches that color, you'll pay more than for a bet that pays $X if a random ball is red. The only difference between these situations is that you have more information in the latter: you know the color to match is red. That makes you less willing to pay. And there's no obvious reason why this information aversion would be something like a useful heuristic.

I don't quite get the second point. Commitment doesn't seem very relevant here since it's really just a difference in what you would pay for each situation. If one comes first, I don't see any reason why it would make sense to commit, so I don't think that strengthens the case for ambiguity aversion in any way. But I think I might be confused here.

Replies from: Mauricio
comment by Mauricio · 2020-12-27T00:10:14.659Z · EA(p) · GW(p)

Thanks!

Offered a bet that pays $X if I pick a color and then see if a random ball matches that color, you'll pay more

I'm not sure I follow. If I were to take this bet, it seems that the prior according to which my utility would be lowest is: you'll pick a color to match that gives me a 0% chance of winning. So if I'm ambiguity averse in this way, wouldn't I think this bet is worthless?

(The second point you bring up would make sense to me if this first point did, although then I'd also be confused about the papers' emphasis on commitment.)

Replies from: zdgroff
comment by zdgroff · 2020-12-30T18:46:14.529Z · EA(p) · GW(p)

Sorry—you're right that this doesn't work. To clarify, I was thinking that the method of picking the color should be fixed ex-ante (e.g. "I pick red as the color with 50% probability"), but that doesn't do the trick because you need to pool the colors for ambiguity to arise.

The issue is that the problem the paper identifies does not come up in your example. If I'm offered the two bets simultaneously, then an ambiguity averse decision maker, like an EU decision maker, will take both bets. If I'm offered the bets sequentially without knowing I'll be offered both when I'm offered the first one, then neither an ambiguity-averse nor a risk-averse EU decision-maker will take them.  The reason is that the first one offers the EU decision-maker a 50% chance of winning, so given risk-aversion its value is less than 50% of $1. So your example doesn't distinguish a risk-averse EU decision-maker from an ambiguity-averse one.

So I think unfortunately we need to go with the more complicated examples in the paper. They are obviously very theoretical. I think it could be a valuable project for someone to translate these into more practical settings to show how these problems can come up in a real-world sense.

comment by elliottthornley · 2020-12-18T11:20:11.171Z · EA(p) · GW(p)

Hi Vaden,

Cool post! I think you make a lot of good points. Nevertheless, I think longtermism is important and defensible, so I’ll offer some defence here.

First, your point about future expectations being undefined seems to prove too much. There are infinitely many ways of rolling a fair die (someone shouts ‘1!’ while the die is in the air, someone shouts ‘2!’, etc.). But there is clearly some sense in which I ought to assign a probability of 1/6 to the hypothesis that the die lands on 1.

Suppose, for example, that I am offered a choice: either bet on a six-sided die landing on 1 or bet on a twenty-sided die landing on 1. If both probabilities are undefined, then it seems I can permissibly bet on either. But clearly I ought to bet on the six-sided die.
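The expected values behind that preference are straightforward (taking, say, a $1 payoff on a roll of 1):

```python
ev_d6  = (1 / 6)  * 1.0   # six-sided die, roughly $0.167
ev_d20 = (1 / 20) * 1.0   # twenty-sided die, $0.05
# If both probabilities were genuinely undefined, nothing would license
# preferring the first bet; yet the preference seems clearly rational.
```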

Now you may say that we have a measure over the set of outcomes when we’re rolling a die and we don’t have a measure over the set of futures. But it’s unclear to me what measure could apply to die rolls but not to futures.

And, in any case, there are arguments for the claim that we must assign probabilities to hypotheses like ‘The die lands on 1’ and ‘There will exist at least 10^16 people in the future.’ If we don’t assign probabilities, we are vulnerable to getting Dutch-booked and accuracy-dominated.

Suppose, then, that you accept that we must assign probabilities to the relevant hypotheses. Greaves and MacAskill’s point is that all reasonable-sounding probability assignments imply that we ought to pursue longtermist interventions (given that we accept their moral premise, which I discuss later). Consider, for example, the hypothesis that humanity spreads into space and that 10^24 people exist in the future. What probability assignment to this hypothesis sounds reasonable? Opinions will differ to some extent, but it seems extremely overconfident to assign this hypothesis a probability of less than one in one billion. On a standard view about the relationship between probabilities and rational action, that would imply a willingness to stake £1 billion against the hypothesis, losing it all if the hypothesis turns out true and winning an extra £2 if it turns out false (assuming, for illustration’s sake only, that utility is linear with respect to money across this interval).
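The break-even arithmetic behind that stake (using the illustrative numbers above):

```python
# Betting £1bn that the hypothesis is false: lose the stake if it's true,
# win £2 if it's false.
stake, win = 1e9, 2.0

def ev_of_bet(p):   # p = probability the hypothesis is true
    return -stake * p + win * (1 - p)

breakeven = win / (stake + win)   # about 2e-9: favourable iff p is below this
```

So assigning a probability below one in a billion really does commit you, on the standard view, to accepting this bet.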

The case is the same with other empirical hypotheses that Greaves and MacAskill consider. To get the result that longtermist interventions don’t maximise expected value, you have to make all kinds of overconfident-sounding probability assignments, like ‘I am almost certain that humanity will not spread to the stars,’ ‘I am almost certain that smart, well-motivated people with billions of pounds of resources would not reduce extinction risk by even 0.00001%,’ ‘I am almost certain that billions of pounds of resources devoted to further research on longtermism would not unearth a viable longtermist intervention,’ etc. So, as it turns out, accepting longtermism does not commit us to strong claims about what the future will be like. Instead, it is denying longtermism that commits us to such claims.

So, to summarise the above, we have to assign probabilities to empirical hypotheses, on pain of getting Dutch-booked and accuracy-dominated. And all reasonable-seeming probability assignments imply that we should pursue longtermist interventions.

Now, this final sentence is conditional on the truth of Greaves and MacAskill’s moral premises. In particular, it depends on their claim that we ought to have a zero rate of pure time preference. 

The first thing to note is that the word ‘pure’ is important here. As you point out, ‘we should be biased towards the present for the simple reason that tomorrow may not arrive.’ Greaves and MacAskill would agree. Longtermists incorporate this factor in their arguments, and it does not change their conclusions. Ord calls it ‘discounting for the catastrophe rate’ in The Precipice, and you can read more about the role it plays there.

When Greaves and MacAskill claim that we ought to have a zero rate of pure time preference, they are claiming that we ought not care less about consequences purely because they occur later in time. This pattern of caring really does seem indefensible. Suppose, for example, that a villain has set a time-bomb in an elementary school classroom. You initially think it is set to go off in a year’s time, and you are horrified. In a year’s time, 30 children will die. Suppose that the villain then tells you that they’ve set the bomb to go off in ten years’ time. In ten years’ time, 30 children will die. Are you now less horrified? If you had a positive rate of pure time preference, you would be. But that seems absurd.

As Ord points out, positive rates of pure time preference seem even less defensible when we consider longer time scales: ‘At a rate of pure time preference of 1 percent, a single death in 6,000 years’ time would be vastly more important than a billion deaths in 9,000 years. And King Tutankhamun would have been obliged to value a single day of suffering in the life of one of his contemporaries as more important than a lifetime of suffering for all 7.7 billion people alive today.’
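For what it's worth, the arithmetic behind Ord's comparison is easy to check directly; a minimal sketch (the 1% rate and the 6,000/9,000-year horizons are taken from the quote above):

```python
# At a 1% pure rate of time preference, value t years away is discounted
# by a factor of (1.01)**t.
rate = 0.01

# Present "weight" of one death in 6,000 years vs. a billion deaths in 9,000 years:
one_death_sooner = 1 * (1 + rate) ** -6000
billion_deaths_later = 1e9 * (1 + rate) ** -9000

# The single earlier death dominates, as Ord says:
print(one_death_sooner > billion_deaths_later)  # True
```

The extra 3,000 years of discounting costs a factor of about 1.01^3000 ≈ 9 × 10^12, which swamps the factor of a billion in the number of deaths.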

Thanks again for the post! It’s good to see longtermism getting some critical examination.

Replies from: ben_chugg, MichaelStJules
comment by ben_chugg · 2020-12-18T17:32:50.752Z · EA(p) · GW(p)

Hi Elliott, just a few side comments from someone sympathetic to Vaden's critique: 

I largely agree with your take on time preference. One thing I'd like to emphasize is that thought experiments used to justify a zero discount factor are typically conditional on knowing that future people will exist, and what the consequences of our actions will be. This is useful for sorting out our values, but less so when it comes to action, because we never have such guarantees. I think there's often a move made where people say "in theory we should have a zero discount factor, so let's focus on the future!". But the conclusion ignores that in practice we never have such knowledge of the future.

Re: the dice example: 

First, your point about future expectations being undefined seems to prove too much. There are infinitely many ways of rolling a fair die (someone shouts ‘1!’ while the die is in the air, someone shouts ‘2!’, etc.). But there is clearly some sense in which I ought to assign a probability of 1/6 to the hypothesis that the die lands on 1.

True - there are infinitely many things that can happen while the die is in the air, but  that's not the outcome space about which we're concerned. We're concerned about  the result of the roll, which is a finite space with six outcomes. So of course probabilities are defined in that case (and in the 6 vs 20 sided die case). Moreover, they're defined by us, because we've chosen that a particular mathematical technique applies relatively well to the situation at hand. When reasoning about all possible futures however, we're trying to shoehorn in some mathematics that is not appropriate to the problem (math is a tool - sometimes it's useful, sometimes it's not). We can't even write out the outcome space in this scenario, let alone define a probability measure over it. 

So, to summarise the above, we have to assign probabilities to empirical hypotheses, on pain of getting Dutch-booked and accuracy-dominated. And all reasonable-seeming probability assignments imply that we should pursue longtermist interventions.

Once you buy into the idea that you must quantify all your beliefs with numbers, then yes - you have to start assigning probabilities to all eventualities, and they must obey certain equations. But you can drop that framework completely. Numbers are not primary - again, they are just a tool. I know this community is deeply steeped in Bayesian epistemology, so this is going to be an uphill battle, but assigning credences to beliefs is not the way to generate knowledge. (I recently wrote about this briefly here.) Anyway, the Bayesianism debate is a much longer one (one that I think the community needs to have, however), so I won't yell about it any longer, but I do want to emphasize that it is only one way to reason about the world (and leads to many paradoxes and inconsistencies, as you all know).

Appreciate your engagement :)  

Replies from: elliottthornley
comment by elliottthornley · 2020-12-19T09:55:30.686Z · EA(p) · GW(p)

Thanks!

Your point about time preference is an important one, and I think you're right that people sometimes make too quick an inference from a zero rate of pure time preference to a future-focus, without properly heeding just how difficult it is to predict the long-term consequences of our actions. But in my experience, longtermists are very aware of the difficulty. They recognise that the long-term consequences of almost all of our actions are so difficult to predict that their expected long-term value is roughly 0. Nevertheless, they think that the long-term consequences of some very small subset of actions are predictable enough to justify undertaking those actions.

On the dice example, you say that the infinite set of things that could happen while the die is in the air is not the outcome space about which we're concerned. But can't the longtermist make the same response? Imagine they said: 'For the purpose of calculating a lower bound on the expected value of reducing x-risk, the infinite set of futures is not the outcome space about which we're concerned. The outcome space about which we're concerned consists of the following two outcomes: (1) Humanity goes extinct before 2100, (2) Humanity does not go extinct before 2100.'
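On that reading, the lower-bound calculation is just two numbers multiplied together. A minimal sketch (both figures below are purely illustrative, not taken from the paper):

```python
# Two-outcome lower bound on the expected value of reducing extinction risk.
# Both numbers are hypothetical, chosen only to show the structure.
v_survival = 1e16   # assumed number of future lives if humanity survives past 2100
delta_p = 1e-8      # assumed reduction in extinction probability from an intervention

# Ignoring every other consequence, the intervention gains in expectation at least:
ev_lower_bound = delta_p * v_survival
print(ev_lower_bound)  # 100000000.0, i.e. 10^8 expected lives
```

The dispute then shifts from "is the outcome space well-defined?" to "are numbers like `delta_p` meaningful at all?", which is where the two sides actually disagree.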

And, in any case, it seems like Vaden's point about future expectations being undefined still proves too much. Consider instead the following two hypotheses and suppose you have to bet on one of them: (1) The human population will be at least 8 billion next year, (2) The human population will be at least 7 billion next year. If the probabilities of both hypotheses are undefined, then it would seem permissible to bet on either. But clearly you ought to bet on (2). So  it seems like these probabilities are not undefined after all.

Replies from: Owen_Cotton-Barratt, brekels
comment by Owen_Cotton-Barratt · 2020-12-19T16:59:33.366Z · EA(p) · GW(p)

They recognise that the long-term consequences of almost all of our actions are so difficult to predict that their expected long-term value is roughly 0.

Just want to register strong disagreement with this. (That is, disagreement with the position you report, not disagreement that you know people holding this position.) I think there are enough variables in the world with some nonzero expected impact on the long-term future that, for very many actions, we can usually hazard guesses about their impact on at least some such variables, and hence about the expected impact of the individual actions (of course one will in fact be wrong in a good fraction of cases, but we're talking about expectations).

Note I feel fine about people saying of lots of activities "gee I haven't thought about that one enough, I really don't know which way it will come out", but I think it's a sign that longtermism is still meaningfully under development and we should be wary of rolling it out too fast.

comment by brekels · 2020-12-23T17:22:24.884Z · EA(p) · GW(p)

And, in any case, there are arguments for the claim that we must assign probabilities to hypotheses like ‘The die lands on 1’ and ‘There will exist at least 10^16 people in the future.’ If we don’t assign probabilities, we are vulnerable to getting Dutch-booked 

The Dutch-book argument relies on your willingness to take both sides of a bet at given odds (see Sec. 1.2 of your link). It doesn't tell you that you must assign probabilities; rather, if you do assign them and are willing to bet on them, they must be consistent with the probability axioms.
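To make that distinction concrete, here's a toy version of the Dutch book itself (numbers hypothetical): it only bites if you both hold incoherent credences and stand ready to bet both sides at those odds.

```python
# Toy Dutch book. Suppose your credences are P(A) = 0.7 and P(not-A) = 0.5,
# violating P(A) + P(not-A) = 1, and you'll buy a $1-payout bet on either
# proposition at a price equal to your credence in it.
price_a, price_not_a = 0.7, 0.5

# A bookie sells you both bets. Exactly one of them pays out, whichever way A goes:
for a_occurs in (True, False):
    payout = 1.0  # either the bet on A or the bet on not-A wins
    net = payout - (price_a + price_not_a)
    print(round(net, 2))  # -0.2 both times: a guaranteed loss
```

Refuse to buy one side, and the guarantee evaporates - which is exactly the gap in the argument being pointed out here.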

It may be an interesting shift in focus to consider where you would be indifferent between betting for or against the proposition that ">= 10^24 people exist in the future", since, above, you reason only about taking and not laying a-billion-to-one odds. An inability to find such a value might cast doubt on the usefulness of probability values here.

 

(1) The human population will be at least 8 billion next year, (2) The human population will be at least 7 billion next year. If the probabilities of both hypotheses are undefined, then it would seem permissible to bet on either. But clearly you ought to bet on (2). So  it seems like these probabilities are not undefined after all.

I don't believe this relies on any probabilistic argument, or assignment of probabilities, since the superiority of bet (2) follows from logic.    Similarly, regardless of your beliefs about the future population, you can win now by arbitrage (e.g. betting against (1) and for (2)) if I'm willing to take both sides of both bets at the same odds.

Correct me if I'm wrong, but I understand a Dutch-book to be taking advantage of my own inconsistent credences (which don't obey laws of probability, as above).    So once I build my set of assumptions about future worlds, I should reason probabilistically within that worldview, or else you can arbitrage me subject to my willingness to take both sides.  

If you set your own set of self-consistent assumptions for reasoning about future worlds, I'm not sure how to bridge the gap. We might debate the reasonableness of assumptions or priors that go into our thinking. We might negotiate odds at which we would bet on ">= 10^24 people exist in the future", with our far-future progeny transferring $ based on the outcome, but I see no way of objectively resolving who is making a "better bet" at the moment.

comment by MichaelStJules · 2020-12-18T12:58:13.275Z · EA(p) · GW(p)

I think the probability of these events regardless of our influence is not what matters; it's our causal effect that does. Longtermism rests on the claim that we can predictably affect the longterm future positively. You say that it would be overconfident to assign probabilities too low in certain cases, but that argument also applies to the risk of well-intentioned longtermist interventions backfiring, e.g. by accelerating AI development faster than we align it, an intervention leading to a false sense of security and complacency, or the possibility that the future could be worse if we don't go extinct. Any intervention can backfire. Most will accomplish little. With longtermist interventions, we may never know, since the feedback is not good enough.

I also disagree that we should have sharp probabilities, since this means making fairly arbitrary but potentially hugely influential commitments. That's what sensitivity analysis and robust decision-making under deep uncertainty are for. The requirement that we should have sharp probabilities doesn't rule out the possibility that we could come to vastly different conclusions based on exactly the same evidence, just because we have different priors or weight the evidence differently.

comment by Patrick · 2020-12-21T04:37:29.908Z · EA(p) · GW(p)

I will primarily focus on The case for strong longtermism, listed as "draft status" on both Greaves's and MacAskill's personal websites as of November 23rd, 2020. It has generated quite a lot of conversation within the effective altruism (EA) community despite its status, including multiple episodes of the 80,000 Hours podcast (one, two, three), a dedicated multi-million-dollar fund listed on the EA website, numerous blog posts, and an active forum discussion [? · GW].

"The Case for Strong Longtermism" is subtitled "GPI Working Paper No. 7-2019," which leads me to believe that it was originally published in 2019. Many of the things you listed (two of the podcast episodes, the fund, and several of the blog and forum posts) are from before 2019. My impression is that the paper (which I haven't read) is more a formalization and extension of various existing ideas than a totally new direction for effective altruism.

The word "longtermism" is new [EA · GW], which may contribute to the impression that the ideas it describes are too. This is true in some cases, but many people involved with effective altruism have long been concerned about the very long run.

Replies from: vadmas
comment by vadmas · 2020-12-22T05:30:08.542Z · EA(p) · GW(p)

Oops good catch, updated the post with a link to your comment. 

comment by vadmas · 2020-12-19T19:09:07.219Z · EA(p) · GW(p)

Hi all! Really great to see all the engagement with the post! I'm going to write a follow up piece responding to many of the objections raised in this thread. I'll post it in the forum in a few weeks once it's complete - please reply to this comment if you have any other questions and I'll do my best to address all of them in the next piece :)

comment by tobytrem · 2021-07-22T10:25:37.471Z · EA(p) · GW(p)

Thanks for writing this, I'm reading a lot of critiques of longtermism at the moment and this is a very interesting one.

Apart from the problems that you raise with expected value reasoning about future events, you also question the lack of pure time preference in the Greaves-MacAskill paper. You make a few different points here, some of which could co-exist with longtermism and some couldn't. I was wondering how much of your disagreement might be meaningfully recast as a differing opinion on how large your impartial altruistic budget should be, as an individual or as a society? 

I think this might be helpful because you say things like: "While longtermism says we should be thinking primarily about the far-future consequences of our actions (which is generally taken to be on the scale of millions or billions of years), strong longtermism says this is the only thing we should think about." This is slightly misleading because the paper stresses that strong longtermism applies only to your genuinely impartial altruistic resources; therefore, even in the section on deontic strong longtermism, there is no claim about what we should exclusively care about. (This seems uber nitpicky but I think it is consequential: though the authors of the paper may have stronger views on the ideal size of impartial altruistic budgets, they are very careful not to tie strong longtermism to the truth of those far less rigorously defined arguments.)

However, a belief that we should be biased towards the present (such as you say you hold) could be understood as shrinking the amount of your time and resources you think should be spent on impartial causes at all, and consequently also on longtermist causes. 

To disambiguate, your other claim that "We should prefer good things to happen sooner, because that might help us to bring these good things about" could plausibly bear on how we should use our impartial resources, as an instrumental reason for acting as if  we were partial towards the present. There isn't an argument in your essay for this but it could be an interesting lever to push on because it would disagree with longtermism more directly. 

comment by Greg_Colbourn · 2020-12-18T13:51:07.502Z · EA(p) · GW(p)

This [The ergodicity problem in economics] seems like it could be important, and might fit in somewhere with the discussions of expected utility. I haven't really got my head around it though.

Starting with $100, your bankroll increases 50% every time you flip heads. But if the coin lands on tails, you lose 40% of your total. Since you’re just as likely to flip heads as tails, it would appear that you should, on average, come out ahead if you played enough times because your potential payoff each time is greater than your potential loss. In economics jargon, the expected utility is positive, so one might assume that taking the bet is a no-brainer.

Yet in real life, people routinely decline the bet. Paradoxes like these are often used to highlight irrationality or human bias in decision making. But to Peters, it’s simply because people understand it’s a bad deal.

Here’s why. Suppose in the same game, heads came up half the time. Instead of getting fatter, your $100 bankroll would actually be down to $59 after 10 coin flips. It doesn’t matter whether you land on heads the first five times, the last five times or any other combination in between.
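The arithmetic in the quoted example checks out, and a few lines of code make the ensemble-average/time-average split explicit (the 50% gain and 40% loss figures are from the quote):

```python
# Ensemble average: each flip multiplies the bankroll by 1.5 (heads) or 0.6 (tails),
# so the expected growth factor per flip is positive:
ev_factor = 0.5 * 1.5 + 0.5 * 0.6
print(round(ev_factor, 2))  # 1.05

# Time average: a typical sequence with heads half the time multiplies the
# bankroll by 1.5 * 0.6 = 0.9 per pair of flips, regardless of the order.
bankroll = 100.0
for factor in [1.5, 0.6] * 5:  # 10 flips: 5 heads, 5 tails
    bankroll *= factor
print(round(bankroll, 2))  # 59.05
```

So the per-flip expectation is +5%, while the typical bankroll shrinks by about 10% per pair of flips - the gap between the two averages is the ergodicity problem Peters is pointing at.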

comment by MichaelStJules · 2020-12-23T18:47:40.344Z · EA(p) · GW(p)

Greaves and MacAskill do discuss risk aversion, uncertainty/ambiguity aversion and the issue of seemingly arbitrary probabilities in sections 4.2 and 4.5. They admit that risk aversion with respect to the difference one makes does undermine strong longtermism (and I think ambiguity aversion with respect to the difference one makes would, too, although it might also lead you to doing as little as possible to avoid backfiring), although they cited (Snowden, 2015) claiming that aversion with respect to the difference one makes is too agent-relative and therefore incompatible with impartiality.

Apparently they're working on another paper with Mogensen on these issues.

They also point out that organizations like GiveWell, deal with cluelessness by effectively assuming it away, and you haven't really addressed this point. However, I think the steelman for GiveWell is that they're extremely skeptical about causal effects (or optimistic about the speculative long-term causal effects of their charities' interventions) and possibly uncertainty/ambiguity-averse with respect to the difference one makes (EDIT: although it's not clear that this justifies ignoring speculative future effects; rather it might mean assuming worst cases).

See also the following posts and the discussion:

Greaves and MacAskill, in my view, don't adequately address concerns about skepticism of causal effects and the value of their specific proposals. I discuss this in this thread [EA(p) · GW(p)] and this thread [EA(p) · GW(p)].

comment by Flodorner · 2020-12-22T12:16:10.843Z · EA(p) · GW(p)

I wrote up my understanding of Popper's argument on the impossibility of predicting one's own knowledge (Chapter 22 of The Open Universe) that came up in one of the comment threads. I am still a bit confused about it and would appreciate people pointing out my misunderstandings.

Consider a predictor:

A1: Given a sufficiently explicit prediction task, the predictor predicts correctly

A2: Given any such prediction task, the predictor takes time to predict and issue its reply (the task is only completed once the reply is issued).

T1: A1,A2=> Given a self-prediction task, the predictor can only produce a reply after (or at the same time as) the predicted event

T2: A1,A2=> The predictor cannot predict future growth in its own knowledge

A3: The predictor takes longer to produce a reply, the longer the reply is

A4: All replies consist of a description of a physical system and use the same (standard) language.

A1 establishes implicit knowledge of the predictor about the task. A2, A3 and A4 are there to account for the fact that the machine needs to make its prediction explicit.

A5: Now, consider two identical predictors, Tell and Told. At t=0, give Tell the task to predict Told's state (including its physically issued reply) at t=1 from Told's state at t=0. Give Told the task to predict a third predictor's state (this seems to later be interpreted as Tell's state) at t=1 from that predictor's state at t=0 (such that Tell and Told will be in the exact same state at t=0).

  • If I understand correctly, this implies that Tell and Told will be in the same state all the time, as future states are just a function of the task and the initial state.

T3: If Told has not started issuing its reply at t=1, Tell won't have completed its task at t=1

  • Argument: Tell must issue its reply to complete the task, but Tell has to go through the same states as Told in equal periods of time, so it cannot have started issuing its reply.

T4: If Told has completed its task at t=1, Tell will complete its task at t=1.

  • Argument: Tell and Told are identical machines

T5: Tell cannot predict its own future growth in knowledge

  • Argument: Completing the prediction would take until the knowledge is actually obtained.

A6: The description of the physical state of another description (that is for example written on a punch card) cannot be shorter than said other description.

T6: If Told has completed its task at t=1, Tell must have taken longer to complete its task

  • This is because its reply is longer than Told's, given that it needs to describe Told's reply.

T6 contradicts T4, so some of the assumptions must be wrong.

  • A5 and A1 are some of the most shaky assumptions. If A1 fails, we cannot predict the future. If A5 fails, there is a problem with self-referential predictions.

Initial thoughts: 

This seems to establish too little, as it is about deterministic predictions. Also, the argument does not seem to preclude partial predictions about certain aspects of the world's state (for example, predictions that are not concerned with the other predictor's physical output might go through). Less relevantly, the argument heavily relies on (pseudo) self-references, and Popper distinguishes between explicit and implicit knowledge, with only explicit knowledge seeming to be affected by the argument. It is not clear to me that making an explicit prediction about the future necessarily requires me to make all of the knowledge gains I have until then explicit (if we are talking about deterministic predictions of the whole world's state, I might have to, though, especially if I predict state-by-state).

Then, if all of my criticism was invalid and the argument was true, I don't see how we could predict anything in the future at all (like the sun's existence or the coin flips that were discussed in other comments). Where is the qualitative difference between short- and long-term predictions? (I agree that there is a quantitative one, and it seems quite plausible that some longtermists are undervaluing it.)

I am also slightly discounting the proof, as it uses a lot of words that can be interpreted in different ways. It seems like it is often easier to overlook problems and implicit assumptions in that kind of proof as opposed to a more formal/symbolic proof. 

Popper's ideas seem to have interesting overlap with MIRI's work. 

Replies from: vadmas, Max_Daniel, vadmas
comment by vadmas · 2020-12-22T17:26:16.412Z · EA(p) · GW(p)

I don't see how we could predict anything in the future at all (like the sun's existence or the coin flips that were discussed in other comments). Where is the qualitative difference between short- and long-term predictions? 

 

Haha just gonna keep pointing you to places where Popper writes about this stuff b/c it's far more comprehensive than anything I could write here :) 

This question (and the questions re. climate change Max asked in another thread) is the focus of Popper's book The Poverty of Historicism, where "historicism" means "any philosophy that tries to make long-term predictions about human society" (i.e. Marxism, fascism, Malthusianism, etc.). I've attached a screenshot for proof-of-relevance:


 (Ben and I discuss historicism here fwiw.) I have a pdf of this one, dm me if you want a copy :)

comment by Max_Daniel · 2020-12-22T12:42:57.939Z · EA(p) · GW(p)

Popper's ideas seem to have interesting overlap with MIRI's work. 

Yeah, I was also vaguely reminded of e.g. logical induction when I read the summary of Popper's argument in the text Vaden linked elsewhere in this discussion.

Replies from: vadmas
comment by vadmas · 2020-12-22T17:46:54.366Z · EA(p) · GW(p)

Yes! Exactly! Hence why I keep bringing him up :) 

comment by vadmas · 2020-12-22T16:52:48.561Z · EA(p) · GW(p)

Impressive write up! Fun historical note - in a footnote Popper says he got the idea of formulating the proof using prediction machines from personal communication with the "late Dr A. M. Turing". 

comment by Flodorner · 2020-12-18T19:34:38.559Z · EA(p) · GW(p)

I am confused about the precise claim made regarding the Hilbert Hotel and measure theory.  When you say "we have no  measure over the set of all possible futures",  do you mean that no such measures exist (which would be incorrect without further requirements:  https://en.wikipedia.org/wiki/Dirac_measure , https://encyclopediaofmath.org/wiki/Wiener_measure ), or that we don't have a way of choosing the right measure?  If it is the latter,  I agree that this is an important challenge, but I'd like to highlight that the situation is not too different from the finite case in which there is still an infinitude of possible measures for a given set to choose from. 

Replies from: Max_Daniel, vadmas
comment by Max_Daniel · 2020-12-20T16:20:43.764Z · EA(p) · GW(p)

(I was also confused by this, and wrote a couple of comments [EA(p) · GW(p)] in response. I actually think they don't add much to the overall discussion, especially now that Vaden has clarified below what kind of argument they were trying to make. But maybe you're interested given we've had similar initial confusions.)

comment by vadmas · 2020-12-18T21:01:28.275Z · EA(p) · GW(p)

Yup, the latter. This is why the lack-of-data problem is the other core part of my argument. Once data is in the picture, we can start to get traction. There is something to fit the measure to, something to be wrong about, and a means of adjudicating between which choice of measure is better than which other choice. Without data, all this probability talk is just idle speculation painted with a quantitative veneer.

Replies from: Flodorner
comment by Flodorner · 2020-12-18T21:24:36.834Z · EA(p) · GW(p)

Ok, makes sense. I think that our ability to make predictions about the future steeply declines with increasing time horizons, but find it somewhat implausible that it would become entirely uncorrelated with what is actually going to happen in finite time. And it does not seem to be the case that data supporting long-term predictions is impossible to come by: while it might be pretty hard to predict whether AI risk is going to be a big deal by whatever measure, I can still be fairly certain that the sun will exist in a 1000 years, in part due to a lot of data collection and hypothesis testing done by physicists.

Replies from: Greg_Colbourn, vadmas
comment by Greg_Colbourn · 2020-12-21T15:01:57.950Z · EA(p) · GW(p)

"while it might be pretty hard to predict whether AI risk is going to be a big deal by whatever measure, I can still be fairly certain that the sun will exist in a 1000 years"

These two things are correlated.

Replies from: Flodorner
comment by Flodorner · 2020-12-22T10:05:39.996Z · EA(p) · GW(p)

They are, but I don't think that the correlation is strong enough to invalidate my statement. P(sun will exist|AI risk is a big deal) seems quite large to me. Obviously, this is not operationalized very well...

comment by vadmas · 2020-12-18T23:00:00.529Z · EA(p) · GW(p)

Yes, there are certain rare cases where longterm prediction is possible. Usually these involve astronomical systems, which are unique because they are cyclical in nature and unusually unperturbed by the outside environment. Human society doesn't share any of these properties unfortunately, and long term historical prediction runs into the impossibility proof in epistemology anyway.  

Replies from: Flodorner
comment by Flodorner · 2020-12-20T13:53:54.835Z · EA(p) · GW(p)

I don't think I buy the impossibility proof, as predicting future knowledge in a probabilistic manner is possible (most simply, I can predict that if I flip a coin now, there's a 50/50 chance I'll know the coin landed on heads/tails in a minute). I think there is some important true point behind your intuition about how knowledge (especially of more complex forms than a coin flip) is hard to predict, but I am almost certain you won't be able to find any rigorous mathematical proof for this intuition, because reality is very fuzzy (in a mathematical sense, what exactly is the difference between the coin flip and knowledge about future technology?). So I'd be a lot more excited about other types of arguments (which will likely only support weaker claims).

Replies from: vadmas
comment by vadmas · 2020-12-21T06:00:08.805Z · EA(p) · GW(p)

I don't think I buy the impossibility proof as predicting future knowledge in a probabilistic manner is possible (most simply, I can predict that if I flip a coin now, that there's a 50/50 chance I'll know the coin landed on heads/tails in a minute).

 

In this example you aren't predicting future knowledge, you're predicting that you'll have knowledge in the future - that is, in one minute, you will know the outcome of the coin flip. I too think we'll gain knowledge in the future, but that's very different from predicting the content of that future knowledge today. It's the difference between saying "sometime in the future we will have a theory that unifies quantum mechanics and general relativity" and describing the details of future theory itself.

I am almost certain you  won't be able to find any rigorous mathematical proof for  this intuition

The proof is here: https://vmasrani.github.io/assets/pdf/poverty_historicism_quote.pdf

(And who said proofs have to be mathematical? Proofs have to be logical - that is, concerned with deducing true conclusions from true premises - not mathematical, although they often take mathematical form.)  

Replies from: Max_Daniel, Max_Daniel, Flodorner
comment by Max_Daniel · 2020-12-21T11:32:03.417Z · EA(p) · GW(p)

The proof [for the impossibility of certain kinds of long-term prediction] is here: https://vmasrani.github.io/assets/pdf/poverty_historicism_quote.pdf

Note that in that text Popper says:

The argument does not, of course, refute the possibility of every kind of social prediction; on the contrary, it is perfectly compatible with the possibility of testing social theories - for example economic theories - by way of predicting that certain developments will take place under certain conditions. It only refutes the possibility of predicting historical developments to the extent to which they may be influenced by the growth of our knowledge.

And that he rejects only

the possibility of a theoretical history; that is to say, of a historical social science that would correspond to theoretical physics.

My guess is that everyone in this discussion (including MacAskill and Greaves) agree with this, at least as claims about what's currently possible in practice. On the other hand, it seems uncontroversial that some form of long-run predictions are possible (e.g. above you've conceded they're possible for some astronomical systems).

Thus it seems to me that the key question is whether longtermism requires the kind of predictions that aren't feasible - or whether longtermism is viable with the sort of predictions we can currently make. And like Flodorner I don't think that mathematical or logical arguments will be much help with that question.

Why can't we be longtermists while being content to "predict that certain developments will take place under certain conditions"?

Replies from: Max_Daniel
comment by Max_Daniel · 2020-12-21T11:43:04.700Z · EA(p) · GW(p)

Regarding Popper's claim that it's impossible to "predict historical developments to the extent to which they may be influenced by the growth of our knowledge":

I can see how there might be a certain technical sense in which this is true, though I'm not sufficiently familiar with Popper's formal arguments to comment in detail.

However, I don't think the claim can be true in the everyday sense (rather than just for a certain technical sense of "predicting") that arguably is relevant when making plans for the future.

For example, consider climate change. It seems clear that between now and, say, 2100 our knowledge will grow in various ways that are relevant: we'll better understand the climate system, but perhaps even more crucially we'll know more about the social and economic aspects (e.g. how people will adapt to a warmer climate, how much emission reduction countries will pursue, ...) and about how much progress we've made with developing various relevant technologies (e.g. renewable energy, batteries, carbon capture and storage, geoengineering, ...).

The latter two seem like paradigm examples of things that would be "impossible to predict" in Popper's sense. But does it follow that regarding climate change we should throw our hands up in the air and do nothing because it's "impossible to predict the future"? Or that climate change policy faces some deep technical challenge?

Maybe all we are doing when choosing between climate change policies in Popper's terms is "predicting that certain developments will take place under certain conditions" rather than "predicting historical developments" simpliciter. But as I said, then this to me just suggests that as longtermists we will be just fine using "predictions of certain developments under certain conditions".

I find it hard to see why there would be a qualitative difference between longtermism (as a practical project) and climate change mitigation which implies that the former is infeasible while the latter is a worthwhile endeavor.

comment by Max_Daniel · 2020-12-21T11:13:50.796Z · EA(p) · GW(p)

In this example [coin flip] you aren't predicting future knowledge, you're predicting that you'll have knowledge in the future - that is, in one minute, you will know the outcome of the coin flip.

If we're giving a specific probability distribution for the outcome of the coin flip, it seems like we're doing more than that: 

Consider that we would predict that we'll know the outcome of the coin flip in one minute no matter what we think the odds of heads are.

Therefore, if we do give specific odds (such as 50%), we're doing more than just saying we'll know the outcome in the future.
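This point can be made concrete with a small simulation (my own illustration, not from the thread): even though we're certain we'll end up with full knowledge of the outcome, our current odds are a separate commitment, constrained only by the requirement that they equal the average of the posteriors we expect to hold.

```python
import random

# Sketch of the distinction above: predicting future certainty vs. giving
# odds now. After seeing the flip, our credence in heads jumps to 1.0 or
# 0.0; but the *average* of those future certainties must equal today's
# stated odds, so stating 50% now says something extra.
random.seed(0)
prior = 0.5        # our stated odds of heads right now
trials = 100_000

posteriors = [1.0 if random.random() < prior else 0.0 for _ in range(trials)]
expected_posterior = sum(posteriors) / trials

print(round(expected_posterior, 2))  # ≈ 0.5, matching the prior
```

Whatever specific odds we write down, this averaging constraint binds; predicting "I'll know in a minute" alone leaves the odds completely open.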

Replies from: brekels
comment by brekels · 2020-12-24T19:57:27.681Z · EA(p) · GW(p)

Hi Max_Daniel! I'm sympathetic to both your and Vaden's arguments, so I may try to bridge the gap on climate change vs. your Christmas party vs. longtermism.

Climate change is a problem now, and we have past data to support projecting already-observed effects into the future. So we can make statements of the sort "if current data projected forward with no notable intervention, the Earth would be uninhabitable in x years." This statement relies on some assumptions about future data vs. past data, but we can be reasonably clear about them and debate them.

Future knowledge will undoubtedly help things and reframe certain problems, but a key point is that we know where to start gathering data on some of the aspects you raise: "how people will adapt", "how can we develop renewable energy or batteries", etc., because climate change is already a well-defined problem. We have current knowledge that will help us get off the ground.

I agree the measure-theoretic arguments may prove too much, but the number of people at your Christmas party is an unambiguously posed question for which you have data on how many people you invited, how flaky your friends are, etc.

In both cases, you may use probabilistic predictions, based on a set of assumptions, to compel others to act on climate change or compel yourself to invite more people.

the key question is whether longtermism requires the kind of predictions that aren’t feasible 

At the risk of oversimplification by using the AI safety example as a representative longtermist argument, the key difference is that we haven't created or observed human-level AI, or even AI systems that can adaptively set their own goals.

There are meaningful arguments we can use to compel others to discuss issues of safety (in algorithm development, government regulation, etc).   After all, it will be a human process to develop and deploy these AI, and we can set guardrails by focused discussion today.

Vaden's point seems to be that arguments that rely on expected values or probabilities are of significantly less value in this case. We are not operating in a well-defined problem, with already-available or easily collectable data, because we haven't even created the AI.

This seems to be the key point about "predicting future knowledge" being fundamentally infeasible (just as people in 1900 couldn't meaningfully reason about the internet, let alone make expected utility calculations). Again, we're not as ignorant as people in 1900 and may have a sense this problem is important, but can we actually make concrete progress with respect to killer robots today?

Everyone on this forum may have their own assumptions about future AI, or climate change for that matter. We may not ever be able to align our priors and sufficiently agree on the future, but for the purposes of planning and allocating resources, the discussion around climate change seems significantly more grounded.

Replies from: Max_Daniel
comment by Max_Daniel · 2020-12-27T20:35:41.626Z · EA(p) · GW(p)

Hi brekels, I think these are fair points. In particular, I think we may be able to agree on the following statement as well as more precise versions of it:

We may not ever be able to align our priors and sufficiently agree on the future, but for the purposes of planning and allocating resources, the discussion around climate change seems significantly more grounded [than the one about e.g. AI safety].

In my view, the key point is that, say, climate change and AI safety differ in degree but not in kind regarding whether we can make probabilistic predictions, should take action now, etc.

In particular, consider the following similarities:

  • I agree that for climate change we utilize extrapolations of current trends such as "if  current data projected forward with no notable intervention, the Earth would be uninhabitable in x years." - But in principle we can do the same for AI safety, e.g. "if Moore's Law continued, we could buy a brain-equivalent of compute for $X in Y years."
    • Yes, it's not straightforward to say what a "brain-equivalent of compute" is, or why this matters. But neither is it straightforward to e.g. determine when the Earth becomes "uninhabitable". (Again, I might concede that the latter notion is in some sense easier to define - my point it just that I don't see a qualitative difference.)
  • You say we haven't yet observed human-level AI. But neither have we observed (at least not directly and on a planetary scale), say, +6 degrees of global warming compared to pre-industrial times. Yes, we have observed anthropogenic climate change, but we've also observed AI systems developed by humans, including specific failure modes (e.g. misspecified rewards, biased training data, or lack of desired generalization in response to distributional shift).
    • In various ways it sounds right to me that we have "more data" on climate change, or that the problem of more severe climate change is "more similar" to current climate change than the problem of misaligned transformative AI is to current AI failure modes. But again, to me this seems like "merely" a difference in degree.
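The kind of conditional extrapolation gestured at in the first bullet can be sketched in a few lines. All numbers below are placeholder assumptions of mine, not claims from the thread; the point is only that AI-related trends admit the same "project the trend forward" move as emissions scenarios.

```python
import math

# Toy version of "if Moore's Law continued, we could buy a brain-equivalent
# of compute for $X in Y years." Every constant here is an assumption.
cost_now = 1e9        # assumed current cost ($) of a brain-equivalent of compute
target_cost = 1e4     # assumed cost at which it becomes widely affordable
halving_years = 2.0   # assumed price-performance halving time

# Solve 2^(years / halving_years) = cost_now / target_cost for years.
years = halving_years * math.log2(cost_now / target_cost)
print(f"~{years:.0f} years until brain-equivalent compute costs ${target_cost:,.0f}")
```

As with emissions projections, the output is only as good as the conditional ("if the trend continued") and the contested definitions ("brain-equivalent"), which is exactly the parallel being drawn.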

Separately, I think that if we try hard to find the most effective intervention to avoid some distant harm (say, one we think would occur in the year 2100, or even 2050), we will have to confront the "less well-defined" and "more uncertain" aspects of the future anyway, no matter whether the harm we're considering has some relatively well-understood core (such as climate change). 

This is because, whether we like it or not, these less well-defined issues such as the future of technology, governance, economic and political systems, etc., as well as interactions with other, less predictable, issues (e.g. migration, war, inequality, ...) will make a massive difference to how some apparently predictable harm will in fact affect different people, how we in fact might be able to prevent or respond to it etc.

E.g. it's not that much use if I can predict how much warming we'd get by 2100 conditional on a certain amount of emissions (though note that even in this seemingly "well-defined" case a lot hinges on which prior over climate sensitivity we use, since that has a large effect on the posterior probability of bad tail scenarios - and how to determine that prior isn't something we can just "read off" from any current observation) if I don't know, even for the year 2050, the state of nuclear fusion, carbon capture and storage, geoengineering, solar cell efficiency, batteries, US-China relations, or whether in the meantime a misaligned AI system killed everyone.
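The parenthetical point about priors over climate sensitivity is easy to illustrate numerically. The figures below are hypothetical, chosen only to show the mechanism: two priors that agree on the central estimate can disagree severalfold about the tail.

```python
from statistics import NormalDist

# Hypothetical priors over climate sensitivity (°C per CO2 doubling).
# Both share the same best estimate of 3°C; only the spread differs.
prior_narrow = NormalDist(mu=3.0, sigma=0.7)
prior_wide = NormalDist(mu=3.0, sigma=1.5)

tail = 4.5  # illustrative threshold for a "bad tail scenario"

p_narrow = 1 - prior_narrow.cdf(tail)
p_wide = 1 - prior_wide.cdf(tail)

print(f"P(sensitivity > {tail}°C): narrow prior {p_narrow:.3f}, wide prior {p_wide:.3f}")
# The wide prior puts roughly ten times more probability on the tail,
# even though no current observation tells us which spread to adopt.
```

So the apparently "well-defined" prediction already smuggles in a choice that current data can't settle, which is the sense in which the difference from AI forecasting is one of degree.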

It seems to me that the alternative, i.e. planning based on just those aspects of the future that seem "well-defined" or "predictable", leads to things like the Population Bomb or Limits to Growth, i.e. things that have a pretty bad track record.

comment by Flodorner · 2020-12-21T09:39:13.574Z · EA(p) · GW(p)

It seems like the proof critically hinges on assertion 2) which is not proven in your link. Can you point me to the pages of the book that contain the proof?

I agree that proofs are logical, but since we're talking about probabilistic predictions, I'd be very skeptical of the relevance of a proof that does not involve mathematical reasoning.

Replies from: vadmas
comment by vadmas · 2020-12-21T22:07:59.618Z · EA(p) · GW(p)

Yep it's Chapter 22 of The Open Universe (don't have a pdf copy unfortunately) 

comment by AlexBrown · 2020-12-16T09:29:12.852Z · EA(p) · GW(p)

Great and interesting theme!

comment by MichaelA · 2021-05-03T15:36:06.836Z · EA(p) · GW(p)

(I've just written a bunch of thoughts on this post in a new EA Forum post [EA · GW].)

comment by Max_Daniel · 2021-01-05T13:42:48.047Z · EA(p) · GW(p)

Just saw this, which sounds relevant to some of the comment discussion here:

We are excited to announce that

@anderssandberg

will give a talk at the OKPS about which kinds of historical predictions are possible and impossible, and where Popper's critique of 'historicism' overshoots its goals.

https://twitter.com/OxfordPopper/status/1343989971552776192?s=20

Replies from: vadmas
comment by vadmas · 2021-01-05T17:25:57.110Z · EA(p) · GW(p)

Nice yeah Ben and I will be there! 

comment by Matt Boyd · 2021-07-23T08:20:42.184Z · EA(p) · GW(p)

Hi Vaden, 

I'm a bit late to the party here, I know. But I really enjoyed this post. I thought I'd add my two cents' worth. Although I have a long-term perspective on risk and mitigation, and have long-term sympathies, I don't consider myself a strong longtermist. That said, I wouldn't like to see anyone (e.g. from policy circles) walk away from this debate with the view that it is not worth investing resources in existential risk mitigation. I'm not saying that's what necessarily comes through, but I think there is important middle ground (and this middle ground may actually instrumentally lead to the outcomes that strong longtermists favour, without the need to accept the strong longtermist position).

I think it is just obvious that we should care about the welfare of people here and now. However, the worst thing that can happen to people existing now is for all of them to be killed. So it seems clear that funnelling some resources into x-risk mitigation, here and now, is important. And the primary focus should always be those x-risks that are most threatening in the near term (and the target risks will no doubt change with time; e.g. I would say it is biotechnology in the next 5-10 years, then perhaps climate or nuclear, and then AI, followed by rarer natural risks, or emerging technological risks, etc., all the while building cross-cutting defences such as institutions and resilience). As you note, every generation becomes the present generation and every x-risk will have its time. We can't ignore future x-risks, for this very reason. Each future risk 'era' will become present and we had better be ready. So resources should be invested in future x-risks, or at least in understanding their timing.

The issue I have with strong-longtermism lies in the utility calculations. The Greaves/MacAskill paper presents a table of future human lives that is based on the carrying capacity of the Earth, solar system, etc. However, even here today we do not advocate some imperative that humans must reproduce right up to the carrying capacity of the Earth. In fact many of us think this would be wrong for a number of reasons. To factor 'quadrillions' or any definite number at all into the calculations is to miss the point that we (the moral agents) get to determine (morally speaking) the right number of future people, and we might not know how many this is yet. Uncertainty about moral progress means that we cannot know what the morally correct number is, because theory and argument might evolve across time (and yes, it's probably obvious but I don't accept that non-actual, and never-actual people can be harmed, and I don't accept that non-existence is a harm). 
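The worry about plugging definite numbers of future lives into the calculations can be made vivid with a toy expected-value computation. All figures here are hypothetical, chosen purely for illustration.

```python
# Illustration of the objection above: when the number of future lives N is
# itself a contested moral parameter, the expected value of an x-risk
# intervention is dominated by whichever N we assume.
risk_reduction = 1e-9  # assumed reduction in extinction probability

assumed_populations = (1e10, 1e16, 1e24)  # Earth-scale .. astronomical guesses
expected_values = [risk_reduction * n for n in assumed_populations]

for n, ev in zip(assumed_populations, expected_values):
    print(f"N = {n:.0e} -> expected lives saved = {ev:.0e}")
# The conclusion swings across fourteen orders of magnitude purely with the
# choice of N - the sense in which the utilities are "speculative".
```

Nothing observable discriminates between these choices of N, which is why the comment argues the right number of future people is a moral question we haven't yet settled rather than an input we can read off a carrying-capacity table.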

However, there seems to be value in SOME humans persisting in order that these projects might be continued and hopefully resolved. Therefore, I don't think we should be putting speculative utilities into our 'in expectation' calculations. There are independent arguments for preventing x-risk other than strong longtermism, and the emotional response it generates from many, potentially including aversive policymakers, makes it a risky strategy to push. Even if EA is to be motivated by strong longtermism, it may be useful to advocate an 'instrumental' theory of value in order to achieve the strong-longtermist agenda. There is a possibility that some of EA's views can themselves be an information hazard. Being right is not always being effective, and therefore not always altruistic.
