Re. Longtermism: A response to the EA forum (part 2) 2021-03-01T18:13:10.915Z
Proving too much: A response to the EA forum 2021-02-15T19:00:51.088Z
A case against strong longtermism 2020-12-15T20:56:40.521Z


Comment by vadmas on Thoughts on "A case against strong longtermism" (Masrani) · 2021-05-04T15:58:23.721Z · EA · GW

Hey! Can't respond to most of your points now unfortunately, but just a few quick things :) 

(I'm working on a followup piece at the moment and will try to respond to some of your criticisms there) 

My central point is the 'inconsequential in the grand scheme of things' one you highlight here. This is why I end the essay with this quote:

> If among our aims and ends there is anything conceived in terms of human happiness and misery, then we are bound to judge our actions in terms not only of possible contributions to the happiness of man in a distant future, but also of their more immediate effects. We must not argue that the misery of one generation may be considered as a mere means to the end of securing the lasting happiness of some later generation or generations; and this argument is improved neither by a high degree of promised happiness nor by a large number of generations profiting by it. All generations are transient. All have an equal right to be considered, but our immediate duties are undoubtedly to the present generation and to the next. Besides, we should never attempt to balance anybody’s misery against somebody else’s happiness. 

The "undefined" bit also "proves too much"; it basically says we can't predict anything ever, but actually empirical evidence and common sense both strongly indicate that we can make many predictions with better-than-chance accuracy

Just wanted to flag that I responded to the 'proving too much' concern here:  Proving Too Much

Comment by vadmas on Possible misconceptions about (strong) longtermism · 2021-03-12T02:42:46.520Z · EA · GW

Very balanced assessment! Nicely done :) 

Comment by vadmas on Re. Longtermism: A response to the EA forum (part 2) · 2021-03-02T19:30:27.806Z · EA · GW

Oops sorry haha neither did I! "this" just meant low-engagement, not  your excellent advice about title choice. Updated :) 

Comment by vadmas on Re. Longtermism: A response to the EA forum (part 2) · 2021-03-02T17:26:44.596Z · EA · GW

Hehe taking this as a sign I'm overstaying my welcome. Will finish the last post of the series though and move on :) 

Comment by vadmas on Proving too much: A response to the EA forum · 2021-02-16T20:24:05.194Z · EA · GW

You're correct, in practice you wouldn't - that's the 'instrumentalist' point made in the latter half of the post 

Comment by vadmas on Proving too much: A response to the EA forum · 2021-02-15T22:55:39.359Z · EA · GW

Both actually! See section 6 in Making Ado Without Expectations - unmeasurable sets are one kind of expectation gap (6.2.1) and 'single-hit' infinities are another (6.1.2)

Comment by vadmas on Were the Great Tragedies of History “Mere Ripples”? · 2021-02-10T21:27:05.396Z · EA · GW

Worth highlighting the passage that the "mere ripples" in the title refers to, for those skimming the comments:

Referring to events like “Chernobyl, Bhopal, volcano eruptions, earthquakes, draughts [sic], World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS", Bostrom writes that

> these types of disasters have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people immediately affected, in the big picture of things—from the perspective of humankind as a whole—even the worst of these catastrophes are mere ripples on the surface of the great sea of life. They haven’t significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species.

Mere ripples! That’s what World War II—including the forced sterilizations mentioned above, the Holocaust that killed 6 million Jews, and the death of some 40 million civilians—is on the Bostromian view. This may sound extremely callous, but there are far more egregious claims of the sort. For example, Bostrom argues that the tiniest reductions in existential risk are morally equivalent to the lives of billions and billions of actual human beings. To illustrate the idea, consider the following forced-choice scenario:

Bostrom’s altruist: Imagine that you’re sitting in front of two red buttons. If you push the first button, 1 billion living, breathing, actual people will not be electrocuted to death. If you push the second button, you will reduce the probability of an existential catastrophe by a teeny-tiny, barely noticeable, almost negligible amount. Which button should you push?

For Bostrom, the answer is absolutely obvious: you should push the second button! The issue isn’t even close to debatable. As Bostrom writes in 2013, even if there is “a mere 1 per cent chance” that 10^54 conscious beings living in computer simulations come to exist in the future, then “the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.” So, take a billion human lives, multiply it by 100 billion, and what you get is the moral equivalent of reducing existential risk on the assumption that there is a “one billionth of one billionth of one percentage point” that we run vast simulations in which 10^54 happy people reside. This means that, on Bostrom’s view, you would be a grotesque moral monster not to push the second button. Sacrifice those people! Think of all the value that would be lost if you don’t!
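For anyone who wants to check the arithmetic, here is a quick sketch using only the figures quoted in the passage above (the variable names are mine, not Bostrom's):

```python
# Figures quoted from Bostrom (2013), as cited in the passage above.
chance_of_sim_future = 0.01        # "a mere 1 per cent chance"
beings_in_simulations = 10**54     # conscious beings in computer simulations

# "one billionth of one billionth of one percentage point",
# expressed as a probability:
risk_reduction = 1e-9 * 1e-9 * 0.01   # = 1e-20

# Expected number of future beings saved by that tiny risk reduction:
expected_beings_saved = chance_of_sim_future * beings_in_simulations * risk_reduction
print(f"{expected_beings_saved:.0e}")   # 1e+32

# The comparison drawn in the quote: "a hundred billion times as much
# as a billion human lives":
comparison_lives = 100 * 10**9 * 10**9  # = 1e20

# The EV product dwarfs even that figure:
print(expected_beings_saved > comparison_lives)  # True
```

The point of the passage is exactly this structure: a tiny probability multiplied by an astronomically large stake yields a number that swamps any quantity of actual, present-day lives.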

Comment by vadmas on A case against strong longtermism · 2021-01-05T17:25:57.110Z · EA · GW

Nice yeah Ben and I will be there! 

Comment by vadmas on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-30T00:52:29.921Z · EA · GW

What is your probability distribution across the size of the future population, provided there is not an existential catastrophe? 

Do you for example think there is a more than 50% chance that it is greater than 10 billion?


I don't have a probability distribution across the size of the future population. That said, I'm happy to interpret the question in the colloquial, non-formal sense, and just take >50% to mean "likely". In that case, sure, I think it's likely that the population will exceed 10 billion. Credences shouldn't be taken any more seriously than that - they're epistemologically equivalent to survey questions where the respondent is asked to tick a "very unlikely", "unlikely", "unsure", "likely", or "very likely" box.

Comment by vadmas on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-28T23:12:28.813Z · EA · GW

Granted any focus on AI work necessarily reduces the amount of attention going towards near-term issues, which I suppose is your point. 

 Yep :) 

Comment by vadmas on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-28T20:55:00.974Z · EA · GW

I don't consider human extermination by AI to be a 'current problem' - I think that's where the disagreement lies.  (See my blogpost for further comments on this point) 

Comment by vadmas on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-28T20:38:13.160Z · EA · GW

(as far as I can tell their entire point is that you can always do an expected value calculation and "ignore all the effects contained in the first 100" years)


Yes, exactly. One can always find some  expected value calculation that allows one to ignore present-day suffering. And worse, one can keep doing this between now and eternity, to ignore all suffering forever. We can describe this using the language of "falsifiability" or "irrefutability" or  whatever - the word choice doesn't really matter here. What matters is that this is a very dangerous game to be playing.

Comment by vadmas on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-28T20:27:44.930Z · EA · GW

Yikes... now I'm even more worried ... :| 

Comment by vadmas on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-28T20:15:11.935Z · EA · GW

Firstly, you and vadmas seem to assume number 2 is the case.


Oops nope the exact opposite! Couldn't possibly agree more strongly with

Working on current problems allows us to create moral and scientific knowledge that will help us make the long-run future go well

Perfect, love it, spot on. I'd be 100%  on board with longtermism if this is what it's about - hopefully conversations like these can move it there. (Ben makes this point near the end of our podcast conversation fwiw)

Do you in fact think that knowledge creation has strong intrinsic value? I, and I suspect most EAs, only think knowledge creation is instrumentally valuable. 

Well, both. I do think it's intrinsically valuable to learn about reality, and I support research into fundamental physics, biology, history, mathematics, ethics, etc. for that reason. I think it would be intellectually impoverishing to only support research that has immediate and foreseeable practical benefits. But fortunately knowledge creation also has enormous instrumental value. So it's not a one-or-the-other thing. 

Comment by vadmas on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-28T20:08:33.691Z · EA · GW

I don't see how that gets you out of facing the question


Check out chapter 13 in Beginning of Infinity when you can - everything I was saying in that post is much better explained there :) 

Comment by vadmas on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-28T08:57:53.298Z · EA · GW

Hey Mauricio! Two brief comments - 

Some others are focused on making decisions. From this angle,  EV maximization and Bayesian epistemology were never supposed to be frameworks for creating knowledge--they're frameworks for turning knowledge into decisions, and your arguments don't seem to be enough for refuting them as such.

Yes agreed, but these two things become intertwined when a philosophy  makes people decide to stop creating knowledge. In this case, it's longtermism preventing the creation of moral and scientific knowledge by grinding the process of error correction to a halt, where "error correction" in this context means continuously reevaluating philanthropic organizations based on their near and medium term consequences, in order to compare results obtained against results expected. 

Vaden offers a promising approach to making decisions, but it just passes the buck on this--we'll still need an answer to my question when we get to his step 2

Both approaches pass the buck - that's why I defined 'creativity' here to mean 'whatever unknown software the brain is running to get out of the infinite regress problem.' And one doesn't necessarily need to answer your question, because there's no requirement that the criticism take EV form (although it can). 

Comment by vadmas on A case against strong longtermism · 2020-12-22T17:46:54.366Z · EA · GW

Yes! Exactly! Hence why I keep bringing him up :) 

Comment by vadmas on A case against strong longtermism · 2020-12-22T17:26:16.412Z · EA · GW

I don't see how we could predict anything in the future at all (like the sun's existence or the coin flips that were discussed in other comments). Where is the qualitative difference between short- and long-term predictions? 


Haha just gonna keep pointing you to places where Popper writes about this stuff b/c it's far more comprehensive than anything I could write here :) 

This question, and the questions re. climate change Max asked in another thread, are the focus of Popper's book The Poverty of Historicism, where "historicism" here means "any philosophy that tries to make long-term predictions about human society" (i.e. Marxism, fascism, Malthusianism, etc.). I've attached a screenshot for proof-of-relevance:  

 (Ben and I discuss historicism here fwiw.) I have a pdf of this one, dm me if you want a copy :)

Comment by vadmas on A case against strong longtermism · 2020-12-22T16:52:48.561Z · EA · GW

Impressive write up! Fun historical note - in a footnote Popper says he got the idea of formulating the proof using prediction machines from personal communication with the "late Dr A. M. Turing". 

Comment by vadmas on A case against strong longtermism · 2020-12-22T05:30:08.542Z · EA · GW

Oops good catch, updated the post with a link to your comment. 

Comment by vadmas on A case against strong longtermism · 2020-12-21T22:07:59.618Z · EA · GW

Yep it's Chapter 22 of The Open Universe (don't have a pdf copy unfortunately) 

Comment by vadmas on A case against strong longtermism · 2020-12-21T06:00:08.805Z · EA · GW

I don't think I buy the impossibility proof as predicting future knowledge in a probabilistic manner is possible (most simply, I can predict that if I flip a coin now, that there's a 50/50 chance I'll know the coin landed on heads/tails in a minute).


In this example you aren't predicting future knowledge, you're predicting that you'll have knowledge in the future - that is, in one minute, you will know the outcome of the coin flip. I too think we'll gain knowledge in the future, but that's very different from predicting the content of that future knowledge today. It's the difference between saying "sometime in the future we will have a theory that unifies quantum mechanics and general relativity" and describing the details of that future theory itself.

I am almost certain you  won't be able to find any rigorous mathematical proof for  this intuition

The proof is here:

(And who said proofs have to be mathematical? Proofs have to be logical - that is, concerned with deducing true conclusions from true premises - not mathematical, although they often take mathematical form.)  

Comment by vadmas on A case against strong longtermism · 2020-12-19T19:09:07.219Z · EA · GW

Hi all! Really great to see all the engagement with the post! I'm going to write a follow up piece responding to many of the objections raised in this thread. I'll post it in the forum in a few weeks once it's complete - please reply to this comment if you have any other questions and I'll do my best to address all of them in the next piece :)

Comment by vadmas on A case against strong longtermism · 2020-12-18T23:03:20.792Z · EA · GW

See discussion below w/ Flodorner on this point :) 

You are Flodorner! 

Comment by vadmas on A case against strong longtermism · 2020-12-18T23:00:00.529Z · EA · GW

Yes, there are certain rare cases where longterm prediction is possible. Usually these involve astronomical systems, which are unique because they are cyclical in nature and unusually unperturbed by the outside environment. Human society doesn't share any of these properties unfortunately, and long term historical prediction runs into the impossibility proof in epistemology anyway.  

Comment by vadmas on A case against strong longtermism · 2020-12-18T21:01:28.275Z · EA · GW

Yup, the latter. This is why the lack-of-data problem is the other core part of my argument. Once data is in the picture, now we can start to get traction. There is something to fit the measure to, something to be wrong about, and a means of adjudicating between which choice of measure is better than which other choice. Without data, all this probability talk is just idle speculation painted with a quantitative veneer. 

Comment by vadmas on A case against strong longtermism · 2020-12-18T20:21:22.855Z · EA · GW

Hey Issac,

On this specific question, I have either misunderstood your argument or think it might be mistaken. I think your argument is "even if we assume that the life of the universe is finite, there are still infinitely many possible futures - for example, the infinite different possible universes where someone shouts a different natural number".

But I think this is mistaken, because the universe will end before you finish shouting most natural numbers. In fact, there would only be finitely many natural numbers you could finish shouting before the universe ends, so this doesn't show there are infinitely many possible universes.

Yup you've misunderstood the argument. When we talk about the set of all future possibilities, we don't line up all the possible futures and iterate through them sequentially. For example, if we say it's possible that tomorrow it might either rain, snow, or hail, we *aren't* saying that it will first rain, then snow, then hail. Only one of them will actually happen.

Rather we are discussing the set of possibilities {rain, snow, hail}, which has no intrinsic order, and in this case has a cardinality of 3.  

Similarly with the set of all possible futures. If we let f_i represent a possible future where someone shouts the number i, then the set of all possible futures is {f_1, f_2, f_3, ...}, which has cardinality ℵ₀ and again no intrinsic ordering. We aren't saying here that a single person will shout all numbers between 1 and infinity, because as with the weather example, we're talking about what might possibly happen, not what actually happens. 

More generally, I think I agree with Owen's point that if we make the (strong) assumption the universe is finite in duration and finite in possible states, and can quantise time, then it follows that there are only finite possible universes, so we can in principle compute expected value.

No, this is wrong. We don't consider physical constraints when constructing the set of future possibilities - physical constraints come into the picture later. So in the weather example, we could include in our set of future possibilities something absurd which violates known laws of physics. For example, we are free to construct a set like {rain, snow, hail, some-physically-impossible-scenario}. 

Then we factor in physical constraints by assigning probability 0 to the absurd scenario. For example, our probabilities might be {0.3, 0.3, 0.4, 0}.

But no laws of physics are violated by the scenario "someone shouts the natural number i". This establishes a one-to-one correspondence between the set of future possibilities and the natural numbers, which is why we can say the set of future possibilities is (at least) countably infinite. (You could establish that the set of future possibilities is uncountably infinite as well, by having someone shout a single digit in Cantor's diagonal argument, but that's beyond what is necessary to show that EVs are undefined.)

For example, I'd love to hear when (if at all) you think we should use expected value reasoning, and how we should make decisions when we shouldn't. 

Yes, I think the EV-style reasoning popular on this forum should be dropped entirely, because it leads to absurd conclusions and basically forces people to think along a single dimension. 

So for example I'll produce some ridiculous future scenario (Vaden's x-risk: in the year 254 012 412 there will be a war over blueberries in the Qualon region of the delta quadrant, which causes an infinite amount of suffering) and then say: great, you're free to set your credence about this scenario as high or as low as you like. 

But now I've trapped you! Because I've forced you to think about the scenario only in terms of a single, one-dimensional credence-slider. Your only move is to set your credence-slider really really small, and I'll set my suffering-slider really really high, and then, using EVs, get you to dedicate your income and the rest of your life to Blueberry-Safety research.
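The trap can be sketched in a few lines (all of the numbers here are made up, which is exactly the point):

```python
# The "credence-slider" game: once everything is framed as
# EV = credence * stakes, any positive credence, however tiny,
# can be swamped by a sufficiently large invented stake.
def expected_value(credence, stakes):
    return credence * stakes

your_credence = 1e-30   # you set your slider "really really small"
my_stakes = 1e40        # I set my suffering-slider "really really high"

blueberry_war = expected_value(your_credence, my_stakes)
feed_someone_today = expected_value(0.99, 1.0)  # near-certain, modest, real

# The fabricated scenario dominates the real intervention:
print(blueberry_war > feed_someone_today)  # True
```

No matter how small you make `your_credence`, I can always pick `my_stakes` large enough to win the comparison - which is why restricting the debate to that single axis concedes the game before it starts.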

Note also that EV-style reasoning is only really popular in this community. No other community of researchers reasons this way, and they're able to make decisions just fine. How would any other community reason about my scenario? They would reject it as absurd and be done with it - not think along a single axis (low credence/high credence). 

That's the informal answer, anyway. Realizing that other communities don't reason in this way and are able to make decisions just fine should at least be a clue that dropping EV-style arguments isn't going to result in decision-paralysis.

The more formal answer is to consider using an entirely different epistemology, one which doesn't deal with EVs at all. This is what my vague comments about the 'framework' were alluding to in the piece. Specifically, I have in mind Karl Popper's critical rationalism, which is at the foundation of modern science. CR is about much more than that, however. I discuss what a CR approach to decision making would look like in this piece, if you want some longer thoughts on it. 

But anyway, I digress... I don't expect people to jettison their entire worldview just because some random dude on the internet tells them to. But for anyone reading who might be curious to know where I'm getting a lot of these ideas from (few are original to me), I'd recommend  Conjectures and Refutations.  If you want to know what an alternative to EV style reasoning looks like, the answers are in that book.

(Note: this is a book many people haven't read because they think they already know the gist. "Oh, C&R! That's the book about falsification, right?" It's about much, much more than that :) ) 

Comment by vadmas on A case against strong longtermism · 2020-12-18T03:25:25.741Z · EA · GW

if we helped ourselves to some cast-iron guarantees about the size and future lifespan of the universe (and made some assumptions about quantization) then we'd know that the set of possible futures was smaller than a particular finite number (since there would only be a finite number of time steps and a finite number of ways of arranging all particles at each time step). Then even if I can't write it down, in principle someone could write it down, and the mathematical worries about undefined expectations go away.


It's certainly not obvious that the universe is infinite in the sense that you suggest. Certainly nothing is "provably infinite" with our current knowledge. Furthermore, although we may not be certain about the properties of our own universe, we can easily imagine worlds rich enough to contain moral agents yet which remain completely finite. For instance, you could imagine a cellular automaton with a finite grid size and which only lasted for a finite duration.

Aarrrgggggg was trying to resist weighing in again ... but I think there's some misunderstanding of my argument here. I wrote:

The set of all possible futures is infinite, regardless of whether we consider the life of the universe to be infinite. Why is this? Add to any finite set of possible futures a future where someone spontaneously shouts “1!”, and a future where someone spontaneously shouts “2!”, and a future where someone spontaneously shouts “3!” (italics added)

A few comments:

  • We're talking about possible universes, not actual ones, so cast-iron guarantees about the size and future lifespan of the universe are irrelevant (and impossible anyway).
  • I intentionally framed it as someone shouting a natural number in order to circumvent any counterargument based on physical limits of the universe. If someone can think it, they can shout it.
  • The set of possible futures is provably infinite, because the "shouting a natural number" argument establishes a one-to-one correspondence between the set of *possible* (triple emphasis on that word) futures and the set of natural numbers, which is provably infinite (see proof here).
  • I'm not using fancy or exotic mathematics here, as Owen can verify. Putting sets in one-to-one correspondence with the natural numbers is the standard way one proves a set is countably infinite.
  • Physical limitations regarding the largest number that can be physically instantiated are irrelevant to answering the question "is this set finite or infinite?" Mathematicians do not say the set of natural numbers is finite because there are a finite number of particles in the universe. We're approaching numerology territory here...

Okay this will hopefully be my last comment, because I'm really not trying to be a troll in the forum or anything. But please represent my argument accurately!

Comment by vadmas on A case against strong longtermism · 2020-12-17T05:04:15.325Z · EA · GW

Overall though I think that longtermism is going to end up with practical advice which looks quite a lot like "it is the duty of each generation to do what it can to make the world a little bit better for its descendants."

Goodness, I really hope so. As it stands, Greaves and MacAskill are telling people that they can “simply ignore all the effects [of their actions] contained in the first 100 (or even 1000) years”, which seems rather far from the practical advice both you and I hope they arrive at.

Anyway, I appreciate all your thoughtful feedback - it seems like we agree much more than we disagree, so I’m going to leave it here :)

Comment by vadmas on A case against strong longtermism · 2020-12-16T22:01:47.500Z · EA · GW

Hey Owen - thanks for your feedback! Just to respond to a few points - 

>Your argument against expected value is a direct rebuttal of the argument for, but in my eyes this is one of your weaker criticisms.

Would you be able to elaborate a bit on where the weaknesses are? I see in the thread you agree the argument is correct (and from googling your name I see you have a pure math background! Glad it passes your sniff-test :) ). If we agree EVs are undefined over possible futures, then in the Shivani example, this is like comparing 3 lives to NaN. Does this not refute at least one of the two assumptions longtermism needs to 'get off the ground'?  

> Overall I feel like a lot of your critique is not engaging directly with the case for strong longtermism; rather you're pointing out apparently unpalatable implications.

Just to comment here - yup, I intentionally didn't address the philosophical arguments in favor of longtermism, because I felt that criticizing the incorrect use of expected values was a "deeper" critique, and one which I hadn't seen made on the forum before. What would the argument for strong longtermism look like without the expected value calculus? It's my impression that EVs are central to the claim that we can and should concern ourselves with the future 1 billion years from now. 

Also my hope was that this would highlight a methodological error (equating made up numbers to real data) that could be rectified, whether or not you buy my other arguments about longtermism.  I'd be a lot more sympathetic with longtermism in general if the proponents were careful to adhere to the methodological rule of only ever comparing subjective probabilities with other subjective probabilities  (and not subjective probabilities with objective ones, derived from data). 

> I would welcome more work on understanding the limits of this kind of reasoning, but I'm wary of throwing the baby out with the bathwater if we say we must throw our hands up rather than reason at all about things affecting the future.

Yup totally - if you permit me a shameless self plug, I wrote about an alternative way to reason here.

> As a minor point, I don't think that discounting the future really saves you from undefined expectations, as you're implying.

Oops sorry no wasn't implying that - two orthogonal arguments.

>I do think that if all people across time were united in working for the good

People are united across time working for the good! Each generation does what it can to make the world a little bit better for its descendants, and in this way we are all united.