Posts

When unprecedented times meet unprecedented opportunities 2020-12-31T00:19:54.571Z
Strong Longtermism, Irrefutability, and Moral Progress 2020-12-26T19:44:00.920Z

Comments

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2021-01-18T23:05:57.054Z · EA · GW

Hi Linch! 

We can look at their track record on other questions, and see how reliably (or otherwise) different people's predictions track reality.

I'd rather not rely on the authority of past performance to gauge whether someone's arguments are good. I think we should evaluate the arguments directly. If they're good, they'll stand on their own regardless of someone's prior luck/circumstance/personality.

In general I'm not a fan of this particular form of epistemic anarchy where people say that they can't know anything with enough precision under uncertainty to give numbers, and then act as if their verbal non-numeric intuitions are enough to carry them through consistently making accurate decisions. 

I would actually argue that it's the opposite of epistemic anarchy. Admitting that we can't know the unknowable changes our decision calculus: Instead of focusing on making the optimal decision, we recognize that all  decisions will have unintended negative consequences which we'll have to correct. Fostering an environment of criticism and error-correction becomes paramount. 

It's easy to lie (including to yourself) with numbers, but it's even easier to lie without them.

I'd disagree. I think trying to place probabilities on inherently unknowable events lends us a false sense of security. 

(All said with a smile of course :) ) 

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2021-01-09T05:48:12.987Z · EA · GW

Personally I think equating strong longtermism with longtermism is not really correct.

 

Agree! While I do have problems with (weak?) longtermism, this post is a criticism of strong longtermism :)

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-30T00:14:08.832Z · EA · GW

If you are agnostic about that, then you must also be agnostic about the value of GiveWell-type stuff

Why? GiveWell charities have developed theories about the effects of various interventions. The theories have been tested and, typically, found to be relatively robust. Of course, there is always more to know, and there are always ways we could improve the theories.

I don't see how this relates to not being able to develop a statistical estimate of the probability we go extinct tomorrow. (Of course, I can just give  you a number and call it "my belief that we'll go extinct tomorrow," but this doesn't get us anywhere. The question is whether it's accurate - and what accuracy means in this case.) What would be the parameters of such a model? There are uncountably many things - most of them unknowable - which could affect such an outcome.  

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-29T20:09:36.988Z · EA · GW

Agree with almost all of this. This is why it was tricky to argue against, and also why I say (somewhere? podcast maybe?) that I'm not particularly worried about the current instantiation of longtermism, but about what this kind of logic could justify.

I totally agree that most of the existential threats currently  tackled by  the EA community are real problems (nuclear threats, pandemics, climate change, etc). 

I would note that the Greaves and MacAskill paper actually has a section putting forward 'advancing progress' as a plausible longtermist intervention!

Yeah - but I found this puzzling. You don't need longtermism to think this is a priority - so why adopt it? If you instead adopt a problem/knowledge-focused ethics, then you get to keep all the good aspects of longtermism (promoting progress, etc.), but don't open yourself up to what (in my view) are its drawbacks. I try to say this in the "Antithesis of Moral Progress" section, but obviously did a terrible job haha :)

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-29T19:49:09.145Z · EA · GW

I think I agree, but there's a lot smuggled into the phrase "perfect information on expected value". So much, in fact, that I'm not sure I can quite follow the thought experiment.

When I think of "perfect information on expected value", my first thought is something like a game of roulette. There's no uncertainty (about  what can affect the system), only chance. We understand all the parameters of the system and can write down a model. To say something like this about the future means we would be basically omniscient - we would know what  sort of future knowledge will be developed, etc. Is this also what you had in mind?

(To complicate matters, the roulette analogy is imperfect. For a typical game of roulette we can write down a pretty robust probabilistic model. But it's only a model. We could also study the precise physics of that particular roulette wheel, model the hand spinning it (is that how roulette works? I don't even know), take into account the initial position, the toss of the white ball, and so on and so forth. If we spent a long time doing this, we could come up with a model more accurate than our basic probabilistic one. This is all to say that models are tools suited for a particular purpose. So it's unclear to me what sort of model of the future would let us write down the precise probabilities implicitly required for EV calculations.)

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-29T16:46:43.279Z · EA · GW

There are non-measurable sets (unless you reject the axiom of choice, but then you run into other significant problems). Indeed, the existence of non-measurable sets is the reason for so much of the measure-theoretic formalism.
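For concreteness, the textbook example is the Vitali set - standard measure theory, nothing specific to this debate. Partition $[0,1)$ into equivalence classes under $x \sim y \iff x - y \in \mathbb{Q}$, and use the axiom of choice to pick one representative from each class, collecting them into a set $V$. The countably many rational translates of $V$ (mod 1) are disjoint and cover $[0,1)$, so translation invariance plus countable additivity would force $1 = \sum_{q \in \mathbb{Q} \cap [0,1)} \mu(V + q)$ with every term equal to $\mu(V)$ - impossible whether $\mu(V) = 0$ or $\mu(V) > 0$. So $V$ simply has no Lebesgue measure, and any formalism that wants to assign probabilities to arbitrary sets has to grapple with this.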

If you're not taking a measure-theoretic approach, and are instead using propositions (which, I assume, you are, since this approach grounds Bayesianism), then using infinite sets (which one clearly has to do when reasoning about all possible futures) leads to paradoxes. As E. T. Jaynes writes in Probability Theory: The Logic of Science:

It is very important to note that our consistency theorems have been established only for probabilities assigned on finite sets of propositions ... In laying down this rule of conduct, we are only following the policy that mathematicians from Archimedes to Gauss have considered clearly necessary for nonsense avoidance in all of mathematics. (pg. 43-44). 

(Vaden makes this point in the podcast.) 

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-29T16:25:52.664Z · EA · GW

What I meant by this was that I think you and Ben both seem to assume that strong longtermists don't want to work on near-term problems. I don't think this is a given (although it is of course fair to say that they're unlikely to only want to work on near-term problems).

Mostly agree here - this was the reason for some of the (perhaps cryptic) paragraphs in the section "The Antithesis of Moral Progress." Longtermism erodes our ability to make progress to whatever extent it has us not working on real problems. And to the extent that it does have us working on real problems, I'm not sure what longtermism is actually adding.

Also, just a nitpick on terminology - I dislike the term "near-term" problems, because it seems to imply that there is a well-defined class of "future" problems we can choose to work on, as if there were a set of problems that could be classified as either short-term or long-term. But the fact is that the only problems are near-term problems; everything else is just a guess about what the future might hold. So I see it less as a choice between kinds of problems, and more as a choice between working on real problems and conjecturing about future ones - and I think the latter is actively harmful.

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-28T19:57:13.284Z · EA · GW

Thanks AGB, this is helpful. 

I agree that longtermism is a core part of the movement, and probably commands a larger share of adherents than I imply. However, I'm not sure to what extent strong longtermism is supported. My sense is that while most people agree with the general thrust of the philosophy, many would be uncomfortable with "ignoring the effects" of the near term, and would remain focused on near-term problems. I didn't want to claim that a majority of EAs supported longtermism broadly defined, but then only criticize a subset of those views.

I hadn't seen the results of the EA Survey - fascinating. 

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-28T02:16:29.715Z · EA · GW

Thanks for the engagement! 

I think you're mistaking Bayesian epistemology for Bayesian mathematics. Of course, no one denies Bayes' theorem. The question is: to what should it be applied? Bayesian epistemology holds that rationality consists in updating your beliefs in accordance with Bayes' theorem. As this LW post puts it:

Core tenet 3: We can use the concept of probability to measure our subjective belief in something. Furthermore, we can apply the mathematical laws regarding probability to choosing between different beliefs. If we want our beliefs to be correct, we must do so. 

Next, it's not that "Bayesianism is the right approach in these fields" (I'm not sure what that means); it's that Bayesian methods are useful for some problems. But Bayesianism falls short when it comes to explaining how we actually create knowledge. (No amount of updating on evidence + Newtonian mechanics gives you relativity.)

Despite his popularity among scientists who get given one philosophy of science class. 

 Love the ad hominem attack. 

If you deny that observations confirm scientific theories, then you would have no reason to believe scientific theories which are supported by observational evidence, such as that smoking causes lung cancer. 

"Smoking causes lung cancer" is a hypothesis; "smoking does not cause lung cancer" is another. We then discriminate between the hypotheses based on evidence (we falsify incorrect hypotheses). We slowly develop more and more sophisticated explanatory theories of how smoking causes lung cancer, always seeking to falsify them. At any time, we are left with the best explanation of a given phenomenon. This is how falsification works. (I can't comment on your claim about Popper's beliefs - but I would be surprised if it were true. His books are filled with examples of scientific progress.)

 If you deny the rationality of induction, then you must be sceptical about all scientific theories that purport to be confirmed by observational evidence.

Yes. Theories are not confirmed by evidence; they are falsified by it. (There's no number of white swans you can see that confirms "all swans are white" - a hypothesis which can be refuted by seeing a single black swan.) Evidence plays the role of discrimination, not confirmation.

Inductive sceptics must hold that if you jumped out of a tenth floor balcony, you would be just as likely to float upwards as fall downwards.

No - because we have explanatory theories telling us why we'll fall downwards (general relativity). These theories are the only ones which have survived scrutiny, which is why we abide by them. Confirmationism, on the other hand, purports to explain phenomena by appealing to previous evidence: "Why do we fall downwards? Because we fell downwards before." The sun rising tomorrow morning does not confirm the hypothesis that the sun rises every day. We should not increase our confidence in the sun rising tomorrow because it rose yesterday. Instead, we have a theory about why and when the sun rises (heliocentric model + axial-tilt theory).

Observing additional evidence in favour of a theory should not increase our "credence" in it. Finding confirming evidence for a theory is easy, as evidenced by astrology and ghost stories. The amount of confirmatory evidence for these theories is irrelevant; what matters is whether, and by what, they can be falsified. There are more accounts of people seeing UFOs than there are of people witnessing gamma ray bursts. According to confirmationism, we should thus increase our credence in the former, and have almost none in the existence of the latter.

If you haven't read this piece on the failure of probabilistic induction to favour one generalization over another, I highly encourage you to do so. 

Anyway, happy to continue this debate if you'd like, but that was my primer. 

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T20:53:21.160Z · EA · GW

I don't  think the question makes sense.  I agree with Vaden's argument that there's no well-defined measure over all possible futures. 

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T20:51:00.229Z · EA · GW

But we also have to make similar (although less strong) assumptions and have generalization error even with RCTs. Doesn't GiveWell make similar assumptions about the impacts of most of their recommended charities?

 

Yes, we do! And the strength of those assumptions is key. Our skepticism should rise in proportion to the number and implausibility of the assumptions. So you're definitely right, we should always be skeptical of social science research - indeed, any empirical research. We should be looking for hasty generalizations, gaps in the analysis, methodological errors, etc., and always pushing to do more research. But there's a massive difference between the assumptions driving GiveWell's models and the assumptions required in the nuclear threat example.

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T20:45:56.369Z · EA · GW

Why are probabilities prior to action - why are they so fundamental? Could Andrew Wiles have "rationally put probabilities" on his solving Fermat's Last Theorem? Does this mean he shouldn't have worked on it? Arguments do not have to come in number form.

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T20:41:20.688Z · EA · GW

Sure - nukes exist. They've been deployed before, and we know they have incredible destructive power. We know that many countries have them, and have threatened to use them. We know that protocols are in place for their use.

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T20:32:25.659Z · EA · GW

Hi Michael! 

It seems like you're acting as if you're confident that the number of people in the future is not huge, or that the interventions are otherwise not so impactful (or they do more harm than good), but I'm not sure you actually believe this. Do you? 

I have no idea about the number of future people, and I think this is the only defensible position. Which interventions do you mean? My argument is that longtermism enables reasoning that de-prioritizes current problems in favour of possible, highly uncertain, future problems. Focusing on such problems prevents us from making actual progress.

It sounds like you're skeptical of AI safety work, but it also seems what you're proposing is that we should be unwilling to commit to beliefs on some questions (like the number of people in the future), and then deprioritize longtermism as a result, but, again, doing so means acting as if we're committed to beliefs that would make us pessimistic about longtermism.

I'm not quite sure I'm following this criticism, but I think it can be paraphrased as: you refuse to commit to a belief about x, but commit to one about y, and that's inconsistent. (Happy to revise if this is unfair.) I don't think I agree - would you commit to a belief about what Genghis Khan was thinking on his 17th birthday? Some things are unknowable, and that's okay. Ignorance is par for the course. We don't need to pretend otherwise. Instead, we need a philosophy that is robust to uncertainty - one that, as I've argued, focuses on correcting mistakes and solving the problems in front of us.

I think you do need to entertain arbitrary probabilities

... but they'd be arbitrary, so by definition don't tell us anything about the world? 

how do we decide between human-focused charities and animal charities, given the pretty arbitrary nature of assigning consciousness probabilities to nonhuman animals and the very arbitrary nature of assigning intensities of suffering to nonhuman animals?

This is of course a difficult question. But I don't think the answer is to assign arbitrary numbers to the consciousness of animals. We can't pull knowledge out of a hat, even using the most complex maths possible. We have theories of neurophysiology, and while none of them conclusively tells us that animals definitely feel pain, I think that's the best explanation of our current observations. So, acknowledging this, we are in a situation where, according to our best theory, billions of animals needlessly suffer every year. And that's a massive, horrendous tragedy - one that we should be fighting hard to stop. Assigning credences to the consciousness of animals just so we can start comparing this to other cause areas is pretending to knowledge we don't have.

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T19:11:29.044Z · EA · GW

Oh interesting. Did you read my critique as saying that the philosophy is wrong? (Not sarcastic; serious question.) I don't really even know what "wrong" would mean here, honestly. I think the reasoning is flawed and if taken seriously leads to bad consequences.  

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T19:06:27.212Z · EA · GW

Yeah, I suppose I would still be skeptical of using ranges in the absence of data (you could just apply all my objections to the upper and lower bounds of the range). But I'm definitely all for sensitivity analysis when there are data backing up the estimates!

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T19:04:21.894Z · EA · GW

I have read about (complex) cluelessness. I have a lot of respect for Hilary Greaves, but I don't think cluelessness is a particularly illuminating concept. I view it as a variant of "we can't predict the future." So, naturally, if you ground your ethics in expected value calculations over the long-term future then, well, there are going to be problems.

I would propose to resolve cluelessness as follows: let's admit we can't predict the future. Our focus should instead be on error-correction. Our actions will have consequences - both intended and unintended, good and bad. The best we can do is foster a critical, rational environment where we can discuss the negative consequences, solve them, and repeat. (I know this answer will sound glib, but I'm quite sincere.)

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T17:27:31.703Z · EA · GW

Hey Fin! Nice - lots here. I'll respond to what I can. If I miss anything crucial just yell at me :) (BTW, also enjoying your podcast. Maybe we should have a podcast battle at some point ... you can defend longtermism's honour.)

In any case: declaring that BE "has been refuted" seems unfairly rash.

Yep, this is fair. I'm imagining myself in the position of some random stranger outside of a fancy EA-gala, and trying to get people's attention. So yes - the language might be a little strong (although I do really think Bayesianism doesn't stand up to scrutiny if you drill down on it). 

On the first point, it feels more accurate to say that these numbers are highly uncertain rather than totally arbitrary.

Sure, guessing that there will be between 1 billion and 1000 quadrillion people in the future is probably a better estimate than 1000 people. But it still leaves open a discomfortingly huge range. Greaves and MacAskill could easily have used half a quadrillion people, or 10 quadrillion people. Instead of trying to wrestle with this uncertainty, which is fruitless, we should just acknowledge that we can't know and stop trying.

If it turned out that space colonisation was practically impossible, the ceiling would fall down on estimates for the size of humanity's future. So there's some information to go on — just very little.

Bit of a nitpick here, but space colonization isn't prohibited by the laws of physics, so it can only be "practically impossible" relative to our current knowledge. It's just a problem to be solved. So this particular example couldn't bring down the curtain on our expected value calculations.

Really? If you're a rationalist (in the broad Popperian sense and the internet-cult sense), and we share common knowledge of each other's beliefs, then shouldn't we be able to argue towards closer agreement?

I don't think so. There's no data on the problem, so there's nothing to adjudicate between our disagreements. We can honestly try this if you want. What's your credence? 

Now, even if we could converge on some number, what's the reason for thinking that number captures any aspect of reality? Most academics were sympathetic to communism before it was tried; most physicists thought Einstein was wrong. 

You can use bigger numbers in the sense that you can type extra zeroes on your keyboard, but you can't use bigger numbers if you care about making sure your numbers fall reasonably in line with the available facts, right?

What are the available facts when it comes to the size of the future? There's a reason these estimates are wildly different across papers: From 10^15 here, to 10^68 (or something) from Bostrom, and everything in between. I'm gonna add mine in: 10^124 + 3. 

The response is presumably: "sure, this guess is hugely uncertain. But better to give some number rather than none, and any number I pick is going to seem too precise to you. Crucially, I'm trying to represent something about my own beliefs — not that I know something precise about the actual world."

Agree that this is probably the response. But then we need to be clear that these estimates aren't saying "anything precise about the actual world." They should be treated completely differently from estimates based on actual data. But they're not. When Greaves and MacAskill compare how many lives are saved by donating to AI safety versus the AMF, they compare these numbers as if they were equally reliable and equally capable of capturing something about reality.
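To make the worry concrete, the shape of the far-future side of the comparison is roughly $\text{EV} \approx N \times \Delta p \times v$, where $N$ is the assumed number of future people, $\Delta p$ the change in survival probability attributed to the intervention, and $v$ the value per person. (Toy numbers of my own follow, not the paper's.) With $N = 10^{15}$, $\Delta p = 10^{-9}$, and $v = 1$ life-equivalent, the intervention "saves" $10^6$ lives; swap in $N = 10^{24}$ and the very same intervention saves $10^{15}$ lives. Nothing empirical changed - only the conjectured $N$ - and yet either answer gets placed alongside an AMF cost-effectiveness estimate as if it were the same kind of number.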

Where there's lots of empirical evidence,  there should be little daylight between your subjective credences and the probabilities that fall straight out of the 'actual data'.

There should be no daylight. Whatever daylight there is would have to be a result of purely subjective beliefs, and we shouldn't lend this any credibility. It doesn't belong alongside an actual statistical estimate. 

However, if you agree that subjective credences are applicable to innocuous 'short-term' situations with plenty of 'data', then you can imagine gradually pushing the time horizon (or some other source of uncertainty) all the way to questions about the very long-run future.

I think the above also answers this? Subjective credences aren't applicable to short-term situations. (Again, when I say "subjective" there's an implied "and based on no data".)

Isn't it the case that strong longtermism makes knowledge creation and accelerating progress seem more valuable, if anything? And would the world really generate less knowledge, or progress at a slower rate, if the EA community shifted priorities in a longtermist direction?

I've seen arguments to the contrary. Here for instance:  

I spoke to one EA who made an argument against slowing down AGI development that I think is basically indefensible: that doing so would slow the development of machine learning-based technology that is likely to lead to massive benefits in the short/medium term. But by the own arguments of the AI-focused EAs, the far future effects of AGI dominate all other considerations by orders of magnitude. If that’s the case, then getting it right should be the absolute top priority, and virtually everyone agrees (I think) that the sooner AGI is developed, the higher the likelihood that we were ill prepared and that something will go horribly wrong. So, it seems clear that if we can take steps to effectively slow down AGI development we should.

There's also the quote by  Toby Ord (I think?) that goes something like: "We've grown technologically mature without acquiring the commensurate wisdom." I take the implication here to be that we should stop developing technology and wait for our wisdom to catch up. But this misses how wisdom is generated in the first place: by solving problems. 

When you believe the fate of an untold number of future people is on the line, you can justify almost anything in the present. This is what I find so disturbing about longtermism. Many of the responses to my critique say things like: "Look, longtermism doesn't mean we should throw out concern for the present, or stop focusing on problem-solving and knowledge creation, or stop improving our ethics." But you can get those things without appealing to longtermism. What does longtermism buy you that other philosophies don't, except for headaches when trying to deal with insanely big numbers? I see a lot of downsides, and no benefits that aren't there in other philosophies. (Okay, harsh words to end, I know - but if anyone is still reading at this point I'm surprised ;) )

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T16:18:39.579Z · EA · GW

I'm tempted to just concede this because we're very close to agreement here. 

For example we need to wrestle with problems we face today to give us good enough feedback loops to make substantial progress, but by taking the long-term perspective we can improve our judgement about which of the nearer-term problems should be highest-priority.

If this turns out to be true (i.e., people end up working on actual problems and not, say, defunding the AMF to worry about "AI controlled police and armies"), then I have much  less of a problem with longtermism. People can use whatever method they want to decide which problems they want to work on (I'll leave the prioritization to 80K :) ). 

I actually think that in the longtermist ideal world (where everyone is on board with longtermism) that over 90% of attention -- perhaps over 99% -- would go to things that look like problems already.

Just apply my critique to the x% of attention that's spent worrying about non-problems. (Admittedly, of course, this world is better than the one where 100% of attention is on non-existent possible future problems.)  

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T16:00:09.516Z · EA · GW

Well, far be it from me to tell others how to spend their time, but I guess it depends on what the goal is. If the goal is literally to put a precise number (or range) on the probability of nuclear war before 2100, then yes, I think that's a fruitless and impossible endeavour. History is not an i.i.d. sequence of events. If there is such a war, it will be the result of complex geopolitical factors based on human beliefs, desires, and knowledge at the time. We cannot pretend to know what these will be. Even if you were to gather all the available evidence we have on nuclear near misses, and generate some sort of probability based on this, the answer would look something like:

"Assuming that in  2100 the world looks the same as it did during the time of past nuclear near misses, and nuclear misses are distributionally similar to actual  nuclear strikes, and [a bunch of other assumptions], then the probability of a nuclear war before 2100 is x". 

We can debate the merits of such a model, but I think it's clear  that it would be of limited use.   

None of this is to say that we shouldn't be working on nuclear threat, of course. There are good arguments for why this is a big problem that have nothing to do with probability and subjective credences. 

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T15:39:07.835Z · EA · GW

Hi Owen! 

Re: inoculation against criticism. Agreed that it doesn't make criticism impossible in every sense (otherwise my post wouldn't exist). But if one reasons with numbers only (i.e., EV reasoning), then longtermism becomes unavoidable. As soon as one adopts what I'm calling "Bayesian epistemology", there's very little room to argue with it. One can retort: well, yes, but there's very little room to argue with general relativity, and that is a strength of the theory, not a weakness. But the difference is that GR is very precise: it's hard to argue with because it aligns so well with observation, and there are lots of observations which would refute it (if light didn't bend around stars, say). Longtermism is difficult to refute for a different reason, namely that it's so easy to change the underlying assumptions. (I'm not trying to equate moral theories with empirical theories in every sense, but this example gets the point across, I think.)

Your second point does seem correct to me. I think I try to capture this sentiment when  I say  

Greaves and MacAskill argue that we should have no moral discount factor, i.e., a “zero rate of pure time preference”. I agree — but this is beside the point. While time is morally irrelevant, it is relevant for solving problems.

Here  I'm granting that the moral view that future generations matter could be correct. But this, on my problem/knowledge-focused view of progress,  is irrelevant for decision making.  What matters is maintaining the ability to solve problems and correct our (inevitable) errors. 

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T06:00:49.654Z · EA · GW

Hi Jack, 

I think you're right, the comparison to astrology isn't entirely fair. But sometimes one has to stretch a little bit to make a point. And the point, I think, is important: namely, that these estimates can be manipulated and changed all too easily to fit a narrative. Why not half a quadrillion, or 10 quadrillion, people in the future?

On the falsifiability point - I agree that the claims are  technically falsifiable. I struggled with the language for this reason while writing it (and Max Heitmann helpfully tried to make this point before, but apparently I ignored him). In principle, all of their claims are falsifiable (if we go extinct, then sure, I guess we'll know how big the future will be).  Perhaps it's better if I write "easily varied" or "amenable to drastic change" in place of irrefutable/unfalsifiable? 

The great filter example is interesting, actually. For if we're working in a Bayesian framework, then surely we'd assign such a hypothesis a probability. And then the number of future people could again be vast  in expectation. 

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T05:35:16.159Z · EA · GW

As a major aside - there's a little joke Vaden and I tell on the podcast sometimes when talking about Bayesianism vs Critical Rationalism (an alternative philosophy first developed by Karl Popper). The joke is most certainly a strawman of Bayesianism, but I think it gets the point across.

Bob and Alice are at the bar, being served by Carol. Bob is trying to estimate whether Carol has children. He starts with a prior of 1/2. He then looks up the base rate of adults with children, and updates on that. Then he updates based on  Carol's age. And what car she drives. And the fact that she's married. And so on. He pulls out a napkin, does some complex math, and arrives at the following conclusion: It's 64.745% likely that Carol has children. Bob is proud of his achievement and shows the napkin to Alice.  Alice leans over the bar and asks "Hey Carol - do you have kids?".  
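(For the curious, Bob's napkin is just sequential updating in odds form - something like the sketch below, where every likelihood ratio is invented purely for the joke.)

```python
# A toy sketch of Bob's napkin math: Bayesian updating in odds form.
# Every likelihood ratio below is made up for illustration - that's the point.

def odds_to_prob(odds: float) -> float:
    """Convert odds in favour of a hypothesis to a probability."""
    return odds / (1 + odds)

odds = 1.0  # Bob's prior of 1/2 corresponds to even odds

# Hypothetical likelihood ratios P(evidence | kids) / P(evidence | no kids)
evidence = [
    ("base rate of adults with children", 1.5),
    ("Carol's age", 1.3),
    ("the car she drives", 0.9),
    ("she's married", 1.4),
]

for description, likelihood_ratio in evidence:
    odds *= likelihood_ratio  # Bayes' rule, odds form
    print(f"After updating on {description}: P(kids) ~ {odds_to_prob(odds):.1%}")

# Alice's algorithm: ask Carol.
```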

Now, obviously this is not how the Bayesian acts in real life. But it demonstrates the care the Bayesian takes in having correct beliefs - in having the optimal brain state. I think this is the wrong target. Instead, we should be seeking to falsify as many conjectures as possible, regardless of where the conjectures came from. I don't care what Alice thought the probability was before she asked the question, only about the result of the test.

Comment by ben_chugg on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T05:17:44.864Z · EA · GW

Hey James! 

Answering this in its entirety would take a few more essays, but my short answer is: When there are no data available, I think subjective probability estimates are basically useless, and do not help in generating knowledge. 

I emphasize the condition that there are no data available because data are what allow us to discriminate between different models. And when data are available, well, estimates become less subjective.

Now, I should say that I don't really care what's "reasonable" for someone to do - I definitely don't want to dictate how someone should think about problems. (As an aside, this is a pet peeve of mine when it comes to Bayesianism - it tells you how you must think in order to be a rational person, as if rationality were some law of nature to be obeyed.) In fact, I want people thinking about problems in many different ways. I want Eliezer Yudkowsky applying Bayes' rule and updating in strict accordance with the rules of probability, you being inspired by your fourth-grade teacher, and me ingesting four grams of shrooms with a blindfold on in order to generate as many ideas as possible. But how do we discriminate between these ideas? We subject them to ruthless criticism and see which ones stand up to scrutiny. Assigning numbers to them doesn't tell us anything (again, when there's no underlying data).

In the piece I'm making a slightly different argument to the above, however. I'm criticizing the tendency for these subjective estimates to be compared with estimates derived from actual data. Whether or not someone agrees with me that Bayesianism is misguided, I would hope that they still recognize the problem in comparing numbers of the form "my best guess about x" with "here's an average effect estimate, with confidence intervals, over 5 well-designed RCTs".

Comment by ben_chugg on A case against strong longtermism · 2020-12-18T17:32:50.752Z · EA · GW

Hi Elliott, just a few side comments from someone sympathetic to Vaden's critique: 

I largely agree with your take on time preference. One thing I'd like to emphasize is that the thought experiments used to justify a zero discount factor are typically conditional on knowing that future people will exist, and on knowing what the consequences of our actions will be. This is useful for sorting out our values, but less so when it comes to action, because we never have such guarantees. I think there's often a move made where people say "in theory we should have a zero discount factor, so let's focus on the future!". But the conclusion ignores that in practice we never have that kind of knowledge of the future.

Re: the dice example: 

First, your point about future expectations being undefined seems to prove too much. There are infinitely many ways of rolling a fair die (someone shouts ‘1!’ while the die is in the air, someone shouts ‘2!’, etc.). But there is clearly some sense in which I ought to assign a probability of 1/6 to the hypothesis that the die lands on 1.

True - there are infinitely many things that can happen while the die is in the air, but that's not the outcome space we're concerned with. We're concerned with the result of the roll, which is a finite space with six outcomes. So of course probabilities are defined in that case (and in the 6- vs 20-sided die case). Moreover, they're defined by us, because we've judged that a particular mathematical technique applies reasonably well to the situation at hand. When reasoning about all possible futures, however, we're trying to shoehorn in mathematics that is not appropriate to the problem (math is a tool - sometimes it's useful, sometimes it's not). We can't even write out the outcome space in this scenario, let alone define a probability measure over it.
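To spell out the contrast: for the fair die the whole probability space fits on one line, $\Omega = \{1,\dots,6\}$, $\mathcal{F} = 2^{\Omega}$, $P(A) = |A|/6$ (nothing assumed beyond the die being fair). For "all possible futures" we can't even list the elements of $\Omega$, which is why the analogous triple never gets written down.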

So, to summarise the above, we have to assign probabilities to empirical hypotheses, on pain of getting Dutch-booked and accuracy-dominated. And all reasonable-seeming probability assignments imply that we should pursue longtermist interventions.

Once you buy into the idea that you must quantify all your beliefs with numbers, then yes - you have to start assigning probabilities to all eventualities, and they must obey certain equations. But you can drop that framework completely. Numbers are not primary - again, they are just a tool. I know this community is deeply steeped in Bayesian epistemology, so this is going to be an uphill battle, but assigning credences to beliefs is not the way to generate knowledge. (I recently wrote about this briefly here.) Anyway, the Bayesianism debate is a much longer one (one that I think the community needs to have, however), so I won't yell about it any longer, but I do want to emphasize that it is only one way to reason about the world (and leads to many paradoxes and inconsistencies, as you all know).

Appreciate your engagement :)  

Comment by ben_chugg on A case against strong longtermism · 2020-12-17T20:39:57.172Z · EA · GW

Hi Owen! Really appreciate you engaging with this post. (In the interest of full disclosure, I should say that I'm the Ben acknowledged in the piece, and I'm in no way unbiased. Also, unrelatedly, your story of switching from pure maths to EA-related areas has had a big influence over my current trajectory, so thank you for that :) ) 

I'm confused about the claim 

I don't think they're saying (and I certainly don't think) that we can ignore the effects of our actions over the next century; rather I think those effects matter much more for their instrumental value than intrinsic value.

This seems in direct opposition to what the authors say (and what Vaden quoted above), namely that:

The idea, then, is that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years

I understand that they may not feel this way, but it is what they argued for and is, consequently, the idea that deserves to be criticized. Next, you write that if

we had certainty in some finite time horizon (however large), then all of the EVs would become defined again and this technical objection would disappear.

I don't think so. The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. Let me turn the question around on you: supposing we knew that the time horizon of the universe was finite, could you write out the sample space, $\sigma$-algebra, and measure which allow us to compute over possible futures?
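(For reference, the three objects I'm asking for are just the standard ones: a probability space is a triple $(\Omega, \mathcal{F}, P)$ where $\Omega$ is the set of all outcomes - here, all possible futures - $\mathcal{F}$ is a $\sigma$-algebra of subsets of $\Omega$ closed under complements and countable unions, and $P: \mathcal{F} \to [0,1]$ is countably additive with $P(\Omega) = 1$. Every expected value $\mathbb{E}[X] = \int_\Omega X \, dP$ presupposes all three, so a long-run EV claim is implicitly a claim that this triple exists and is known well enough to integrate against.)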

Finally, I'm not sure what to make of 

e.g. if someone tried the reasoning from the Shivani example in earnest rather than as a toy example in a philosophy paper I think it would rightly get a lot of criticism

When reading their paper, I honestly did not take it as a toy example, and I don't believe the authors present it as such. When discussing Shivani's options, they write:

Our remaining task, then, is to show that there does indeed exist at least one option available to Shivani with the property that its far-future expected value (over BAU) is significantly greater than the best available short-term expected value (again relative to BAU). That is the task of the remainder of this section. 

and when discussing AI risk in particular:

There is also a wide consensus among diverse leading thinkers (both within and outside the AI Research community) to the effect that the risks we have just hinted at are indeed very serious ones, and that much more should be done to mitigate them.

Considering that the Open Philanthropy Project has poured millions into AI safety, that it's listed as a top cause by 80K, and that EA's far-future fund makes payouts to AI safety work, if Shivani's reasoning isn't to be taken seriously then now is probably a good time to make that abundantly clear. Apologies for the harshness in tone here, but for an august institute like GPI to make normative suggestions in its research and then expect no one to act on them is irresponsible.

Anyway, I'm a huge fan of 95% of EA's work, but really think it has gone down the wrong path with longtermism. Sorry for the sass -- much love to all :)