Posts

[Creative Writing Contest] [Referral] Pascal's Mugging Strikes Again 2021-09-14T08:00:20.036Z
The Impossibility of a Satisfactory Population Prospect Axiology 2021-05-12T15:35:33.662Z

Comments

Comment by elliottthornley on EA Forum Creative Writing Contest: $10,000 in prizes for good stories · 2021-09-13T17:48:06.015Z · EA · GW

Sweet! I've messaged him.

Comment by elliottthornley on EA Forum Creative Writing Contest: $10,000 in prizes for good stories · 2021-09-13T11:33:37.877Z · EA · GW

How about Dylan Balfour's 'Pascal's Mugging Strikes Again'? It's great.

Comment by elliottthornley on Towards a Weaker Longtermism · 2021-08-09T11:35:28.704Z · EA · GW

I remember Toby Ord gave a talk at GPI where he pointed out the following:

Let L be long-term value per unit of resources and N be near-term value per unit of resources, and suppose A is the best long-term intervention, C is the best near-term intervention, and B is the intervention that scores best on the mixed value function 0.5*N + 0.5*L. Then spending 50% of resources on the best long-term intervention and 50% of resources on the best near-term intervention will lead you to split resources equally between A and C. But the best thing to do on a 0.5*(near-term value)+0.5*(long-term value) value function is to devote 100% of resources to B.

[Diagram]
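
A minimal numerical sketch of the point, with invented (N, L) values per unit of resources for three hypothetical interventions A, B, and C:

```python
# Hypothetical interventions with (near-term value N, long-term value L) per
# unit of resources. The numbers are purely illustrative.
interventions = {
    "A": (0.0, 10.0),   # best long-term intervention
    "B": (6.0, 6.0),    # best on the 50/50 mixed value function
    "C": (10.0, 0.0),   # best near-term intervention
}

def mixed_value(n, l):
    """Value under the 0.5*N + 0.5*L value function."""
    return 0.5 * n + 0.5 * l

# Splitting resources 50/50 between A and C:
split_a_c = 0.5 * mixed_value(*interventions["A"]) + 0.5 * mixed_value(*interventions["C"])

# Devoting 100% of resources to B:
all_in_b = mixed_value(*interventions["B"])

print(split_a_c, all_in_b)  # 5.0 vs 6.0: B alone beats the A/C split on the mixed value function
```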

Comment by elliottthornley on The Impossibility of a Satisfactory Population Prospect Axiology · 2021-05-15T12:07:27.835Z · EA · GW

Yes, that all sounds right to me. Thanks for the tip about uniformity and fanaticism! Uniformity also comes up here, in the distinction between the Quantity Condition and the Trade-Off Condition.

Comment by elliottthornley on The Impossibility of a Satisfactory Population Prospect Axiology · 2021-05-13T15:03:41.066Z · EA · GW

Thanks! This is a really cool idea and I'll have to think more about it. What I'll say now is that I think your version of lexical totalism violates RGNEP and RNE. That's because of the order in which I have the quantifiers. I say, 'there exists p such that for any k...'. I think your lexical totalism only satisfies weaker versions of RGNEP and RNE with the quantifiers the other way around: 'for any k, there exists p...'.
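
For what it's worth, the underlying logical point (with φ standing in for whichever condition RGNEP and RNE impose on p and k) is just the standard relationship between the two quantifier orders:

$$\exists p\, \forall k\ \varphi(p,k) \;\Rightarrow\; \forall k\, \exists p\ \varphi(p,k), \text{ but not conversely.}$$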

Comment by elliottthornley on The Impossibility of a Satisfactory Population Prospect Axiology · 2021-05-13T14:12:13.499Z · EA · GW

Ah no, that's as it should be!  is saying that  is one of the very positive welfare levels mentioned on page 4.

Comment by elliottthornley on The Impossibility of a Satisfactory Population Prospect Axiology · 2021-05-13T14:07:15.698Z · EA · GW

Thanks! Your points about independence sound right to me.

Comment by elliottthornley on The Impossibility of a Satisfactory Population Prospect Axiology · 2021-05-13T14:05:29.093Z · EA · GW

Thanks for your comment! I think the following is a closer analogy to what I say in the paper:

Suppose apples are better than oranges, which are in turn better than bananas. And suppose your choices are:

  1. An apple and k bananas for sure.
  2. An apple with probability 1 − p and an orange with probability p, along with k oranges for sure.

Then even if you believe:

  • One apple is better than any amount of oranges

It still seems as if, for some large k and small p, 2 is better than 1. 2 slightly increases the risk that you miss out on an apple, but it compensates you for that increased risk by giving you many oranges rather than many bananas.
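
Schematically (with k and p used here only as placeholder names for the number of fruits and the small probability):

$$\text{Option 1: } \{\text{apple},\ k \text{ bananas}\} \text{ with certainty}$$
$$\text{Option 2: } \{\text{apple},\ k \text{ oranges}\} \text{ with probability } 1-p, \quad \{\text{orange},\ k \text{ oranges}\} \text{ with probability } p$$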

On your side question, I don't assume completeness! But maybe if I did, then you could recover the VNM theorem. I'd have to give it more thought.

Comment by elliottthornley on The Impossibility of a Satisfactory Population Prospect Axiology · 2021-05-13T13:37:50.993Z · EA · GW

Thanks! 

And agreed! The title of the paper is intended as a riff on the title of the chapter where Arrhenius gives his sixth impossibility theorem: 'The Impossibility of a Satisfactory Population Ethics.' I think that an RC-implying theory can still be satisfactory.

Comment by elliottthornley on A case against strong longtermism · 2020-12-19T09:55:30.686Z · EA · GW

Thanks!

Your point about time preference is an important one, and I think you're right that people sometimes make too quick an inference from a zero rate of pure time preference to a future-focus, without properly heeding just how difficult it is to predict the long-term consequences of our actions. But in my experience, longtermists are very aware of the difficulty. They recognise that the long-term consequences of almost all of our actions are so difficult to predict that their expected long-term value is roughly 0. Nevertheless, they think that the long-term consequences of some very small subset of actions are predictable enough to justify undertaking those actions.

On the dice example, you say that the infinite set of things that could happen while the die is in the air is not the outcome space about which we're concerned. But can't the longtermist make the same response? Imagine they said: 'For the purpose of calculating a lower bound on the expected value of reducing x-risk, the infinite set of futures is not the outcome space about which we're concerned. The outcome space about which we're concerned consists of the following two outcomes: (1) Humanity goes extinct before 2100, (2) Humanity does not go extinct before 2100.'

And, in any case, it seems like Vaden's point about future expectations being undefined still proves too much. Consider instead the following two hypotheses and suppose you have to bet on one of them: (1) The human population will be at least 8 billion next year, (2) The human population will be at least 7 billion next year. If the probabilities of both hypotheses are undefined, then it would seem permissible to bet on either. But clearly you ought to bet on (2). So it seems like these probabilities are not undefined after all.

Comment by elliottthornley on A case against strong longtermism · 2020-12-18T11:20:11.171Z · EA · GW

Hi Vaden,

Cool post! I think you make a lot of good points. Nevertheless, I think longtermism is important and defensible, so I’ll offer some defence here.

First, your point about future expectations being undefined seems to prove too much. There are infinitely many ways of rolling a fair die (someone shouts ‘1!’ while the die is in the air, someone shouts ‘2!’, etc.). But there is clearly some sense in which I ought to assign a probability of 1/6 to the hypothesis that the die lands on 1.

Suppose, for example, that I am offered a choice: either bet on a six-sided die landing on 1 or bet on a twenty-sided die landing on 1. If both probabilities are undefined, then it seems I can permissibly bet on either. But clearly I ought to bet on the six-sided die.

Now you may say that we have a measure over the set of outcomes when we’re rolling a die and we don’t have a measure over the set of futures. But it’s unclear to me what measure could apply to die rolls but not to futures.

And, in any case, there are arguments for the claim that we must assign probabilities to hypotheses like ‘The die lands on 1’ and ‘There will exist at least 10^16 people in the future.’ If we don’t assign probabilities, we are vulnerable to getting Dutch-booked and accuracy-dominated.
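
To unpack 'Dutch-booked' with the standard textbook illustration (the credences below are hypothetical, and the argument as usually stated targets credences that violate the probability axioms):

```python
# Hypothetical incoherent credences in a hypothesis H and its negation.
credence_H = 0.3
credence_not_H = 0.3   # incoherent: the two credences sum to 0.6, not 1

# Suppose you treat selling a £1 bet on an event at a price equal to your
# credence in that event as fair, and a bookie buys both bets from you.
income = credence_H + credence_not_H   # you collect £0.60 up front
payout = 1.0                           # exactly one of H, not-H obtains, so you pay out £1

print(income - payout)  # -0.4: a guaranteed loss, however the world turns out
```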

Suppose, then, that you accept that we must assign probabilities to the relevant hypotheses. Greaves and MacAskill’s point is that all reasonable-sounding probability assignments imply that we ought to pursue longtermist interventions (given that we accept their moral premise, which I discuss later). Consider, for example, the hypothesis that humanity spreads into space and that 10^24 people exist in the future. What probability assignment to this hypothesis sounds reasonable? Opinions will differ to some extent, but it seems extremely overconfident to assign this hypothesis a probability of less than one in one billion. On a standard view about the relationship between probabilities and rational action, that would imply a willingness to stake £1 billion against the hypothesis: losing it all if the hypothesis turns out true and winning an extra £2 if it turns out false (assuming, for illustration’s sake only, that utility is linear with respect to money across this interval).
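
To spell out the arithmetic behind that stake (setting the probability right at the one-in-one-billion threshold):

$$\mathbb{E}[\text{bet}] \approx (1 - 10^{-9}) \times \pounds 2 \;-\; 10^{-9} \times \pounds 10^{9} \approx \pounds 2 - \pounds 1 = \pounds 1 > 0.$$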

The case is the same with other empirical hypotheses that Greaves and MacAskill consider. To get the result that longtermist interventions don’t maximise expected value, you have to make all kinds of overconfident-sounding probability assignments, like ‘I am almost certain that humanity will not spread to the stars,’ ‘I am almost certain that smart, well-motivated people with billions of pounds of resources would not reduce extinction risk by even 0.00001%,’ ‘I am almost certain that billions of pounds of resources devoted to further research on longtermism would not unearth a viable longtermist intervention,’ etc. So, as it turns out, accepting longtermism does not commit us to strong claims about what the future will be like. Instead, it is denying longtermism that commits us to such claims.

So, to summarise the above, we have to assign probabilities to empirical hypotheses, on pain of getting Dutch-booked and accuracy-dominated. And all reasonable-seeming probability assignments imply that we should pursue longtermist interventions.

Now, this final sentence is conditional on the truth of Greaves and MacAskill’s moral premises. In particular, it depends on their claim that we ought to have a zero rate of pure time preference. 

The first thing to note is that the word ‘pure’ is important here. As you point out, ‘we should be biased towards the present for the simple reason that tomorrow may not arrive.’ Greaves and MacAskill would agree. Longtermists incorporate this factor in their arguments, and it does not change their conclusions. Ord calls it ‘discounting for the catastrophe rate’ in The Precipice, and you can read more about the role it plays there.

When Greaves and MacAskill claim that we ought to have a zero rate of pure time preference, they are claiming that we ought not care less about consequences purely because they occur later in time. This pattern of caring really does seem indefensible. Suppose, for example, that a villain has set a time-bomb in an elementary school classroom. You initially think it is set to go off in a year’s time, and you are horrified. In a year’s time, 30 children will die. Suppose that the villain then tells you that they’ve set the bomb to go off in ten years’ time. In ten years’ time, 30 children will die. Are you now less horrified? If you had a positive rate of pure time preference, you would be. But that seems absurd.

As Ord points out, positive rates of pure time preference seem even less defensible when we consider longer time scales: ‘At a rate of pure time preference of 1 percent, a single death in 6,000 years’ time would be vastly more important than a billion deaths in 9,000 years. And King Tutankhamun would have been obliged to value a single day of suffering in the life of one of his contemporaries as more important than a lifetime of suffering for all 7.7 billion people alive today.’
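
As a rough check on the first of those figures: at a 1 percent annual rate of pure time preference, the weight placed on a death occurring 3,000 years sooner is larger by a factor of

$$1.01^{3000} = e^{3000 \ln 1.01} \approx 9.2 \times 10^{12} \gg 10^{9}.$$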

Thanks again for the post! It’s good to see longtermism getting some critical examination.