Comments

Comment by Tyle_Stelzig on Were the Great Tragedies of History “Mere Ripples”? · 2021-02-09T19:55:16.345Z · EA · GW

See my response to AlexHT for some of my overall thoughts. A couple of other things that might be worth quickly sketching: 

The real meat of the book, from my perspective, lay in two contentions: (1) that longtermist ideas, and particularly the idea that the future is of overwhelming importance, may in the future be used to justify atrocities, especially if these ideas become more widely accepted; and (2) that those concerned about existential risk should be advocating that we decrease current levels of technology, perhaps to pre-industrial levels. I would have preferred it if the book had focused more on arguing for these contentions. 

Questions for Phil (or others who broadly agree):  

  • On (1) from above, what credence do you place on 1 million or more people being killed sometime in the next century in a genocidal act whose public or private justifications were substantially based on EA-originating longtermist ideas?
  • To the extent you think such an event is unlikely to occur, is that mostly because you think that EA-originating longtermists won't advocate for it, or mostly because you think that they'll fail to act on it or persuade others?
  • On (2) from above, am I interpreting Phil correctly as arguing in Chapter 8 for a return to pre-industrial levels of technology?  (Confidence that I'm interpreting Phil correctly here: Low.)
  • If Phil does want us to return to a pre-industrial state, what is his credence that humanity will eventually make this choice? What about in the next century?

P.S. - If you're feeling dissuaded from checking out Phil's arguments because they are labeled as a 'book', and books are long, don't be - it's a bit long for an article, but certainly no longer than many SSC posts, for example. That said, I'm also not endorsing the book's quality. 

Comment by Tyle_Stelzig on Were the Great Tragedies of History “Mere Ripples”? · 2021-02-09T19:37:25.013Z · EA · GW

I upvoted Phil's post, despite agreeing with almost all of AlexHT's response to EdoArad above. This is because I want to encourage good faith critiques, even those which I judge to contain serious flaws. And while there were elements of Phil's book that read to me more like attempts at mood affiliation than serious engagement with his interlocutor's views (e.g. 'look at these weird things that Nick Bostrom said once!'), on the whole I felt that there was enough effort at engagement that I was glad Phil took the time to write up his concerns. 

Two aspects of the book that I interpreted somewhat differently than Alex: 

  • The genocide argument that Alex expressed confusion about: I thought Phil's concern was not that longtermism would merely consider genocide while evaluating options, but that it seems plausible to Phil that longtermism (or a future iteration of it encountering different facts) could endorse genocide - i.e. that Phil is worried about genocide as an output of longtermism's decision process, not as an input. My model of Phil is that if he were confident that longtermism would always reject genocide, then he wouldn't be concerned merely that such possibilities are evaluated. Confidence: Low/moderate. 
  • The section describing utilitarianism: I read this section as merely aiming to describe an aspect of longtermism and to highlight features which might be wrong or counter-intuitive, not to actually make any arguments against the views he describes. This could explain Alex's confusion about what was being argued for (nothing) and feeling that intuitions were just being thrown at him (yes). I think Phil's purpose here is to lay the groundwork for his later argument that these ideas could be dangerous.  The only argument I noticed against utilitarianism comes later - namely, that together with empirical beliefs about the possibility of a large future it leads to conclusions that Phil rejects. Confidence: Low. 

I agree with Alex that the book was not clear on these points (among others), and I attribute our different readings to that lack of clarity. I'd certainly be happy to hear Phil's take. 

I have a couple of other thoughts that I will add in a separate comment. 

Comment by Tyle_Stelzig on Use resilience, instead of imprecision, to communicate uncertainty · 2020-07-26T20:10:19.033Z · EA · GW

In the technical information-theoretic sense, 'information' counts how many bits are required to convey a message. And bits describe proportional changes in the number of possibilities, not absolute changes. The first bit of information reduces 100 possibilities to 50, the second reduces 50 possibilities to 25, etc. So the bit that takes you from 100 possibilities to 50 is the same amount of information as the bit that takes you from 2 possibilities to 1.

And similarly, the 3.3 bits that take you from 100 possibilities to 10 are the same amount of information as the 3.3 bits that take you from 10 possibilities to 1. In each case you're reducing the number of possibilities by a factor of 10.

To take your example: If you were using two digits in base four to represent per-sixteenths, then each digit contains 50% of the information (two bits each, reducing the space of possibilities by a factor of four). To take the example of per-thousandths: Each of the three digits contains a third of the information (3.3 bits each, reducing the space of possibilities by a factor of 10).
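Since this is all just arithmetic with logarithms, here is a minimal Python sketch (my own illustration, not part of the original comment; the helper `bits` is hypothetical) of the rule that information equals log2(possibilities before / possibilities after), reproducing the numbers above:

```python
from math import log2

def bits(before: int, after: int) -> float:
    """Information, in bits, gained by narrowing `before` possibilities down to `after`."""
    return log2(before / after)

# Each halving of the possibility space carries exactly one bit.
print(bits(100, 50))  # 1.0
print(bits(2, 1))     # 1.0

# Reducing possibilities by a factor of 10 carries ~3.32 bits, regardless of scale.
print(bits(100, 10))  # ~3.32
print(bits(10, 1))    # ~3.32

# Two base-4 digits for per-sixteenths: each digit carries 2 of the 4 bits (50%).
print(bits(16, 4), bits(16, 1))        # 2.0, 4.0

# Three decimal digits for per-thousandths: each carries ~3.32 of ~9.97 bits (a third).
print(bits(1000, 100), bits(1000, 1))  # ~3.32, ~9.97
```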

But upvoted for clearly expressing your disagreement. :)

Comment by Tyle_Stelzig on Doing Good Better - Book review and comments · 2016-01-12T17:12:24.696Z · EA · GW

The difference between carbon offsetting and meat offsetting is that carbon offsetting doesn't involve causing harms, while meat offsetting does.

Most people would consider it immoral to murder someone for reasons of personal convenience, even if you make up for it by donating to a 'murder offset', such as, let's say, a police department. MacAskill is saying that 'animal murder' offsetting is like this, because you are causing harm to animals, then attempting to 'make up for it' by helping other animals. Climate offsets are different because the offset prevents the harm from occurring in the first place.

Indeed, murder offsets would be okay from a purely consequentialist perspective. But this is not the trolley problem, for the reason that Telofy explains very well in his second paragraph above. Namely, the harmful act that you are tempted to commit is not required in order to achieve the good outcome.

Comment by Tyle_Stelzig on Doing Good Better - Book review and comments · 2016-01-12T17:00:07.180Z · EA · GW

Regarding your first paragraph: most people would consider it unethical to murder someone for reasons of personal convenience, even if you donated to a 'murder offset' organization such as, I don't know, let's say police departments. MacAskill is saying that 'animal murder' offsets are unethical in this same way. Namely, you are committing an immoral act - killing an animal - then saving some other animals to 'make up for it'. Climate offsets are different because in that case the harm is never caused in the first place.

Regarding your last paragraph: This is a nice example, but it fails if your company might modulate the amount of food that it buys in the future based on how much gets eaten. For example, if they consistently have a bunch of leftover chicken, they might try to save some money by purchasing less chicken next time. If this is possible, then there is a reason not to eat the free chicken.