Posts

AI risk hub in Singapore? 2020-10-29T11:51:49.741Z · score: 23 (19 votes)
Relevant pre-AGI possibilities 2020-06-20T13:15:29.008Z · score: 22 (9 votes)
Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post 2019-02-15T19:14:41.459Z · score: 66 (25 votes)
Tiny Probabilities of Vast Utilities: Bibliography and Appendix 2018-11-20T17:34:02.854Z · score: 9 (6 votes)
Tiny Probabilities of Vast Utilities: Concluding Arguments 2018-11-15T21:47:58.941Z · score: 21 (12 votes)
Tiny Probabilities of Vast Utilities: Solutions 2018-11-14T16:04:14.963Z · score: 18 (10 votes)
Tiny Probabilities of Vast Utilities: Defusing the Initial Worry and Steelmanning the Problem 2018-11-10T09:12:15.039Z · score: 22 (11 votes)
Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? 2018-11-08T10:09:59.111Z · score: 22 (14 votes)
Ongoing lawsuit naming "future generations" as plaintiffs; advice sought for how to investigate 2018-01-23T22:22:08.173Z · score: 8 (8 votes)
Anyone have thoughts/response to this critique of Effective Animal Altruism? 2016-12-25T21:14:39.612Z · score: 3 (7 votes)

Comments

Comment by kokotajlod on When you shouldn't use EA jargon and how to avoid it · 2020-10-30T09:42:35.698Z · score: 6 (4 votes) · EA · GW

My only disagreement is with the order of magnitude thing. I love orders-of-magnitude talk. I think it's really useful to think in orders of magnitude about many (most?) things. If this means I sometimes say "one order of magnitude" when I could just say "ten times," so be it.

Comment by kokotajlod on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-10-30T09:39:19.070Z · score: 2 (2 votes) · EA · GW

Thanks for following up. Nope, I didn't write it, but comments like this one and this one are making me bump it up in priority! Maybe it's what I'll do next.

Comment by kokotajlod on AI risk hub in Singapore? · 2020-10-30T09:35:55.995Z · score: 2 (2 votes) · EA · GW

Yeah, I have no idea, and would defer to people like Brian Tse. I've talked to people from Singapore but not people from China about this. My rough thoughts at the moment are that Singapore would be an easier (and therefore more probable) place to start a hub than China, due to its language, diversity, and political situation. A hub in Singapore might also make it easier to build another hub in China later. However, given a choice between a hub in China and a hub in Singapore, a hub in China is probably better, since more AI research is being done in China. I'm not sure how to balance these considerations, and there are probably other considerations I haven't thought of.

Comment by kokotajlod on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-18T14:07:44.826Z · score: 3 (2 votes) · EA · GW

Thanks! This is good news; will go look at those studies...

Comment by kokotajlod on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-18T06:43:37.986Z · score: 1 (1 votes) · EA · GW

I agree! I'd love to see more research into this stuff. In my Relevant Pre-AGI Possibilities doc I call this "Deterioration of collective epistemology." I intend to write a blog post about a related thing (Persuasion Tools) soon.

Comment by kokotajlod on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-17T11:43:55.784Z · score: 17 (6 votes) · EA · GW

I disagree. Trump draws his power from the Red Tribe; the Blues can't cancel him because they don't have leverage over him.

We, by contrast, are mostly either Blues ourselves or embedded in Blue communities.

Can you give an example of someone or some community in a situation like ours, that adopted a strategy of thoroughgoing shamelessness, and that successfully avoided cancellation?

Comment by kokotajlod on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-17T11:37:34.426Z · score: 14 (5 votes) · EA · GW

Yeah, I wasn't super clear, sorry. I think I basically agree with you that communities can and should have higher standards than society at large, and that communities can and should be allowed to set their own standards to some extent. And in particular I think that insofar as we think someone has bad character, that's a decently good reason not to invite them to things. It's just that I don't think that's the most accurate description of what happened at Munich, or of what's happening with cancel culture more generally -- I think it's more like an excuse, rationalization, or cover story for what's really happening, which is that a political tribe is using bullying to get us to conform to their ideology.

As a mildly costly signal of my sincerity here, I'll say this: I personally am not a huge fan of Robin Hanson, and if I were having a birthday party or something and a friend of his was there and wanted to bring him along, I'd probably say no. This is so even though I respect him quite a lot as an intellectual.

I should also flag that I'm still confused about the best way to characterize what's going on. I do think there are people within each tribe explicitly strategizing about how the tribe should bully people into conformity, but I doubt they have any significant control over the overall behavior of the tribe; instead I think it's more of an emergent/evolved phenomenon... And of course it's been going on since the dawn of human history, and it waxes and wanes. It just seems to be waxing now. Personally I think technology is to blame--echo chambers, filter bubbles, polarization, etc. If these trends are real, then they are extremely important to predict and understand, because they are major existential risk factors and also directly impede our community's ability to figure out what we need to do to help the world and to coordinate to do it.

Comment by kokotajlod on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-16T05:44:46.369Z · score: 18 (6 votes) · EA · GW

I'm not construing "do not invite someone to speak at events" as cancel culture.

This was an invite-then-caving-to-pressure-to-disinvite. And it's not just any old pressure, it's a particular sort of political tribal pressure. It's one faction in the culture war trying to have its way with us. Caving in to specifically this sort of pressure is what I think of as adopting cancel culture.

Comment by kokotajlod on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T08:31:01.387Z · score: 6 (4 votes) · EA · GW

I think I agree with you except for your example. I'm not sure, but it seems plausible to me that in many cases the bullied kid doing X is a bad idea. It seems like it will encourage the bullies to ask for Y and Z later.

Comment by kokotajlod on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-14T20:07:30.241Z · score: 15 (9 votes) · EA · GW

Judgments about someone's character are, unfortunately, extremely tribal. Different political tribes have wildly different standards for what counts as good character and what counts as mere eccentricity. In many cases one tribe's virtue is another tribe's vice.

In light of this, I think we should view with suspicion the argument that it's OK to cancel someone because they have bad character. Yes, some people really do have bad character. But cancel culture often targets people who have excellent character (this is something we can all agree on, because cancel culture isn't unique to any one tribe; for examples of people with excellent character getting cancelled, just look at what the other tribe is doing!), so we should keep this sort of rationale-for-cancellation on a tight leash.

Here is a related argument someone might make, which I bring up as an analogy to illustrate my point:

Argument: Some ideas are true, others are false. The false ideas often lead to lots of harm, and spreading false ideas therefore often leads to lots of harm. Thus, when we consider whether to invite people to events, we shouldn't invite people insofar as we think they might spread false ideas. Duh.

My reply: I mean, yeah, it seems like we have to draw the line somewhere. But the overwhelming lesson of history is that when communities restrict membership on the basis of which ideas they deem true, that just leads to an epistemic death spiral where groupthink and conformity reign, ideology ossifies into dogma, and the community drifts farther from the truth instead of continually seeking it. Instead, communities that want to find the truth need to be tolerant of a wide range of opinions, especially opinions advanced politely and in good faith, etc. There's a lot more to say about best practices for truth-seeking community norms, and I'd be happy to go into more detail if you like, but you get the idea.

I think the legal/justice system is another example of this.

Argument: Look, we all know OJ Simpson did it. It's pretty obvious at this point. So why don't we just... go grab him and put him in jail? Or beat him up or something?

Reply: Vigilante justice often makes mistakes. Heck, even the best justice systems often make mistakes. Worse, sometimes the mistakes are systemically biased in various ways, or are not even mistakes at all, but rather intentional patterns of oppression. And the way we prevent this sort of thing is by having all sorts of rules for what counts as admissible evidence, for when the investigation into someone's wrongdoing is supposed to be over and they are supposed to go free, etc. And yeah, sometimes following these rules means that people we are pretty sure are guilty end up going free. And this is bad. But it's better than the alternative.

Comment by kokotajlod on Plan for Impact Certificate MVP · 2020-10-04T08:09:56.126Z · score: 1 (1 votes) · EA · GW

These problems seem like they've been solved satisfactorily by museums and the associated industries, though.

Comment by kokotajlod on Plan for Impact Certificate MVP · 2020-10-02T22:56:49.046Z · score: 1 (1 votes) · EA · GW

Thanks! I get the divisibility thing, but why is it harder to retroactively fund IC holders with physical objects? Can't you just buy the object and add it to your collection? Isn't this basically how art works already -- museums pay millions for paintings by long-dead artists, so smaller collectors pay hundreds of thousands, and individual rich people pay tens of thousands?

Comment by kokotajlod on Plan for Impact Certificate MVP · 2020-10-02T09:44:05.793Z · score: 10 (7 votes) · EA · GW

What's the case for doing these ICs on the blockchain instead of through some other means? It's been a while since I thought about this, but I remember thinking that it would work best via signed physical objects, i.e. relics. Someone does something great for the world, you take a symbolic object that was part of the thing, print the relevant certificate on it somewhere, and get them to sign it... This is good because it makes the certificates way more valuable and more likely to retain value. They can be put in a museum, for example, and be actually interesting to look at instead of just being pieces of paper (or worse, electronic paper!). Also, humans have been doing pretty much exactly this for millennia with relics of saints and historical antiques and museums etc., so it'll seem less weird and get more buy-in from a broader range of people.

Comment by kokotajlod on The Fable of the Bladder-Tyrant · 2020-09-30T21:21:03.017Z · score: 4 (3 votes) · EA · GW

I agree that peeing etc. happens a lot and that a large quantity of minor suffering can sometimes be more important than a smaller quantity of intense suffering. However, I think that in this case the things I mentioned -- poverty, aging, etc. -- are overall much more important. Consider: What would happen if we polled people and asked them, "What if you had the choice between two pills, one of which would keep you young and healthy until you die of some non-natural cause, and another of which would magically eliminate your pee and poop so you never had to go to the bathroom. Which would you choose?" I'd bet the vast majority of people would choose the first pill, if they chose any pill at all. Now imagine asking similar questions about poverty... I'm pretty sure people would rather pee and poop than be poor. Much rather. Similarly, consider asking people to choose between giving the no-pee-or-poop pills to 100 people, or helping 1 person stay healthy for a mere 10 more years. I'm pretty sure almost everyone would say the morally correct choice is the second one. All this to say, I feel pretty confident in my judgment that eliminating poverty, aging, etc. is way more important than eliminating pee & poop etc.

I'm glad to hear you talk about catheters -- they are indeed much more tractable. However, my understanding is that people who use them are usually happy to stop using them; this suggests that they are actually less comfortable, more degrading, etc. than our usual bodily functions!

I totally buy that it's possible for society to change its norms around peeing, pooping, etc. and decide that we should eliminate them. Like you said, society changes its opinions on things like this every century or so. However, the question is how much control we have over society's opinions on this. And while I think we do have some (small) amount of influence, I think we'd better use that influence to change society's opinions about other things, like the moral status of farmed animals or the importance of existential risk reduction. (Because again, those things are more important. And for that matter they are more tractable too; it's easier to change people's minds about them, I think.)

Comment by kokotajlod on The Fable of the Bladder-Tyrant · 2020-09-30T10:39:54.132Z · score: 4 (3 votes) · EA · GW

Large-scale? Not compared to other things. Poverty is much more important, animal welfare is much more important, defeating aging is much more important... it's so easy to think of things which are much more important that I'm not going to bother extending the list.

Tractable? Not compared to other things. It's much easier to convince people that aging is bad and that research on how to stop it should be funded than to convince people that having to go to the bathroom is bad and that research on how to stop it should be funded. Also, scientifically there might be less hope for a solution in the short run; the waste our bodies produce has to leave somehow, and there may not be a significantly more elegant way to do it. In the long run, with nanobot or upload bodies, this problem can be solved, but by far the most effective way for us to achieve that is simply to work on AI alignment and the like.

Neglected? Sure.

Comment by kokotajlod on Examples of self-governance to reduce technology risk? · 2020-09-26T13:54:09.282Z · score: 3 (5 votes) · EA · GW

Maybe something about recommendation algorithms? Facebook, Twitter, YouTube, etc. taking steps to clamp down on hate speech etc., or to make their algorithms less racially biased etc.

Comment by kokotajlod on Parenting: Things I wish I could tell my past self · 2020-09-14T13:50:08.278Z · score: 25 (11 votes) · EA · GW

Thanks for this. For my part, I have a daughter who is almost 1 year old now. I endorse / also experienced pretty much everything you describe here, e.g. I didn't change much as a person either.

The sleeping in shifts thing sounds good. I wish we had done something like that. Instead, I just did all the night feedings, and also took care of the baby for most of the day most days until we had childcare. It sucked. I was constantly sleep-deprived for six months or so, and I still don't get as much sleep as I used to.

Taking leave is super important. Neither I nor my wife took leave; I just worked less hard on my dissertation and other responsibilities. (Well, my wife took one week off from her classes, but she had to make it up later.) My productivity crashed, and I became unhappy trying to do too many things at once without sleep.

We stopped breastfeeding after three months because my wife had to study for exams. I thought that it wouldn't be too hard to get the baby back to breast afterwards. I was wrong; we never got the baby back to breast and had to pump thereafter.

Comment by kokotajlod on A New X-Risk Factor: Brain-Computer Interfaces · 2020-08-10T11:51:30.264Z · score: 4 (4 votes) · EA · GW

Thanks for this post! It seems to me that one additional silver lining of BCI is that the mind-reading that could be used for totalitarianism could also be used to enforce treaties. World leaders could agree to say twice a day, while under a lie detector, something like "I do not now, nor have I ever, had any intention to break treaty X under any circumstances." This could make arms races and other sorts of terrible game-theoretic situations less bad. I think Bostrom first made this point.

Comment by kokotajlod on Delegate a forecast · 2020-08-07T18:58:20.551Z · score: 1 (1 votes) · EA · GW

Amazing, splendiferously wonderful news! The passport arrived TODAY, August 7!

Comment by kokotajlod on Delegate a forecast · 2020-07-29T09:49:49.087Z · score: 1 (1 votes) · EA · GW

Oh, and to answer your question about why shorter timelines are more likely than longer ones: Progress right now seems to be driven by compute, and in particular by buying greater and greater quantities of it. In a few years this trend MUST stop, because not even the US government would have enough money to continue spending an order of magnitude (or more) extra each year. So if we haven't gotten to crazy AI by 2026 or so, the current paradigm of "just add more compute" will no longer be so viable, and we're back to waiting for new ideas to come along.

Comment by kokotajlod on Delegate a forecast · 2020-07-29T09:46:13.382Z · score: 2 (2 votes) · EA · GW

I have a spreadsheet of different models and what timelines they imply, and how much weight I put on each model. The result is 18% by end of 2026. Then I consider various sources of evidence and update upwards to 38% by end of 2026. I think if it doesn't happen by 2026 or so it'll probably take a while longer, so my median is around 2040.

The most highly weighted model in my spreadsheet takes compute to be the main driver of progress and uses a flat distribution over orders of magnitude (OOMs) of compute. Since it's implausible that the flat distribution should extend more than 18 or so OOMs beyond where we are now, and since we are going to get 3-5 more OOMs in the next five years, that yields roughly 20%.
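
For concreteness, here is a minimal sketch of that arithmetic as I read it -- my reconstruction, not the actual spreadsheet model. It assumes the number of additional OOMs of compute needed is uniformly distributed on [0, 18] and that roughly 3-5 more OOMs arrive by ~2026.

```python
# Minimal sketch of the flat-prior-over-OOMs calculation described above.
# Assumptions (not from the original comment's spreadsheet): required extra
# compute is uniform on [0, 18] OOMs; we gain roughly 3-5 OOMs by ~2026.

def p_by_2026(ooms_gained: float, max_ooms: float = 18.0) -> float:
    """P(required OOMs <= OOMs gained) under a flat prior on [0, max_ooms]."""
    return min(ooms_gained, max_ooms) / max_ooms

for gained in (3.0, 4.0, 5.0):
    print(f"{gained:.0f} OOMs gained -> {p_by_2026(gained):.0%}")
# 3 OOMs -> 17%, 4 OOMs -> 22%, 5 OOMs -> 28%; i.e. roughly the 20% figure.
```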

The biggest upward update from the bits of evidence comes from the trends embodied in transformers (e.g. GPT-3) and also to some extent in AlphaGo, AlphaZero, and MuZero: strip out all that human knowledge and specialized architecture, just make a fairly simple neural net and make it huge, and it does better and better the bigger you make it.

Another big update upward is... well, just read this comment. That comment didn't give me a new picture of what was going on, but rather confirmed the picture I already had. The fact that it is so highly upvoted and so little objected to suggests that the same goes for lots of people in the community. Now there's common knowledge.

Comment by kokotajlod on Delegate a forecast · 2020-07-28T21:19:47.427Z · score: 1 (1 votes) · EA · GW

Thanks! It's about what I expected, I guess, but different from my own view (I've got more weight on much shorter timelines). It's encouraging to hear though!

Comment by kokotajlod on Delegate a forecast · 2020-07-28T21:16:13.175Z · score: 2 (2 votes) · EA · GW

Thanks! Yes it is. All I had been doing was looking at that passport backlog, but I hadn't made a model based on it. It's discouraging to see so much probability mass on December, but not too surprising...

Comment by kokotajlod on Delegate a forecast · 2020-07-27T12:35:58.809Z · score: 3 (3 votes) · EA · GW

What is the probability that my baby daughter's US passport application will be rejected on account of inadequate photo?

Evidence: The photo looked acceptable to me but my wife, who thought a lot more about it, judged it to be overexposed. It wasn't quite as bad as the examples of overexposure given on the website, but in her opinion it was too close for comfort.

Evidence: The lady at the post office said the photo was fine, but she was rude to us and in a hurry. For example, she stapled it to our application and hustled us through the rest of the process and we were too shy and indecisive to stop her.

Comment by kokotajlod on Delegate a forecast · 2020-07-27T12:32:14.592Z · score: 3 (3 votes) · EA · GW

When will my daughter's passport arrive? (We are US citizens, applied by mail two weeks ago, application received last week)

Comment by kokotajlod on Delegate a forecast · 2020-07-26T18:54:42.204Z · score: 4 (4 votes) · EA · GW

When will there be an AI that can play random computer games from some very large and diverse set (say, a representative sample of Steam) that didn't appear in its training data, and do about as well as a casual human player trying the game for the first time?

Comment by kokotajlod on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T14:22:31.783Z · score: 4 (3 votes) · EA · GW

OK, thanks. Not sure I can pull it off -- that was just a toy example. Probably even my best arguments would have a smaller impact than a factor of three, at least when averaged across the whole community.

I agree with your explanation of the ways this would improve things... I guess I'm just concerned about opportunity costs.

Like, it seems to me that a tripling of credence in Sudden Emergence shouldn't change what people do by more than, say, 10%. When you factor in tractability, neglectedness, personal fit, doing things that are beneficial under both Sudden Emergence and non-Sudden Emergence, etc. a factor of 3 in the probability of sudden emergence probably won't change the bottom line for what 90% of people should be doing with their time. For example, I'm currently working on acausal trade stuff, and I think that if my credence in sudden emergence decreased by a factor of 3 I'd still keep doing what I'm doing.

Meanwhile, I could be working on AI safety directly, or I could be working on acausal trade stuff (which I think could plausibly lead to a more than 10% improvement in EA effort allocation. Or at least, more plausibly than working on Sudden Emergence, it seems to me right now).

I'm very uncertain about all this, of course.

Comment by kokotajlod on Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post · 2020-07-16T14:45:28.884Z · score: 3 (2 votes) · EA · GW

Thanks, I'll update the text when I get access to Metaculus again (I've blocked myself from it for productivity reasons lol)

Comment by kokotajlod on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-15T13:44:22.482Z · score: 9 (7 votes) · EA · GW

You say that there hasn't been much literature arguing for Sudden Emergence (the claim that AI progress will look more like the brain-in-a-box scenario than the gradual-distributed-progress scenario). I am interested in writing some things on the topic myself, but currently think it isn't decision-relevant enough to be worth prioritizing. Can you say more about the decision-relevance of this debate?

Toy example: Suppose I write something that triples everyone's credence in Sudden Emergence. How does that change what people do, in a way that makes the world better (or worse, depending on whether Sudden Emergence is true or not)?

Comment by kokotajlod on EA considerations regarding increasing political polarization · 2020-06-28T01:19:22.020Z · score: 3 (2 votes) · EA · GW

Yeah, what I meant was the second thing--I was responding to someone saying it was weird to bring up the Cultural Revolution; I was explaining why it was perfectly sensible to do so. I didn't say we shouldn't also talk about the Red Scare. Perhaps I misinterpreted the original comment, though -- maybe they were not so much saying it was weird to talk about the Cultural Revolution as that it was weird not to talk about the Red Scare, in which case I agree.

Comment by kokotajlod on EA considerations regarding increasing political polarization · 2020-06-26T11:20:32.258Z · score: 2 (4 votes) · EA · GW

I agree it's appropriate to compare to the Red Scare, and I wish people did that more. However, I was responding to a comment suggesting that it was inappropriate to compare to the Cultural Revolution. I think it should be compared to both; the Red Scare would be an example of a situation like this that didn't get worse, and the Cultural Revolution would be an example of one that did.

(As an aside, I don't know enough about the Red Scare to say whether it was worse or better than the current situation. Also, to say it's so unlikely that we'll reach the extreme scenario is premature; we need to get a dataset of similar situations and see what the base rate is. We know of at least a few "extreme" scenarios so they can't be that unlikely.)

Comment by kokotajlod on EA considerations regarding increasing political polarization · 2020-06-25T14:57:24.057Z · score: 9 (5 votes) · EA · GW

I agree that the current situation is like McCarthyism/Red Scare. The question is whether it will get worse; hence the comparisons to things which got worse.

Comment by kokotajlod on EA considerations regarding increasing political polarization · 2020-06-21T14:22:35.353Z · score: 25 (14 votes) · EA · GW
Since urban and rural areas rely critically on each other for resources, it is unlikely that an urban-rural war could be logistically feasible.

People keep saying this as an argument for why we won't have a civil war, but it seems pretty weak to me:

1. Logistical problems mean a war would end quickly, not that it would never happen at all. And a civil war that ends quickly would IMO be almost as bad as one that takes longer to end.

2. The previous US civil war was not an urban/rural divide. But plenty of modern civil wars are; it's pretty standard, in fact, for a central government controlling the major cities to wage war for several years against insurgents controlling much of the countryside.

As for the Cultural Revolution: as far as I can tell, it wasn't actually organized in a very top-down way. It was sparked and to some extent directed by revered leaders like Mao, but on numerous occasions even the leaders couldn't control the actions of the students. There were loads of cases of different factions of Red Guards fighting street battles with each other--not the sort of behavior you'd expect from a top-down movement!

What I'd like to learn about is the culture in China before the massacres began. Were people suspected of being rightists, counter-revolutionaries, landlords, etc. being deplatformed, harassed, fired, etc. prior to the massacres? Was there an uptick in this sort of thing in the years leading up to them?

Comment by kokotajlod on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-06-17T16:29:45.330Z · score: 13 (5 votes) · EA · GW

I think I agree that academic philosophy tends to have above-average openness norms--but note that academic philosophy has mostly lost them at this point, at least when it comes to topics related to SJ. I can provide examples of this on request; there are plenty to see on Daily Nous.

Comment by kokotajlod on Why Don’t We Use Chemical Weapons Anymore? · 2020-04-23T14:22:45.294Z · score: 5 (4 votes) · EA · GW

Great post. I think it focuses too much on the use of chemical weapons against enemy soldiers, however. IMO chemical weapons have almost always been thought of as terror weapons. For example, before WW2 it was feared that squadrons of bombers would drop chemical weapons all over European cities on Day 1 of the next war. Instead, they dropped propaganda leaflets and focused on military targets, and then gradually escalated to bombing and then firebombing cities.

True, civilian populations can be equipped with anti-chemical-weapon gear. But even so, my guess is that chemical bombs would have been effective terror weapons. Imagine if during the Blitz, instead of 100% conventional weapons, they had gone for an 80-20 mix of conventional and gas, with many of the gas weapons on timed release so that hours after the air raid was over the gas would start hissing out.

Another piece of evidence is that the Allies shipped huge amounts of chemical weapons to Italy during their invasion, presumably in case they needed them. (They didn't; in fact a German air raid accidentally set off chemical weapons and caused massive casualties. Quote: "From the start, Allied High Command tried to conceal the disaster, in case the Germans believed that the Allies were preparing to use chemical weapons, which might provoke them into preemptive use, but there were too many witnesses to keep the secret, and in February 1944, the U.S. Chiefs of Staff issued a statement admitting to the accident and emphasizing that the U.S. had no intention of using chemical weapons except in the case of retaliation.")

As for Stalin: Did the USSR have large chemical weapon stockpiles? Maybe they didn't. Maybe they figured their poorly equipped troops would fare worse in a chemical weapon fight than the Germans. (The Germans, meanwhile, perhaps thought that if they used chemical weapons against the Russians, the Brits and USA would retaliate against Germany.)

Epistemic status: Just presenting some pushback/counter-evidence. Not sure what to think, ultimately. The truth is probably a combination of both factors.

Comment by kokotajlod on Some thoughts on Toby Ord’s existential risk estimates · 2020-04-19T23:34:00.438Z · score: 2 (2 votes) · EA · GW

In general I think you've thought this through more carefully than me so without having read all your points I'm just gonna agree with you.

So yeah, I think the main problem with Tobias' original point was that unknown risks are probably mostly new things that haven't arisen yet and thus the lack of observed mini-versions of them is no evidence against them. But I still think it's also true that some risks just don't have mini-versions, or rather are as likely or more likely to have big versions than mini-versions. I agree that most risks are not like this, including some of the examples I reached for initially.

Comment by kokotajlod on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-04-17T16:01:01.349Z · score: 3 (2 votes) · EA · GW

Update: It turns out the book returns to the topic of Cortés at the end. It confirms what Wikipedia says, that smallpox arrived after Cortés had already killed the emperor and fled the city. I think it also exaggerates the role of smallpox even then, actually -- it makes it sound like Cortés' "first assault" on the city failed because the city was too strong, and then his "second assault" succeeded because the city was weakened by disease. But (a) his "first assault" was just him and his few hundred followers killing the emperor and escaping, and his "second assault" came after a long siege and involved 200,000 native warriors helping him plus additional Spaniards with siege weapons etc. Totally different things. And (b) smallpox didn't just strike Tenochtitlan, it hit everywhere, including Cortés' native allies. And (c) the final battle for Tenochtitlan was intense; he didn't exactly walk in over the corpses of smallpox-ridden defenders, he had to fight his way in against a gigantic army of determined defenders. So I still stand by my claim that disease had fairly little to do with Cortés' victory, even though 1493, a book I otherwise respect, says otherwise. (And by "fairly little" I mean "not so much that my conclusions in the post are undermined.")

Comment by kokotajlod on Some thoughts on Toby Ord’s existential risk estimates · 2020-04-17T15:53:59.697Z · score: 1 (1 votes) · EA · GW
Likewise, AI can arguably be seen as a continuation of past technological, intellectual, scientific, etc. progress in various ways. Of course, various trends might change in shape, speed up, etc. But so far they do seem to have mostly done so somewhat gradually, such that none of the developments would've been "ruled out" by expecting the future to looking roughly similar to the past or the past+extrapolation. (I'm not an expert on this, but I think this is roughly the conclusion AI Impacts is arriving at based on their research.)

I agree with all this and don't think it significantly undermines anything I said.

I think the community has indeed developed more diverse views over the years, but I still think the original take (as seen in Bostrom's Superintelligence) is the closest to the truth. The fact that the community has gotten more diverse can be easily explained as the result of it growing a lot bigger and having a lot more time to think. (Having a lot more time to think means more scenarios can be considered, more distinctions made, etc. More time for disagreements to arise and more time for those disagreements to seem like big deals when really they are fairly minor; the important things are mostly agreed on but not discussed anymore.) Or maybe you are right and this is evidence that Bostrom is wrong. Idk. But currently I think it is weak evidence, given the above.

Comment by kokotajlod on Some thoughts on Toby Ord’s existential risk estimates · 2020-04-17T15:46:23.718Z · score: 1 (1 votes) · EA · GW

Tobias' original point was " Also, if engineered pandemics, or "unforeseen" and "other" anthropogenic risks have a chance of 3% each of causing extinction, wouldn't you expect to see smaller versions of these risks (that kill, say, 10% of people, but don't result in extinction) much more frequently? But we don't observe that. "


Thus he is saying there aren't any "unknown" risks that do have common mini-versions but just haven't had time to develop yet. That's way too strong a claim, I think. Perhaps in my argument against this claim I ended up making claims that were also too strong. But I think my central point is still right: Tobias' argument rules out things arising in the future that clearly shouldn't be ruled out, because if we had run that argument in the past it would have ruled out various things (e.g. AI, nukes, physics risks, and come to think of it even asteroid strikes and pandemics if we go far enough back in the past) that in fact happened.

Comment by kokotajlod on Some thoughts on Toby Ord’s existential risk estimates · 2020-04-17T15:40:35.761Z · score: 1 (1 votes) · EA · GW

Yeah, in retrospect I really shouldn't have picked nukes and natural pandemics as my two examples. Natural pandemics do have common mini-versions, and nukes, well, the jury is still out on that one. (I think it could go either way. I think that nukes maybe can kill everyone, because the people who survive the initial blasts might die from various other causes, e.g. civilizational collapse or nuclear winter. But insofar as we think that isn't plausible, then yeah, killing 10% is way more likely than killing 100%. (I'm assuming we count killing 99% as killing 10% here?))

I think AI, climate change tail risks, physics risks, grey goo, etc. would be better examples for me to talk about.

Comment by kokotajlod on Some thoughts on Toby Ord’s existential risk estimates · 2020-04-17T01:29:22.344Z · score: 2 (2 votes) · EA · GW

I feel the need to clarify, by the way, that I'm being a bit overly aggressive in my tone here and I apologize for that. I think I was writing quickly and didn't realize how I came across. I think you are making good points and have been upvoting them even as I disagree with them.

Comment by kokotajlod on Some thoughts on Toby Ord’s existential risk estimates · 2020-04-17T01:27:27.671Z · score: 6 (3 votes) · EA · GW

I think there are some risks which have "common mini-versions," to coin a phrase, and others which don't. Asteroids have mini-versions (10%-killer-versions), and depending on how common they are the 10%-killers might be more likely than the 100%-killers, or vice versa. I actually don't know which is more likely in that case.

AI risk is the sort of thing that doesn't have common mini-versions, I think. An AI with the means and motive to kill 10% of humanity probably also has the means and motive to kill 100%.

Natural pandemics DO have common mini-versions, as you point out.

It's less clear with engineered pandemics. That depends on how easy they are to engineer to kill everyone vs. how easy they are to engineer to kill not-everyone-but-at-least-10%, and it depends on how motivated various potential engineers are.

Accidental physics risks (like igniting the atmosphere, creating a false vacuum collapse or black hole or something with a particle collider) are way more likely to kill 100% of humanity than 10%. They do not have common mini-versions.

So what about unknown risks? Well, we don't know. But from the track record of known risks, it seems that probably there are many diverse unknown risks, and so probably at least a few of them do not have common mini-versions.

And by the argument you just gave, the "unknown" risks that have common mini-versions won't actually be unknown, since we'll see their mini-versions. So "unknown" risks are going to be disproportionately the kind of risk that doesn't have common mini-versions.

...

As for what I meant about making the exact same argument in the past: I was just saying that we've discovered various risks that don't have common mini-versions, which at one point were unknown and then became known. Your argument basically rules out discovering such things ever again. Had we listened to your argument before learning about AI, for example, we would have concluded that AI was impossible, or that somehow AIs which have the means and motive to kill 10% of people are more likely than AIs which pose existential threats.

Comment by kokotajlod on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-04-16T14:48:36.664Z · score: 8 (3 votes) · EA · GW

I'm reading it now; it is indeed a very good book. I don't think it supports the claim that disease hit the Aztecs before Cortés arrived--it makes a brief one-sentence claim to that effect, but other sources (e.g. Wikipedia) say the opposite, and give more details (e.g. they say smallpox arrived with the expedition sent to capture Cortés). And of course there's still Afonso.

Comment by kokotajlod on Some thoughts on Toby Ord’s existential risk estimates · 2020-04-16T14:45:21.663Z · score: 2 (2 votes) · EA · GW

Yeah I take back what I said about it being substantially less likely, that seems wrong.

Comment by kokotajlod on Some thoughts on Toby Ord’s existential risk estimates · 2020-04-15T14:55:42.662Z · score: 12 (5 votes) · EA · GW
Also, if engineered pandemics, or "unforeseen" and "other" anthropogenic risks have a chance of 3% each of causing extinction, wouldn't you expect to see smaller versions of these risks (that kill, say, 10% of people, but don't result in extinction) much more frequently? But we don't observe that.

I don't think so. I think a reasonable prior on this sort of thing would have killing 10% of people not much more likely than killing 100% of people, and actually IMO it should be substantially less likely. (Consider a distribution over asteroid impacts. The range of asteroids that kill around 10% of humans is a narrow band of asteroid sizes, whereas any asteroid above a certain size will kill 100%.)
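
To make the asteroid intuition concrete, here is a purely illustrative sketch; the power-law exponent and size thresholds are my own placeholder numbers, not anything from the post or the comment. It just shows how, under a Pareto-like size distribution, the narrow band of sizes that kills ~10% of people can carry less probability mass than the open-ended tail that kills ~100%.

```python
# Illustrative sketch only: all constants below are hypothetical placeholders.
ALPHA = 2.0        # assumed power-law exponent: P(diameter > d) ~ d**(-ALPHA)
D_MIN = 1.0        # km; smallest impactor we condition on (globally significant)
BAND = (3.0, 4.0)  # km; hypothetical narrow band that kills ~10% of people
D_EXTINCT = 4.0    # km; hypothetical threshold above which ~100% die

def p_exceed(d: float) -> float:
    """P(diameter > d | diameter > D_MIN) under the assumed power law."""
    return (D_MIN / d) ** ALPHA

p_ten_percent = p_exceed(BAND[0]) - p_exceed(BAND[1])  # mass in the narrow band
p_hundred_percent = p_exceed(D_EXTINCT)                # mass in the open tail

print(f"P(~10%-killer band): {p_ten_percent:.3f}")
print(f"P(~100%-killer tail): {p_hundred_percent:.3f}")
# With these placeholder numbers the 100%-killer tail holds more probability
# mass than the narrow 10%-killer band, which is the shape of the point above.
```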

Moreover, if you think 10% is much more likely than 100%, you should think 1% is much more likely than 10%, and so on. But we don't in fact see lots of unknown risks killing even 0.1% of the population. So that means the probability of x-risk from unknown causes according to you must be really really really small. But that's just implausible. You could have made the exact same argument in 1917, in 1944, etc. and you would have been wildly wrong.

Comment by kokotajlod on Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism · 2020-03-21T16:57:38.765Z · score: 4 (3 votes) · EA · GW

Anyhow, thanks for the consideration. Yeah, maybe I'll write a blog post on the subject someday.

Comment by kokotajlod on Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism · 2020-03-20T10:28:35.864Z · score: 1 (1 votes) · EA · GW

My nitpick was not about the nonexistence stuff, it was about hurting and killing people.

Comment by kokotajlod on [deleted post] 2020-03-17T16:09:55.653Z

Pretty much no. There are various motivations simulators might have for simulating civilizations like ours; on some of them, interestingness matters; yes yes yes. But I don't think COVID-19 or Trump are more interesting in the relevant sense. They might be, but I think we basically have no idea at this early stage.

"Interesting" in the relevant sense means roughly "Lots of branching possibilities from this point, compared to how many possibilities branch off from other points." So suppose Clinton won instead of Trump. Would there be fewer possible futures branching off? Eh, maybe. I doubt it.

Moreover there's a bias we should try to correct for, which is the bias to see significance and importance in whatever everyone is talking about these days.

Comment by kokotajlod on Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism · 2020-03-13T23:28:51.833Z · score: 11 (10 votes) · EA · GW

Would you be interested in having a section on the website that is basically "Ways to be an EA while not being a utilitarian?" I say this as someone who is very committed to EA but very against utilitarianism. Fair enough if the answer is no, but if the answer is yes, I'd be happy to help out with drafting the section.


Nitpick: This quote here seems wrong/misleading: "What matters most for utilitarianism is bringing about the best consequences for the world. This involves improving the wellbeing of all individuals, regardless of their gender, race, species, and their geographical or temporal location."

What do you mean by "this involves"? If you mean "this always involves," it is obviously false. If you mean "this typically involves," then it might be true, but I am pretty sure I could convince you it is false also. For example, very often more utility will be created if you abandon some people--even some entire groups of people--as lost causes and focus on creating more happy people instead. Most importantly, if you mean "for us today, it typically involves," it is also false, because creating a hedonium shockwave dramatically decreases the wellbeing of most individuals on Earth, at least for a short period before they die. :P


(You may be able to tell from the above some of the reasons why I think utilitarianism is wrong!)

Comment by kokotajlod on Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism · 2020-03-13T23:01:11.015Z · score: 2 (2 votes) · EA · GW

Harsanyi's version also came first IIRC, and Rawls read it before he wrote his version. (Edit: Oh yeah you already said this)