Posts

Soares, Tallinn, and Yudkowsky discuss AGI cognition 2021-11-29T17:28:19.739Z
Christiano, Cotra, and Yudkowsky on AI progress 2021-11-25T16:30:52.594Z
Yudkowsky and Christiano discuss "Takeoff Speeds" 2021-11-22T19:42:59.014Z
Ngo and Yudkowsky on AI capability gains 2021-11-19T01:54:56.512Z
Ngo and Yudkowsky on alignment difficulty 2021-11-15T22:47:46.125Z
Discussion with Eliezer Yudkowsky on AGI interventions 2021-11-11T03:21:50.685Z
Status Regulation and Anxious Underconfidence 2017-11-16T21:52:19.366Z
Against Modest Epistemology 2017-11-14T21:26:48.198Z
Blind Empiricism 2017-11-12T22:23:47.083Z
Living in an Inadequate World 2017-11-09T21:47:27.193Z
Moloch's Toolbox (2/2) 2017-11-06T21:34:51.158Z
Moloch's Toolbox (1/2) 2017-11-04T21:47:50.825Z
An Equilibrium of No Free Energy 2017-10-31T22:25:02.739Z
Inadequacy and Modesty 2017-10-28T22:02:31.066Z

Comments

Comment by EliezerYudkowsky on FTX EA Fellowships · 2021-11-20T01:36:15.811Z · EA · GW

If I ended up spending some time in the Bahamas this year, do you have a guess as to when would be the optimal time for that?

Comment by EliezerYudkowsky on List of EA funding opportunities · 2021-10-27T18:45:28.197Z · EA · GW

Can you put something on here to the effect of: "Eliezer Yudkowsky continues to claim that anybody who comes to him with a really good AGI alignment idea can and will be funded."

Comment by EliezerYudkowsky on Towards a Weaker Longtermism · 2021-09-23T20:27:06.273Z · EA · GW

It strikes me as a fine internal bargain for some nonhuman but human-adjacent species; I would not expect the internal parts of a human to be able to abide well by that bargain.

Comment by EliezerYudkowsky on Towards a Weaker Longtermism · 2021-08-08T18:21:03.458Z · EA · GW

There’s nothing convoluted about it! We just observe that historical experience shows that the supposed benefits never actually appear, leaving just the atrocity! That’s it! That’s the actual reason you know the real result would be net bad and therefore you need to find a reason to argue against it! If historically it worked great and exactly as promised every time, you would have different heuristics about it now!

Comment by EliezerYudkowsky on Towards a Weaker Longtermism · 2021-08-08T16:26:36.380Z · EA · GW

The final conclusion here strikes me as just the sort of conclusion that you might arrive at as your real bottom line, if in fact you had arrived at an inner equilibrium between some inner parts of you that enjoy doing something other than longtermism, and your longtermist parts.  This inner equilibrium, in my opinion, is fine; and in fact, it is so fine that we ought not to need to search desperately for a utilitarian defense of it.  It is wildly unlikely that our utilitarian parts ought to arrive at the conclusion that the present weighs about 50% as much as our long-term future, or 25% or 75%; it is, on the other hand, entirely reasonable that the balance of what our inner parts vote on will end up that way.  I am broadly fine with people devoting 50%, 25% or 75% of themselves to longtermism, in that case, as opposed to tearing themselves apart with guilt and ending up doing nothing much, which seems to be the main alternative.  But you're just not going to end up with a utilitarian defense of that bottom line; if the future can matter at all, to the parts of us that care abstractly and according to numbers, it's going to end up mattering much more than the present; equivalently, any rationalization that can avert this, like exponential discounting, is going to imply that it is better to eat an ice cream today and destroy a galaxy of happy sapient beings in ten million years.  This is crazy, and I think it makes a lot more sense to just admit that part of you cares about galaxies and part of you cares about ice cream and say that neither of these parts is going to be suppressed and beaten down inside you.
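
To make the exponential-discounting point concrete, here is a rough back-of-the-envelope sketch. The 0.1%-per-year rate and the code below are illustrative assumptions, not anything from the comment itself; the point is only that any constant annual discount rate drives the present value of events ten million years out to effectively zero.

```python
import math

# Illustrative numbers only: even a tiny annual discount rate makes anything
# ten million years away count for essentially nothing today.
annual_discount_rate = 0.001      # assumed 0.1% per year, chosen for illustration
years = 10_000_000                # "ten million years"

# Work in log space, since 0.999 ** 10_000_000 underflows an ordinary float.
log10_discount_factor = years * math.log10(1 - annual_discount_rate)
print(f"discount factor ~ 10^{log10_discount_factor:.0f}")
# -> roughly 10^-4345, so any finite value assigned to the far-future galaxy
#    is outweighed by an ice cream assigned any nonzero value today.
```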

I think this is what actually yields the appeal of "regular longtermism", and since that's what actually produces the bottom line, I think that what produces this bottom line should just be directly called the justification for it - there's no point in reaching for a different argument for justification than for conclusion-production.

Comment by EliezerYudkowsky on Towards a Weaker Longtermism · 2021-08-08T14:39:14.843Z · EA · GW

The reason we have a deontological taboo against “let’s commit atrocities for a brighter tomorrow” is not that people have repeatedly done this, it worked exactly like they said it would, and millions of people received better lives in exchange for thousands of people dying unpleasant deaths exactly as promised.

The reason we have this deontological taboo is that the atrocities almost never work to produce the promised benefits. Period. That’s it. That’s why we normatively should have a taboo like that.

(And as always in a case like that, we have historical exceptions that people don't like to talk about because they worked, e.g., Knut Haukelid, or the American Revolution. And these examples are distinguished, among other factors, by a found mood (the opposite of a missing mood) which doesn't happily jump on the controversy wagon for controversy points, nor gain power and benefit from the atrocity; but quietly and regretfully kills the innocent night watchman who helped you, to prevent the much, much larger issue of Nazis getting nuclear weapons.)

This logic applies without any obvious changes to “let’s commit atrocities in pursuit of a brighter tomorrow a million years away” just like it applies to “let’s commit atrocities in pursuit of a brighter tomorrow in 2 years”. Literally any nice thing somebody says you could get would “justify atrocities”, in exactly the same way, if you forgot this rule. If you admit the existence of thousands of American schoolchildren getting suboptimally nutritious lunches, it could, oh no, justify abducting and torturing businessmen into using their ATM cards so you could get more money for the schoolchildren. Obviously then those children must not exist, or maybe they don’t have qualia so their suffering won’t be important, because if they existed and mattered that could justify atrocities, couldn’t it?

There is nothing special about longtermism compared to any other big desideratum in this regard. It is 100% unjustified special attention because people don’t like the desideratum itself. The same way that people ask “How can we spend money on AI safety when children are starving now?” but their mind doesn’t make the same leap about “How can we spend money on fighting global warming when children are starving now?” or say “Hey maybe we should critique total spending on lipstick advertising before we critique spending on rockets.”

As always, transhumanism done correctly is just humanism.

Comment by EliezerYudkowsky on Taboo "Outside View" · 2021-06-17T17:23:55.491Z · EA · GW

I worriedly predict that anyone who followed your advice here would just switch to describing whatever they're doing as "reference class forecasting", since this captures the key dynamic that makes describing what they're doing as "outside viewing" appealing: namely, they get to pick a choice of "reference class" whose samples yield the answer they want, claim that their point is in the reference class, and then claim that what they're doing is what superforecasters do and what Philip Tetlock told them to do and is super epistemically virtuous, and that anyone who argues with them gets all the burden of proof and is probably a bad person, but we get to virtuously listen to them and then reject them for having used the "inside view".

My own take:  Rule One of invoking "the outside view" or "reference class forecasting" is that if a point is more dissimilar to examples in your choice of "reference class" than the examples in the "reference class" are dissimilar to each other, what you're doing is "analogy", not "outside viewing".

All those experimental results on people doing well by using the outside view are results on people drawing a new sample from the same bag as previous samples.  Not "arguably the same bag" or "well it's the same bag if you look at this way", really actually the same bag: how late you'll be getting Christmas presents this year, based on how late you were in previous years.  Superforecasters doing well by extrapolating are extrapolating a time-series over 20 years, which was a straight line over those 20 years, to another 5 years out along the same line with the same error bars, and then using that as the baseline for further adjustments with due epistemic humility about how sometimes straight lines just get interrupted some year.  Not by them picking a class of 5 "relevant" historical events that all had the same outcome, and arguing that some 6th historical event goes in the same class and will have that same outcome.
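
For concreteness, here is a minimal sketch, with invented numbers, of the kind of extrapolation described above: fit a straight line to 20 years of samples from the same process, extend it 5 years, and carry the historical scatter forward as the error bars on the baseline forecast. This is an illustration of the pattern, not a claim about what any particular superforecaster computes.

```python
import numpy as np

# Invented data standing in for "a time series from the same bag":
# 20 years of observations that follow a roughly straight line.
rng = np.random.default_rng(0)
years = np.arange(2000, 2020)
values = 3.0 * (years - 2000) + 50 + rng.normal(0, 2.0, size=years.size)

# Fit the straight line and measure the historical scatter around it.
slope, intercept = np.polyfit(years, values, deg=1)
residual_sd = np.std(values - (slope * years + intercept))

# Extend the same line 5 years out, with the same error bars as the history.
future = np.arange(2020, 2025)
forecast = slope * future + intercept
for y, f in zip(future, forecast):
    print(f"{y}: {f:.1f} +/- {residual_sd:.1f}")
```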

Comment by EliezerYudkowsky on Two Strange Things About AI Safety Policy · 2016-09-28T22:10:28.188Z · EA · GW

The idea of running an event in particular seems misguided. Conventions come after conversations. Real progress toward understanding, or conveying understanding, does not happen through speakers going On Stage at big events. If speakers On Stage ever say anything sensible, it's because an edifice of knowledge was built in the background out of people having real, engaged, and constructive arguments with each other, in private where constructive conversations can actually happen, and the speaker On Stage is quoting from that edifice.

(This is also true of journal publications about anything strategic-ish - most journal publications about AI alignment come from the void and are shouting into the void, neither aware of past work nor feeling obliged to engage with any criticism. Lesser (or greater) versions of this phenomenon occur in many fields; part of where the great replication crisis comes from is that people can go on citing refuted studies and nothing embarrassing happens to them, because god forbid there be a real comments section or an email reply that goes out to the whole mailing list.)

If there's something to be gained from having national-security higher-ups understanding the AGI alignment strategic landscape, or from having alignment people understand the national security landscape, then put Nate Soares in a room with somebody in national security who has a computer science background, and let them have a real conversation. Until that real progress has already been made in in-person conversations happening in the background where people are actually trying to say sensible things and justify their reasoning to one another, having a Big Event with people On Stage is just a giant opportunity for a bunch of people new to the problem to spout out whatever errors they thought up in the first five seconds of thinking, neither aware of past work nor expecting to engage with detailed criticism, words coming from the void and falling into the void. This seems net counterproductive.

Comment by EliezerYudkowsky on The history of the term 'effective altruism' · 2015-05-31T05:02:02.655Z · EA · GW

There are only so many things you can call it, and accidental namespace collisions / phrase reinventions aren't surprising. I was surprised when I looked back myself and noticed the phrase was there, so it would be more surprising if Toby Ord remembered it than if he didn't. I'm proud to have used the term "effective altruist" once in 2007, but to say that this means I coined the term, especially when it was re-output by the more careful process described above, might be giving me too much credit. Still, it's nice to have this not-quite-coincidental mention be remembered, so thank you for that!