Status Regulation and Anxious Underconfidence 2017-11-16T21:52:19.366Z
Against Modest Epistemology 2017-11-14T21:26:48.198Z
Blind Empiricism 2017-11-12T22:23:47.083Z
Living in an Inadequate World 2017-11-09T21:47:27.193Z
Moloch's Toolbox (2/2) 2017-11-06T21:34:51.158Z
Moloch's Toolbox (1/2) 2017-11-04T21:47:50.825Z
An Equilibrium of No Free Energy 2017-10-31T22:25:02.739Z
Inadequacy and Modesty 2017-10-28T22:02:31.066Z


Comment by EliezerYudkowsky on Two Strange Things About AI Safety Policy · 2016-09-28T22:10:28.188Z

The idea of running an event in particular seems misguided. Conventions come after conversations. Real progress toward understanding, or conveying understanding, does not happen through speakers going On Stage at big events. If speakers On Stage ever say anything sensible, it's because an edifice of knowledge was built in the background out of people having real, engaged, and constructive arguments with each other, in private where constructive conversations can actually happen, and the speaker On Stage is quoting from that edifice.

(This is also true of journal publications about anything strategic-ish - most journal publications about AI alignment come from the void and are shouting into the void, neither aware of past work nor feeling obliged to engage with any criticism. Lesser (or greater) versions of this phenomenon occur in many fields; part of where the great replication crisis comes from is that people can go on citing refuted studies and nothing embarrassing happens to them, because god forbid there be a real comments section or an email reply that goes out to the whole mailing list.)

If there's something to be gained from having national-security higher-ups understand the AGI alignment strategic landscape, or from having alignment people understand the national-security landscape, then put Nate Soares in a room with somebody in national security who has a computer science background, and let them have a real conversation. Until that real progress has been made in in-person conversations happening in the background, where people are actually trying to say sensible things and justify their reasoning to one another, having a Big Event with people On Stage is just a giant opportunity for a bunch of people new to the problem to spout whatever errors they thought up in the first five seconds of thinking, neither aware of past work nor expecting to engage with detailed criticism, words coming from the void and falling into the void. This seems net counterproductive.

Comment by EliezerYudkowsky on The history of the term 'effective altruism' · 2015-05-31T05:02:02.655Z

There are only so many things you can call it, so accidental namespace collisions / phrase reinventions aren't surprising. I was surprised when I looked back myself and noticed the phrase was there, so it would be more surprising if Toby Ord remembered it than if he didn't. I'm proud to have used the term "effective altruist" once in 2007, but to say that this means I coined the term, especially when it was re-output by the more careful process described above, might be giving me too much credit. Still, it's nice to have this not-quite-coincidental mention be remembered, so thank you for that!