Alexander and Yudkowsky on AGI goals 2023-01-31T23:36:21.486Z
Who's at fault for FTX's wrongdoing 2022-11-16T04:47:59.767Z
An Important Lesson of the FTX Implosion... 2022-11-15T02:59:41.262Z
IMPCO, don't injure yourself by returning FTXFF money for services you already provided 2022-11-12T04:51:28.080Z
The bottom line 2022-07-27T23:13:34.812Z
AGI Ruin: A List of Lethalities 2022-06-06T23:28:39.006Z
Shah and Yudkowsky on alignment failures 2022-02-28T19:25:12.896Z
Christiano and Yudkowsky on AI predictions and human intelligence 2022-02-23T16:51:08.221Z
Ngo and Yudkowsky on scientific reasoning and pivotal acts 2022-02-21T17:00:01.453Z
Ngo's view on alignment difficulty 2021-12-14T19:03:07.377Z
Conversation on technology forecasting and gradualism 2021-12-09T19:00:00.000Z
More Christiano, Cotra, and Yudkowsky on AI progress 2021-12-06T20:34:07.106Z
Shulman and Yudkowsky on AI progress 2021-12-04T11:37:23.279Z
Biology-Inspired AGI Timelines: The Trick That Never Works 2021-12-01T22:44:32.203Z
Soares, Tallinn, and Yudkowsky discuss AGI cognition 2021-11-29T17:28:19.739Z
Christiano, Cotra, and Yudkowsky on AI progress 2021-11-25T16:30:52.594Z
Yudkowsky and Christiano discuss "Takeoff Speeds" 2021-11-22T19:42:59.014Z
Ngo and Yudkowsky on AI capability gains 2021-11-19T01:54:56.512Z
Ngo and Yudkowsky on alignment difficulty 2021-11-15T22:47:46.125Z
Discussion with Eliezer Yudkowsky on AGI interventions 2021-11-11T03:21:50.685Z
Purchase fuzzies and utilons separately 2019-12-27T02:21:19.723Z
Status Regulation and Anxious Underconfidence 2017-11-16T21:52:19.366Z
Against Modest Epistemology 2017-11-14T21:26:48.198Z
Blind Empiricism 2017-11-12T22:23:47.083Z
Living in an Inadequate World 2017-11-09T21:47:27.193Z
Moloch's Toolbox (2/2) 2017-11-06T21:34:51.158Z
Moloch's Toolbox (1/2) 2017-11-04T21:47:50.825Z
An Equilibrium of No Free Energy 2017-10-31T22:25:02.739Z
Inadequacy and Modesty 2017-10-28T22:02:31.066Z
Making beliefs pay rent 2015-06-16T23:00:00.000Z
What is evidence? 2007-09-22T04:09:00.000Z


Comment by EliezerYudkowsky on Abuse in LessWrong and rationalist communities in Bloomberg News · 2023-03-14T13:15:25.409Z · EA · GW

FYI:  IIRC/IIUC, Bryk is the one who made up the thing about my having a harem of submissive mathematicians whom I called my "math pets".   This is false; people sufficiently connected within the community will know that it is false, not least since it'd be widely known and I wouldn't have denied it if it were true.  I am not sure what can simply be done about it, if someone's own epistemic location is such that my statements are unknowable to them as being true.

It is known to me that Bryk has gone on repeating the "math pets" allegation, including to journalists, long after it should've been clear to her that it was not true.

My own understanding of proper procedure subsequent to this would be to treat Bryk as somebody having made a known false allegation, especially since I don't know of any corresponding later-verified/known-true allegations that she was first to bring forth; and that this implies we ought to cross everything alleged by Bryk off any such lists, unless there's independent witnesses for it, in which case we can consider those witnesses and also reconsider the future degree to which Bryk ought to (not) be considered as an evidential source.

(If I am recalling correctly that Jax started the "math pets" thing.)

Comment by EliezerYudkowsky on Abuse in LessWrong and rationalist communities in Bloomberg News · 2023-03-14T13:10:47.561Z · EA · GW

IIRC, Jax is the one who made up the "math pets" allegation against me, which hopefully everyone knows to be false.  I don't know anything about the state of the rest of the allegations against Michael, but if I'm recalling correctly that Jax is that particular known-false-accuser, we probably want to subtract anything from Jax and then evaluate the rest of the list.

Comment by EliezerYudkowsky on People Will Sometimes Just Lie About You · 2023-02-19T01:05:09.778Z · EA · GW

The usual argument, which I think is entirely valid, and has been delivered by famouser and more famously reputable people if you don't want to trust me about it, was named the "Gell-Mann Amnesia Effect" by Michael Crichton.  Find something that you are really, truly an expert on.  Find an article in TIME Magazine about it.  Really take note of everything they get wrong.  Try finding somebody who isn't an expert and see what their takeaways from the article were - what picture of reality they derive without your own expertise to guide them in interpretation.

Then go find what you think is a pretty average blog post by an Internet expert on the same topic.

It is, alas, not something you can condense into a single webpage, because everybody has their own area of really solid expertise, even if it's something like "the history of Star Trek TOS" because their day job doesn't lead them into the same level of enthusiasm.  Maybe somebody should put together a set of three comparisons like that, from three different fields - but then the skeptics could worry it was all cherry-picked unusual bad examples, even if it hadn't been.

I will note that I do think that the great scientists of recent past generations have earned more of our respect than internationally famous journalistic publications, and those scientists did not speak kindly of journalistic coverage of science - and that was before the era of clickbait, back when the likes of the New York Times kept to notably higher editorial standards.

I think you can talk to any famous respectable person in private, and ask them if there should be a great burden of skepticism about insinuating that a "major international publication" like TIME Magazine might be skewing the truth the way that Aella describes, and the famous respectable person (if they are willing to answer you at all) will tell you that you should not hold that much trust towards TIME Magazine.

Comment by EliezerYudkowsky on People Will Sometimes Just Lie About You · 2023-02-19T00:41:15.183Z · EA · GW

I'd absolutely bring the same kind of skepticism.  I would refuse to read a TIME expose of supposed abuses within LDS, because I would expect it to take way too much work to figure out what kind of remote reality would lie behind the epistemic abuses that I'd expect TIME (or the New York Times or whoever) would devise.  If I thought I needed to know about it, I would poke around online until I found an essay written by somebody who sounded careful and evenhanded and didn't use language like journalists use, and there would then be a possibility that I was reading something with a near enough relation to reality that I could end up closer to the truth after having tried to do my own mental corrections.

I want to be very clear that this is not my condescending advice to Other People who I think are stupider than I am.  I think that I am not able to read coverage in the New York Times and successfully update in a more truthward direction, after compensating for what I think their biasing procedures are.  I think I just can't figure out the truth from that.  I don't think I'm that smart.  I avoid clicking through, and if it's an important matter I try to find a writeup elsewhere instead.

Comment by EliezerYudkowsky on People Will Sometimes Just Lie About You · 2023-02-19T00:35:16.575Z · EA · GW

I've had worse experiences with coverage from professional journalists than I have from random bloggers.  My standard reply to a journalist who contacts me by email is "If you have something you actually want to know or understand, I will answer off-the-record; I am not providing any on-the-record quotes after past bad experiences."  Few ever follow up with actual questions.

A sincere-seeming online person with a blog can, any time they choose to, quote you accurately and in context, talk about the nuance, and just generally be truthful.  Professional journalists exist in a much stranger context that would require much longer than this comment to describe.

Comment by EliezerYudkowsky on People Will Sometimes Just Lie About You · 2023-02-18T18:15:40.625Z · EA · GW

I mean the human tendency, not the EA tendency.  TIME does it because it's effective on their usual audience.  EAs, evidently, have not risen above that.

If you think there's an actual problem, I think the correct avenue is doing a real investigation and a real writeup.  Trying to "steelman" a media version of it, that is going to be incredibly and deliberately warped, adversarially targeted at exploiting the audience's underestimate of its warping by experienced adversaries, strikes me as a very wrong move.  And it's just legit hard to convey how very wrong of a move it is, if you've never been the subject of that kind of media misrepresentation in your personal direct experience, because you really do underestimate how bad it is until then.  Aella did.  I did.

Comment by EliezerYudkowsky on People Will Sometimes Just Lie About You · 2023-02-18T17:43:15.317Z · EA · GW

I also attest that Aella is, if anything, severely underconveying the extent to which this central thesis is true.   It's really really hard to convey until you've lived that experience yourself.  I also don't know how to convey this to people who haven't lived through it.  My experience was also of having been warned about it, but not having integrated the warnings or really actually understood how bad the misrepresentation actually was in practice, until I lived through it. 

Comment by EliezerYudkowsky on People Will Sometimes Just Lie About You · 2023-02-18T17:38:13.718Z · EA · GW

Trying to "steelman" the work of an experienced adversary who relies on, and is exploiting, your tendency to undercompensate and not realize how distorted these things actually are - which is the practical, hard-earned knowledge that Aella is trying to propagate - seems like a mistake.

(Actually, trying to "steelman" is a mistake in general and you should focus on passing Ideological Turing Tests instead, but that's a much longer conversation.)

Comment by EliezerYudkowsky on AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years · 2023-01-15T23:29:53.477Z · EA · GW

Numerous people on rationalityTwitter called it way before Feb 20th, and some of those bought put options and made big profits.  This must be some interesting new take on "rational expectations".

Comment by EliezerYudkowsky on AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years · 2023-01-14T22:03:22.922Z · EA · GW

Not only have I never heard this before, I was there and remember watching this not happen.  Source?

Comment by EliezerYudkowsky on AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years · 2023-01-14T22:01:56.760Z · EA · GW

Those look like nominal rates, not real rates. 

Comment by EliezerYudkowsky on AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years · 2023-01-13T14:08:44.920Z · EA · GW

Not until timelines are even blatantly shorter than present and long-term loans are on offer, and not unless there's something useful to actually do with the money.

Comment by EliezerYudkowsky on AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years · 2023-01-13T14:07:58.649Z · EA · GW

Corrected, thanks.

Comment by EliezerYudkowsky on AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years · 2023-01-12T13:44:12.558Z · EA · GW

Trying again:

OP seems to ambiguate between two ideas: one true idea, and one false idea.

The true idea is that if Omega tells you personally that the world will end in 2030 with probability 1, you personally should not bother saving for retirement.  Call this the Personal Idea.

The false idea is that if you believe in foomdoom, you should go long real interest rates and expect a market profit.  Call this the Market Idea.

Intuitively, at least if you're swayed by this essay, the idea in Market probably seems pretty close to the idea in Personal.  If everybody started consuming for today and investing less, real interest rates would go up, right?  So if you don't believe that Market is about as strong as Personal, what invalid reasoning step occurs within the gap between the true premise in Personal and the false conclusion in Market?

Is it invalid that if in 2025 everyone started believing that the world would end in 2030 with probability 1, real interest rates would rise in 2025?  Honestly, I'm not even sure of that in real life.  People are arguing clever ideas like 'Shouldn't everyone take out big loans due later?' but maybe the lender doesn't want to lend anymore, if everyone knows that.  There's a supply collapse and a demand collapse and yes I see the theoretical argument but real-world monetary stuff is in fact really strange and complicated; I didn't see anybody calling the actual interest-rate trajectory surrounding Covid in advance of it actually playing out.

In real life, what zaps you when you think there's a worldwide pandemic coming and try to trade interest rates, isn't that you didn't know about the pandemic ahead of the oblivious market, it's that you guessed wrong about what the market would really actually do in real life as the pandemic played out and finally ended.

You can sometimes make a profit off an oblivious market, if you guess narrowly enough at reactions that are much more strongly determined.  Wei Dai reports making huge profits on the Covid trade against the oblivious market there, after correctly guessing that people soon noticing Covid would at least increase expected volatility and the price of downside put options would go up.

But I don't think anybody called "there will be a huge market drop followed by an even steeper market ascent, after the Fed's delayed reaction to a huge drop in TIPS-predicted inflation, and people staying home and saving and trading stocks, followed by skyrocketing inflation later."

I don't think the style of reasoning used in the OP is the kind of thing that works reliably.  It's doing the equivalent of trying to predict something harder than 'once the pandemic really hits, people in the short term will notice and expect more volatility and option prices will go up'; it's trying to do the equivalent of predicting the market reaction to the Fed reaction a couple of years later.

The entire OP is written as if we lived in an alternate universe where it is way, way easier than history has actually shown to figure out what happens in broad markets after any sort of complicated or nontrivial event occurs in real life that isn't on the order of "unexpected Fed rate hike" or "company reports much higher-than-expected profits".  And it's written in such a way as to mislead EAs reading it about the general confidence that the field of economics is able to justly put in predictions about broad market reactions to strange things.

If you haven't already looked at OP's recommended investment instrument of (short) LTPZ, which holds inflation-protected Treasuries and is therefore their recommended way of tracking real interest rates, I recommend the following exercise:  First try to figure out what you would have believed a priori, without benefit of hindsight, real interest rates would do over the course of a pandemic.  Then, decide what investment strategy you'd have followed with LTPZ if you thought you knew about a pandemic ahead of the market.  Then, decide what you think happened to real interest rates with benefit of hindsight.   Then, go look at the actual price trajectory of their recommended instrument of LTPZ.

I am not sure I can properly convey this thought that I am trying to convey; I have had trouble actually conveying this thought to EAs before.  The thought is that people often do long careful serious-sounding writeups which EAs then take Very Seriously, because they are so long and so seriously argued, but in fact fail to bind to reality entirely, in a way that doesn't have to do with the details of the complicated arguments.  Very serious arguments about what ought to happen to the price of an ETF that tracks 15-year TIPS, via the intermediate step of arguing about what logically ought to happen to real interest rates, are the sort of thing that, historically, average economists have not really been able to pull off; it's a kind of thought that you should expect fails to achieve basic binding to reality.  What would LTPZ or its post-facto equivalent have been doing around the time of the Cuban Missile Crisis?  My model says 'no prediction'; they'll have done whatever.  Afterwards somebody will make up a story about it in hindsight, but it is not the sort of thing where history says that long complicated analyses are remotely reliably good at doing it in advance.

But there are even weaker links in the argument, so let's accept the LTPZ step arguendo and pass on.

An even bigger problem is that, since everybody is going to die before anything really pays out, the marginal foresightful trader does not have a strong incentive to early-on move the market toward where the market would end up in equilibrium after everyone agreed on the actual facts of the matter and had time to trade about them.

Prediction markets, I sometimes explain to people, are tools for transmitting future observables, or more generally propositions that people expect to publicly agree on at some future point even if they don't agree now, lossily backward in time, manifesting as well-calibrated probability distributions.

To run a prediction market, you first and foremost need a future publicly observable measurement, which is a special case of a place where we expect most people to agree on an extreme probability assignment later, even though they don't agree now or don't make extreme probability assignments now.  You cannot run a prediction market on whether supplementing lots of potassium can produce weight loss; you can only run a prediction market about what an experiment will report in the way of results, or what a particular judge will say the experimental evidence seems to have indicated in five years.  You cannot directly observe "whether potassium causes weight loss" as an underlying fact of biology, so you can't have a prediction market about that; you can only observe what somebody reports as an experimental result, or what a particular person appointed as judge says out loud about the state of evidence later.
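The settlement logic above can be sketched in miniature.  This is a toy illustration (not any real market's API; the class and names are made up for this example): the market's proposition must name a future public observable - here, the appointed judge's eventual verdict on the potassium trial from the paragraph above - because only that verdict, never the underlying biological fact, can trigger payouts.

```python
from dataclasses import dataclass, field


@dataclass
class BinaryMarket:
    """A toy binary prediction market that can only resolve against a
    future publicly observable outcome, never an underlying fact."""
    proposition: str                     # must describe a future observable
    stakes: dict = field(default_factory=dict)  # trader -> (side, amount)

    def bet(self, trader: str, side: bool, amount: float) -> None:
        self.stakes[trader] = (side, amount)

    def resolve(self, observed_outcome: bool) -> dict:
        # Settlement happens only once the observable arrives and (nearly)
        # everyone agrees on it; winners split the losers' pot pro rata.
        winners = {t: a for t, (s, a) in self.stakes.items() if s == observed_outcome}
        losers_pot = sum(a for (s, a) in self.stakes.values() if s != observed_outcome)
        total_winning = sum(winners.values()) or 1
        return {t: a + losers_pot * a / total_winning for t, a in winners.items()}


# The proposition is the judge's future public verdict, not "potassium
# causes weight loss" as a fact of biology.
m = BinaryMarket("In five years, the appointed judge says the evidence favored weight loss")
m.bet("alice", True, 60)
m.bet("bob", False, 40)
payouts = m.resolve(True)  # if no verdict ever arrives, resolve() is never called
```

Note that if the observable never materializes - as in the foomdoom case discussed below, where no agreed-upon observation ever occurs - `resolve` is simply never called and no trader is ever paid, which is the broken link in the incentive chain.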

The marginal foresightful trader usually has a motive to run ahead of the market and make trades now, based on where the equilibrium ought to settle later; not because they are nobly undertaking a grand altruistic project of transmitting facts backward in time and making the market behave nicely from the standpoint of theoretical economics, but because they expect to get paid after everyone makes the common observation and the market settles into a new equilibrium reflecting that state of knowledge.  And then they expect to have that money, or to get a bonus for earning that money for their proprietary trading firm, and for that money to still hold its value, and for them to be able to spend the money on nice things.

In the unusual case of foomdoom, even if doom proceeds slowly enough that a large-enough group of marginal foresightful traders see the foomdoom coming, even if there is somehow a really definitive fire alarm that says "extremely high probability you are dead within two to four years", it is incredibly unlikely that everyone in the world will agree that yes we'll all be dead in two to four years, and that the markets will settle into the equilibrium that an economist would say corresponds to the belief that we'll all be dead in two to four years; which is what's required for the foresightful proprietary trader to score a huge profit and get a big bonus that year and have time to spend it on some fancy way of passing the remaining time.

People do not usually agree on what will happen in two to four years.  This kind of agreement that reliably reflects a fact, and makes a market pay out in a way that you can trust to correspond to that fact, is usually achieved after that fact is publicly definitively observed, not two to four years ahead of the observation.

In case of foomdoom the world never settles into equilibrium later, the bets never pay out, there is never that moment where everybody says "What a foresightful trader that was!" and agrees on the fact that yes we sure are all dead now.  So even if a proprietary trader sees doom coming, they do not have much of an incentive to dutifully transmit that information backward in time in order to make the market behave now in the way that an economist thinks ought to correspond to the equilibrium it would settle into after everybody agreed that they were dead.

That incentive would only exist if you expected everybody to agree that they were going to die in a few years, far enough ahead of everybody actually being dead, for the markets to settle into equilibrium and foresightful traders to collect bonuses on having made the trade before that.  Which is a much stronger and stranger thing to claim, about a planet like Earth, than the usual much weaker claim that a few sharp traders might see a fact coming, and move the markets a few years ahead of time to where they would go after everybody agreed on that fact later.

Though even then, of course, we have cases like the financial crisis of 2006-2008, where some traders did see it coming and turn huge profits, but couldn't move enough marginal money around to actually shift the entire broader market.

To suppose that the market is broken around foomdoom is really not to suppose a remotely surprising market behavior!  Even in a world where nearly all the prices are efficient relative to you and that's why you can't make 10%/day trading Microsoft stock!

What happened in 2006-2008 was much more broken than that!  Marginal traders saw it coming, and some of them won huge even though CDSs were not trivial to short; but they didn't move enough money to shift anything remotely as large as 'real interest rates' ahead of the actual materialization of the disaster.

The market's behavior around Covid also showed much more obliviousness than this; it showed the kind of obliviousness where people I'd previously marked as the strongest EMH challengers reported collecting vast profits over a timespan of a couple of months.  (But on chains of logic much less fraught than OP's, because in real life you can't call LTPZ movements or real interest rate changes in advance, just things on the order of 'buy volatility'.)

We should not believe that 'the market', in the sense of that unusually intelligent entity whose opinions we actually pay attention to, driven by the highly incentivized marginal trader, has any opinion on AGI except that "in the next few years, not everyone will have started believing that they are going to die in a few years after that".  The market is showing no actual opinion on foomdoom, only on what most market participants will believe about foomdoom in a couple of years.   The usual incentive mechanism whereby, if a pandemic starts, in a few years most market participants will agree that this pandemic happened and foresightful traders will collect bonuses and spend them - as is responsible for the market sometimes but not always being foresightful, because it is paid to be foresightful - is in this case broken.  We are really always seeing, when the market foretells an observable's value a few years later, that the foresightful marginal trader thinks that a few years later lots of people will hold a certain opinion; it's just that usually, this common opinion is being mundanely driven by a direct observation.

The market says, "It won't be the case that in 2030, everyone agrees that they were killed by AGIs."  The market isn't saying anything about whether that's because everyone agrees they are alive, or because nobody is left to agree anything.

OP reads like somebody has heard that markets sometimes anticipate things ahead of them happening, that markets sometimes transmit information lossily backward in time, and doesn't quite seem to have understood the mechanism behind it: that what everyone will agree on later, unusually foresightful marginal traders can sometimes cause the market to reflect now, even though not everyone agrees on it yet.  Instead they are talking about "What if the market believed..." as if this kind of market belief reflected a numerical majority of the people in the market believing that foomdoom would kill us all before 30-year bonds paid out.

But this is not where the market gets its power to say things that we ought to pay special attention to (though even then the market isn't always right about those things, or even righter than us, especially if those things are a little strange, eg Covid etc).  The market gets its power from unusually foresightful marginal traders expecting to get a payoff from what everyone believes later, after the thing actually happened and therefore most market participants agree about what happened.

And this transmission mechanism is broken w/r/t AGI doom in a way that it wouldn't even be broken for an asteroid strike; with an asteroid strike, you might get weird effects from money losing its value in the future, but at least people could all agree on the asteroid strike coming.  With AGI, I think you'd have to be pretty naive to expect everybody to agree that AGI will kill us in two years, two years before AGI kills us.  So it is inappropriate to skip over a step we can usually skip over, and compress the true proposition "The market doesn't believe that everyone in 2030 will believe that in 2030 everyone is dead", to the false proposition "The market doesn't believe that in 2030 everyone will be dead."

Though again - also just to be clear - AGI ruin is a harder call than Covid and I wouldn't strongly expect the market to get it right, even if transmission weren't broken; and even if I thought the market would get it right after seeing GPT-4 and that transmission wasn't broken, I wouldn't buy "short LTPZ" as a surefire way to profit.

Comment by EliezerYudkowsky on AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years · 2023-01-12T12:03:28.339Z · EA · GW

I wouldn't say that I have "a lot of" skepticism about the applicability of the EMH in this case; you only need realism to believe that the bar is above USDT and Covid, for a case where nobody ever says 'oops' and the market never pays out.

Comment by EliezerYudkowsky on AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years · 2023-01-10T19:19:23.658Z · EA · GW

Suppose you are one of the 0.1% of macro bonds traders familiar with Yudkowskian foom.  You reason as follows:  "Suppose that in the next 2 years, we get even more alarming news out of GPT-4 and successors.  Suppose it's so incredibly alarming that 10% of macro traders notice, and then 10% of those  hear about Yudkowskian foom scenarios.  Putting myself into the shoes of one of those normie macro traders, I think I reason... that most actual normal people won't change their saving behavior any time soon, even if theoretically they should  decrease their saving,  and that's not likely to have macro effects.  Still as a normie trader who's heard about Yudkowsky foomdoom, I think I reason that if Yudkowsky's right, we're all dead, and if Yudkowsky's not right, I'll get embarrassed about a wrong trade and fired.  So this normie trader won't trade on Yudkowsky foomdoom premises.  Therefore I don't think I can profit over the next two years by shorting a TIPS fund... even leaving aside concerning feelings about whether going hugely short LTPZ would have model risk about LTPZ's actual relation to real interest rates in these scenarios, or whether other traders would expect big AI impacts to hit measured inflation because of AI-driven lower prices or AI-driven unemployment."

And then one week before the end of the world, the 1% of most clueful macro bonds traders... will take vacation days early, and draw down their rainy day funds to spend time with their family.  They still won't make macro trades about that, because the payoff matrix looks like "If you're right, you're dead and not paid, and if you're wrong, you're embarrassed and get fired."  Then haha whoops it turns out that the world didn't end in a week after all, and people go back to work with a nervous laugh and a sick feeling in their stomachs, and everybody actually falls over dead three weeks later.

If Omega tells you today that everyone will be dead in 2030 with probability 1, there's no direct way to make a market profit on that private information over the next 2 years, except insofar as foresightful traders today expect less foresightful future traders to hear about AI and read an economics textbook and decide that interest rates theoretically ought to rise and go short TIPS index funds.  Foresightful traders today don't expect this.

To put it another way:  Yes, savvy market traders don't believe that in 2025 everybody will realize that the world is ending.  The savvy market traders are correct!  Even at the point where the world is ending, everybody will not believe this, and so at no point will the savvy trader have made a profit!  The death of all of humanity induces a market anomaly wherein savvy traders don't expect to be able to profit from everyone else's error because no event occurs where the real thing actually happens and everybody says "Oops" and the savvy trader gets paid off.

There just isn't any mystery here.  You can't make a short-term profit off correcting these market prices even if Omega whispers the truth in your ear with certainty.  That's it, that's the mystery explained, you're done.
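The payoff matrix described above can be written out as a toy expected-utility calculation.  The numbers here are assumptions for illustration, not anything from the comment: being right pays nothing (you're dead, the trade never settles), being wrong costs something (embarrassment, getting fired).

```python
def expected_utility(p_doom: float,
                     u_right_but_dead: float = 0.0,    # trade "wins" but never pays out
                     u_wrong_and_fired: float = -1.0   # world persists, career damaged
                     ) -> float:
    """Trader's expected utility from the 'short TIPS on foomdoom' trade,
    under the payoff matrix where being right is worth nothing."""
    return p_doom * u_right_but_dead + (1.0 - p_doom) * u_wrong_and_fired


# The trade's expected utility tops out at zero (certain doom, trade never
# pays) and is strictly negative everywhere else - so no level of
# doom-confidence, not even Omega whispering certainty, makes it attractive.
for p in (0.1, 0.5, 0.9):
    assert expected_utility(p) < 0
assert expected_utility(1.0) == 0.0
```

The design point is that the asymmetry doesn't depend on the exact utilities chosen: as long as the "right" branch pays nothing and the "wrong" branch costs anything at all, the trade is weakly dominated by not trading.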

Comment by EliezerYudkowsky on Revisiting EA's media policy · 2022-12-05T04:59:39.881Z · EA · GW

If you haven't extensively, successfully dealt with the media, someplace where the media do not start out nicely inclined towards you (i.e., your past media experience at the Center for Rare Diseases in Cute Puppies does not count), you are not qualified to give this advice.  It should be given by somebody who understands how bad journalism gets and what needs to be done to avoid the usual and average negative outcome, or not at all.

Comment by EliezerYudkowsky on The case for actively assisting FTX clawbacks · 2022-11-28T02:32:16.003Z · EA · GW

I think the sort of people who look at this advice and find that it sounded plausible to them, might want to first follow the rule of only taking advice that originated in actual lawyers, because they couldn't tell which nonlawyers had done real legal research.  IDK, I don't know what it's like from the inside to read the original post and not scream.

Comment by EliezerYudkowsky on The case for actively assisting FTX clawbacks · 2022-11-27T22:25:46.793Z · EA · GW

Important notice to readers.  Please vote up even though it is not very carefully argued here, because it may be important to some readers to read it immediately.

Comment by EliezerYudkowsky on EA should blurt · 2022-11-23T02:31:15.148Z · EA · GW

I see no mention in either of your forum posts of the aforesaid lawyer?

Comment by EliezerYudkowsky on Who's at fault for FTX's wrongdoing · 2022-11-17T23:41:33.604Z · EA · GW

I'd agree with this statement more if it acknowledged the extent to which most human minds have the kind of propositional separation between "morality" and "optics" that obtained financially between FTX and Alameda.

Comment by EliezerYudkowsky on Who's at fault for FTX's wrongdoing · 2022-11-17T23:40:38.078Z · EA · GW

Yeah, I think it's a severe problem that if you are good at decision theory you can in fact validly grab big old chunks of deontology directly out of consequentialism, including lots of the cautionary parts.  Or to put it perhaps a bit more sharply: a coherent superintelligence with a nice utility function does not in fact need deontology.  And if you tell that to a certain kind of person, they will in fact decide that they'd be cooler if they were superintelligences, so they must be really skillful at deriving deontology from decision theory, and therefore they can discard the deontology and just do what the decision theory does.  I'm not sure how to handle this; I think that the concept of "cognitohazard" gets vastly overplayed around here, but there's still true facts that cause a certain kind of person to predictably get their brain stuck on them, and this could plausibly be one of them.  It's also too important of a fact (eg to alignment) for "keep it completely secret" to be a plausible option either.

Comment by EliezerYudkowsky on Who's at fault for FTX's wrongdoing · 2022-11-17T23:37:02.043Z · EA · GW

This strikes me as a bad play of "if there was even a chance".   Is there any cognitive procedure on Earth that passes the standard of "Nobody ever might have been using this cognitive procedure at the time they made $mistake?"  That more than three human beings have ever used?  I think when we're casting this kind of shade we ought to be pretty darned sure, preferably in the form of prior documentation that we think was honest, about what thought process was going on at the time.

Comment by EliezerYudkowsky on Who's at fault for FTX's wrongdoing · 2022-11-17T23:33:51.438Z · EA · GW

Maybe they weren't familiar with the overwhelming volume of previous historical incidents, hadn't had their brains process history or the news as real events rather than mythology, or were genuinely unsure about how often these sorts of things happened in real life rather than becoming available on the news.  I'm guessing #2.

Comment by EliezerYudkowsky on Who's at fault for FTX's wrongdoing · 2022-11-16T23:57:58.725Z · EA · GW

The point is not "EA did as little to shape Alameda as Novik did to shape Alameda" but "here is an example of the mental motion of trying to grab too much responsibility for yourself".

Comment by EliezerYudkowsky on Who's at fault for FTX's wrongdoing · 2022-11-16T23:56:56.168Z · EA · GW

Fair point.

Comment by EliezerYudkowsky on Who's at fault for FTX's wrongdoing · 2022-11-16T09:37:29.597Z · EA · GW

I was not being serious there.  It was meant to show - see, I could blame myself too, if I wanted to be silly; now don't be that silly.

Comment by EliezerYudkowsky on Who's at fault for FTX's wrongdoing · 2022-11-16T09:36:26.388Z · EA · GW

The point there isn't so much "He could not have had any EA thoughts in his head at all", which I doubt is really true - though also there could've just been pressure from coworkers, and office politics around it, resolving into something like the Future Fund so that they'd be doing anything at all.  My point is just that this nightmare is probably not one of a True Sincere Committed EA Act Utilitarian doing these things; that person would've tried to take more money off the table, earlier, for the Future Fund.  Needing an e-sports site named after your company - that's indeed something that other businesses do for business reasons; and if it feeds your business, that's real, that's urgent, that has to happen now.  The philanthropy side was evidently not like that.

Comment by EliezerYudkowsky on Who's at fault for FTX's wrongdoing · 2022-11-16T08:19:53.406Z · EA · GW

I was passing through the Bahamas and asked if FTX wanted me to talk to the EAs they had on fellowships there.  They paid for my hotel room and an Airbnb when the hotel got full, for a week.  I'm not sure but I don't think I remember getting to see SBF at all while I was at the hotel.  Didn't go swimming or sunning or any such because I am not a very outdoors person.  It does not seem entirely accurate to characterize this as "was hosted by SBF in the Bahamas".

The Future Fund basically turned down all my ideas until the regrantor program started; I made two recommendations and I expect neither of them will pay out now unless they moved very fast.

Unless I specifically defend an idea, I think that a lot of what gets said in the San Francisco Bay Area is also not something I'd accept as my fault.  Eg there was a lot of drug use involved in this going wrong, which I'm sure did not start from me, and I've suggested increasingly loudly and openly of late that people cut back on the drug use; maybe it's Bay-associated idk, but it sure is not Yudkowsky-endorsed.

I did think Will MacAskill was from the Singer side of things, so I admit to being surprised if the highly-legible side of effective altruism got nothing, unless it was a room-for-more-funding issue with GiveWell+OpenPhil having already snapped up all the fruit hanging lower than GiveDirectly.  I will consider myself tentatively corrected on that point unless I hear otherwise or investigate further.

Comment by EliezerYudkowsky on Who's at fault for FTX's wrongdoing · 2022-11-16T06:21:26.452Z · EA · GW

I agree that if I, personally, had steered SBF into crypto, and uncharacteristically failed to add on a lot of "hey but please don't scam people, only do this if you find a kind of crypto you can feel good about" I might consider myself more at fault.  I even think that the Singer side of EA in fact does less talking about deontology, less writing of fiction that exemplifies the feelings and reasoning behind that deontology, less cautioning of people against twisting up their brains by chasing good ideas; on my view, the Singer side explicitly starts by trying to twist people's brains up internally, and at some point we should all maybe have a conversation about that.

The thing is, if you want to be sane about this sort of thing, even so and regardless I think Peter Singer himself would not have approved this, would obviously not have approved this.  When somebody goes that far off the rails, I just don't see how you could reasonably hold responsible people who didn't tell them to do that and would've obviously not wanted them to do that.

Comment by EliezerYudkowsky on Selective truth-telling: concerns about EA leadership communication. · 2022-11-16T06:16:01.445Z · EA · GW

I don't, in fact, take federal charges like that seriously - I view it as a case of living in a world with bad laws and processes - but I do take seriously the notion of betraying an investor's investment and trust.

Comment by EliezerYudkowsky on Selective truth-telling: concerns about EA leadership communication. · 2022-11-16T05:47:41.193Z · EA · GW

Okay; I agree then that it's reasonable to say of Ben Delo that Hayes and cofounders were accused of trying to defraud two early investors, that Ben Delo is accused of taunting them with a meme, and that they settled out of court.

I do note that this is pretty different from what Vaughan was previously accusing Delo of, which sounded pretty plausibly like a "victimless crime".

Comment by EliezerYudkowsky on Selective truth-telling: concerns about EA leadership communication. · 2022-11-16T05:26:36.168Z · EA · GW

If it's as the plaintiffs represent, I agree that's pretty damning.  Is it known, aside from the complaint itself, that the plaintiffs are telling the truth and the whole truth?  Don't suppose you have a link to the meme taunt?

Comment by EliezerYudkowsky on Selective truth-telling: concerns about EA leadership communication. · 2022-11-16T04:49:21.048Z · EA · GW

I wish I lived in a society where this question was not necessary, but:  Was this a "victimless crime"; else, who were the victims and what did they lose?

Comment by EliezerYudkowsky on Selective truth-telling: concerns about EA leadership communication. · 2022-11-16T01:32:45.036Z · EA · GW

We still don't have a clue about Ben Delo, afaik.

Comment by EliezerYudkowsky on How could we have avoided this? · 2022-11-15T22:18:09.318Z · EA · GW

Golly, I didn't even realize that.

Comment by EliezerYudkowsky on What to do if a reporter contacts you about FTX? · 2022-11-15T09:42:04.189Z · EA · GW

Unless you know the reporter, and you know that their coverage of subjects that you personally are well-informed about has been accurate and fair (not just plausible-sounding coverage of things you don't know), then Rule 1 is: don't talk to reporters.

I almost always don't.  If it seems plausibly important, I offer to answer their questions off-the-record, if they're really looking for knowledge rather than a money misquote; and so far only one reporter, a Pulitzer Prize winner, has taken me up on that - been interested in knowledge at all.

Comment by EliezerYudkowsky on An Important Lesson of the FTX Implosion... · 2022-11-15T03:45:21.719Z · EA · GW

Mostly, I think EAs are beating themselves up too much about FTX.  But separately: among the few problems that I think EA actually does have is producing really lengthy writeups of things that don't simplify well and don't come with tldrs, a la the incentives in the academic paper factory.  And the life wisdom that produces distrust of complicated things that don't simplify well is produced in part by watching complicated things like FTX implode, and drawing a lesson of (bounded, defeasible) complexity-distrust from that.

Comment by EliezerYudkowsky on An Important Lesson of the FTX Implosion... · 2022-11-15T03:02:52.738Z · EA · GW

Okay, fine, a couple of caveats:

Distrust complicated stories that don't have much simpler versions that also make sense, unless they're pinned down very precisely by the evidence.  When two sides of a yes-no question both complain the other side is committing this sin, you now have a serious challenge to your epistemology and you may need to sit down and think about it.

Distrust complicated designs unless you can calculate very precisely how they'll work or they've been validated by a lot of testing on exactly the same problem distribution you're drawing from.

Comment by EliezerYudkowsky on How could we have avoided this? · 2022-11-15T02:38:54.501Z · EA · GW

Standard reply is that a visible bet of this form would itself be sus, and would act as a subsidy to the prediction market, meaning that bets the other way would have a larger payoff and hence warrant a more expensive investigation.  Though this alas does not work quite the same way on Manifold, since it's not convertible back to USD.

Comment by EliezerYudkowsky on IMPCO, don't injure yourself by returning FTXFF money for services you already provided · 2022-11-15T00:55:13.005Z · EA · GW

Comment by EliezerYudkowsky on NY Times on the FTX implosion's impact on EA · 2022-11-14T04:10:32.805Z · EA · GW

I think EAs could stand to learn something from non-EAs here, about how not to blame the victim even when the victim is you.

Comment by EliezerYudkowsky on Thoughts on legal concerns surrounding the FTX situation · 2022-11-14T03:15:20.194Z · EA · GW

Comment by EliezerYudkowsky on A personal statement on FTX · 2022-11-14T01:40:39.109Z · EA · GW

...not saying anything in favor of protecting some aspect of our current culture, when somebody else has just recently expressed concerns about it?  That's a rule?

Comment by EliezerYudkowsky on A personal statement on FTX · 2022-11-14T01:38:42.437Z · EA · GW

You could plausibly claim it gets disclosed to Sequoia Capital, if SC has shown themselves worthy of being trusted with information like that and responding to it in a sensible fashion eg with more thorough audits.  Disclosing to FTX Future Fund seems like a much weirder case, unless FTX Future Fund is auditing FTX's books well enough that they'd have any hope of detecting fraud - otherwise, what is FTXFF supposed to do with that information?

EA generally thinking that it has a right to know who its celebrity donors are fucking strikes me as incredibly unhealthy.

Comment by EliezerYudkowsky on A personal statement on FTX · 2022-11-13T23:48:30.424Z · EA · GW

Somebody else in that thread was preemptively yelling "vote manipulation!" and "voting ring!", and as much as it sounds recursively strange, this plus some voting patterns (early upvotes, then sudden huge amounts of downvoting) did lead me to suspect that the poster in question was running a bunch of fake accounts and voting with them.

We would in fact be concerned if it turned out that two people who were supposed to have independent eyes on the books were in a relationship and didn't tell us!  And we'd try to predictably conduct ourselves in such a mature, adult, understanding, and non-pearl-clutching fashion that it would be completely safe for those two people to tell the MIRI Board, "Hey, we've fallen in love, you need to take auditing responsibility off one of us and move it to somebody else" and have us respond to that in a completely routine, nonthreatening, and unexcited way that created no financial or reputational penalties for us being told about it.

That's what I think is the healthy, beneficial, and actually useful for minimizing actual fraud in real life culture, of which I do think present EA has some, and which I think is being threatened by performative indignation.

Comment by EliezerYudkowsky on The FTX Future Fund team has resigned · 2022-11-13T22:54:20.960Z · EA · GW

Yep, well-picked nit, I was just told about that myself.  Perfectly good substantive disagreement with the original thesis, imo; you don't need to downplay it that much.

It also makes sense that the money would've come from the Alameda side (maybe in 2020 or early 2021 according to Wayback, somebody said) rather than the FTX side.  Alameda would have had the Bay Areans, while FTX's philanthropic side was constructed (exclusively?) out of Oxfordians.

Comment by EliezerYudkowsky on A personal statement on FTX · 2022-11-13T22:50:31.037Z · EA · GW

...are you suggesting that nobody ought to dare to defend aspects of our current culture once somebody has expressed concerns about them?

Comment by EliezerYudkowsky on The FTX Future Fund team has resigned · 2022-11-13T22:45:20.800Z · EA · GW

Apparently there was a $132K Alameda donation to MIRI in 2020 or early 2021.  Didn't actually know that.

Well, obviously they donated less to MIRI after they turned evil, and the stopping of MIRI donations was a huge red flag that we all should have noticed.  Sage nod.

Comment by EliezerYudkowsky on A personal statement on FTX · 2022-11-13T21:48:39.202Z · EA · GW

I'm under the impression that mainstream orgs deal with this rather poorly, by having the relationships still happen, but be Big Dark Forbidden Secrets instead of things that people are allowed to actually know about and take into account.  But they Pretend to be acting with Great Propriety, which is all that matters for the great kayfabe performance in front of those who'd otherwise perform pearl-clutching.  People falsifying their romantic relationships to conform to ideas about required public image is part of our present culture of everything being fake; so what loves you forbid from being known and spoken of, by way of trying to forbid the loves themselves, you should forbid very hesitantly.

I think our current culture is better, even in light of current events, because I don't think the standard culture would have actually prevented this bad outcome (unless almost any minor causal perturbance would've prevented it).  It would mean that SBF/C's relationship was just coming out now even though they'd previously supposedly properly broken up before setting up the two companies, or something - if we learned even that much!