I almost never engage in karma voting because I don’t really have a consistent strategy for it that I’m comfortable with, but I just voted on this one. Karma voting in general has recently been kind of confusing to me, but I feel like I have noticed a significant amount of wagon circling lately. How critical a post was of EA didn’t use to be very predictive of its karma, but recently, since around the Bostrom email, it has become much more predictive. Write something defensive of EA, get mostly upvotes, potentially to the triple digits. Write something negative, get very mixed to net negative voting, and, if it reaches high enough karma, possibly even more comments. Hanania’s post on how EA should be anti-woke just got downvoted into the ground twice in a row, so I don’t think the voting reflects much ideological change by comparison (being very famous in EA is also moderately predictive, which is probably some part of the Aella post’s karma at least, and is a more mundane sort of bad I guess).
I’m still hopeful this will bounce back in a few hours, as I often see happen, but I still suspect the overall voting pattern will be a karmic tug of war at best. I’m not sure what to make of this. Is it evaporative cooling? Are the same people just exhausted and taking it out on the bad news? Is it that the people who were upvoting criticism before are exhausted and just not voting much at all, leaving the karma to the naysayers? (I doubt this one because of the voting patterns on moderately high karma posts of the tug-of-war variety, but it’s the sort of thing that makes me worry about my own voting: I don’t even need to vote wrong to vote in a way that creates unreasonable disparities based on what I’m motivated to vote on at all, and just voting on everything is obviously infeasible.) Regardless, I find it very disturbing; I’m used to EA being better than this.
I think this lack of ability to self-advocate is actually crucial to our failures to treat non-human animals with minimum decency. In fact that difference, and its arbitrariness, is one of my favorite alternatives to the argument from marginal cases:
"Say that you go through life neglecting, or even contributing to the suffering of factory farmed animals. One day, you meet someone, who tells you that she used to be a battery cage hen. She is, understandably, not pleased with how she was treated before magically transforming into a conversant agent who could confront you about it. How would you justify yourself to her?"
"This, I think, is importantly different from a closely related case, in which a rock you once kicked around, and which suffered from this, transforms and confronts you. In such a case, you could honestly say that you didn’t think you were hurting the rock at all, because you didn’t think the rock could be hurt. If this rock person was reasonable, and you could convince the rock that your extremely low credence in a scenario like this was reasonable, then it seems as though this would be a perfectly adequate excuse. There is no parallel between this reason and what you might say to the humanized hen, unless you were mistaken about the fact that as a hen she was suffering in her conditions. Perhaps you could instead say that you had, quite reasonable, very very low credence that she would ever be in a position to confront you about this treatment. Do you think she would accept this answer? Do you think she should? What differs between this case and the real world, in terms of what is right or wrong in your behavior, if we agree that your lack of credence that she would transform would be reasonable, but not a good enough answer? It is generally accepted that one should be held as blameworthy or blameless based on their actual beliefs. If these lead you astray in some act, it is a forgivable accident. Given that you are in the same subjective position in this world as you are in the real world, in terms of your credence that you actually will be confronted by a humanized hen, then it seems as though if you have adequate justification in the real world, then there is also something you could give as an adequate justification to this hen. Working backwards, if you have no adequate excuse you can tell the hen, you have no adequate excuse in the real world either."
Anyway, I think this is my favorite piece of Julian's so far!
I didn't downvote (I rarely engage in karma voting) but if I had to guess, I would say that having the entire content of the comment be "downvote me" misled people who didn't understand the connection to your previous comment immediately (i.e. more confusion than some specific plan to go against your stated purpose).
A nit-picking (and late) point of order that I can’t resist making because it’s a pet peeve of mine, re this part:
“the public perception seems to be that you can’t be an effective altruist unless you’re capable of staring the repugnant conclusion in the face and sticking to your guns, like Will MacAskill does in his tremendously widely-publicised and thoughtfully-reviewed book.”
You don’t say explicitly here that staring at the repugnant conclusion and sticking to your guns is specifically the result of being a bullet biting utilitarian, but it seems heavily implied by your framing. To be clear, this is roughly the argument in this part of the book:
-population ethics provably leads every theory to one or more of a set of highly repulsive conclusions most people don’t want to endorse
-out of these the least repulsive one (my impression is that this is the most common view among philosophers, though don’t quote me on that) is the repugnant conclusion
-nevertheless the wisest approach is to apply a moral uncertainty framework that balances all of these theories, which roughly adds up to a version of the critical level view, which bites a sandpapered-down version of the repugnant conclusion as well as (editorializing a bit here, I don’t recall MacAskill noting this) a version of the sadistic conclusion more palatable and principled than the averagist one
Note that his argument doesn’t invoke utilitarianism anywhere; it just invokes the relevant impossibility theorems and some vague principled gesturing around semi-related dilemmas for person-affecting ethics. Indeed, many non-utilitarians bite the repugnant conclusion bullet as well; what is arguably the most famous paper in defense of it was written by a deontologist.
I can virtually guarantee you that whatever clever alternative theory you come up with, it will take me all of five minutes to point out the flaws. Either it is in some crucial way insufficiently specific (this is not a virtue of the theory; actual actions are specific, so all this does is hide which bullets the theory will wind up biting and when), or it winds up biting one or more bullets, possibly different ones at different times (as, for instance, theories that deny the independence of irrelevant alternatives do). There are other moves in this game, in particular making principled arguments for why different theories lead to these conclusions in more or less acceptable ways, but just pointing to the counterintuitive implication of the repugnant conclusion is not a move in that game; it is merely a move, not obviously worse than any other, in the already solved game of “which bullets exist to be bitten”.
Maybe the right approach to this is to just throw up our hands in frustration and say “I don’t know”, but then it’s hard to fault MacAskill, who, again, does a more formalized version of essentially this rather than just biting the repugnant conclusion bullet.
Part of my pet peeve here is with discourse around population ethics, but it also feels like discourse around WWOTF is gradually drifting further away from anything I recognize from its contents. There’s plenty to criticize in the book, but judging from a skim of the secondary literature a few months after its release, you would think it was basically arguing “classical utilitarianism, therefore future”, which is not remotely what the book is actually like.
I can understand some of these even where I disagree, but could you elaborate on why a group being more “aspie” contributes to sexual harassment? (Disclosure: I am an aspie, but in fairness I’m also male, and I feel that I understand that one much more.)
I don't agree with MIRI on everything, but yes, this is one of the things I like most about it
For what it’s worth, speaking as a non-comms person, I’m a big fan of Rob Bensinger style comms people. I like seeing him get into random twitter scraps with e/acc weirdos, or turning obnoxious memes into FAQs, or doing informal abstract-level research on the state of bioethics writing. I may be biased specifically because I like Rob’s contributions, and would miss them if he turned himself into a vessel of perfect public emptiness into which the disembodied spirit of MIRI’s preferred public image was poured, but, look, I also just find that type of job description obviously off-putting. In general I have liked getting to know the EAs I’ve gotten to know, and I don’t know Shakeel that well, but I would like to get to know him better. I am certainly averse to the idea of wrist-slapping him back into this empty vessel to the extent that we are blaming him for carelessness even when he specifies very clearly that he isn’t speaking for his organization. I do think that his statement was hasty, but I also think we need to be forgiving of EAs whose emotions are running a bit hot right now, especially when they circle back to self-correct afterwards.
Equality is always “equality with respect to what”. In one sense, giving a beggar a hundred dollars and giving a billionaire a hundred dollars is treating them equally, but only with respect to money. With respect to the important, fundamental things (improvement in wellbeing), the two are very unequal. I take it that the natural reading of “equal” is “equal with respect to what matters”, as otherwise it is trivial to point out some way in which any possible treatment of beings that differ in some respect must be unequal in some way (either you treat the two unequally with respect to money, or with respect to welfare, for instance).
The most radical view of equality of this sort is that, for any being for whom what matters can to some extent matter to them, one ought to treat them equally with respect to it. This is, for instance, the view of people like Singer, Bentham, and Sidgwick (yes, including non-human animals, which is my view as well). It is also, if not universally then at least to a greater degree than average, one of the cornerstones of the philosophy and culture of Effective Altruism, and it is the reading implied by the post linked in that part of the statement.
Even if you disagree with some of the extreme applications of the principle, race is easy mode for this. Virtually everyone today agrees with equality in this case, so given what a unique cornerstone of EA philosophy this type of equality is in general, it makes sense to reiterate it in cases where it seems that people are being treated with callousness and disrespect based on their race; such cases are an especially worrying sign for us. Again, you might disagree that Bostrom is failing to apply equal respect of this sort, or feel that this use of the word equality is not how you usually think of it, but I find it suspicious that so many people are boosting your comment given how common, even mundane, statements like this are in EA philosophy, and given that the statement links directly to a page explaining it on the main EA website.
At the risk of running afoul of the moderation guidelines, this comment reads to me as very obtuse. The sort of equality you are responding to is one that I think almost nobody endorses. The natural reading of “equality” in this piece is the one very typical of, even to an extent uniquely radical about, EA: the one Bentham invokes when he says “each to count for one and none for more than one”, Sidgwick when he talks about the point of view of the universe, and Singer when he discusses equal consideration of equal interests. I would read this charitably and chalk it up to an isolated failure to read the statement charitably, but it is incredibly implausible to me that this becoming the top voted comment can be accounted for by mass reading comprehension problems. If this were not a statement critical of an EA darling, but rather a more mundane statement of EA values, saying something about how people count equally regardless of where in space and time they are, or how sentient beings count equally regardless of their species, I would be extremely surprised to see a comment like this make it to the top of the post. I get that taking this much scandal in a row hurts, but guys, for the love of god, just take the L; this behavior is very uncharming.
For what it’s worth, I think that you are a well-liked and respected critic not just outside of EA, but also within it. You have three posts and 28 comments but a total karma of 1203! Compare this to Emile Torres or Erik Hoel or basically any other external critic with a forum account. I’m not saying this to deny that you have been treated unfairly by EAs; I remember one memorable event when someone else was accused by a prominent EA of being your sock-puppet on basically no evidence. This is just to say, I hope you don’t get too discouraged by this. Overall I think there’s good reason to believe that you are having some impact, slowly but persistently, and many of us would welcome you continuing to push, even if we have various specific disagreements with you (as I do). This comment reads to me as very exhausted, and I understand if you feel you don’t have the energy to keep it up, but I also don’t think it’s a wasted effort.
Personally I think the Most Important Century series is closest to my own thinking, though there isn't any single source that would completely account for my views. Then again I think my timelines are longer than some of the other people in the comments, and I'm not aware of a good comprehensive write up of the case for much shorter timelines.
The impact for me was pretty terrible. There were two main components of the devastating parts of my timeline changes which probably both had a similar amount of effect on me:
-my median estimate year moved significantly closer, with my expected time until AGI cut down by more than half
-my probability mass on AGI arriving significantly sooner than even that bulked up
The latter gives me a nearish-term estimated prognosis of death somewhere between being diagnosed with prostate cancer and colorectal cancer: something probably survivable but hardly ignorable. Also, everyone else in the world has it. Also, it will be hard for you to get almost anyone else to take you seriously if you tell them the diagnosis.
The former change puts my best guess arrival for very advanced AI well within my life expectancy, indeed when I’m middle aged. I’ve seen people argue that it is actually in one’s self-interest to hope that AGI arrives during their lifetimes, but as I’ve written a bit about before, this doesn’t really comfort me at all. The overwhelming driver of my reaction is more that, if things go poorly and everything and everyone I ever loved is entirely erased, I will be there to see it (well, see it in a metaphorical sense at least).
There were a few months, between around April and July of this year, when this caused me some serious mental health problems; in particular it worsened my insomnia and some other things I was already dealing with. At this point I am doing a bit better, and I can sort of put the idea back in the abstract idea box AI risk used to occupy for me, where it feels like it can’t hurt me. Sometimes I still get flashes of dread, but mostly I think I’m past the worst of it for now.
In terms of donation plans, I donated to AI-specific work for the first time this year (MIRI and Epoch; the process of deciding which places to pick was long, frustrating, and convoluted, but probably the biggest filter was that I ruled out anyone doing significant capabilities work). More broadly, I became much more interested in governance work, and generally in work to slow down AI development, than I was before.
I’m not planning to change career paths, mostly because I don’t think there is anything very useful I can do, but if there’s something related to AI governance that comes up that I think I would be a fit for, I’m more open to it than I was before.
I think the overall balance of positive and negative sources is fair only when viewed from a "positive versus negative" standpoint. As I think Habiba Islam pointed out somewhere, much of the positive reading is much, much longer. Where I think this will wind up running into trouble is something like this:
-While there is some primary reading in this list, most of the articles, figures, events, ideas etc. that are discussed across these readings appear in the secondary sources.
-This is pretty much inevitable; the list would multiply out far too much if she added all of the primary sources needed to evaluate the secondary sources from scratch
-Most of the secondary sources are negative, and often misleading in some significant way
-The standard way to try to check these problems without multiplying out primary sources too much is to read other pieces arguing with the original ones
-The trouble is, there are very few of those outside of blogs and the EA forum on these topics, something I've been hand-wringing about for a while, and Thorn seems to only be looking at more official sources like academic/magazine/newspaper publications
-I think Thorn will try to be balanced and thoughtful, but I think this disparity will almost ensure that the video will inherit many of the flaws of its sources
Endorsed. A bunch of my friends had been recommending that I read the sequences for a while, and honestly I was skeptical it would be worth it, but I was actually quite impressed. There aren’t a ton of totally new ideas in it, but where it excels is homing in on specific, obvious-in-retrospect points about thinking well and thinking poorly, describing them in a clear, engaging, and catchy way, and going through a bit of the relevant research. In short, you come out intellectually with much of what you went in with, but with reinforcements and tags put in some especially useful places.
As a caveat, I take issue with a good deal of the substantive material as well. Most notably, I don’t think he always describes those he disagrees with fairly, for instance David Chalmers, and I think “Purchase Fuzzies and Utilons Separately” injected a basically wrong and harmful meme into the EA community (I plan to write a post on this at some point when I get the chance). That said, if you go into them with some skepticism of the substance, you will come out satisfied. You can also audiobook it here, which is how I read it.
Interesting, I’ll have to think about this one a bit, but I tend to think that something like Shiffrin’s gold bricks argument is the stronger antinatalist argument anyway.
Thanks, I appreciate the added information! I'm not sure I'm convinced that this was worthwhile, but I feel like I now have a much better understanding of the case for it.
Thanks, this is indeed helpful. I would also like to know though, what made this property “the most appropriate” out of the three in a bit greater detail if possible. How did its cost compare to the others? Its amenities? I think many people in this thread agree that it might have been worth it to buy some center like this, but still question whether this particular property was the most cost effective one.
Larry Temkin is a decent candidate. I think he has plenty of misunderstandings about EA broadly, but he also defends many views that are contrary to common EA approaches and wrote a whole book about his perspective on philanthropy. As far as philosopher critics go, he is a decent mixture of a) representing a variety of perspectives unpopular among EAs and b) doing so in a rigorous and analytic way EAs are reasonably likely to appreciate; in particular, he has been socially close to many EA philosophers, especially Derek Parfit.
I've tended to be pretty annoyed by EA messaging around this. My impression is that the following things are true about EAs talking to the media:
-Journalists will often represent what you say in a way you would not endorse, and will rarely revise based on your feedback on this, or even give you the opportunity to give feedback
-It is often imprudent to talk to the media, at least if you are not granted anonymity first, because it shines a spotlight on you that is often distorted, and always invites some possible controversy directed at you
However, the advice is often framed as though a third thing is also true:
-It is usually bad for Effective Altruism if Effective Altruists talk to the media without extreme care
My personal impression has been that the articles about EA most reflective of the EA I know tend to involve interviews with EAs, and that the parts of those articles that best reflect EA are the parts where the interviewed EAs are quoted. The worst generally contain no interviews at all. Interviews like this might grant unearned credibility, but at minimum they also humanize us and depict some part of the real people that we are. I guess this might not be everyone's experience, but it's worth remembering that even if the parts where EAs are interviewed are often misrepresentative, so are the parts where they aren't, often to a greater degree. This is especially true of articles written in relative good faith but by outsiders briefly glancing in for their impressions, which in my impression describes the overwhelming majority of pieces written on EA, especially those where interviewed EAs get quoted.
Still, I don’t think this advice is the main reason EA has failed so badly with PR recently. FTX was the obvious one, but in terms of actual media strategy I stand by this comment as my main diagnosis of our mistake. With some honorable exceptions, EA’s media strategy over the past few months seems to me to have been something like: shine highbeams on ourselves, especially on this rather narrow part of ourselves; mostly don’t respond to critics directly in any very prominent non-EA-specific place, except maybe Will MacAskill will occasionally tweet about it; and don’t respond to very harsh critics even this much. I think pretty much every step in this strategy crashed and burned.
No problem, welcome to the forum! You can feel free to share whatever you’re comfortable with, but personally I would recommend you don’t post your email address in the comments, as there was recently someone webcrawling the forum for email addresses to send a scam email to. I would reserve information like that to DMs, my own plan is to DM you his email address if and when he gives me his approval. If you’d prefer, feel free to give me your email address and I can send it to him instead, again, whatever works best for you.
Thanks for commenting, this is very reassuring!
I have a friend in my program (not exactly EA, but EA curious and a great guy) who has done a good deal of work with an organization that teaches and discusses philosophy with prisoners. If you would like, I can ask him if he would mind being put in touch, as he might have some useful insights/connections.
I very rarely engage in karma voting, and didn’t do so for this comment either. That said, one relevant point is that the comment with the most karma gets to sit at the top of the comments section. That means that many people probably vote with an intention to functionally “pin” a comment, and it may not be so much that they think the comment should represent the most important reaction to a post, as that they think it provides crucial context for readers. I think this comment does provide context on the part of this otherwise very good and important post that made me most uncomfortable as stated. I also agree that Alexander’s tone isn’t great, though I read it in almost the opposite way from you (as an emotional reaction in defense of his friends who came forward about Forth).
In case you don’t get adequate responses here, another possibility is to reach out to Julia Wise. She’s both the person who does the most work in this area that I know of, and someone whose work Forth admired. I probably can’t give an adequate response to your question in particular (just the messy reactions I suggest above), but she might have more of a concrete idea of anything that did or didn’t happen at the institutional level.
As far as I know, not much. But I’m personally very conflicted about what to think about this case and how to respond to it, based on information that came out in the wake of her death, which makes the case that she was probably unwell and hurting others, and that she made at least one confirmed false accusation in the past. See this statement from Kelsey Piper in particular:
The trouble is, I don’t know exactly how much this should change how I read her statement; so much of both what she said and what others said about her is too vague for me to easily work through. It would be terrible not to take this seriously enough, and I also have to keep in mind the possibility that responses like this one in the wake of her death were exaggerated out of motivated reasoning. I suspect something should have been done anyway, but I’m not sure what should have been done, and as far as I know not much was done. Presumably there is still time to change that, but I don’t have any ideas for how in particular. But this is one reason I suspect many people had a hard time reacting.
Also, more related to the content of this post, I'm looking at Strong Minds very seriously. I was aware of them and liked their work before, but this year I have been convinced that they are unusually underrated by major granters in the field.
Maybe the biggest thing is that I got much more worried about AI risk over the last year. Cliche in this crowd, but you guys got me; I wasn't expecting it, and I'm not thrilled about it. I went into the year sort of assuming we had about a century and that Stuart Russell had plausibly solved the technical side (in theory at least). I left it thinking we probably have less than 50 years, and that Russell is probably wrong even on the broad strokes (not so much because of actual developments in AI as because Yudkowsky's dramatizing motivated me to do my homework on the field in a way I hadn't before). I don't know whether this will cause me to donate directly to AI work or not (I don't have a good sense of where the best place to donate is, and much of the broader community work seems meta in ways I'm skeptical of), but it's probably the biggest, most relevant update of my own views this year.
Sure, I don’t think that’s a crazy position, I just disagree with it pretty strongly. Insofar as movement building and community health are valid EA cause areas (and at least we often treat them as such), this strikes me as highish on the list of the most impactful things people working on this cause area could have done, not just in hindsight but also in expectation.
SBF specifically might have been less likely to commit fraud, if some of this fraud was motivated by wanting to earn to give, but in general I don’t think it would actively prevent most people from committing this fraud. That’s not what I take the point of an audit like this to be.
I also think if we look, find nothing, and then later it turns out there was harder to spot wrongdoing, that won’t be worse than if we don’t even try and the same wrongdoing comes up. If the concern is that it will cause us to be overconfident, I would want to see an example of what we would be doing differently from what we are doing now if we were overconfident.
Finally, I think the point “ultimately I do think it sounds strange to require an audit of someone's business accounts before they buy bednets to save lives” proves too much. I think EA’s involvement with SBF was bad, and it would have been worth some effort to try to avoid that. I think this is true even if SBF still engaged in fraud, and still donated money some other place. You could have raised identical points to defend not doing this in the case of SBF, which maybe you also think it wouldn’t have made sense to do, but at the very least in retrospect there was a cost to not doing so. Given this cost, it doesn’t seem particularly odd to me at all.
I think if we do an audit, we shouldn’t hire someone for it who’s part of EA at all.
I’m not sure what you mean here, I give a narrative in the post - Moskovitz makes up most of our funding, this is a big deal worth some worry and scrutiny, therefore even if we trust him, as I tend to, a little extra housekeeping would be prudent. Maybe you think it’s a bad narrative, but it’s certainly a narrative.
I don’t think it has to be that burdensome to be useful, just some independent investigation into relevant information, ideally with some help from Moskovitz with some of the relevant documents/information. That said, I would bet actual real money that Moskovitz won’t stop donating to EA if he faces some audit, provided it doesn’t require some sort of serious breach of contract of the sort you mention. I think he would understand given the circumstances, and my impression is that he genuinely cares about this stuff quite a lot. If you want to arrange something, feel free to DM me. The bigger risk, which I considered bringing up, might be putting off future donors who are less committed if they expect to face similar scrutiny. I’m not sure how to feel about this one, except I kind of bite the bullet that, if ensuring that something like SBF is less likely to happen again means no one person donates nearly as much to us as he did, that might well be worth it. I tend to lean steering over rowing on these things, and whether it’s hindsight bias or not, recent events make me feel fairly vindicated in this.
I usually agree, but Moskovitz isn’t just any donor, he makes up the great majority of EA funding. Insofar as this rule has any limits in exceptional cases, Moskovitz’s money seems to rise to the level where it’s worth considering. I should also add, in case the wording made it seem otherwise, I’m not necessarily suggesting this should be a super burdensome audit of the sort the IRS might conduct, if that seems like too much even something much lighter seems like it would be useful.
For those of us worried about insect suffering, I don’t think it’s so much that we confuse pain and suffering (there are also some even worse problems that come from ambiguities in what people mean by pain as opposed to suffering: some use pain to refer to a process for aversively responding to stimuli, not necessarily conscious the way suffering is, while others use it to refer to a conscious experience that is often associated with negative valence, but which doesn’t necessarily rise to “suffering” without this valence), as that the question is just actually really hard. Insects might well not be conscious; the evidence here is quite mixed, some of it depending on the interpretation of different phenomena, some of it on which types of evidence one prioritizes. I think it is very plausible that no insects suffer, that all species suffer, or that some suffer and some don’t. The philosophers also seem very mixed on this one.
I don’t think this presents a strong counterpoint to your belief that they don’t suffer, but I think it does to your apparent extrapolation from this that the reason EAs and vegans care about insects is EV one-upmanship. If you shoot up a house for fun but say it’s alright because you think there’s a 60% chance it’s empty, I think any reasonable, non-EV-obsessed person would want a word with you. If I were a bullet biting EV maximizer, I would be worried about electrons or bacteria, not insects.
Of course the real situation is very different from shooting up a possibly occupied house in ways other than the odds, so perhaps a different concern is with pure aggregation rather than EV fanaticism. Yes, perhaps the odds of doing wrong by harming insects are high enough to rise to morally meaningful, non-fanatical levels, but the amount of harm that would be involved, if so, is so small per individual that the only reason to care about insects is how overwhelmingly many of them there are. A comparison to this kind of worry might be if EAs and vegans were obsessing over a pollutant that had a 30% chance of making a billion people have slightly itchier scalps.
I am more sympathetic to this than to EV worries, because if it isn’t even clear whether something can suffer, then perhaps we should also assume that if it can suffer, that form of suffering is somewhere just over the line into suffering, including its morally meaningful dimensions. My concern with this is that I think the morally meaningful aspects of suffering are actually extremely simple as a rule. I can undergo exquisite, opera-worthy intellectual angst, or I can experience brute torture. The latter seems like it usually matters more than the former, despite being much, much, much simpler, and presumably accessible to much simpler conscious organisms.
So ultimately, I don’t think concerns about insect suffering are comparable to the itchy pollution case. I think it involves some genuinely difficult and important research programs that could easily show us that insect suffering has radical implications (for instance that insect factory farming is morally on par with more familiar forms of factory farming), or that it is completely irrelevant.
I looked back at the specific instances I remembered of this, and they weren’t quite how I remembered. There were more instances that I don’t remember specific enough information about to find again, which makes linking to a source for my impression hard, but given how I misremembered the instances I could check up on specifically, I’m more generally suspicious of how well grounded my impression actually was. I still think a tendency of this sort exists to some extent, but I’m tentatively unendorsing my original comment.
Every time I’ve seen someone make this same point on the forum about Emile Torres’ use of the “white supremacy” label they get net downvoted. A bit off topic, and I hesitate to bring it up because it’s not clear who specifically is implicated in inconsistency here, but I do notice a bias in the voting on comments like this in the different contexts that I think is worth pointing out. For my own part I guess I think the distinction matters denotationally at least but comes with costs in misleading connotation that usually makes it better to phrase things differently.
My understanding is that the author ultimately decided to take it down when someone called them a bigot in the comments (for their points related to polyamory). I think both the comment and reaction to it were a bit much personally, but I can understand not wanting the comments visible if that was the key worry for the author.
In agreement with the first part of this comment at least. If there were EA causes but not an EA community, it seems like much the same thing would have happened: a bunch of causes SBF thought were good would have gotten offered money, probably would have accepted the money, and then wound up accidentally laundering his reputation for being charitable while facing the prospect that some of the money they got was ill-gotten, and some of the money they had planned on getting wasn't going to come. Maybe SBF wouldn't have made his money to begin with? I find it unlikely; earning to give, ends-justifies-means naive consequentialism, and high-risk strategies for making more money are all ideas that people associate with EA, but which don't appeal to anything like a "community". This isn't to say none of these points are important aside from SBF, but, well, it's just odd to see them get so much attention because of him. Similar points have been made in Democratizing Risk, and in a somewhat different way in the recent pre-collapse Clearer Thinking interview with Michael Nielsen and Ajeya Cotra. Maybe it's still worth framing this in terms of SBF if now is an unusually good chance to make major movement changes, but at the same time I find it a little iffy. It seems misleading to frame this in terms of SBF if SBF didn't actually provide us with good reasons to update in this direction, and it feels a bit perverse to use such a difficult time to promote an unrelated hobbyhorse, as a more recent post harped on (I think a bit too much, but I have some sympathy for it).
I should emphasize that I agree with the point about mental health here, I more noted it as one of the major points of the post that was not really disputable. If MIRI is one of the only orgs making a truly decent effort to save the world, then that's just the way things are. Dealing with that fact in a way that promotes a healthy culture/environment, if it is true, is inherently very difficult, and I don't blame MIRI leadership for the degree to which they fail at it given that they do seem to try.
Which parts? I completely agree that the controversy is in large part over comparisons to Leverage, and that there is a great deal of controversy, but I'm not aware of a major factual point of the piece that is widely contested. Much of the post, where it gets specific, concentrates on things like internal secrecy to avoid infohazards, MIRI thinking they are saving the world, and are one of the only groups putting a serious effort towards it, and serious mental health issues many people around these groups experienced, all things I think are just true and publicly available. I also take it that the piece was substantially playing down the badness of Leverage, at least implicitly, for instance by invoking similarities between both and the culture of normal start-ups and companies. Much of the controversy seems to be over this, some over the author's connections to Michael Vassar, some over interpretations of facts that seem much less sinister to others (like the idea that if the author had been open about paranoid fantasies with MIRI employees, they might be disturbed and even try to report her for this, which others pointed out was pretty normal and probably more healthy than the Leverage approach Zoe described). I'm not saying that none of the controversy was related to contested facts, or that everything in the piece is on the ball, just that you seem to be giving it too little credit as an account of governance/culture problems worth considering based on what I see to be a fairly superficial reading of karma/comment count.
On 4, my impression of the controversy over this piece is just that it makes comparisons to both Leverage, and standard start up culture, in a way that seemed inapt and overly generous to Leverage to many people. The on the ground facts about MIRI in it are mostly undisputed from what I can tell, and many of them are governance issues worth criticizing, and relevant to a post like this.
Oh definitely, that’s why I wanted to emphasize that my point wasn’t directly relevant to the one you were making, I just thought it would be useful context for anyone who read the Twitter thread but not the ensuing discussion that clarified or nuanced some of the points it raises.
Not directly related to the main point here, but Vaughan has commented on some of the points in this thread on the forum and the replies add some additional information and context to the 2018 SBF stuff from other people involved at the time:
In the wake of the recent FTX downfall, I would also add a section on Sam Bankman-Fried. It also means there are several existing answers that will need to be updated, either in fairly minor ways (Q4, Q11), or more substantially, in a way that tracks updates in my own views rather than just updates on the current state of affairs (Q1, Q13).
I think this point is really important. Statements like those mentioned in the post are important, but now that FTX doesn’t look like it’s going to be funding anyone going forward, they are also clearly quite cheap. The discussion we should be having is the higher stakes one, where the rubber meets the road. If it turns out that this was fraudulent, but then SBF makes a few billion dollars some other way, do we refuse that money then? That is the real costly signal of commitment, the one that actually makes us trustworthy.
Got it, should be fixed now
My friend Micha, president of EA RIT, recently made an account (@botahamec@mas.to).
Yup! In this way it also has things in common with the mentioned "archipelago" utopia. Another example in this vein that I've heard good things about but haven't read is Ada Palmer's "Too Like the Lightning".
Will MacAskill wrote one you can get to by scanning the QR code towards the back of WWOTF:
Oh wow, yeah definitely, thanks! I think the forum defaults to this type of copyright anyway, but feel free to do whatever you want with my writing here as long as you properly source and credit it.