Posts

NY Times on the FTX implosion's impact on EA 2022-11-14T03:51:38.432Z
Longevity research as AI X-risk intervention 2022-11-06T17:58:09.140Z
AllAmericanBreakfast's Shortform 2022-08-20T05:43:26.331Z
UVC air purifier design and testing strategy 2022-06-01T05:35:00.134Z
Early spending research and Carrick Flynn 2022-05-19T05:08:44.245Z
Prediction Markets For Credit? 2022-03-05T20:33:22.148Z
Text-to-speechifying the EA Forum 2022-02-11T01:43:02.463Z
Submit comments on Paxlovid to the FDA (deadline Nov 29th). 2021-11-27T19:30:56.868Z
EA is a Career Endpoint 2021-05-14T23:58:37.138Z
Talking With a Biosecurity Professional (Quick Notes) 2021-04-10T04:23:10.056Z
Why I prefer "Effective Altruism" to "Global Priorities" 2021-03-25T18:23:19.182Z
Articles are invitations 2021-03-17T20:28:15.056Z
Don't Be Bycatch 2021-03-10T05:28:28.487Z
For Better Commenting, Avoid PONDS 2021-02-05T00:29:57.391Z
Trade Heroism For Grit. 2020-06-06T19:57:04.733Z
EA lessons from my father 2020-05-10T20:37:51.407Z
Why aren't we talking about personal development? 2020-02-29T20:23:14.240Z
[WIP] Summary Review of ITN Critiques 2019-10-09T08:27:49.403Z
Competition is a sign of neglect in important causes with long time horizons for impact. 2019-08-31T01:42:46.531Z
Peer Support/Study/Networking group for EA math-centric students 2019-07-28T21:47:47.301Z
Math advising interview notes + project ideas (for math-inclined EA career changers) 2019-07-26T19:40:20.406Z
Call for beta-testers for the EA Pen Pals Project! 2019-07-26T19:02:03.422Z
Seeking EAs to Interview on Career Change Resources 2019-07-12T00:57:26.471Z
Open for comment: EA career changer worksheet 2019-07-03T20:05:18.890Z
For older EA-oriented career changers: discussion and community formation 2019-07-01T20:46:00.021Z

Comments

Comment by AllAmericanBreakfast on Why did CEA buy Wytham Abbey? · 2022-12-08T18:26:50.474Z · EA · GW

I chose my words carefully here, and phrased my comment as a hypothetical pathway by which the conference center could be net positive EV with a negative effect on donations. The likelihood that it is in fact positive EV is an entirely separate question. We don't have data on that - not even the bad tweets and press we're getting right now are evidence of the effect on donations. Obviously, we're never going to have great data and we'll have to do reasoning under uncertainty. But I don't think we should update much on Twitter. Until someone really digs in and writes the analysis, I'm withholding judgment.

Comment by AllAmericanBreakfast on Why did CEA buy Wytham Abbey? · 2022-12-07T21:16:41.176Z · EA · GW

This conference center can drive donations in multiple ways: by improving the quality of projects and ideas, by increasing the points of contact with EA, by becoming an object of media attention, and by provoking reactions within EA to the conference center’s existence and symbolism.

To argue that negative press makes this conference center extremely net negative, it’s not enough to say it’s going to generate bad press. That bad press needs to cause people who were previously going to become substantial donors to EA to reconsider their decision. And that effect needs to have no substantial counterbalance from the other ways the conference center can drive donations.

Beyond this, the conference center can also be net positive EV even if it has a net negative effect on donations. If it cuts donations in half, but triples the effectiveness of the money we do spend, then it’s paying for itself in utilons.
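A quick back-of-the-envelope check of that tradeoff, taking the “half the donations, triple the effectiveness” figures as purely hypothetical: if impact is total donations $D$ times effectiveness per dollar $E$, then

$$\text{impact}_{\text{after}} = (0.5D) \times (3E) = 1.5\,DE > DE = \text{impact}_{\text{before}},$$

so under those assumed numbers the center more than pays for itself in utilons.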

Comment by AllAmericanBreakfast on "Write a critical post about Effective Altruism, and offer suggestions on how to improve the movement." · 2022-12-07T03:22:57.643Z · EA · GW

I tried something similar and got the same criticisms of focus on quantitative metrics, lack of diversity, transparency, and accountability. Very similar style, structure, word choice, etc.

Comment by AllAmericanBreakfast on I’m a 22-year-old woman involved in Effective Altruism. I’m sad, disappointed, and scared. · 2022-12-02T20:10:43.158Z · EA · GW

I’m not too confident about this, but one reason you may not have heard about men being held accountable in EA is that it’s not the sort of thing you necessarily publicize. For example, I helped a friend who was raped by a member of the AI safety research community. He blocked her on LessWrong, then posted a deceptive self-vindicating article mischaracterizing her and patting himself on the back.

I told her what was going on, and once she’d crafted her response, helped her post it via my account. Downvotes ensued for the guy. Eventually he deleted the post.

That’s one example of what (very partial) accountability looks like, but the end result in this case was a decrease in visibility for an anti-accountability post. And except for this thread, I’m not going around talking about my involvement in the situation.

I don’t know how much of the imbalance this accounts for, nor am I claiming that everything is fine. It’s just something to keep in mind as one aspect of parsing the situation.

Comment by AllAmericanBreakfast on I’m a 22-year-old woman involved in Effective Altruism. I’m sad, disappointed, and scared. · 2022-12-02T14:40:57.016Z · EA · GW

My friend is not part of EA, she was just at an EA-adjacent organization, where the community health team does not have reach AFAIK.

Comment by AllAmericanBreakfast on I’m a 22-year-old woman involved in Effective Altruism. I’m sad, disappointed, and scared. · 2022-12-02T08:04:57.452Z · EA · GW

It would be nice to imagine that aspiring to be a rational, moral community makes us one, but it’s just not so. All the problems in the culture at large will be manifest in EA, with our own virtues and our own flaws relative to baseline.

And that’s not to minimize the problem: a friend of mine was raped by a member of the Bay Area AI safety community. Predators can get a lot of money and social clout and use it to survive even after their misbehavior comes to light.

I don’t know how to deal with it except to address specific issues as they come to light. I guess I would just say that you are not alone in your concern for these issues, and that others do take significant action to address them. I support what I think of as a sort of “safety culture” for relationships, sexuality, race, and culture in the EA movement, which to me means promoting an openness to the issues, a culture of taking them seriously, and taking real steps to address them when they come up. So I see your post as beneficial in promoting that safety culture.

Comment by AllAmericanBreakfast on SBF's comments on ethics are no surprise to virtue ethicists · 2022-12-02T06:10:51.119Z · EA · GW

What you have is a hypothesis. You could gather data to test it. But we should not take any significant action on the basis of your hypothesis.

Comment by AllAmericanBreakfast on SBF's comments on ethics are no surprise to virtue ethicists · 2022-12-02T05:26:34.226Z · EA · GW

I am really interested specifically in the claim you promote that moral calculation interferes with empathic development, rather than contributing to it or being neutral, on net. I don’t expect there’s much lit studying that, but that’s kind of my point. Why would we feel so confident that this or that morality has this or that psychological effect? I have a sense of how my morality has affected me, and we can speculate, but can we really claim to be going beyond that?

Comment by AllAmericanBreakfast on SBF's comments on ethics are no surprise to virtue ethicists · 2022-12-02T02:20:01.285Z · EA · GW

No worries!

I understand your concern. It seems like your model is that you assume most people start with a sort of organic, healthy gut-level caring and sense of fellow-feeling, which moral calculation tends to distort.

My model is the reverse. Most people are somewhere between cold and unfeeling, and aggressively egocentric. Moral reflection builds into them some capacity for paying attention to others and cultivating empathy, which at first starts as an intellectual exercise and eventually becomes a deeply ingrained and felt habit that feels natural.

By analogy, you seem to see moral reflection as turning humans into robots. By contrast, I see it as turning animals into humans. Or think of it like acting. If you've ever acted, or read lines for a play in school, you might have experienced that at first, it's hard to even understand what your character is saying or to identify their objectives. After time with the script, actors develop an intellectual understanding of their character, their goals, and the actions they use to convey emotion. The greatest actors are perhaps method actors, who spend so much time with their character that they actually feel and think naturally like their character. But this takes a lot of time and effort, and it seems to require starting with a more intellectualized relationship with the character.

As I see it, this is pretty much how we develop our adult personalities and figure out how to fit into the social world. Maybe I'm wrong - maybe most people have a nice well-adjusted sense of fellow feeling and empathy from the jump, and I'm the weird one who's had to work on it. If so, I think that my approach has been successful, because I think most people I know see me as an unusually empathic and emotionally aware person.

I can think of examples of people with all four combinations of moral systematization and empathy: high/high, high/low, low/high, and low/low. I'm really not sure how the correlations run.

Overall, this seems like a question for psychology rather than a question for philosophy, and if you're really concerned that consequentialism will turn us into calculators, I'd be most interested to see that argument referring to the psych literature rather than the philosophy literature.

Comment by AllAmericanBreakfast on SBF's comments on ethics are no surprise to virtue ethicists · 2022-12-01T22:56:55.047Z · EA · GW

I think that's fine too.

Comment by AllAmericanBreakfast on SBF's comments on ethics are no surprise to virtue ethicists · 2022-12-01T21:38:23.819Z · EA · GW

Based on this comment, I think I understand your original point better. In most situations, a conscious chain of ethical reasoning held in the mind is not what should be motivating our actions from moment to moment. That would be crazy. I don’t need to consider the ethics of whether to take one more sip of my cup of tea.

But I think the way we resolve this is a common sense and practical form of consequentialism: a directive to apply moral thought in a manner that will have the most good consequences.

One way that might look is outsourcing our charity evaluations to specialists. I don’t have to decide whether bednets or direct donations are better: GiveWell does it for me with their wonderful spreadsheets.

And I don’t have to consider every moment whether deontology or consequentialism is better: the EA movement and my identity as an EA does a lot of that work for me. It also licenses me to defer to habit almost 100% of the time, and invites applying modest limits to my obligation to give of my resources - time, money, and by extension thought.

So I think EA is already doing a pretty darn good job of limiting our need to think about ethics all the time. It’s just that when people do EA stuff, that’s what they think about. My personal EA involvement is only a tiny fraction of my waking hours, but if you thought of my EA posting as 100% of who I am, it would certainly look like I’m obsessed.

Comment by AllAmericanBreakfast on SBF's comments on ethics are no surprise to virtue ethicists · 2022-12-01T20:38:32.367Z · EA · GW

The term I'd probably use is hypocrisy. Usually, we say that hypocrisy is when one's behaviors don't match one's moral standards. But it can also take on other meanings. The film The Big Short has a great scene in which one hypocrite, whose behavior doesn't match her stated moral standards, accuses FrontPoint Partners of being hypocrites, because their true motivations (making money by convincing her to rate the mortgage bonds they are shorting appropriately) don't match their stated ethical rationales (combating fraud).

On Wikipedia, I also found definitions from David Runciman and Michael Gerson showing that hypocrisy can go beyond a behavior/ethical standards mismatch:

According to British political philosopher David Runciman, "Other kinds of hypocritical deception include claims to knowledge that one lacks, claims to a consistency that one cannot sustain, claims to a loyalty that one does not possess, claims to an identity that one does not hold".[2] American political journalist Michael Gerson says that political hypocrisy is "the conscious use of a mask to fool the public and gain political benefit".[3]

I think "motivational hypocrisy" might be a more clear term than "moral schizophrenia" for indicating a motives/ethical rationale mismatch.

Comment by AllAmericanBreakfast on SBF's comments on ethics are no surprise to virtue ethicists · 2022-12-01T17:53:19.840Z · EA · GW

The Mayo Clinic says of schizophrenia:

“Schizophrenia is characterized by thoughts or experiences that seem out of touch with reality, disorganized speech or behavior, and decreased participation in daily activities. Difficulty with concentration and memory may also be present.”

I don’t see the analogy between schizophrenia and “a certain coldness toward ethical choices,” and if it were me, I’d avoid using mental health problems as analogies, unless the analogy is exact.

Comment by AllAmericanBreakfast on SBF's comments on ethics are no surprise to virtue ethicists · 2022-12-01T17:50:22.360Z · EA · GW

Thanks for clarifying!

The big distinction I think needs to be made is between offering a guide to extant consensus on moral paradigms, and proposing your own view on how moral paradigms ought to be divided up. It might not really be possible to give an appropriate summary of moral paradigms in the space you’ve allotted to yourself, just as I wouldn’t want to try and sum up, say, “indigenous vs Western environmentalist paradigms” in the space of a couple paragraphs.

Comment by AllAmericanBreakfast on SBF's comments on ethics are no surprise to virtue ethicists · 2022-12-01T13:18:36.040Z · EA · GW

I’m confused about how you’re dividing up the three ethical paradigms. I know you said your categories were excessively simplistic. But I’m not sure they even roughly approximate my background knowledge of the three systems, and they don’t seem like places you’d want to draw the boundaries in any case.

For example, my reading of Kant, a major deontological thinker, is that one identifies a maxim by asking about the effect on society if that maxim were universalized. That seems to be looking at an action at time T1, and evaluating the effects at times after T1 should that action be considered morally permissible and therefore repeated. That doesn’t seem to be a process of looking “causally upstream” of the act.

When I’ve seen references to virtue ethics, they usually seem to involve arbitrating the morality of the act via some sort of organic discussion within one’s moral community. I don’t think most virtue ethicists would hold that if we could hook somebody up to a brain scrambler that changed their psychological state to something more or less virtuous immediately before the act, this could somehow make the act more or less moral. I don’t buy that virtue ethicists judge actions based on how you were feeling right before you did them.

And of course, we do have rule utilitarianism, which doesn’t judge individual actions by their downstream consequences, but rules for actions.

Honestly, I’ve never quite understood the idea that consequentialism, deontology, and virtue ethics are carving morality at the joints. That’s a strong assertion to make, and it seems like you have to bend these moral traditions to fit the categorization scheme. I haven’t seen a natural categorization scheme that fits them like a glove and yet neatly distinguishes one from the other.

Comment by AllAmericanBreakfast on Should we Audit Dustin Moskovitz? · 2022-11-28T04:41:20.738Z · EA · GW

If there were a consequence-free way to do it, it seems like a good idea. One difference is that Moskovitz’s funding and fortune have been reliable for years. Just by the Lindy effect, they seem on their face more likely than SBF’s to continue without scandal or disappearance.

Comment by AllAmericanBreakfast on Democratizing the workplace as a cause area · 2022-11-27T19:14:49.246Z · EA · GW

You’re the one proposing the new cause area :)

Comment by AllAmericanBreakfast on Democratizing the workplace as a cause area · 2022-11-27T16:37:44.407Z · EA · GW

For having a baby specifically, postpartum depression affects 10-20% of new mothers. A fair fraction experience suicidal ideation, and suicide does occur.

One reason we may see more explicit discussion of the stresses and dissatisfactions of working life is social acceptability bias, which is powerful enough that it can completely distort our common-sense perceptions of how the population at large feels.

Nothing precludes us from helping with people’s working life even if the rest of life also makes us miserable. However, I think it is good to approach this with clarity. For example, is there reason to think helping with the stressors of working life is more tractable, important, and neglected than helping with personal life stressors?

Comment by AllAmericanBreakfast on Sam Bankman-Fried, the FTX collapse, and the limits of effective altruism [The Hindu] · 2022-11-27T05:57:19.709Z · EA · GW

EA is apparently such a successful idea that even its critics feel compelled to use its framing to level their criticism:

“SBF’s Future Fund donated $36.5 million to Effective Ventures, a charity chaired by friend and mentor MacAskill. It is unclear what the basis of this donation was. Was there a randomised, controlled trial (RCT) to decide if this was the best use of the money?”

“Why should we take his views on existential threats facing humanity that he calls to our attention in his latest book What We Owe The Future with any seriousness? If MacAskill cannot predict threats to his own near-term future, namely, the threat associating with SBF posed to his own reputation and the effective altruism movement, how well can he estimate, much less affect, the million-year prospects of humanity?”

Because it is in fact much easier to predict the major threats facing humanity in this century than to predict which specific companies will commit fraud. The former is just a property of the universe, available to discover on inspection. The latter is being actively hidden by precisely the person with the best ability and most incentive to hide it.

“What would MacAskillian calculations make of CMC’s modest beginning as a single-bed clinic?”

I don’t know about MacAskill specifically, but this was in a time before AI was a looming threat. We are generally very up on neglected and important global health interventions.

“If malaria eradication in Bangladesh is one’s greatest passion, the least that can be done is to live and work amongst the people of Chittagong’s hill tract districts.”

To be self-consistent, shouldn’t you be living among the people of Chittagong’s hill tract districts if you’re going to use them in your piece?

Comment by AllAmericanBreakfast on Democratizing the workplace as a cause area · 2022-11-27T00:26:43.755Z · EA · GW

That's getting toward an intriguing comparison, but it sounds like those two figures are from separate surveys? One can be satisfied with one's personal life while also being "stressed" by it on a daily basis - for example, I just got back from visiting friends who have a fussy two-month-old baby, and while I suspect they'd say they were satisfied with their personal lives, they are certainly very stressed right now. Being "stressed" is not the same thing as "not liking" one's job. Overall, I'd really like to see a direct comparison.

Comment by AllAmericanBreakfast on Democratizing the workplace as a cause area · 2022-11-26T23:07:55.299Z · EA · GW

In your opening paragraph establishing the importance of the issue, I noticed that you didn’t compare the way people feel at work to the way they feel in other situations in life. Is work making some people miserable, or is work just another part of what some people experience as an overall miserable life?

Comment by AllAmericanBreakfast on Announcing the first issue of Asterisk · 2022-11-23T09:44:01.719Z · EA · GW

I love the way the main text is centered and the footnotes appear immediately to the right of the reference.

Comment by AllAmericanBreakfast on EA should blurt · 2022-11-22T23:43:26.038Z · EA · GW

I think the issue you’re addressing is a real and important one. However, I think current norms are a response to disadvantages of blurting, both on an individual and movement level. As you note, most people’s naive divergent first impressions are wrong, and on issues most salient to the community, there’s usually somebody else who’s thought about it more. If we added lots more blurting, we’d have an even greater problem with finding the signal in the noise. This adds substantial costs in terms of reader energy, and it also decreases the reward for sharing carefully vetted information because it gets crowded out by less considered blurting.

Hence the current equilibrium, in which ill-considered blurting gets mildly socially punished by people with better-considered views who are frustrated by the blurter, leading to pre-emptive self-censorship and something of a runaway “stay in your lane” feedback loop that can result in “emperor has no clothes” problems like this one. Except it wasn’t a child or “blurter” who exposed SBF - it was his lead competitor, one of the most expert people on the topic.

I’ve said it before and I’ll say it again, EA’s response to this fraud cannot be - not just shouldn’t, but can’t - to achieve some combination of oracular predictive ability and perfect social coordination for high-fidelity information transmission. It just ain’t gonna happen. We should assume that we cannot predict the next scandal. Instead we should focus on finding general-purpose ways to mitigate or prevent scandal without having to know exactly how it will occur.

This comes down to governance. It’s things like good accounting, finding ways to better protect grantees in the case that their funder goes under, perhaps increased transparency of internal records of EA orgs, that sort of thing.

Comment by AllAmericanBreakfast on On EA messaging - being a doctor in a poorer country · 2022-11-20T19:15:28.098Z · EA · GW

And I'd note that there are lots of EAs outside of the West. I've spoken to EAs in the Philippines, Brazil, Russia... It would be great if we could support them in building EA-linked institutions that are specific to the opportunities and challenges they face.

Comment by AllAmericanBreakfast on Polis (Cluster) Poll: What are our opinions on EA, governance and FTX? (Now with visualisation) · 2022-11-19T18:41:38.108Z · EA · GW

Fair enough :) I still think it's worth amplifying this point.

Comment by AllAmericanBreakfast on Polis (Cluster) Poll: What are our opinions on EA, governance and FTX? (Now with visualisation) · 2022-11-19T14:48:43.124Z · EA · GW

I’m not convinced that the opinions aggregated by this poll will constitute a representative sample of the EA movement.

Comment by AllAmericanBreakfast on Does Sam make me want to renounce the actions of the EA community? No. Does your reaction? Absolutely. · 2022-11-18T21:27:43.690Z · EA · GW

Absolutely. We obviously can weather losing funding. EA started small and it can grow back. And people always have enjoyed heaping one form of abuse on it or another. The more fundamental damage will be what we inflict on ourselves.

But I'm still optimistic this will mostly blow over with respect to the EA movement. Mostly, I think that people are being louder than usual across the board, but they seem to be expressing opinions they'd already held. When it stops being as salient, people will probably more or less quiet down and keep pursuing the same types of goals and having the same perspectives they had previously. Hopefully in the context of improved movement governance.

Comment by AllAmericanBreakfast on Does Sam make me want to renounce the actions of the EA community? No. Does your reaction? Absolutely. · 2022-11-18T21:22:38.535Z · EA · GW

“We treat them, in many ways, like ‘philosopher-kings’ of the community.”

Having read Plato, I have no idea what you mean by this.

Comment by AllAmericanBreakfast on If Professional Investors Missed This... · 2022-11-17T06:07:17.314Z · EA · GW

If we had commissioned a report on contingency plans for FTX fraud (w/o predicting fraud, just saying what we’d do to mitigate the fallout if it happened), I think that would have made us look prudent? Because it would have been prudent.

I’m no financial risk manager, but the point of having one is to figure out the set of things that are cost effective. I will bet a buttcheek that the number of common sense cost effective risk mitigation steps we could have taken is greater than zero.

Comment by AllAmericanBreakfast on If Professional Investors Missed This... · 2022-11-17T00:25:52.067Z · EA · GW

There’s a big difference between “we should have seen this coming” and “we should have taken steps to mitigate possible disaster.”

The fact that EA had more to lose in some ways from the FTX bust in no way provided information to predict that bust. “If professional investors missed this…” holds true whether EA had $1 or $1 billion on the line.

But there are steps EA could have taken to mitigate the fallout even without having been able to predict fraud.

For example, we could have invested in legal clarity and contingency plans in case of FTX going bankrupt or being revealed as fraudulent. It’s like wearing your seatbelt. Nobody wears a seatbelt because they predict they’re going to get in a crash. They do it because it’s a cheap and potent form of risk mitigation, without making any effort to predict the outcome on their specific trip. EA risk management should look like installing seatbelts for the movement.

Comment by AllAmericanBreakfast on Who's at fault for FTX's wrongdoing · 2022-11-16T07:32:28.349Z · EA · GW

Which ethical systems do you think have a better track record and why? Does virtue ethics, the preferred moral system of Catholics, have to take responsibility for pedophile priests? Does the rule-based ethics of deontology have to take responsibility for mass incarceration in the USA?

I can understand people claiming that this ethics implies that crazy conclusion, or assigning blame to an idea that seems clearly to have inspired a particular person to do a particular act. But I have no confidence that anybody on this earth has a clue about which ethical system is most or least disproportionately to blame for common-sense forms of good or bad behavior.

Comment by AllAmericanBreakfast on Proposals for reform should come with detailed stories · 2022-11-15T02:09:04.876Z · EA · GW

Crypto's failures are massive and obvious. Yet it's hard to see how it has delivered the goods. That might be a matter of how it's portrayed in the media.

So I'd ask you: how many of the unbanked has it banked? What fraction of remittances are via crypto, and how have the people using crypto remittances been affected by the volatility in crypto?  Is it making progress on providing decentralized property rights in lawless nations, and are we pleased with the way that benefits and costs have been distributed in these populations? What control will I have over my employment history or medical records via crypto, when my doctor has my medical records and I have to publicize my employment history to get hired? Why do I need a decentralized, secure, hard-to-fake weather report?

I don't expect you to have answers to all these questions, but I will openly say I was skeptical of crypto before this crash, and it's even easier to lean into that skepticism now. The specific thing I think crypto seems good for is funding real-money prediction markets. But I'd trade that away in a heartbeat to get rid of the ills I've seen come of crypto.

Convincing people like me to come 'round will require showing that there really is a large magnitude of realized practical benefit. That takes time, and I am patient, but right now, it seems right to me for EA to keep crypto at arm's length in most cases.

Comment by AllAmericanBreakfast on Proposals for reform should come with detailed stories · 2022-11-15T01:51:15.650Z · EA · GW

I fully agree with this, and let me give a somewhat detailed story about what this culture shift might look like if we worked through some of its implications.

Right now, I think people go through a semi-unconscious thought process something like this:

I want to make a proposal for X. But if I go on too long about X, nobody will read it because it'll be too long. And those who do will pounce on the first seeming error they can find. The longer I go on, the bigger the attack surface. And the more contingent my claims become, so the less likely the whole story is to be true. Almost certainly, nobody is going to follow up or build on what I write.  But maybe if I keep my writing short, somebody who does have power and influence will view me as "voting with my words" for a general type of intervention that they can actually get enacted. They'll work out the details.

And so when we see calls for "audits," or what elsewhere I've called "risk management for EA," we should understand that these aren't viewed by their authors as fully fleshed-out proposals. They're the seeds of ideas that could be built out by a person who has the power and influence to access money and professional human resources in the EA movement. Pollsters don't call and ask you to draft an entire bill for your favored policies. In the EA movement, we don't even call people to ask their opinion about the direction of the movement. So if people want to have input, the way they can give it is by making one-liners advocating for things like "audits," and hope that somebody in authority will take them up on the suggestion.

If we want people to explain the details of their proposal in greater depth, we need to make it worth their while. Well-thought-through proposals ought to be known as the sort of thing that can result in offers of grants or jobs. Going forward, I'd reframe the "criticism" contest as an "EA policy proposal" contest, or even have a range of similar contests addressing criticism, EA policy and governance, cause areas, interventions, and so on. If we can't afford to do that for comments on the EA forum, then EA organizations like OpenPhil, 80k, CEA, and so on ought to have dedicated places for people to make proposals about EA governance, where those proposals are read, taken seriously in some kind of legible manner, and can demonstrably lead to real change even when it's not an EA insider making the proposal.

If that's not tractable, then I would actually prefer if these organizations could explicitly declare that they are not open to external input or oversight, and that this is a matter of policy. Ideally, they'd explain this, but even a one-sentence declaration about the forms of openness they are or are not open to would be an improvement. For example, one proposal I heard was that EA orgs should publish all of their internal emails going forward. I'm given to understand this is a norm in the Linux community, and that one can read all of Linus Torvalds' cantankerous emails if one so desires. If OpenPhil didn't want to publish their emails, I'd understand. But it would be nice if they had a web page where they explicitly declared that they'd considered and rejected this idea, even nicer if they articulated why, and best of all if they outlined the true argument for why they rejected it, and created a form in which people could submit counterarguments. Perhaps OpenPhil could then put these counterarguments to a vote, and declare themselves obligated to publish a response if a counterargument received a certain number of votes.

Comment by AllAmericanBreakfast on NY Times on the FTX implosion's impact on EA · 2022-11-14T06:46:16.227Z · EA · GW

Did you mean "low moral expectations" instead of "low moral standards?"

Comment by AllAmericanBreakfast on NY Times on the FTX implosion's impact on EA · 2022-11-14T04:56:05.792Z · EA · GW

I added a note for people to check this comment out.

Comment by AllAmericanBreakfast on A Newcomer's Critique of EA - Underprioritizing Systems Change? · 2022-11-13T19:42:26.713Z · EA · GW

As you know, the question of how a government ought to provision for welfare and the morality and economics of inequality is a multifaceted debate that's raged not for decades, but centuries. Let me give a personal example of why I think it's best to avoid getting wrapped up in those debates in most cases.

One of my interests is the question of whether we ought to compensate people for selling a kidney. I've read dozens of news articles, many scholarly papers, and talked with doctors and economists. I'm a biomedical engineering grad student, have a philosophy/humanities background, and a decent familiarity with economics. Furthermore, I have a lot of experience with "diplomatic dialog," facilitating friendly conversations in the context of sales, interviews, and teaching. So I think I'm unusually well-positioned to navigate this debate. 

I literally just got off a two-hour phone call with a doctor who used to screen kidney donors to get his thoughts. He's against legalization. A very wise and experienced person. Yet it took me two hours just to understand his chain of reasoning. Some of his views were internally inconsistent. I'd patiently talk through a line of thought with him, and we'd find that the reasoning was circular. Fortunately, he's very patient, and our conversation was non-defensive, so that did not result in an ego conflict. What happened instead is that he'd import an entirely different argument that now became his true fundamental objection. And then there would be another one after that. And another one after that.

After two hours, I do think I understand his reasoning, more or less. Regardless of the practical health/economic aspects of the problem, and regardless of how the seller feels about their decision to engage in this transaction, he feels that it's undignified for society to allow organs to be sold. Selling a kidney is not admirable, so it degrades the spirit of altruism that pervades kidney donation. It's not the kind of society we should want to live in - an invasion of the sanctity of the body.

He's willing to admit that this might not be the right way to think about dignity and altruism, but nobody on the pro-legalization side is taking this dimension of the problem seriously. And even if they did, there's no cut-and-dry way to make the case that permitting kidney sales enhances human dignity, or sanctifies the body, or betters the moral worth of our society.

He doesn't see the question as urgent. These matters transcend the practical urgency of a long and growing waitlist for kidney transplants, or the $28 billion the USA spends on dialysis annually. He's perfectly willing to wait patiently for somebody to persuade him personally to change his perspective on the dignity and symbolism of kidney sales. Until they do, he's happy to stay with his present perspective, which as an added benefit is compatible with the law.

Now, I personally think that we should permit kidney sales in the short run, but that implantable dialysis will more or less completely eliminate demand for living kidneys within a few decades. I could make it my life's work to construct a moral argument for kidney sales that might be persuasive to people like the doctor I spoke with today. But the debate's been raging for decades, the Catholic church is on the other side, Federal law would have to be changed, there's no clear argumentative strategy to change people's minds about "dignity," and the problem itself is temporary on long enough time scales.

Since I'm a biomedical engineer, I have the opportunity to work on the bioartificial kidney technology that I think will eventually replace living kidney transplant. I can also work on a lot of other technological solutions for human health problems, or policy issues that might be uncontroversial and make a big difference in human health. Why select a political issue where we've had decades of evidence of the inability to make progress, for reasons that are easy to understand once you start seeing what motivates people on each side of the debate?

When I shift from considering kidney sales, where the practical arguments are cut-and-dry in favor of permitting them, to measures to increase taxes to fund social services, the picture gets murkier. There, even the practical economic arguments are much more controversial, and you're not trying to permit a voluntary transaction but to force a large confiscation of money from some of the most powerful individuals in the world. It seems to me that you're not only at serious risk of doing harm, you're at an even greater risk of failing to do good -- just as many generations of our ancestors have.

This isn't to say you're wrong. It's to say that this is what you'd have to persuade me of if you wanted to convince me, personally, that EA should be doing more progressive activism on taxation and welfare. But my warning is that this would probably have to start with the equivalent of the two-hour phone call I had earlier with the doctor, and it might turn out that I'd convince you, rather than the other way 'round. And either way, it would only be zero or one person who was convinced. It's easy to get sucked into, but tough to scale or accomplish things with. That's an important reason why I have chosen to pursue a career in technology rather than in politics, and have affiliated myself with a movement that focuses on philanthropic provision of goods and services rather than on trying to use government as the primary vehicle for its agenda.

If you think you can make a compelling case (i.e. a case that would convince me) that I'm wrong in my thinking, and that the best way to do good in the world might be for me to focus on politics in some way, let me know!

Comment by AllAmericanBreakfast on A Newcomer's Critique of EA - Underprioritizing Systems Change? · 2022-11-13T04:55:51.289Z · EA · GW

Parting a billionaire from his money when he doesn't want to give it to you is extremely difficult. It also doesn't always give you the results you were hoping for. Consider the track records of countries now and in the past that have sought to use government power to markedly reduce inequality. Which ones seem like examples of outcomes you'd be happy with?

EA has had quite a bit of fast success (FTX notwithstanding) in inducing billionaires to part with their money willingly.

Comment by AllAmericanBreakfast on How could we have avoided this? · 2022-11-12T22:40:25.193Z · EA · GW

I support these actions, conditional on them becoming common knowledge community norms. However, it's strictly less likely for us to trade with bad actors and project that we don't support them than it is for us to just trade with bad actors.
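(Formally, this is just the conjunction rule: for any two events, $P(A \wedge B) \le P(A)$ - succeeding at both trading with bad actors and credibly projecting that we don't support them can never be more likely than merely trading with them.)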

Comment by AllAmericanBreakfast on [deleted post] 2022-11-12T22:14:00.711Z

I'm a monogamous man with very little connection to any in-person EA community - I attended EAGxBoston last year, had a great time, and that's it. So hearing these anecdotes about the in-person scene is quite disturbing to me.

I have no beef with polyamory. I'm a big fan of EA. And I utterly disagree with your characterization of EA as "altruism stripped of empathy and morality."

But what you are describing is an incredibly toxic power dynamic. It does sound predatory. Mixing institutional authority, money, drugs, and sex sounds like a recipe for disaster. We've already had this sh*t going on at the Monastic Academy. If it's pervasive in the EA and LW communities more broadly, then that's terrible for the people on whom this unwanted attention is inflicted. An absolute scandal factory. I'm quite prepared to believe it's going on.

Add this to the list of things an EA risk management and whistleblowing organization needs to focus on.

Comment by AllAmericanBreakfast on How could we have avoided this? · 2022-11-12T21:58:41.067Z · EA · GW

Right, I agree that it's good to drain his resources and turn them into good things. The problem is that right now, our model is "status is a voluntary transaction." In that model, when SBF, or in this example VP, donates, they are implicitly requesting status, which their recipients can choose to grant them or not.

I don't think grantees - even whole movements - necessarily have a choice in this matter. How would we have coordinated to avoid granting SBF status? Refused to have him on podcasts? But if he donates to EA, and a non-EA podcaster (maybe Tyler Cowen) asks him, SBF is free to talk about his connection and reasoning. Journalists can cover it however they see fit. People in EA, perhaps simply disagreeing, perhaps because they hope to curry favor with SBF, may self-interestedly grant status anyway. That wouldn't be very altruistic, but we should be seriously examining the degree to which self-interest motivates people to participate in EA right now.

So if we want to be able to accept donations from radioactive (or potentially radioactive) people, we need some story to explain how that avoids granting them status in ways that are out of our control. How do we keep journalists, podcasters, a fraction of the EA community, and the donors themselves from constructing a narrative of the donor as a high-status EA figure?

Comment by AllAmericanBreakfast on How could we have avoided this? · 2022-11-12T21:30:50.448Z · EA · GW

"Old mafia don?" How about Vladimir Putin?

I tend to lean in your direction, but I think we should base this argument on the most radioactive relevant modern case.

Comment by AllAmericanBreakfast on How could we have avoided this? · 2022-11-12T21:26:34.356Z · EA · GW

The quote you're citing is an argument for abject helplessness. We shouldn't be so confident in our own utter lack of capacity for risk management that we fund this work with $0.

Comment by AllAmericanBreakfast on How could we have avoided this? · 2022-11-12T21:22:33.099Z · EA · GW

Also, the point of risk management isn't to identify, with confidence, what will happen. It almost certainly was not possible to predict FTX's collapse, much less the possibility of fraud, with high confidence.

What we probably could have done is find ways to mitigate that risk. For example, it sounds possible that money disbursed from the Future Fund could be clawed back. Was there an appropriate mechanism by which we could have avoided disbursing money until we were sure that grantees could feel totally secure that this would not happen? In fact, is there a way this could be implemented at other grantmaking organizations?

Could we have put the brakes on incorporating the FTX Future Fund as an EA-affiliated grantmaker until it had been around for a while?

There are probably prudent steps we could start taking in the future to mitigate such damages without having to be oracles.

Comment by AllAmericanBreakfast on How could we have avoided this? · 2022-11-12T21:14:10.390Z · EA · GW

I’m not trying to take credit for my silent suspicion. One of the reasons the crypto industry is notorious is because of fraud. I think that’s a natural case a dedicated risk team could have considered if we’d had one.

Comment by AllAmericanBreakfast on How could we have avoided this? · 2022-11-12T20:19:42.305Z · EA · GW

I have been quietly thinking "this is crypto money and it could vanish anytime." But I never said it out loud, because I knew people like Eliezer would say the kind of thing Eliezer said in the tweet above: "you're no expert, people way deeper into this stuff than you are putting their life savings in FTX, trust the market." It's a strangely inconsistent point of view from Eliezer in particular, who's expressed that his faith in the EMH "has been shaken."

What Eliezer's ignoring in his tweet here is that the people who were skeptical of FTX, or crypto generally, mostly just didn't invest, and thus had no particular incentive to scrutinize FTX for wrongdoing. As it turns out, the only people looking closely enough at FTX were their rivals, who may have been doing this strategically in order to exploit vulnerabilities, and thus were incentivized not to spread this information until they were ready to trigger catastrophe. If there's money in scrutinizing a company, there's no money in releasing that information until after you've profited from it.

In my opinion, we need dedicated risk management for the EA community. The express purpose of risk management would be to start with the assumption that markets are not efficient, to brainstorm all the hazards we might face without a requirement to be rigorously quantitative, to prioritize those hazards according to severity and likelihood, and to figure out strategies to mitigate them. And to be rude about it.
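To make that concrete, here is a minimal sketch of what a first-pass risk register might look like; the hazards, severity scores, and likelihoods are all hypothetical numbers invented for illustration:

```python
# Toy risk register: rank hypothetical hazards by expected loss.
from dataclasses import dataclass

@dataclass
class Hazard:
    name: str
    severity: float    # rough damage if it occurs, arbitrary 0-10 scale
    likelihood: float  # rough probability over some horizon, 0-1

    @property
    def expected_loss(self) -> float:
        return self.severity * self.likelihood

hazards = [
    Hazard("major donor collapses or withdraws", severity=9, likelihood=0.10),
    Hazard("grant clawbacks hit grantees", severity=6, likelihood=0.05),
    Hazard("reputational scandal around a public figure", severity=7, likelihood=0.15),
]

# Mitigation effort goes to the largest expected losses first.
for h in sorted(hazards, key=lambda h: h.expected_loss, reverse=True):
    print(f"{h.name}: expected loss {h.expected_loss:.2f}")
```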

I think this does point to a serious failure mode within EA. Deference to leadership + insistence on quantitative models + norms of collegiality + lack of formal risk assessment + altruistic focus on other people's problems -> systemic risk of being catastrophically blindsided more than once.

Comment by AllAmericanBreakfast on AllAmericanBreakfast's Shortform · 2022-11-12T05:14:26.380Z · EA · GW

EA needs its own risk assessment team. I'm sure I'm not the only one who looked at FTX and quietly thought to myself, "this is crypto money and it could vanish anytime." We do a lot of unasked-for risk management on behalf of the planet. How can we fill that role if we can't even do an adequate job of managing risk for the movement itself?

EA risk management should focus on things like:

  • Protecting grantees from making major life changes based on winning a grant before their funding is absolutely locked in.
  • Protecting organizations from disbursing grants until they're sure they won't have to claw them back.
  • Preventing the EA movement from spreading infohazards.
  • Considering the risks of aims like movement growth, media engagement, and branding organizations as "part of EA."

Comment by AllAmericanBreakfast on We must be very clear: fraud in the service of effective altruism is unacceptable · 2022-11-11T18:09:09.149Z · EA · GW

Yes, I agree that believing the world may be about to end would tend to motivate more rule-breaking behavior in order to avoid that outcome. I'll say that I've never heard anybody make the argument "Yes, AGI is about to paperclip the world, but we should not break any rules to prevent that from happening, because that would be morally wrong."

Usually, the argument seems to be "Yes, AGI is about to paperclip the world, but we still have time to do something about it and breaking rules will do more harm than good in expectation," or else "No, AGI is not about to paperclip the world, so it provides no justification for breaking rules."

I would be interested to see somebody bite the bullet and say:

  • The world is about to be destroyed.
  • There is one viable strategy for averting that outcome, but it requires a lot of rule-breaking.
  • We should not take that strategy, due to the rule-breaking, and let the world be destroyed instead.

Comment by AllAmericanBreakfast on We must be very clear: fraud in the service of effective altruism is unacceptable · 2022-11-11T17:36:44.220Z · EA · GW

I’ll have to think about that. I’ve been working on a response, but on consideration, perhaps it’s best to reserve “utilitarianism” for the act of evaluating world-states according to overall sentient affinity for those states.

Utilitarianism might say that X is bad insofar as people experience the badness of X. The sum total of badness that people subjectively experience from X determines how bad it is.

Deontology would reject that idea.

And it might be useful to have utilitarianism refuse to accept that “deontology might have a point,” and vice versa.

Comment by AllAmericanBreakfast on The FTX Future Fund team has resigned · 2022-11-11T06:34:02.065Z · EA · GW

I agree with this. Actually, I think we could go further and initiate some form of productive public dialog with the wider world on this question. "Do you think that we ought to take money in the EA ecosystem and pay it back to people [potentially] defrauded by FTX, or should we put this money into the charities for which it was intended?"

That seems like responsible stewardship, and I'd expect people's opinions would vary widely.

The question would be how we'd make such decisions, how we'd hold this dialog, and how much time and energy we'd want to put into that endeavor. One way might be to solicit input from groups that we think ought to have a say: charities we donate to, ethical thinkers, community leaders, and people who lost money in the FTX meltdown, to name a few. We could potentially make the decision by running some sort of vote, which could be as sophisticated as we like. We could vote on whether to return the money, but also how much of it should be returned.

Just brainstorming here - I don't expect these are the ideal ways to deal with this. Just a starting point.

Comment by AllAmericanBreakfast on We must be very clear: fraud in the service of effective altruism is unacceptable · 2022-11-11T06:27:01.297Z · EA · GW

“It is people who are uncertain about whether utilitarianism is correct in the first place who decide to factor in moral uncertainty. Also open question how you actually factor it in, and whether this improved version also doesn't run into its own set of repugnant conclusions.”

Utilitarianism factors in uncertainty, moral and epistemic. Sure, if you can find a way to criticize factoring uncertainty into utilitarianism, I'm all ears! But of course, whatever the superior solution turns out to be is what utilitarianism recommends as well. Utilitarianism is best thought of as something engineered, not given.

“I would also like to separate moral uncertainty from moral parliament. Moral parliament is usually for multiple people with different values to provide their inputs to a decision process (such as superintelligent AI's values). Moral uncertainty can exist inside the mind of a single person.”

I've always heard of moral parliament as being primarily about an individual reconciling their own different moral intuitions into a single aggregate judgment. Never heard it used in the sense you're describing. Here's Newberry & Ord, which is clearly about reconciling one's own diverse moral intuitions, rather than a way of aggregating the moral judgments of a group.

“We introduce a novel approach to the problem of decision-making under moral uncertainty, based on an analogy to a parliament. The appropriate choice under moral uncertainty is the one that would be reached by a parliament comprised of delegates representing the interests of each moral theory, who number in proportion to your credence in that theory.”

It does seem helpful to have a term for aggregating moral judgments of multiple people, but "moral parliament" is already taken.
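As a deliberately simplified illustration of the Newberry & Ord procedure (their actual mechanism is subtler than a straight weighted vote), here is a toy sketch in which each theory's delegates get influence proportional to one's credence in it; the theories, credences, and approval scores are all invented:

```python
# Toy "moral parliament": pick the option with the highest
# credence-weighted approval across moral theories.
credences = {"utilitarianism": 0.5, "deontology": 0.3, "virtue_ethics": 0.2}

# How strongly each theory approves of each option, on a -1 to 1 scale.
approvals = {
    "donate":     {"utilitarianism": 0.9, "deontology": 0.4, "virtue_ethics": 0.6},
    "do_nothing": {"utilitarianism": -0.5, "deontology": 0.1, "virtue_ethics": -0.2},
}

def parliament_choice(credences, approvals):
    def weighted_score(option):
        return sum(credences[t] * approvals[option][t] for t in credences)
    return max(approvals, key=weighted_score)

print(parliament_choice(credences, approvals))  # -> donate
```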

“Utilitarianism comes with more assumptions than a vague non-formalised sense of ‘do what you think is better’, it formalises ‘better decisions’ in a very specific way.”

I was going to keep arguing, but I wanted to ask - it seems like you might be concerned that utilitarianism is "morally unfalsifiable." In general, my own argument here may convey the idea that "whatever moral framework is correct is utilitarian." In which case, it's only tautologically "true" and doesn't provide any actual decision-making guidance of its own. I don't think this is actually true of utilitarianism, but I can see how my own writing here could give that impression. Is this getting at the point you're making?