Comment by elle on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T09:47:47.423Z · score: 3 (2 votes) · EA · GW

"...but do not think that they are smart or committed enough to be engaging at your level?" was intended to be from a generic insecure (or realistic) EA's perspective, not yours. Sorry for my confusing phrasing.

Comment by elle on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T09:31:46.838Z · score: 7 (2 votes) · EA · GW

How long ago did you attend your CFAR workshop? My sense is that the content CFAR teaches and who the teachers are have changed a lot over the years. Maybe they've gotten better (or worse?) about teaching the "true form."

(Or maybe you were saying you also didn't get the "true form" even in the more recent AIRCS workshops?)

Comment by elle on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T22:00:38.766Z · score: 3 (2 votes) · EA · GW

So, to clarify: this program is for people who are already mostly sure they want to work on AI Safety? That is, a person who is excited about ML, and would maaaaybe be interested in working on safety-related topics, if they found those topics interesting, is not who you are targeting?

Comment by elle on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T21:56:22.804Z · score: 14 (7 votes) · EA · GW

If you feel comfortable sharing: who are the people whose judgment on this topic you think is better?

Comment by elle on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T21:48:56.469Z · score: 12 (6 votes) · EA · GW

Yeah, I am sympathetic to that. I am curious how you decide where to draw the line here. For instance, you were willing to express judgment of QRI elsewhere in the comments.

Would it be possible to briefly list the people or orgs whose work you *most* respect? Or would the omissions be too obvious?

I sometimes wish there were good ways to more broadly disseminate negative judgments or critiques of orgs/people from thoughtful and well-connected people. But, understandably, people are sensitive to that kind of thing, and it can end up eating a lot of time and weakening relationships.

Comment by elle on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T19:05:49.241Z · score: 3 (2 votes) · EA · GW

What are your regular go-to sources of information online? That is, are there certain blogs you religiously read? Vox? Do you follow the EA Forum or LessWrong? Do you mostly read papers that you find through some search algorithm you previously set up? Etc.

Comment by elle on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T18:58:52.971Z · score: 11 (5 votes) · EA · GW

4) You seem to have had a naturally strong critical-thinking streak since you were quite young (e.g., you talk about thinking that various mainstream ideas were dumb). Any unique advice for how to develop this skill in people who do not have it naturally?

Comment by elle on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T18:58:05.007Z · score: 11 (6 votes) · EA · GW

3) I've seen several places where you criticize fellow EAs for their lack of engagement or critical thinking. For example, three years ago, you wrote:

I also have criticisms about EAs being overconfident and acting as if they know way more than they do about a wide variety of things, but my criticisms are very different from [Holden's criticisms]. For example, I’m super unimpressed that so many EAs didn’t know that GiveWell thinks that deworming has a relatively low probability of very high impact. I’m also unimpressed by how many people are incredibly confident that animals aren’t morally relevant despite knowing very little about the topic.

Do you think this has improved at all? And what are the current things that you are annoyed most EAs do not seem to know or engage with?

Comment by elle on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T18:57:31.357Z · score: 27 (12 votes) · EA · GW

2) Somewhat relatedly, there seems to be a lot of angst within EA related to intelligence / power / funding / jobs / respect / social status / etc., and I am curious if you have any interesting thoughts about that.

Comment by elle on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T18:56:34.129Z · score: 15 (5 votes) · EA · GW

1) Do you have any advice for people who want to be involved in EA, but do not think that they are smart or committed enough to be engaging at your level? Do you think there are good roles for such people in this community / movement / whatever? If so, what are those roles?

Comment by elle on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T18:56:01.912Z · score: 51 (23 votes) · EA · GW

Reading through some of your blog posts and other writing, I get the impression that you put a lot of weight on how smart people seem to you. You often describe people or ideas as "smart" or "dumb," and you seem interested in finding the smartest people to talk to or bring into EA.

I am feeling a bit confused by my reactions. I am a) excited by the idea of getting the "smart people" together so that they can help each other think through complicated topics and make more good things happen, but also b) a bit sad and left out that I am probably not one of the smart people.

Curious about your thoughts on a few things related to this... I'll put my questions as separate comments below.

Comment by elle on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T17:37:48.680Z · score: 27 (11 votes) · EA · GW

In "Ways I've changed my mind about effective altruism over the past year" you write:

I feel very concerned by the relative lack of good quality discussion and analysis of EA topics. I feel like everyone who isn’t working at an EA org is at a massive disadvantage when it comes to understanding important EA topics, and only a few people manage to be motivated enough to have a really broad and deep understanding of EA without either working at an EA org or being really close to someone who does.

I am not sure if you still feel this way, but this makes me wonder what the current conversations are about with other people at EA orgs. Could you give some examples of important understandings or new ideas you have gained from such conversations in the last, say, 3 months?

Comment by elle on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-18T08:31:05.558Z · score: 9 (4 votes) · EA · GW

Is there any public information on the AI Safety Retraining Program other than the MIRI Summer Update and the Open Phil grant page?

I am wondering:

1) Who should apply? How do they apply?

2) Have there been any results yet? I see two grants were given as of Sep 1st; have either of those been completed? If so, what were the outcomes?

Comment by elle on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-18T08:14:28.262Z · score: 8 (3 votes) · EA · GW

You write:

"I think that the field of AI safety is growing in an awkward way. Lots of people are trying to work on it, and many of these people have pretty different pictures of what the problem is and how we should try to work on it. How should we handle this? How should you try to work in a field when at least half the "experts" are going to think that your research direction is misguided?"

What are your preliminary thoughts on the answers to these questions?

Comment by elle on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-18T08:11:35.510Z · score: 4 (3 votes) · EA · GW

In your opinion, what are the most helpful organizations or groups working on AI safety right now? And why?

In parallel: what are the least helpful organizations or groups working on (or claiming to work on) AI safety right now? And why?

Comment by elle on Healthy Competition · 2019-11-04T05:29:03.685Z · score: 14 (11 votes) · EA · GW

This article reminded me that I sometimes wish for a competitor to 80K. 80K tries to do a lot: research and write good career advice, coach people, create advanced and accessible EA-related content (via the podcast), network with and connect extremely high-impact people... It is not obvious to me that these activities should all be grouped together. Maybe they should. They have some synergies.

But, for instance, my impression is that it is difficult to receive coaching from 80K, unless you are especially impressive. Perhaps there could be a career-coaching-focused organization with a lower bar for providing coaching. This could offer some 80K competition, but not be too competitive. Maybe it could be cooperative-competitive, if that makes sense.

Comment by elle on Effective Altruism and Everyday Decisions · 2019-09-19T18:48:18.629Z · score: 20 (13 votes) · EA · GW

I broadly agree with this article, but some part of me felt... uncomfortable?... with the topic. So I tried to give voice to that part of me. Very uncertain about this, and it is a bit confusing.


I think we often build up pictures/stories of ourselves based on our regular habits/actions. If I exercise every day, I start to think of myself as athletic/healthy/strong. If I wipe the counters in the kitchen, it contributes to my sense of responsibility/care-taking/cleanliness. If I do X that I believe is wasteful (i.e. common opinion says that X is wasteful and I have not seen any analyses that disprove the common opinion), I think of myself as a more selfish/wasteful/immoral person.

It seems easier for people to build up stories around actions that are direct/concrete/tangible/etc. For example, I hear there are GiveWell employees who feel like their work is so removed from outcomes that it is difficult to feel motivated. Even though GiveWell does much more "direct work" than most of us will ever do! I think actions we feel are present/close/non-abstract/non-alienating/etc. may influence our self-identity significantly.

Negative self-images, if they get too strong, can be debilitating. At the very least, they are not fun. Often, to avoid negative self-image, we develop stories about why it's "OK" to be wasteful, even if we would not want everyone else to be. Stories such as "my time is super valuable."

Stories matter. If you attain a position of power, they influence what actions you take with that power. And stories interact and compound. For example, my feeling guilty about being wasteful can lead to a reinforcement of the belief that my time is valuable. Believing that my time is really valuable can lead to me making more wasteful decisions. Decisions like: "It is totally fine for me to buy all these expensive ergonomic keyboards simultaneously on Amazon and try them out, then throw away whichever ones do not work for me." Or "I will buy this expensive exercise equipment on a whim to test out. Even if I only use it once and end up trashing it a year later, it does not matter."

The thinking in the examples above worries me. People are bad at reasoning about when to make exceptions to rules like "try to behave in non-wasteful ways", especially when the exception is personally beneficial. And I think each exception can weaken your broader narrative about what you value and who you are.

I think I want people to default towards the common-sense, non-wasteful actions (as long as the cost feels pretty low to them), until they have read or made a well-reasoned case that the action is not wasteful in the way common opinion indicates (e.g., I liked Rob's article that complicates our narrative around recycling). I suspect that this approach will lead to a reinforcement of narratives/values that seem good to me.

Comment by elle on I Estimate Joining UK Charity Boards is worth £500/hour · 2019-09-17T20:50:28.144Z · score: 1 (1 votes) · EA · GW

I appreciate the other comments about how the model does not take into account the base impact of the charities. Also:

1) Do trustees take on some form of legal responsibility for charities? If so, it could be risky to get involved with a charity you know little about.

2) Do you know how many other trustees are typically already involved? If you are one of three, I could see you having an easier time influencing a charity than if you are, say, one of seven.

Comment by elle on Funding chains in the x-risk/AI safety ecosystem · 2019-09-10T05:44:20.403Z · score: 1 (1 votes) · EA · GW

Personal opinion: the circular layout seems more useful. I like that it more clearly demonstrates a) entities that are connected to only one other entity in the graph (example: Inst. Phil. Research is only connected to BERI, Thiel is only connected to MIRI), and b) how many arrows are going into each node (example: it's easier to see that MIRI has the widest range of supporters of this group, followed by CEA and CFAR).

Comment by elle on Funding chains in the x-risk/AI safety ecosystem · 2019-09-10T05:37:10.227Z · score: 1 (3 votes) · EA · GW

Isn't EA Grants part of CEA? Perhaps there should be an arrow from CEA to EA Grants if so.

Comment by elle on Cause X Guide · 2019-09-05T09:41:47.210Z · score: 7 (5 votes) · EA · GW

Does this mean you no longer endorse the original statement you made ("there is little evidence of benefit from schooling")?

I'm feeling confused... I basically agreed with Khorton's skepticism about that original claim, and now it sounds like you agree with Khorton too. It seems like you, in fact, believe something quite different from the original claim; your actual belief is something more like: "for some children, the benefits of schooling will not outweigh the torturous experience of attending school." But it doesn't seem like there has been any admission that the original claim was too strong (or, at the very least, that it was worded in a confusing way). So I'm wondering if I'm misinterpreting.

Comment by elle on [deleted post] 2019-05-26T08:23:00.653Z

I am confused about what exactly you are trying to communicate with this post and its partner post. My sense is that you are saying something like:
1) Look, socialist-leaning and capitalist-leaning EAs, the policies you probably want are essentially the same, work together and make something happen, and
2) Look, EAs, I can write nearly identical posts with titles that will make you assume the posts are at odds - challenge your assumptions. Realize the power language has over you.

Or maybe you primarily want engagement with the policies you are most excited about, but want comments from both socialists and capitalists, and you felt this was the best way to achieve that?

I seek clarity.

(I feel stupid, not being able to interpret you well, albeit only after one quick read-through. But, I think folks should typically make comments when confused, so here I am.)

Comment by elle on Please use art to convey EA! · 2019-05-25T21:20:00.768Z · score: 19 (12 votes) · EA · GW

I like your encouragement to create more art. However, I noticed cringing at some of your ideas in the appendix. I worry that they would end up being "poorly executed cultural artefacts [that] may put EA into disrepute" as you put it.

I do not feel capable of explaining exactly where the cringe reaction is coming from, but a few examples:

I do not like the idea in Beautopia of equating physical appearance with moral goodness, given that a) it is already an issue that people assume positive personality traits when they see physically attractive people and b) it assumes there is some objective and real "good" that can be calculated. And the final plot line implying that it is good to kill people we think are evil seems like a bad meme to spread.

Dead baby currency seems overly simplistic and insensitive, although I am having a hard time putting words to why. It also triggers scrupulosity concerns.

Finally, I am wary of how you refer to "Africa" monolithically.