Posts

Against value drift 2019-10-29T20:04:23.510Z · score: 10 (25 votes)
Task Y: representing EA in your field 2019-03-24T18:37:00.498Z · score: 11 (10 votes)
The Home Base of EA 2019-03-22T05:07:54.017Z · score: 21 (19 votes)
EA Hotel Fundraiser 1: The story 2018-12-27T12:15:55.157Z · score: 64 (33 votes)

Comments

Comment by toonalfrink on Does climate change deserve more attention within EA? · 2019-10-29T18:56:52.520Z · score: 1 (1 votes) · EA · GW

If you take this model a step further, it suggests working on the most tractable problem that others are spending resources on, regardless of its impact, because solving it will maximally free up energy for other causes.

Sounds like something someone should simulate to see if this effect is strong enough to take into account.
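
Since the comment explicitly invites a simulation, here is a minimal sketch of one, under assumptions that are entirely my own (fixed impact per unit of effort, outside altruists locked onto one cause each until it is solved, freed effort flowing to the highest-impact open cause). It illustrates the shape of the model, not the real-world effect size:

```python
import random

# Toy model, purely illustrative. Each cause has an impact per unit of
# effort, an amount of effort remaining until it is solved, and some
# outside (non-EA) effort locked onto it until it is solved.

def make_causes(n, rng):
    return [{"impact": rng.uniform(0.1, 10.0),      # impact per unit effort
             "remaining": rng.uniform(10.0, 100.0), # effort left to solve
             "outside": rng.uniform(0.0, 3.0)}      # outside effort locked on
            for _ in range(n)]

def pick_target(causes, strategy):
    open_ = [c for c in causes if c["remaining"] > 0]
    if not open_:
        return None
    if strategy == "impact":   # work directly on the highest-impact cause
        return max(open_, key=lambda c: c["impact"])
    # "tractable": most tractable cause that others are spending resources on
    crowded = [c for c in open_ if c["outside"] > 0] or open_
    return min(crowded, key=lambda c: c["remaining"])

def run(causes, strategy, steps=200, our_effort=1.0):
    causes = [dict(c) for c in causes]
    total = 0.0
    for _ in range(steps):
        target = pick_target(causes, strategy)
        if target is None:
            break
        # Our effort goes to the chosen target.
        spend = min(our_effort, target["remaining"])
        target["remaining"] -= spend
        total += spend * target["impact"]
        # Outside effort works on its own cause; when a cause is solved,
        # its outside effort is freed and reallocated.
        freed = 0.0
        for c in causes:
            if c["remaining"] > 0:
                s = min(c["outside"], c["remaining"])
                c["remaining"] -= s
                total += s * c["impact"]
            if c["remaining"] <= 0:
                freed += c["outside"]
                c["outside"] = 0.0
        best = pick_target(causes, "impact")
        if best is not None:
            best["outside"] += freed
    return total

rng = random.Random(0)
world = make_causes(10, rng)
print("direct impact strategy:  ", run(world, "impact"))
print("free-up-others strategy: ", run(world, "tractable"))
```

Whether the "free up others" strategy wins in this toy world depends heavily on how much outside effort is locked up and where it flows once freed, which is exactly the parameter the real question hinges on.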

Comment by toonalfrink on Announcing the launch of the Happier Lives Institute · 2019-06-19T16:16:38.767Z · score: 18 (8 votes) · EA · GW
[Our] research group is investigating the most promising giving opportunities among mental health interventions in lower and middle-income countries.

Any reason why you're focusing on interventions that target mental health directly and explicitly, instead of any intervention that might increase happiness indirectly (like bednets)?

Comment by toonalfrink on Please use art to convey EA! · 2019-05-26T20:29:05.120Z · score: 9 (6 votes) · EA · GW

Can we come up with a list of existing pieces of art that come close to this? I don't expect good ideas to come from first principles, but there might be some type of art out there that is non-cringy and conveys elements of EA thinking properly.

I'll start with Schindler's List, and especially this scene, where the protagonist breaks down while calculating just how many more lives he could have saved if he had sold his car, his jewelry, etc.

Comment by toonalfrink on A Framework for Thinking about the EA Labor Market · 2019-05-15T15:19:27.267Z · score: 10 (4 votes) · EA · GW

Okay, you've convinced me that a US-based EA organisation should consider raising its wages to attract top talent.

This data does make me doubt the wisdom of basing non-local activities in the US, but that is another matter.

Comment by toonalfrink on A Framework for Thinking about the EA Labor Market · 2019-05-15T14:03:27.351Z · score: 8 (4 votes) · EA · GW

It does provide clarity, and I can imagine that there are unfortunate cases where those entry level salaries aren't enough.

As I said elsewhere in this thread, I think this problem would be best resolved simply by asking how much an applicant needs, instead of raising wages across the board. The latter would cause all kinds of problems: it would worsen the already latent center/periphery divide in EA by increasing inequality, it would make it harder for new organisations to compete, it would reduce the net number of people that we can employ, etc.

But I could be wrong, and I sense that some of my thoughts might be ideologically tainted. If you feel the urge to point me at some econ 101, please do.

Comment by toonalfrink on A Framework for Thinking about the EA Labor Market · 2019-05-15T13:33:26.704Z · score: 3 (2 votes) · EA · GW

30 was just an arbitrary number. Is London still hard to live in on 60? Note that the suggestion is to raise salaries from 75k to 100k. I can't imagine many cases where 75k is prohibitive, except for those who feel a need to be competitive with their peers in industry (which, fwiw, is not something I outright disapprove of).

We should probably operationalize this argument with actual data instead of reasoning from availability.

Comment by toonalfrink on A Framework for Thinking about the EA Labor Market · 2019-05-15T10:26:19.789Z · score: 4 (3 votes) · EA · GW

Given the numbers that we have in mind, these examples are all very specific to the US.

Medical expenses don't get much past $2k per year in most European countries. The only place where the cost of living is prohibitively high past a ~$30k income is San Francisco.

I'm not arguing against the idea that some people exist who should be given the $150k that is needed to unlock their talents. I'm arguing that this group of people might be very small, and concentrated in your bubble.

I think that's the crux of the argument. If a majority of senior people needed $150k to get by, I'd agree that that should be the wage you offer. If these people make up just 1% of the population (which seems true to me), offering $150k across the board is just going to cause a lot of subtle cultural damage.

Comment by toonalfrink on A Framework for Thinking about the EA Labor Market · 2019-05-15T10:13:26.810Z · score: 1 (1 votes) · EA · GW
a lot of resentment would emerge

To the extent that this would cause resentment, I'd interpret that as a perception of a higher counterfactual wage, which would mean that the execution wasn't done well.

Comment by toonalfrink on A Framework for Thinking about the EA Labor Market · 2019-05-14T10:57:15.034Z · score: 5 (2 votes) · EA · GW

It's unclear to me what you mean by privilege. I'm trying to imagine a situation where making 75k is not enough for a low-privilege person, but I can't think of any. AFAIK 75k is an extremely high wage. I know a CEO of a bank who makes that.

Comment by toonalfrink on A Framework for Thinking about the EA Labor Market · 2019-05-14T10:13:31.432Z · score: -1 (2 votes) · EA · GW

Don't advertise the wage in the ad. Ask candidates how much they need to be satisfied, then give them that amount or the amount that they are economically worth to you, whichever is lower. Discourage employees from disclosing how much they make.
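
A minimal sketch of that rule, with hypothetical names of my own (estimating a candidate's economic worth is the hard part, and is left as an input here):

```python
def wage_offer(candidate_ask: float, economic_worth: float) -> float:
    """Offer whichever is lower: what the candidate says they need to be
    satisfied, or what they are economically worth to the organisation."""
    return min(candidate_ask, economic_worth)

# e.g. a candidate asking 60k who is worth 100k to you gets offered 60k
assert wage_offer(60_000, 100_000) == 60_000
```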

Comment by toonalfrink on A Framework for Thinking about the EA Labor Market · 2019-05-14T10:09:53.183Z · score: 6 (2 votes) · EA · GW

In preventing wage dissatisfaction, I think it's better to look at perceived counterfactuals. These can come from being used to a certain wage, from a certain counterfactual wage being very salient to you, or from your peers making a certain wage.

You seem to assume something like "people don't like to accept a wage that is lower than they can get". I suggest replacing that with "people don't like to accept a wage that is lower than they feel they can get".

I know some people who are deliberately keeping their income frozen at 15k so they won't get used to more. They reason that if they did, not only would they be psychologically attached to that wage, but to a lesser extent so would their peers. In some sense they are keeping up a healthy cultural environment where it's possible to make little and still be satisfied.

I've heard of some organisations that don't have a fixed wage for a job, but a maximum. They ask their applicants "how much would you need to be satisfied", and that's how much they get. I'd expect that this practice, combined with a culture that doesn't overly discuss income or flaunt wealth, would be the best way to keep everyone satisfied, compete with industry, and still keep the average wage low.

Comment by toonalfrink on Open Thread #44 · 2019-05-03T16:31:10.394Z · score: 3 (2 votes) · EA · GW

I sometimes think about seeking funding outside of EA to increase the amount of available EA funding.

But I've never seriously pursued it. I have no idea what is available, or where to look. Governments? Foundations? With which ones does an x-risk project have a chance? What's a good strategy for applying to them?

I'd be very happy if someone dived into this.

Comment by toonalfrink on Psychedelics Normalization · 2019-04-30T10:58:50.766Z · score: 11 (3 votes) · EA · GW

You forgot ibogaine, which seems to be the most compelling example. According to lots of anecdotes across the internet, it reliably cures decades-old heroin addictions in a single sitting.

Still, I don't think psychedelic use is necessarily a good thing. It makes people more open to experience, which for some will be a door to madness. See, for example, Scott Alexander's writings about it.

Comment by toonalfrink on Does climate change deserve more attention within EA? · 2019-04-24T20:49:15.953Z · score: 7 (7 votes) · EA · GW

Another consideration comes to mind: climate change is currently taking up a large amount of attention from competent altruistic people. If the issue were to be solved or its urgency reduced, some of those resources might flow into EA causes.

Comment by toonalfrink on EA Hotel Fundraiser 4: Concrete outputs after 10 months · 2019-04-21T15:07:51.839Z · score: 3 (2 votes) · EA · GW

fwiw, I personally give it >75% probability that we will be able to survive at least until the next round.

Comment by toonalfrink on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-11T19:37:01.113Z · score: 6 (4 votes) · EA · GW

I'm certainly open to considering this business model for the hotel.

Comment by toonalfrink on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-11T19:32:16.668Z · score: 7 (5 votes) · EA · GW

The hotel did apply.

The marginal per-EA cost of supplying runway is probably lower with shared overhead and low COL like that.

It's about $7,500 per person per year.

Comment by toonalfrink on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T13:38:06.437Z · score: 10 (8 votes) · EA · GW

As a potential grant recipient (not in this round) I might be biased, but I feel like there is a clear answer to this. No one is able to level up without criticism, and the quality of your decisions will often be bottlenecked by the amount of feedback you receive.

Negative feedback isn't inherently painful. It only hurts if there is an alief that failure is not acceptable. Of course the truth is that failure is necessary for progress, and if you truly understand this, negative feedback feels good. Even if it's given in bad faith.

Given that grantmakers are essentially at the steering wheel of EA, we can't afford for those people not to internalize this. They need to know all the criticism to make a good decision; they should cherish it.

Of course, we can help them reach this state of mind by celebrating their willingness to open up to scrutiny, along with the scrutiny itself.

Comment by toonalfrink on When should EAs allocate funding randomly? An inconclusive literature review. · 2019-04-01T17:20:25.079Z · score: 3 (2 votes) · EA · GW
For this specific post, I probably won't add a summary because my guess is that in this specific case the size of the beneficial effect doesn't justify the cost.

I still think you should write it. This looks like an important bit of information, but not one most people will read in full, and I estimate a summary would increase the number of readers fivefold.

Comment by toonalfrink on The Case for the EA Hotel · 2019-04-01T01:50:20.151Z · score: 4 (3 votes) · EA · GW

I wrote that intense model, and I agree that it's not a good post. My apologies.

Comment by toonalfrink on The Case for the EA Hotel · 2019-04-01T01:47:09.219Z · score: 8 (3 votes) · EA · GW

I imagine EAs getting into all sorts of fields and industries while staying in the community, and this seems so valuable that it makes me second-guess the hotel.

People don't stay in the community because, if you're not involved professionally, there's not much left to gain. We should change that.

I've proposed a solution to this problem here and here.

Comment by toonalfrink on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-04-01T01:41:34.539Z · score: 4 (6 votes) · EA · GW
I think part of why Y Combinator is so successful is because funding so many startups has allowed them to build a big dataset for what factors do & don't predict success. Maybe this could become part of the EA Hotel's mission as well.

Good idea. It will be somewhat tricky since we don't have the luxury of measuring success in monetary terms, but we should certainly brainstorm about this at some point.

Comment by toonalfrink on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-03-31T20:38:01.234Z · score: 9 (6 votes) · EA · GW

Thank you.

With the hotel, I see a bunch of little hints that it's not worth my time to attempt an in-depth evaluation of the hotel's leaders. E.g. the focus on low rent, which seems like a popular meme among average and below average EAs in the bay area, yet the EAs whose judgment I most respect act as if rent is a relatively small issue.

Your post suggests that there is some class of EAs that is a lot more competent than everyone else, which means that what everyone else is doing doesn't matter all that much. While I haven't met (or recognized) a lot of people who impress me this much, I still give this idea a lot of credence. I'd like to verify it for myself, to get on the same page with you (and perhaps even change my plans). Could you name some examples, besides Drexler and Bostrom, of EAs who are at this level of competence?

I'm not looking for credentials; I'm looking for resources that demonstrate how these people think, or stories about impressive feats, so I can convince my S1 to sit down and be humble (and model their minds so I can copy the good bits).

Podcasts, maybe?

Comment by toonalfrink on The Case for the EA Hotel · 2019-03-31T15:00:12.337Z · score: 9 (5 votes) · EA · GW

I have burned out slightly, but this has happened every 6 months or so for the past 5 years, so it's probably not caused by the hotel.

Comment by toonalfrink on Altruistic action is dispassionate · 2019-03-31T12:38:24.124Z · score: 3 (2 votes) · EA · GW

At the very least, I agree that one coherent thread is more healthy and something to strive for, but in choosing a thread you might want to be aware of the various stakeholders and their incentives. I find that counting myself and my needs into my moral framework makes my moral framework more robust.

Comment by toonalfrink on Altruistic action is dispassionate · 2019-03-31T02:01:44.665Z · score: 8 (3 votes) · EA · GW

I'd argue that humans would actually be better understood as an aggregate of agents, each with their own utility function. In your case, these agents might cooperate so well that your internal experience is that you're just one agent, but that's certainly not a human universal.

Comment by toonalfrink on EA Hotel Fundraiser 4: Concrete outputs after 10 months · 2019-03-31T01:55:43.840Z · score: 19 (9 votes) · EA · GW

I would rather not. This would pressure people into goodharting their projects for legibility, which is one of the things our setup is supposed to prevent.

(tldr: an agent is legible if a principal can easily monitor them, but legibility limits the agent's options to what is easy for the principal to measure, which might reduce performance)

Quite a few of our guests are not even on this list, but this doesn't mean they're sitting around doing nothing all day. They're doing illegible work that is hard or even impossible to evaluate at a distance. I put a few examples in the second caveat of the post.

(I realise this is at odds with the EA maxim of measuring outcomes. That's why we published this post: so the hotel could at least be evaluated in aggregate. I think it's neat that people doing illegible work can hide behind those doing legible work.)

Comment by toonalfrink on Altruistic action is dispassionate · 2019-03-30T17:53:05.728Z · score: 2 (4 votes) · EA · GW

I realise that I've been implicitly assuming this is true, which made me resist optimizing for impressions: doing so meant I could no longer convince myself that I was acting altruistically. The awful and hard-to-accept reality is that you sometimes do have to convince people in order for your work to be supported.

Comment by toonalfrink on Why is the EA Hotel having trouble fundraising? · 2019-03-28T20:10:14.366Z · score: 2 (2 votes) · EA · GW
1. Does RAISE/the Hotel have a standardized way to measure the progress of people self-studying AI? If so, especially if it's been vetted by AI risk organizations, it seems like that would go a long ways towards resolving this issue.

Not yet, but it's certainly a project that is on our radar. We also want to find ways to measure innate talent, so that people can tell earlier whether AIS research would be a good fit for them.

Comment by toonalfrink on Why is the EA Hotel having trouble fundraising? · 2019-03-28T14:11:23.167Z · score: 8 (6 votes) · EA · GW

I do think it affects their behavior, I just refuse to let it affect mine more than is strictly necessary, because I think it's a negative sum game.

Comment by toonalfrink on Why is the EA Hotel having trouble fundraising? · 2019-03-28T01:00:00.934Z · score: 12 (6 votes) · EA · GW

Strong upvoted, and thank you, because finally someone is honest about their doubts. You're as critical in your speech as you are in your thoughts. This should be standard, but it's rare.

projects that seem pretty tragic like “writing a novel on AI alignment” and “writing a mobile game” - it’s a difficult balance here, unoccupied rooms are doing nothing for the hotel but equally I doubt indulging these sorts of things are valuable

This is what I understand to be hits-based giving. If you have 20 rooms, you can make these kinds of weird gambles, and someone should be doing that.

Poor presentation- I found the post on expected value essentially incoherent as a pitch , but in all of the posts so far little thought seems to have been put into the elevator pitch of why fund this or what are the best aspects of the project are. Funders want a one paragraph or one sentence summary of why they should fund it which seems absent here

I take full responsibility for that. Perhaps I should have studied how other meta organisations estimate their value. It was hubristic of me to assume that I would be able to do it from scratch.

People don’t want to be associated with something low status

I'd rather assume EAs to be above status concerns when the stakes are this high.

not even close to every current/previous resident had made even a nominal donation of £5 to the campaign

I don't see why they should. At that point you're just manipulating impressions. I want to present an honest picture, and I don't want to engage in a signalling race to the bottom.

Perhaps that's naive.

Comment by toonalfrink on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-03-28T00:19:49.285Z · score: 2 (2 votes) · EA · GW
I'm not at all convinced that the counterfactual would be working on their problems in solitude.

I wouldn't be convinced either, but we interviewed our guests, and 15 out of 20 were already doing the same work before taking up residence at the hotel. They were either working part-time or burning through runway.

Comment by toonalfrink on Why is the EA Hotel having trouble fundraising? · 2019-03-27T20:32:07.930Z · score: 2 (2 votes) · EA · GW

That's a good point. You made me aware of a certain population of potential hotel residents who would be better off building career capital elsewhere. But I think "almost every case" is an overstatement. Here are some idealized examples, for the sake of argument:

  • The person with a high-profile career who decides to do independent research instead of taking a job at a multinational NGO that would eventually have led them to a lot more influence
  • The EA-adjacent software developer who would have drifted out of the community, if not for a place at the EA Hotel where they're doing useful knowledge work
  • The entrepreneurial person who starts an EA organisation at the hotel, instead of doing a second-rate Master's degree in relative obscurity because they were never good at caring about grades

Would you agree that the first would be a net loss, while the second and the third would be a net gain? I'm curious what you think our pool of residents is like, and how this influences your opinion.

Comment by toonalfrink on Does EA need an underpinning philosophy? Could sentientism be that philosophy? · 2019-03-27T18:42:00.383Z · score: 4 (6 votes) · EA · GW

Meta: I'm concerned about the number of downvotes I see that aren't accompanied by any justification. Consider that there is a lot of information value in a negative judgment. I imagine that the author would be very happy to hear about this, and more generally, I imagine that EA as a whole would skill up a *lot* faster if downvotes came with instructions.

Comment by toonalfrink on Should EA Groups Run Organ Donor Registration Drives? · 2019-03-27T18:41:41.990Z · score: 7 (4 votes) · EA · GW

Meta: I'm concerned about the number of downvotes I see that aren't accompanied by any justification. Consider that there is a lot of information value in a negative judgment. I imagine that the author would be very happy to hear about this, and more generally, I imagine that EA as a whole would skill up a *lot* faster if downvotes came with instructions.

Comment by toonalfrink on Why is the EA Hotel having trouble fundraising? · 2019-03-27T18:35:54.077Z · score: 11 (5 votes) · EA · GW
3. Guest output and testimonials are not ready to be released. It's clear that several donors would prefer evidence of concrete output over an estimate of the value added by living at the hotel.

Aiming for this Friday.

Comment by toonalfrink on Why is the EA Hotel having trouble fundraising? · 2019-03-27T18:32:33.617Z · score: 12 (8 votes) · EA · GW

I agree that feedback is extremely important. I even imagine that feedback is almost universally the bottleneck to growth. Feedback in the general sense: not just from people, but from experience as well.

We're giving guests 15 minutes of feedback per week, through personal check-ins with the manager (which is currently me). I can imagine that this is a bit less than what one would usually get from one's superior, and that this feedback is less useful because management is unlikely to be an expert on the subject at hand.

Coming from a different perspective: EA seems to be more generally constrained by mentorship. If all the mentors are already mentoring at full capacity, the next best thing is to let people try and figure things out by themselves (or read books about it). I'd guess that that is better than letting people sit at home and wait for their turn, so to speak, which seems to be the real counterfactual.

Comment by toonalfrink on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-03-27T18:24:13.375Z · score: 13 (6 votes) · EA · GW

Crossposting my comment from Facebook. Full disclosure: I work at the EA Hotel.

"Here's the article Greg mentions, and I think that it's the best argument for being lukewarm about the EA Hotel: https://www.effectivealtruism.org/articles/ea-neoliberal/

The tldr is that the neoliberals managed to change the world to adopt their ideology chiefly through convincing academia. This isn't just some random hypothesis: they claimed that this was how to do it, *and then they did it*. Academia is mostly influenced through weird things like prestige and respectability, and therefore the success of the EA movement hinges on the impression that it makes, which hinges not on its total output as much as the *average* quality of its organisations. It seems likely that the EA hotel would push this average down.

(one can argue against this, for example by disagreeing that academia sets the Overton window, or by arguing that ideology is just as likely to "trickle up" from the masses as it is to trickle down, or generally that there are other ways to set the Overton window. The neoliberal story is compelling though, and one could claim that the same thing happened with social justice recently)"

Comment by toonalfrink on EA is vetting-constrained · 2019-03-27T17:13:26.264Z · score: 16 (6 votes) · EA · GW

Thanks! I wrote it :)

Comment by toonalfrink on Apology · 2019-03-26T14:46:31.062Z · score: 8 (6 votes) · EA · GW

Can't we just allow people to change their mind and retract their statements?

Comment by toonalfrink on The career coordination problem · 2019-03-25T11:11:33.834Z · score: 4 (2 votes) · EA · GW

Just wanted to mention that this problem is orthogonal to the related problem of generating enough work to do in the first place, and before you start thinking about how to cut up the pie better, you might want to consider making the pie bigger instead.

Unless making the pie bigger is less neglected. I guess this problem can be applied to itself :)

Comment by toonalfrink on EA jobs provide scarce non-monetary goods · 2019-03-22T03:10:23.474Z · score: 3 (2 votes) · EA · GW

Nope, it's full-time. Right now two of us are doing a side project, but that's not usual.

Comment by toonalfrink on Announcement: Join the EA Careers Advising Network! · 2019-03-21T04:23:15.863Z · score: 4 (3 votes) · EA · GW

Hey, this is great! I'm not sure which role I should apply for. Do you have some definition of what makes a proper advisor? How many years of experience? What level of investment? Or shall I just write my own hero license?

Comment by toonalfrink on EA jobs provide scarce non-monetary goods · 2019-03-21T04:02:48.365Z · score: 8 (6 votes) · EA · GW

Data points:

  • We offered a job with no monetary reward at all (except for a place at the EA Hotel), and we still got 10 applications.
  • When we offered a job with a negative salary, we didn't get any applicants (yet).

Obviously, these numbers might be influenced by many factors besides pay.

Comment by toonalfrink on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T18:04:48.025Z · score: 9 (3 votes) · EA · GW

I'm glad that someone mentions this. I have a strong alief that misrepresenting your opinions to be more palatable is a bad idea if you're right. It pulls you into a bad equilibrium.

If you preach the truth, you might lose the respect of those who are wrong, but you will gain the respect of those who are right, and those are the people you want in your community.

Having said that, you really do have to be right, and I feel like not even EAs are up to the Herculean task of clearly seeing outside of their political intuitions. I for one have so far been wrong about many things that felt obvious to me.

I guess that's why we focus on meta-truth instead. It seems that the set of rules that arrives at truth is much more easily described than the truth itself.

Comment by toonalfrink on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T17:50:10.186Z · score: 6 (4 votes) · EA · GW

Downvoted because I felt that the "though not linked to" and the hyperbole in your comment suggest that you're coming from a subtly adversarial mindset.

(I'm telling you this because I like to see more people explain their downvotes. They carry great information value. No bad feels!)

Comment by toonalfrink on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T17:38:07.722Z · score: 12 (7 votes) · EA · GW

Appreciate the data!

Comment by toonalfrink on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T02:30:07.880Z · score: 0 (3 votes) · EA · GW

any rules we make will be reasonable

Nah, it does apply to itself :)

and we won't push people out for having an unfashionable viewpoint

But you think pushing them out is the right thing to do, correct?

Let me just make sure I understand the gears of your model.

Do you think one person with an unfashionable viewpoint would inherently be a problem? Or will it only become a problem when this becomes a majority position? Or perhaps, is the boundary the point where this viewpoint starts to influence decisions?

Do you think any tendency exists for the consensus view to drift towards something reasonable and considerate, or do you think that it is mostly random, or perhaps there is some sort of moral decay that we have to actively fight with moderation?

Surely, well-kept gardens die by pacifism, and so you want to have some measures in place to keep the quality of discussion high, both in the inclusivity/consideration sense and in the truth sense. I just hope that this is possible without banning topics, for most of the reasons stated by the OP. Before we start banning topics, I would want to look for ways that are less intrusive.

Case in point: it seems like we're doing just fine right now. Maybe this isn't a coincidence (or maybe I'm overlooking some problems, or maybe it's because we already ignore some topics).

Comment by toonalfrink on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T01:35:00.654Z · score: 6 (3 votes) · EA · GW

I wonder where this fear of extreme viewpoints comes from. It seems to be a crux.

I personally don't have an alief that there is a slippery slope here. It seems to me that there are some meta rules for discussion in place that will keep this from happening.

For example, it seems to me that EAs are very keen to change their minds, take criticism and data very seriously, bring up contrarian viewpoints, and practice epistemic humility, to name a few things. I would like to call this Epistemic Honor.

Do you think that our culture of epistemic honor is insufficient for preventing extreme viewpoints, to the point that we need drastic measures like banning topics? My impression is that it's more than enough, but please prove me wrong!

Comment by toonalfrink on SHOW: A framework for shaping your talent for direct work · 2019-03-14T00:07:31.227Z · score: 6 (6 votes) · EA · GW

I don't think you read too much Robin Hanson; it clarifies a lot of things :)

In some sense, I don't even think these people are wrong to be frustrated. You have to satisfy your own needs before you can effectively help others. One of these needs just happens to be the need to feel relevant. And like everything else, this is a systemic problem. EA should try to make people feel relevant if and only if they're doing good. If doing good doesn't get you recognition unless you're in a prestigious organisation, then we have to fix that.