EA for dumb people?

post by Olivia Addy · 2022-07-11T10:46:55.229Z · EA · GW · 160 comments

I've been involved in EA for nearly a year now. At first, it was super exciting. I resonated so much with the core ideas of EA, and I couldn't wait to get started with doing the most good I possibly could. I had no idea there was so much opportunity.

As I got further into it, my hopes started to fade, and I started to feel like I didn't really fit in. EA is pitched to the super intelligent in our society, those who did super hard degrees at Oxford or Harvard and learned to code at age 8. For me, I'm just average. I never stood out at school, I went to a mid-ranking university and studied sociology (which has a reputation for being an easy degree). I graduated, got an average job and am living an average life. I don't have some high-earning side hustle and I don't spend my spare time researching how we can make sure AI is aligned with human values.

I do, however, care a lot about doing the most good. So I really want to fit in here because that matters a lot to me. I want to leave the world a better place. But I feel like I don't fit, because frankly, I'm not smart enough. (I'm not trying to be self-deprecating here; I feel like I'm probably pretty average among the general population, and I didn't really ever feel 'not smart enough' before getting involved in EA.)

I totally understand why EA aims at the Oxford and Harvard graduates; of course, we want the most intelligent people working on the world's most pressing problems.

But most people aren't Oxford or Harvard graduates. Most people aren't even university graduates. So do we have a place in EA?

I want to be a part of this community, so I'm trying to make it work. But this leads me to be worried about a lot of other people like me who feel the same. They come across EA, get excited, only to find out that there's not really a place for them - and then they lose interest in the community. Even the idea of giving 10% of your salary can be hard to achieve if you're balancing the needs/wants of others in your family (who maybe aren't so EA minded) and considering the current rises in the cost of living.

I'm guessing here, because I have absolutely no stats to back this up and it's based mostly on my anecdotal experience - but we could potentially be losing a lot of people who want to be a part of this but struggle to be because EA is so narrowly targeted.

Whenever I come on the EA forum I literally feel like my brain is going to explode with some of the stuff that is posted on here, I just don't understand it. And I'm not saying that this stuff shouldn't be posted because not everyone can comprehend it. These are really important topics and of course we need smart people talking about them. But maybe we need to be aware that it can also be quite alienating to the average person who just wants to do good.

I don't have a solution to all this, but it's been on my mind for a while now. I re-watched this Intro to EA by Ajeya Cotra this morning, and it really re-invigorated my excitement about EA, so I thought I'd put this out there.

I'd be really keen to hear if anyone has any thoughts/feelings/ideas on this - I'm honestly not sure if I'm the only one who feels like this.


Comments sorted by top scores.

comment by Lukas_Gloor · 2022-07-12T12:18:36.287Z · EA(p) · GW(p)

I know that lukeprog's comment [EA(p) · GW(p)] is mostly replying to the insecurity about lack of credentials in the OP.  Still,  the most upvoted answer seems a bit ironic in the broader context of the question:

If you read the comment without knowing Luke, you might be like "Oh yeah, that sounds encouraging." Then you find out that he wrote this excellent 100+ page report on the neuroscience of consciousness, which is possibly the best resource on this on the internet, and you're like "Uff, I'm f***ed."

Luke is (tied with Brian Tomasik) the most genuinely modest person I know, so it makes sense that it seems to him like there's a big gap between him and even smarter people in the community. And there might be, maybe. But that only makes the whole situation even more intimidating.

It's a tough spot to be in and I only have advice that maybe helps make the situation tolerable, at least.

Related to the advice about Stoicism, I recommend [EA · GW] viewing EA as a game with varying levels of difficulty. 

Because life isn’t fair, the level of difficulty of the video game will sometimes be “hard” or even “insane”, depending on the situation you’re in. The robot on the other hand would be playing on “easy”, because it would never encounter a lack of willpower, skills, or thinking capacity. So don’t worry about not being able to score too many points in the absolute sense, and focus instead on how many points are reachable within the difficulty-level that you’re playing on.

I also like the concrete advice in the post SHOW: A framework for shaping your talent for direct work [EA · GW], by Ryan Carey and Tegan McCaslin.

1. Get Skilled: Use non-EA opportunities to level up on those abilities EA needs most.

2. Get Humble: Amplify others’ impact from a more junior role.

3. Get Outside: Find things to do in EA’s blind spots, or outside EA organizations.

4. Get Weird: Find things no one is doing.

Of course, for some people, it may feel like the only usable advice is "Get Humble." However, I think at least "Get Skilled" is also advice that should always work, and people who feel discouraged about it may want to start working on developing a bit more of a growth mindset* and combine the search for useful skills with "Get Humble" (i.e., look for skills that are in reach).

*There's no use in feeling bad about not having as much of a growth mindset as others, because that's also a trait that varies among people, just like intelligence. And a growth mindset most likely comes easier to highly intelligent people.

I think "Get Outside" can also work out well because altruistically motivated people who think carefully about the impact of their role are in rare supply outside of EA. However, there might be a problem where doing well in roles outside of EA isn't very compatible with the typical identity of going to EA Global and talking about cause prioritization and so on. I think that's a tricky situation. Maybe it makes sense for EAs in this sort of situation to meet up, compare experiences, and see if they find ways of dealing with it better.

I have the maximum amount of respect for anyone who is motivated to spend a large portion of their time and resources to do the most good they can, no matter what their personal situation turns out to be. Especially if people are honest with themselves about personal limitations. 

comment by lukeprog · 2022-07-11T12:15:03.262Z · EA(p) · GW(p)

FWIW, I wouldn't say I'm "dumb," but I dropped out of a University of Minnesota counseling psychology undergrad degree and have spent my entire "EA" career (at MIRI then Open Phil) working with people who are mostly very likely smarter than I am, and definitely better-credentialed. And I see plenty of posts on EA-related forums that require background knowledge or quantitative ability that I don't have, and I mostly just skip those.

Sometimes this makes me insecure, but mostly I've been able to just keep repeating to myself something like "Whatever, I'm excited about this idea of helping others as much as possible, I'm able to contribute in various ways despite not being able to understand half of what Paul Christiano says, and other EAs are generally friendly to me."

A couple things that have been helpful to me: comparative advantage [EA(p) · GW(p)] and stoic philosophy.

At some point it would also be cool if there was some kind of regular EA webzine that published only stuff suitable for a general audience, like The Economist or Scientific American but for EA topics.

Replies from: Ben_Snodin, Olivia Addy, RedStateBlueState, jlemien, TomChivers
comment by Ben Snodin (Ben_Snodin) · 2022-07-15T06:09:22.412Z · EA(p) · GW(p)

This is pretty funny because, to me, Luke (who I don't know and have never met) seems like one of the most intimidatingly smart EA people I know of.

comment by Olivia Addy · 2022-07-12T12:53:08.132Z · EA(p) · GW(p)

Thanks for this comment. I really appreciate what you said about just being excited to help others as much as possible, rather than letting insecurities get the better of you.

Interesting that you mentioned the idea of an EA webzine because I have been toying with the idea of creating a blog that shares EA ideas in a way that would be accessible to lots of people. I’m definitely going to put some more thought into that idea.

Replies from: lukefreeman
comment by Luke Freeman (lukefreeman) · 2022-07-12T22:45:52.655Z · EA(p) · GW(p)

Let me know if you decide to go ahead with the idea and I'll see how I can help 😀 

comment by RedStateBlueState · 2022-07-12T02:09:34.038Z · EA(p) · GW(p)

Vox’s Future Perfect is pretty good for this!

comment by Joseph Lemien (jlemien) · 2022-07-12T23:26:29.309Z · EA(p) · GW(p)

regular EA webzine that published only stuff suitable for a general audience

That would be great! I'd love to see this. I consider myself fairly smart/well-read, but I don't think that I have the background or the quantitative skills to comprehend advanced topics. I would very much like to see content targeted at a general audience, the way that I can find books about the history of the earth or about astrophysics targeted at a general audience.

comment by TomChivers · 2022-09-12T09:36:47.051Z · EA(p) · GW(p)

re the webzine, I feel like Works in Progress covers a lot of what you're looking for (it's purportedly progress studies rather than EA, but the mindset is very similar and the topics overlap)

comment by Linch · 2022-07-11T19:39:24.449Z · EA(p) · GW(p)

I'm going to be boring/annoying here and say some things that I think are fairly likely to be correct but may be undersaid in the other comments:

  • EAs on average are noticeably smarter than most of the general population
  • Intelligence is an important component for doing good in the world.
  • The EA community is also set up in a way that amplifies this, relative to much of how the rest of the world operates.
  • Most people on average are reasonably well-calibrated about how smart they are.
    • (To be clear, exceptions certainly exist.) EDIT: This is false, see Max Daniel's comment.
  • If you're less smart than average for EAs (or less driven, or less altruistic, or less hardworking, or have less of a social safety net), then on average I'd expect you to be less good at having a positive impact than others.
  • But this is in relative terms, in absolute terms I think it's certainly possible to have a large impact still.
  • Our community is not (currently) set up well to accommodate the contributions of many people who don't check certain boxes, so I expect there to be more of an uphill battle for many such people.
    • I don't think this should dissuade you from the project of (effectively) doing good, but I understand and empathize if this makes you frustrated.
Replies from: Max_Daniel, Adam Morris, tn-77, DukeGartzea
comment by Max_Daniel · 2022-07-12T10:25:01.953Z · EA(p) · GW(p)

Most people on average are reasonably well-calibrated about how smart they are.

(I think you probably agree with most of what I say below and didn't intend to claim otherwise, reading your claim just made me notice and write out the following.)

Hmm, I would guess that people on average (with some notable pretty extreme outliers in both directions, e.g. in imposter syndrome on one hand and the grandiose variety of narcissistic personality disorder on the other hand, not to mention more drastic things like psychosis) are pretty calibrated about how their cognitive abilities compare to their peers but tend to be really bad at assessing how they compare to the general population because most high-income countries are quite stratified by intelligence. 

(E.g., if you have or are pursuing a college degree, ask yourself what fraction of people that you know well do not and will never have a college degree. Of course, having a college degree is not the same as being intelligent, and in fact as pointed out in other comments if you're reading this Forum you probably know, or have read content by, at least a couple of people who arguably are extremely intelligent but don't have a degree. But the correlation is sufficiently strong that the answer to that question tells you something about stratification by intelligence.)

That is, a lot of people simply don't know that many people with wildly different levels of general mental ability. Interactions between them happen, but tend to be in narrow and regimented contexts such as one person handing another person cash and receiving a purchased item in return, and at most include things like small talk that are significantly less diagnostic of cognitive abilities than more cognitively demanding tasks such as writing an essay on a complex question or solving maths puzzles.

For people with significantly above-average cognitive abilities, this means they will often lack a rich sense of how, say, the bottom third of the population in terms of general mental ability performs on cognitively demanding tasks, and consequently they will tend to significantly underestimate their general intelligence relative to the general population because they inadvertently substitute the question "how smart am I compared to the general population?" – which would need to involve system-2 reasoning and consideration of not immediately available information such as the average IQ of their peer group based on e.g. occupation or educational attainment – with the easier question "how smart am I compared to my peers?" on which I expect system 1 to do reasonably well (while, as always, of course being somewhat biased in one direction or the other).

As an example, the OP says "I'm just average" but also mentions they have a college degree – which according to this website is true of 37.9% of Americans of age 25 or older. This is some, albeit relatively weak, evidence against the "average" claim depending on what the latter means (e.g. if it just means "between the first and third quartile of the general population" then evidence against this is extremely weak, while it's somewhat stronger evidence against being very close to the population median).

This effect gets even more dramatic when the question is not just about "shallow" indicators like one's percentile relative to the general population but about predicting performance differences in a richer way, e.g. literally predicting the essays that two different people with different ability levels would write on the same question. This is especially concerning because in most situations these richer predictions are actually all that matters. (Compare with height: it is much more useful and relevant to know, e.g., how much different levels of height will affect your health or your dating prospects or your ability to work in certain occupations or do well at certain sports, than just your height percentile relative to some population.)

I also think the point that people are really bad at comparing themselves to the general population, because society is so stratified in various ways, applies to many other traits, not just to specific cognitive abilities or general intelligence. Like, I think that question is in some ways closer to the question "at what percentile of trait X are you in the population of all people that have ever lived", where it's more obvious that one's immediate intuitions are a poor guide to the answer.

(Again, all of this is about gradual effects and averages. There will of course be lots of exceptions, some of them systematic, e.g. depending on their location of work teachers will see a much broader sample and/or one selected by quite different filters than their peer group.

I also don't mean to make any normative judgment about the societal stratification at the root of this phenomenon. If anything I think that a clear-eyed appreciation of how little many people understand of the lived experience of most others they share a polity with would be important to spread if you think that kind of stratification is problematic in various ways.)

Replies from: Linch
comment by Linch · 2022-07-12T21:01:41.612Z · EA(p) · GW(p)

I think you're entirely right here. I basically take back what I said in that line. 

I think the thing I originally wanted to convey there is something like "people systematically overestimate effects like Dunning-Kruger and imposter syndrome," but I basically agree that most of the intuition I have is in pretty strongly range-restricted settings. I do basically think people are pretty poorly calibrated about where they are compared to the world. 

(I also think it's notably more likely that Olivia is above average than below average.)

Relatedly, I think social group stratification might explain some of the other comments to this post that I found surprising/tone-deaf. (e.g. the jump from "did a degree in sociology" to "you can be a sociologist in EA" felt surprising to me, as someone from a non-elite American college who casually tracks which jobs my non-STEM peers end up in). 

Replies from: Max_Daniel
comment by Max_Daniel · 2022-07-13T00:02:14.636Z · EA(p) · GW(p)

I think social group stratification might explain some of the other comments to this post that I found surprising/tone-deaf.

Yes, that's my guess as well.

comment by CounterBlunder (Adam Morris) · 2022-07-12T18:39:03.632Z · EA(p) · GW(p)

This feels like it misses an important point. On the margin, maybe less intelligent people will have on average less of an individual impact. But given that there are far more people of average intelligence than people on the right tail of the IQ curve, if EA could tune its pitches more to people of average intelligence, it could reach a far greater audience and thereby have a larger summed impact. Right?

I think there's also a couple other assumptions in here that aren't obviously true. For one, it assumes a very individualistic model of impact; but it seems possible that the most impactful social movements come out of large-scale collective action, which necessarily requires involvement from broader swaths of the population. Also, I think the driving ideas in EA are not that complicated, and could be written in equally-rigorous ways that don't require being very smart to parse.

This comment upset me because I felt that Olivia's post was important and vulnerable, and, if I were Olivia, I would feel pushed away by this comment. But I'm rereading your comment and thinking now that you had better intentions than what I felt? Idk, I'm keeping this in here because the initial gut reaction feels valuable to name.

Replies from: Linch
comment by Linch · 2022-07-12T20:16:01.676Z · EA(p) · GW(p)

Thanks I appreciate this feedback.

Replies from: Linch
comment by Linch · 2022-07-12T23:41:36.394Z · EA(p) · GW(p)

Anyway, on a good day, I try to aim my internet comments on this Forum to be true, necessary, and kind. I don't always succeed, but I try my best.

This comment upset me because I felt that Olivia's post was important and vulnerable, and, if I were Olivia, I would feel pushed away by this comment. But I'm rereading your comment and thinking now that you had better intentions than what I felt? Idk, I'm keeping this in here because the initial gut reaction feels valuable to name.

I think realizing that different people have different capacities for impact is importantly true. I also think it's important and true to note that the EA community is less well set up to accommodate many people than other communities. I think what I said is also more kind to say, in the long run, compared to casual reassurances that make it harder for people to understand what's going on. I think most of the other comments do not come from an accurate model of what's most kind to Olivia (and onlookers) in the long run.

Is my comment necessary? I don't know. In one sense it clearly isn't (people can clearly go about their lives without reading what I said). But in another sense, I feel better about an EA community that is more honest to potential members with best guesses about what we are and what we try to do. 

In terms of "pushed away", I will be sad if Olivia (and others) read my comment and felt dissuaded about the project of doing good. I will be much less sad about some people reading my comment and it being one component in them correctly* deciding that this community is not for them. The EA community is not a good community for everyone, and that's okay.

(Perhaps you think, as some of the other commentators seem to, that the EA community can do a ton more to be broadly accommodating. This is certainly something that's tractable to work on; e.g., we can emphasize role models more like the people in Strangers Drowning rather than top researchers and entrepreneurs. But I'm not working on this, and chances are, neither are you.)

*There is certainly a danger of being overly prone to saying "harsh truths", such that people are incorrectly pushed away relative to a balanced portrayal. But I still stand behind what I said, especially in the context of trying to balance out the other comments that were in this post before I commented, notably before Lukas_Gloor's comment [EA(p) · GW(p)].

Replies from: Max_Daniel
comment by Max_Daniel · 2022-07-13T00:05:02.248Z · EA(p) · GW(p)

I think realizing that different people have different capacities for impact is importantly true. I also think it's important and true to note that the EA community is less well set up to accommodate many people than other communities. I think what I said is also more kind to say, in the long run, compared to casual reassurances that make it harder for people to understand what's going on. I think most of the other comments do not come from an accurate model of what's most kind to Olivia (and onlookers) in the long run.

FWIW I strongly agree with this.

Replies from: Sophia
comment by Sophia · 2022-07-17T11:50:08.742Z · EA(p) · GW(p)

Will we permanently have low capacity? 

I think it is hard to grow fast and stay nuanced but I personally am optimistic about ending up as a large community in the long-run (not next year, but maybe next decade) and I think we can sow seeds that help with that (eg. by maybe making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere).

Replies from: Max_Daniel, Sophia
comment by Max_Daniel · 2022-07-17T14:06:11.386Z · EA(p) · GW(p)

Good question! I'm pretty uncertain about the ideal growth rate and eventual size of "the EA community"; in my mind this is among the more important unresolved strategic questions (though I suspect it'll only become significantly action-relevant in a few years).

In any case, by expressing my agreement with Linch, I didn't mean to rule out the possibility that in the future it may be easier for a wider range of people to have a good time interacting with the EA community. And I agree that in the meantime "making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere" is (in some cases) the right goal.

Replies from: Sophia
comment by Sophia · 2022-07-18T00:13:22.163Z · EA(p) · GW(p)

Thanks 😊. 

Yeah, I've noticed that this is a big conversation right now. 

My personal take

EA ideas are nuanced and ideas do/should move quickly as the world changes and our information about it changes too. It is hard to move quickly with a very large group of people. 

However, the core bit of effective altruism, something like "help others as much as we can and change our minds when we're given a good reason to", does seem like an idea that has room for a much wider ecosystem than we have. 

I'm personally hopeful we'll get better at striking a balance. 

I think it might be possible to both have a small group that is highly connected and dedicated (who maybe can move quickly) whilst also having many more adjacent people and groups that feel part of our wider team.

Multiple groups co-existing means we can broadly be more inclusive, with communities that accommodate a very wide range of caring and curious people, where everyone who cares about the effective altruism project can feel they belong and can add value. 

At the same time, we can maybe still get the advantages of a smaller group, because smaller groups still exist too.

More elaboration (because I overthink everything 🤣)

Organisations like GWWC do wonders for creating a version of effective altruism that is more accessible and distinct from the vibe of, say, the academic field of "global priorities research".

I think it is probably worth it on the margin to invest a little more effort into the people that are sympathetic to the core effective altruism idea, but maybe might, for whatever reason, not find a full sense of meaning and belonging within the smaller group of people who are more intense and more weird. 

I also think it might be helpful to put a tonne of thought into what community builders are supposed to be optimizing for. Exactly what that thing is, I'm not sure, but I feel like it hasn't quite been nailed just yet and lots of people are trying to move us closer to this from different sides. 

Some people seem to be pushing for things like less jargon and more inclusivity. Others are pointing out that there is a trade-off here because we do want some people to be thinking outside the Overton Window. The community also seems quite capacity constrained and high-fidelity communication takes so much time and effort.

If we're trying to talk to 20 people for one hour, we're not spending 20 hours talking to just one incredibly curious person who has plenty of reasonable objections and, therefore, needs someone, or several people, to explore the various nuances with them (like people did with me, possibly mistakenly 😛, when I first became interested in effective altruism, and I'm so incredibly grateful they did). If we're spending 20 hours having in-depth conversations with one person, that means we're not having in-depth conversations with someone else. These trade-offs sadly exist whether or not we are consciously aware of them.

I think there are some things we can do that are big wins at low cost though, like just being nice to anyone who is curious about this "effective altruism" thing (even if we don't spend 20 hours with everyone, we can usually spend 5 minutes just saying hello and making people who care feel welcome and that them showing up is valued, because imo, it should definitely be valued!). 

Personally, I hope there will be more groups that are about effective altruism ideas where more people can feel like they truly belong. These wider groups would maybe be a little bit distinct from the smaller group(s) of people who are willing to be really weird and move really fast and give up everything for the effective altruism project. However, maybe everyone, despite having their own little sub-communities, still sees each other as wider allies without needing to be under one single banner. 

Basically, I feel like the core thrust of effective altruism (helping others more effectively using reason and evidence to form views) could fit a lot more people. I feel like it's good to have more tightly knit groups who have a more specific purpose (like trying to push the frontiers of doing as much good as possible in possibly less legible ways to a large audience).

 I am hopeful these two types of communities can co-exist. I personally suspect that finding ways for these two groups of people to cooperate and feel like they are on the same team could be quite good for helping us achieve our common goal of helping others better (and I think posts like this one and its response do wonders for all sorts of different people to remind us we are, in fact, all in it together, and that we can find little pockets for everyone who cares deeply to help us all help others more).

comment by Sophia · 2022-07-17T11:53:52.410Z · EA(p) · GW(p)

There are also limited positions in organisations as well as limited capacity of senior people to train up junior people but, again, I'm optimistic that 1) this won't be so permanent and 2) we can work out how to better make sure the people who care deeply about effective altruism who have careers outside effective altruism organisations also feel like  valued members of the community.

comment by tn-77 · 2022-07-12T15:11:22.686Z · EA(p) · GW(p)

I think it's important to define intelligence. Do we mean logic-based ability or a broader definition (emotional intelligence, spatial, etc.)?

EAs are probably high in one category but low in others.

Soon I'll need a computer to keep track of the socially awkward interactions I've had with EAs who seem to be mostly aligned with a certain technical domain! 

Others I talk to seem to have similar experiences.

Replies from: Bluefalcon
comment by Vilfredo's Ghost (Bluefalcon) · 2022-09-12T14:16:44.482Z · EA(p) · GW(p)

Awkward is pretty mild as far as ways to be emotionally stupid go. If that's all you're running into, then EAs probably have higher-than-average emotional intelligence, but perhaps not as high in relative terms as their more classically defined intelligence.

comment by DukeGartzea · 2022-07-12T12:41:53.962Z · EA(p) · GW(p)

I think you make a mistake in generalizing about intelligence: you assume a universal definition when, in reality, there is no single definition of what we mean when we talk about it.

If we assume (and allow me to) that you mean IQ, I wanted to quickly comment on the controversy around (and correlation of) IQ tests with white supremacy, the perpetuation of racism in the United States and in what is commonly known as "the Global South", and the perpetuation of an ableist system that has extended to eugenics.

Understanding intelligence as something you are born with, and not as a social construction based on trying to categorize and standardize a biosocial interaction (at the beginning, as I say, in racist and eugenic ways), is somewhat problematic.

Honestly, and this is my personal opinion, I don't think EA people are smart per se. I also believe (or rather, I affirm) that there is a correlation between going to a top university like Oxford or Harvard not with being excellent, but with having had the opportunity to be that. And what I call opportunity also applies to those people who have not gone to university, of course. 

Anyway, in EA we have a problem when it comes to identifying ourselves as a group, one that could be resolved by investing effort in how our dynamics work, the ways in which we exclude other people (I'm not just referring to Olivia), and how that plays out within the community, both at the level of biases and at the level of the effects all this has on the work we do.

Replies from: howdoyousay?, Linch, Sophia
comment by howdoyousay? · 2022-07-13T08:06:53.184Z · EA(p) · GW(p)

I didn't down/up-vote this comment but I feel the down-votes without explanation and critical engagement are a bit harsh and unfair, to be honest. So I'm going to try and give some feedback (though a bit rapidly, and maybe too rapidly to be helpful...)
 

It feels like just an statement of fact to say that IQ tests have a sordid history; and concepts of intelligence have been weaponised against marginalised groups historically (including women, might I add to your list ;) ). That is fair to say. 

But reading this post, it feels less interested in engaging with the OP's post let alone with Linch's response, and more like there is something you wanted to say about intelligence and racism and have looked for a place to say that.  

  • I don't feel like relating the racist history of IQ tests helps the OP think about their role in EA; it doesn't really engage with what they were saying: that they feel they are average and don't mind that, but rather just want to be empowered to do good.
  • I don't feel it meaningfully engages with Linch's central point: that the community has lots of people with attributes X in it, and is set up for people with attributes X, but maybe there are some ways the community is not optimised for other people.

I think your comment is not very balanced on intelligence.

  • general intelligence is, as far as I understand, a well-established domain of psychology / individual differences research
    • Though this does show how many people with outlying abilities in e.g. maths and sciences will, as they put it themselves, not be as strong in other intelligences, such as social intelligence. And in fairness, many EAs who are like this put their hands up about their intelligence shortcomings in these domains!
  • Of course there's a bio(psycho)social interaction between biological inheritance and environment when it comes to intelligence. The OP's and Linch's points still stand with that in mind.
  • The correlation between top university attendance and opportunity: notably, the strongest predictor of whether you go to Harvard is whether your parents went to Harvard; but disentangling that from a) ability and b) getting coached / moulded to show your ability in the ways you need to for Harvard admissions interviews is pretty hard. Maybe a good way of thinking of it is something like: for every person who gets into elite university X...:
    • there are 100s of more talented people not given the opportunity or moulding to succeed at this, who otherwise would trounce them, but
    • there are 10000s more who, no matter how much opportunity or moulding they were given, would not succeed

Anyway, in EA we have a problem with how we identify ourselves as a group, one that could be readily addressed by investing effort in understanding how our dynamics work: the ways in which we exclude other people (I'm not just referring to Olivia), how that plays out within the community, the biases involved, and the effects all of this has on the work we do.

If I'm understanding you correctly, you're saying "we have some group dynamics problems; we involve some types of people less, and listen to some voices less". Is that correct? 

I agree - I think almost everyone would identify different weird dynamics within EA they don't love, and ways they think the community could be more inclusive; or some might find the lack of inclusiveness unpalatable but be willing to bite that bullet on trade-offs. Some good work has been done recently on starting up EA in non-Anglophone, non-Western countries, including putting forward the benefits of more local interventions; but a lot more could be done.

A new post on voices we should be listening to more, and EA assumptions which prevent this from happening would be welcome!

Replies from: DukeGartzea, Frederik
comment by DukeGartzea · 2022-07-13T12:41:58.034Z · EA(p) · GW(p)

Thank you for your comment; at the beginning I did not understand the downvotes, or why I wasn't getting any kind of critical feedback.

I agree with what you say about my comment: it did not contribute anything to Olivia's post. I realized this within hours of writing it, but I did not want to delete or edit it. I prefer that the mistakes I make remain visible so that I can track my evolution over the near-to-medium future.

But reading your comment, it feels less interested in engaging with the OP's post, let alone with Linch's response, and more like there was something you wanted to say about intelligence and racism, and you looked for a place to say it.

Actually, my intention was never to bring up the issue of racism or eugenics for its own sake, but rather to address how intelligence is conceptualized and defined within the EA community as a means of measuring oneself against the group and against others. Thinking about it, I believe this would be a good topic to write about on this forum.

I also take your point about writing on EA dynamics, giving voice to other people, and engaging critically with both sides you mention.

comment by Frederik · 2022-07-13T11:56:19.700Z · EA(p) · GW(p)

Nothing to add -- just wanted to explicitly say I appreciate a lot that you took the time to write the comment I was too lazy to.

comment by Linch · 2022-07-14T18:31:20.704Z · EA(p) · GW(p)

I do think intelligence is less clearly defined than it could be, and I've complained in the past that the definition people often use is optimized more for prediction than independent validity. 

However, I think the different definitions are sufficiently correlated that it's reasonable for us to sometimes speak of it as one thing. Consider an analogy to "humor." Humor means different things to different people, and there's no widely agreed-upon, culture-free definition, but it still seems like "I'm not funny enough to be a professional standup comedian" is a reasonable thing to say.

And my guess is that the different definitions of intelligence are more tightly correlated than different definitions (or different perspectives on) humor.

I also disagree with the implication (which, rereading, you did not say outright, so perhaps I misread you) that intelligence (and merit-based systems in general) is racist. If anything, I find the idea that merit-based measurements are racist or white supremacist to be itself kind of a racist idea, not to mention condescending to nonwhites.

I agree that intelligence has environmental components. I'm not sure why this is relevant here however.

Replies from: DukeGartzea
comment by DukeGartzea · 2022-07-14T21:45:20.021Z · EA(p) · GW(p)

Hi Linch! Thanks for your comment


When I brought up the subject of intelligence and its definitions, it was in response to what Olivia says about not feeling or seeming intelligent enough for EA, and to how the fourth point in your comment can be understood. What I mean is that even if she (speaking in ultra-simplified and quite relative terms) is less smart than the average EA, it does not mean she will always be less smart.

Leaving the door open to learning and to the growth of intelligence that may be underdeveloped could be a valid option for Olivia, but I do not see that option in your comment; I do not see you drawing on that idea of personal and intellectual growth.

That is, she may not know about something, and she has the right to learn about it at her own pace. Perhaps in the future she will discover she is an expert in some of this, but how can we know if we do not give her that option?

I also disagree with the implication (which, rereading, you did not say outright, so perhaps I misread you) that intelligence (and merit-based systems in general) is racist.

Here you have really misunderstood what I said, as I mentioned before [EA · GW]:


Actually, my intention was never to bring up the issue of racism or eugenics for its own sake, but rather to address how intelligence is conceptualized and defined within the EA community as a means of measuring oneself against the group and against others. Thinking about it, I believe this would be a good topic to write about on this forum.

Lastly, on merit-based systems, I think our opinions are further apart; if you ever want to talk about it in more depth, this forum has private messages for that.

comment by Sophia · 2022-07-13T06:32:39.436Z · EA(p) · GW(p)

I strongly agree with:

"I think you make a mistake to make a generalization of intelligence, you assume a universalized definition when, in reality, there is no single definition of what we mean when we talk about it." 

I think the rest of your comment detracts from this initial statement, because it makes a lot of claims, and extraordinarily strong claims need extraordinarily strong evidence. It was also a very politicized comment.

While naturally some things just are political, when things toe a political party line it can sound more like rhetoric and less like rational argument, and that can ring alarm bells in people's heads. I think political comments especially need more evidence per claim, because people are prone to motivated reasoning when things are tribal; I know I am certainly less rational about my political beliefs than about my beliefs about chemistry, for example. (This is probably also true of things that toe the "EA party line". As this is the EA forum, it makes sense that views common in the EA community get justified less here than they would on a forum about any other topic; but I know I have a bias towards believing things that are commonly believed in the EA community, and I really should require more evidence per claim that agrees with the community to correct this bias in myself, something I should reflect on more in the future.)
 
 

Replies from: Sophia, DukeGartzea
comment by Sophia · 2022-07-13T06:33:04.509Z · EA(p) · GW(p)

I think that your comment could have been improved by
1) making it several separate comments, so people could upvote and downvote the different components separately (I am such a hypocrite, as my comments are often long and contain many different points, but this is something I should also work on), and
2) if you feel strongly that the more political parts of your comment were important to your core point, and you strongly suspect there are parts that are true and could be fleshed out and properly justified, picking one narrow claim and fleshing it out a lot more, with caveats on the bits you're more or less confident in / that seem more or less backed by science. (I personally don't feel those bits were important to your overall point, but that may be because I don't fully understand the point you were trying to make.)

Replies from: Sophia
comment by Sophia · 2022-07-13T06:34:14.717Z · EA(p) · GW(p)

I also wanted to say sorry you got downvoted so much! That always sucks, especially when it's unclear what the reason is. 

It can be hard to tell whether people disagree with your core claim or whether people felt you didn't justify stuff enough. 

I didn't upvote or downvote but I both strongly agreed with your first sentence and felt a bit uncomfortable about how political your comment was for the reasons stated above and that might be the same reason other people downvoted. 

I hope my comment is more helpful and that it wasn't overly critical (my comments are also far from perfect)!

 I thought it was worth saying that at least one reader didn't completely disagree with everything here even if your original comment was very downvoted.

Replies from: Sophia
comment by Sophia · 2022-07-13T06:37:10.343Z · EA(p) · GW(p)

What we colloquially call "intelligence" does seem multi-dimensional; it would be very surprising to me if many people reading your comment disagreed with that. (They might just think that there is some kind of intelligence that IQ tests measure, and that it is not racist or ableist to consider it valuable in some contexts for some types of tasks, even if there are other, harder-to-measure types of intelligence that also might be very valuable.)

Replies from: Sophia
comment by Sophia · 2022-07-13T06:43:38.782Z · EA(p) · GW(p)

FWIW, I am both mixed race and also have plenty of diagnoses that make me technically clinically insane :P (bipolar and ADHD), so if one counter-example is enough, I feel like I can be that counter-example.

 I'd like to think the type of intelligence that I have is valuable too -- no idea if it is easily measurable by an IQ test (I don't think IQ tests are very informative for individuals, so I've not taken one as an adult).

 Seeing my type of intelligence as valuable does not mean that other types of skills/intelligence can't be valued too and I, personally, don't think it makes much sense to see it as ableist or racist to value my skills/competencies/type of intelligence. We should still also value other skills/types of intelligence/competencies too.  

Replies from: Sophia
comment by Sophia · 2022-07-13T06:45:53.018Z · EA(p) · GW(p)

I do think that professions that, on average, women tend to do more of and men tend to do less of, for whatever reason, are valued less (eg. paediatricians versus surgeons). I would guess that this is a type of sexism. Is this the kind of thing you were trying to point to? 

comment by DukeGartzea · 2022-07-13T12:19:54.972Z · EA(p) · GW(p)

Hi, thank you for your comment. 

I can agree with the part about making assumptions related to IQ, but I make those assumptions having previously read other EA members with clearly essentialist and biologistic ideas about intelligence, ideas that are also quite far from rational. Continuing with that, in the third paragraph I comment on the problem of naturalizing something (intelligence) when the evidence and consensus say it is not as claimed.

Acknowledging the politicization behind my later arguments, where I speak from a perspective beyond the rationalist or philosophical one, is probably the most accurate position I can reaffirm. Next time, I might start by addressing this.

I therefore understand what you say about the politicization of my last paragraphs; next time I will try to focus more on possible evidence for my claims, something I did not think about for a short and brief comment like this one.
 

comment by Ajeya · 2022-07-12T02:46:13.791Z · EA(p) · GW(p)

I'm really sorry that you and so many others have this experience in the EA community. I don't have anything particularly helpful or insightful to say -- the way you're feeling is understandable, and it really sucks :(

I just wanted to say I'm flattered and grateful that you found some inspiration in that intro talk I gave. These days I'm working on pretty esoteric things, and can feel unmoored from the simple and powerful motivations which brought me here in the first place -- it's touching and encouraging to get some evidence that I've had a tangible impact on people.

Replies from: Olivia Addy
comment by Olivia Addy · 2022-07-12T13:16:33.386Z · EA(p) · GW(p)

Thank you so much! I so appreciate this comment.

Your talk really is great. This weekend I’m facilitating my first introductory fellowship session and I’ve recommended it to those coming along because I think it’ll be great to inspire and get them interested in EA, like it did for me.

comment by A_donor · 2022-07-11T11:33:10.733Z · EA(p) · GW(p)

The Parable of the Talents, especially the part starting at:

But I think the situation can also be somewhat rosier than that.

Ozy once told me that the law of comparative advantage was one of the most inspirational things they had ever read. This was sufficiently strange that I demanded an explanation.

Ozy said that it proves everyone can contribute. Even if you are worse than everyone else at everything, you can still participate in global trade and other people will pay you money. It may not be very much money, but it will be some, and it will be a measure of how your actions are making other people better off and they are grateful for your existence.

Might prove reassuring. Yes, EA has lots of very smart people, but those people exist in an ecosystem which almost everyone can contribute to. People do and should give kudos to those who do the object level work required to keep the attention of the geniuses on the parts of the problems which need them.

As some examples of helpful things available to you: 

  • Being an extra pair of hands at events
  • Asking someone who you think is aligned with your values and might have too much on their plate what you can help them with (if you actually have the bandwidth to follow through)
  • Making yourself available to on-board newcomers to the ideas in 1-on-1 conversations
Replies from: lukefreeman, Max_Daniel
comment by Luke Freeman (lukefreeman) · 2022-07-12T22:52:06.805Z · EA(p) · GW(p)

I also want to chime in here and say that it was a bit of a shock for me coming into the EA community also: I was one of the more analytical people in most of my friendship groups, yet it was pretty quickly clear to me that my comparative advantage in this community was actually EQ, communications, and management. I'm glad to work with some incredibly smart analytical people who are kind enough to (a) help me understand things that confuse me when I'm frank about what I don't understand; and (b) remind me what else I bring to the table.

Replies from: Sophia
comment by Sophia · 2022-07-13T05:29:08.222Z · EA(p) · GW(p)

I think Luke needing to be reminded of what he brings to the table is evidence that we're missing out on many extremely talented people who aren't at the 99.9th percentile in the one particular skillset we over-select for.

As a counter-example, I am below average in many skills that people in my wider peer group have which, I believe, would be incredibly helpful to the effective altruism movement. However, I am good at a very narrow set of things that are easy to signal in conversation, which makes people in the EA community often think way more highly of me than, I believe, is rational.

 I have found easy social acceptance in this community because I speak fluent mathematics. I have higher-IQ friends who, in high-trust conversations, are extremely epistemically humble and have a lot to contribute, but whom I can't easily integrate into the effective altruism community.

I believe part of what makes it hard to introduce people who aren't exceptionally analytical to effective altruism is that there seems to be a stronger prior here than elsewhere that intelligence and competence are one-dimensional (or that all types of competence and intelligence are correlated). It does seem true that some people are more intelligent/skilled than others on many different dimensions we might care about, and this is maybe a taboo thing to say in many contexts. However, competence and intelligence are multi-dimensional, and different types of intelligence/skills seem unlikely to be perfectly correlated with each other. I'd guess some are probably anti-correlated: we each have a limited number of neurons, and if those neurons are highly specialized at solving one type of problem, then there are going to be trade-offs, which means that at the skill frontier this scarce brain capacity likely trades off against other specialized skills.

To find someone good at marketing, we possibly had to find the one marketing guy who happened to be way above average in pretty much everything, including analytic intelligence (who was only 99th percentile analytic instead of 99.9th percentile and so needs reminding of his value in a community of people that very heavily rewards analytical thinking).

While analytic reasoning can be handy, it is not the only skill worth having, and I don't think you need that much of it to understand the core EA ideas well enough to be a very valuable contributor to this community. Being exceptionally good at reasoning transparency and analytic philosophy is not perfectly correlated with many other types of skills or intelligence that are desperately needed for the effective altruism community to maximize its impact. While some types of skills and intelligence have synergies and often come together, I suspect that other skills have different synergies.

If this model is accurate, then some skills are likely to be anti-correlated with the capacity to show large degrees of reasoning transparency and impress in EA-style conversations. 

If those are skills we are in desperate need of, saying this movement isn't for anyone who doesn't find the forum very easy to read or doesn't find analytical conversations as effortless might very well cause us to be much lower impact than we otherwise could be. 

Comparative advantage is a thing and, as far as I've observed, skillsets and personalities do seem to cluster together. 

If we want our movement to maximize its impact, then we can't select only for the people who are exceptionally analytical at the cost of losing out on people who are exceptionally good at, e.g., marketing or policy (I suspect it could be harder to find top people to work in AI governance without there being room for a greater variety of people who care deeply about helping others).

Replies from: Sophia
comment by Sophia · 2022-07-13T05:33:08.657Z · EA(p) · GW(p)

In short, if my model is correct, being a bit different to other people in the effective altruism community is evidence that you might have a comparative advantage (and maybe even an absolute advantage) within our community and you are paving the way for other people who are less weird in ways people in EA tend to be weird to find belonging here. 

I strongly believe that if you care deeply about others in an impartial way, carving out space for you is very much in the best interest of this community (and if the EA community is a place you want to be, finding your place in it is going to help others feel like they, too, have a place). It is also fine to just do what's good for you: if the EA community isn't healthy for you for whatever reason, it's fine to take what you like and discard the rest elsewhere!

comment by Max_Daniel · 2022-07-11T23:03:20.056Z · EA(p) · GW(p)

Another relevant Slate Star Codex post is Against Individual IQ Worries.

Replies from: Sophia
comment by Sophia · 2022-07-13T06:54:16.171Z · EA(p) · GW(p)

I love this post. It is so hard to communicate that the 2nd moment of a distribution (how much any person or thing tends to differ from the average[1]) is often important enough that what is true on average often doesn't apply very well to any individual (and platitudes that are technically false can therefore often be directionally correct in EA/LessWrong circles).

  1. ^

    This definition was edited in because I only thought of an okay definition ages later.
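As a toy numerical sketch of that idea (my own illustration with made-up numbers, not part of the original comment): two groups can share exactly the same mean, while the spread around that mean determines how well the average describes any one member.

```python
import statistics

# Two hypothetical groups with the same average but very different spread.
# The mean alone tells you little about any individual member.
group_a = [99, 100, 101, 100, 100]   # low variance: everyone sits near the mean
group_b = [60, 140, 80, 120, 100]    # high variance: almost no one sits near it

for name, group in [("A", group_a), ("B", group_b)]:
    mean = statistics.mean(group)
    var = statistics.pvariance(group)  # the "second (central) moment"
    near = sum(abs(x - mean) <= 5 for x in group)
    print(f"group {name}: mean={mean}, variance={var}, within 5 of mean: {near}/5")
```

Both groups have mean 100, but group B's variance is 800 versus 0.4, and only one of its five members is anywhere near the average: that is the sense in which "what is true on average" can fail to apply to individuals.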

Replies from: Sophia
comment by Sophia · 2022-07-18T01:24:03.369Z · EA(p) · GW(p)

Some of my personal thoughts on jargon and why I chose, pretty insensitively given the context of this post, to use some anyway

 I used the "second moment of a distribution" jargon here initially (without the definition that I later edited in) because I feel like sometimes people talk past each other. I wanted to say what I meant in a way that could be understood more by people who might not be sure exactly what everyone else precisely meant. Plain English sometimes lacks precision for the sake of being inclusive (inclusivity that I personally think is incredibly valuable, not just in the context of this post). And often precision is totally unnecessary to get across the key idea. 

However, when you say something in language that is a little less precise, it naturally has more room for different interpretations. Some interpretations readers might agree with and some they might not. The reason jargon tends to exist is because it is really precise. I was trying to find a really precise way of saying the vibe of what many other people were saying so everyone all felt a tiny bit more on the same page (no idea if I succeeded though or if it was actually worth it or if it was actually even needed and whether this is all actually just in my head). 

Replies from: Lorenzo Buonanno, Sophia
comment by Lorenzo Buonanno · 2022-07-21T06:57:48.220Z · EA(p) · GW(p)

For what it's worth, I think the term "variance" is much more accessible than "second moment".

Variance is a relatively common word. I think in many cases we can be more inclusive without losing precision (another example is "how much I'm sure of this" vs "epistemic status")

Replies from: Sophia
comment by Sophia · 2022-07-22T04:56:01.937Z · EA(p) · GW(p)

lol, yeah, totally agree (strong upvoted).

 I think in hindsight I might literally have been subconsciously indicating in-groupness ("indicating in-groupness" means trying to show I fit in 🤮 -- feels so much worse in plain English for a reason, jargon is more precise but still often less obvious what is meant, so it's often easier to hide behind it) because my dumb brain likes for people to think I'm smarter than I am. 

In my defense, it's so easy, in the moment, to use the first way of expressing what I mean that comes to mind.

I am sure that I am more likely to think of technical ways of expressing myself because technical language makes a person sound smart and sounding smart gets socially rewarded. 

I so strongly reflectively disagree with this impulse but the tribal instinct to fit in really is so strong (in every human being) and really hard to notice in the moment. 

I think it takes much more brain power to find the precise and accessible way to say something so, ironically, more technical language often means the opposite of the impression it gives.

 This whole thing reminds me of the Richard Feynman take that if you can't explain something in language everyone can understand, that's probably because you don't understand it well enough. I think that we, as a community, would be better off if we managed to get good at rewarding more precise and accessible language and better at punishing unnecessary uses of jargon (like here!!!).[1]

I kind of love the irony of me having clearly done something that I think is a pretty perfect example of exactly what I, when I reflect, believe we need to do a whole lot less of as a community🤣

  1. ^

    I think it's also good to be nice on the forum and I think Lorenzo nailed this balance perfectly. Their comment was friendly and kind, with a suggested replacement term, but still made me feel like using unnecessary jargon was a bad thing (making using unnecessary jargon feel like something I shouldn't have done which  will likely make my subconscious less likely to instinctively want to use unnecessary jargon in the future👌).

comment by Sophia · 2022-07-18T01:25:53.159Z · EA(p) · GW(p)

It's just my general feeling on the forum recently that a few different groups of people are talking past each other sometimes and all saying valuable true things (but still, as always, people generally are good at finding common ground which is something I love about the EA community). 

Really, I just really want everyone reading to understand where everyone else is coming from. This vaguely makes me want to be more precise when other people are saying the same thing in plain English. It also makes me want to optimise for accessibility when everyone else is saying something in technical jargon that is an idea that more people could get value from understanding. 

Ideally I'd be good enough at writing to be precise and accessible at the same time (but both precision and making comments easier to understand for a broader group of readers are so time-consuming that I often try to do one or the other, and sometimes I'm terrible and make a quick comment that is definitely neither 🤣).

comment by titotal · 2022-07-11T15:52:14.248Z · EA(p) · GW(p)

You seem to be jumping to the conclusion that if you don't understand something, it must be because you are dumb, and not because you lack familiarity with community jargon or norms. 

For example, take the Yudkowsky doompost [LW · GW] that's been much discussed recently. In the first couple of paragraphs, he namedrops people who would be completely unknown outside his specific subfield of work, and expects the reader to know who they are. Then there are a lot of paragraphs like the following:

If nothing else, this kind of harebrained desperation drains off resources from those reality-abiding efforts that might try to do something on the subjectively apparent doomed mainline, and so position themselves better to take advantage of unexpected hope, which is what the surviving possible worlds mostly look like.

It doesn't matter whether you have an Oxford degree or not; this will be confusing to anyone who has not been steeped in the jargon and worldview of the rationalist subculture. (My PhD in physics is not helpful at all here.)

This isn't necessarily bad writing, because the piece is deliberately targeted at people who have been talking in this jargon for years. It would be bad writing if it were aimed at the general public, though, because they don't know what these terms mean.

This is similar to scientific fields: when you publish a scientific paper in a specific sub-discipline, a lot of knowledge is assumed. This avoids having to re-explain whole disciplines, but it does make papers incredibly hard to read for anyone who is even a little bit of an outsider. But when communicating results to the public (or even to someone in a different field of physics), you have to translate into reasonably understandable English. I think people here should be mindful of who exactly their audience is, and tailor their language appropriately.

Replies from: Olivia Addy, Guy Raveh, lukefreeman
comment by Olivia Addy · 2022-07-12T13:00:04.302Z · EA(p) · GW(p)

I agree, and reading other comments - I think I may have got a bit down on myself (unnecessarily) for not understanding a lot of the stuff on the forum, as that seems to be pretty common. I guess as this is sort of the ‘main place’ (as far as I’m aware) for EA discussion, this contributed to my feelings of not being ‘smart enough’ to fit in.

comment by Guy Raveh · 2022-07-11T16:26:27.258Z · EA(p) · GW(p)

Second everything here.

comment by Luke Freeman (lukefreeman) · 2022-07-12T22:53:28.812Z · EA(p) · GW(p)

Strongly agree!

comment by Gavin (technicalities) · 2022-07-11T15:37:37.432Z · EA(p) · GW(p)

On fancy credentials: most EAs [EA · GW] didn't go to fancy universities*. And I guess that 4% [EA · GW] of EAs dropped out entirely. Just the publicly known subset includes some of the most accomplished: Yudkowsky, Muehlhauser, Shlegeris?, Kelsey Piper, Nuno Sempere. (I know 5 others I admire greatly.) 

On intelligence: You might be over-indexing to research, and to highly technical research. Inside research / writing the peak difficulty is indeed really high, but the average forum post seems manageable. You don't need to understand stuff like Löb's theorem [? · GW] to do great work. I presume most great EAs don't understand formal results of this sort. I often feel dumb when following alignment research, but I can sure do ordinary science and data analysis and people management, and this counts for a lot.

On the optics of the above two things: seems like we could do more to make people feel welcome, and to appreciate the encouraging demographics and the world's huge need for sympathetic people who know their comparative advantage. (I wanted to solve the education misconception by interviewing [EA · GW] great dropouts in EA. But it probably would have landed better with named high-status interviewees.)

 

* Link is only suggestive evidence cos I don't have the row-level data.

Replies from: tessa, Buck, Olivia Addy, isabel
comment by Tessa (tessa) · 2022-07-12T14:44:02.519Z · EA(p) · GW(p)

I think people are also unaware of how tiny the undergraduate populations of elite US/UK universities are, especially if you (like me) did not grow up or go to school in those countries.

Quoting a 2015 article from Joseph Heath, which I found shocking at the time:

There are few better ways of illustrating the difference than to look at the top U.S. colleges and compare them to a highly-ranked Canadian university, like the University of Toronto where I work. The first thing you’ll notice is that American schools are miniscule. The top 10 U.S. universities combined (Harvard, Princeton, Yale, etc.) have room for fewer than 60,000 undergraduates total. The University of Toronto, by contrast, alone has more capacity, with over 68,000 undergraduate students.

In other words, Canadian universities are in the business of mass education. We take entire generations of Canadians, tens of thousands of them recent immigrants, and give them access to the middle classes. Fancy American schools are in the business of offering boutique education to a very tiny, coddled minority, giving them access to the upper classes. That’s a really fundamental difference.

Oxford (12,510 undergraduates) and Cambridge (12,720 undergraduates) are less tiny, but still comparatively small, especially since the UK population is about 1.75x Canada's.

comment by Buck · 2022-07-11T19:16:32.313Z · EA(p) · GW(p)

(I'm flattered by the inclusion in the list but would fwiw describe myself as "hoping to accomplish great things eventually after much more hard work", rather than "accomplished".)

FWIW I went to the Australian National University, which is about as good as universities in Australia get. In Australia there's way less stratification of students into different qualities of universities--university admissions are determined almost entirely by high school grades, and if you graduate in the top 10% of high school graduates (which I barely did) you can attend basically any university you want to. So it's pretty different from eg America, where you have to do pretty well in high school to get into top universities. I believe that Europe is more like Australia in this regard.

Replies from: Frederik
comment by Frederik · 2022-07-12T15:12:25.481Z · EA(p) · GW(p)

I can support the last point for Germany at least. There's relatively little stratification among universities. It's mostly about which subject you want to study, with popular subjects like medicine requiring straight A's at basically every university. However, you can get into a STEM program at the top universities without being in the top 10% at high school level.

comment by Olivia Addy · 2022-07-12T13:09:22.688Z · EA(p) · GW(p)

I appreciate you highlighting that most EAs didn’t go to top-level unis - I wish this was out there more!

And I think (from reading other comments too) I was definitely getting a bit too wrapped up in not understanding highly complex stuff (when a lot of EAs don’t either).

I agree there’s a huge need for more sympathetic people and that’s why I think it’s a shame that the community does feel like it has such a high bar to entry. I hope this changes in future.

comment by isabel · 2022-07-11T18:18:47.179Z · EA(p) · GW(p)

I'm pretty sure Kelsey didn't drop out, though she did post about having a very hard time with finishing. 

Replies from: Buck, technicalities
comment by Buck · 2022-07-11T19:10:30.002Z · EA(p) · GW(p)

This is correct, she graduated but had a hard time doing so, due to health problems. (I hear that Stanford makes it really hard to fail to graduate, because university rankings care about completion rates.)

Note that Kelsey is absurdly smart though, and struggled with school for reasons other than inherently having trouble learning or thinking about things.

comment by Gavin (technicalities) · 2022-07-11T18:29:34.221Z · EA(p) · GW(p)

Interesting, I seem to remember a Tumblr post to the contrary but it was years ago. 

Replies from: bec_hawk
comment by Rebecca (bec_hawk) · 2022-07-12T04:16:05.585Z · EA(p) · GW(p)

Maybe she had temporarily dropped out at the time, and later was able to finish?

comment by Guy Raveh · 2022-07-11T11:45:32.420Z · EA(p) · GW(p)

To supplement what others have said, I think the long-term (few years or more) outcomes of the movement depend greatly on the diversity of perspectives we manage to have available. Mathematicians and engineers are great for solving some complex problems (OK, I'm biased because I am one), but the current lack of e.g. sociologists in EA is going to hinder any efforts to solve big problems that exist in a social environment. Not only do you have a place here - it's necessary that people like you be part of EA.

Replies from: iamef
comment by emily.fan (iamef) · 2022-07-14T05:49:08.547Z · EA(p) · GW(p)

++ having a sociology background is great. Not sure, but I think Vaidehi may have also studied sociology at a non-Ivy+ school, and she seems to have done some cool stuff in the EA community too.

Not sure how relevant this comment is, but as someone who studies more technical stuff, I am honestly impressed with people who study things like sociology. The sheer number of papers and essays you guys pump out and how you have to think about large social systems honestly scares me! English / history classes were some of the hardest for me in high school!

I also think you might find some of Cal Newport's books helpful (So Good They Can't Ignore You, maybe even How To Be A High School Superstar). He shares a lot of encouraging stories about people who become good at what they do without being super impressive beforehand!

comment by Holly_Elmore · 2022-07-12T20:43:27.148Z · EA(p) · GW(p)

Another issue here is that the EA Forum is used sort of as the EA research journal by many EAs and EA orgs, including my employer, Rethink Priorities. We sometimes post write-ups here that aren't optimized for the average EA to read at all, but are more for a technical discipline within EA.

Replies from: Imma Six, Paige_Henchen, Chriswaterguy
comment by Imma (Imma Six) · 2022-07-15T19:23:15.186Z · EA(p) · GW(p)

Isn't that a good thing? I hope it stays like this. Then the forum stays interesting for people who are specialized in certain fields or cause areas.

Replies from: bec_hawk
comment by Rebecca (bec_hawk) · 2022-07-16T02:09:49.669Z · EA(p) · GW(p)

It’s an issue insofar as people aren’t aware of it

Replies from: Holly_Elmore
comment by Holly_Elmore · 2022-07-20T08:13:35.702Z · EA(p) · GW(p)

Exactly, it's an issue if people think the posts on here are all aimed at a general EA audience 

Replies from: bec_hawk
comment by Rebecca (bec_hawk) · 2022-07-21T23:04:20.154Z · EA(p) · GW(p)

Perhaps there could be tags for different ‘levels’ of technicality

comment by Sunny1 (Paige_Henchen) · 2022-07-15T15:52:28.388Z · EA(p) · GW(p)

I think the centrality of the EA Forum to the overall "EA project" has likely caused a lot of unintended consequences like this. Participating in the Forum is seen as a pretty important "badge" of belonging in EA, but participating in an internet forum is generally not the type of activity that appeals to everyone, much less an internet forum where posts are expected to be lengthy and footnoted.

Replies from: Imma Six
comment by Imma (Imma Six) · 2022-07-15T19:19:34.811Z · EA(p) · GW(p)

Participating in the Forum is seen as a pretty important "badge" of belonging in EA,

Why do you believe this is true? I've met - online and offline - many highly involved people who never post or comment on the forum. Maybe that's even the majority of the EA people I know. Some of them never or seldom read anything here (I guess).

Replies from: Holly_Elmore
comment by Holly_Elmore · 2022-07-20T08:16:03.874Z · EA(p) · GW(p)

I second this-- a lot of prominent EAs don't look at the Forum. I check the Forum something like once a week on average and rarely post despite this being where my research reports are posted. A lot of EA social engagement happens on facebook and Discord and discourse may take place over more specialized fora like the Alignment Forum or specific Slacks.

Replies from: Holly_Elmore
comment by Holly_Elmore · 2022-07-20T08:17:56.330Z · EA(p) · GW(p)

(I have a lot of karma because I've been on here a long time)

comment by Chriswaterguy · 2022-08-16T16:35:37.094Z · EA(p) · GW(p)

Perhaps these posts could start with a note on "assumed context", similar to the "epistemic status" notes.

(A downside might be if it discourages someone from reading a post that they actually would have got value from, even if they didn't understand everything. So the choice of wording would be important.)

comment by Stephen Clare · 2022-07-11T12:20:50.054Z · EA(p) · GW(p)

Feel a bit sad reading this. I'm sorry you've felt alienated by EA and are unsure about how you fit in.

Re: your last sentence: you're far from alone in feeling this way. I cannot recommend Luisa Rodriguez's  80000 Hours article about imposter syndrome highly enough.

I don't think super high intelligence, or Ivy league degrees, are a requirement for engaging with EA. But setting aside that question, I do think there are lots of ways to engage that aren't, like, "do complicated AI alignment math". Organizations need many people and skills other than researchers to run well. And I think there are many ways to express EA values outside of your career, e.g. by donating, voting for political candidates who focus on important issues, and leading by example in your personal life and relationships.

Replies from: Ada-Maaria Hyvärinen, Olivia Addy
comment by Ada-Maaria Hyvärinen · 2022-07-14T10:18:37.948Z · EA(p) · GW(p)

I generally agree with your comment but I want to point out that for a person who does not feel like their achievements are "objectively" exceptionally impressive Luisa's article can also come across as intimidating: "if a person who achieved all of this still thinks they are not good enough, then what about me?"

I think Olivia's post is especially valuable because she dared to post even when she does not have a list of achievements that would immediately convince readers that her insecurity/worry is all in her head. It is very relatable to a lot of folks (for example me) and I think she has been really brave to speak up about this!

Replies from: bec_hawk
comment by Rebecca (bec_hawk) · 2022-07-14T13:31:20.827Z · EA(p) · GW(p)

I agree. I would actually go further and say that bringing imposter syndrome into it is potentially unhelpful, as it's in some ways the opposite issue - imposter syndrome is about when you are as smart/competent/well-suited to a role as your peers, but have a mistaken belief that you aren't. What Olivia's talking about is actual differences between people that aren't just imagined due to worry. I could see it come off as patronising/out-of-touch to some, although I know it was meant well. 

comment by Olivia Addy · 2022-07-12T15:48:51.650Z · EA(p) · GW(p)

Thank you for this comment and the article recommendation, I will definitely be checking it out. And thank you for highlighting the other ways to get involved, I could definitely do a bit more thinking about the options available to me, as I'm sure I can find my place somewhere!!

comment by Markus Amalthea Magnuson (peppersghost) · 2022-07-11T14:18:00.155Z · EA(p) · GW(p)

Since sociology is probably an underrepresented degree in effective altruism, maybe you can consider it a comparative advantage rather than "the wrong degree". The way I see it, EA could use a lot more sociological inquiry.

Replies from: Amber, Holly_Elmore
comment by Amber Dawn (Amber) · 2022-07-11T20:07:44.672Z · EA(p) · GW(p)

Yes, agree 100%! In general, I think EA neglects humanities skills and humanistic ways of solving problems. 

comment by Holly_Elmore · 2022-07-12T20:45:43.644Z · EA(p) · GW(p)

Yeah I would still love to see something like ethnographies of EA: https://forum.effectivealtruism.org/posts/YsH8XJCXdF2ZJ5F6o/i-want-an-ethnography-of-ea

comment by Amber Dawn (Amber) · 2022-07-11T20:27:38.134Z · EA(p) · GW(p)

This is such a good post + I agree so much! I'm sorry you feel like you don't fit in :( and I'm also worried about the alienating effect EA can have on people. Fwiw, I've also had worries like this in the past - not so much that I wasn't smart enough, but that there wasn't a place for me in EA because I didn't have a research background in any of the major cause areas (happy to DM about this). 

 A couple of points, some echoing what others have said:

-there's a difference between 'smart' and 'has fancy credentials'
-some stuff that's posted on the Forum is written for a niche audience of experts and is incomprehensible to pretty much everyone
-imo a lot of EA stuff is written in an unnecessarily complicated/maths-y/technical way (and the actual ideas are less complicated than they seem)
-maybe you have strengths other than "intellectual" intelligence, e.g. emotional intelligence, people skills, being organized, conscientiousness...

I really do think this is a problem with EA, not with you - EAs should offer more resources to people who are excited to contribute but don't fit into the extremely narrow demographic of nerdy booksmart STEM graduates. 

Replies from: RogerAckroyd, Olivia Addy, Sharmake
comment by RogerAckroyd · 2022-07-12T08:22:59.919Z · EA(p) · GW(p)

For people with math/technical background the easiest way to express certain ideas may be in a mathy way. 

Replies from: Amber
comment by Amber Dawn (Amber) · 2022-07-12T09:42:36.521Z · EA(p) · GW(p)

Yeah absolutely! And it's not always worth experts' time to optimize for accessibility to all possible readers (if it's most important that other experts read it). But this does mean that sometimes things can seem more "advanced" or complex than they are.

comment by Olivia Addy · 2022-07-12T13:03:29.916Z · EA(p) · GW(p)

Thank you for this comment!! The points you make are really great, and I hadn’t considered the importance of other types of intelligence so that’s something for me to think about a bit more.

I agree there needs to be more resources out there, and I hope this is something that changes over time.

comment by Sharmake · 2022-07-11T21:00:41.348Z · EA(p) · GW(p)

I disagree with this comment's implications, but I do understand why this post and comment were made. The basic problem is status in EA: metrics like relative lives saved, while arguably necessary to get the money EA has now, are themselves a problem. Obviously, compared to AI alignment, pretty much everyone looks small.

But that's not a good mindset for EA to cultivate, or for individuals to have.

For dumb people: focus on donating money, defer heavily to organizations like GiveWell, and ignore the status/relative game. Focus on the absolute number of lives saved by your actions, not on comparing it to others'.

The default action that works for most dumb people is earning to give: invest in index funds and auto-donate with heavy epistemic deference. Accept your limits, but don't stake your psychological health on intelligence.

Replies from: Olivia Addy, iamef, Amber
comment by Olivia Addy · 2022-07-12T15:44:24.542Z · EA(p) · GW(p)

If aiming at dumb people (I really just used this as a title; I just meant averagely intelligent), I don't think using the phrase 'epistemic deference' is ideal :D 

Replies from: Sharmake
comment by Sharmake · 2022-07-13T17:34:02.704Z · EA(p) · GW(p)

I'll probably write a top level comment on why I disagree with this.

comment by emily.fan (iamef) · 2022-07-14T05:59:00.826Z · EA(p) · GW(p)

I feel like certain populations (particularly women) tend to underestimate their abilities, so I find this comment pretty discouraging. My current take is that a lot of people think they aren't good enough for XYZ, but if they take a good stab at XYZ in an environment that is encouraging they may realize that they might be able to do XYZ after all.

I think that a lot of people naturally think that they are "not math people" when they could actually be much better at math.

And I don't think you have to be the best at math or XYZ to contribute. I think that as long as you're willing to put in some effort and are open-minded and willing to grow, you'll probably surprise yourself with how much you're able to do.

@Olivia I'm honestly very impressed with you because you've shown a lot of good traits by making this post. It's clear that you deeply care about making a difference. You were bold and took the initiative to open up about your insecurities. You were agentic in posting this on the forum. You're willing to take feedback from the audience. Keep it up!

Replies from: Olivia Addy, Sharmake
comment by Olivia Addy · 2022-07-14T06:28:25.788Z · EA(p) · GW(p)

Thank you so much! I really appreciate this - it can definitely be challenging to see good qualities I have sometimes...so thank you for posting this.

comment by Sharmake · 2022-07-14T15:13:07.481Z · EA(p) · GW(p)

My reason is that the opposite problem usually occurs, called the Dunning-Kruger effect: people with low IQ vastly overestimate what good they can do (in the EA worldview or perspective, not other perspectives) because they don't realize just how bad they are, while high-IQ people underestimate themselves because they don't realize how good they are. And IQ/g-factor is both important at the statistical or group level and mostly genetic, which is why it can be discouraging to be in an EA job.

But adopting the just-world fallacy helps us nothing here. They can still be psychologically healthy, but they need to recognize their limits and not believe they can do anything they wish they can do.

Replies from: Linch
comment by Linch · 2022-07-14T15:51:21.096Z · EA(p) · GW(p)

I think Dunning-Kruger is overrated. Don't have a canonical source for this but here are some posts.

Replies from: Sharmake
comment by Sharmake · 2022-07-14T16:03:06.541Z · EA(p) · GW(p)

I'll provisionally retract all comments on this thread, because I think I got it majorly wrong, except this one.

comment by Amber Dawn (Amber) · 2022-07-12T09:43:06.834Z · EA(p) · GW(p)

What are the implications you disagree with? 

Replies from: Sharmake
comment by Sharmake · 2022-07-13T17:36:17.378Z · EA(p) · GW(p)

The implications I disagree with are that intelligence doesn't matter at the group level, or that we live in a just world where people in the 70-85 IQ range can work on complex problems like AI safety.

Replies from: Amber
comment by Amber Dawn (Amber) · 2022-07-13T21:00:25.299Z · EA(p) · GW(p)

I didn't mean to imply that intelligence doesn't matter - more that there are different types of intelligence (some of which are actually underrepresented in EA), or to put it another way, strengths other than IQ can also be very useful. 

comment by Anton Rodenhauser · 2022-07-12T06:20:38.191Z · EA(p) · GW(p)

I'd change the title of this post to "EA for non-geniuses".

Someone around 100-120 IQ isn't dumb, but average or above!

comment by Hmash (Hamish Huggard) · 2022-07-16T23:06:38.220Z · EA(p) · GW(p)

And yet, this is a great contribution to EA discourse, and it's one that a "smart" EA couldn't have made.

You have identified a place where EA is failing a lot of people by being alienating. Smart people often jump over hurdles and arrive at the "right" answer without even noticing them. These hurdles have valuable information. If you can get good at honestly communicating what you're struggling with, then there's a comfy niche in EA for you.

comment by Sara Elsholz · 2022-07-12T11:10:15.496Z · EA(p) · GW(p)

Thank you for writing this up and putting it out there (coming from a non-fancy background I can totally relate!). 

One of the best things for my mental health this year was realising and allowing myself to accept that I am not and never will be a top researcher/EA thinker/ the person with the most accurate AI timelines/.... 

However, I do have an ops background and can see that my work is valuable, and there is no need for me to be the smartest in the room, as long as I get stuff done. 

I'd guess there are small things everyone can contribute (even just being a friendly face at an event and talking to new people can be really valuable). 

Additionally, I wish we had some more sociological studies about the EA community (posts like this one by Julia Wise [EA · GW]).

Replies from: Olivia Addy
comment by Olivia Addy · 2022-07-12T15:53:18.281Z · EA(p) · GW(p)

That's really great advice, thank you! I can definitely be a bit hard on myself but recognising that I can still contribute is probably better than getting down about not being an Oxford PhD grad!!

comment by Moritz von Knebel · 2022-07-12T05:20:19.991Z · EA(p) · GW(p)

Hi Olivia! As a former teacher with a degree in Education from a very low-ranking German university, I understand how you feel. I've been there! I am 100% confident there is an impactful way you can contribute to the EA community. Please reach out to me if you'd like to chat about those options, talk through potential career moves or volunteering opportunities!

P.S.: Your post has inspired me to finally create an account here and start commenting and posting - who knows what impact you have created down the line... ;-)

Replies from: Olivia Addy
comment by Olivia Addy · 2022-07-12T15:56:25.901Z · EA(p) · GW(p)

Hi Moritz, thank you for this kind comment, it's really made me smile!! I will definitely reach out.

comment by Aaron Bergman (aaronb50) · 2022-07-14T07:39:06.693Z · EA(p) · GW(p)

Aside, which I’m adding after having written the rest of the comment: I think most EAs would agree that intelligence isn’t a moral virtue. Nonetheless, I think there can be a tendency in EA to praise intelligence (and its signals) in a way that borders on or insinuates moralization.

In a more just world, being called “unintelligent” or even “stupid” wouldn’t seem much different than being called “unattractive,” and being called smart or even “brilliant” wouldn’t leave anyone gushing with pride. 

Nice post. I started writing a comment that turned into a couple pages, which I hope will become a post along the lines of:

  • No really, what about 'dumb' EAs? Have there been any attempts to really answer this question?
    • There seems to be a consensus among smart, well-connected EAs that impact is heavy-tailed, so everyone but the top 0.1-10% (of people who call themselves EA!) or something is a rounding error.
      • I think this is largely true from the perspective of a hiring committee who can fill one role
        • But the standard qualitative interpretation might break down when there is a large space of possible roles.
    • I know this isn't at all an original point and I'm sure there are better write ups elsewhere, but one thing my brain keeps coming back to is that “2*15=30.”
      • A more appropriate model/question might be: “what is n such that 1.1^n=100^7?”
        • I didn’t know what the answer was when writing it down, and didn’t have much of a guess. Exponentials are almost never intuitive, so my type-I brain wouldn’t have been surprised to see “n=19.43” or “n=6.2*10^14” pop up.
        • The real answer is n ≈ 338.
        • The implication here (very tentatively, I haven’t thought much about this) is that finding (and putting to work) 338 people who are 1.1xs is just as good and important as finding seven 100xs.
          • I think (once again, tentatively) that this literal equation might be a not-terrible model of the actual situation.
          • From the perspective of EA as a whole, the right response to a heavy tailed distribution might be to enthusiastically find and make good use of as many 1.1xs as we can!
          • I tentatively think the magnitude 1.1 is justifiably as high as it is because both:
            • (1) there is probably a range of IQ (maybe like 110-130) that is sufficient to have a decent-good grasp of EA concepts and yet not sufficient to push the cutting edge of intellectual work.
            • (2) This means that you can’t buy this labor on the market, since the macroeconomy hasn’t yet adjusted to the existence of EA megadonors!
              • In theory with an ‘infinite EA money printer’, we’d eventually see education+industry trying to crank out people who can do the types of jobs I’m imagining from market incentives alone! I don’t think this is going to happen any time soon.
    • In economics it's pretty important (and I'm pretty sure empirically true quite often) that low- and high-skilled labor can be complements. That is, increasing the marginal product of low-skilled labor causally increases the MP of high-skilled labor.
      • Worth noting though that the actual macroeconomy isn’t a great model for EA because a big chunk of the labor market is low-skilled, whereas in EA it is not.
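Aaron's arithmetic above checks out; here is a quick sketch of the calculation (solving 1.1^n = 100^7 for n by taking logs, just restating the comment's numbers):

```python
import math

# Seven 100x contributors multiply impact by 100^7 = 10^14.
# How many 1.1x contributors give the same total multiplier?
# Solve 1.1^n = 100^7  =>  n = 7 * ln(100) / ln(1.1)
n = 7 * math.log(100) / math.log(1.1)
print(round(n))  # 338
```

So roughly 338 people each adding 10% match seven superstars each multiplying impact 100x, under this (admittedly stylized) multiplicative model.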

A final point: I am virtually certain that most people reading this who haven’t looked at some relevant statistics recently are overestimating the abilities of the average person (of any nationality). As just one stat I quickly came across, “according to the U.S. Department of Education, 54% of U.S. adults 16-74 years old - about 130 million people - lack proficiency in literacy, reading below the equivalent of a sixth-grade level.” 

  • I’m mostly saying this to emphasize that there is a real zone of people like Olivia who can be described not merely as “smarter than average” and “dumber than average in EA” but as “way smarter than average, capable of doing useful cognitive work, capable of understanding core EA ideas, and also dumber than the average EA.”
    • I think at least some more work can and should go to figuring out what a person who falls in this category should do.
  • I’m also saying this to tell (as far as I have read) literally every single person commenting “i’M nOt tHat SmarT I dIdN'T gO tO aN iVy LeAgUe CoLlEgE” that they are wrong, and it’s worth noting that no standard signal of intelligence suffices to get a person accepted to any top-10 US school.
    • The chart below is of applicants to Stanford from my (public) high school.
      • Note that a weighted GPA of 5 would indicate that a person took exclusively AP classes during high school, which isn’t possible because of mandatory non-APs like P.E. and health.
From Naviance (requires login of someone from a high school with access)

 

Well, looks like “I started writing a comment but... ” comment has itself metastasized 🙃


 

Replies from: Linch
comment by Linch · 2022-07-14T15:44:15.525Z · EA(p) · GW(p)

A more appropriate model/question might be: “what is n such that 1.1^n=100^7?”

I don't understand the relevancy of this question. Can you elaborate a bit? :)

Replies from: aaronb50, Imma Six
comment by Aaron Bergman (aaronb50) · 2022-07-14T21:36:22.677Z · EA(p) · GW(p)

Yeah, that was written way too hastily haha. 

The idea is that currently, CB+hiring seems to think that finding seven (or any small number of) people who can each multiply some idea/project/program's success by 100 is a big win, because this multiplies the whole thing by 10^14! This is the kind of thing an "impact is heavy-tailed" model naively seems to imply we should do.

But then I'm asking "how many people who can each add 10% to a thing's value would be as good as finding those seven superstars"? 

If the answer was like 410,000, it would seem like maybe finding the seven is the easier thing to do. But since the answer is 338, I think it would be easier to find and put to use those 338 people.

Replies from: KaseyShibayama, Imma Six
comment by KaseyShibayama · 2022-07-15T06:07:41.545Z · EA(p) · GW(p)

Hmm, I’m skeptical of this model.  It seems like it would be increasingly difficult to achieve a constant 1.1x multiplier as you add more people.

For example, it would be much harder for Apple's 300th employee to increase their share price by 10% compared to their 5th employee.

comment by Imma (Imma Six) · 2022-07-15T05:24:07.591Z · EA(p) · GW(p)

Maybe edit your original comment? I think it's information that is worth explaining more clearly.

comment by Bessemer12 · 2022-07-12T16:12:02.075Z · EA(p) · GW(p)

I definitely felt dumb when I first encountered EA. Certain kinds of intelligence are particularly overrepresented and valorized in EA (e.g. quantitative/rational/analytical intelligence) and those are the kinds I've always felt weakest in (e.g. I failed high school physics, stopped taking math as quickly as I could). When I first started out working in EA I felt a lot of panic about being found out for being secretly dumb because I couldn't keep up with discussions that leaned on those kinds of intelligence. I feel a lot better about this now, though it still haunts me sometimes.

What's changed since then?

  1. I still make dumb math mistakes when I'm required to do math in my current role - but I've found that I have other kinds of intelligence some of the colleagues I initially felt intimidated by are less strong in (i.e. judgment, emotional intelligence, intuitions about people) and even though these can be easily dismissed/considered 'fluffy' they actually do and have mattered in concrete ways.
  2. I've come to realize that intelligence isn't useful to consider as a monolithic category -- most things can be decomposed into a bunch of specific skills, and ~anyone can get better at really specific skills if they decide it's important enough and ask for help. The reason I've kept making dumb mistakes is mostly due to a learned helplessness I developed over years of failing at certain classes in school. This caused me to develop an identity around being hopeless along those dimensions, but this really isn't the same as being "dumb" and I wish I'd recognized this sooner and just tried to solve a few key gaps in my knowledge that I felt embarrassed by.
  3. Spending a lot of time in EA social circles can make you feel like people have value in proportion to how smart they are (again, smart in very specific ways) and people often seem to treat intelligence as a proxy for ability to have impact. But looking back at history the track record of highly intelligent people is a pretty mixed bag (smart people are also responsible for breakthroughs that create terrible threats!), and other character traits have also mattered a lot for doing a lot of good (e.g. work ethic, courage, moral fortitude). It feels good to remind myself that e.g. Stanislav Petrov or Vasili Arkhipov seem like they were pretty average people intelligence-wise; it didn't stop them from preventing nuclear wars, so why should being bad at math lower my level of ambition about how much good I can do?
  4. Another thing that's helped is just having a sense of humor about feeling dumb. I used to feel a lot of shame about asking 'dumb' sounding questions and thus giving myself away as a dumb person in both work and social contexts. This caused a lot of anxiety and meant I was pretty nervous and serious most of the time. Over time I learned that if I asked my real questions in a lighthearted way/lightened up around discussions of stuff I didn't understand people took it well and I enjoyed the interactions more.
  5. I've also realized over time how much the desire to be perceived as smart shapes EA group dynamics. This leads to a) more people doing what I described above (not asking questions or wanting to reveal they don't understand, causing you to feel alone in this feeling), b) talking or writing in unnecessarily complex or erudite ways in order to signal that they're part of the in group. Becoming aware of these dynamics helped me to start opting out of them/trying to intentionally violate these scripts more often.

    I hope this is at least somewhat helpful -- I'm sorry you're feeling this way and I can definitely assure you you're not alone (and I really hope you don't leave EA for this reason)!
comment by annaleptikon · 2022-07-11T15:37:20.068Z · EA(p) · GW(p)

First of all, thank you for speaking up about this. I know very smart people that are scared to just share their perspective on things and I do think THAT is very dumb.

Secondly, I do think donating some money regularly and cost-effectively is a safe bet, and freaking yourself out about "doing more" or even "the most" can easily be counterproductive. Just e.g. focusing on doing advocacy and explaining why evidence-based and cost-effective donations are good choices is still neglected in basically every country. There are many such relatively easy tasks that are great leverage points and in the end, it is precisely about comparative advantage. By you taking up such tasks you shoulder some burdens that are of relatively lower value to others.

Then for objectively difficult problems it is, of course, reasonable not to try to make them "inclusive": there is a reason why there is a minimum height to become a soldier, because the task environment will not change to accommodate certain people. I understand that you understand this. And by understanding this and e.g. not attempting something grandiose that ends up harmful, you are counterfactually already winning.

Then I also do think that "higher" intellectual ability and related work are not necessarily higher utility. There isn't one best or optimal thing everyone should be doing. The more one reads about complexity and systems science, the clearer it is that there is no one optimal thing to do. It also shows that localism (serving one's direct community) is better than it is often portrayed in EA. Creatively and pragmatically solving problems you perceive directly around you is fantastic, and your interest in EA suggests you might be better suited to doing so than others around you.

In general, you can be and become a virtuous person independently of your raw processing powers or academic credentials, and action on all possible levels is needed. 

comment by Luke Chambers · 2022-07-13T08:40:09.758Z · EA(p) · GW(p)

Pre-Warning: Please don't read any of this as a criticism of those people who fit into the super-intelligent, hard-degrees-at-Oxford mould. If you're that person, you're awesome and this isn't critical of you; this comment is just directed at exploring the strengths of other paths :)

Tl;dr at bottom of post for the time-constrained :)

This was a really interesting post to read. I wrote a slightly controversial piece a little while back highlighting that 'top' universities like Oxford, Cambridge and Stanford have a lot of class and wealth restrictions, so that instead of marketing to the 'most intelligent' in society, EA was frequently marketing to the 'intelligent, but wealthy' -- and missing out most of the intelligent but economically stranded people who went to universities lower down the rankings because it made more financial sense. It ended up spreading on Twitter a bit. Most people were interested and engaged; one or two got a bit angry (largely by misinterpreting what I said, but that's likely on me as the writer, hence the pre-warning). That's life. But I would like to highlight that to you - I know a ton of really, really intelligent people who went to average universities. In fact, of the five smartest and most impactful people I know, only 1 went to an elite-tier uni. So don't feel bad for going to a 'normal' uni. Loads of us did. And we're doing great.

As for being smart, I think it's easy to forget that science, law, engineering etc. aren't just labwork/fieldwork. I've seen loads of people who are fantastic in labs and in research, and who produce really excellent research which doesn't go anywhere. Why? Because that's stage 1. Research results need to be highlighted, shown to people who can take action, and put into practice in order to be impactful. This can sometimes happen by accident, but science communications (and communications in general) are actually really complex. In short, a lot of the time these 'super intelligent' people you're intimidated by are actually like Narrow AI - fantastic at one thing, but they fall apart elsewhere. This is why research frequently requires teams. Chances are, by virtue of your own life path, you might be fantastic at helping these researchers translate their research to be understood, and maybe great at getting it to where it needs to be. Research is a long process.

An advantage of your 'normal' life is that you will have learned a lot of skills others might not have. This isn't dunking on people from other walks of life - they are as blameless for their life lottery as we are. What I'm saying is that each person's journey has taught them different skills.

An example from my own story entering EA is that I used to feel similar to you. I grew up on council estates in a completely non-academic family, many of my immediate family went into petty crime, and when I entered academia I felt like an outsider. I actually only started uni at 24 years old because I always felt that uni wasn’t for people like me - until I got a menial job in a STEM field and the scientists there told me otherwise. I actually met my EA group co-founder there, and he was the same as me, both of us with very visible tattoos and strong regional accents, standing out quite a bit among our peers. Often treated as stupid, or lesser, via a lot of (what is now called) microaggressions. Lots of doors slammed in faces. This was sometimes the same feeling in EA, but to a much lesser extent since the community is very welcoming. Seriously, the community here is lovely, and all of the social/class barriers it faces are accidental and they try really hard to correct them. At least in my experience. But humans gonna hume.

Where I notice the difference of background personally is in little things like spending money. In my EA group I'll negotiate almost every transaction to scrape (sometimes literally) pennies in savings because that's how I grew up, whereas I've noticed that a lot of the time other EA group leaders will just pay full price for stuff. That always blows me away. Like they’ll contact a company, ask for a price, then just…pay it. That brings tears to my eyes! But a lot of people have lived their whole lives not having to scrape pennies, so they don’t even know it’s possible to do this. So this odd little thing in my past that makes me different from a lot of EAs I meet actually helps the EA community by saving money for redeployment. I found a way to make my difference/insecurity my strength.

As an example, I just got £800 in funding to buy 40 things at £20 each for my group, but instead of ordering online I contacted the company and negotiated a bulk order discount with their sales department, which meant that some money could go back to EA and be better deployed elsewhere. It's a small example, but it shows that nothing occurs in isolation and there are a lot of valuable skills in EA. I often joke that EA funds could triple their impact by forgetting financial advisors and putting someone on Universal Credit in charge of their money! Emphasis on joke :)  But it shows that just because you're not in a position (or don't have the desire) to be applying for research fellowships, that doesn't mean EA isn't for you. You might be a wizard at events organising, grant applications, etc.

Another thing you need to bear in mind, which links back to what I said about teams, is that a lot of the debate/research on the forums is both hyper-specialised and theoretical. You’re not going to understand an extremely niche computer science conversation, or ethics conversation, and it’s unlikely that anyone on the forums understands everything. This is not your fault, and it’s actually one of the EA community’s other (very few) flaws. Its public comms is pretty bad. It has real difficulties translating its ideas, theories, and research into something that is understandable and digestible. EA orgs are generally much better at it than forum posters, but it's still an issue. That’s not on you; that’s a communication skill that’s learned over time, and people in very theoretical and isolated environments tend not to learn it. I actually worked as a science communicator once, where my job was to translate STEM research into something the public could understand. It’s harder than you think, and something I think should be offered as a training course to EA researchers and groups. Good idea for any EA course-makers out there!

Granted, as you said sometimes these debates need complexity because the ideas are complex. That’s fair, and often very necessary. But if you don’t understand a post directed at the whole community it’s not because you’re dumb, it’s because the poster doesn’t understand how to present research to an interdisciplinary audience. I was the same until I worked that job, and even now I work in the legal field surrounding AI so I constantly struggle to explain law concepts to software engineers and software engineering concepts to lawyers. It’s tough, and to be honest I struggle to understand really complex ideas from both sometimes. At least once a week I send a STEM or law paper to a colleague and ask them to explain it like I’m a 5 year old. There’s no shame in it, because they do the same to me!

As a final reminder - though 99% of the EA community are great, there is inevitably (like in all human communities) that 1% who think they are far more intelligent than average, wondergods in their field, and that EA is for people like them and not their underlings. I haven't met one in real life yet, but I've seen one or two in comments sections over the years. Bear in mind that 1) they're often very wrong about that and lack self-awareness, and 2) I currently cooperate with a lot of government bodies in deciding policy relating to AI, and those types of people have their research cast aside a lot of the time due to a lack of social skills, a misunderstanding of policy norms, and a lack of experience outside of their field. If you ever bump into someone like that, who tells you that their impact is greater than yours, don't listen. They're probably in for a big shock when they find out how much teamwork is required to reach the impact finish line, and how hard it is to build a team with that attitude!

On that note, don't put yourself down as a sociologist. I just co-published a paper in the AI field and 2 of the authors are sociologists. Sociology is vital in areas such as governance and policy across AI, animal welfare, poverty, etc. Don't put your field down because it doesn't have graphs and algebra. Your field is what turns those things into actual, measurable actions that can work. There's also no such thing as an 'easy' degree.
 

TL;DR


You’re not dumb at all; sometimes EA is just poor at public/interdisciplinary communication, both externally and internally. Additionally, the social barriers which EA sometimes struggles with can cause accidental alienation, but rest assured you’re not alone - in fact, everyone in this community probably feels like that sometimes. We deal with extremely difficult concepts across literally dozens of disciplines in areas of knowledge we’re only beginning to explore. Anyone who isn’t thinking ‘wtf is going on’ at least once or twice a day isn’t paying attention. You'll meet the odd person who likes to show how much smarter they are than everyone else, but no-one wants to work with them and their careers end up dying.

Please feel free, and this counts for anyone reading this as well as OP, to reach out to me on the forums or via email if you ever feel like this. I’m always happy to chat with other EA members, as someone who has faced these feelings before, and will happily do what I can to open doors for you. :)

 

Replies from: Olivia Addy
comment by Olivia Addy · 2022-07-13T13:39:06.915Z · EA(p) · GW(p)

Thank you so much for this comment! The points you made about the community needing different types of skills is great and I totally agree...your comment (and lots of others) has definitely helped open my mind up and think a bit more about ways in which I could be useful here...even if it's outside the traditional view I had of what an EA is...so thank you for that!!

comment by Holly_Elmore · 2022-07-12T20:24:12.427Z · EA(p) · GW(p)

"Whenever I come on the EA forum I literally feel like my brain is going to explode with some of the stuff that is posted on here, I just don't understand it."

Dude, I have a degree from Harvard but it's in biology and I feel this way about a lot of the AI stuff! I admire your humility but you might not be that dumb.

I think your critique is totally spot-on, and I think a better EA community would have room for all kinds of engagement. When longtermism became dominant (along with the influx of a lot of cash, so that we were more talent-constrained than money-constrained), we lost a lot of the activities that had brought the entire community together, like thinking a lot about how to save and donate money, or even a lot of the emphasis on having local communities. We also stopped evangelizing as much as our message got more complicated and we became focused on issues like AI alignment that require specific people more than a large group of people.

But even though I frequently say we should shore up the community by bringing back some focus on the original EA bread-and-butter causes like global health, I don't know if the current community is really making a mistake by focusing our limited efforts here. I think having more kinds of people in the community would be great, but not if it detracted from the kind of discourse that you're saying is over your head. I'm not sure how to pull this off.

Have you thought about organizing a group yourself to focus on the ideas you are interested in? I think it would be really good for the ivory tower part of the community to have more EA classic groups out there, and it wouldn't be taking anyone's efforts away from this part of the community. 

Replies from: Olivia Addy
comment by Olivia Addy · 2022-07-13T13:12:34.212Z · EA(p) · GW(p)

Hey no I haven't thought about setting up a group like that but I definitely think it could be a good idea! The original ideas I learnt about right when I got involved are the most exciting to me...I'm sure others feel the same

Replies from: Sophia
comment by Sophia · 2022-07-14T20:03:48.228Z · EA(p) · GW(p)

Maybe the Giving What We Can brand is good for this? (I'm not at all sure; this is really a question in my mind)

It is obviously focused on donations, but if it were a university group, donating could largely be seen as something worth discussing now and doing later, post-graduation?

Replies from: Sophia
comment by Sophia · 2022-07-14T20:05:49.453Z · EA(p) · GW(p)

It obviously depends a lot on which ideas seem most compelling to you and the extent to which they are captured by GWWC

comment by Ulrik Horn · 2022-07-12T07:58:57.441Z · EA(p) · GW(p)

Wasn't www.probablygood.org set up to address the large pool of talent that might have a hard time working on the cause areas identified by 80k hrs? Not sure if others have mentioned this, but ctrl+f did not show me any mentions of this org.

Replies from: Olivia Addy
comment by Olivia Addy · 2022-07-12T15:55:19.126Z · EA(p) · GW(p)

I had no idea this existed but will definitely check it out.

comment by Monica · 2022-07-12T15:27:46.853Z · EA(p) · GW(p)

One thing that I think is helpful is to do the best you can to separate "EA the set of ideas" from "EA the set of people." People involved with EA form something akin to a broad social group. Like any social group, they have certain norms and tendencies that are  annoying and off-putting. Being snobbish about intelligence is one of these tendencies. My advice is to take the parts of "EA the set of ideas" that work for you and ignore the parts of the community that you find annoying. Maybe for you that means ignoring certain kinds of forum posts or maybe it means not going on the forum at all. Maybe it means giving 2 percent of your income to an effective charity and not worrying that you don't give more. Maybe it means being on the lookout for a job where you could have a higher impact but targeting organizations that are not EA-branded. The bottom line is that you do not need to be involved in EA the community to take EA the set of ideas seriously. 

 

This is not at all to say that you cannot do high-impact things while being engaged in the community. I am happy with the impact I have, and I went to a state school with standardized test scores that were nothing to brag about. This is just to say that if you find the community annoying, you don't need it to "do EA".

Replies from: Olivia Addy
comment by Olivia Addy · 2022-07-12T16:06:33.406Z · EA(p) · GW(p)

This is definitely a good point. I have had really great experiences with every EA I've met and actually talked to - I guess it's what I see online that I've struggled with. I could definitely make more of an effort not to get so tied up in the parts which don't work for me.

comment by Sophia · 2022-07-14T12:10:37.322Z · EA(p) · GW(p)

I don't think it is your capacity for impact, your intelligence or anything else that would stop you from adding a tonne of value to the effective altruism project. Your post is insightful (a sign of intelligence if I were to look for signs) and I also think that intelligence and competence are complicated and are so far from perfectly correlated with impact that worrying about them on an individual level seems counter-productive to me (lots of people have said similar things in the comments, but I felt this was, nonetheless, still worth reiterating). 

 The biggest thing I see is whether the effective altruism community is a place where you feel you can add value and, therefore, whether you can be here and be mentally healthy. Regardless of reality, how you feel matters. I think people struggle in this community not because they can't contribute, but because they feel they can't contribute "enough", or at all. I think "enough" is a low bar to pass in reality (where enough is "the world is better because of your contribution than it would have been otherwise": thanks comparative advantage).

 I think feeling like you're enough is the real challenge here. 

Why I think people like me and not-so-like me can contribute to this community whether or not we are above average and what really should be at the core of deciding whether this community is for us

 

An anecdotal account of the unimportance of being analytical or finding forum posts relatively easy to read (which often gets confused for "intelligence" in this community) in determining your ability to contribute 

In my experience, in whatever ways I guess I've had an impact, I think being in the right place at the right time and caring a tonne has mattered much more than my intelligence. I'm rarely the smartest person in the room (and I think this is a good thing!). However, in places where I've messed up and could have had much more impact, I feel like self-love was a bigger factor than intelligence. I've seen much more intelligent people than me mess up. I have seen people who I have judged, at some point, to be much less intelligent than me do much more for others than I ever have. 

What I lack in intelligence, I feel I maybe make up for in relatability, and I think that actually is a massive deal. I am weirdly analytical, so fitting in was strangely easy for me, but I really do wish all the people I know who are more competent than me in so many different ways could also feel the sense of belonging that I feel here (there are a lot of people who would probably outperform me on an IQ test and/or who are much more successful than me, but who can't "trick" people in conversation as well as I can into thinking they're smart/insightful - a skill in itself in my eyes).

Even if it were true that, on average, intelligence - as judged by EA forum post comprehension, people thinking you're smart, or doing well on some kind of standardized test - was somewhat correlated with impact, impact is extremely noisy.

I am confident that so many people smarter than me, even if they are trying, are going to have less impact than me, and so many people less smart than me by any reasonable measure are going to have more impact! The world is messy. Impact is noisy. Any social scientist worth their salt will tell us that people are not like particles; psychology (unlike physics) does not have the kind of narrow confidence intervals that would make this sort of reasoning all that meaningful. Nonetheless, I'm going to continue to try and make broad, sweeping generalisations.

Things like caring a tonne, having really deep levels of self-acceptance and self-compassion (which is something I believe anyone can get with a good enough psychologist, if they have the time and money for one), or conscientiousness often get you further, and seem likely to me to be more predictive than IQ. I'd guess even all of these traits put together will still be pretty noisy if we try to apply them to any individual.

Sometimes I catch myself feeling insecure in this community because I'm not competent or smart or conscientious enough to have "enough" of an impact. But what even is enough? 

The way I see it,  it doesn't even matter if it is true that I can contribute much less than others because I am less smart or less conscientious or less "whatever-thing-I-am-insecure-about". I am pretty certain, if I am honest and upfront about what I can and can't do, if I'm self-aware enough to communicate clearly what my strengths and weaknesses are, due to comparative advantage, I can certainly find a way to contribute something. 

Thanks to comparative advantage (and having a very good psychologist), I honestly believe I can make this community better able to help others than if I weren't here at all. That is enough for me (that and the fact that being in this community makes me happy). 

 Others being able to contribute much more than me does not make my contributions, in absolute terms, any less valuable. Others being able to contribute much more than me just means much more is going to get contributed! It literally just means the world is a much better place than it would have been otherwise! Those people are using what they have to make the world a better place and that's pretty wonderful. I'll do the same in my own little way. 

I can't change my baseline level of intelligence.[1] I can still contribute. I can still do something about the mess of a world we live in. 

 I am in an entry-level job, earning a fairly average amount for the country I live in and I can speed up the end of absolute poverty by donating 10% of my income. 

Even if this community has more money than before, it does not have nearly enough to end absolute poverty, prepare us for future pandemics, and give AI safety researchers a fraction of the resources AI capabilities researchers have, let alone anything else.

This community doesn't have, nor do I expect it to have any time soon, the sorts of resources that will allow it to solve all the world's most pressing problems. I don't need to be able to contribute as much as Sam Bankman-Fried for my contribution to matter a tonne more to the person I helped than that same money mattered to me.

 I don't need to contribute as much as others to feel like I belong here and to feel like the amount of value I am capable of adding is enough. Comparative advantage means that I know that if I'm self-aware enough to place myself well, I'm pretty confident that I can contribute regardless of how my baseline gifts compare to others. 

Being happy here and believing (on expectation) I can contribute is enough of a reason for me to feel fully justified in sticking around all these people who are even more gifted than me (I know it's hard to imagine that someone could be more gifted than a random ADHD mess of a woman who is in an entry level job in her late-20s, shocker!). 

 I do wish that people who are smart or competent in ways that I am not felt the levels of belonging that I feel in this community. I don't feel like I can easily bring along people who aren't as weirdly analytical as me (regardless of whether they are more intelligent or more competent than me according to any traditional measure). I also know that I don't have to be extraordinary to make it easier for extraordinary people to feel like they belong here. 7 years ago, in my local group, I was often the only woman in the room. Nowadays, I honestly don't know whether there are more women or men in my local group. Having another woman there first made it easier for the next woman to feel like this is the sort of place where she could belong and thrive. Extraordinary women have felt more welcome in rooms that I was in, simply because there was another woman there too (turns out you let one in, you let us all in :P). One 'average' woman was all it took (and by the way, since when is being average a bad thing?? Half the population is below average, and if I'm in that half on any given trait, then that's totally fine with me!). I hope that being female here is no longer weird (I think my local community might be better in this way than others, but I honestly have no idea and I've spent too long on this comment to be bothered to look up the latest EA demographic survey).

 I think being different but still completely and deeply passionate about the effective altruism project brings a tonne of value to the community. 
 

You being good for the community doesn't mean the community is good for you
 

I don't think being able to contribute means you should necessarily stay engaged with this movement!

For example, if the effective altruism community was bad for my mental health in the ways I've seen it be bad for other people I've known, I would think I should leave because staying in a relationship that is bad for you is bad for everyone (my guess is this holds in both romantic partnerships and for social movements too). 

Likewise, if you feel that this community is not good for your mental health and that you'd feel more valued and fulfilled elsewhere and maybe even better able to contribute out of this community than in it, then leaving for that reason is a great idea. Mutually beneficial relationships here only! 

I'm not sure how to help create a community about effective altruism that also allows for all sorts of different sorts of people who care deeply about others and who act on their best guess of how to do that a significant proportion of the time. I think that keeping effective altruism about effective altruism is challenging. Nonetheless, I believe that we can become a lot more inclusive with time by putting an extra bit of effort into making people who are a bit different, but still fundamentally care deeply about effective altruism and also understand what it is, feel like they belong here. 

I know not all of our experiences are going to be the same, but I think swapping our own personal stories might help more people carve out their space and their sense of belonging here or elsewhere so understanding every forum post doesn't feel like the defining feature of our community (I don't think everyone understands every forum post anyway, even the people who are much more typical members of this community than me). 

I might write out my experience in this community and finding belonging here in more depth in a top-level post (but I'm so ADHD that I've unfortunately got a graveyard of things that matter to me that have become quite ugh that I hope to get to long before this). 

While we might be able to evaluate ourselves somewhat accurately on any given trait compared to our peers, no one trait tells us all that much about how much we can contribute and imposter syndrome can certainly make us blind to our strengths. 

Even the objectively best people in the community (this was partly ironic, but I'm going to go out on a limb here and say that Sam Bankman-Fried, is, in fact, going to donate more than me and is, therefore, objectively going to have more of an impact through his donations than me - a controversial statement indeed) are going to feel imposter syndrome around some people sometimes until they see a great psychologist or unless they were born with a very healthy disposition that makes them much less likely to indulge in self-loathing (from Sam's podcast episode, he seems like an example of someone who just is mentally healthy but he still was the example I wanted to use because amount donated is such an uncontroversial measure of impact: point is, extraordinary people sometimes have imposter syndrome too and so do more salt-of-the-earth, proudly average people like me and it really doesn't seem to have much to do with anything of substance).

 In the meantime, I think extreme self-compassion goes a long way when you feel like you're not good enough to be here, because I think it is literally a fact, proven by economists under assumptions weak enough to hold well enough in the real world afaict, that everyone can contribute (with some minor caveats that I am happy to go into in more depth, but they are more about mental health and self-awareness and not really much to do with the traits discussed by the OP)!

Focusing more on the fact that everyone can contribute, and letting go of this idea that "enough" requires anyone to be smarter or better than anyone else, seems more wholesome, more useful and more true than focusing on many of the other things that make me, the OP, and many others unnecessarily miserable for a moment, until something like Ajeya's talk reminds us that that comparative crap isn't what all of this is supposed to be about (Sam BF donating does not make my donations any less valuable in absolute terms, beyond him taking all the best donation opportunities, which he obviously hasn't, because as far as I've observed, absolute poverty is still, tragically, a thing of the present, and GiveDirectly could do a lot about that when we run out of all the interventions that are six times better than cash transfers).

Even if it is true that some people can understand more forum posts than others, and that this is well-correlated with something we might care about (and I am relatively unconvinced of this personally), it's so noisy, and also not actionable enough, for it to do anyone any good to waste time and energy on thinking "am I enough?", because the answer to that question is a resounding yes. If being in this community is healthy for you, then you are definitely much more than enough to be a contributor! Now your full attention can go to the real question, which is: is this community enough for you? It has some good bits and some bad bits, and for different people the answer to whether the good outweighs the bad will absolutely be different. This community has been exceptionally kind to me. I am aware that this is not everyone's experience. Furthermore, everyone being pretty nice or virtuous doesn't mean this community, overall, can't have toxic elements anyway.

At the end of the day, we're all probably smart enough to help others quite a bit over our lifetimes. That's all there needs to be to it.[2] The only real question is not "are you enough for this community?" but whether this community is good enough for you and your mental health for engaging with it to be worth it. It is fine to find a place to contribute here or elsewhere. This community is here if it's helpful and should absolutely be abandoned if it isn't, for whatever reason. As others have said, you can still bring the bits you love elsewhere, perhaps somewhere with zero association with this community, and leave the rest. Bringing the best stuff and leaving the rest can not only be good for a person's well-being but can also be hugely impactful in its own right, if impact is an important consideration for you (it doesn't have to be! For the OP, it sounds like it probably would be, but it's obviously fine for this not to be a priority, and it is especially fine for it not always to be a priority).

Or if you decide staying engaged in this community is good for you, that's great too and we're lucky to have you.

PS. I don't think how average we are actually changes the whole comparative advantage thing being a slam-dunk argument that anyone can probably find a way to contribute (but also, being able to contribute is neither here nor there for deciding whether to stay involved in this movement; the real question is whether this movement is healthy for you, and it's totally fine for it not to be and for you to find purpose, meaning and value elsewhere for that reason!).

I think I'm way above average in how analytical I am and way below average in my ability to be on time to things (I'm working on it, though). I don't think it matters much to the central point of this whole comment how average I am. I know that I have found it easier to belong here than others who, on objective tests of intelligence, perform better. I know that I'm also not that thick (but so, clearly, neither is the OP). I also don't think the solution to this issue is to reassure people that they are smart. Nor is the solution to ignore the fact that, on a population level, more money or intelligence or charisma or warmth or conscientiousness, or being prettier or taller (woe is me -- my legs are forever a little stumpy because I have no intention of getting leg-lengthening surgery, and statistically that really does harm my expected earnings and therefore my expected donation-wise impact), or being born rich all give a leg-up in life and in any goal you want to pursue, including social impact!
However, I think 1) this stuff is oh so noisy; 2) everyone who is self-aware and honest can contribute something, even when we don't have the statistically optimal amount of a trait (despite my short legs, I'm pretty sure I can earn enough to donate a life-changing amount to someone in the world); 3) saving a life is still saving a life, whether you are the only person in the world who has ever done it or whether everyone else is saving hundreds: it may not feel the same, but it is, actually, still saving a life, and it still matters as much as saving a life matters, or any other impact you're able to have by focusing on what is useful and letting go of the rest; and 4) people who are different from the existing community, who understand what effective altruism is and care deeply about it, can contribute a lot by paving the way for a wider range of people to understand what scope-insensitivity is about, why it might matter, and how many different types of people can help others a tonne. This is unlikely to be the only way people who differ from the mean community member can contribute, but since high-fidelity communication is darn hard in the best of times, and the more people differ, the harder crossing inferential gaps [? · GW] can be, it is a very significant contribution.

  1. ^

    Likewise, I could say the same things about conscientiousness, which I, personally, am much more insecure about. We all have our thing that we think matters so much more than everything else in our own eyes -- and like intelligence, conscientiousness is also pretty predictive on average, and also probably has a baseline level that people are born with, which can be worked on and which a growth mindset can almost definitely help with to at least some extent.

  2. ^

     (But we're all human and should keep that in mind too, so it's okay to do less good for good reason, or for no reason other than our own sanity if that's at all in jeopardy -- please look after yourself if that's what you actually need right now!)

comment by Anton Rodenhauser · 2022-07-11T20:46:24.704Z · EA(p) · GW(p)

I couldn't agree more with this post! E.g. I feel like there should be an "80k for averagely smart people in the 100-130 IQ range".

Replies from: Konstantin Pilz, Lumpyproletariat
comment by Konstantin (Konstantin Pilz) · 2022-07-18T17:14:38.041Z · EA(p) · GW(p)

I honestly don't see why.
I think I'm well below 130, and still, 80k advised me. The texts they write about why AI might literally kill all of us, and what I could do to prevent that, are relevant not only for Oxford graduates but also for me, who just attended an average German university. I think everyone can contribute to the world's most pressing problems. What's needed is not intelligence but ambition and open-mindedness. EA is not just math geniuses devising abstract problems; it's hundreds of people running the everyday work of organizations, coming up with new approaches to community building, becoming politically active to promote animal welfare, or earning money to donate to the most important causes. None of these are only possible with an above-average IQ.

comment by Lumpyproletariat · 2022-08-04T03:45:02.486Z · EA(p) · GW(p)

The 100-130 IQ range contains most of the United States' senators.

You don't need a license to be more ambitious than the people around you, and you don't need an IQ of 131 or greater to find the most important thing and do your best. I'm confident in your ability to have a tremendous outsized impact on the world, if you choose to attempt it.

comment by Greg_Colbourn · 2022-07-12T21:05:29.775Z · EA(p) · GW(p)

Something I wrote [EA(p) · GW(p)] a little while back regarding whether EA should be a "narrow movement of people making significant impact, or a much broader one of shallower impact":

I've sometimes wondered whether it would be good for there to be a distinct brand and movement for less hardcore EA, that is less concerned with prestige [? · GW], less elitist, more relaxed, and with more mainstream appeal. Perhaps it could be thought of as the Championship to EA's Premier League. I think there are already examples, e.g. Probably Good (alternative to 80,000 Hours), TLYCS and OFTW (alternatives to GWWC), and the different tiers of EA investing groups (rough and ready vs careful and considered). Places where you feel comfortable only spending 5 minutes editing a post, rather than agonising about it for hours; where you feel less pressure to compete with the best in the world; where you are less prone to analysis paralysis or perfect being the enemy of the good; where there is less stress, burnout and alienation; where ultimately the area under the impact curve could be comparable, or even bigger..? Perhaps one of the names mentioned here [EA · GW] could be used.

comment by Henry Howard · 2022-07-11T13:32:42.496Z · EA(p) · GW(p)

You don't need to have a PhD to give a portion of your income to effective charities and do a lot of good. That's part of what makes effective giving such a powerful idea.

Replies from: Olivia Addy
comment by Olivia Addy · 2022-07-12T15:54:39.470Z · EA(p) · GW(p)

Agree, and I do donate a part of my income - my issue was that I wanted to do more than just donate. I wanted my career to be committed to making the world a better place, and that is where I was getting stuck.

Replies from: Imma Six
comment by Imma (Imma Six) · 2022-07-12T19:56:28.549Z · EA(p) · GW(p)

I can totally relate to the feeling of wanting to do more than "just donate". I strongly agree with Henry (and others) that donating is an accessible way to have an impact, small donations from individuals are valuable. But "just donate" may not be enough for people with a strong altruistic motivation.

For some people, donating is not only a way to have some impact, but actually the way to have the most impact with their career, given their limited talent. I don't know if that is the case for you, or for the person reading along here, but it might apply to some people. I do believe that it applies to me: I have been working in normal jobs for 8 years and donating a significant part of my income.

In my experience, being altruistically motivated and "just donating" is a challenging combination. My monkey brain wants connection to the community, and to the organization and cause I am donating to. If I were less motivated, I would be satisfied just throwing 10 percent of my income at whatever charity GiveWell recommends. If I were less "dumb" (had a different set of talents), I would do full-time direct work. I experience a lot of excitement and commitment for EA causes, but I need to hold myself back, because my priority is to optimize my income and keep my living budget modest. What helped me deal with it is reminding myself that it is just bad luck that I have to live with both high motivation and unfitting abilities - and that doing something is much, much better than doing nothing (see also this comment above) [EA(p) · GW(p)].

comment by freedomandutility · 2022-07-11T11:09:58.696Z · EA(p) · GW(p)

I don’t relate entirely but I do feel too dumb for AI Safety stuff in particular and don’t understand some posts about it, even though I think it’s very important.

I think community building, EA-related political advocacy, personal assistant jobs in the EA sphere, and EA-related content creation on social media can be extremely high impact and might be fairly accessible?

comment by Tsunayoshi · 2022-07-17T23:21:15.469Z · EA(p) · GW(p)

Very good post! Some potential tips for how people who have had experiences similar to what you described can feel more included:

  1. Replacing visits to the EA Forum with visits to more casual online places: various EA Facebook groups (e.g. EA Hangout, or groups related to your cause area of interest), the EA Discord server, probablygood.org (thanks to another commenter for mentioning the site).
  2. Attending events hosted by local EA groups (if close by). In my experience these events are less elite and more communal.
  3. If attending larger EA conferences, understanding that many people behave like they are in a job interview (because the community is so small, a reputation as a smart person can be beneficial), and will consequently e.g. avoid asking questions about concepts they do not know.
Replies from: Olivia Addy
comment by Olivia Addy · 2022-07-18T09:55:19.223Z · EA(p) · GW(p)

Hey thanks for these really concrete steps! I appreciate you highlighting these other parts of the EA community as I really wasn’t aware of them before I posted this.

And that’s a really great point about the conferences - think it’s key to be mindful that people might be trying to portray a certain image.

comment by Duc Nguyen · 2022-07-26T22:31:00.662Z · EA(p) · GW(p)

This really resonates with me. Reading 80,000 Hours makes me want to jump out the window. Every article is something like "oh, Jane Street is a better way to spend your time than trying to join Downing Street". What about us intellectual peasants? What about those of us in the third world? What can we do? We exist in the billions, yet we are seen as the victims, not the solution.

comment by Fermi–Dirac Distribution · 2022-07-20T20:12:43.451Z · EA(p) · GW(p)

This resonates a lot with me. I actually studied physics at a pretty good college and did very well in all my physics classes, but I was depressed for a long time (two years?) [ETA: more like two and a half] for not feeling smart enough to be part of the EA community. 

I’m feeling better now, though that’s unfortunately because I stopped trying so hard to fit in. I stopped trying (and failing) to get into EAGs or get hired at EA orgs, and haven’t been reading the EA Forum as much as I used to. I still… have longtermist EA values, but I have no idea of what someone like me can do to help the future go well. Even donating part of my income seems approximately useless, given how longtermism is far from funding-constrained.

Replies from: Denkenberger
comment by Denkenberger · 2022-09-02T21:56:27.844Z · EA(p) · GW(p)

I'm sorry you feel this way. Have you tried volunteering [EA · GW] to skill up? I think a physics major could be a good quantitative generalist researcher. Also, it is a common perception that longtermism stopped being funding-constrained with the entry of FTX, but they only funded about 4% of the applications in their open round [EA · GW]. And there are still longtermist organizations that are funding-constrained, e.g. ALLFED (disclosure: which I direct).

comment by Rahela · 2022-07-13T08:47:05.842Z · EA(p) · GW(p)

Olivia Addy [EA · GW] I'm glad you wrote this post. Today my supervisor forwarded it to me after our last conversation, when I told him that I'm too stupid to work for such a big organization as Anima International. It all began when, as a person with a strong interest in insects, I read a discussion on insect genetics and couldn't grasp any of it; even though I'm in the middle of Richard Dawkins' book "The Selfish Gene", not much in my head has cleared up. Then it occurred to me that maybe I don't deserve this job.

Reading the comments below, I know I'm not stupid. I probably know more about some topics than many of the people around me, but getting into a pro-animal environment where it's so important to act effectively, my head went through a lot of changes. The beginnings were difficult, as were my beginnings with this forum, which I didn't understand - it's probably the only forum where comments are sometimes like separate posts and deserve their own development. Someone told me that even William MacAskill is afraid to add posts here :) I don't know if this is true, but then I decided to create an account here and even created a draft of a post.

At some point, after reading a few posts on this forum, I decided that I wanted to participate in The Blog Prize - as a blogger I have no problem with writing. Nick Whitacker added me to the Slack for writing posts for this competition, and my enthusiasm quickly cooled when I saw what people were writing in their posts. Once again, I felt too stupid. I wrote feedback to Nick saying that I thought I could only write at a basic level, which Nick agreed with, saying that such texts are also needed - but to this day I haven't written a word. And that makes me feel even worse. Today, I know that I should do one thing at a time.

To sum up, I have the impression (I don't have any good data for this) that I am undermining myself. I understand that I have my own limitations - e.g. I can't draw logical conclusions quickly, some content here is completely incomprehensible to me, and I'm not so good at math - but I want to be part of this community, and I want to be part of Anima International. I believe that if I were too stupid, no one would have hired me. This shows me that most of my limitations are ones I create for myself. A colleague of mine once told me, "Rahela, I didn't know all this either, but I read a lot and learned." I think this is the solution. It scares me that I am already 42 years old and maybe it's too late for me, but I am not going to give up - so Olivia, you are not alone :)

Replies from: Olivia Addy
comment by Olivia Addy · 2022-07-13T13:14:22.856Z · EA(p) · GW(p)

Thank you for this comment. It's nice to know that I'm not the only person feeling this way and I totally relate to the feeling of undermining yourself, this is something I am trying to work on too!

comment by Harrison Durland (Harrison D) · 2022-07-11T23:34:01.636Z · EA(p) · GW(p)

Many people have already made comments, but I’ll throw my 2 cents into the ring:

  1. I don’t come from an Ivy League school or related background, nor do I have a fancy STEM degree, but feel decently at home in the EA community.
  2. I often thought that my value was determined by how “well” I could do my job in my intended field (policy research) vs. whomever I’m replacing. However, one of the great insights of EA is that you don’t have to be the 99th percentile in your field in order to be impactful: in some fields (e.g., policy research arguably) the most important consideration is “how much more do you focus on important topics—especially x-risk reduction—vs. whomever you’re replacing,” given how little incentive/emphasis our society places on certain moral patients/outcomes (e.g., poor people in a foreign country, animals, future sentient beings).
  3. There is very likely something at least moderately high-impact that one can contribute to even if one is not traditionally very "smart," especially now in the longtermist/x-risk reduction (and animal welfare) space. (I personally don't know as much about other fields, but suspect that similar points apply.)
comment by SaraAzubuike · 2022-07-12T23:14:12.667Z · EA(p) · GW(p)

I don't like how all the comments basically reiterate that smart people have more impact. Of course smart people do. But one avenue for EA to actually make a difference is to appeal to the masses. For policy to change, you have to appeal to the body politic. And for that, we need a diverse range of skillsets from people who are quite different from the average EA (for example, having more super-social salesperson types in EA would be a net positive for EA).

comment by tarianllwyfen · 2022-07-19T15:12:01.858Z · EA(p) · GW(p)

I agree with a lot of what other people have said here. The key message I would emphasize is that 1) yes, the EA community should do a better job at including this kind of diversity (among others), but 2) you really shouldn't think too much about "intelligence" in finding your path within EA, because there is almost certainly SOME way you can make a significant contribution given your unique conditions. To the extent you're concerned about "intelligence" being a limiting factor, I think this should also incline you to choose topics and types of work that are intrinsically interesting to you. If it's interesting to you, you'll naturally engage with the topic intellectually while working in the area (even if your role is peripheral to the intellectual work). Over months and years, the cumulative effect of continual, immersive engagement can be much greater than you would expect when asking "will I be able to learn X?" before starting out.

I'll add my own experience, which I think is similar, in case it feels reassuring to you too: I went to a good-but-not-great large public university in the United States where I studied humanities, and worked for the first few years of my career in not-particularly-impressive organizations before finding EA. I was rejected from career coaching by 80k (I think both because they were relatively capacity-constrained at the time and because I didn't have any obviously impressive line items on my CV), but was still very interested in the cause area that I'd chosen. I was never really worried about "intelligence" as a limiting factor on my overall impact per se, but I was definitely not sure I'd be able to master the technical aspects of the area, which did influence my expectations for what kind of roles would be a good fit for me.
Because I found the topic interesting, though, I kept reading up and eventually got into a junior, operations-type role in a prestigious, relevant organization. I think I got this role not at all due to my self-study of the topic, but partly because the unimpressive, non-impactful thing I was doing at the time was somewhat similar to the role I was hired for, and partly because the hiring manager misperceived it as being more impressive/prestigious than it actually was. I think this was basically random luck.
In any case, at this organization I found that while the very smartest people indeed seemed much smarter than me, the people in roles that might imply they were much smarter than me were not actually all clearly smarter than me (nor did they have fundamental skills/aptitudes that I lacked); they had just learned different things, which I could also learn, and many of which I did learn through continued self-study (again, I found it intrinsically interesting besides it being potentially important/impactful), asking colleagues questions, and just absorbing stuff over time.
Since working at this organization, my career in the field has basically gone smoothly, including having opportunities to move into (what people generally perceive as) "smart person" roles or to advance along my previous path. At some point, 80k even basically told me that they had probably made a mistake in not offering me career coaching, I think because they had heard through the grapevine that I was generally adapting well to the environment. My impression is that even if I had been much less capable of understanding the technical aspects of the area, there would have been no lack of opportunities for me to have an impactful career in the space.

comment by Patricio · 2022-07-13T20:28:21.055Z · EA(p) · GW(p)

I also want all kinds of people in this community. And I believe that no matter your intelligence, you can have a good impact on the world, and most people could even find a job that's EA-aligned. For example, I feel like community building could be a place for people without much formal education to do valuable work, and even to help solve this particular problem (making EA more accessible). I think that creating more of those jobs would make EA more popular, and that this is the way to get the most people doing direct work, GPR, donating, going vegan, and voting well, while also making a lot of them happier by giving them purpose and a community they can be part of.

There are ways that could go badly, though, like taking up too many resources or falling into the meta trap.

comment by ekka (Eddie K) · 2022-07-12T06:09:45.999Z · EA(p) · GW(p)

Great post! I think this is a failure of EA. Lots of corporations and open-source projects are able to leverage the efforts of many average-intelligence contributors to do impressive things at a large scale through collaboration. It seems to me like something must be wrong when there are many motivated people willing to contribute their time and effort to EA but who don't have many avenues to do so other than earning to give and maybe community building (which leaves a lot of people who feel motivated by EA with no concrete ways to easily engage). For direct contributions, EA seems to prefer a superstar model, in which one has to stand out in order to contribute effectively, over a more incremental, collaborative model, in which the superstars would still have an outsized impact but the bar for anyone to make an incremental contribution would be lower. Maybe there are good reasons why EA prefers one model over the other, but I'd be surprised if the model that mobilizes fewer people is considered more impactful.

Another issue is that EA may target people who are smarter than average (at least smarter in very specific ways), but given that most people are average by definition, or are smarter along different dimensions, these "very smart people" may not be able to model other people correctly, or how things happen in a world where reality doesn't usually line up well with mathematical abstractions and theoretical thinking. I have found myself questioning whether the balance of intellectualism and pragmatism is tilted too far toward the former. Hopefully this doesn't lead to a situation where the EA community cares more about seeming smart and claiming the moral high ground than about actually doing good in the world.

Replies from: Olivia Addy
comment by Olivia Addy · 2022-07-12T16:00:29.542Z · EA(p) · GW(p)

Thanks! I agree with everything in your comment - and I really hope to see EA change in the future so that more 'average' people are able to contribute (I think we could have a lot to give!!)

comment by Weaver · 2022-07-11T15:00:25.964Z · EA(p) · GW(p)

You don't have to be smart, or a college graduate. What I'm attempting to do, at least for my own edification, is break things down into manageable chunks so that they can easily be explained.

I also enjoy looking at some of the EA numbers and shaking my head.

The table is small, but that's because we need people to help us build more extensions to the table. Come get some tools and help me make an addition.

comment by Sunny1 (Paige_Henchen) · 2022-07-15T15:56:36.789Z · EA(p) · GW(p)

In my view one of the most defining features of the EA community is that it makes most people who come into contact with it feel excluded and "less than," on several dimensions. So it's not just you!

Replies from: alene
comment by alene · 2022-07-15T17:40:16.482Z · EA(p) · GW(p)

Yes.

comment by Vaidehi Agarwalla (vaidehi_agarwalla) · 2022-07-14T18:16:34.540Z · EA(p) · GW(p)

Adding to the list of anecdotes, I previously wrote about somewhat similar experiences here [EA · GW] (and am coincidentally also a sociology major).

comment by 10jwahl · 2022-07-13T16:30:55.843Z · EA(p) · GW(p)

I do agree that EA takes a somewhat elitist approach that shuns people who don't come from an academic background -- which is a shame, because it definitely stifles creativity and innovation. Even though I am from an elite American institution, finding an EA career has been incredibly difficult because the community is quite closed off. In my experience, if you are not a researcher within one of their preselected fields, you are not worth their time. There is a significant drive to have positive impact paired with a significant lack of empathy. Again, these are just my experiences, but I know many people agree with this point. Definitely something for EA to look into from a culture perspective. Inclusivity and adding more worldviews can only add value. There is a place for everyone here.

comment by Ruairi · 2022-07-13T06:19:09.859Z · EA(p) · GW(p)

Yeah, I think a lot of people (myself included) feel a lot of the same things.

You might want to consider pursuing a career in operations or something with a more entrepreneurial vibe. There's generally a lack of such people in EA, so I think there's often a lot of impact to be had.

In my experience things like good judgement and grit matter a lot in these roles, and being super smart matters a lot less.

comment by KevinO · 2022-07-11T23:28:56.592Z · EA(p) · GW(p)

To the extent that I'm outside of the general population I think it's because of my giving, but I generally feel squarely inside the box of ordinary people. I can relate to not feeling as smart as many EAs.

I think there are numerous things a typical person could do to take EA ideas and try to concretely make the world a better place:

One action that I think is broadly available is to join some advocacy group for EA-related policies on some local / regional / national level like animal welfare, electoral reform, sane land use policy, or something else. You could try to introduce EA ideas or a focus on effective methods in to the discussion, if they are missing.

I think there's lots of information to be picked up from EA Global talks and some 80,000 Hours podcast episodes (not necessarily every episode!), and other EA podcasts [EA · GW] that have been posted previously to the forum.

You could also talk about EA with your friends or your workplace (though I have an ugh-field around talking about EA and an ugh-field around reflecting on this, so I can sympathize if you do too). Maybe you could influence / organize a work fundraiser while spreading the ideas of effective giving.

Similarly, talking about EA related books.

If available, going to your local EA meetup and contributing to a warm & welcoming environment.

You could also keep an eye out for ideas or programs that seem highly cost-effective and then try to signal-boost them on that basis.

Also things like answering questions where you can, with whatever time you have for it, to help ramp up others on these concepts you're excited about.

Replies from: Olivia Addy
comment by Olivia Addy · 2022-07-12T16:12:37.111Z · EA(p) · GW(p)

Some great ideas here, thank you! I've talked to my husband a lot about EA but like you do find it a bit challenging to branch out to others. I think this is something worth me working on though.

comment by Jacob (Jacob_Schaal) · 2022-07-11T20:58:35.448Z · EA(p) · GW(p)

I was not very aware of this topic until recently, when somebody wanted to discuss outreach towards non-academics with me. We are in contact with an adult education center (Volkshochschule) and might offer an Intro Fellowship there. It might be worth considering starting an EA group for high school graduates (no college), comparable to EA for Christians, but I haven't thought much about it, and such a group should probably be founded by people without tertiary education (a chicken-and-egg problem).

Replies from: Guy Raveh
comment by Guy Raveh · 2022-07-11T22:05:44.970Z · EA(p) · GW(p)

I kinda think Christians, or students of X university, have something positive that unites them and makes sense to approach them as a group - while "non college-educated people" do not.

I do think it's worth reaching out to these people, but I don't know if that would be the right framing. Maybe indeed someone in this group would be better equipped to think about this, as you said.

Replies from: shinybeetle
comment by shinybeetle · 2022-07-13T12:38:06.404Z · EA(p) · GW(p)

In Norway there's a local group for a county/state that doesn't really have a large university. That group has a bunch of farmers and tradespeople in it :)

comment by kuzay (akuzee) · 2022-07-11T19:37:17.825Z · EA(p) · GW(p)

Seeing a lot of great responses and content in the comments, I love it. You’re clearly not alone! Echoing what others have said, I like to frame the goal as “finding a way to be useful.” There are so many ways to be useful, and intelligence is just one input (keep in mind that smart people are probably especially good at Appearing Useful, especially online). Diligence, pragmatism, humility, sociability, plus a million other abilities are inputs too, and they’re basically all things you can improve at. Getting smarter is obviously helpful to being useful, but we can’t let it be the whole picture. Until we run out of opportunities to do good, there are opportunities for anyone to make a difference.

Replies from: Olivia Addy
comment by Olivia Addy · 2022-07-12T16:02:16.727Z · EA(p) · GW(p)

I think 'finding a way to be useful' is a great way to think about it, and something I'm going to consider going forward!

comment by Tilly P · 2022-07-29T10:14:54.630Z · EA(p) · GW(p)

I don't know if replying to this thread after a couple of weeks is against the forum rules, as I've not posted on the forum before, but I have followed EA from a distance for a few years and completely agree with the OP. Well done for being brave enough to write about this, because I have also felt similarly "dumb", while on paper I know that I shouldn't! I'm doing a PhD, but it is not in a directly EA-related field, and I have not been to the highest-ranking universities you mention. A lot of the very philosophical and computational stuff on here goes over my head. Ideally, I would like to find an EA-aligned role after my PhD, but I've accepted that this will either have to be a more operations-based role, rather than a research one, or maybe I will go down the earning-to-give route (again, once I am financially stable - another important point you raised). I think we need to stay as strong and confident in our own abilities as we can: staying curious about new topics while perhaps also making peace with not always understanding everything. What I like to think is that while e.g. a computer scientist may be amazing at machine learning, I may be better at writing or science communication, for instance. I don't mean to stereotype or denigrate anyone by saying this - it's just an example - but OP, you likely have many brilliant skills that would be an asset to EA, perhaps in a less traditional sense. I always remember reading that one of the highest-impact things you can do is be an assistant to the director of an organisation, as it will enable them to work more effectively. So there are ways of having an impact, even indirectly.

comment by EcstaticCompassion · 2022-07-16T02:23:06.905Z · EA(p) · GW(p)

This may seem off topic, but one thing I've "learned" from years of meditation practice is that no one "earns" or freely chooses anything, like their intelligence level, for example. Nor do they freely choose to work on improving their intelligence level. There is no pride or shame in any ability because no one chooses what happens in their life. Consciousness just sort of notices what happens and mistakenly applies agency to certain phenomena. . .or maybe not. . .what the hell am I talking about? :) The moral of the story is do what you can, how you can, to make the world the best place it can be. 

comment by Jeroen De Ryck · 2022-07-12T11:24:17.444Z · EA(p) · GW(p)

Thank you so much for writing this.

I feel a very similar way. Every so often I get that feeling and excitement again about doing so much good, and after reading some posts and listening to podcasts for a few days, I get incredibly depressed because I don't study at Oxford, I'm not good at mathematics, I struggle to make even an okay-ish cost-benefit analysis for very basic things, and I have no idea how to take all those seemingly complicated things like moral uncertainty into account. It's just exhausting.

But it has also taught me a lot of things that I previously thought about differently, less rationally (like nuclear power, organic farming, technology in general, ...). Except a lot of that came all at once, and it was so overwhelming that it made me feel very lost (and to this day still does). I'd love to do more good, but I feel that the only way of doing that is completely upending my life and leaving all my family and friends for something in a country far away that has a tiny chance of succeeding, but with very big rewards if it does.

I study geography, something I'm very interested in, but (un)fortunately there are not a lot of neglected existential problems in that field of science. I see a lot of other people writing about comparative advantage in the comment section here, but I don't really know where mine lies, or even how to figure that out. I'll admit that I'm scared the conclusion might be that I have to drop out of university, something I don't want to do. I could go study mathematics or physics, but I'd have an incredibly hard time there and I would not be happy for one second. But it probably does also mean I could have a bigger impact. Is it then worth it? For the world, probably. For me, no.

The EA group in my country is very small, and really only exists in the capital. A forum post [EA · GW] here explained how most students who are sympathetic to EA ideas haven't heard of it. These things gave me the idea to go around lecture halls in the beginning of the academic year and pitch EA to try and start a local group. I do think I'm half-decent at giving oral presentations and I quite enjoy it. But say a couple of people reach out, what then? How does that all work and where would the comparative advantage of our group lie? I don't have friends who are EA-sympathetic and I haven't really made any connections since I started following EA about a year and a half ago. So I'd have to start an organization that I don't fit in all by myself and somehow motivate people to join it and spend significant amounts of time and money on.

I have had some other ideas, but again no connections to make them work, or even to pitch them to, and no way of figuring out whether an idea is worth my time. EA has made me think and question a lot, but has failed to explain how to find answers to those questions. I'm sorry for this rather incoherent rant that talks only about myself. These thoughts have been on my mind for many months, but I've never really had the chance to express them to people who might understand. I hope some people here do.

Replies from: shinybeetle, Olivia Addy
comment by shinybeetle · 2022-07-13T13:13:12.720Z · EA(p) · GW(p)

This isn't what you intended by posting this, but I think it's useful to say it anyways.

You sound discouraged in the same way that I used to be before I was involved in the EA community. It can be pretty hard to see where you fit in by looking at the forum and 80k. Getting help to plan my career and getting involved in the community really changed all of that for me by giving me direction and peers to relate to.

Here are some things I really, really encourage you to do, which I think will be helpful:

  • Go to an EA conference. It doesn't matter if it's EAG or EAGx, just go and talk to as many people as you can. If you don't have the money to travel, ask for monetary support. If you're unsure whether you're "EA enough" to go, apply and let the conference organisers decide for you.

  • Get some help making a rough career plan, or steps to gain enough knowledge to make that plan. EA knowledge is a lot to take in. My country's EA group had people who were willing to learn about me and help me figure out what my options were. It really helped me find clarity in what I could already plan/decide, and what I needed to gather more information about.

Sorry that this is a bit of a mess, but I hope that it's at least somewhat helpful.

Replies from: Jeroen De Ryck
comment by Jeroen De Ryck · 2022-07-13T19:37:53.044Z · EA(p) · GW(p)

Don't worry, it is a fine answer and probably has more structure than what I wrote, so good job on that :D

I'm going to an EA meetup with the few people who are involved in EA in my country for the first time, and I'm very much looking forward to what they have to say. Thanks for the reply!

comment by Olivia Addy · 2022-07-12T16:09:51.927Z · EA(p) · GW(p)

Thank you for this comment! It's really great to know I'm not alone with this - and I hope you start to find your way, I know how confusing it can feel to be completely lost. I'd love to connect with you. I will send a DM!

comment by Phil Tanny · 2022-08-26T08:20:33.190Z · EA(p) · GW(p)

But I feel like I don't fit, because frankly, I'm not smart enough.

But you are smart enough, because you correctly perceived that any ideology primarily by and for intellectual elites is not scalable to the degree necessary to create the required change.

Don't worry about everyone else's fancy-sounding intellectual analysis. Most of that is just career promotion anyway, or in my case, a lot of ego horn honking.

Don't worry about fitting in. We (humanity) are mostly insane anyway, so fitting in isn't always such a great plan. Be yourself, and let the chips fall where they may.

Find a contribution that you'd like to make, and do your best to make it. Keep it simple, and make it happen.