Posts

Where is the best place to post butterfly ideas? 2022-08-30T19:26:38.604Z
What is the journey to caring more about 1) others and 2) what is really true even if it is inconvenient? 2022-07-16T11:31:39.058Z
Is it possible for EA to remain nuanced and be more welcoming to newcomers? A distinction for discussions on topics like this one. 2022-07-15T07:03:55.356Z
Sophia's Shortform 2022-05-27T00:59:30.311Z

Comments

Comment by Sophia on The Multiple Butterfly Effect · 2022-09-11T18:09:49.134Z · EA · GW

I liked this commentary even if I disagreed with a lot of the bottom line conclusions. Since we have an inferential gap that could be quite large, I don't expect everything you say to make sense to me.

You are probably directionally correct so I have strong upvoted this to encourage you to continue writing.

I don't have the energy right now to get into the object-level, but feel free to share future draft posts as your thoughts develop. If I have a spare moment, I'd be very happy to share feedback on them.

(all good humor tends to be pointing to some angle of the truth that needs time to become nuanced enough to be more widely legible)

Comment by Sophia on Does Academic Over Thinking Obscure That Which Most Requires Our Attention? · 2022-08-29T23:30:26.574Z · EA · GW

I strongly agree. I think this question getting downvoted reveals everything wrong with the EA movement. I am thinking it might be time to start a new kind of revolution of compassion, patience and rationality. 🤣

What do you think?

Comment by Sophia on Does Academic Over Thinking Obscure That Which Most Requires Our Attention? · 2022-08-29T01:21:54.897Z · EA · GW

I think that you are pointing to an important grain of truth.

I think that crossing inferential gaps is hard.

Academic writing is one medium. I think that facial expressions carry a tonne of information that is hard to capture in writing but can be captured in a picture. To understand maths, writing is fine. To understand the knowledge in people's heads, higher-fidelity mediums than writing (like video) are better.

Comment by Sophia on Is it possible for EA to remain nuanced and be more welcoming to newcomers? A distinction for discussions on topics like this one. · 2022-08-21T16:07:51.818Z · EA · GW

This was so heartwarming to read 😊

Comment by Sophia on EA can sound less weird, if we want it to · 2022-08-18T16:15:38.624Z · EA · GW

tl;dr:

  • I am not sure that the pressure on community builders to communicate all the things that matter is having good consequences.
  • This pressure makes people try to say too much, too fast.
  • Making too many points too fast makes reasoning less clear.
  • We want a community full of people who have good reasoning skills.
  • We therefore want to make sure community builders are demonstrating good reasoning skills to newcomers.
  • We therefore want community builders to take the time they need to communicate the key points.
  • This sometimes realistically means not getting to all the points that matter.

I completely agree that you could replace "jargon" with "talking points".

I also agree with Rohan that it's important not to shy away from getting to the point if you can make that point in a well-reasoned way.

However, I actually think that less pressure to communicate "all the things that matter" could be quite important for improving the epistemics of people who are new to the community. At least, I think there needs to be less pressure to communicate all the things that matter all at once.

The Sequences are long for a reason. Legible, clear reasoning is slow. I think too much pressure to get to every bottom line in a very short time makes people skip steps. This means that not only are we failing to show newcomers what good reasoning processes look like, we are also being off-putting to people who want to think for themselves and aren't willing to make huge jumps that are missing important parts of the logic.

Pushing community builders to get to all the important key points (many bottom lines) may make it hard for newcomers to feel like they have permission to think for themselves and make up their own minds. Feeling rushed to a conclusion, and feeling like you must come to the same conclusion as everyone else, will always make clear thinking harder, no matter how important that conclusion is.

If we want a community full of people who have good reasoning processes, we need to create environments where good reasoning processes can thrive. I think this, like most things, is a hard trade-off and requires community builders to be pretty skilled or to have much less asked of them.

If it's a choice between effective altruism societies creating environments where good reasoning processes can occur and communicating all the bottom lines that matter, I think it might be better to focus on the former. I think it makes a lot of sense for effective altruism societies to be about exploration.

We still need people to execute. I think having AI-risk-specific societies, bio-risk societies, broad longtermism societies, poverty societies (and many other more conclusion-focused mini-communities) might help make this less of a hard trade-off (especially as the community grows and there is room for more than one effective-altruism-related society on any given campus). It is much less confusing to be rushed to a conclusion when that conclusion is well-labelled from the get-go (and effective altruism societies can then point interested people in the right direction to find out why certain people think certain bottom lines are sound).

Whatever the solution, I do worry that rushing people to too many bottom lines too quickly does not create the community we want. I suspect we need to ask community builders to communicate less (we maybe need to triage our key points more) in order for them to communicate those key points in a well-reasoned way.

Does that make sense?

Also, I'm glad you liked my comment (sorry for writing an essay objecting to a point made in passing, especially since your reply was so complimentary; clearly succinctness is not my strength, so perhaps other people face this trade-off much less than me :p).

Comment by Sophia on EA can sound less weird, if we want it to · 2022-08-14T01:37:38.659Z · EA · GW

Cool. I'm curious: how would this feeling change for you if you found out today that AI timelines are almost certainly less than a decade?

I'm curious because my intuitions change momentarily whenever a consideration pops into my head that makes me update towards AI timelines being shorter.

I think my intuitions change when I update towards shorter AI timelines because legibility, and the community-building strategy outlined above, takes longer to pay off. Managing reputation and goodwill seems like a good strategy if we have a couple of decades or more before AGI.

If we have time, investing in goodwill and legibility to a broader range of people than the ones who end up becoming immediately highly dedicated seems way better to me.

Legible high-fidelity messages are much more spreadable than less legible messages, but they still take more time to disseminate. Why? The simple bits of them sound like platitudes, and the interesting takeaways require too many steps in logic from the platitudes to go viral.

However, legible messages that require multiple steps in logic still seem like they might spread exponentially by word of mouth (just with a lower growth rate than simpler viral messages).

If AI timelines are short enough, legibility wouldn't matter in those possible worlds. Therefore, if you believe timelines are extremely short then you probably don't care about legibility or reputation (and you also don't advise people to do ML PhDs because by the time they are done, it's too late).

Does that seem right to you?

Comment by Sophia on Cause Exploration Prize: Distribution of Information Among Humans · 2022-08-12T01:55:35.715Z · EA · GW

Thanks for this analysis! I would be excited to see this cause area explored/investigated further.

Comment by Sophia on EA can sound less weird, if we want it to · 2022-08-11T15:07:37.268Z · EA · GW

Note: edited significantly for clarity the next day

Tl;dr: Weirdness is still a useful sign of sub-optimal community building. Legibility is the appropriate fix to weirdness. 

I know I used the terms "nuanced" and "high-fidelity" first but after thinking about it a few more days, maybe "legibility" more precisely captures what we're pointing to here?

My hunch that the advice "don't be weird" would lead community builders to be more legible now seems like the underlying reason I liked the advice in the first place. However, you've very much convinced me that you can avoid sounding weird by just not communicating any substance. Legibility seems to capture what community builders should do when they sense they are being weird and alienating.

EA community builders probably should stop and reassess when they notice they are being weird; "weirdness" is a useful smoke alarm for a lack of legibility. They should then aim to be more legible. To be legible, they probably need to pick their battles strategically on which claims they prioritize justifying to newcomers. They are legibly communicating something, but they're probably not making alienating, uncontextualized claims they can't back up in a single conversation.

They are also probably using clear language that the people they're talking to can understand.

I now think the advice "make EA more legible" captures the upside without the downsides of the advice "make EA sound less weird". Does that seem right to you?

I still agree with the title of the post. I think EA could and should sound less weird, by prioritizing legibility at events that newcomers are encouraged to attend.

Noticing and preventing weirdness by being more legible seems important as we get more media attention and brand lock-in over the coming years. 

Comment by Sophia on Undergrads, Consider Messaging Your Friends About Longtermism or EA Right Now · 2022-08-10T00:30:54.606Z · EA · GW

I definitely appreciate the enthusiasm in this post; I'm excited about Will's book too.

However, for the reasons Linch shared in their comment, I would recommend editing this post a little.

I think it is important to only recommend the book to people who we know well enough to judge that they would probably get a lot out of a book like this one, and to whom we can legibly articulate why we think they'd get a lot out of it.

A recommended edit to this post

I recommend editing the friends bit to something like this (in your own words, of course; my words always lack succinctness, so I recommend cutting as you see fit):

If you feel comfortable reaching out to a couple of friends who you think might find What We Owe the Future a good read, now might be a particularly excellent time to give them this book recommendation.

Now seems like a particularly good time to recommend the book to friends who might be interested: it just launched, so it is getting a lot more media attention than it likely will in the future. That means they are more likely to be reminded of your recommendation, and so more likely to actually read the book, if you recommend it now rather than later.

Having said that, for certain people it might be better to wait until after you have read the book yourself, or until you've come across something they'd find interesting, so it's probably best to make a judgement call based on the particular friend you are considering recommending the book to.

Other thoughts on who to recommend this book to and how to do it in a sensitive way that leaves room for them to say no if they don't feel it's their vibe

I think it's particularly good to recommend the book to people to whom you can explain clearly why you think they would get a lot out of the book. I also think it could be very good to explicitly encourage them to think critically about it and send you their critical thoughts or discuss their critical thoughts with you in person. If you send it to people you know well enough to have enough context to tell they would probably enjoy the book, you're almost definitely going to genuinely want to hear what they have to say after reading it so that's a double bonus that you've started a conversation! 

I think requests like this can come off as pushy if the person you are recommending the book to doesn't understand why you think they should read it. By making the reasoning clear and also leaving room in the message for them to just not follow your recommendation (by not assuming they'll necessarily read it just because you think they'd enjoy it), a recommendation can give a good vibe instead of a bad one. Basically, it's important to vibe it and only recommend the book in ways that leave everyone feeling good about the interaction, whether or not they read the book.

For example, you could say something like (obviously find your own words, this is definitely a message written with my vibe so it would probably be weird to copy it exactly word-for-word because it probably wouldn't sound like you and therefore wouldn't sound as genuine):

I am so excited about this book being released because I think that future generations matter a tonne and Will MacAskill thinks a little differently to other people who have thought hard about this so I'm looking forward to seeing what he has to say. I thought I'd send you a link too because from previous conversations we've had, I get the sense that you'd enjoy a book detailing someone's thinking on what we can do to benefit future generations. 

I'd be really keen to discuss it with you if you do end up reading it, and especially keen to hear any pushback you had, because I'm a little in my bubble so I won't necessarily be able to see the water I'm swimming in as well as you could.

Let me know what you think about my suggestions (and feel free to push back on anything I've said here). I think I could easily change my mind on the above; I'm not at all confident in my recommendations, so I'd be keen to hear what you or anyone else thinks.

Comment by Sophia on EA can sound less weird, if we want it to · 2022-08-09T15:33:24.327Z · EA · GW

Goal of this comment: 

This comment fills in more of the gaps I see that I didn't get time to fill out above. It fleshes out more of the connection between the advice "be less weird" and "communicate reasoning over conclusions".

  • Doing my best to be legible to the person I am talking to is, in practice, what I do to avoid coming across as weird/alienating.
  • There is a trade-off between contextualizing and getting to the final point.
  • We could be in danger of never risking saying anything controversial, so we do need to encourage people to still get to the bottom line after giving the context that makes it meaningful.
  • Right now, we seem to often state an insufficiently contextualized conclusion in a way that seems net negative to me.
  • We cause bad impressions.
  • We cause bad impressions while communicating points I see as less fundamentally important to communicate.
  • Communicating reasoning/our way of thinking seems more important than the bottom line without the reasoning.
  • AI risk can often take more than a single conversation to contextualize well enough for it to move from a meaningless topic to an objectionable claim that can be discussed with scepticism but still some curiosity.
  • I think we're better off trying to get community builders to be more patient and jump the gun less on the alienating bottom line.
  • The soundbite "be less weird" probably does move us in a direction I think is net positive.


I suspect that most community builders will lay the groundwork to more legibly support conclusions when given advice like: "get to the point if you can, don't beat around the bush, but don't be weird and jump the gun by saying something without the context the person you are talking to needs to make sense of what you are saying."



I feel like making arguments about stuff that is true is a bit like sketching out a maths proof for a maths student. Each link in the chain is obvious if you do it well, at the level of the person you are taking through the proof, but if you start with the final conclusion, they are completely lost.

You have to make sure they're with you every step of the way because everyone gets stuck at a different step. 

You get away with stating your conclusion without the proof in maths because there is a lot of trust that you can back up your claim (the worst thing that happens is the person you are talking to loses confidence in their ability to understand maths if you start with the conclusion before walking them through it at a pace they can follow). 

We don't have that trust with newcomers until we build it. They won't suspect we're right unless we can show we're right in the conversation we made the claim in.

They'll lose trust, and therefore interest, very fast if we make a claim that requires at least three months of careful thought to come to a nuanced view on. AI risk takes a tonne of time to develop inside views on. There is a lot of deference because it's hard to think the whole argument through for yourself and explore various objections until you feel like it's your view and not just something dictated to you. Deference is weird too (and gets a whole lot less weird when you just admit that you're deferring a bit and explain what exactly made you trust the person you are deferring to to come to reasonable views in the first place).

I feel like "don't sound weird" ends up translating to "don't say things you can't back up to the person you are talking to". In my mind, "don't sound weird" sounds a lot like "don't make the person you are talking to feel alienated", which in practice means "be legible to the person you are talking to".

People might say much less when they have to make the person they are talking to understand all the steps along the way, but I think that's fine. We don't need everyone to get to the bottom line. It's also often worse than neutral to communicate the bottom line without everything above it that makes it reasonable. 

Ideally, community builders don't go so glacially slowly that they are at a standstill, never getting to any bottom lines that sound vaguely controversial. But while we've still got a decent mass of people who know the bottom line and enough of the reasoning paths that can take people there, it seems fine to increase the number of vague messages in order to decrease the number of negative impressions.

I still want lots of people who understand the reasoning and the current conclusions; I just don't think starting with an unmotivated conclusion is the best strategy for achieving this. I think "don't be weird", plus some other advice to stop community builders from literally stagnating and never getting to the point, seems much better than the current status quo.

Comment by Sophia on EA can sound less weird, if we want it to · 2022-08-09T07:02:46.223Z · EA · GW

tl;dr: 

  • When effective altruism is communicated in a nuanced way, it doesn't sound weird.
  • Rushing to the bottom line and leaving it unjustified means the person has neither a good understanding of the conclusion nor of the reasoning.
  • I want newcomers to have a nuanced view of effective altruism
  • I think newcomers only understanding a rushed version of the bottom line without the reasoning is worse than them only understanding the very first step of the reasoning.
  • I think it's fine for people to go away with 10% of the reasoning.
  • I don't think it's fine for people to go away with the conclusion and 0% of the reasoning.
  • I want to incentivize people to communicate in a nuanced way rather than just quickly rush to a bottom line they can't justify in the time they have.
  • Therefore, I think "make EA ideas sound less weird" is much better than no advice.

 

Imagine a person at MIT comes to an EA society event, has some conversations about AI, and then never comes back. Eventually, they end up working at Google and making decisions about DeepMind's strategy.

Which soundbite do I want to have given them? What is the best 1 dimensional (it's a quick conversation, we don't have time for the 2 dimensional explanation) message I could possibly leave this person with? 

Option 1: "an AI might kill us all" (they think we think a Skynet scenario is really likely and we have poor reasoning skills because a war with walking robots is not that reasonable)
Option 2: "an AI system might be hard to control and because of that, some experts think it could be really dangerous" (this statement accurately applies to the "accidentally breaks the child's finger" case and also world-ending scenarios, in my mind at least, so they've fully understood my meaning even if I haven't yet managed to explain my personal bottom line)

I think they will be better primed to make good decisions about safe AI if I focus on trying to convey my reasoning before I try and communicate my conclusion. Why? My conclusion is actually not that helpful to a smart person who wants to think for themselves without all the context that makes that conclusion reasonable. If I start with my reasoning, even if I don't take this person to my bottom line, someone else down the road who believes the same thing as me can take them through the next layer up. Each layer of truth matters. 

If it sounds weird, it's probably because I've not given enough context for them to understand the truth and therefore I haven't really done any good by sharing that with them (all I've done is made them think I'm unreasonable). 

My guess is that this person who came to one EA event and ended up being a key decision-maker at DeepMind is going to be a lot less resistant when they hear about AI alignment in their job if they heard option 2 and not option 1. Partly because the groundwork for the ideas was better laid. Partly because they trust the "effective altruism" brand more, because they have the impression that the effective altruism community, associated with "AI alignment" (an association that could stick if we keep going the way we've been going), is full of reasonable people who think reasonable things.
 
What matters is whether we've conveyed useful truth, not just technically true statements. I don't want us to shy away from communicating about AI, but I do want us to shy away from communicating about AI in a confronting way that is counterproductive to giving someone a very nuanced understanding later on. 

I think the advice "make AI sound less weird" is better than no advice because I think that communicating my reasoning well (which won't sound weird because I'll build it up layer by layer) is more important than communicating my current bottom line (leaving an impression of my bottom line that has none of the context attached to make it meaningful, let alone nuanced and high-fidelity) as quickly as possible. 

PS: I still don't think I've actually done a good job of laying out the reasoning for my views clearly here, so I'm going to write a post at some point (I don't have time to fix the gaps I see now). It is helpful for you to point out the gaps you see explicitly, so I can fill them in future writing if they actually can be filled (or change my mind if not).

In the meantime, I wanted to say that I've really valued this exchange. It has been very helpful for forcing me to see if I can make my intuitions/gut feeling more explicit and legible. 

Comment by Sophia on Most Ivy-smart students aren't at Ivy-tier schools · 2022-08-08T00:35:36.653Z · EA · GW

Lucky people maybe just have an easier time doing anything they want to do, including helping others, for so many reasons.

I didn't go to an elite university, but I am exceptionally lucky in so many extreme ways (an extremely loving family, good friends, citizenship of a rich country, being good at enough stuff to feel valued throughout my life, including at work, etc.).

While there is a counterfactual world where of course I could have put myself in a much worse position, it would have been impossible for most people to have it as good as I have it even if they worked much harder than me their entire lives.

Because of my good luck, it is much easier for me to think about people (and other sentient beings) beyond my immediate friends and family. It is very hard to have a wide moral circle when you, your friends and your family are under real threat. My loved ones are not under threat and haven't ever been. I care a tonne about the world. Clearly this is largely due to luck. I have no idea what I would have become without a life of good fortune.

I think it makes sense to try and find people who are in a position to help others significantly even though it is always going to be largely through luck. Things just are incredibly unfair. If they were fairer, effective altruism would be less needed.

It's probably easier to find people who are exceptionally lucky at elite universities.

I do think it makes sense to target the luckiest people to use their luck as well as possible to make things better for everyone else.

The challenge is doing this while making sure a much wider variety of people can feel a sense of belonging here in this community.

I do think we have to be better at making it clear that many different types of people can belong in the effective altruism movement.

The group of people who should feel welcome, should they stumble upon us and want to contribute, is much, much larger than the group to whom we should spend scarce resources promoting the idea that lucky people can help others enormously.

I think part of the answer comes from the people who don't fit the mould who stay engaged anyway because they resonate with the ideas. Because they care a tonne about helping others. These people are trailblazers. By being in the room, they make anyone else who walks in the room who is more like them and less like the median person in the existing community feel like this space can be for them too.

I don't think this is the full answer. Other social movements have had great successes from making those with less luck notice they have much more power than they sometimes feel they do. I'm not sure how compatible EA ideas are with that kind of mass mobilisation though. This is because the message isn't simple, so when it's spread en masse, key points have a tendency to get lost.

I do think it's fair to say that due to comparative advantage and diminishing returns, there is a tonne of value to building a community of people who come from all walks of life who have access to all sorts of different silos of information.

Regardless, I think it's incredibly important not to mistake the focus on elite universities for a judgement about whether the people there deserve to be there.

I think it is actually purely a judgement on what they might be able to do from that position to make things better for everyone else.

Effective altruism is about working out how to help others as much as we can.

If lucky people can help others more, then maybe we want to focus on finding lucky people to make all sentient beings luckier for the rest of time. If less lucky people can do more on the margin to help others effectively, then we should focus our efforts there.

This is independent of value judgements on anyone's intrinsic worth. Everyone is valuable. That's why we all want to help everyone as much as we can. Hopefully we can do this and make sure that everyone in our community still feels valued. This is hard because people naturally don't feel as valued when we all tie our sense of self-worth to our instrumental value, even though instrumental value is usually pretty much entirely luck. This is a challenge we maybe need to rise to, rather than a tension we can just accept, because a healthy, happy effective altruism community where everyone feels valued will just be more effective at helping others.

I think it's pretty clear that everyone can contribute (e.g. extreme poverty still exists and a small amount of money still sadly goes a very long way). I know I can contribute much, much less than many others, but being able to contribute something is enough. I don't need to contribute more than anyone else to still be a net-positive member of this community.

We're all on the same team. It's a good thing if other people are able to do much more than me. If luckier people can do more, then I'm glad that they are the ones that are being most encouraged to use their luck for good. If those with less luck want to contribute what they can, I hope they can still feel valued regardless.

Hopefully we can all feel valued for being a part of something good and contributing what we can independent of whether luckier people are, due to their luck, able to do more (and therefore might be focused on more in efforts to communicate this community's best guesses on how to help others effectively).

Comment by Sophia on Sophia's Shortform · 2022-08-05T09:36:55.491Z · EA · GW

The reputation of the effective altruism society on each campus seems incredibly important for the "effective altruism" brand among key audiences. E.g. future DeepMind team leaders could come out of MIT, Harvard, Stanford, etc.

Are we doing everything we could to leave people with an honest but still good impression? (whether or not they seem interested in engaging further)

Comment by Sophia on EA can sound less weird, if we want it to · 2022-07-29T12:49:22.650Z · EA · GW

Fair enough. 

tl;dr: I now think that EA community-builders should present ideas in a less weird way when it doesn't come at the expense of clarity, but maybe the advice "be less weird" is not good advice because it might make community-builders avoid communicating weird ideas that are worth communicating. 

You probably leave some false impressions either way

In some sense (in the sense I actually care about), both statements are misleading.  

I think that community builders are going to convey more information, on average, if they start with the less weird statement. 

Often inferential gaps can't be crossed in a conversation.

 That missing understanding will always get filled in with something inaccurate (if they had an accurate impression, then there is no inferential gap here). The question is, which misconceptions are better to leave someone with?

You've outlined how "an AI system might be hard to control and because of that, some experts think it could be really dangerous" could be misunderstood. I agree that people are unlikely to think you mean "the AI system will kill us all" without further elaboration. They will attach more accessible examples to the vaguer statement in the meantime. It is unlikely they will attach one really specific wrong example, though; there is ambiguity there, and the uncertainty left from that ambiguity is much better than a strongly held false impression (if those are the two choices; ideally, the inferential gap gets closed and you get a strongly held true impression).

People who are hearing the statement "the AI system will kill us all" without further context will still try and attach the most accessible examples they have to make the phrase make as much sense as possible to them. This tends to mean Skynet style walking robots. They'll also probably hypothesize that you don't have very good epistemics (even if this is not the language they'd use to describe it). They won't trust you to have good reasons to believe what you do because you've made an extraordinary claim without having laid out the case for it yet. These are false impressions too. They are also likely to stick more because the extra weirdness makes these first impressions much more memorable. 

Which impression do I prefer community builders leave newcomers with? 

I value community builders conveying the reasoning processes much more than the bottom line. I want newcomers to have the tools to come to reasonable conclusions for themselves (and I think giving newcomers the reasons why the EA community has its current conclusions is a good start). 

Giving newcomers a more accurate impression of a conclusion without giving them much context on that conclusion seems often worse than nothing. Especially since you often lose the trust of reasonable people when you make very surprising claims and can't back them up in the conversation (because that's too much of an inferential distance to cross in the time you have). 

Giving them an accurate impression of one of the reasons for a conclusion seems neutral (unaligned AI seems like it could be an x-risk because AI systems are hard to control). That isolated reason without further elaboration doesn't actually say that much, but I think it does lay the groundwork for a deeper understanding of the final conclusion "AI might kill us all" if future conversations happen down the road. 

My takeaways

After this discussion, I've changed my mind on "be less weird" being the right advice to get what I want.  I can see how trying to avoid being weird might make community builders avoid getting the point across.

Something like "aim to be as accurate as possible using language and examples the person you are talking to can understand" will probably still result in less weirdness. I'd be surprised if it resulted in community builders obscuring their point.

Comment by Sophia on Tradeoffs in Community Building · 2022-07-28T13:05:26.068Z · EA · GW

My feeling on this is that there is a distinction between how many people could become interested and how many people we have capacity for right now. The number of people with the potential to become engaged, develop a deep understanding of the ideas and how they relate to existing conclusions, and feel comfortable pushing back transparently on any conclusions they find less persuasive is much larger than the number of people we can actually engage this deeply.

I feel like a low barrier to entry is great when your existing membership is almost all really engaged people with really nuanced views. A newcomer can come in and, with the right mindset, the group can quickly cross inferential gaps with them as they arise, because everyone agrees on the fundamental principles (even if there might be disagreement on final cause prioritisation etc). Once a group is 50% newcomers though, it becomes extremely difficult to cross those inferential gaps, because conversations are constantly getting side-tracked from the relatively robust concepts and ideas that are foundational to so many of the conclusions (even though there is still plenty to debate once you're past the foundational premises).

I feel like the barrier to entry should shift depending on the current composition. When there are fewer newcomers, I think that it's good to have a low barrier to entry and put a tonne of effort into including pretty much anyone who is curious and buys into the idea that helping others is a worthwhile use of their efforts, and that maybe a scientific-method-y kind of way of approaching this could be good too.

Once a group has more newcomers than people who have a deep understanding of the existing ideas and concepts, then I think crossing inferential gaps is too hard for it to be productive to try to be inclusive to even more newcomers. At that point, I think prioritising whoever is reading the most and finding it the most natural, or whoever already understands the ideas, makes a little more sense (on the margin).

It's less of a question of who has the potential to understand things, and more a question of whether the group has the capacity to give them that understanding.

Comment by Sophia on rohinmshah's Shortform · 2022-07-28T05:43:50.810Z · EA · GW

Maybe I want silent upvoting and downvoting to be disincentivized (or commenting with reasoning to be more incentivized). Commenting with reasoning is valuable but also hard work. 

After 2 seconds of thought, I think I'd be massively in favour of a forum feature where any upvotes or downvotes count for more (e.g. double or triple the karma) once you've commented.[1]  

Just having this incentive might make more people try and articulate what they think and why they think it. This extra incentive to stop and think might possibly make people change their votes even if they don't end up  submitting their comments. 

  1. ^

    Me commenting on my own comment shouldn't  mean the default upvote on my comment counts for more though: only the first reply should give extra voting power (I'm sure there are other ways to game it that I haven't thought of yet but I feel like there could be something salvageable from the idea anyway). 
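
To make the weighting rule a bit more concrete, here is a rough sketch in Python. All of the names and data structures below are made up for illustration (the Forum's actual codebase presumably looks nothing like this), and the multiplier of 2 is just an example of "double or triple".

```python
# Hypothetical sketch of the proposed rule: a vote is worth more karma once the
# voter has commented on the post, but replying to your own comment doesn't count.

COMMENTER_MULTIPLIER = 2  # e.g. double (or triple) the karma once you've commented


def vote_karma(base_karma: int, voter_id: str, post_comments: list) -> int:
    """Return how much karma this voter's vote is worth on this post.

    `post_comments` is assumed to be a list of objects with `author_id` and
    `parent_author_id` attributes (None for top-level comments).
    """
    has_commented = any(
        comment.author_id == voter_id and comment.parent_author_id != voter_id
        for comment in post_comments
    )
    return base_karma * COMMENTER_MULTIPLIER if has_commented else base_karma
```

So, under this sketch, a strong upvote that would normally be worth 7 karma would count for 14 once the voter has left a comment (other than a reply to their own comment) on the post.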

Comment by Sophia on rohinmshah's Shortform · 2022-07-28T05:12:31.278Z · EA · GW

Hot take: strong-upvoting things that don't have great reasoning and that have conclusions I disagree with could be good for improving epistemics. At least, I think this gives us an opportunity to demonstrate common thinking processes in EA, and what reasoning transparency looks like, to people who are newer to the community. [1]

My best guess is that it also makes it more likely that quality thinking that diverges from established ideas happens in EA community spaces like the EA Forum.

 My reasoning is in a footnote in my comment here. 

  1. ^

    I'm aware that people on this thread might think my thinking processes and reasoning abilities aren't stellar,* but I still think my point stands. 

    *My personal view is that this impression would be less because I'm bad at thinking clearly and more because our views are quite different. 

    A large inferential distance means it's harder to diagnose epistemics accurately (but I'm  not exactly an unbiased observer when it comes to judging my own ability to think clearly).  

    This leads me to another hot take: footnotes within footnotes are fun.
     

Comment by Sophia on Why this isn’t the “most important century” · 2022-07-28T00:26:03.717Z · EA · GW

Thank you very much for writing this. I think it is a valuable contribution to discussion.

I think there is something to a lot of the points you raised, though I think that this piece isn't quite there yet. I've put some quick thoughts on how to improve the tone (and why I didn't comment on anything substantive) in a footnote, as well as why I've strong upvoted anyway. [1]

I have to admit that I still find Holden's piece more compelling. Nonetheless, I think you posting this was a very valuable thing to do! 

I think the way new ideas develop into their strongest versions is that people post their thinking as-is (exactly like you did) and then others comment with well-argued pushback (rather than downvoting without commenting). 

I really hope you and others continue to develop these points. I can really imagine some of the questions you raised could be turned into something much more nuanced and much stronger.

This piece seems like a work-in-progress. It is a beginning to a discussion that I hope continues.

Comparing your piece to Holden's is not fair at all, and that shouldn't be the bar used to judge whether to upvote or downvote less well-established ideas in this community.

 Holden has been thinking full-time about related topics for a very long time. He also has many people he can send his draft writing to who have also been thinking about this full-time to point out where he's being over-confident or where the flaws in his logic are, before he publicly posts his writing. 

I would like to see more pieces like this one that push back on ideas that are well-established within this community, with more charitable and well-argued comments and fewer silent downvotes.



 

  1. ^

I think this piece is often a little unkind to the original piece it critiques, a piece I think is very good. For example, using words like "myopic" seems unnecessary to any of your substantive points.

    I suspect the unkind tone was one of the reasons it got downvoted. Softening this a bit would improve the piece.  However, more charitable tones take much more work and writing is a lot of work already! 

     I also don't think the arguments are yet strong enough for your conclusions. However, writing the strongest possible argument is extremely hard work and impossible to do alone. 

    It being very hard work to articulate reasoning rigorously is the reason why I'm not writing up exactly what I found less compelling: to do so well is extremely time consuming and takes a lot of energy for me (and for everyone, silently downvoting is a lot easier than writing a comment explaining why).  

    I am instead using my time to encourage others to put in the time and energy to comment on the substantive claims if they can, instead of downvoting without comment. If onlookers don't have time to comment, I think it is better to abstain from voting than to downvote so others can see the post and write their pushback.  

Abstaining from voting if you don't have time to comment with your reasoning for downvoting seems better than downvoting (even if you think the piece was not as compelling as the piece it criticises). Also, commenting kindly, while pointing out the flaws in logic, seems very valuable because it gives future writers with similar intuitions to the OP something to build on.

    I personally hope for more development of ideas in EA spaces generally. 

    Putting unpolished ideas out there seems valuable so the sorts of conversations that allow for new ideas to be built on can begin. 

    Ideas don't start fully formed. They need a collaborative effort to develop and form into their strongest version. 

    This post doesn't have to be perfect to contribute to idea-space and move us all closer to a more nuanced understanding of our world. It can throw an idea out there in an unpolished, unfinished and currently not fully compelling to me form. 

    Others who have the same intuition can help develop it into something much stronger on draft 2 that I, and others, find more compelling. I want that process to happen. Therefore, I want more pieces like this with intelligent and kind comments pushing back on the biggest logical flaws on the EA forum frontpage. 

    Then new pieces, draft 2 of these ideas, can be written taking into account the previous comments. 

    Others comment again. New pieces are written again. 

    This is the mechanism I see for us incubating more new good ideas that go against our currently established ones.

     Starting such discussions is very valuable and neglected on the margin in conversation spaces in this community (like this forum). This is why I felt it was very appropriate to strong upvote this piece. It adds value on the margin to have less well-established ideas entertained and discussed. Therefore I want it to be engaged with more. Therefore I strong upvoted. 


     

Comment by Sophia on Making Effective Altruism Enormous · 2022-07-26T03:14:50.803Z · EA · GW

I doubt anyone disagrees with either of our above two comments. 🙂

I have just noticed that when people focus on growing faster, they sometimes push for strategies that I think do more harm than good because we all forget the higher-level goals mid-project.

I'm not against a lot of faster growth strategies than currently get implemented.

I am against focusing on faster growth because the higher level goal of "faster growth" makes it easy to miss some big picture considerations.

A better higher-level goal, in my mind, is to focus on fundamentals (like scope insensitivity, cause neutrality, or the Pareto principle applied to career choice and donations) over conclusions.

I think this would result in faster growth with much less of the downsides I see in focusing on faster growth.

I'm not against faster growth, I am against focusing on it. 🤣

Human psychology is hard to manage. I think we need to have helpful slogans that come easily to mind because none of us are as smart as we think we are. 🤣😅 (I speak from experience 🤣)

Focus on fundamentals. I think that will get us further.

Comment by Sophia on Making Effective Altruism Enormous · 2022-07-26T03:06:39.846Z · EA · GW

Agreed.

Comment by Sophia on Making Effective Altruism Enormous · 2022-07-26T03:06:13.736Z · EA · GW

We don't need everyone to have a 4-dimensional take on EA.

Let's be more inclusive. No need for all the moral philosophy for these ideas to be constructive.

However, it is easy to give an overly simplistic impression. We are asking some of the hardest questions humanity could ask. How do we make this century go well? What should we do with our careers in light of this?

Let's be inclusive, but slowly enough to give people a nuanced impression, and slowly enough to provide some social support to people questioning their past choices and future plans.

Comment by Sophia on Making Effective Altruism Enormous · 2022-07-26T02:47:10.810Z · EA · GW

A shorter explainer on why focusing on fast growth could be harmful:

Focusing on fast growth means focusing on spreading ideas fast. Ideas that are fast to spread tend to be 1-dimensional.

Many 1-dimensional versions of the EA ideas could do more harm than good. Let's not do much more harm than good by spreading unhelpful, 1-dimensional takes on extremely complicated and nuanced questions.

Let's spread 2-dimensional takes on EA that are honest, nuanced and intelligent, where people think for themselves.

The 2-dimensional takes are the ones that include the fundamental concepts (scope insensitivity, cause neutrality, etc.) that are most robust. Ones where people recognize that no-one has all the answers yet because these are hard questions, but also recognize that smart people have done some thinking, and that is better than no thinking.

Let's get an enormous EA sooner rather than later.

But not so quickly that we end up accidentally doing a lot more harm than good!

Comment by Sophia on Making Effective Altruism Enormous · 2022-07-26T01:04:35.659Z · EA · GW

Changing minds and hearts is a slow process. I unfortunately agree too much with your statement that there are no shortcuts. This is one key reason why I think we can only grow so fast.

Growing this community in a way that allows people to think for themselves in a nuanced and intelligent way seems necessarily a bit slow (so glad that compounding growth makes being enormous this century still totally feasible to me!).

Comment by Sophia on Making Effective Altruism Enormous · 2022-07-26T00:59:50.363Z · EA · GW

I agree that focusing on epistemics leads to conclusions worth having. I am personally skeptical of fellowships unless they are very focused on first principles and when discussing conclusions, great objections are allowed to take the discussion completely off topic for three hours.

Demonstrating reasoning processes well and racing to a bottom line conclusion don't seem very compatible to me.

Comment by Sophia on Making Effective Altruism Enormous · 2022-07-26T00:46:49.923Z · EA · GW

If it's a question of giving people either a sense of this community's epistemics or the bottom line conclusion, I strongly think you are doing a lot more good if you choose epistemics.

Every objection is an opportunity to add nuance to your view and their view.

If you successfully demonstrate great epistemics and people keep coming back, your worldviews will converge based on the strongest arguments from everyone involved in the many conversations happening at your local group.

Focus on epistemics and you'll all end up with great conclusions (and if they are different from the existing commonly held views in the community, that's even better: write a forum post together and let that insight benefit the whole movement!).

Comment by Sophia on Making Effective Altruism Enormous · 2022-07-26T00:30:50.210Z · EA · GW

You don't need to convince everyone of everything you think in a single event. 🙂 You probably didn't form your worldview in the space of two hours either. 😉

When someone says they think giving locally is better, ask them why. Point out exactly what you agree with (e.g. it is easier to have an in-depth understanding of your local context) and why you still hold your view (e.g. that wealth disparities between countries are so large that there is some really low-hanging fruit, like basic preventative measures against diseases like malaria, so you currently guess that you can still make more of a difference by donating elsewhere).

If you can honestly communicate why you think what you do, the reasons your view differs from the person you are talking to, in a patient and kind way, I think your local group will be laying the groundwork for a much larger movement of people who care deeply about helping others as much as they can with some of their resources. A movement that also thinks about the hard but important questions in a really thoughtful and intelligent way.

For me, the best way to change other people's minds is to keep in mind that I haven't got everything figured out and this person might be able to point to nuance I've missed.

These really are incredibly challenging topics that no-one in this community or in any community has fully figured out yet. It didn't always happen in the first conversation, but every person whose mind I have ever ended up changing significantly over many conversations added nuance to my views too.

Each event, each conversation, can be a small nudge or shift (for you or the other person). If your group is a nice place to hang out, some people will keep coming back for more talks and conversations.

Changing people's mind overnight is hard. Changing their minds and your mind over a year, while you all develop more nuanced views on these complicated but still important questions, is much more tractable and, I think, impactful.

Comment by Sophia on Making Effective Altruism Enormous · 2022-07-25T12:42:48.045Z · EA · GW

I agree with that. 🙂

I consider myself a part of the community and I am not employed in an EA org, nor do I intend to be anytime soon so I know that having an EA job or funding is not needed for that.

 I meant the capacity to give people a nuanced enough understanding of the existing ideas and thinking processes as well as the capacity to give people the feeling that this is their community, that they belong in EA spaces, and that they can push back on anything they disagree with.

It's quite hard to communicate the fundamental ideas, and how they link to current conclusions, in a nuanced way. Integrating people into any community in a way that avoids fracturing or losing the trust that community members have with each other (while still allowing new community members to push back on old ideas that they disagree with) takes time, and can only be done, I think, if we grow at a slow enough pace.

 

Comment by Sophia on Making Effective Altruism Enormous · 2022-07-25T02:06:19.942Z · EA · GW

I also think faster is better if the end size of our community stays the same. 👌🏼 I also think it's possible that faster growth increases the end size of our community too. 🙂 

Sorry if my past comment came across a bit harshly (I clearly have just been over-thinking this topic recently 😛)![1]

 I do have an intuition, which I explain in more detail below, that lots of ways of growing really fast could end up making our community's end size smaller. 😟

Therefore, I feel like focusing on fast growth is much less important than focusing on laying the groundwork to have a big end capacity (even if  it takes us a while to get there). 

It's so easy to get caught up in short-term metrics so I think bringing the focus to short-term fast growth could take away attention from thinking about whether short-term growth is costing us long-term growth.

 I don't think we're in danger of disappearing given our current momentum. 

I do think we're in danger of leaving a bad impression on a lot of people though and so I think it is important to manage that as well as we can. My intuition is that it will be easier to work out how to form a good impression if we don't grow very fast in a very small amount of time. 

Having said that, I'm also not against broad outreach efforts. I simply think that when doing broad outreach, it is really important to keep in mind whether the messages being sent out lay the groundwork for a nuanced impression later on (it's easy to spread memes that make more nuanced communication much harder).

However, I think memes about us are likely to spread if we're trying to do big projects that attract media attention, whether or not we are the ones to spread those broad outreach messages. 

I could totally buy that it's important to do our best to get out the broad outreach messages we think are most valuable, if we're going to get attention regardless of whether we strategically prepare for it.

I have concrete examples in my post here of what I call "campground impacts" (our impact through our influence on people outside the EA community). If outreach results in a "worse campground", then I think our community's net impact will be smaller (so I'm against it). If outreach results in a "better campground", then I think our community's net impact will be bigger (so I'm for it). If faster-growth strategies result in a better campground then I'm probably for them; if they result in a worse campground, then I'm probably against them. 😛
 

  1. ^

    I went back and edited it after Zach replied to more accurately convey my vibe but my first draft was all technicalities and no friendly vibes which I think is no way to have a good forum discussion! (sorry!)

    (ok, you caught me, I mainly went back to add emojis, but I swear emojis are an integral part of good vibes when discussing complex topics in writing 😛🤣: cartoon facial expressions really do seem better than no facial expressions to convey that I am an actual human being who isn't actually meaning to be harsh when I just blurt out some  unpolished thoughts in a random forum comment😶😔🤔💡😃😊)

Comment by Sophia on Making Effective Altruism Enormous · 2022-07-25T01:13:23.783Z · EA · GW

I actually think being welcoming to a broad range of people and ideas is really about being focused on conveying to people who are new to effective altruism that the effective altruism project is about a question. 

If they don't agree with the current set of conclusions, that is fine! That's encouraged, in fact. 

People who disagree with our current bottom line conclusions can still be completely on board with the effective altruism project (and decide whether their effective altruism project is helped by engaging with the community for themselves).

If, in conversations with new people, the message that we get across is that the bottom line is not as important as the reasoning processes that get us there, then I think we will naturally be more welcoming to a broader range of people and ideas in a way that is genuine.

Coming across as genuine is such an important part of leaving a good impression so I don't think we can "pretend" to be broader spectrum than we actually are. 

We can be honest about exactly where we are at, while still encouraging others to take a broader view than us, by distinguishing the effective altruism project from the community.

I think there is a tonne of value to making sure we are advocating for the project and not the community in outreach efforts with people who haven't interacted that much with the community. 

 If newcomers don't want to engage with our community, they can still care a tonne about the effective altruism project. They can collaborate with members of the community to the extent it helps them do what they believe is best for helping others as much as they can with whatever resources they are putting towards the effective altruism project.

 I'd love to see us become exceptionally good at going down tangents with new people to explore the merits of the ideas they have.  This makes them and us way more open to new ideas that are developed in these conversations.  It also is a great way to demonstrate how people in this community think to people who haven't interacted with us much before. 

How we think is much more core to effective altruism than any conclusion we have right now (at least as I see it). Showing how this community thinks will, eventually, lead people we have these conversations with to conclusions we'd be interested in anyway (if we're doing those conversations well).

Comment by Sophia on Making Effective Altruism Enormous · 2022-07-25T00:43:16.688Z · EA · GW

I agree that EA being enormous eventually would be very good. 🙂

However, I think there are plenty of ways that quick, short-term growth strategies could end up stunting our growth. 😓

I also think that being much more welcoming might be surprisingly significant due to compounding growth (as I explain below). 🌞

It sounds small, "be more welcoming", but a small change in angle between two paths can result in a very different end destination. It is absolutely possible for marginal changes to completely change our trajectory!

We probably don't want effective altruism to lose its nuances. I also think nuanced communication is relatively slow (because it is often best done, at least in part, in many conversations with people in the community)[1]. I think that we could manage a 30% growth rate and keep our community focused on a nuanced version of effective altruism, but we probably couldn't triple our community's size every year and stay nuanced.

However, growth compounds. Growing "only" 30% a year is not really that slow if we think in decades!

If we grow at a rate of 30% each year, then we'll be 500,000 times as big in 50 years as we are now.[2]
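(A quick back-of-the-envelope check on that figure, just compounding a 30% annual growth rate over 50 years:)

$$1.3^{50} \approx 498{,}000 \approx 5 \times 10^{5}$$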

Obviously growth will taper off (we're not going to grow exponentially forever), but I think the point at which it tapers off is a very big deal. That saturation point, the maximum community size we hit, matters more for EA ending up enormous than our growth rate in any single year. We can probably grow by focusing on "slow" growth strategies and still end up enormous relatively soon (30% is actually very fast growth, but it can be done without loads of the sorts of activities you might typically think of as fast-growth strategies).[3]

I actually think one of the biggest factors in how big we grow is how good an impression we leave on people who don't end up in our community. We will taper off earlier if we have a reputation for being unpleasant. We can grow at 30% with local groups doing a lot of the work to leave a lot of people with a great impression, whether or not those people decide to engage much with the community after forming that first impression.

If we have a reputation for being a lovely community, we're much more likely to be able to grow exponentially for a long time.

Therefore, I do think being really nice and welcoming is a really huge deal and more short-term strategies for fast growth that leave people confused and often feeling negatively about us could, in the end, result in our size capping out much earlier.

Whether or not we have the capacity for all the people who could be interested in effective altruism right now (only being able to grow so fast in a nuanced way limits our capacity), we still do have the capacity to leave more people with a good impression.

More of my thoughts on what could be important to focus on are here.

  1. ^

    Books and articles don't talk back and so can't explore the various miscellaneous thoughts that pop up for a person who is engaging with EA material, when that material is thought-provoking for them. 

  2. ^

     (I find the weird neatness of these numbers quite poetic 😻)

  3. ^

    This isn't to say I'm against broad outreach efforts. It's just to say that it is really important to lay the groundwork for a nuanced impression later on with any broad outreach effort.  

Comment by Sophia on EA is becoming increasingly inaccessible, at the worst possible time · 2022-07-24T01:00:51.380Z · EA · GW


Also, I think the original authors of a lot of the more fleshed-out ideas are much more nuanced in conversation than the messages that get spread.  

E.g. on 4: 80k has a long list of potential highest-priority cause areas that are worth exploring for longtermists, and Holden, in his 80k podcast episode and the forum post he wrote, says that most people probably shouldn't go directly into AI (and should instead build aptitudes). 

Nuanced ideas are harder to spread. Also, when people feel like they don't have permission in community spaces (in local groups or on the forum) to say under-developed things, the off-the-beaten-track stuff that has been mentioned but not fleshed out is much less likely to come up in conversation (or to get developed further). 

Comment by Sophia on EA is becoming increasingly inaccessible, at the worst possible time · 2022-07-24T01:00:35.167Z · EA · GW

This was such a great articulation of such a core tension in effective altruism community building. 

A key part of this tension comes from the fact that most ideas, even good ideas, will sound like bad ideas the first time they are aired. Ideas from extremely intelligent people and ideas that have potential to be iterated into something much stronger do not come into existence fully-formed. 

Leaving more room for curious and open-minded people to put forward their butterfly ideas without being shamed or made to feel unintelligent means having room for bad ideas with poor justification. Not leaving room for unintelligent-sounding ideas with poor justification selects for people who are most willing to defer. Having room to delve into tangents off the beaten track of what has already been fleshed out carries the danger that the side-track turns out to be a dead end (and most tangents will be), and no-one wants to stick their neck out and explore an idea that almost definitely is going nowhere (but should be explored anyway just in case). 

Leaving room for ideas that don't yet sound intelligent is hard to do while still keeping the conversation nuanced (but I think not doing it is even worse). 

Comment by Sophia on Leaning into EA Disillusionment · 2022-07-23T13:35:31.588Z · EA · GW

I think a big thing I feel after reading this is a lot more disillusioned about community-building. 

It is really unhealthy that people feel like they can’t dissent from more established (more-fleshed out?) thoughts/arguments/conclusions. 

Where is this pressure to agree with existing ideas and this pressure against dissent coming from? (some early thoughts to flesh out more 🤔)

This post isn’t the only thing that makes me feel that there is way too much pressure to agree and way too little room to develop butterfly ideas (that are never well-argued the first time they are aired, but some of which could iterate into more fleshed-out ideas down the road if given a bit more room to fly). 

My guess is there is a lot more uncertainty among people who fleshed out a lot of the ideas that now feel unquestionable, but uncertainty is hard to communicate. It is also much easier to build on someone else’s groundwork than to start an argument from scratch, making it easier to develop existing ideas and harder to get new ones, even potentially good ones, off the ground. 

I also think it’s incredibly important to make sure people who are doing community building feel fully comfortable saying exactly what they think, even if it isn’t their impression of the “consensus” view. No talking point should ever be repeated by someone who doesn’t buy into it, because that person can’t defend it if questioned; it’s not their view. My guess is the original talking points got written up as inspiration or prompts, but weren’t ever intended to be repeated without question and without buy-in. It’s such a big ask of people, though, to figure out that they don’t really believe the things they are saying. It is especially hard in a community that values legible thinking and intelligence so much and can be quite punishing to half-formed thoughts. There is often a long period between not fully buying in and having a really well-fleshed-out reason for disagreeing. This period, where you have to admit you feel you don’t really agree but you don’t know why yet, is hard to be honest about, especially in this community. I don’t think addressing this pressure has easy answers but I think addressing it seems pretty incredibly important anyway for creating healthy spaces with healthy epistemics. 

More thoughts and suggestions on how we maybe can improve

I agree with so many of this post's suggestions. I also have so many random half-baked ideas in so many half-finished google docs. 

Maybe get better at giving people who are not extremely dedicated or all-in good vibes/a good impression because we see them as future potential allies (even if we don't have the capacity to fully on-board them into the community)

It just does seem so important to make sure we have a culture where people really feel they don’t have to take all or leave all of effective altruism to have a voice in this community or to have a place in our spaces. The all-in and then all-out dynamic has a tonne of negative side-effects and I’ve definitely seen it a lot in the people I’ve known. 

I can see why the strategy of only accepting and focusing on all-in, extremely dedicated people makes sense given how capacity-constrained community building is and given that this community can probably only accommodate so many new people at a time (detailed and nuanced communication is so time-consuming, and within-community trust seems important but is hard to build with too much growth too fast).

It is challenging to create room for dissent and still have a high-trust community with enough common ground for us all to cohesively be in the same community. 

I'm not sure exactly what a feasible alternative strategy looks like, but it seems plausible to me that we can get better at developing allies to collaborate with who come by a local group and have a good time and some food for thought without this feeling like an all-in or all-out type engagement.

It seems good to me to have more allies who give us that sometimes nuanced but sometimes (naturally) missing-the-mark critique, who feel comfortable thinking divergently and developing ideas independently. Many of those divergent ideas will be bad (of course, that's how new ideas work) but some might be good, and when they're developed, those people who got good vibes from their local group will be keen to share them with us because we've left a good enough impression that they feel we'll really want to listen to them. I think there are ways of having a broader group of people who are sympathetic but who think differently, with whom we don't try to do the "super detailed and nuanced take on every view ever had about how to help others as much as possible" thing. I'm not sure exactly how to do this well though; messaging gets mixed up easily and I think there are definitely ways to implement this idea that could make things worse.

More separate communities for the thinking (the place where EA is supposed to be a question) and the doing (the place for action on specific causes/current conclusions)

Maybe it is also important to separate the communities that are supposed to be about thinking and the ones that are supposed to be about acting on current best guesses. 

Effective altruism groups sound like they sometimes are seen as a recruiting ground for specific cause areas. I think this might be creating a lot of pressure to come to specific conclusions. Building infrastructure within the effective altruism brand for specific causes maybe also makes it harder for anyone to change their minds. This makes effective altruism feel like much less of a question and much more like a set of conclusions. 

 Ideally, groups that are supposed to be about the question “how do we help others as much as possible?” should be places where everyone is encouraged to engage with the ideas but also to dissent from them and to digress for hours when someone has an intelligent objection. If effective altruism is not a question, then we cannot say it is. If the conclusions newcomers are supposed to adopt are pre-written, then effective altruism is not a question.  


Separating encouragement of the effective altruism project from the effective altruism community

Maybe we also need to get better at not making it feel like the effective altruism project and the effective altruism community are a package deal. Groups can be places where we encourage thinking about how big a part of our lives we want the effective altruism project to be and what our best guesses are on how to do that. The community is just one tool to help with the effective altruism project, to the extent that the EA project feels like something that group members want to have as a part of their lives. If collaborating with the community is productive, then good; if a person feels like they can do the EA project or achieve any of their other goals better by not being in the community, that should be strongly encouraged too! 


More articulation of the specific problems like this one

I’m so very glad you articulated your thoughts because I think it’s posts like this that help us better capture exactly what we don’t want to be and more of what we do want to be. There have been a few posts like this and I think each one is getting us closer (and we'll iterate on each other's ideas until we narrow down what we do and don't want the effective altruism community to be). 



(just for context, given I wrote way too many thoughts: I used to do quite a bit of community building and clearly have way too many opinions on stuff given my experience is so out-of-date; I am still pretty engaged with my local community; I care a lot about the EA project; a lot of my friends consider themselves engaged with the effective altruism community but many aren't, though everyone I'm close to knows lots about the EA community because they're friends with me and I talk way too much about my random interests; I have a job outside the EA community ecosystem; and I haven't yet been disillusioned, but I cheated by having over-confident friends who loudly dissented, which I think helped a tonne in me avoiding a lot of the feelings described here) 

 

Comment by Sophia on EA can sound less weird, if we want it to · 2022-07-23T09:35:34.369Z · EA · GW

That was a really clarifying reply! 

tl;dr 

  • I see language and framing as closely related (which is why I conflated them in my previous comment)
  • Using more familiar language (e.g. less unfamiliar jargon) often makes things sound less weird
  • I agree that weird ideas often sound weirder when you make them clearer (e.g. when you are clearer by using language the person you are talking to understands more easily)
  • I agree that it is better to be weird than to be misleading
  • However, weird ideas in plain English often sound weird because there is missing context, sounding weird does not help with giving a more accurate impression in this case
  • Breaking ideas into smaller pieces helps with sounding less weird while laying the groundwork to give an overall more accurate impression
  • Do you think a caveated version of this advice like "make EA ideas sound less weird without being misleading" is better than no advice?


I intuitively see vocabulary (jargon v. plain English) and framing as pretty closely related (there seems to almost be a continuum from "an entirely foreign language" to "extremely relatable and familiar language and framing of an idea"), so I conflated them a lot in my previous comment. They aren't the same though, and it's definitely less of a red flag to me if someone can't find a relatable framing than if they can't put their point into plain English. 

I think there is a clear mechanism for less jargon to make ideas sound less weird. When you find a way to phrase things more familiarly (with more familiar vocabulary, phrasing, framings, metaphors, etc.), the idea should sound more familiar and, therefore, less weird (so finding a more normal, jargon-free way of saying something should make it sound less weird). 

However, I can also see a mechanism for weird ideas to sound more weird with less jargon. If the idea is weird enough, then being clearer by using more familiar language will often make the idea sound weirder. 

If there is no perfect "translation" of an idea into the familiar, then I agree that it's better to actually communicate the idea and sound weird than to sound less weird due to creating a false impression. 

 "an AI system may deliberately kill all the humans" sounds really weird to most people partly because most people don't have the context to make sense of the phrase (likewise with your other plain English example). That missing context, that inferential gap, is a lot like a language barrier. 

The heuristic "make yourself sound less weird" seems helpful here.

I think it sounds a lot less weird to say "an AI system might be hard to control and because of that, some experts think it could be really dangerous". This doesn't mean the same thing as "an AI might kill us all", but I think it lays the groundwork to build the idea in a way  that most people can then better contextualize and understand more accurately from there.

Often, the easiest way to make EA ideas sound less weird is just to break up ideas into smaller pieces. Often EA ideas sound weird because we go way too fast and try to give way more information than a person can actually process at once. This is not actually that helpful for leaving accurate impressions. They might technically know the conclusion, but without enough steps to motivate it, an isolated, uncontextualized conclusion is pretty meaningless.

I think the advice "try to sound less weird" is often a helpful heuristic. You've convinced me that a caveat to the advice is maybe needed: I'd much rather we were weird than misleading! 

What do you think of a caveated version of the advice like "make EA ideas sound less weird without being misleading"? Do you think this caveated version, or something like it, is better than no advice?

Comment by Sophia on EA can sound less weird, if we want it to · 2022-07-22T05:59:26.102Z · EA · GW

Relevant example 😅🤣🤮😿
 

Comment by Sophia on EA can sound less weird, if we want it to · 2022-07-22T05:56:59.528Z · EA · GW

If few people actually have their own views on why AI is an important cause area, views they are able to translate into plain English, then few people should be trying to convince others in a local group that AI is a big deal. 

I think it is counterproductive for people who don't understand the argument they are making well enough to put it into plain English to parrot off some jargon instead.

If you can't put the point you are trying to express in language the person you are talking to can understand, then there is no point talking to that person about the topic.  

I agree that we shouldn't make up arguments we don't think are true just because they sound less weird, but I think EA could sound a lot less weird while saying things we believe. Using jargon in spaces where there aren't many people who are new to the community, where everyone understands 90% of the jargon, seems fine. But technical language used in groups of people who are unlikely to understand that jargon will obviously obscure meaning, making everyone less accountable for what they are saying because fewer people can call them out on it. This is not good for creating a community of people with good epistemics! 

It is much harder to notice that I'm parroting arguments I don't fully understand if I am using jargon, because I know, at least subconsciously, that fewer people here can call me out on my logic not adding up. 

If I say "one instrumental strategy for EA is to talk about this other related stuff" but couldn't figure out that another way to say it is, that tells me I don't actually know what point I was trying to make. For example, I could instead make the point by saying "it is useful to make EA cause areas relatable because people need something they find relatable before they start engaging deeply with it (deeply enough to end up understanding the more core reasons for thinking these causes are a big deal)."  

If I can't explain what I mean by mesa-optimisation in a way that makes sense to 90% of my audience (which, in local groups, is people who are relatively new to effective altruism and the whole AI thing) in the context of the point I'm trying to make, I probably don't really understand the point well enough for it to be better to use the word "mesa-optimisation" over talking about something I know well enough to say clearly and plainly, so everyone listening can poke holes in it and we can have the sorts of conversations that are actually good to have in a local group (relatable, a tiny bit on the edge of the Overton Window, but still understandable and well-reasoned rather than weird and obscure, which can end up making conversations more didactic, alienating and preachy). 

There is a difference between "can't" and "inconvenient". It is much lower effort to sound weird even if you understand something. However, I suspect that if you understand the point you are trying to make, with some effort and thought, you can find a less weird way to present it. If you can't, even after thinking about it, that makes me think you don't know what you're trying to say (and therefore shouldn't be saying it in the first place). 

Comment by Sophia on EA for dumb people? · 2022-07-22T04:56:01.937Z · EA · GW

lol, yeah, totally agree (strong upvoted).

I think in hindsight I might literally have been subconsciously indicating in-groupness ("indicating in-groupness" means trying to show I fit in 🤮 -- it feels so much worse in plain English for a reason; jargon is more precise but it is often less obvious what is meant, so it's often easier to hide behind it) because my dumb brain likes for people to think I'm smarter than I am. 

In my defense, it's so easy, in the moment, to use the first way of expressing what I mean that comes to mind. 

I am sure that I am more likely to think of technical ways of expressing myself because technical language makes a person sound smart and sounding smart gets socially rewarded. 

I so strongly reflectively disagree with this impulse but the tribal instinct to fit in really is so strong (in every human being) and really hard to notice in the moment. 

I think it takes much more brain power to find the precise and accessible way to say something so, ironically, more technical language often means the opposite of the impression it gives.

This whole thing reminds me of the Richard Feynman take that if you can't explain something in language everyone can understand, that's probably because you don't understand it well enough. I think that we, as a community, would be better off if we managed to get good at rewarding more precise and accessible language and better at punishing unnecessary uses of jargon (like here!!!).[1] 

I kind of love the irony of me having clearly done something that I think is a pretty perfect example of exactly what I, when I reflect, believe we need to do a whole lot less of as a community 🤣

  1. ^

    I think it's also good to be nice on the forum and I think Lorenzo nailed this balance perfectly. Their comment was friendly and kind, with a suggested replacement term, but still made me feel like using unnecessary jargon was a bad thing (making it feel like something I shouldn't have done, which will likely make my subconscious less likely to instinctively want to use unnecessary jargon in the future 👌).

Comment by Sophia on It's OK not to go into AI (for students) · 2022-07-22T03:58:39.552Z · EA · GW

I strong upvoted this because:
1) I think AI governance is a big deal (the argument for this has been fleshed out elsewhere by others in the community) and 
2) I think this comment is directionally correct beyond the AI governance bit even if I don't think it quite fully fleshes out the case for it (I'll have a go at fleshing out the case when I have more time but this is a time-consuming thing to do and my first attempt will be crap even if there is actually something to it). 

I think that strong upvoting was appropriate because:
1) stating beliefs that go against the perceived consensus view is hard and takes courage
2) the only way the effective altruism community develops new good ideas is if people feel they have permission to state views that are different from the community "accepted" view. 

I think some example steps for forming new good ideas are:
1) someone states, without a fully fleshed out case, what they believe
2) others then think about whether that seems true to them and begin to flesh out reasons for their gut-level intuition
3) other people pushback on those reasons and point out the nuance
4) the people who initially have the gut-level hunch that the statement is true either change their minds or iterate their argument so it incorporates the nuance that others have pointed out for them. If the latter happens then,
5) More nuanced versions of the arguments are written up and steps 3 to 5 repeat themselves as much as necessary for the new good ideas to have a fleshed-out case. 

Comment by Sophia on On funding, trust relationships, and scaling our community [PalmCone memo] · 2022-07-19T01:21:59.363Z · EA · GW

I think a necessary condition for us keeping a lot of the amazing trust we have in this community is that we believe that trust is valuable. I get that grifters are going to be an issue. I also think that grifters are going to have a much easier time if there isn't a lot of openness and transparency within the movement. 

Openness and transparency, like we've seen historically, seems only possible with high degrees of trust. 

Posting a post on the importance of trust seems like a good starting point for getting people on board with the idea that doing the things that foster trust is worth doing (I think the things that foster trust tend to foster it because they are good signals/can help us tell grifters and trustworthy people apart, so this sort of thing hits two birds with one stone).

Comment by Sophia on On funding, trust relationships, and scaling our community [PalmCone memo] · 2022-07-19T01:21:42.204Z · EA · GW

I have written up a draft template post on the importance of trust within the community (and trust with others we might want to cooperate with in the future, e.g. the people who made that UN report on future generations mattering a tonne happen). 

Let me know if you would like a link, anyone reading this is also very welcome to reach out! 

Feedback on the draft content/points and also social accountability are very welcome.

A quick disclaimer: I don't have a perfect historical track record of always doing the things I believe are important, so there is some chance I won't finish fleshing the post out or actually post it (though I've been pretty good at doing my very high-priority things for the last couple of years and this seems reasonably likely to remain pretty high on my priority list until I post it)

I will write a couple more paragraphs on why I think this post might help as a reply to this comment. 

Comment by Sophia on AI Game Tree · 2022-07-18T09:56:35.041Z · EA · GW

I am very excited about this event (thanks organisers for putting it on)

Comment by Sophia on EA for dumb people? · 2022-07-18T01:25:53.159Z · EA · GW

It's just my general feeling on the forum recently that a few different groups of people are talking past each other sometimes and all saying valuable true things (but still, as always, people generally are good at finding common ground which is something I love about the EA community). 

Really, I just want everyone reading to understand where everyone else is coming from. This vaguely makes me want to be more precise when other people are saying the same thing in plain English. It also makes me want to optimise for accessibility when everyone else is saying something in technical jargon that more people could get value from understanding. 

Ideally I'd be good enough at writing to be precise and accessible at the same time (but both precision and making comments easier to understand for a broader group of readers are so time-consuming, so I often try to do one or the other, and sometimes I'm terrible and make a quick comment that is definitely neither 🤣). 

Comment by Sophia on EA for dumb people? · 2022-07-18T01:24:03.369Z · EA · GW

Some of my personal thoughts on jargon and why I chose, pretty insensitively given the context of this post, to use some anyway

 I used the "second moment of a distribution" jargon here initially (without the definition that I later edited in) because I feel like sometimes people talk past each other. I wanted to say what I meant in a way that could be understood more by people who might not be sure exactly what everyone else precisely meant. Plain English sometimes lacks precision for the sake of being inclusive (inclusivity that I personally think is incredibly valuable, not just in the context of this post). And often precision is totally unnecessary to get across the key idea. 

However, when you say something in language that is a little less precise, it naturally has more room for different interpretations. Some interpretations readers might agree with and some they might not. The reason jargon tends to exist is that it is really precise. I was trying to find a really precise way of saying the vibe of what many other people were saying so everyone felt a tiny bit more on the same page (no idea if I succeeded though, or if it was actually worth it, or if it was actually even needed and whether this is all actually just in my head). 

Comment by Sophia on EA for dumb people? · 2022-07-18T00:13:22.163Z · EA · GW

Thanks 😊. 

Yeah, I've noticed that this is a big conversation right now. 

My personal take

EA ideas are nuanced and ideas do/should move quickly as the world changes and our information about it changes too. It is hard to move quickly with a very large group of people. 

However, the core bit of effective altruism, something like "help others as much as we can and change our minds when we're given a good reason to", does seem like an idea that has room for a much wider ecosystem than we have. 

I'm personally hopeful we'll get better at striking a balance. 

I think it might be possible to have a small group that is highly connected and dedicated (and which maybe can move quickly) whilst also having many more adjacent people and groups that feel part of our wider team. 

Multiple groups co-existing means we can broadly be more inclusive, with communities that accommodate a very wide range of caring and curious people, where everyone who cares about the effective altruism project can feel they belong and can add value. 

At the same time, we can maybe still get the advantages of a smaller group, because smaller groups still exist too.

More elaboration (because I overthink everything 🤣)

Organisations like GWWC do wonders for creating a version of effective altruism that is more accessible and distinct from the vibe of, say, the academic field of "global priorities research". 

I think it is probably worth it on the margin to invest a little more effort into the people who are sympathetic to the core effective altruism idea but who might, for whatever reason, not find a full sense of meaning and belonging within the smaller group of people who are more intense and more weird. 

I also think it might be helpful to put a tonne of thought into what community builders are supposed to be optimizing for. Exactly what that thing is, I'm not sure, but I feel like it hasn't quite been nailed just yet and lots of people are trying to move us closer to this from different sides. 

Some people seem to be pushing for things like less jargon and more inclusivity. Others are pointing out that there is a trade-off here because we do want some people to be thinking outside the Overton Window. The community also seems quite capacity-constrained, and high-fidelity communication takes so much time and effort.

If we're trying to talk to 20 people for one hour each, we're not spending 20 hours talking to just one incredibly curious person who has plenty of reasonable objections and, therefore, needs someone, or several people, to explore the various nuances with them (like people did with me, possibly mistakenly 😛, when I first became interested in effective altruism, and I'm so incredibly grateful they did). If we're spending 20 hours having in-depth conversations with one person, that means we're not having in-depth conversations with someone else. These trade-offs sadly exist whether or not we are consciously aware of them. 

I think there are some things we can do that are big wins at low cost though, like just being nice to anyone who is curious about this "effective altruism" thing (even if we don't spend 20 hours with everyone, we can usually spend 5 minutes just saying hello and making people who care feel welcome and that them showing up is valued, because imo, it should definitely be valued!). 

Personally, I hope there will be more groups that are about effective altruism ideas where more people can feel like they truly belong. These wider groups would maybe be a little bit distinct from the smaller group(s) of people who are willing to be really weird and move really fast and give up everything for the effective altruism project. However, maybe everyone, despite having their own little sub-communities, still sees each other as wider allies without needing to be under one single banner. 

Basically, I feel like the core thrust of effective altruism (helping others more effectively, using reason and evidence to form views) could fit a lot more people. I also feel like it's good to have more tightly knit groups with a more specific purpose (like trying to push the frontiers of doing as much good as possible in ways that may be less legible to a large audience).

 I am hopeful these two types of communities can co-exist. I personally suspect that finding ways for these two groups of people to cooperate and feel like they are on the same team could be quite good for helping us achieve our common goal of helping others better (and I think posts like this one and its response do wonders for all sorts of different people to remind us we are, in fact, all in it together, and that we can find little pockets for everyone who cares deeply to help us all help others more).

Comment by Sophia on Comments for shorter Cold Takes pieces · 2022-07-17T22:40:14.261Z · EA · GW

😅🙏😊

Comment by Sophia on Senior EA 'ops' roles: if you want to undo the bottleneck, hire differently · 2022-07-17T22:30:58.887Z · EA · GW

Great (and also unsurprising so I'm now trying to work out why I felt the need to write the initial comment)

I think I wrote the initial comment less because I expected anyone to reflectively disagree and more because I think we all make snap judgements that maybe take conscious effort to notice and question.

I don't expect anyone to advocate for people because they speak more jargon (largely because I think very highly of people in this community). I do expect it to be harder to understand someone who comes from a different cultural bubble and, therefore, harder to work out if they are aligned with your values enough. Jargon often gives precision that makes people more legible. Also human beings are pretty instinctively tribal and we naturally trust people who indicate in some way (e.g. in their language) they are more like us. I think it's also easy for these things to get conflated (it's hard to tell where a gut feeling comes from and once we have a gut feeling, we naturally are way more likely to have supporting arguments pop into our heads than opposing ones).

Anyway, I feel there is something I'm pointing to even if I've failed to articulate it.

Obviously EA hiring is pretty good because big things are getting accomplished and have already happened. I probably should have said initially that this does feel quite marginal. My guess as an outsider is that hiring is, overall, done quite a bit better than at the median non-profit organisation.

I think the reason it's tempting to criticize EA orgs is that we're all so invested in them being as good as they can possibly be and so want to point out perceived flaws to improve them (though this instinct might often be counter-productive because it takes up scarce attention, so sorry about that!).

Comment by Sophia on Comments for shorter Cold Takes pieces · 2022-07-17T12:01:12.549Z · EA · GW

Hi Linch, I'm sorry for taking so long to reply to this! I mainly just noticed I was conflating several intuitions and I needed to think more to tease them out.

(my head's no longer in this and I honestly never settled on a view/teased out the threads but I wanted to say something because I felt it was quite rude of me to have never replied)

Comment by Sophia on EA for dumb people? · 2022-07-17T11:53:52.410Z · EA · GW

There are also limited positions in organisations, as well as limited capacity of senior people to train up junior people, but, again, I'm optimistic that 1) this won't be so permanent and 2) we can work out how to better make sure that people who care deeply about effective altruism but have careers outside effective altruism organisations also feel like valued members of the community.

Comment by Sophia on EA for dumb people? · 2022-07-17T11:50:08.742Z · EA · GW

Will we permanently have low capacity? 

I think it is hard to grow fast and stay nuanced, but I personally am optimistic about ending up as a large community in the long run (not next year, but maybe next decade) and I think we can sow seeds that help with that (e.g. by making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere).

Comment by Sophia on Is it possible for EA to remain nuanced and be more welcoming to newcomers? A distinction for discussions on topics like this one. · 2022-07-17T02:47:18.164Z · EA · GW

Yeah, I also find it very de-stabilizing and then completely forget my own journey instantly once I've reconciled everything and am feeling stable and coherent again. 

It's nice to hear I'm not the only one here who isn't in the 99.999th percentile of being stoically unaffected by this. 

 I think one way to deal with this is to mainly select for people with these weird dispositions who are unusually good at coping with this. 

I think an issue with this is that the other 99% of planet Earth might be good allies to have in this whole "save the world" project and could actually get on board if we do community building exceptionally well. On the other hand, maybe this is too big an ask because community building is just really hard and by optimising for inclusivity, we maybe trade-off against other things that we care about possibly even more. 

I personally don't know what the optimal message for community-builders is but I hope we keep having these sorts of conversations. Even if it turns out that there is no good answer, I think it's still worth it on expectation to think hard about this.

If we can guide community-builders better to manage these complexities and nuances, I think we'll be able to create a much stronger ecosystem to help us tackle the world's most pressing problems. 

Also, thank you for navigating through my replies! I really appreciate you taking the time to read them. 

 I will limit myself to just this one comment so I don't drown this post's comment thread in any more of my spam. 😅