Resources for better understanding aptitudes? 2022-04-20T15:39:00.568Z
[Podcast] Is scientific progress slowing? with James Evans 2022-03-31T13:19:59.797Z
Results from UChicago EA's Personal Finance Workshop 2022-03-19T16:44:28.689Z
[Creative Writing Contest] [Fiction] The One Who Walked Sanctum 2021-10-27T13:33:33.777Z
Some mental health resources tailored for EAs 2021-08-20T16:04:45.319Z
Building my Scout Mindset: #2 2021-08-17T01:14:58.629Z
On what kinds of Twitter accounts would you be most interested in seeing research? 2021-07-17T12:27:03.792Z
Building my Scout Mindset: #1 2021-07-16T18:55:03.032Z
Building my Scout Mindset: Introduction 2021-07-16T18:15:56.642Z
Miranda_Zhang's Shortform 2021-07-08T14:55:40.336Z
[Feedback Request] Hypertext Fiction Piece on Existential Hope 2021-05-30T15:44:40.506Z
'Are We Doomed?' Memos 2021-05-19T13:51:09.697Z


Comment by Miranda_Zhang on How to start a blog in 5 seconds for $0 · 2022-07-04T14:34:11.568Z · EA · GW

Thanks for this! I'm hoping to start a future-proof personal website + blog and was looking into using Hugo w/ Github pages. What do you think of using static site generators as opposed to, say, Blot?

Comment by Miranda_Zhang on Announcing: EA Engineers · 2022-07-04T13:24:29.082Z · EA · GW

So excited you are launching this! Great to see more field-building efforts.

Comment by Miranda_Zhang on We need more discussion and clarity on how university groups create value · 2022-07-02T22:06:12.944Z · EA · GW

Liked this a lot - reframing the goal of CB as optimizing for high alignment and high competence is useful.

I'm not sure I totally agree, though. I want there to be some EA community-building that is optimizing for alignment but not competence: I imagine this would be focused on spreading awareness of the principles—as there (probably) remains a significant number who may be sympathetic but haven't heard of EA—as well as encouraging personal reflection, application, and general community vibes. I haven't totally let go of the Singer & GWWC vision of spreading EA memes throughout society.

However, I do think optimizing for alignment + competence is the right direction for meta-EA (e.g., talent search to help tackle X cause), and helps explain why I think field-building is the frontier of meta-EA.

Comment by Miranda_Zhang on A summary of every Replacing Guilt post · 2022-06-30T00:44:04.066Z · EA · GW

Thank you for doing this - never thought I wanted this, but I definitely do! I also took notes but very messily, and it's so useful to have a summary (especially for people who haven't read it yet).

Comment by Miranda_Zhang on Less often discussed EA emotional patterns · 2022-06-30T00:27:36.149Z · EA · GW

Strongly upvoted for fleshing out and articulating specific emotional phenomena that (a) I think drew me to EA and (b) have made it hard for me to actually understand + embody EA principles. I've perused a lot of the self-care tag and I don't think anyone has articulated it as precisely as you have here.

The below quote, in particular, captures a learning that has been useful for me (if still leaning into using impact as a justification/rationale).

Ironically, having your impact define your self-worth can actually reduce your impact in multiple ways

Comment by Miranda_Zhang on Four reasons I find AI safety emotionally compelling · 2022-06-29T14:39:51.308Z · EA · GW

I really appreciate the sentiment behind this - I get the sense that working on AI safety can feel very doom-y at times, and appreciate any efforts to alleviate that mental stress.

But I also worry that leaning into these specific reasons may lead to intellectual blindspots. E.g., believing that aligned AI will make every other cause redundant leads me to emotionally discount considerations such as the temporal discount rate or tractability. If you can justify your work as a silver bullet, then how much longer would you be willing to work on it, even when it seems impossible? Where does one draw the line?

My main point here is that these reasons can be great motivators, but should only be called upon after someone has intellectually mapped out the reasons why they are working on AI and what would need to change for them to stop working on it.

Comment by Miranda_Zhang on Effective Altruism as Coordination & Field Incubation · 2022-06-16T13:44:28.443Z · EA · GW

I found the concrete implications of distinguishing this more cause-oriented model of EA really useful, thanks!

I also agree, at least based on my own perception of the current cultural shift (away from GHD and farmed animal welfare, and towards longtermist approaches), that the most marginally impactful meta-EA opportunities might increasingly be in field-building.

Comment by Miranda_Zhang on Miranda_Zhang's Shortform · 2022-06-11T16:09:44.955Z · EA · GW

Optionality cost is a useful reminder that option value consists not only of minimising opportunity cost but also increasing your options (which might require committing to an opportunity).

This line in particular feels very EA: 

As Ami Vora writes, “It’s not prioritization until it hurts.” 

Comment by Miranda_Zhang on ‘EA Architect’: Updates on Civilizational Shelters & Career Options · 2022-06-09T22:49:00.581Z · EA · GW

I really respect your drive in leading this project and was excited to read all the updates!

Would love to view the EA office space design database too : )

Comment by Miranda_Zhang on What’s the theory of change of “Come to the bay over the summer!”? · 2022-06-08T21:59:40.554Z · EA · GW

I love that you wrote this because I grappled with a slightly bigger version of this, which was 'move to the Bay,' and I wasn't able to get a detailed theory of change from the people who were recommending this to me.

I think point 4 is especially interesting and something that motivated my decision to move (essentially, 'experience Berkeley EA culture'). Ironically, most people focused on the first three points (network effects). I do think I'm unsure whether point 4 (specifically, the shift towards maximization, which feels related to totalising EA) is a net positive. Though perhaps by "theory of change" you really just meant the effect of coming to Berkeley, and weren't claiming that coming to Berkeley is net positive for one's impact?

Comment by Miranda_Zhang on Miranda_Zhang's Shortform · 2022-06-08T16:48:11.736Z · EA · GW

Realizing that what drove me to EA was largely wanting to "feel like I could help people" and not, "help the most beings." This leads me to, for example, really be into helping as many people as I can individually help flourish (at the expense of selecting for people who might be able to make the most impact)*.

This feels like a useful specification of my "A" side and how/why the "E" side is something I should work on!

*A more useful reframing of this is to put it into impact terms. Do I think the best way to make impact is to

(1) find the right contexts/problems wherein a given person can have an outsized impact, or

(2) focus on specific people that I think have the highest chance of having an outsized impact?

Comment by Miranda_Zhang on Mastermind Groups: A new Peer Support Format to help EAs aim higher · 2022-06-08T02:14:00.906Z · EA · GW

This was interesting, thanks! I haven't heard of Mastermind Groups before but in general, I'm excited about trialling more peer-support interventions. This is the approach I took with UChicago EA's career planning program,* which was in turn inspired by microsolidarity practices. I think these interventions provide a useful alternative to the more individual-focused approaches such as 1:1s, 80k career advising, and one-off events.

*It's worth noting that this one iteration did update me towards "selection is important," which seems similar to what Steve Thompson is saying - not because I didn't think attendees got value out of it, but because I felt I wasn't creating as much impact as I hoped.

Comment by Miranda_Zhang on Miranda_Zhang's Shortform · 2022-06-08T01:34:47.338Z · EA · GW

Thanks, this is a good tip! Unfortunately, the current options I'm considering seem more hands-off than this (i.e., the expectation is that I would start with little oversight from a manager), but this might be a hidden upside because I'm forced to just try things. : )

Comment by Miranda_Zhang on Notes on impostor syndrome · 2022-06-08T01:31:15.464Z · EA · GW

Thank you for this - I found it at least as useful as Luisa's (fantastic) post. : ) 

I teared up reading this, mostly because I felt really validated in how I've slowly been tackling my imposter syndrome (getting feedback, reminding myself not to focus on comparisons, focusing on better mapping the world and not making useless value judgments). I also happen to think that you are a wonderful member of the EA community, who is doing good work with the Forum, so this nudges me towards thinking that if really cool people feel this way, maybe I can be a really cool person too!

Comment by Miranda_Zhang on Miranda_Zhang's Shortform · 2022-06-07T21:42:24.400Z · EA · GW

Thing I should think about in the future: is this "enough" question even useful? What would it even mean to be "agentic/strategic enough?"

edit: Oh, this might be insidiously following from my thought around certain roles being especially important/impactful/high-status. It would make sense to consider myself as falling short if the goal were to be in the heavy tail for a particular role. 

But this probably isn't the goal. Probably the goal is to figure out my comparative advantage, because this is where my personal impact (how much good I, as an individual, can take responsibility for) and world impact (how much good this creates for the world) converges. In this case, there's no such thing as "strategic enough" - if my comparative advantage doesn't lie in strategy, that doesn't mean I'm not "strategic enough" because I was never 'meant to' be in strategy anyway! 

So the question isn't, "Am I strategic enough?" But rather, "Am I more suited for strategy-heavy roles or strategy-light roles?"

Comment by Miranda_Zhang on Miranda_Zhang's Shortform · 2022-06-07T21:39:35.763Z · EA · GW

A big concern that's cropped up during my current work trial is whether I'm actually just not agentic/strategic/have-good-judgment-enough to take on strategy roles at EA orgs.

I think part of this is driven by low self-confidence, but part of this is the very plausible intuition that not everyone can be in the heavy tail and maybe I am not in the heavy tail for strategy roles. And this feels bad, I guess, because part of me thinks "strategy roles" are the highest-status roles within the meta-EA space, and status is nice.

But not nice enough to sacrifice impact! It seems possible, though, that I actually could be good at strategy and I'm bottlenecked by insecurity (which leads me to defer to others & constantly seek help rather than being agentic). 

My current solution is to flag this for my future manager and ensure we are trialling both strategy and operations work. This feels like a supportive way for me to see where my comparative advantage lies - if I hear, "man, you suck at strategy, but your ops work is pretty good!" Then I would consider this a win!

My brain now wants to think about the scenario where I'm actually just bad at both. But then I'll have to take the advice I give my members: "Well, then you got really valuable information - you just aren't a great fit for these specific roles, so now you get to explore options which might be great fits instead!"

Comment by Miranda_Zhang on LW4EA: Beyond Astronomical Waste · 2022-05-25T00:49:56.468Z · EA · GW

I feel like this post relies on an assumption that this world is (or likely could be) a simulation, which made it difficult for me to grapple with. I suppose maybe I should just read Bostrom's Simulation Argument first.

But maybe I'm getting something wrong here about the post's assumptions?

Comment by Miranda_Zhang on Look Out The Window · 2022-05-13T00:51:15.783Z · EA · GW

Really fantastic. Feels like this could be the new 'utopia speech!'

Comment by Miranda_Zhang on Happiness course as a community building exercise and mental health intervention for EAs · 2022-05-11T13:57:49.763Z · EA · GW

Thanks for this! This is exactly the kind of programming I was thinking of when I reflected on the personal finance workshop I ran for my group.

Question - what leads you to think the below?

The happiness course increased people’s compassion and self-trust, but it may have reduced the extent to which they view things analytically (i.e. they may engage more with their emotions to the detriment of their reason).

Comment by Miranda_Zhang on Longtermism, aliens, AI · 2022-05-08T19:39:40.850Z · EA · GW

I think there's room for divergence here (i.e., I can imagine longtermists who only focus on the human race) but generally, I expect that longtermism aligns with "the flourishing of moral agents in general, rather than just future generations of people." My belief largely draws from one of Michael Aird's posts.

This is because many longtermists are worried about existential risk (x-risk), which specifically refers to the curtailing of humanity's potential. This includes both our values⁠—which could lead to wanting to protect alien life, if we consider them moral patients and so factor them into our moral calculations—and potential super-/non-human descendants. 

However, I'm less certain that longtermists worried about x-risk would be happy to let AI 'take over' and for humans to go extinct. That seems to get into more transhumanist territory. C.f. disagreement over Max Tegmark's various AI aftermath scenarios, which run the spectrum of human/AI coexistence.

Comment by Miranda_Zhang on What comes after the intro fellowship? · 2022-05-07T18:44:42.398Z · EA · GW

Thanks for writing this up - I definitely feel like the uni pipeline needs to flesh out everything between the Intro Fellowship and graduating (including options for people who don't want to be group organizers). 

Re: career MVP stuff, I'm running an adaptation of GCP's career program that has been going decently! I think career planning and accountability is definitely something uni groups could do more of.

Comment by Miranda_Zhang on LW4EA: How to Not Lose an Argument · 2022-05-07T01:33:49.117Z · EA · GW

Hmm. I am sometimes surprised by how often LW posts take something I've seen in other circumstances (e.g., CBT) and repackage it. This is one of those instances - which, to be fair, Scott Alexander completely acknowledges!

I like the reminder that "showing people you are more than just their opponent" can be a simple way to orient conversations towards a productive discussion. This is really simple advice but useful in polarized/heated contexts. I feel like the post could have been shortened to just the last half, though.

Comment by Miranda_Zhang on What We Owe the Past · 2022-05-07T00:56:22.586Z · EA · GW

Upvoted because I thought this was a novel contribution (in the context of longtermism) and because I feel some intuitive sympathy with the idea of maintaining-a-coherent-identity.

But also agree with other commenters that this argument seems to break down when you consider the many issues that much of society has since shifted its views on (c.f. the moral monsters narrative).

I still think there's something in this idea that could be relevant to contemporary EA, though I'd need to think for longer to figure out what it is. Maybe something around option value? A lot of longtermist thought is anchored around preserving option value for future generations, but perhaps there's some argument that we should maintain the choices of past generations (which is why, for example, codifying things in laws and institutions can be so impactful).

Comment by Miranda_Zhang on Beware Invisible Mistakes · 2022-05-05T00:23:07.703Z · EA · GW

Thanks for synthesizing a core point that several recent posts have been getting at! I especially want to highlight the importance of creating a community that is capable of institutionally recognizing + rewarding + supporting failure.

What can the EA community do to reward people who fail? And - equally important - how can the community support people who fail? Failing is hard, in no small part because it's possible that failure entails real net negative consequences, and that's emotionally challenging to handle.

With a number of recent posts around failure transparency (one, two, three), it seems like the climate is ripe for someone to come up with a starting point.

Comment by Miranda_Zhang on Nathan Young's Shortform · 2022-05-02T23:46:42.312Z · EA · GW

I actually prefer "scale, tractability, neglectedness" but nobody uses that lol

Comment by Miranda_Zhang on Miranda_Zhang's Shortform · 2022-05-02T23:45:48.959Z · EA · GW

I wonder if anyone here has read these books?

In particular, 'Inventing Human Rights: A History' seems relevant to Moral Circle Expansion.

edit: I should've read the list fully! I've actually read The Honor Code. I didn't find it that impressive but I guess the general idea makes sense. If we can make effective altruism something to be proud of - something to aspire to for people outside the movement, including people who currently denigrate it as being too elitist/out-of-touch/etc. - then we stand a chance at moral revolution.

Comment by Miranda_Zhang on Has this EA critique article been discussed/responded to? · 2022-05-02T23:42:51.500Z · EA · GW

Not to this specific article but this post seems relevant:

Comment by Miranda_Zhang on My Reflections Facilitating for EA Virtual Programs · 2022-05-02T17:06:54.319Z · EA · GW

Thanks for this! I hadn't thought about EA VP much but if it's true that it is the main way for potential EAs that aren't near existing groups to get into an EA learning pipeline (which seems reasonable), it seems likely that EA VP could be pretty important and relatively neglected.

I'm also fascinated by the people-first approach you're talking about. That's definitely the direction I have found myself leaning towards, and I'm hoping UChicago EA will adopt similar approaches next year (e.g., having more 1:1s, focusing on social community).

Comment by Miranda_Zhang on Is there such a thing as a 'Meta Think Tank'? · 2022-05-01T01:59:33.306Z · EA · GW

Don't think this is quite what you were looking for, but a program at UPenn runs an annual survey ranking think tanks worldwide.

Comment by Miranda_Zhang on LW4EA: Philosophical Landmines · 2022-04-30T01:29:20.300Z · EA · GW

As someone interested in messaging, I liked this! Carefully choosing one's words - and being aware of how someone might perceive a word you use, like "consequentialism" or "moral" - can be important in ensuring a conversation goes well.

Comment by Miranda_Zhang on You should write on the EA Forum · 2022-04-29T18:23:54.324Z · EA · GW

Love the addition of the emojis (I don't remember seeing them before but I might be wrong!)

Comment by Miranda_Zhang on Working in US policy as a foreign national: Immigration pathways and types of impact · 2022-04-26T23:59:47.111Z · EA · GW

This might be the clearest articulation I've seen yet of work sorted by immigration status in the US, period. Thanks for summarizing this! It's great to have something to point to in the future.

Comment by Miranda_Zhang on Mid-career people: strongly consider switching to EA work · 2022-04-26T23:16:48.810Z · EA · GW

I really appreciate you writing this post, especially since you note "I focus on mid-career people here partly because I think EA career advice for mid-career people is undersupplied at the moment."

I agree with the implication that more resources should be dedicated to outreach to mid-career people, especially since senior management/mentorship seems to be a bottleneck for causes like AI safety. To that end, efforts like these seem valuable!

Comment by Miranda_Zhang on Three Reflections from 101 EA Global Conversations · 2022-04-25T23:41:37.205Z · EA · GW

Really enjoyed this post and the takeaways, which I thought were insightful and ~fairly novel (at least amongst EAG(x) reflections). I'm a big proponent of 3) and definitely think it can be useful to have things written up in advance of the conference, too. People may not be inclined to read it at the conference but at least they'll have something to refer to after!

Thanks for this, Akash!

Comment by Miranda_Zhang on What are the key claims of EA? · 2022-04-25T20:54:56.652Z · EA · GW

Reminds me of this spreadsheet made by Adam S, which I generally really like.

I agree that it would be nice to have a more detailed, up-to-date typology of EA's core & common principles, as the latter seems like the more controversial class (e.g., should longtermism be considered the natural consequence of impartial welfarism?)

Comment by Miranda_Zhang on The case for not pursuing a career in an EA organization · 2022-04-25T20:49:02.230Z · EA · GW

That sounds very exciting. Will be keeping my eyes peeled for your post (though I'd be grateful if you could ping me with it when you post, too)!

Comment by Miranda_Zhang on EA can be hard: links for that · 2022-04-25T11:42:48.931Z · EA · GW

Thanks for this! There are some resources that I haven't read before. Would you mind if I copied this list over to my post, Mental Health Resources tailored for EAs (WIP)?

Comment by Miranda_Zhang on Optimize... Everything? · 2022-04-24T23:28:17.804Z · EA · GW

As someone who intuitively relates to Maya & can understand where they're coming from, I really enjoyed your comment. In particular, I thought your point on "maximising our resources does mean taking into account the importance of human connection and acting accordingly" was eloquently articulated.

I will note, however, that this frame isn't wholly satisfactory to me as it can lead me to view self-care etc. only as instrumental to the goal of optimizing for impact. While this is somewhat addressed by the post Aiming for the minimum of self-care is dangerous, this outcome-focused frame (e.g., "self-care is necessary to sustainably make impact") still leads me to feel like I have no value outside of the impact I can have and ties my self-worth too much to consequences.

But I know this isn't a problem for everyone - maybe this is just because I don't identify as a consequentialist, or because of my mental health issues! Regardless, I appreciated your thorough response to this post.

Comment by Miranda_Zhang on Calling for Student Submissions: AI Safety Distillation Contest · 2022-04-23T21:48:58.122Z · EA · GW

Really excited for this - I think distillation will be useful not only for checking the distiller's understanding, but also in better communicating ideas around AI safety. Thanks for starting up this project!

Comment by Miranda_Zhang on My experience with imposter syndrome — and how to (partly) overcome it · 2022-04-22T13:38:23.678Z · EA · GW

I think this is a great example of a thoughtful self-care post, and I'd love to see more posts like this.

I really appreciate how detailed and open this post is. The concrete examples bit is especially useful, as it shows the clear disparity between one's distorted (imposter-y) perceptions and the mainstream (undistorted?) perception!

I've ordered the two workbooks and will see if those help me specifically tackle low self-esteem, scrupulosity, and imposter syndrome! More broadly, I found the Feeling Good Handbook useful for general depression/anxiety symptoms.

Comment by Miranda_Zhang on My experience with imposter syndrome — and how to (partly) overcome it · 2022-04-22T12:12:13.581Z · EA · GW

Would love to hear about what worked for you! I think I feel that particular point much more strongly than the rest of imposter syndrome (e.g., I don't notice myself avoiding jobs that seem too important) but I don't feel like I've made that much progress in ~2 years of therapy.

edit: updated towards 'maybe I do actually have imposter syndrome alongside general low self-esteem' after I took the screening tool & realized that I do have a bottleneck around "putting your work out there"

Comment by Miranda_Zhang on The case for not pursuing a career in an EA organization · 2022-04-21T13:47:36.991Z · EA · GW

This is a really good point, thank you for adding important nuance! I think coordination within the EA community is important for ensuring that we engage + sustain the entire spectrum of talent. I'd be keen for people with good fits* to work on engaging people who are less likely to be in the 'heavy-tail' of impact.

*e.g., have a strong comparative advantage, are already embedded in communities that may find it harder to pivot

I also have a strong reaction to Marc's "collateral damage" phrase. I feel sad that this may be a perception people hold, and I do very much want people to feel like they can contribute impactfully beyond mainstream priority paths. I think this could be partly a communication issue, where there's conflation between (1) what the [meta-]EA community should prioritize next, (2) what the [cause-specific, e.g. x-risk] community should prioritize next,** and (3) what this specific individual could do to have the most impact. My original comment was intended to get at (1) and (2), but acknowledge that (3) can look very different - more like what Marc is suggesting.

**And that's ignoring that there aren't clear distinctions between (1) and (2). Usually there's significant overlap!

I find the claim that people could upskill into significantly more impactful paths to be really interesting. This seems ~related to my belief that far more people than we currently expect can become extremely impactful, provided we identify their specific comparative advantages. I'd be excited for someone to think about potential mechanisms for (a) supporting later-stage professionals in identifying + pivoting to higher-impact opportunities and (b) constructing paths for early-career individuals to upskill specifically with a higher-impact path in mind.

Comment by Miranda_Zhang on [deleted post] 2022-04-20T21:09:56.318Z

While I'm intuitively sympathetic to recommendations for self-care within the EA community, I didn't upvote this post because it's so brief and doesn't back up claims like "Taking care of oneself doesn’t get talked about enough in the community building space" or "running an EA group may jeopardise your well-being."

Comment by Miranda_Zhang on Resources for better understanding aptitudes? · 2022-04-20T17:43:25.004Z · EA · GW

I'm looking for both! I think resources to build aptitudes can help people to test their fit, while a breakdown of subskills can deepen people's understanding of what the aptitude entails.

This project sounds really exciting - I see upskilling as one of the biggest bottlenecks in community-building right now, so having a centralized directory seems useful.

Comment by Miranda_Zhang on LW4EA: How Much is Your Time Worth? · 2022-04-20T00:28:58.385Z · EA · GW

I was having trouble understanding this post because my first thought was, "But I'll be paid regardless of how productive I am during the heatwave, given how unlikely it is that someone will notice the impact on my productivity!"

... But that, of course, ignores the objective reduction in value produced. This seems particularly bad if one is working directly in EA.

Comment by Miranda_Zhang on Free-spending EA might be a big problem for optics and epistemics · 2022-04-16T23:19:11.401Z · EA · GW

Thank you so much for this post. It eloquently captures concerns that I've increasingly heard from group members (e.g., I know a fairly-aligned member who wondered whether a retreat we were running was a "waste of CEA's money"). While I agree that the funding situation is a boon to the movement, I also agree that we should carefully consider its impact on optics/epistemics. I also think all your suggestions sound reasonable and I'd be really excited to see, for example,

  • a 'go-to' justification (ideally including a BOTEC) for spending money on events
  • more M&E for meta-EA funding, particularly spending from group organizers (and I say this despite it very much being against my self-interest, because I think this would substantially increase the effort of getting funding. So, I guess I'd really appreciate if an existing meta-EA funder looked into creating infrastructure for this)
  • a nuanced explanation of EA's funding situation

Comment by Miranda_Zhang on LW4EA: Can the Chain Still Hold You? · 2022-04-16T23:11:30.086Z · EA · GW

This particular quote really struck me!

Compared to situational effects, we tend to overestimate the effects of lasting dispositions on people's behavior — the fundamental attribution error. But I, for one, was only taught to watch out for this error in explaining the behavior of individual humans, even though the bias also appears when explaining the behavior of humans as a species.

It makes me optimistic for the future of humanity—perhaps we really can improve the moral arc of our species. Though it's unclear to me that the opposite can't be equally plausible, which just brings me to neutral I suppose.

I'm also not totally sure what the takeaway of this post is. 

Sometimes it's good to check if the chain can still hold you. Do not be tamed by the tug of history. Maybe with a few new tools and techniques you can just get up and walk away — to a place you've never seen before.

I am guessing that this is a prompt for individuals to try breaking from their habits/adopt growth mindsets, but I find the conclusion overly abstract.

Comment by Miranda_Zhang on The case for not pursuing a career in an EA organization · 2022-04-16T23:01:52.862Z · EA · GW

Thanks for this! I agree that the case for working at an EA org. seems less clear if you have already established career capital in a field or organization.

Regardless, the most important crux here is this belief:

I think having 1% of humanity lightly engaged in EA-related activities is more valuable than having 0.0001% deeply engaged.

The necessity of EA alignment/engagement is an enduring question within movement-building. Perhaps the most relevant version of it right now is around AI safety: I know several group organizers who believe that a) AI is one of the most important causes and b) EA alignment is crucial to being able to do good alignment work, which means that it's more important to get the right % of humanity deeply engaged in EA activities.

Another way of framing this is that impact might be heavy-tailed: that is, ~most of the impact might come from people at the very tail-end of the population (e.g., people who are deeply engaged in EA). If that were true, then that might mean that it's still more impactful to deeply engage a few people than to shallowly engage many people.

I guess that the people who are likeliest to believe that impact is heavy-tailed would also prioritize x-risk reduction (esp. from AI) the most, which would also reduce their perception of the impact of earning-to-give (because of longtermism's funding situation, as you note). I'm not sure that those kinds of group organizers would agree that they should prioritize activities that promote 'shallow' EA engagement (e.g., local volunteering) or high-absorbency paths (e.g. earning-to-give), because it's plausible that the marginal impact of deeper engagement outweighs the increased exposure.

But none of this contravenes your overall point that for some individuals, the most marginally impactful thing they could do may not be to work at an EA org. 

edit: used "shallowly" twice, incorrectly

Comment by Miranda_Zhang on Learning by writing in groups · 2022-04-12T22:36:52.649Z · EA · GW

Ooh, I've been doing similar stuff independently and think a group could be helpful!

When would the virtual Writing Group start + for how long would it run?

Comment by Miranda_Zhang on University Groups Should Do More Retreats · 2022-04-12T02:32:05.287Z · EA · GW

Thanks for compiling all this in one compact post! Having just run a retreat, this seems like ~common knowledge among community-builders, but it'll be very useful to have a single post to point to instead of having to gather pieces of knowledge together.