Posts

Apply to help run EAGxIndia, Berkeley, Singapore and Future Forum! 2022-05-21T20:44:54.436Z
What comes after the intro fellowship? 2022-05-05T00:57:27.821Z
The availability bias in job hunting 2022-04-30T14:53:10.737Z
Some benefits and risks of failure transparency 2022-03-27T02:09:35.305Z
Add randomness to your EA conferences 2022-03-04T21:09:19.484Z
Career Factors: A framework for understanding success in career paths 2022-03-01T18:04:45.602Z
The EA Hub is suspending new feature development (with plans to retire) 2022-02-27T22:02:56.772Z
Ideas from network science about EA community building 2022-02-17T09:34:38.579Z
What (standalone) LessWrong posts would you recommend to most EA community members? 2022-02-09T00:31:41.103Z
When in doubt, apply* 2022-01-23T19:07:45.975Z
Coordination within EA: community & ecosystems 2021-10-20T17:49:20.566Z
Exit opportunities after management consulting 2021-08-16T10:54:04.827Z
Maximizing impact during consulting: building career capital, direct work and more. 2021-08-13T13:59:32.077Z
Considerations and advice on entering management consulting 2021-08-11T13:06:12.995Z
The EA Hub: What’s New in 2021 2021-06-13T13:30:32.932Z
What is meta Effective Altruism? 2021-06-02T06:47:05.612Z
Tips for Choosing Enterprise Software 2021-04-09T13:37:25.142Z
What drew me to EA: Reflections on EA as relief, growth, and community 2021-03-26T03:04:37.659Z
Apply Now to the EA Fellowship Weekend! (March 26-28) 2021-02-20T00:27:28.343Z
Exploratory Careers Landscape Survey 2020: Group Organisers 2021-01-30T10:41:18.710Z
EA Organisations: If you created a sequence about your approach to doing good, what would you write? 2021-01-20T16:07:59.156Z
Effective Altruism as a Social Movement 2021-01-16T06:55:25.804Z
The funnel or the individual: Two approaches to understanding EA engagement 2021-01-08T18:06:32.330Z
LessWrong/EA New Year's Ultra Party 2020-12-18T05:15:39.413Z
How can we improve online EA social events? 2020-10-21T01:35:50.242Z
Suggestions for Online EA Discussion Norms 2020-09-24T01:42:34.784Z
A Step-by-Step Guide to Running Independent Projects 2020-07-15T11:49:49.625Z
Annotated List of EA Career Advice Resources 2020-07-13T06:12:48.348Z
Annotated List of Project Ideas & Volunteering Resources 2020-07-06T03:29:39.312Z
2019 Ethnic Diversity Community Survey 2020-05-12T04:49:36.746Z
Case Study: Volunteer Research and Management at ALLFED 2020-04-25T07:12:36.961Z
What career advice gaps are you trying to fill? 2020-04-20T02:57:23.390Z
What (other) posts are you planning on writing? 2020-04-04T06:18:46.618Z
Effective Environmentalism Community Updates 2020-03-29T20:41:00.601Z
Group Organiser Survey on Career Advice Bottlenecks 2020-02-17T18:53:16.821Z
vaidehi_agarwalla's Shortform 2019-12-06T21:03:43.762Z
Ethnic Diversity Experience Survey 2019-11-12T11:48:47.849Z
Call for beta-testers for the EA Pen Pals Project! 2019-07-26T19:02:03.422Z
What posts are you planning on writing? 2019-07-24T05:12:23.069Z
A Guide to Early Stage EA Group-Building at Liberal Arts Colleges 2019-07-02T12:53:23.752Z

Comments

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on EA Forum feature suggestion thread · 2022-06-23T17:43:22.138Z · EA · GW

Yeah, it's the recurring events that are the main problem.

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Results of a survey of international development professors on EA · 2022-06-22T22:23:40.341Z · EA · GW

Did you ask any questions about people's impressions of EA (adjacent) organizations like JPAL, IDinsight, CGD or GiveWell?

I would also be curious whether you have intuitions on which subsets of the profession might be more or less interested in EA / contributing to the movement.

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Results of a survey of international development professors on EA · 2022-06-22T22:21:02.043Z · EA · GW

What is the background of most IDEV professors? I'd also be curious, more generally, about an overview of the field and how it compares to e.g. development economics (I know very little about it!)

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on EA Forum feature suggestion thread · 2022-06-16T14:41:17.848Z · EA · GW

For events, it would be useful to get notifications a fixed amount of time before the event rather than when they are uploaded to the system. Right now I get 2-8 notifications at a time, often for the same recurring event.

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on EA Forum feature suggestion thread · 2022-06-13T17:05:50.689Z · EA · GW

Adding to the suggestion of a better preview image for Twitter: I notice other sites use top quotes from the article as a cover image, which I think is pretty interesting.

(probably not worth implementing, but just for inspiration really)

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on What’s the theory of change of “Come to the bay over the summer!”? · 2022-06-12T15:48:12.639Z · EA · GW

I was going to comment pretty much exactly the same thing, thanks for doing the hard work for me :)

I think part of what is missing here for me is a bit of the context beforehand:

  • Who is saying "come to the Bay"? It seems like this message is shared in specific circles
  • What factors fell into place for you to have a positive experience where others may not have, e.g. the kinds of things Chana points out, and Joseph Lemien's comment
  • (This one is unfair, since it's a bit out of scope for the post.) How one might actually go about going to the Bay. I think it's pretty fuzzy and unclear how to navigate that space unless you already know the relevant people

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on EA Forum feature suggestion thread · 2022-06-11T15:47:40.414Z · EA · GW

That's awesome!

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Lifeguards · 2022-06-11T00:28:57.322Z · EA · GW

A comment from a friend (I've paraphrased a bit): 

In this post two things stand out: 

  1. This advice seems to be particularly targeted at college students / undergraduates / people early in their careers (based on Section 2) and I expect many undergraduates might read this post.
  2. Your post links to 2 articles from Eliezer Yudkowsky's / MIRI's perspective on AI alignment, which is a particularly dire (but, importantly, not the only) perspective on alignment research. Also, several people working on alignment do in fact have plans (link to Vanessa Kosoy), even if they are skeptical those plans will work. 

The way these articles are linked presents them as an accepted view, or in a fairly unnuanced way, which seems concerning, especially coupled with the framing of "we have to save the world" (which Benjamin Hilton has commented on).

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Lifeguards · 2022-06-10T23:49:40.483Z · EA · GW

I really like these nuances. I think one of the problems with the drowning child parable / early EA thinking more generally is that it was (and still is, to a large extent) very focused on the actions of the individual. 

It's definitely easier and more accurate to model individual behavior, but I think we (as a community) could do more to improve our models of group behavior even though it's more difficult and costly to do so. 

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Thoughts on naming EA conferences · 2022-06-09T19:07:58.886Z · EA · GW

I agree that the current names aren't ideal. A mix of thoughts:

  • Agree that the "x" is misleading. It could be interesting to consider throwing out the EAGx branding and seeing what we come up with (keeping in mind that there are costs of rebranding) 
    • I ran the EA Fellowship Weekend, which was targeted at people going through the virtual fellowship program. The name isn't ideal, but it made clear that the event was for newer people going through the fellowship 
    • A name signalling that this is a local or regional community-driven event (vs. the CEA team planning an event in a particular location) could be good? 
    • Words that signal being new as a positive, e.g. related to exploring, investigating, discovery, experimentation - you're at the start of an exciting adventure or something? 
  • We need an "EAGxx" to be what an EAGx used to be (a much more top-of-funnel event that people with minimal knowledge of EA can attend). In some regions, this would be very useful. 
    • These conferences could drop the EAGx branding altogether, and possibly not even reference EA in the title if that makes sense - just make it clear that EA Group X is running it / it's funded by CEA (if it is). E.g. Catalyst was a really cool name for a biosecurity summit run a few years back.
    • Also: communicate that the bar for attending an EAGx has been raised, to having at least a basic understanding of EA
  • RE: cause area - what's the value of adding EAGx to the branding of a cause-specific summit? I imagine that people in the community will know it's organized by EAs, and for people outside the community it could be at best neutral but at worst confusing or off-putting if they aren't interested in EA. But I wonder if I'm missing something? 

I'm really bad with names, so I haven't suggested specific examples. 

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Vael Gates: Risks from Advanced AI (June 2022) · 2022-06-08T23:44:07.684Z · EA · GW

It would be helpful to have the slides / transcript in the post body (I expect you'd get more feedback that way)

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Deference Culture in EA · 2022-06-08T01:19:04.277Z · EA · GW

Could you give a few specific examples of times you have seen EAs deferring too little? 

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Just Say No to Utilitarianism · 2022-06-03T17:45:29.697Z · EA · GW

Meta comment: I'd prefer if you cross-posted the whole post because I'm unlikely to go to a new link.

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Power dynamics between people in EA · 2022-06-01T22:55:16.479Z · EA · GW

Thanks for writing this Julia! I think this could be a helpful resource for newer EAs who haven't interacted much with the community; a lot goes unsaid and surprises people when they are first figuring out the social dynamics in EA. 

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Effektiv Spenden - Review of the Year 2021 · 2022-05-31T20:25:19.411Z · EA · GW

we were able to respond to individual needs—sometimes at very short notice—without having to compromise on the effectiveness of the charities we recommended.


Would love to hear more about this - have you previously made this trade-off? How / why?

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on EA NYC is Hiring for Multiple Roles · 2022-05-31T15:09:20.914Z · EA · GW

So exciting! It would be helpful to mention somewhere at the top of the post whether you are able to sponsor work visas.

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Who's hiring? (May-September 2022) · 2022-05-30T14:06:42.960Z · EA · GW

Thanks for flagging this Martin!

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on High Impact Medicine, 6 months later - Update & Key Lessons · 2022-05-30T00:24:25.995Z · EA · GW

it’s so common in the EA world to say, if you don’t believe in malaria nets, you must have an emotional problem.

I'm not saying that.

The point I was trying to make was actually the opposite - that even for the "cold and calculating" EAs it can be emotionally difficult to choose the intervention (in this case malaria nets) which doesn't give you the "fuzzies" or feeling of doing good that something else might. 

I was trying to say that it's normal to feel like some decisions are emotionally harder than others, and framings which don't acknowledge that may come across as dismissive of other people's actions. (Of course, I didn't elaborate on this in the original comment.) 

Malaria nets should not be this symbol where believing in them is a core part of the EA faith.

I don't make this claim in my comment - I am just using malaria nets as an example since you used it earlier, and it's an accepted shorthand for "commonly recommended effective intervention" (but maybe we should just say that - maybe we shouldn't use the shorthand).  

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Revisiting the karma system · 2022-05-30T00:05:11.322Z · EA · GW

I think strong votes worth +6/7 karma seem too high, because it's not clear who is voting when you see the breakdown of karma to votes. I end up never using my strong upvote, but I would if it were in the 3-5 range.

I think it would also help if we had separate votes for usefulness vs. "I agree with this" vs. "this applies to me", and if we had in-post polls, especially for anecdotal posts.

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on High Impact Medicine, 6 months later - Update & Key Lessons · 2022-05-29T08:12:28.868Z · EA · GW

I really like framings which acknowledge how hard (emotionally) it can be to choose malaria nets.

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Request: feedback on my EAIF application · 2022-05-29T08:02:07.994Z · EA · GW

Is an earning-to-give or donations-focused strategy the right one for Romania, or is something else more impactful?

My guess is that exploring this more would be good; I don't think the case has really been made yet. I'm more excited about filling the need for software engineers.

So my point about e.g. the website isn't really about asking for funding for it or not, or focusing on content or not, but rather about whether the high-level goal (effective giving) makes sense as your strategy.

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Will there be an EA answer to the predictable famines later this year? · 2022-05-28T15:33:03.707Z · EA · GW

I don't have an answer, but I appreciate you raising this ahead of time and think it would be great if we could have well thought out answers and resources for this. 

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Who's hiring? (May-September 2022) · 2022-05-28T15:30:19.489Z · EA · GW

Momentum is hiring a product manager and software engineers, with rolling applications for UX designers and partnerships managers. We work in-person in San Francisco and we can help with visas. We're looking for mission-aligned, talented people who are excited to move fast and break things. 

We're a venture-backed, EA startup that aims to increase effective giving by building donation pages that emphasize creative, recurring donations tied to moments in your life (e.g. offset your carbon footprint when you buy gas, or give to AI alignment every time you scroll Facebook) and we use behavioral science to nudge new donors to support EA charities. Read more about us here. 

Open roles

You can see all of our open roles here, but we’d love to hear from you even if you’re not a perfect fit for anything listed. We’re actively recruiting for:

  1. Product Manager: We’re hiring our first dedicated product manager. We’re looking for someone to work closely with design, growth, and tech to conduct user interviews, build roadmaps, specify features, and more.
  2. Software Engineer: We’re looking for engineers to meet our growing tech needs. If you have over two years of professional experience, we likely have use for you. We’re hiring mid to senior level engineers on frontend, backend, and full-stack.

More about our hiring.

If you’re interested in learning more, you can write a comment, send an email to ari@givemomentum.com, or apply to an open role. 

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Who's hiring? (May-September 2022) · 2022-05-28T15:21:05.058Z · EA · GW

Apply to join the EAGxIndia 2023, the Future Forum and the EAGxBerkeley 2023 teams. The application takes 15 minutes and the deadline is Tuesday May 31st at 11:59pm Pacific Time. We are also seeking volunteers (separate application) to run EAGxSingapore 2022. Read more.

These roles are a great fit for someone who:

  • Wants to test their fit for event management, operations and community building
  • Depending on team structure:
    • Has the capacity to take on 5-10 hours of flexible work per week leading up to the conference
    • Is able to work full- or near-fulltime 2 weeks before the conference
  • Could be a good fit for any of the following (specific needs will vary based on the location)
    • Strategy (Goals, Metrics)
    • Project management (Budgeting, Team lead, Evaluation)
    • Production (Venue, Catering, AV, Health & Safety)
    • Admissions (Application review, Stewardship)
    • Content (Speaker Selection & Liaison, Swapcard Manager)
    • Communications (Emails, Marketing, PR, Website)
  • Is organized, reliable, and handles crisis situations well

Logistics

  • Pay: All positions are paid, exact compensation will vary by event. EAGx organizers are paid the standard CEA contractor rate, read more here.
  • Location
    • The EAGxBerkeley and India roles are mostly remote—you don’t have to be based in India or the Bay. You’ll likely be expected to attend 1-2 pre-event retreats, and all core team members are required to be in-person at least 1 week before the event. All else equal, we would prefer to have people in similar timezones for better coordination.
    • The Future Forum would prefer in-person roles, based in the Bay Area in June and July. They can cover some of the travel costs.
  • Team structure: Team structures are very flexible and depend on the applicants we get and their strengths.

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Monthly Overload of EA - June 2022 · 2022-05-27T21:58:46.461Z · EA · GW

I like the rebranding. 10/10.

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Request: feedback on my EAIF application · 2022-05-27T21:57:06.426Z · EA · GW

(small side note: I'd suggest changing the title of this post to be more specific so people know that you're requesting feedback, rather than giving feedback on the EAIF application)

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Request: feedback on my EAIF application · 2022-05-27T16:06:00.497Z · EA · GW

Hi Ariel, thanks for sharing this document and asking for feedback publicly! 

A few thoughts on your application:

General principles:

  • It may in fact be better for you to spend some more time building a track record by doing a small project with very narrow goals (e.g. 1 outreach project or 1 program). Happy to brainstorm what that might be.
  • In your application, I'd also encourage you to focus on just 1 minimum viable project that will help you build a track record
  • It's okay to be narrow if you're deliberate about it, and point to the scope for expansion later. You don't need to cover everything at once.

What I see as the most promising strategy based on your application (and my extremely limited knowledge of Romania):

  • Leaning into your comparative advantage to do outreach to the tech sector (while balancing the risks of founder effects by keeping the outreach clearly targeted so that non-tech people don't feel excluded - maybe by giving the project a more specific name, e.g. EA Romania Tech Outreach or something)
    • At the same time, I wouldn't underestimate the challenges of recruiting in the tech sector, even if you're part of it. I might spend more time in your application talking about what recruiting strategies you would use
  • Establishing a Schelling point / person (you) for EAs in Romania via monthly meetups & 1-1s

Strategy feedback:

  • Right now, your application has too many goals, and your strategy has a lot of sub-items which I don't think are realistic to accomplish well in 6 months. I think it would be better to narrow your focus to 1-2 goals and reduce your strategy section to maybe 3-5 core activities (at most).
  • Areas to narrow or cut:
    • I don't see obvious benefits of setting up a legal entity / NGO in Romania at such an early stage
    • I don't know if donations are necessarily the right thing to focus on right now - if your group is producing impact, EA funders would be likely to fund it (that being said, having multiple donors is perhaps good for your org's sustainability; I just wouldn't focus on this)
    • I'm not sure how much value a website or content creation would bring at this stage - I could imagine copying an existing group's site and changing the language could be more than sufficient.
      • Since it's in your 1 line summary, it seems like you think this is important. If so, I'd make more of a case for it in the application.
    • It's not clear to me how you would support students (outside of general meetups & 1-1s), maybe you could mention that you would forward them to existing resources (e.g. CEA's UGAP program)


Feedback on the application itself (vs. the strategy)

  • I would make the "what we have accomplished" section a bit easier to understand. For example, what role did you play in getting 4 participants to EAGxPrague or the intro fellowship? Did you have 1-1 calls, encourage people to attend, etc.? I also wouldn't lead with followers on social media - to me that is a vanity metric. I'd be more interested in how active your local WhatsApp group is, or whether people are actively working on independent projects.
  • I left a comment or two in the google doc as well.


I hope this is helpful!

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on What are we as the EA community? · 2022-05-25T14:57:19.217Z · EA · GW

+1 to a polls feature! Add to forum feature thread?

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on The EA movement’s values are drifting. You’re allowed to stay put. · 2022-05-25T00:35:45.562Z · EA · GW

Thanks, this is helpful, and potentially a useful top-level post.

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on The EA movement’s values are drifting. You’re allowed to stay put. · 2022-05-24T17:40:57.426Z · EA · GW

I think the challenge is that the recent changes can be described in a number of different ways:

  • Object-level changes to the fields, disciplines or industries we focus on, which is a priorities shift
  • Changes in attitudes and behaviors regarding spending, which could maybe be described as a lifestyle shift (and relatedly, the increasing importance ascribed to EA time, which could be a bit of a values shift)
  • A more ambitious and less risk-averse attitude, which is maybe a culture shift

I'm not quite sure how I'd summarise these changes in one phrase or word, but these things in combination do create a certain... "aesthetic" that feels coherent - I could create a "2022 EA starter pack" meme that would probably capture the above pretty accurately. 

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on The EA movement’s values are drifting. You’re allowed to stay put. · 2022-05-24T17:33:41.975Z · EA · GW

This was mainly a linguistic comment, because I find that people sometimes disagree with a post if the terminology used is wrong, so I wanted to get ahead of that. I probably could have been clearer that I think you've identified something important and true here; I am somewhat concerned about how memes spread, and wouldn't want people who haven't updated along those lines to feel less like they are part of the EA community.  

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on The EA movement’s values are drifting. You’re allowed to stay put. · 2022-05-24T17:25:31.616Z · EA · GW

While I understand the point you're making, the comment you linked is (to my non-STEM mind) pretty hard to parse. Would you be able to give a less technical, more ELI5 explanation?

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on EA needs to understand its “failures” better · 2022-05-24T14:41:50.438Z · EA · GW

I wrote a related post on this topic: https://forum.effectivealtruism.org/posts/5ZznqbRthKCbAB9Fk/some-benefits-and-risks-of-failure-transparency

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on The EA movement’s values are drifting. You’re allowed to stay put. · 2022-05-24T01:01:15.485Z · EA · GW

Relatively unpolished but posting anyway.
 

Finally: +1 for posting anyway, I appreciate it. I find alternative framings of ideas I've heard before, and things I don't fully agree with, really useful (more so than ideas I agree with, actually) for teasing out what I actually think and clarifying my thoughts on complicated topics. 

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on The EA movement’s values are drifting. You’re allowed to stay put. · 2022-05-24T00:58:20.927Z · EA · GW

I'm in agreement with most of your post, except for one thing: calling these changes to our values.

The following is the beginning of a chain of thinking that isn't fully fleshed out, but I hope it's useful. All word choices are probably suboptimal. I don't hold the implications of these views very strongly, or at all; I'm mostly trying to puzzle things out and provide arguments I think are somewhat strong or compelling.

All the things you mention don't seem like values to me - they seem more like strategies or approaches to doing good.

"Core" values are things like truth-seekingness, epistemic humility, or maximizing impact, whereas for example "cause neutrality" and by extension "longtermism" are downstream of those. 

But we also have "secondary values" (terrible wording), which are influenced by our core values, our worldview, and specific (cognitive) beliefs about how the world works (these influence each other but are somewhat independent).

I can see a version of EA where the core values -> longtermism chain becomes replaced with just longtermism as a default (just like in current EA the core values -> helping people in developing countries chain is something of a default - it's not very often that people come into EA strongly opposing this value set, and that isn't a bad thing; these are the low-hanging fruit). 

Why are core & secondary values important to distinguish?

  1. People who are on board with the changes do not see the shifted values as conflicting with the core values; they see them as a natural progression of those core values. Just as we thought "everyone matters" leads to "donate to help improve the lives of poor people in developing countries", so too does "everyone matters" lead to "future people should be our priority". 
    1. Implication: people reading this post may say "this isn't value drift"
  2. I think our core values are really important and the real glue of our community, a glue that will withstand the test of time and ideally let us adapt, change and grow as we get new information and update our beliefs. 
    1. Maybe this is too idealistic, and in practice saying "but we share the same core values", even if true, is simply not enough. 
    2. In practice, the level of secondary values can be more useful: maybe technical AI safety researchers and farmed animal welfare advocates just don't have that much in common, or the inferential distance is a bit too large in terms of their models of the world, impact, risk aversion, etc. 

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on EA culture is special; we should proceed with intentionality · 2022-05-24T00:41:26.242Z · EA · GW

+1 to transparency!

I would love to see more community builders share their theories of change, even if they are just 1/2-page Google Docs with a few bullets and links to other articles (noting where their opinions differ), and periodically update them (say, every 6 months or so) with major changes and examples of where they were wrong (the latter is by far the most important to me).

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on The EA movement’s values are drifting. You’re allowed to stay put. · 2022-05-24T00:35:31.494Z · EA · GW

Similarly, less funding towards evidence-backed ideas, and more funding for long shot projects, of which much or most of the expected value comes from a small chance of big success.


Suggestion: change this to "a lower proportion of funding towards", since the total amount of funding to e.g. GiveWell-backed charities has increased, and overall there is still more funding going towards that (at least as of now, though that may change in the near future). 

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Apply to help run EAGxIndia, Berkeley, Singapore and Future Forum! · 2022-05-23T19:40:22.496Z · EA · GW

Yes, I just added it - thanks for the flag! 

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Death to 1 on 1s · 2022-05-21T16:34:11.589Z · EA · GW

Like others, I empathise (quite a lot, for reasons stated by Amber) with the gist of this post, but have met a lot of interesting people in (planned) 1-1s. 

I'll just pick out one specific point: 

You could be getting into a wonderful conversation -- but then they have to go. "Sorry, I'm meeting someone else in ten minutes".

I think this is a good and prosocial thing to do, and doesn't dehumanize but instead shows respect for the other conference attendees you have meetings with. 

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Deferring · 2022-05-17T02:28:47.454Z · EA · GW

In light of the other discussions, delegating choice seems better than deferring to experts.

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Deferring · 2022-05-17T01:46:03.107Z · EA · GW

A related post on the importance of delegating choice (though not framed as a trade-off between buying into a thing vs. doing it) is Jan Kulveit's What to do with people from a few years ago. 

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Deferring · 2022-05-17T01:41:01.094Z · EA · GW

I think the important thing with delegation, which Howie pointed out, is that in the example you gave of event organising there is a social contract between the volunteer and volunteer manager (or employer and contractor), where I'd expect that in the process of choosing to sign up for the job, the person makes a decision based on their own thinking (or epistemic deference) to contribute to the event - I think this is what you mean by high bandwidth?

 

If so, I feel in agreement with the statement: "I feel particularly uncomfortable with people in the meta space delegating choice without high bandwidth, and without explicit understanding that that's what they're doing"

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Deferring · 2022-05-17T01:35:24.297Z · EA · GW

Just to make sure I understand correctly: is "delegating choice" the same as "delegating a choice (of an action to be made)"?

If so, I think this is a much better phrase than "deferring to authority" at least, and I would even propose editing the OP to suggest it as an alternative phrase / address this so that others don't get the wrong impression - based on our conversation it seems we have more agreement than I would have guessed from reading the OP alone. 

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Deferring · 2022-05-17T01:23:20.322Z · EA · GW

That makes sense, and feels mostly in line with what I would imagine. 

Maybe this is a small point (since there will be many more junior than senior roles in the long run): I feel like the senior group would likely join an org for many reasons other than deference to authority (e.g. not wanting to found an org themselves, wanting to work with particular people or in a work environment they expect to be good, or because of epistemic deference). It seems like in practice those would be much stronger motivating reasons than authority, and I'm having a hard time picturing someone doing this in practice. 

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Advice on how to get a remote personal/executive assistant · 2022-05-16T19:20:08.814Z · EA · GW

Has anyone got good leads on US-timezone-friendly VAs?

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Deferring · 2022-05-16T17:41:08.388Z · EA · GW

(fwiw I upvoted this post, because I thought it raised a lot of interesting points that are worth discussing, despite disagreeing with some bits.)

In sum: I think your post sometimes lacks specificity, which makes people think you're talking more generally than (I suspect) you are.

  1. Who exactly are you proposing doesn't buy into the agenda? This is left vague in your post. Are you envisioning 20% of people? 50%? What kinds of roles are these folks in? Is it only junior-level non-technical roles, or even mid-managers doing direct work?

Those details matter because I think I'd be fine with e.g. junior ops people at an AI org not fully buying the specific research agenda of that org, but I'm not sure about the other roles here.

  2. Who do you count as the EA community or movement? If we are thinking of big-tent EA, where you have people with the skills the movement needs but not necessarily a deep understanding of EA, I'm more sympathetic to this argument. But if we're thinking of core-community EA, where many people are doing things like community building or EA is a big part of their lives, I feel much more uncomfortable with people deferring to authority - perhaps I feel particularly uncomfortable with people in the meta space deferring to authority.

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Deferring · 2022-05-16T17:20:02.910Z · EA · GW

"Deferring to experts" might be a less loaded term. Also, defining who the experts are, especially for a lot of EA fields that are newer and less well established, could help.

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Some thoughts on Scaling · 2022-05-15T02:41:16.906Z · EA · GW

I really love that main quote, and am curious how you came across it. I'd also love to know if you would recommend any summaries or sections of the books you reference in the footnotes. 

Some specific areas I liked:

1) I think the phrase "scaling-as-a-whole" is very useful and gets to the crux of the problem with the current situation. I also resonate somewhat with this sentence in particular: 

Scaling of a part without respect to a whole often distorts organizational coherence 


Although I'd quibble over what organizational coherence means exactly, I might say something more vague. (I'd be curious how you define the term) 

Here's an alternative: "scaling a part without respect to the whole leads to another part being a limiting factor"

  • But this is not the full picture - e.g. if funding is abundant, talent becomes the limiting factor, but there are other externalities that don't get captured by saying "talent is a limiting factor"

Another: "scaling a part without respect to the whole leads to (negative) externalities that disrupt the balance of the existing system"

  • Not a huge fan of this one either (it assumes a lot - e.g. that the current system has a good balance, that balance is desirable, etc.), but I think they are getting somewhere. 

2) I like this proposed change in framing to consider the EA movement more holistically. 

The key difference in practice, I believe, is shifting away from:

Scaling-As-A-Goal as *against* Scaling-As-A-Whole

and instead move towards:

Scaling-As-A-Goal as *a function of* Scaling-As-A-Whole.[3]

This means that when we are scaling a particular part of a company or community, we are actively looking for information/feedback from other parts to understand its externalities.

Comment by Vaidehi Agarwalla (vaidehi_agarwalla) on Bad Omens in Current Community Building · 2022-05-15T00:39:26.499Z · EA · GW

+1 to the concern on epistemics; that is one of my bigger concerns also.

Really excited for the new syllabus! Please do share it when it's ready :)