vaidehi_agarwalla's Shortform

post by vaidehi_agarwalla · 2019-12-06T21:03:43.762Z · EA · GW · 46 comments


comment by vaidehi_agarwalla · 2020-07-15T07:17:29.448Z · EA(p) · GW(p)

Collection [EA · GW] of Constraints in EA

Replies from: vaidehi_agarwalla
comment by vaidehi_agarwalla · 2021-01-07T02:28:34.333Z · EA(p) · GW(p)

Related posts: 

The Case for the EA Hotel [EA · GW] by Halffull. Kind of a summary of the above constraints: it explains how the EA Hotel could address the lack of mobility in the middle (what the OP calls a "chasm") and tries to explain the vetting and talent constraints in the EA community. The first part is especially useful for outlining the underlying model.

Which community building projects get funded? [EA · GW] by AnonymousEAForumAccount. It raises an important question, but I (Vaidehi) think the analysis misses the most important questions. I've built off the original spreadsheet with categories here.

comment by vaidehi_agarwalla · 2020-07-10T12:53:50.455Z · EA(p) · GW(p)

Mini Collection [EA · GW] - Non-typical EA Movement Building

Basically, these are ways of spreading EA ideas and philosophies, or furthering concrete EA goals, that differ from the typical community building models that local groups use.


Suggestions welcome!

Replies from: vaidehi_agarwalla
comment by vaidehi_agarwalla · 2020-07-11T01:05:49.778Z · EA(p) · GW(p)

This quote from Kelsey Piper:

Maybe pretty early on, it just became obvious that there wasn’t a lot of value in preaching to people on a topic that they weren’t necessarily there for, and that I had a lot of thoughts on the conversations people were already having.
Then I think one thing you can do to share any reasoning system, but it works particularly well for effective altruism is just to apply it consistently, in a principled way, to problems that people care about. Then, they’ll see whether your tools look like useful tools. If they do, then they’ll be interested in learning more about that.
...
My ideal effective altruist movement had insightful nuanced, productive, takes on lots and lots of other things so that people could be like, "Oh, I see how effective altruists have tools for answering questions. I want the people who have tools for answering questions to teach me about those tools. I want to know what they think the most important questions are. I want to sort of learn about their approach."
comment by vaidehi_agarwalla · 2020-03-22T23:59:18.055Z · EA(p) · GW(p)

How valuable is building a high-quality (for-profit) event app for future EA conferences?

There are 6 EAG(x) conferences a year. This number will probably increase over time, and more conferences will come up as EA grows; I'd expect somewhere between 80-200 EA-related conferences and related events in the next 10 years. This includes cause-area-specific conferences, like Catalyst, and other large events.

A typical 2.5-day conference with on average ~300 attendees, each spending 30 hours, comes to 9,000 person-hours per conference, or 720,000-1,800,000 person-hours over 10 years. Of this time, I'd expect 90% to be taken up by meetings, attending events, eating etc. If an app saved 10% of the remaining 10% (72,000-180,000 hours), that would be 7,200-18,000 hours, or roughly 3.5-9 work-years, which seems pretty useful!

For reference, 1 year of work (a 40 hours work-week for 50 weeks) = 2000 hours.
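Putting the same numbers in a quick script (a sketch of my reading of the estimate; the 10%-of-overhead savings rate is an assumption I've made explicit):

```python
# BOTEC from above; every number is an assumption stated in the comment.
ATTENDEES = 300          # average attendees per conference
HOURS_PER_ATTENDEE = 30  # hours each attendee spends over ~2.5 days
WORK_YEAR_HOURS = 2000   # 40-hour week * 50 weeks

for n_conferences in (80, 200):  # assumed range of EA conferences over 10 years
    total_hours = n_conferences * ATTENDEES * HOURS_PER_ATTENDEE
    overhead = total_hours * 0.10  # the 10% not spent in meetings/events/eating
    saved = overhead * 0.10        # assume an app recovers 10% of that overhead
    print(f"{n_conferences} conferences: {total_hours:,} total hours, "
          f"{saved:,.0f} saved (~{saved / WORK_YEAR_HOURS:.1f} work-years)")
```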

Replies from: vaidehi_agarwalla, vaidehi_agarwalla, BrianTan, DonyChristie
comment by vaidehi_agarwalla · 2021-01-05T15:26:44.637Z · EA(p) · GW(p)

Pricing estimate if we pay for an event conferencing app: Swapcard, recently used by CEA for EAGx events, costs approximately USD $7 per user.

Using my previous estimate (~300 attendees × 80-200 conferences × $7), the total cost over 10 years would be between USD $168,000-420,000 without any discounting. Discounting 50% for technology becoming cheaper and charity discounts, we could conservatively say $84,000-$210,000 total cost. 

Not sure what to do with this information, or how to compute the value of this money saved (assuming our benevolent EA ally / app creator gives us access for a heavily discounted price, otherwise the savings are not that important).

comment by vaidehi_agarwalla · 2020-12-07T13:35:44.863Z · EA(p) · GW(p)

Given the pandemic, I would actually upgrade the potential cost effectiveness of this, because we can now add Student Summits and EAGxVirtuals as potentially regular events, bringing the total in a non-COVID year to up to 8 events. 

comment by BrianTan · 2020-12-19T04:24:20.348Z · EA(p) · GW(p)

Hm I think Swapcard is good enough for now, and I like it more than the Grip app. I think this comes down to what specific features people want in the conference app and why this would make things easier or better.

Of course it would be good to centralize platforms in the future (i.e. maybe the EA Hub also becomes a Conference platform), but I don't see that being a particularly good use of time.

comment by DonyChristie · 2020-12-12T05:35:20.442Z · EA(p) · GW(p)

+1 the math there. How does building an app compare to throwing more resources at finding better pre-existing apps? 

I'll just add I find it kind of annoying how the event app keeps getting switched up. I thought Grip was better than whatever was used recently for EAGxAsia_Pacific (Catalyst?). 

Replies from: vaidehi_agarwalla
comment by vaidehi_agarwalla · 2020-12-18T10:59:29.383Z · EA(p) · GW(p)

I think CEA has looked at a number of apps - it would definitely be worth checking with them to see how many apps they've considered out of the total number of apps available, and possibly following the 37% rule. 
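For reference, a toy sketch of the 37% rule (the secretary problem's optimal stopping strategy), assuming apps are evaluated one at a time; the ratings below are purely illustrative:

```python
import math

def secretary_choice(scores):
    """37% rule: pass on the first n/e options, then take the first
    option that beats everything seen so far."""
    cutoff = int(len(scores) / math.e)  # roughly the first 37%
    benchmark = max(scores[:cutoff], default=float("-inf"))
    for score in scores[cutoff:]:
        if score > benchmark:
            return score
    return scores[-1]  # nothing beat the benchmark; settle for the last option

# Illustrative ratings for 6 candidate event apps, in the order evaluated
print(secretary_choice([6, 7, 5, 8, 4, 6]))  # -> 8
```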

Replies from: Habryka
comment by Habryka · 2020-12-19T01:55:00.912Z · EA(p) · GW(p)

It seems plausible, though overall not that likely, to me that maybe the LessWrong team should just build our own conference platform into the forum. We might look into that next year as we are also looking to maybe organize some conferences.

Replies from: vaidehi_agarwalla
comment by vaidehi_agarwalla · 2020-12-19T04:49:50.172Z · EA(p) · GW(p)

That would be interesting! I'd be interested to see if that happens - I think there would probably be benefits from integration with the LW/EA Forum. In what scenario do you think this would be most likely?

Replies from: Habryka
comment by Habryka · 2020-12-20T04:59:44.011Z · EA(p) · GW(p)

I think it's most likely if the LessWrong team decides to run a conference, and then after looking into alternatives for a bit, decides that it's best to just build our own thing. 

I think it's much more likely if LW runs a conference than if CEA runs another conference, not because I would want to prioritize a LW conference app over an EAG app, but because I expect the first version of it to be pretty janky, and I wouldn't want to inflict that on the poor CEA team without being the people who built it directly and know in which ways it might break. 

comment by vaidehi_agarwalla · 2020-12-18T10:58:28.780Z · EA(p) · GW(p)

An incomplete list of movement building terms

I plan to keep adding to and refining this list over time; I've just put my current list here in case anyone else has ideas. 

Movement Building: Any work to build a movement, including but not limited to: community infrastructure, community building, cause/field/discipline development, and outreach. Movement building does not need to involve using or spreading the EA brand.

Movement building is a subset of "Meta EA" or "meta altruism".

Types of Movement Building:

Community Infrastructure: The development of community-wide products and services that help develop the community. Online examples include building wikis, forums, tools and websites. Offline examples include conferences, community houses, and regional networks.

Note: Some community infrastructure may be limited to certain subgroups within the community, such as events and services for leaders or affiliated organisations. Such events might still provide benefits to the wider community, especially when they improve coordination and communication, and where relevant should be considered as infrastructure.  

Community Building: Influencing individuals to take actions based on the ideas and principles of the EA movement. This is often accomplished through the development of groups (local & online) organised by geography, shared interests, careers, causes and more. Local groups are the most common, but certain locations (e.g. "hub" cities like London) may also have subgroups based on cause or career. 

Field or Discipline Development: Developing new or influencing existing academic disciplines or fields through the creation of new organisations, advocacy or funding academics to work in this field. Closely related to Professionalization.   

Network Building: Developing the EA network to include non-EA actors, organisations and communities. See Community vs Network [EA · GW] by David Nash and EA for non-EA People: External Movement Building by Danny Lipsitz.

Professionalization: Giving an occupation, activity, or group professional qualities. This can be done by creating a career out of, increasing the status of, raising the qualifications required for, or improving the training given for an occupation, activity or group. 

Other terms

CEA's Funnel Model: A model of movement building which focuses on the different stages of involvement people have with EA, based on corporate sales funnel models.

Community: A group of people connected by a set of shared values, norms, or beliefs. 

Alternative definition: "A community is a group of humans with a shared identity who care for each other.” - Konrad [EA · GW]

Ideology: A set of ideas and beliefs that represent a particular worldview.  

Network: A group of people with varying degrees of connection to each other.

Organic Movement Growth: Movement growth that occurs organically, without explicit intentions (other than perhaps very broad actions like mass-targeted publications).

Social Movement: A group of people working to achieve a goal or set of goals through collective action. Differentiated from an intellectual movement because of the specification and emphasis on concrete actions.

comment by vaidehi_agarwalla · 2020-02-26T20:07:00.098Z · EA(p) · GW(p)

Meta-level thought:

When asking about resources, a good practice might be to mention resources you've already come across and why those sources weren't helpful (if you found any), so that people don't need to recommend the most common resources multiple times.

Also, once we have an EA-relevant search engine, it would be useful to refer people to that even before they ask a question in case that question has been asked or that resource already exists.

The primary goal of both suggestions would be to make questions more specific, in-depth and hopefully either expanding movement knowledge or identifying gaps in knowledge. The secondary goal would be to save time!

comment by vaidehi_agarwalla · 2021-07-24T03:25:00.397Z · EA(p) · GW(p)

Quick BOTEC of person-hours spent on EA Job Applications per annum.

I created a Guesstimate model to estimate that a total of ~14,000 to 100,000 person-hours, or ~7 to 51 FTE, are spent per year (90% CI). This comes to an estimated USD $320,000 to $3,200,000 of unpaid labour time. 

  • All assumptions for my calculations are in the Guesstimate
  • The distribution of effort spent by candidates is heavy-tailed; a small percentage of candidates may spend 3 to 10x more time than the median candidate.
  • I am not very good at interpreting the Guesstimate, so if someone can state this better / more accurately, that would be helpful
  • Keen to get feedback on whether I've over/underestimated any variables.
  • I'd expect this to grow at a rate of ~5-10% per year at least.

Sources: My own experience as a recruiter, applying to EA jobs and interviewing staff at some EA orgs.

 

Edited the unpaid labour time to reflect Linch's suggestions.
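
To make the shape of the estimate easier to poke at, here's a rough Monte Carlo sketch; every distribution and parameter below is an illustrative stand-in, not the Guesstimate's actual inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Illustrative stand-ins for the Guesstimate's inputs, not the actual model.
applicants = rng.lognormal(np.log(2000), 0.5, N)   # unique applicants per year
apps_each = rng.lognormal(np.log(5), 0.6, N)       # applications per applicant
hours_per_app = rng.lognormal(np.log(3), 0.7, N)   # heavy-tailed effort per application
wage = rng.normal(30, 5, N)                        # value of time, USD/hour

total_hours = applicants * apps_each * hours_per_app
unpaid_value = total_hours * 0.6 * wage            # assumed unpaid share (thread discusses 60-75%)

lo, hi = np.percentile(total_hours, [5, 95])
print(f"total: {lo:,.0f}-{hi:,.0f} person-hours (~{lo/2000:.0f}-{hi/2000:.0f} FTE)")
lo_v, hi_v = np.percentile(unpaid_value, [5, 95])
print(f"unpaid labour value: ${lo_v:,.0f}-${hi_v:,.0f}")
```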

Replies from: Linch
comment by Linch · 2021-07-24T09:49:30.576Z · EA(p) · GW(p)

Keen to get feedback on whether I've over/underestimated any variables.

I think 

Average person's value of time (USD)

As a normal distribution between $20-30 is too low; many EA applicants counterfactually have upper-middle-class professional jobs in the US. 

I also want to flag that you are assuming that the time is

unpaid labour time

but many EA orgs do in fact pay for work trials. "trial week" especially should almost always be paid. 

Replies from: vaidehi_agarwalla
comment by vaidehi_agarwalla · 2021-07-24T11:08:30.024Z · EA(p) · GW(p)

Hi Linch, thanks for the input!

I'll adjust the estimate a bit higher. In the Guesstimate I do discount the hours, saying that 75% of the total hours are unpaid (trial week hours come to 5% of the total hours).

Replies from: joshjacobson
comment by Josh Jacobson (joshjacobson) · 2021-07-26T23:19:27.056Z · EA(p) · GW(p)

I did not review the model, but only 75% of hours being unpaid seems much too low based on my experience having gone through the job hiring process (including later stages) with 10-15 EA orgs.

Replies from: vaidehi_agarwalla
comment by vaidehi_agarwalla · 2021-07-26T23:26:40.009Z · EA(p) · GW(p)

Okay, so I used a different method to estimate the total person-hours and my new estimate is something like 60%. I basically assumed that 50% of Rounds 2-4 in the application process are paid, and 100% of the work trial.

 I expect that established / longtermist orgs are disproportionately likely to pay for work tests, compared to new or animal / GH&D orgs. 

Replies from: aarongertler
comment by Aaron Gertler (aarongertler) · 2021-07-29T01:47:29.703Z · EA(p) · GW(p)

I think Josh was claiming that 75% was "too low", as in the total % of unpaid hours being more like 90% or something.

When I applied to a bunch of jobs [EA(p) · GW(p)], I was paid for ~30 of the ~80 hours I spent (not counting a long CEA work trial — if you include that, it's more like 80 out of 130 hours). If you average Josh and I, maybe you get back to an average of 75%?

*****

This isn't part of your calculation,  but I wonder what fraction of unique applicants to EA jobs have any connection to the EA community beyond applying for one job?

In my experience trying to hire [EA · GW] for one role with ~200 applicants, ~1/3 of them neither had any connection to EA in their resumes nor provided further information in their applications about what drew them to EA. This doesn't mean there wasn't some connection, but a lot of people just seemed to be looking for any job they could find. (The role was more generic than some and required no prior EA experience, so maybe drew a higher fraction of outside applicants.)

Someone having no other connection to the EA community doesn't mean we should ignore the value of their time, and the people who apply to the most jobs are likely to have the strongest connections, so this factor may not be too important, but it could bear consideration for a more in-depth analysis.

comment by vaidehi_agarwalla · 2020-10-31T01:28:27.382Z · EA(p) · GW(p)

How important is it to measure the medium term (5-50 years) impact of interventions?

I think that taking the medium-term impact into account is especially lacking in the meta space, since building out infrastructure is exactly the kind of project that could take several years to set up with little progress before gains are made. 

I'd also be interested in how many/which organisations plan to measure their impact on this 5-50 year timescale. I think it would be very interesting to see the impact of various GH&D charities on a 5- or 10-year timescale.  

comment by vaidehi_agarwalla · 2020-10-29T12:46:30.120Z · EA(p) · GW(p)

A Typology of EA Careers Advice

The Local Career Advice Network recently completed a pilot workshop to help group organisers develop and implement robust career 1-1 strategies. During this process we compiled all existing EA careers advice & strategy and found several open questions. This post provides an overview of the different kinds of careers research one could do. We will write more posts trying to explain the value of the different kinds of research. 

Movement-level research 

Individual-level research

  • This research identifies best practices, frameworks and tips on how to have a successful, fulfilling career. It could help individuals find a career that is the right choice for them: one that is aligned with their values, that they can excel at, and that they are motivated to stay in over the long term.
  • Risks: Causing harm by reducing an individual's long-term impact, or leading them to pursue a path where they don't have good personal fit. They might be turned away from the EA movement.
  • EA Examples: 80,000 Hours' 2017 Career Guide and Career Profiles
  • Non-EA examples: So Good They Can't Ignore You by Cal Newport

Advice intervention research

  • This research identifies interventions that can help deliver both movement-level and individual-level advice. Interventions prioritise
  • Risks: All of the above, if the research doesn't balance the two levels.
  • EA Examples: Animal Advocacy Careers is preregistering a study of their career 1-1 calls, Literature review [EA · GW] on what works to promote charitable donations
  • Non-EA Examples: Research on the effectiveness of coaching/mentoring.
Replies from: vaidehi_agarwalla
comment by vaidehi_agarwalla · 2020-10-29T12:59:00.695Z · EA(p) · GW(p)

I think movement-level advice is most useful for setting movement-level strategy, rather than informing individual actions because personal fit considerations are quite important. However, I think this has the consequence that some paths are much more clearly defined than others, making it difficult for people who don't have those interests to define a path.

comment by vaidehi_agarwalla · 2021-01-20T14:54:56.650Z · EA(p) · GW(p)

Reasons for/against Facebook & plans to migrate the community out of there

Epistemic Status: My very rough thoughts. I am confident of the reasons for/against, but the last section is mostly speculation, so I won't attempt to clarify my certainty levels

Reasons for moving away from Facebook

  • Facebook promotes bad discussion norms (see Point 4 here [EA · GW])
  • Poor movement knowledge retention
  • Irritating to navigate: It's easy to not be aware that certain groups exist (since there are dozens) and it's annoying to filter through all the other stuff in Facebook to get to them

Reasons against

  • Extremely high switching costs
    • start-up costs (see Neel's comment)
    • harder to pay attention to new platform
    • easier to integrate with existing social media
  • Offputting/intimidating to newer members
  • Past attempts haven't taken off (e.g. the EA London Discussion Board [EA · GW], but that was also not promoted super hard)
  • Existing online space (the Forum) is a bit too formal/intimidating

How would we make the switch? In order of increasing speculativeness

  • One subcommunity at a time. It seems like most EA groups are already more active in spaces other than Facebook, but it would be interesting to see this replicated at the cause-area level by understanding what community members' needs are and seeing if there's a way to offer alternatives.
  • Moving certain services found on Facebook to other sites: having a good opportunities board so people go to another place for ea jobs & volunteer opportunities, moving the editing & review group to the forum (?), making it easier for people to reach out to each other (e.g. EA Hub Community directory). Then it may be easier to move whatever is left (e.g. discussions) to a new platform.
  • Encouraging ~100 active community members to not use Facebook for a week as an experiment and track the outcomes
  • Make the Forum less intimidating so people feel more comfortable posting (profile pictures? Heart reacts? Embedded discord server or other chat function? Permanent Walled Garden?)

Things I'll be tracking that might update me towards how possible this is

  • LessWrong's experience with the Walled Garden
  • The EA Hub is improving our Community Directory & introducing some other services in 2021, possibly including 1-1 Matching and an Opportunities Board.
  • Cause area Slacks
    • Effective Environmentalism Slack group (not very active right now, but we haven't made many active efforts to encourage people to use the Slack yet. Might do this later in the year).
    • IIDM & Progress Studies Slack
  • Changes in Forum culture over time
  • If there are any EA groups or subcommunities already moving away from Facebook, please let me know so I can track you :)
Replies from: Neel Nanda, aarongertler, florian-z
comment by Neel Nanda · 2021-01-21T08:50:48.382Z · EA(p) · GW(p)
  • Offputting/intimidating to newer members

I want to emphasise this point, since I think it applies to both new and more experienced members. I personally find it quite a high mental load to actively pay attention to communities on a new platform. Some of these are start-up costs (learning a new interface, etc.), but there are also ongoing costs of needing to check the new site, etc. And it is much easier to add something to an existing place I already check

comment by Aaron Gertler (aarongertler) · 2021-01-21T10:15:23.216Z · EA(p) · GW(p)

I don't think the Forum is likely to serve as a good "group discussion platform" at any point in the near future. This isn't about culture so much as form; we don't have Slack's "infinite continuous thread about one topic" feature, which is also present on Facebook and Discord, and that seems like the natural form for an ongoing discussion to take. You can configure many bits of the Forum to feel more discussion-like (e.g. setting all the comment threads you see to be "newest first"), but it feels like a round peg/square hole situation.

On the other hand, Slack seems reasonable for this!

Replies from: Tsunayoshi
comment by Tsunayoshi · 2021-01-21T15:13:23.489Z · EA(p) · GW(p)

There is also a quite active EA Discord server, which serves the function of "endless group discussions" fairly well, so another Slack workspace might have negligible benefits.

comment by florian-z · 2021-01-21T19:52:01.047Z · EA(p) · GW(p)

Another possible reason against might be:
In some countries there is a growing number of people who intentionally don't use Facebook. Even if their reasons may be flawed, this might make recruiting more difficult. While I perceive this as quite common among German academics, Germany might also just be an outlier.

Moving certain services found on Facebook to other sites: [...], making it easier for people to reach out to each other (e.g. EA Hub Community directory). Then it may be easier to move whatever is left (e.g. discussions) to a new platform.

I think the EA Hub is in a good position to grow and replace some of the functions that Facebook is currently being used for in the community.

comment by vaidehi_agarwalla · 2019-12-06T21:03:43.897Z · EA(p) · GW(p)

Could regular small donations to Facebook Fundraisers increase donations from non-EAs?

The day before Giving Tuesday, I made a donation to an EA charity's Facebook fundraiser that had seen no donations in a few weeks. After I donated, about 3 other people donated within the next 2 hours (well before the Giving Tuesday start time). From what I remember, the total amount increased by more than the minimum amount, and the individuals appeared not to be affiliated with EA, so it seems possible that this fundraiser was somehow raised to their attention. (Of course, it's possible that with Giving Tuesday approaching they would have donated anyway.)

However, it made me think that regularly donating to fundraisers could keep them on people's feeds and inspire them to donate, and that this could be a pretty low-cost experiment to run. Since you can't see amounts, you could donate the minimum amount on a regular basis (say every month or so - about $60 USD per year). The actual design of the experiment would be fairly straightforward as well: use the previous year as a baseline of activity for a group of EA organisations and then experiment with who donates, when they donate, and different donation amounts. If you want to get more in-depth, you could also look at other factors of the individual who donates (i.e. how many FB friends they have).

Experimental design

EA Giving Tuesday had 28 charities that people could donate to. Of those, you could select 10 charities as your controls, and 10 similar charities (i.e. similar cause, intervention, size) as your experimental group, and recruit 5 volunteer donors per charity to donate once a month on a randomly selected day. They would make the donation without adding any explanation or endorsement.

Then you could use both the previous year's data and the current year's controlled charities to compare the effects. You would want to track whether non-volunteer donations or traffic was gained after the volunteer donations.

Caveats: This would be limited to countries where Facebook Fundraising is set up.
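
A minimal sketch of how the analysis could go (hypothetical numbers; a simple difference-in-differences on donation counts against the baseline period):

```python
import pandas as pd

# Hypothetical data: donation counts per charity, before and during the experiment.
df = pd.DataFrame({
    "charity":   ["A", "A", "B", "B"],
    "group":     ["treatment", "treatment", "control", "control"],
    "period":    ["baseline", "experiment", "baseline", "experiment"],
    "donations": [12, 25, 14, 16],
})

means = df.groupby(["group", "period"])["donations"].mean().unstack()
effect = ((means.loc["treatment", "experiment"] - means.loc["treatment", "baseline"])
          - (means.loc["control", "experiment"] - means.loc["control", "baseline"]))
print(f"difference-in-differences estimate: {effect:+.1f} donations per charity")
```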

comment by vaidehi_agarwalla · 2020-12-05T14:13:54.126Z · EA(p) · GW(p)

What are the low-hanging fruit or outliers of EA community building?

(where community building is defined as growing the number of engaged EAs who are likely to take medium-to-large-sized actions in accordance with the EA values and/or framework. It could include group activities, events, infrastructure building, resources)

  • The EA community talks a lot about low-hanging fruit and outlier interventions that are 100x or 1000x better than the next best intervention
  • It seems plausible that either of these exist for community building

 

Low hanging fruits

  • From working in the community building space for the last 2+ years, I have found what I believe are many low-hanging fruit (which are decently impactful), but no extreme outliers that are orders of magnitude more impactful than the next best thing
  • I think low-hanging fruit are relatively neglected areas of community building
  • The biggest one I observed is that careers advice outside of 80K's general scope is very neglected, and within that area the interventions are mostly of similar effectiveness (or at least not 100-1000x apart). 
  • What other low-hanging fruit do you think there are?

 

Extreme Outliers

  • I would guess that any outlier interventions would fall into 1 of 2 categories (which obviously shouldn't pose undue risk to the community):
    1. Interventions that are moderately to very good at achieving X (where X can be recruitment, education, engagement or retention, see more), but also have the property of scaling very quickly (e.g. a web service, a written resource, or a skill that could be taught to many group organisers)
    2. Interventions that are very good at recruiting a few extremely engaged, aligned & talented people (the hits-based model, where you have 99% failure and 1% success), or at getting them engaged (I imagine there are fewer education or retention interventions)
  • Do you know of any clearly obvious outlier interventions?
Replies from: BrianTan, edoarad
comment by BrianTan · 2020-12-05T14:47:32.733Z · EA(p) · GW(p)

I think introductory fellowships are extreme outlier interventions. EA Philippines' 8-week Intro to EA Discussion Group (patterned after Stanford's Arete fellowship) in May-July 2020  was by far our best activity yet. 31 signed up and 15 graduated, and out of the graduates, I believe we've created the following counterfactual impact:

  1. One became the president of our student chapter EA Blue
  2. Another became a core team member of EA Blue
  3. Two have since taken the GWWC pledge
  4. Three have become new volunteers (spending ~1-2 hrs/week) for EA Philippines (we actually got two more volunteers aside from these three, but those two I would say were not counterfactual ones)
  5. Helped lead to a few career plan changes (I will write a separate impact report about EA PH's 2020, and can talk about this more there).

EA Blue is now doing an Introductory Fellowship similar to ours with 26 participants, which I'm a facilitator for, and I think we're having similarly good results!

comment by EdoArad (edoarad) · 2020-12-06T14:17:27.309Z · EA(p) · GW(p)

I don't have an answer, but I'm curious - why don't you publish it as a proper post?

Replies from: vaidehi_agarwalla
comment by vaidehi_agarwalla · 2020-12-07T01:20:38.425Z · EA(p) · GW(p)

This is a very rough post and I don't know how much I would stick to this framing of the question if I spent more time thinking it over!

Replies from: edoarad
comment by EdoArad (edoarad) · 2020-12-07T06:28:06.137Z · EA(p) · GW(p)

Makes sense, even though it feels alright to me as a post :) 

I'd really like to see more answers to this question! 

comment by vaidehi_agarwalla · 2021-01-07T02:24:37.240Z · EA(p) · GW(p)

Collection [EA · GW] of anecdotal evidence of EA career/impact frustrations

After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation [EA · GW] by EA Applicant. The most upvoted post on the Forum; it sparked a lot of recent discussion on the topic. 8 commenters resonated with the OP on the time investment and/or disappointment (1 [EA · GW],2 [EA · GW],3 [EA · GW],4 [EA · GW],5 [EA · GW],6 [EA · GW],7 [EA · GW],8 [EA · GW]). There were 194 unique upvotes. 

My mistakes on the path to impact [EA · GW] by Denise Melchin. Another highly upvoted post talking about the emphasis on working at EA organisations and direct EA work. There were 161 unique upvotes. Resonated comments (1 [EA · GW],2 [EA · GW],3 [EA · GW],4 [EA · GW],5 [EA · GW])

Effective Altruism and Meaning in Life [EA · GW] by extra_ordinary. A personal account of the talent gaps, and why the OP moved away because too much of their self-worth was associated with success in EA-related things. 4 comments in support of the post. Resonated comments (1 [EA · GW],2 [EA · GW]). There were 55 unique upvotes. 

Replies from: BrianTan, vaidehi_agarwalla
comment by vaidehi_agarwalla · 2021-01-07T02:34:01.965Z · EA(p) · GW(p)

EA’s Image Problem [EA · GW] by Tom Davidson. 4 years old but the criticisms are still relevant. See also many comments. 

comment by vaidehi_agarwalla · 2020-02-29T15:35:14.076Z · EA(p) · GW(p)

I brainstormed a list of questions that might help evaluate how promising climate change adaptation efforts would be.

Would anyone have any additions/feedback or answers to these questions?

https://docs.google.com/document/d/19VryYtikXQEEOeXtjgApWWKoof3dRfQNVjza7HbnuHU/edit?usp=sharing

comment by vaidehi_agarwalla · 2020-08-05T09:06:40.115Z · EA(p) · GW(p)

Is anyone aware of/planning on doing any research related to the expected spike in interest for pandemic research due to COVID? 

It would be interesting to see how much new interest is generated, and for which types of roles (e.g. doctors vs researchers). This could be useful to a) identify potential skilled biosecurity recruits b) find out what motivated them about COVID-19 c) figure out how neglected this will be in 5-10 years 

I'd imagine doing a survey after the pandemic starts to die down might be more valuable than right now (maybe after the second wave) so that we're tracking the longer-term impact rather than the immediate reactions. 

An MVP version could be just looking at application rates across a variety of relevant fields.  

Replies from: hapless
comment by hapless · 2020-08-05T19:02:30.406Z · EA(p) · GW(p)

Having done some research on post-graduate education in the past, I've found it's surprisingly difficult to access application rates for classes of programs. Some individual schools publish their application/admission rates, but usually as advertising, so there's a fair bit of cherry-picking. It's somewhat more straightforward to access completion rates (at least in the US, where universities report this to the government). However, that MVP would still be interesting with just a few data points: if any EAs have relationships with a couple of relevant programs (in, say, biosecurity or epidemiology), it may be worth reaching out directly in 6-12 months!

A more general point, which I've seen some discussion of here, is how near-miss catastrophes prepare society for a more severe version of the same catastrophe. This would be interesting to explore both theoretically (what's the sweet spot for a near-miss to encourage further work, but not dissuade prevention policies) and empirically.

One historical example: does a civilization that experienced a bad famine go on to experience fewer famines in the period that follows? How long is that period? In particular, that makes me think of MichaelA's recent excellent post Some history topics it might be very valuable to investigate [EA · GW].

Replies from: Khorton
comment by Khorton · 2020-08-05T19:39:50.866Z · EA(p) · GW(p)

In the UK could you access application numbers with a Freedom of Information request?

comment by vaidehi_agarwalla · 2020-07-25T15:09:02.584Z · EA(p) · GW(p)

Some thoughts on stage-wise development of moral circle

Status: Very rough, I mainly want to know if there's already some research/thinking on this.

  • Jean Piaget, an early childhood psychologist from the 1960s, suggested a stage-sequential model of childhood development: we progress through different levels of development, and each stage is necessary to reach the next.
  • Perhaps we can make a similar argument for moral circle expansion. In other words: you cannot run if you don't know how to walk. If you ask someone to believe X, then X+1, then X+2, this makes some sense. If you jump from X to 10X to 10000X (they may even perceive 10000X as Y, an entirely different thing which makes no sense), it becomes a little more difficult for them to adjust over a short period of time.
  • Anecdotally seems true from a number of EAs I've spoken to who've updated to longtermism over time.
  • For most people, changing one's beliefs and moral circles takes time. So we need to create a movement which can accommodate this. Peter Singer sums it up quite well: "there are people who come into the animal movement because of their concern for cats and dogs who later move on to understand that the number of farm animals suffering is vastly greater than the number of cats and dogs suffering and that typically the farm animals suffer more than the cats and dogs, and so they’ve added to the strength of the broader, and as I see more important, animal welfare organizations or animal rights organizations that are working for farm animals. So I think it’s possible that something similar can happen in the EA movement."
  • Risk to the movement is that we lose people who could have become EAs because we turn them off the movement by making it too "weird"

Further research on this topic that could verify my hypothesis:

  • Studying changes in moral attitudes regarding other issues such as slavery, racism, LGBT rights etc. over time, and how long it took individuals/communities to change their attitudes (and behaviors)
Replies from: David_Moss, Misha_Yagudin
comment by David_Moss · 2020-08-07T06:46:12.388Z · EA(p) · GW(p)

My sense is that the idea of sequential stages for moral development is exceedingly likely to be false and, in the case of the most prominent theory of this kind, Kohlberg's, completely debunked, in the sense that there was never any good evidence for it (I find the social intuitionist model much more plausible), so I don't see much appeal in trying to understand cause selection in these terms.

That said, I'm sure there's a rough sense in which people tend to adopt less weird beliefs before they adopt more weird ones and I think that thinking about this in terms of more/less weird beliefs is likely more informative than thinking about this in terms of more/less distant areas in a "moral circle".

I don't think there's a clear non-subjective sense in which causes are more or less weird, though. For example, there are many EAs who value the wellbeing of non-actual people in the distant future but not suffering wild animals, and vice versa; so which is weirder or more distant from the centre of this posited circle? I hear people assume conflicting answers to this question from time to time (people tend to assume their area is less weird).

I would also agree that getting people to agree to beliefs which are less far from what they currently believe can make them more positively inclined to subsequently adopt beliefs related to that belief which are further from their current beliefs. It seems like there are a bunch of non-competing reasons why this could be the case though. For example:

  • Sometimes belief x1 itself gives a person epistemic reason to believe x2
  • Sometimes believing x1 increases your self-identity as a person who believes weird things, making you more likely to believe weird things
  • Sometimes believing x2 increases your affiliation with a group associated with x1 (e.g. EA) making you more likely to believe x3 which is also associated with that group

Notably none of these require that we assume anything about moral circles or general sequences of belief.

Replies from: vaidehi_agarwalla
comment by vaidehi_agarwalla · 2020-08-07T08:30:32.189Z · EA(p) · GW(p)

Yeah, I think you're right. I didn't need to actually reference Piaget (it just prompted the thought). To be clear, I wasn't trying to imply that Piaget's or Kohlberg's theories were correct or sound, but rather applying the model to another issue. I didn't make that very clear. I don't think my argument really requires the empirical implications of the model (especially because I wasn't trying to imply a moral judgement that one moral circle is necessarily better/worse). However, I didn't flag this. [meta note: I also posted this pretty quickly and didn't think it through much, since it's a shortform]

I broadly agree with all your points. 

I think my general point of x, 10x, 100x makes more sense if you're looking along one axis (e.g. a class of beings like future humans) rather than all the ways you can expand your moral circle - which I also think might be better thought of as a sphere or more complex shape, to account for different dimensions/axes. 

I was thinking about the more concrete cases where you go from cats and dogs -> pigs and cows or people in my home country -> people in other countries. 

Re the other reasons you gave:

  • Sometimes belief x1 itself gives a person epistemic reason to believe x2

I think this is kind of what I was trying to say, where there can be some important incremental movement here. (Of course if x2 is very different from x1 then maybe not).

  • Sometimes believing x1 increases your self-identity as a person who believes weird things, making you more likely to believe weird things

This is an interesting point I haven't thought much about. 

  • Sometimes believing x2 increases your affiliation with a group associated with x1 (e.g. EA) making you more likely to believe x3 which is also associated with that group

I think this is probably the strongest non-step-wise reason. 

comment by Misha_Yagudin · 2020-08-07T04:30:51.476Z · EA(p) · GW(p)

If longtermism is one of the latest stages of moral circle development, then your anecdotal data suffers from major selection effects.

Anecdotally seems true from a number of EAs I've spoken to who've updated to longtermism over time.