vaidehi_agarwalla's Shortform

post by vaidehi_agarwalla · 2019-12-06T21:03:43.762Z · score: 1 (1 votes) · EA · GW · 14 comments

14 comments

Comments sorted by top scores.

comment by vaidehi_agarwalla · 2020-07-10T12:53:50.455Z · score: 13 (6 votes) · EA(p) · GW(p)

Mini Collection [EA · GW] - Non-typical EA Movement Building

Basically, these are ways of spreading EA ideas and philosophies, or furthering concrete EA goals, that differ from the typical community building models that local groups use.


Suggestions welcome!

comment by vaidehi_agarwalla · 2020-07-11T01:05:49.778Z · score: 5 (4 votes) · EA(p) · GW(p)

This quote from Kelsey Piper:

Maybe pretty early on, it just became obvious that there wasn’t a lot of value in preaching to people on a topic that they weren’t necessarily there for, and that I had a lot of thoughts on the conversations people were already having.
Then I think one thing you can do to share any reasoning system, but it works particularly well for effective altruism is just to apply it consistently, in a principled way, to problems that people care about. Then, they’ll see whether your tools look like useful tools. If they do, then they’ll be interested in learning more about that.
...
My ideal effective altruist movement had insightful nuanced, productive, takes on lots and lots of other things so that people could be like, "Oh, I see how effective altruists have tools for answering questions. I want the people who have tools for answering questions to teach me about those tools. I want to know what they think the most important questions are. I want to sort of learn about their approach."
comment by vaidehi_agarwalla · 2020-07-15T07:17:29.448Z · score: 12 (5 votes) · EA(p) · GW(p)

Collection [EA · GW] of Constraints in EA

comment by vaidehi_agarwalla · 2020-02-26T20:07:00.098Z · score: 8 (6 votes) · EA(p) · GW(p)

Meta-level thought:

When asking about resources, a good practice might be to mention resources you've already come across and why those sources weren't helpful (if you found any), so that people don't need to recommend the most common resources multiple times.

Also, once we have an EA-relevant search engine, it would be useful to refer people to that even before they ask a question in case that question has been asked or that resource already exists.

The primary goal of both suggestions would be to make questions more specific and in-depth, hopefully either expanding movement knowledge or identifying gaps in it. The secondary goal would be to save time!

comment by vaidehi_agarwalla · 2020-07-25T15:09:02.584Z · score: 6 (5 votes) · EA(p) · GW(p)

Some thoughts on the stage-wise development of the moral circle

Status: Very rough, I mainly want to know if there's already some research/thinking on this.

  • Jean Piaget, an early childhood psychologist from the 1960s, proposed a stage-sequential model of childhood development. He suggested that we progress through different levels of development, and that each stage is necessary for developing to the next.
  • Perhaps we can make a similar argument for moral circle expansion. In other words: you cannot run when you don't know how to walk. If you ask someone to believe X, then X+1, then X+2, this makes some sense. If you jump from X to 10X to 10,000X, it becomes much more difficult for them to adjust over a short period of time (they may even perceive 10,000X as Y, an entirely different thing which makes no sense to them).
  • Anecdotally, this seems true of a number of EAs I've spoken to who've updated toward longtermism over time.
  • For most people, changing their beliefs and moral circles takes time, so we need to create a movement which can accommodate this. Peter Singer sums it up quite well: "there are people who come into the animal movement because of their concern for cats and dogs who later move on to understand that the number of farm animals suffering is vastly greater than the number of cats and dogs suffering and that typically the farm animals suffer more than the cats and dogs, and so they’ve added to the strength of the broader, and as I see more important, animal welfare organizations or animal rights organizations that are working for farm animals. So I think it’s possible that something similar can happen in the EA movement."
  • The risk to the movement is that we lose people who could have become EAs because we turn them off by making the movement seem too "weird".

Further research on this topic that could verify my hypothesis:

  • Studying changes in moral attitudes regarding other issues such as slavery, racism, LGBT rights etc. over time, and how long it took individuals/communities to change their attitudes (and behaviors)
comment by David_Moss · 2020-08-07T06:46:12.388Z · score: 6 (2 votes) · EA(p) · GW(p)

My sense is that the idea of sequential stages of moral development is exceedingly likely to be false, and in the case of the most prominent theory of this kind, Kohlberg's, completely debunked, in the sense that there was never any good evidence for it (I find the social intuitionist model much more plausible). So I don't see much appeal in trying to understand cause selection in these terms.

That said, I'm sure there's a rough sense in which people tend to adopt less weird beliefs before they adopt more weird ones and I think that thinking about this in terms of more/less weird beliefs is likely more informative than thinking about this in terms of more/less distant areas in a "moral circle".

I don't think there's a clear, non-subjective sense in which causes are more or less weird, though. For example, there are many EAs who value the wellbeing of non-actual people in the distant future but not that of suffering wild animals, and vice versa, so which is weirder or more distant from the centre of this posited circle? I hear people assume conflicting answers to this question from time to time (people tend to assume their own area is less weird).

I would also agree that getting people to agree to beliefs which are less far from what they currently believe can make them more positively inclined to subsequently adopt beliefs related to that belief which are further from their current beliefs. It seems like there are a bunch of non-competing reasons why this could be the case though. For example:

  • Sometimes belief x1 itself gives a person epistemic reason to believe x2
  • Sometimes believing x1 increases your self-identity as a person who believes weird things, making you more likely to believe weird things
  • Sometimes believing x2 increases your affiliation with a group associated with x1 (e.g. EA) making you more likely to believe x3 which is also associated with that group

Notably none of these require that we assume anything about moral circles or general sequences of belief.

comment by vaidehi_agarwalla · 2020-08-07T08:30:32.189Z · score: 1 (1 votes) · EA(p) · GW(p)

Yeah, I think you're right. I didn't need to actually reference Piaget (it just prompted the thought). To be clear, I wasn't trying to imply that Piaget's/Kohlberg's theories were correct or sound, but rather applying the model to another issue; I didn't make that very clear. I don't think my argument really requires the empirical implications of the model (especially because I wasn't trying to imply a moral judgement that one moral circle is necessarily better/worse), but I didn't flag this. [meta note: I also posted it pretty quickly and didn't think it through much, since it's a shortform]

I broadly agree with all your points. 

I think my general point of x, 10x, 100x makes more sense if you're looking along one axis (e.g. a class of beings like future humans) rather than all the ways you can expand your moral circle - which I also think might be better to think of as a sphere or more complex shape, to account for different dimensions/axes.

I was thinking about the more concrete cases where you go from cats and dogs -> pigs and cows or people in my home country -> people in other countries. 

Re the other reasons you gave:

  • Sometimes belief x1 itself gives a person epistemic reason to believe x2

I think this is kind of what I was trying to say, where there can be some important incremental movement here. (Of course if x2 is very different from x1 then maybe not).

  • Sometimes believing x1 increases your self-identity as a person who believes weird things, making you more likely to believe weird things

This is an interesting point I haven't thought much about. 

  • Sometimes believing x2 increases your affiliation with a group associated with x1 (e.g. EA) making you more likely to believe x3 which is also associated with that group

I think this is probably the strongest non-step-wise reason. 

comment by Misha_Yagudin · 2020-08-07T04:30:51.476Z · score: 1 (1 votes) · EA(p) · GW(p)

If longtermism is one of the latest stages of moral circle development, then your anecdotal data suffers from major selection effects.

Anecdotally, this seems true of a number of EAs I've spoken to who've updated toward longtermism over time.

comment by vaidehi_agarwalla · 2019-12-06T21:03:43.897Z · score: 6 (5 votes) · EA(p) · GW(p)

Could regular small donations to Facebook Fundraisers increase donations from non-EAs?

The day before Giving Tuesday, I made a donation to an EA Facebook fundraiser that had seen no donations in a few weeks. After I donated, about 3 other people donated within the next 2 hours (well before the Giving Tuesday start time). From what I remember, the total amount increased by more than the minimum amount, and the individuals appeared not to be affiliated with EA, so it seems possible that this fundraiser was somehow raised to their attention. (Of course, it's possible that with Giving Tuesday approaching they would have donated anyway.)

However, it made me think that regularly donating to fundraisers could keep them on people's feeds and inspire them to donate, and that this could be a pretty low-cost experiment to run. Since you can't see amounts, you could donate the minimum amount on a regular basis (say every month or so, about $60 USD per year). The actual design of the experiment would be fairly straightforward as well: use the previous year as a baseline of activity for a group of EA organisations and then experiment with who donates, when they donate, and different donation amounts. If you want to get more in-depth, you could also look at other factors of the individual who donates (i.e. how many FB friends they have).

Experimental design

EA Giving Tuesday's list had 28 charities that people could donate to. Of those, you could select 10 charities as your controls and 10 similar charities (i.e. similar cause, intervention, size) as your experimental group, then recruit 5 volunteer donors per experimental charity to donate once a month on a randomly selected day. They would make the donation without adding any explanation or endorsement.

Then you could use both the previous year's data and the current year's control charities to compare the effects. You would want to track whether non-volunteer donations or traffic increased after the volunteer donations.
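If it helps make the design concrete, here's a minimal sketch of how the group assignment and donation schedule could be generated (the charity names, group sizes and 12-month schedule are illustrative assumptions, not a finalised protocol):

```python
import random

random.seed(0)  # fix the seed so the assignment is reproducible

# Placeholder names standing in for 20 of the 28 EA Giving Tuesday charities;
# in practice you'd pair control and experimental charities by cause, intervention and size.
charities = [f"charity_{i}" for i in range(1, 21)]
random.shuffle(charities)
control, experimental = charities[:10], charities[10:]

volunteers_per_charity = 5
months = 12

# (charity, volunteer) -> day of the month on which to make a minimum donation, for each month
donation_schedule = {
    (charity, v): [random.randint(1, 28) for _ in range(months)]
    for charity in experimental
    for v in range(volunteers_per_charity)
}

# Analysis idea: compare non-volunteer donations/traffic in the days after each scheduled
# donation against the control charities and the previous year's baseline.
```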

Caveats: This would be limited to countries where Facebook Fundraising is set up.

comment by vaidehi_agarwalla · 2020-03-22T23:59:18.055Z · score: 5 (5 votes) · EA(p) · GW(p)

How valuable is building a high-quality (for-profit) event app for future EA conferences?

There are 6 EAG(x) conferences a year. This number will probably increase over time, and more conferences will come up as EA grows. I'd expect somewhere between 80-200 EA-related conferences and related events in the next 10 years. This includes cause-area specific conferences, like Catalyst, and other large events.

A typical 2.5-day conference with on average ~300 attendees spending 30 hours each = 9,000 man-hours per conference, which works out to a range of 720,000-1,800,000 man-hours over 10 years. Of this time, I'd expect 90% to be taken up by meetings, attending events, eating etc. That leaves 10%, or 72,000-180,000 hours; saving even a tenth of that (1% of total conference time, or 7,200-18,000 hours) seems pretty useful!

For reference, 1 year of work (a 40-hour work-week for 50 weeks) = 2,000 hours, so that time saving would be roughly 3.6-9 work-years.
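As a sanity check on the arithmetic, here is the same back-of-the-envelope estimate written out (all inputs are the rough assumptions above, not measured figures):

```python
attendees_per_conference = 300   # average attendees per conference
hours_per_attendee = 30          # hours spent at a typical 2.5-day conference
hours_per_work_year = 2000       # 40-hour week * 50 weeks

for n_conferences in (80, 200):  # assumed range of conferences over 10 years
    total_hours = attendees_per_conference * hours_per_attendee * n_conferences
    non_session_hours = total_hours * 0.10  # ~10% not spent in meetings/events/meals
    hours_saved = total_hours * 0.01        # saving ~1% of total conference time
    print(n_conferences, total_hours, non_session_hours,
          hours_saved, hours_saved / hours_per_work_year)

# 80 conferences  ->   720,000 total hours;  7,200 hours saved (~3.6 work-years)
# 200 conferences -> 1,800,000 total hours; 18,000 hours saved (~9 work-years)
```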

comment by vaidehi_agarwalla · 2020-02-29T15:35:14.076Z · score: 2 (2 votes) · EA(p) · GW(p)

I brainstormed a list of questions that might help evaluate how promising climate change adaptation efforts would be.

Would anyone have any additions/feedback or answers to these questions?

https://docs.google.com/document/d/19VryYtikXQEEOeXtjgApWWKoof3dRfQNVjza7HbnuHU/edit?usp=sharing

comment by vaidehi_agarwalla · 2020-08-05T09:06:40.115Z · score: 1 (1 votes) · EA(p) · GW(p)

Is anyone aware of/planning on doing any research related to the expected spike in interest for pandemic research due to COVID? 

It would be interesting to see how much new interest is generated, and for which types of roles (e.g. doctors vs researchers). This could be useful to a) identify potential skilled biosecurity recruits, b) find out what motivated them about COVID-19, and c) figure out how neglected this area will be in 5-10 years. 

I'd imagine doing a survey after the pandemic starts to die down might be more valuable than right now (maybe after the second wave) so that we're tracking the longer-term impact rather than the immediate reactions. 

An MVP version could be just looking at application rates across a variety of relevant fields.  

comment by hapless · 2020-08-05T19:02:30.406Z · score: 0 (2 votes) · EA(p) · GW(p)

Having done some research on post-graduate education in the past, I've found it's surprisingly difficult to access application rates for classes of programs. Some individual schools publish their application/admission rates, but usually as advertising, so there's a fair bit of cherry-picking. It's somewhat more straightforward to access completion rates (at least in the US, universities report these to the government). However, that MVP would still be interesting with just a few data points: if any EAs have relationships with a couple of relevant programs (in, say, biosecurity or epidemiology), it may be worth reaching out directly in 6-12 months!

A more general point, which I've seen some discussion of here, is how near-miss catastrophes prepare society for a more severe version of the same catastrophe. This would be interesting to explore both theoretically (what's the sweet spot for a near-miss to encourage further work, but not dissuade prevention policies) and empirically.

One historical example: does a civilization which experienced a bad famine then experience fewer famines in the period following it? How long is that period? In particular, this makes me think of MichaelA's recent, excellent Some history topics it might be very valuable to investigate [EA · GW].

comment by Khorton · 2020-08-05T19:39:50.866Z · score: 2 (1 votes) · EA(p) · GW(p)

In the UK could you access application numbers with a Freedom of Information request?