EA Forum Reader
I think that people who are really enthusiastic about EA are pretty likely to stick around even when they're infuriated by things EAs are saying.
[...]
If you know someone (eg yourself) who you think is a counterargument to this claim of mine, feel free to message me.
I would guess it depends quite a bit on these people's total exposure to EA at the time when they encounter something they find infuriating (or even just somewhat off / getting a vibe that this community probably is "not for them").
If we're imagining people who've already had 10 or even 100 hours of total EA exposure, then I'm inclined to agree with your claim and sentiment. (Though I think there would still be exceptions, and I suspect I'm at least a bit more into "try hard to avoid people bouncing for reasons unrelated to actual goal misalignment" than you.)
I'm less sure for people who are super new to EA as a school of thought or community.
We don't need to look at hypothetical cases to establish this. My memory of events 10 years ago is obviously hazy but I'm fairly sure that I had encountered both GiveWell's website and Overcoming Bias years before I actually got into EA. At that time I didn't understand what they were really about, and from skimming they didn't clear my bar of "this seems worth engaging with". I think Overcoming Bias seemed like some generic libertarian blog to me, and at the time I thought libertarians were deluded and callous; and for GiveWell I had landed on some in-the-weeds page on some specific intervention and I was like "whatever I'm not that interested in malaria [or whatever the page was about]". Just two of the many links you open, glance at for a few seconds, and then never (well, in this case luckily not quite) come back to.
This case is obviously very different from what we're discussing here. But I think it serves to reframe the discussion by illustrating that there are a number of different reasons why someone might bounce from EA, depending on a number of that person's properties, with the amount of prior exposure being a key one. I'm skeptical that any blanket statement of the type "it's OK if people bounce for reason X" will do a good job of describing a good strategy for dealing with this issue.
kathrynmecrow on Seeking (paid) volunteers to test introductory EA content
Strong upvote for fantastic introductory opportunities that are paid! :)
wild_animal_initiative on Wild Animal Initiative featured in Vox
Thanks, Will!
wild_animal_initiative on Wild Animal Initiative featured in Vox
Hollis here from Wild Animal Initiative! Our thinking behind posting this on the EA Forum:
I've added a link to make it clearer that this wasn't originally written for the forum. We only want to post things here that are relevant and useful to the community, so we welcome your feedback!
evhub on Concerns with ACE's Recent Behavior
That's a great point; I agree with that.
clairezabel on Concerns with ACE's Recent Behavior
[As is always the default, but perhaps worth repeating in sensitive situations: my views are my own, and by default I'm not speaking on behalf of Open Phil. I don't do professional grantmaking in this area, haven't been following it closely recently, and others at Open Phil might have different opinions.]
I'm disappointed by ACE's comment (I thought Jakub's comment seemed very polite and even-handed, not hostile, given the context; nor do I agree with characterizing what seems to me to be sincere concern in the OP as merely hostile) and by some of the other instances of ACE behavior documented in the OP. I used to be a board member at ACE, but one of the reasons I didn't seek a second term was that I was concerned about ACE drifting away from focusing on just helping animals as effectively as possible, and towards integrating/compromising between that and human-centered social justice concerns, in a way that I wasn't convinced was based on open-minded analysis or strong and rigorous cause-agnostic reasoning. I worry about this dynamic leading to an unpleasant atmosphere for those with different perspectives, and decreasing the extent to which ACE has a truth-seeking culture that would reliably reach good decisions about how to help as many animals as possible.
I think one can (hopefully obviously) take a very truth-seeking and clear-minded approach that leads to and involves doing more human-centered social justice activism, but I worry that that isn't what's happening at ACE; instead, I worry that other perspectives (which happen to particularly favor social justice issues and adopt some norms from certain SJ communities) are becoming more influential via processes that aren't particularly truth-tracking.
Charity evaluators have a lot of power over the norms in the spaces they operate in, so I think that for the health of the ecosystem it's particularly important for them to model openness in response to feedback; rigorous, non-partisan, analytical approaches to charity evaluation/research in general; and general encouragement of truth-seeking, open-minded discourse norms. But I tentatively don't think that's what's going on here, and even if it is, I more confidently worry that charities looking on may not interpret things that way. I think the natural reaction of a charity (that values a current or future possible ACE Top or Standout charity designation) to the situation with Anima is to feel a lot of pressure to adopt norms, focuses, and diversity goals it may not agree it ought to prioritize, and that don't seem intrinsically connected to the task of helping animals as effectively as possible, and for that charity to worry that pushback might be met with aggression and reprisal (even if that's not what would in fact happen).
This makes me really sad. I think ACE has one of the best missions in the world, and what they do is incredibly important. I really hope I'm wrong about the above and they are making the best possible choices, and are on the path to saving as many animals as possible, and helping the rest of the EAA ecosystem do the same.
wild_animal_initiative on Wild Animal Initiative featured in Vox
Hi Will, Hollis here! You're correct, this is crossposted from our blog, which has a different tone from most EA Forum posts. I've added a note and a link above to clarify.
timothy_liptrot on On Mike Berkowitz's 80k Podcast
From the original median voter theorem paper, Duncan Black in 1948:
Let us suppose that a decision is to be determined by vote of a committee. The members of the committee may meet in a single room, or they may be scattered over an area of the country as are the electors in a parliamentary constituency. Proposals are advanced, we assume, in the form of motions on a particular topic or in favor of one of a number of candidates. We do not inquire into the genesis of the motions but simply assume that given motions have been put forward. In the case of the selection of candidates, we assume that determinate candidates have offered themselves for election and that one is to be chosen by means of voting. For convenience we shall speak as if one of a number of alternative motions, and not candidates, was being selected.
Let there be n members in the committee, where n is odd. We suppose that an ordering of the points on the horizontal axis representing motions exists, rendering the preference curves of all members single-peaked. The points on the horizontal axis corresponding to the members' optimums are named O1, O2, O3, . . . , in the order of their occurrence. The middle or median optimum will be the (n + 1)/2-th, and, in Figure 3, only this median optimum, the one immediately above it, and the one immediately below it are shown.
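Black's setup above can be illustrated in a few lines of code. This is a toy sketch (not from the paper): it models single-peaked preferences as utility decreasing with distance from each voter's optimum, and checks that the median optimum defeats any rival motion in a pairwise majority vote.

```python
# Toy illustration of the median voter theorem under single-peaked
# preferences: each voter prefers whichever motion is closer to their
# ideal point. With n odd, the median ideal point (the (n+1)/2-th
# optimum) wins every head-to-head majority vote.
import statistics

def votes_for(a, b, ideals):
    """Number of voters strictly preferring motion a to motion b."""
    return sum(1 for o in ideals if abs(a - o) < abs(b - o))

ideals = [0.1, 0.3, 0.5, 0.8, 0.9]   # n = 5 voters' optimums, in order
median = statistics.median(ideals)    # here, the 3rd optimum: 0.5

# The median position beats every rival position pairwise.
for rival in [0.0, 0.2, 0.6, 1.0]:
    assert votes_for(median, rival, ideals) > len(ideals) / 2
```

Note that the result relies on the assumptions in the quote: an odd number of voters and preferences single-peaked along a single ordered axis.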
Anyway, this is really a pedagogic question: how best should we teach politics? Some people advocate that we should disregard the MVT because it is both "obvious" and "false". Setting that contradiction aside, I think the underlying assumption that only theories with perfect data fit should be taught is wrong. By the same logic, physics should not teach Newtonian mechanics because it is wrong relative to quantum mechanics. You can't just give the reader quantum mechanics; you need to start with a theory they can understand and then update it.
landfish on The $100trn opportunity: ESG investing should be a top priority for EA careers
Thanks!
michaela on [deleted]
My understanding is that:
So it'd be cool if someone could (eventually) edit this entry to be consistent with those points.